public inbox for linux-kernel@vger.kernel.org
* [PATCH v2 00/35] x86/bugs: Attack vector controls
@ 2024-11-05 21:54 David Kaplan
  2024-11-05 21:54 ` [PATCH v2 01/35] x86/bugs: Add X86_BUG_SPECTRE_V2_USER David Kaplan
                   ` (34 more replies)
  0 siblings, 35 replies; 78+ messages in thread
From: David Kaplan @ 2024-11-05 21:54 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

This series restructures arch/x86/kernel/cpu/bugs.c and proposes new
command line options to make it easier to control which CPU mitigations
are applied.  These options select relevant mitigations based on chosen
attack vectors, which are hopefully easier for users to understand.

There are two parts to this patch series:

The first 18 patches restructure the existing mitigation selection logic
to use a uniform set of functions.  First, a "select" function is called
for each mitigation to choose an appropriate one.  Unless a mitigation
is explicitly selected or disabled with a command line option, the
default is AUTO and the "select" function chooses the best mitigation.
After the "select" functions have run, some mitigations define an
"update" function which can revise the selection based on the choices
made by other mitigations.  Finally, an "apply" function is called to
enable the chosen mitigation.
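
As a rough sketch, the three-phase flow above can be modeled like this
(all identifiers below are illustrative stand-ins for this sketch, not
the kernel's actual names):

```c
#include <assert.h>
#include <stdbool.h>

/* Miniature model of the select/update/apply flow. */
enum mit { MIT_OFF, MIT_AUTO, MIT_FULL };

static enum mit mds_mit = MIT_AUTO;
static enum mit taa_mit = MIT_AUTO;
static bool verw_enabled;

/* Phase 1: resolve AUTO to the best choice for each bug in isolation. */
static void mds_select(void) { if (mds_mit == MIT_AUTO) mds_mit = MIT_FULL; }
static void taa_select(void) { if (taa_mit == MIT_AUTO) taa_mit = MIT_FULL; }

/* Phase 2: revisit the choice once every bug has selected, e.g. if TAA
 * is being mitigated, MDS is effectively mitigated too since both rely
 * on the same VERW-based buffer clearing. */
static void mds_update(void) { if (taa_mit != MIT_OFF) mds_mit = MIT_FULL; }

/* Phase 3: enable whatever was chosen (in the kernel this would force
 * CPU caps, flip static branches, possibly disable SMT). */
static void mds_apply(void) { if (mds_mit == MIT_FULL) verw_enabled = true; }

static void cpu_select_mitigations(void)
{
	mds_select();
	taa_select();	/* all selects run first...	*/
	mds_update();	/* ...then any updates...	*/
	mds_apply();	/* ...then the applies.		*/
}
```

The key property is ordering: no "update" runs until every "select" has
run, so cross-vulnerability dependencies see final selections rather
than partially-initialized state.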

This structure simplifies the mitigation control logic, especially when
there are dependencies between multiple vulnerabilities.  It also
prepares the code for the second set of patches.

The rest of the patches define new "attack vector" command line options
to make it easier to select appropriate mitigations based on the usage
of the system.  While many users may not be intimately familiar with the
details of these CPU vulnerabilities, they are likely better able to
understand the intended usage of their system.  As a result, unneeded
mitigations may be disabled, allowing users to recoup more performance.
New documentation is included with recommendations on what to consider
when choosing which attack vectors to enable/disable.
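
Conceptually, vector-based selection reduces to a relevance lookup: a
mitigation is needed only if some attack vector it defends against is
enabled.  The vector names and the should_mitigate() helper below are
invented for this sketch; the series' real option names and logic are
defined in the patches themselves:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical attack vectors; illustrative names only. */
enum attack_vector {
	VECTOR_USER_KERNEL,
	VECTOR_USER_USER,
	VECTOR_GUEST_HOST,
	VECTOR_GUEST_GUEST,
	NR_VECTORS,
};

/* What the administrator declared about this system's usage. */
static bool vector_enabled[NR_VECTORS] = {
	[VECTOR_USER_KERNEL] = true,	/* untrusted local users exist */
	[VECTOR_USER_USER]   = true,
	[VECTOR_GUEST_HOST]  = false,	/* no virtualization on this box */
	[VECTOR_GUEST_GUEST] = false,
};

/* A mitigation is selected only if one of the vectors the
 * vulnerability is relevant to is enabled. */
static bool should_mitigate(const bool *relevant)
{
	for (int v = 0; v < NR_VECTORS; v++)
		if (relevant[v] && vector_enabled[v])
			return true;
	return false;
}
```

A vulnerability exploitable only across the guest/host boundary would
then be left unmitigated on the non-virtualized system above, recouping
its performance cost without the user needing to know the vulnerability
by name.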

Note that this patch series does not change any of the existing
mitigation defaults.

Changes in v2:
   - Removed new enum, just use X86_BUG* to identify vulnerabilities
   - Mitigate gds if cross-thread protection is selected as pointed out
     by Andrew Cooper
   - Simplifications around verw-based mitigation handling
   - Various bug fixes

David Kaplan (35):
  x86/bugs: Add X86_BUG_SPECTRE_V2_USER
  x86/bugs: Relocate mds/taa/mmio/rfds defines
  x86/bugs: Add AUTO mitigations for mds/taa/mmio/rfds
  x86/bugs: Restructure mds mitigation
  x86/bugs: Restructure taa mitigation
  x86/bugs: Restructure mmio mitigation
  x86/bugs: Restructure rfds mitigation
  x86/bugs: Remove md_clear_*_mitigation()
  x86/bugs: Restructure srbds mitigation
  x86/bugs: Restructure gds mitigation
  x86/bugs: Restructure spectre_v1 mitigation
  x86/bugs: Restructure retbleed mitigation
  x86/bugs: Restructure spectre_v2_user mitigation
  x86/bugs: Restructure bhi mitigation
  x86/bugs: Restructure spectre_v2 mitigation
  x86/bugs: Restructure ssb mitigation
  x86/bugs: Restructure l1tf mitigation
  x86/bugs: Restructure srso mitigation
  Documentation/x86: Document the new attack vector controls
  x86/bugs: Define attack vectors
  x86/bugs: Determine relevant vulnerabilities based on attack vector
    controls.
  x86/bugs: Add attack vector controls for mds
  x86/bugs: Add attack vector controls for taa
  x86/bugs: Add attack vector controls for mmio
  x86/bugs: Add attack vector controls for rfds
  x86/bugs: Add attack vector controls for srbds
  x86/bugs: Add attack vector controls for gds
  x86/bugs: Add attack vector controls for spectre_v1
  x86/bugs: Add attack vector controls for retbleed
  x86/bugs: Add attack vector controls for spectre_v2_user
  x86/bugs: Add attack vector controls for bhi
  x86/bugs: Add attack vector controls for spectre_v2
  x86/bugs: Add attack vector controls for l1tf
  x86/bugs: Add attack vector controls for srso
  x86/pti: Add attack vector controls for pti

 .../hw-vuln/attack_vector_controls.rst        |  172 +++
 Documentation/admin-guide/hw-vuln/index.rst   |    1 +
 arch/x86/include/asm/cpufeatures.h            |    1 +
 arch/x86/include/asm/processor.h              |    2 +
 arch/x86/kernel/cpu/bugs.c                    | 1231 ++++++++++-------
 arch/x86/kernel/cpu/common.c                  |    4 +-
 arch/x86/kvm/vmx/vmx.c                        |    2 +
 arch/x86/mm/pti.c                             |    3 +-
 include/linux/cpu.h                           |   11 +
 kernel/cpu.c                                  |   58 +
 10 files changed, 1010 insertions(+), 475 deletions(-)
 create mode 100644 Documentation/admin-guide/hw-vuln/attack_vector_controls.rst

-- 
2.34.1


^ permalink raw reply	[flat|nested] 78+ messages in thread

* [PATCH v2 01/35] x86/bugs: Add X86_BUG_SPECTRE_V2_USER
  2024-11-05 21:54 [PATCH v2 00/35] x86/bugs: Attack vector controls David Kaplan
@ 2024-11-05 21:54 ` David Kaplan
  2024-11-05 21:54 ` [PATCH v2 02/35] x86/bugs: Relocate mds/taa/mmio/rfds defines David Kaplan
                   ` (33 subsequent siblings)
  34 siblings, 0 replies; 78+ messages in thread
From: David Kaplan @ 2024-11-05 21:54 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

All CPU vulnerabilities with command line options map to a single
X86_BUG bit except for Spectre V2 where both the spectre_v2 and
spectre_v2_user command line options are related to the same bug.  The
spectre_v2 command line options mostly relate to user->kernel and
guest->host mitigations, while the spectre_v2_user command line options
relate to user->user or guest->guest protections.

Define a new X86_BUG bit for spectre_v2_user so each
*_select_mitigation() function in bugs.c is related to a unique X86_BUG
bit.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/include/asm/cpufeatures.h | 1 +
 arch/x86/kernel/cpu/common.c       | 4 +++-
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 924f530129d7..0c4c974a616b 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -528,4 +528,5 @@
 #define X86_BUG_RFDS			X86_BUG(1*32 + 2) /* "rfds" CPU is vulnerable to Register File Data Sampling */
 #define X86_BUG_BHI			X86_BUG(1*32 + 3) /* "bhi" CPU is affected by Branch History Injection */
 #define X86_BUG_IBPB_NO_RET	   	X86_BUG(1*32 + 4) /* "ibpb_no_ret" IBPB omits return target predictions */
+#define X86_BUG_SPECTRE_V2_USER		X86_BUG(1*32 + 5) /* "spectre_v2_user" CPU is affected by Spectre variant 2 attack between user processes */
 #endif /* _ASM_X86_CPUFEATURES_H */
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 8f41ab219cf1..a2bc1c1b31a9 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1332,8 +1332,10 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 
 	setup_force_cpu_bug(X86_BUG_SPECTRE_V1);
 
-	if (!cpu_matches(cpu_vuln_whitelist, NO_SPECTRE_V2))
+	if (!cpu_matches(cpu_vuln_whitelist, NO_SPECTRE_V2)) {
 		setup_force_cpu_bug(X86_BUG_SPECTRE_V2);
+		setup_force_cpu_bug(X86_BUG_SPECTRE_V2_USER);
+	}
 
 	if (!cpu_matches(cpu_vuln_whitelist, NO_SSB) &&
 	    !(x86_arch_cap_msr & ARCH_CAP_SSB_NO) &&
-- 
2.34.1



* [PATCH v2 02/35] x86/bugs: Relocate mds/taa/mmio/rfds defines
  2024-11-05 21:54 [PATCH v2 00/35] x86/bugs: Attack vector controls David Kaplan
  2024-11-05 21:54 ` [PATCH v2 01/35] x86/bugs: Add X86_BUG_SPECTRE_V2_USER David Kaplan
@ 2024-11-05 21:54 ` David Kaplan
  2024-11-05 21:54 ` [PATCH v2 03/35] x86/bugs: Add AUTO mitigations for mds/taa/mmio/rfds David Kaplan
                   ` (32 subsequent siblings)
  34 siblings, 0 replies; 78+ messages in thread
From: David Kaplan @ 2024-11-05 21:54 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Move the mds, taa, mmio, and rfds mitigation enums earlier in the file
to prepare for restructuring of these mitigations as they are all
inter-related.

No functional change.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 60 ++++++++++++++++++++------------------
 1 file changed, 31 insertions(+), 29 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 83b34a522dd7..3fd7a2ce11b5 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -243,6 +243,37 @@ static const char * const mds_strings[] = {
 	[MDS_MITIGATION_VMWERV]	= "Vulnerable: Clear CPU buffers attempted, no microcode",
 };
 
+enum taa_mitigations {
+	TAA_MITIGATION_OFF,
+	TAA_MITIGATION_UCODE_NEEDED,
+	TAA_MITIGATION_VERW,
+	TAA_MITIGATION_TSX_DISABLED,
+};
+
+/* Default mitigation for TAA-affected CPUs */
+static enum taa_mitigations taa_mitigation __ro_after_init =
+	IS_ENABLED(CONFIG_MITIGATION_TAA) ? TAA_MITIGATION_VERW : TAA_MITIGATION_OFF;
+
+enum mmio_mitigations {
+	MMIO_MITIGATION_OFF,
+	MMIO_MITIGATION_UCODE_NEEDED,
+	MMIO_MITIGATION_VERW,
+};
+
+/* Default mitigation for Processor MMIO Stale Data vulnerabilities */
+static enum mmio_mitigations mmio_mitigation __ro_after_init =
+	IS_ENABLED(CONFIG_MITIGATION_MMIO_STALE_DATA) ? MMIO_MITIGATION_VERW : MMIO_MITIGATION_OFF;
+
+enum rfds_mitigations {
+	RFDS_MITIGATION_OFF,
+	RFDS_MITIGATION_VERW,
+	RFDS_MITIGATION_UCODE_NEEDED,
+};
+
+/* Default mitigation for Register File Data Sampling */
+static enum rfds_mitigations rfds_mitigation __ro_after_init =
+	IS_ENABLED(CONFIG_MITIGATION_RFDS) ? RFDS_MITIGATION_VERW : RFDS_MITIGATION_OFF;
+
 static void __init mds_select_mitigation(void)
 {
 	if (!boot_cpu_has_bug(X86_BUG_MDS) || cpu_mitigations_off()) {
@@ -286,16 +317,6 @@ early_param("mds", mds_cmdline);
 #undef pr_fmt
 #define pr_fmt(fmt)	"TAA: " fmt
 
-enum taa_mitigations {
-	TAA_MITIGATION_OFF,
-	TAA_MITIGATION_UCODE_NEEDED,
-	TAA_MITIGATION_VERW,
-	TAA_MITIGATION_TSX_DISABLED,
-};
-
-/* Default mitigation for TAA-affected CPUs */
-static enum taa_mitigations taa_mitigation __ro_after_init =
-	IS_ENABLED(CONFIG_MITIGATION_TAA) ? TAA_MITIGATION_VERW : TAA_MITIGATION_OFF;
 static bool taa_nosmt __ro_after_init;
 
 static const char * const taa_strings[] = {
@@ -386,15 +407,6 @@ early_param("tsx_async_abort", tsx_async_abort_parse_cmdline);
 #undef pr_fmt
 #define pr_fmt(fmt)	"MMIO Stale Data: " fmt
 
-enum mmio_mitigations {
-	MMIO_MITIGATION_OFF,
-	MMIO_MITIGATION_UCODE_NEEDED,
-	MMIO_MITIGATION_VERW,
-};
-
-/* Default mitigation for Processor MMIO Stale Data vulnerabilities */
-static enum mmio_mitigations mmio_mitigation __ro_after_init =
-	IS_ENABLED(CONFIG_MITIGATION_MMIO_STALE_DATA) ? MMIO_MITIGATION_VERW : MMIO_MITIGATION_OFF;
 static bool mmio_nosmt __ro_after_init = false;
 
 static const char * const mmio_strings[] = {
@@ -483,16 +495,6 @@ early_param("mmio_stale_data", mmio_stale_data_parse_cmdline);
 #undef pr_fmt
 #define pr_fmt(fmt)	"Register File Data Sampling: " fmt
 
-enum rfds_mitigations {
-	RFDS_MITIGATION_OFF,
-	RFDS_MITIGATION_VERW,
-	RFDS_MITIGATION_UCODE_NEEDED,
-};
-
-/* Default mitigation for Register File Data Sampling */
-static enum rfds_mitigations rfds_mitigation __ro_after_init =
-	IS_ENABLED(CONFIG_MITIGATION_RFDS) ? RFDS_MITIGATION_VERW : RFDS_MITIGATION_OFF;
-
 static const char * const rfds_strings[] = {
 	[RFDS_MITIGATION_OFF]			= "Vulnerable",
 	[RFDS_MITIGATION_VERW]			= "Mitigation: Clear Register File",
-- 
2.34.1



* [PATCH v2 03/35] x86/bugs: Add AUTO mitigations for mds/taa/mmio/rfds
  2024-11-05 21:54 [PATCH v2 00/35] x86/bugs: Attack vector controls David Kaplan
  2024-11-05 21:54 ` [PATCH v2 01/35] x86/bugs: Add X86_BUG_SPECTRE_V2_USER David Kaplan
  2024-11-05 21:54 ` [PATCH v2 02/35] x86/bugs: Relocate mds/taa/mmio/rfds defines David Kaplan
@ 2024-11-05 21:54 ` David Kaplan
  2024-11-14  2:26   ` Pawan Gupta
  2024-11-05 21:54 ` [PATCH v2 04/35] x86/bugs: Restructure mds mitigation David Kaplan
                   ` (31 subsequent siblings)
  34 siblings, 1 reply; 78+ messages in thread
From: David Kaplan @ 2024-11-05 21:54 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Add AUTO mitigations for mds/taa/mmio/rfds to create consistent
vulnerability handling.  These AUTO mitigations will be turned into the
appropriate default mitigations in the <vuln>_select_mitigation()
functions.  In a later patch, these will be used with the new attack
vector controls to help select appropriate mitigations.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/include/asm/processor.h |  1 +
 arch/x86/kernel/cpu/bugs.c       | 20 ++++++++++++++++----
 2 files changed, 17 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index c0975815980c..ea4b87b44455 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -754,6 +754,7 @@ extern enum l1tf_mitigations l1tf_mitigation;
 
 enum mds_mitigations {
 	MDS_MITIGATION_OFF,
+	MDS_MITIGATION_AUTO,
 	MDS_MITIGATION_FULL,
 	MDS_MITIGATION_VMWERV,
 };
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 3fd7a2ce11b5..34d55f368bff 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -234,7 +234,7 @@ static void x86_amd_ssb_disable(void)
 
 /* Default mitigation for MDS-affected CPUs */
 static enum mds_mitigations mds_mitigation __ro_after_init =
-	IS_ENABLED(CONFIG_MITIGATION_MDS) ? MDS_MITIGATION_FULL : MDS_MITIGATION_OFF;
+	IS_ENABLED(CONFIG_MITIGATION_MDS) ? MDS_MITIGATION_AUTO : MDS_MITIGATION_OFF;
 static bool mds_nosmt __ro_after_init = false;
 
 static const char * const mds_strings[] = {
@@ -245,6 +245,7 @@ static const char * const mds_strings[] = {
 
 enum taa_mitigations {
 	TAA_MITIGATION_OFF,
+	TAA_MITIGATION_AUTO,
 	TAA_MITIGATION_UCODE_NEEDED,
 	TAA_MITIGATION_VERW,
 	TAA_MITIGATION_TSX_DISABLED,
@@ -252,27 +253,29 @@ enum taa_mitigations {
 
 /* Default mitigation for TAA-affected CPUs */
 static enum taa_mitigations taa_mitigation __ro_after_init =
-	IS_ENABLED(CONFIG_MITIGATION_TAA) ? TAA_MITIGATION_VERW : TAA_MITIGATION_OFF;
+	IS_ENABLED(CONFIG_MITIGATION_TAA) ? TAA_MITIGATION_AUTO : TAA_MITIGATION_OFF;
 
 enum mmio_mitigations {
 	MMIO_MITIGATION_OFF,
+	MMIO_MITIGATION_AUTO,
 	MMIO_MITIGATION_UCODE_NEEDED,
 	MMIO_MITIGATION_VERW,
 };
 
 /* Default mitigation for Processor MMIO Stale Data vulnerabilities */
 static enum mmio_mitigations mmio_mitigation __ro_after_init =
-	IS_ENABLED(CONFIG_MITIGATION_MMIO_STALE_DATA) ? MMIO_MITIGATION_VERW : MMIO_MITIGATION_OFF;
+	IS_ENABLED(CONFIG_MITIGATION_MMIO_STALE_DATA) ?	MMIO_MITIGATION_AUTO : MMIO_MITIGATION_OFF;
 
 enum rfds_mitigations {
 	RFDS_MITIGATION_OFF,
+	RFDS_MITIGATION_AUTO,
 	RFDS_MITIGATION_VERW,
 	RFDS_MITIGATION_UCODE_NEEDED,
 };
 
 /* Default mitigation for Register File Data Sampling */
 static enum rfds_mitigations rfds_mitigation __ro_after_init =
-	IS_ENABLED(CONFIG_MITIGATION_RFDS) ? RFDS_MITIGATION_VERW : RFDS_MITIGATION_OFF;
+	IS_ENABLED(CONFIG_MITIGATION_RFDS) ? RFDS_MITIGATION_AUTO : RFDS_MITIGATION_OFF;
 
 static void __init mds_select_mitigation(void)
 {
@@ -281,6 +284,9 @@ static void __init mds_select_mitigation(void)
 		return;
 	}
 
+	if (mds_mitigation == MDS_MITIGATION_AUTO)
+		mds_mitigation = MDS_MITIGATION_FULL;
+
 	if (mds_mitigation == MDS_MITIGATION_FULL) {
 		if (!boot_cpu_has(X86_FEATURE_MD_CLEAR))
 			mds_mitigation = MDS_MITIGATION_VMWERV;
@@ -510,6 +516,9 @@ static void __init rfds_select_mitigation(void)
 	if (rfds_mitigation == RFDS_MITIGATION_OFF)
 		return;
 
+	if (rfds_mitigation == RFDS_MITIGATION_AUTO)
+		rfds_mitigation = RFDS_MITIGATION_VERW;
+
 	if (x86_arch_cap_msr & ARCH_CAP_RFDS_CLEAR)
 		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
 	else
@@ -1995,6 +2004,7 @@ void cpu_bugs_smt_update(void)
 		update_mds_branch_idle();
 		break;
 	case MDS_MITIGATION_OFF:
+	case MDS_MITIGATION_AUTO:
 		break;
 	}
 
@@ -2006,6 +2016,7 @@ void cpu_bugs_smt_update(void)
 		break;
 	case TAA_MITIGATION_TSX_DISABLED:
 	case TAA_MITIGATION_OFF:
+	case TAA_MITIGATION_AUTO:
 		break;
 	}
 
@@ -2016,6 +2027,7 @@ void cpu_bugs_smt_update(void)
 			pr_warn_once(MMIO_MSG_SMT);
 		break;
 	case MMIO_MITIGATION_OFF:
+	case MMIO_MITIGATION_AUTO:
 		break;
 	}
 
-- 
2.34.1



* [PATCH v2 04/35] x86/bugs: Restructure mds mitigation
  2024-11-05 21:54 [PATCH v2 00/35] x86/bugs: Attack vector controls David Kaplan
                   ` (2 preceding siblings ...)
  2024-11-05 21:54 ` [PATCH v2 03/35] x86/bugs: Add AUTO mitigations for mds/taa/mmio/rfds David Kaplan
@ 2024-11-05 21:54 ` David Kaplan
  2024-11-14  3:03   ` Pawan Gupta
  2024-11-05 21:54 ` [PATCH v2 05/35] x86/bugs: Restructure taa mitigation David Kaplan
                   ` (30 subsequent siblings)
  34 siblings, 1 reply; 78+ messages in thread
From: David Kaplan @ 2024-11-05 21:54 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Restructure mds mitigation selection to use select/update/apply
functions to create consistent vulnerability handling.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 69 +++++++++++++++++++++++++++++++++-----
 1 file changed, 61 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 34d55f368bff..4f35dcd9dee8 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -34,6 +34,25 @@
 
 #include "cpu.h"
 
+/*
+ * Speculation Vulnerability Handling
+ *
+ * Each vulnerability is handled with the following functions:
+ *   <vuln>_select_mitigation() -- Selects a mitigation to use.  This should
+ *				   take into account all relevant command line
+ *				   options.
+ *   <vuln>_update_mitigation() -- This is called after all vulnerabilities have
+ *				   selected a mitigation, in case the selection
+ *				   may want to change based on other choices
+ *				   made.  This function is optional.
+ *   <vuln>_apply_mitigation() -- Enable the selected mitigation.
+ *
+ * The compile-time mitigation in all cases should be AUTO.  An explicit
+ * command-line option can override AUTO.  If no such option is
+ * provided, <vuln>_select_mitigation() will override AUTO to the best
+ * mitigation option.
+ */
+
 static void __init spectre_v1_select_mitigation(void);
 static void __init spectre_v2_select_mitigation(void);
 static void __init retbleed_select_mitigation(void);
@@ -41,6 +60,8 @@ static void __init spectre_v2_user_select_mitigation(void);
 static void __init ssb_select_mitigation(void);
 static void __init l1tf_select_mitigation(void);
 static void __init mds_select_mitigation(void);
+static void __init mds_update_mitigation(void);
+static void __init mds_apply_mitigation(void);
 static void __init md_clear_update_mitigation(void);
 static void __init md_clear_select_mitigation(void);
 static void __init taa_select_mitigation(void);
@@ -165,6 +186,7 @@ void __init cpu_select_mitigations(void)
 	spectre_v2_user_select_mitigation();
 	ssb_select_mitigation();
 	l1tf_select_mitigation();
+	mds_select_mitigation();
 	md_clear_select_mitigation();
 	srbds_select_mitigation();
 	l1d_flush_select_mitigation();
@@ -175,6 +197,14 @@ void __init cpu_select_mitigations(void)
 	 */
 	srso_select_mitigation();
 	gds_select_mitigation();
+
+	/*
+	 * After mitigations are selected, some may need to update their
+	 * choices.
+	 */
+	mds_update_mitigation();
+
+	mds_apply_mitigation();
 }
 
 /*
@@ -229,9 +259,6 @@ static void x86_amd_ssb_disable(void)
 		wrmsrl(MSR_AMD64_LS_CFG, msrval);
 }
 
-#undef pr_fmt
-#define pr_fmt(fmt)	"MDS: " fmt
-
 /* Default mitigation for MDS-affected CPUs */
 static enum mds_mitigations mds_mitigation __ro_after_init =
 	IS_ENABLED(CONFIG_MITIGATION_MDS) ? MDS_MITIGATION_AUTO : MDS_MITIGATION_OFF;
@@ -277,12 +304,19 @@ enum rfds_mitigations {
 static enum rfds_mitigations rfds_mitigation __ro_after_init =
 	IS_ENABLED(CONFIG_MITIGATION_RFDS) ? RFDS_MITIGATION_AUTO : RFDS_MITIGATION_OFF;
 
+/* Return TRUE if any VERW-based mitigation is enabled. */
+static bool __init mitigate_any_verw(void)
+{
+	return (mds_mitigation != MDS_MITIGATION_OFF ||
+		taa_mitigation != TAA_MITIGATION_OFF ||
+		mmio_mitigation != MMIO_MITIGATION_OFF ||
+		rfds_mitigation != RFDS_MITIGATION_OFF);
+}
+
 static void __init mds_select_mitigation(void)
 {
-	if (!boot_cpu_has_bug(X86_BUG_MDS) || cpu_mitigations_off()) {
+	if (!boot_cpu_has_bug(X86_BUG_MDS) || cpu_mitigations_off())
 		mds_mitigation = MDS_MITIGATION_OFF;
-		return;
-	}
 
 	if (mds_mitigation == MDS_MITIGATION_AUTO)
 		mds_mitigation = MDS_MITIGATION_FULL;
@@ -290,9 +324,29 @@ static void __init mds_select_mitigation(void)
 	if (mds_mitigation == MDS_MITIGATION_FULL) {
 		if (!boot_cpu_has(X86_FEATURE_MD_CLEAR))
 			mds_mitigation = MDS_MITIGATION_VMWERV;
+	}
+}
 
-		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
+static void __init mds_update_mitigation(void)
+{
+	if (!boot_cpu_has_bug(X86_BUG_MDS))
+		return;
+
+	/* If TAA, MMIO, or RFDS are being mitigated, MDS gets mitigated too. */
+	if (mitigate_any_verw()) {
+		if (boot_cpu_has(X86_FEATURE_MD_CLEAR))
+			mds_mitigation = MDS_MITIGATION_FULL;
+		else
+			mds_mitigation = MDS_MITIGATION_VMWERV;
+	}
+
+	pr_info("MDS: %s\n", mds_strings[mds_mitigation]);
+}
 
+static void __init mds_apply_mitigation(void)
+{
+	if (mds_mitigation == MDS_MITIGATION_FULL) {
+		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
 		if (!boot_cpu_has(X86_BUG_MSBDS_ONLY) &&
 		    (mds_nosmt || cpu_mitigations_auto_nosmt()))
 			cpu_smt_disable(false);
@@ -595,7 +649,6 @@ static void __init md_clear_update_mitigation(void)
 
 static void __init md_clear_select_mitigation(void)
 {
-	mds_select_mitigation();
 	taa_select_mitigation();
 	mmio_select_mitigation();
 	rfds_select_mitigation();
-- 
2.34.1



* [PATCH v2 05/35] x86/bugs: Restructure taa mitigation
  2024-11-05 21:54 [PATCH v2 00/35] x86/bugs: Attack vector controls David Kaplan
                   ` (3 preceding siblings ...)
  2024-11-05 21:54 ` [PATCH v2 04/35] x86/bugs: Restructure mds mitigation David Kaplan
@ 2024-11-05 21:54 ` David Kaplan
  2024-11-14  4:43   ` Pawan Gupta
  2024-11-05 21:54 ` [PATCH v2 06/35] x86/bugs: Restructure mmio mitigation David Kaplan
                   ` (29 subsequent siblings)
  34 siblings, 1 reply; 78+ messages in thread
From: David Kaplan @ 2024-11-05 21:54 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Restructure taa mitigation to use select/update/apply functions to
create consistent vulnerability handling.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 58 +++++++++++++++++++++++++-------------
 1 file changed, 39 insertions(+), 19 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 4f35dcd9dee8..c676804dfd84 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -65,6 +65,8 @@ static void __init mds_apply_mitigation(void);
 static void __init md_clear_update_mitigation(void);
 static void __init md_clear_select_mitigation(void);
 static void __init taa_select_mitigation(void);
+static void __init taa_update_mitigation(void);
+static void __init taa_apply_mitigation(void);
 static void __init mmio_select_mitigation(void);
 static void __init srbds_select_mitigation(void);
 static void __init l1d_flush_select_mitigation(void);
@@ -187,6 +189,7 @@ void __init cpu_select_mitigations(void)
 	ssb_select_mitigation();
 	l1tf_select_mitigation();
 	mds_select_mitigation();
+	taa_select_mitigation();
 	md_clear_select_mitigation();
 	srbds_select_mitigation();
 	l1d_flush_select_mitigation();
@@ -203,8 +206,10 @@ void __init cpu_select_mitigations(void)
 	 * choices.
 	 */
 	mds_update_mitigation();
+	taa_update_mitigation();
 
 	mds_apply_mitigation();
+	taa_apply_mitigation();
 }
 
 /*
@@ -374,9 +379,6 @@ static int __init mds_cmdline(char *str)
 }
 early_param("mds", mds_cmdline);
 
-#undef pr_fmt
-#define pr_fmt(fmt)	"TAA: " fmt
-
 static bool taa_nosmt __ro_after_init;
 
 static const char * const taa_strings[] = {
@@ -399,19 +401,19 @@ static void __init taa_select_mitigation(void)
 		return;
 	}
 
-	if (cpu_mitigations_off()) {
+	if (cpu_mitigations_off())
 		taa_mitigation = TAA_MITIGATION_OFF;
-		return;
-	}
 
 	/*
 	 * TAA mitigation via VERW is turned off if both
 	 * tsx_async_abort=off and mds=off are specified.
+	 *
+	 * MDS mitigation will be checked in taa_update_mitigation().
 	 */
-	if (taa_mitigation == TAA_MITIGATION_OFF &&
-	    mds_mitigation == MDS_MITIGATION_OFF)
+	if (taa_mitigation == TAA_MITIGATION_OFF)
 		return;
 
+	/* This handles the AUTO case. */
 	if (boot_cpu_has(X86_FEATURE_MD_CLEAR))
 		taa_mitigation = TAA_MITIGATION_VERW;
 	else
@@ -430,17 +432,36 @@ static void __init taa_select_mitigation(void)
 	    !(x86_arch_cap_msr & ARCH_CAP_TSX_CTRL_MSR))
 		taa_mitigation = TAA_MITIGATION_UCODE_NEEDED;
 
-	/*
-	 * TSX is enabled, select alternate mitigation for TAA which is
-	 * the same as MDS. Enable MDS static branch to clear CPU buffers.
-	 *
-	 * For guests that can't determine whether the correct microcode is
-	 * present on host, enable the mitigation for UCODE_NEEDED as well.
-	 */
-	setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
+}
+
+static void __init taa_update_mitigation(void)
+{
+	if (!boot_cpu_has_bug(X86_BUG_TAA))
+		return;
+
+	if (mitigate_any_verw())
+		taa_mitigation = TAA_MITIGATION_VERW;
+
+	pr_info("TAA: %s\n", taa_strings[taa_mitigation]);
+}
+
+static void __init taa_apply_mitigation(void)
+{
+	if (taa_mitigation == TAA_MITIGATION_VERW ||
+	    taa_mitigation == TAA_MITIGATION_UCODE_NEEDED) {
+		/*
+		 * TSX is enabled, select alternate mitigation for TAA which is
+		 * the same as MDS. Enable MDS static branch to clear CPU buffers.
+		 *
+		 * For guests that can't determine whether the correct microcode is
+		 * present on host, enable the mitigation for UCODE_NEEDED as well.
+		 */
+		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
+
+		if (taa_nosmt || cpu_mitigations_auto_nosmt())
+			cpu_smt_disable(false);
+	}
 
-	if (taa_nosmt || cpu_mitigations_auto_nosmt())
-		cpu_smt_disable(false);
 }
 
 static int __init tsx_async_abort_parse_cmdline(char *str)
@@ -649,7 +670,6 @@ static void __init md_clear_update_mitigation(void)
 
 static void __init md_clear_select_mitigation(void)
 {
-	taa_select_mitigation();
 	mmio_select_mitigation();
 	rfds_select_mitigation();
 
-- 
2.34.1



* [PATCH v2 06/35] x86/bugs: Restructure mmio mitigation
  2024-11-05 21:54 [PATCH v2 00/35] x86/bugs: Attack vector controls David Kaplan
                   ` (4 preceding siblings ...)
  2024-11-05 21:54 ` [PATCH v2 05/35] x86/bugs: Restructure taa mitigation David Kaplan
@ 2024-11-05 21:54 ` David Kaplan
  2024-11-14  5:03   ` Pawan Gupta
  2024-11-05 21:54 ` [PATCH v2 07/35] x86/bugs: Restructure rfds mitigation David Kaplan
                   ` (28 subsequent siblings)
  34 siblings, 1 reply; 78+ messages in thread
From: David Kaplan @ 2024-11-05 21:54 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Restructure mmio mitigation to use select/update/apply functions to
create consistent vulnerability handling.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 55 +++++++++++++++++++++++++++-----------
 1 file changed, 39 insertions(+), 16 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index c676804dfd84..1332b70e48f8 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -68,6 +68,8 @@ static void __init taa_select_mitigation(void);
 static void __init taa_update_mitigation(void);
 static void __init taa_apply_mitigation(void);
 static void __init mmio_select_mitigation(void);
+static void __init mmio_update_mitigation(void);
+static void __init mmio_apply_mitigation(void);
 static void __init srbds_select_mitigation(void);
 static void __init l1d_flush_select_mitigation(void);
 static void __init srso_select_mitigation(void);
@@ -190,6 +192,7 @@ void __init cpu_select_mitigations(void)
 	l1tf_select_mitigation();
 	mds_select_mitigation();
 	taa_select_mitigation();
+	mmio_select_mitigation();
 	md_clear_select_mitigation();
 	srbds_select_mitigation();
 	l1d_flush_select_mitigation();
@@ -207,9 +210,11 @@ void __init cpu_select_mitigations(void)
 	 */
 	mds_update_mitigation();
 	taa_update_mitigation();
+	mmio_update_mitigation();
 
 	mds_apply_mitigation();
 	taa_apply_mitigation();
+	mmio_apply_mitigation();
 }
 
 /*
@@ -505,6 +510,40 @@ static void __init mmio_select_mitigation(void)
 		return;
 	}
 
+	if (mmio_mitigation == MMIO_MITIGATION_OFF)
+		return;
+
+	/*
+	 * Check if the system has the right microcode.
+	 *
+	 * CPU Fill buffer clear mitigation is enumerated by either an explicit
+	 * FB_CLEAR or by the presence of both MD_CLEAR and L1D_FLUSH on MDS
+	 * affected systems.
+	 */
+	if ((x86_arch_cap_msr & ARCH_CAP_FB_CLEAR) ||
+	    (boot_cpu_has(X86_FEATURE_MD_CLEAR) &&
+	     boot_cpu_has(X86_FEATURE_FLUSH_L1D) &&
+	     !(x86_arch_cap_msr & ARCH_CAP_MDS_NO)))
+		mmio_mitigation = MMIO_MITIGATION_VERW;
+	else
+		mmio_mitigation = MMIO_MITIGATION_UCODE_NEEDED;
+}
+
+static void __init mmio_update_mitigation(void)
+{
+	if (!boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA))
+		return;
+
+	if (mitigate_any_verw())
+		mmio_mitigation = MMIO_MITIGATION_VERW;
+
+	pr_info("%s\n", mmio_strings[mmio_mitigation]);
+	if (boot_cpu_has_bug(X86_BUG_MMIO_UNKNOWN))
+		pr_info("Unknown: No mitigations\n");
+}
+
+static void __init mmio_apply_mitigation(void)
+{
 	if (mmio_mitigation == MMIO_MITIGATION_OFF)
 		return;
 
@@ -533,21 +572,6 @@ static void __init mmio_select_mitigation(void)
 	if (!(x86_arch_cap_msr & ARCH_CAP_FBSDP_NO))
 		static_branch_enable(&mds_idle_clear);
 
-	/*
-	 * Check if the system has the right microcode.
-	 *
-	 * CPU Fill buffer clear mitigation is enumerated by either an explicit
-	 * FB_CLEAR or by the presence of both MD_CLEAR and L1D_FLUSH on MDS
-	 * affected systems.
-	 */
-	if ((x86_arch_cap_msr & ARCH_CAP_FB_CLEAR) ||
-	    (boot_cpu_has(X86_FEATURE_MD_CLEAR) &&
-	     boot_cpu_has(X86_FEATURE_FLUSH_L1D) &&
-	     !(x86_arch_cap_msr & ARCH_CAP_MDS_NO)))
-		mmio_mitigation = MMIO_MITIGATION_VERW;
-	else
-		mmio_mitigation = MMIO_MITIGATION_UCODE_NEEDED;
-
 	if (mmio_nosmt || cpu_mitigations_auto_nosmt())
 		cpu_smt_disable(false);
 }
@@ -670,7 +694,6 @@ static void __init md_clear_update_mitigation(void)
 
 static void __init md_clear_select_mitigation(void)
 {
-	mmio_select_mitigation();
 	rfds_select_mitigation();
 
 	/*
-- 
2.34.1



* [PATCH v2 07/35] x86/bugs: Restructure rfds mitigation
  2024-11-05 21:54 [PATCH v2 00/35] x86/bugs: Attack vector controls David Kaplan
                   ` (5 preceding siblings ...)
  2024-11-05 21:54 ` [PATCH v2 06/35] x86/bugs: Restructure mmio mitigation David Kaplan
@ 2024-11-05 21:54 ` David Kaplan
  2024-11-14  5:55   ` Pawan Gupta
  2024-11-05 21:54 ` [PATCH v2 08/35] x86/bugs: Remove md_clear_*_mitigation() David Kaplan
                   ` (27 subsequent siblings)
  34 siblings, 1 reply; 78+ messages in thread
From: David Kaplan @ 2024-11-05 21:54 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Restructure rfds mitigation to use select/update/apply functions to
create consistent vulnerability handling.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 33 ++++++++++++++++++++++++++-------
 1 file changed, 26 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 1332b70e48f8..c3a2d3b8d153 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -70,6 +70,9 @@ static void __init taa_apply_mitigation(void);
 static void __init mmio_select_mitigation(void);
 static void __init mmio_update_mitigation(void);
 static void __init mmio_apply_mitigation(void);
+static void __init rfds_select_mitigation(void);
+static void __init rfds_update_mitigation(void);
+static void __init rfds_apply_mitigation(void);
 static void __init srbds_select_mitigation(void);
 static void __init l1d_flush_select_mitigation(void);
 static void __init srso_select_mitigation(void);
@@ -193,6 +196,7 @@ void __init cpu_select_mitigations(void)
 	mds_select_mitigation();
 	taa_select_mitigation();
 	mmio_select_mitigation();
+	rfds_select_mitigation();
 	md_clear_select_mitigation();
 	srbds_select_mitigation();
 	l1d_flush_select_mitigation();
@@ -211,10 +215,12 @@ void __init cpu_select_mitigations(void)
 	mds_update_mitigation();
 	taa_update_mitigation();
 	mmio_update_mitigation();
+	rfds_update_mitigation();
 
 	mds_apply_mitigation();
 	taa_apply_mitigation();
 	mmio_apply_mitigation();
+	rfds_apply_mitigation();
 }
 
 /*
@@ -597,9 +603,6 @@ static int __init mmio_stale_data_parse_cmdline(char *str)
 }
 early_param("mmio_stale_data", mmio_stale_data_parse_cmdline);
 
-#undef pr_fmt
-#define pr_fmt(fmt)	"Register File Data Sampling: " fmt
-
 static const char * const rfds_strings[] = {
 	[RFDS_MITIGATION_OFF]			= "Vulnerable",
 	[RFDS_MITIGATION_VERW]			= "Mitigation: Clear Register File",
@@ -618,12 +621,29 @@ static void __init rfds_select_mitigation(void)
 	if (rfds_mitigation == RFDS_MITIGATION_AUTO)
 		rfds_mitigation = RFDS_MITIGATION_VERW;
 
-	if (x86_arch_cap_msr & ARCH_CAP_RFDS_CLEAR)
-		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
-	else
+	if (!(x86_arch_cap_msr & ARCH_CAP_RFDS_CLEAR))
 		rfds_mitigation = RFDS_MITIGATION_UCODE_NEEDED;
 }
 
+static void __init rfds_update_mitigation(void)
+{
+	if (!boot_cpu_has_bug(X86_BUG_RFDS))
+		return;
+
+	if (mitigate_any_verw())
+		rfds_mitigation = RFDS_MITIGATION_VERW;
+
+	pr_info("Register File Data Sampling: %s\n", rfds_strings[rfds_mitigation]);
+}
+
+static void __init rfds_apply_mitigation(void)
+{
+	if (rfds_mitigation == RFDS_MITIGATION_VERW) {
+		if (x86_arch_cap_msr & ARCH_CAP_RFDS_CLEAR)
+			setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
+	}
+}
+
 static __init int rfds_parse_cmdline(char *str)
 {
 	if (!str)
@@ -694,7 +714,6 @@ static void __init md_clear_update_mitigation(void)
 
 static void __init md_clear_select_mitigation(void)
 {
-	rfds_select_mitigation();
 
 	/*
 	 * As these mitigations are inter-related and rely on VERW instruction
-- 
2.34.1



* [PATCH v2 08/35] x86/bugs: Remove md_clear_*_mitigation()
  2024-11-05 21:54 [PATCH v2 00/35] x86/bugs: Attack vector controls David Kaplan
                   ` (6 preceding siblings ...)
  2024-11-05 21:54 ` [PATCH v2 07/35] x86/bugs: Restructure rfds mitigation David Kaplan
@ 2024-11-05 21:54 ` David Kaplan
  2024-11-05 21:54 ` [PATCH v2 09/35] x86/bugs: Restructure srbds mitigation David Kaplan
                   ` (26 subsequent siblings)
  34 siblings, 0 replies; 78+ messages in thread
From: David Kaplan @ 2024-11-05 21:54 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

The functionality in md_clear_update_mitigation() and
md_clear_select_mitigation() is now integrated into the select/update
functions for the MDS, TAA, MMIO, and RFDS vulnerabilities.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 65 --------------------------------------
 1 file changed, 65 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index c3a2d3b8d153..5ad989e8eea3 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -62,8 +62,6 @@ static void __init l1tf_select_mitigation(void);
 static void __init mds_select_mitigation(void);
 static void __init mds_update_mitigation(void);
 static void __init mds_apply_mitigation(void);
-static void __init md_clear_update_mitigation(void);
-static void __init md_clear_select_mitigation(void);
 static void __init taa_select_mitigation(void);
 static void __init taa_update_mitigation(void);
 static void __init taa_apply_mitigation(void);
@@ -197,7 +195,6 @@ void __init cpu_select_mitigations(void)
 	taa_select_mitigation();
 	mmio_select_mitigation();
 	rfds_select_mitigation();
-	md_clear_select_mitigation();
 	srbds_select_mitigation();
 	l1d_flush_select_mitigation();
 
@@ -661,68 +658,6 @@ static __init int rfds_parse_cmdline(char *str)
 }
 early_param("reg_file_data_sampling", rfds_parse_cmdline);
 
-#undef pr_fmt
-#define pr_fmt(fmt)     "" fmt
-
-static void __init md_clear_update_mitigation(void)
-{
-	if (cpu_mitigations_off())
-		return;
-
-	if (!boot_cpu_has(X86_FEATURE_CLEAR_CPU_BUF))
-		goto out;
-
-	/*
-	 * X86_FEATURE_CLEAR_CPU_BUF is now enabled. Update MDS, TAA and MMIO
-	 * Stale Data mitigation, if necessary.
-	 */
-	if (mds_mitigation == MDS_MITIGATION_OFF &&
-	    boot_cpu_has_bug(X86_BUG_MDS)) {
-		mds_mitigation = MDS_MITIGATION_FULL;
-		mds_select_mitigation();
-	}
-	if (taa_mitigation == TAA_MITIGATION_OFF &&
-	    boot_cpu_has_bug(X86_BUG_TAA)) {
-		taa_mitigation = TAA_MITIGATION_VERW;
-		taa_select_mitigation();
-	}
-	/*
-	 * MMIO_MITIGATION_OFF is not checked here so that mmio_stale_data_clear
-	 * gets updated correctly as per X86_FEATURE_CLEAR_CPU_BUF state.
-	 */
-	if (boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA)) {
-		mmio_mitigation = MMIO_MITIGATION_VERW;
-		mmio_select_mitigation();
-	}
-	if (rfds_mitigation == RFDS_MITIGATION_OFF &&
-	    boot_cpu_has_bug(X86_BUG_RFDS)) {
-		rfds_mitigation = RFDS_MITIGATION_VERW;
-		rfds_select_mitigation();
-	}
-out:
-	if (boot_cpu_has_bug(X86_BUG_MDS))
-		pr_info("MDS: %s\n", mds_strings[mds_mitigation]);
-	if (boot_cpu_has_bug(X86_BUG_TAA))
-		pr_info("TAA: %s\n", taa_strings[taa_mitigation]);
-	if (boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA))
-		pr_info("MMIO Stale Data: %s\n", mmio_strings[mmio_mitigation]);
-	else if (boot_cpu_has_bug(X86_BUG_MMIO_UNKNOWN))
-		pr_info("MMIO Stale Data: Unknown: No mitigations\n");
-	if (boot_cpu_has_bug(X86_BUG_RFDS))
-		pr_info("Register File Data Sampling: %s\n", rfds_strings[rfds_mitigation]);
-}
-
-static void __init md_clear_select_mitigation(void)
-{
-
-	/*
-	 * As these mitigations are inter-related and rely on VERW instruction
-	 * to clear the microarchitural buffers, update and print their status
-	 * after mitigation selection is done for each of these vulnerabilities.
-	 */
-	md_clear_update_mitigation();
-}
-
 #undef pr_fmt
 #define pr_fmt(fmt)	"SRBDS: " fmt
 
-- 
2.34.1



* [PATCH v2 09/35] x86/bugs: Restructure srbds mitigation
  2024-11-05 21:54 [PATCH v2 00/35] x86/bugs: Attack vector controls David Kaplan
                   ` (7 preceding siblings ...)
  2024-11-05 21:54 ` [PATCH v2 08/35] x86/bugs: Remove md_clear_*_mitigation() David Kaplan
@ 2024-11-05 21:54 ` David Kaplan
  2024-11-05 21:54 ` [PATCH v2 10/35] x86/bugs: Restructure gds mitigation David Kaplan
                   ` (25 subsequent siblings)
  34 siblings, 0 replies; 78+ messages in thread
From: David Kaplan @ 2024-11-05 21:54 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Restructure the srbds mitigation to use select/apply functions to create
consistent vulnerability handling.

Define a new AUTO mitigation for SRBDS.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 5ad989e8eea3..452aa5994aac 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -72,6 +72,7 @@ static void __init rfds_select_mitigation(void);
 static void __init rfds_update_mitigation(void);
 static void __init rfds_apply_mitigation(void);
 static void __init srbds_select_mitigation(void);
+static void __init srbds_apply_mitigation(void);
 static void __init l1d_flush_select_mitigation(void);
 static void __init srso_select_mitigation(void);
 static void __init gds_select_mitigation(void);
@@ -218,6 +219,7 @@ void __init cpu_select_mitigations(void)
 	taa_apply_mitigation();
 	mmio_apply_mitigation();
 	rfds_apply_mitigation();
+	srbds_apply_mitigation();
 }
 
 /*
@@ -663,6 +665,7 @@ early_param("reg_file_data_sampling", rfds_parse_cmdline);
 
 enum srbds_mitigations {
 	SRBDS_MITIGATION_OFF,
+	SRBDS_MITIGATION_AUTO,
 	SRBDS_MITIGATION_UCODE_NEEDED,
 	SRBDS_MITIGATION_FULL,
 	SRBDS_MITIGATION_TSX_OFF,
@@ -670,7 +673,7 @@ enum srbds_mitigations {
 };
 
 static enum srbds_mitigations srbds_mitigation __ro_after_init =
-	IS_ENABLED(CONFIG_MITIGATION_SRBDS) ? SRBDS_MITIGATION_FULL : SRBDS_MITIGATION_OFF;
+	IS_ENABLED(CONFIG_MITIGATION_SRBDS) ? SRBDS_MITIGATION_AUTO : SRBDS_MITIGATION_OFF;
 
 static const char * const srbds_strings[] = {
 	[SRBDS_MITIGATION_OFF]		= "Vulnerable",
@@ -724,6 +727,9 @@ static void __init srbds_select_mitigation(void)
 	if (!boot_cpu_has_bug(X86_BUG_SRBDS))
 		return;
 
+	if (srbds_mitigation == SRBDS_MITIGATION_AUTO)
+		srbds_mitigation = SRBDS_MITIGATION_FULL;
+
 	/*
 	 * Check to see if this is one of the MDS_NO systems supporting TSX that
 	 * are only exposed to SRBDS when TSX is enabled or when CPU is affected
@@ -738,6 +744,12 @@ static void __init srbds_select_mitigation(void)
 		srbds_mitigation = SRBDS_MITIGATION_UCODE_NEEDED;
 	else if (cpu_mitigations_off() || srbds_off)
 		srbds_mitigation = SRBDS_MITIGATION_OFF;
+}
+
+static void __init srbds_apply_mitigation(void)
+{
+	if (!boot_cpu_has_bug(X86_BUG_SRBDS))
+		return;
 
 	update_srbds_msr();
 	pr_info("%s\n", srbds_strings[srbds_mitigation]);
-- 
2.34.1



* [PATCH v2 10/35] x86/bugs: Restructure gds mitigation
  2024-11-05 21:54 [PATCH v2 00/35] x86/bugs: Attack vector controls David Kaplan
                   ` (8 preceding siblings ...)
  2024-11-05 21:54 ` [PATCH v2 09/35] x86/bugs: Restructure srbds mitigation David Kaplan
@ 2024-11-05 21:54 ` David Kaplan
  2024-11-14  6:21   ` Pawan Gupta
  2024-11-05 21:54 ` [PATCH v2 11/35] x86/bugs: Restructure spectre_v1 mitigation David Kaplan
                   ` (24 subsequent siblings)
  34 siblings, 1 reply; 78+ messages in thread
From: David Kaplan @ 2024-11-05 21:54 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Restructure gds mitigation to use select/apply functions to create
consistent vulnerability handling.

Define a new AUTO mitigation for GDS.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 21 +++++++++++++++++----
 1 file changed, 17 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 452aa5994aac..37056bdd3a9b 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -76,6 +76,7 @@ static void __init srbds_apply_mitigation(void);
 static void __init l1d_flush_select_mitigation(void);
 static void __init srso_select_mitigation(void);
 static void __init gds_select_mitigation(void);
+static void __init gds_apply_mitigation(void);
 
 /* The base value of the SPEC_CTRL MSR without task-specific bits set */
 u64 x86_spec_ctrl_base;
@@ -220,6 +221,7 @@ void __init cpu_select_mitigations(void)
 	mmio_apply_mitigation();
 	rfds_apply_mitigation();
 	srbds_apply_mitigation();
+	gds_apply_mitigation();
 }
 
 /*
@@ -801,6 +803,7 @@ early_param("l1d_flush", l1d_flush_parse_cmdline);
 
 enum gds_mitigations {
 	GDS_MITIGATION_OFF,
+	GDS_MITIGATION_AUTO,
 	GDS_MITIGATION_UCODE_NEEDED,
 	GDS_MITIGATION_FORCE,
 	GDS_MITIGATION_FULL,
@@ -809,7 +812,7 @@ enum gds_mitigations {
 };
 
 static enum gds_mitigations gds_mitigation __ro_after_init =
-	IS_ENABLED(CONFIG_MITIGATION_GDS) ? GDS_MITIGATION_FULL : GDS_MITIGATION_OFF;
+	IS_ENABLED(CONFIG_MITIGATION_GDS) ? GDS_MITIGATION_AUTO : GDS_MITIGATION_OFF;
 
 static const char * const gds_strings[] = {
 	[GDS_MITIGATION_OFF]		= "Vulnerable",
@@ -850,6 +853,7 @@ void update_gds_msr(void)
 	case GDS_MITIGATION_FORCE:
 	case GDS_MITIGATION_UCODE_NEEDED:
 	case GDS_MITIGATION_HYPERVISOR:
+	case GDS_MITIGATION_AUTO:
 		return;
 	}
 
@@ -873,13 +877,16 @@ static void __init gds_select_mitigation(void)
 
 	if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) {
 		gds_mitigation = GDS_MITIGATION_HYPERVISOR;
-		goto out;
+		return;
 	}
 
 	if (cpu_mitigations_off())
 		gds_mitigation = GDS_MITIGATION_OFF;
 	/* Will verify below that mitigation _can_ be disabled */
 
+	if (gds_mitigation == GDS_MITIGATION_AUTO)
+		gds_mitigation = GDS_MITIGATION_FULL;
+
 	/* No microcode */
 	if (!(x86_arch_cap_msr & ARCH_CAP_GDS_CTRL)) {
 		if (gds_mitigation == GDS_MITIGATION_FORCE) {
@@ -892,7 +899,7 @@ static void __init gds_select_mitigation(void)
 		} else {
 			gds_mitigation = GDS_MITIGATION_UCODE_NEEDED;
 		}
-		goto out;
+		return;
 	}
 
 	/* Microcode has mitigation, use it */
@@ -914,8 +921,14 @@ static void __init gds_select_mitigation(void)
 		gds_mitigation = GDS_MITIGATION_FULL_LOCKED;
 	}
 
+}
+
+static void __init gds_apply_mitigation(void)
+{
+	if (!boot_cpu_has_bug(X86_BUG_GDS))
+		return;
+
 	update_gds_msr();
-out:
 	pr_info("%s\n", gds_strings[gds_mitigation]);
 }
 
-- 
2.34.1



* [PATCH v2 11/35] x86/bugs: Restructure spectre_v1 mitigation
  2024-11-05 21:54 [PATCH v2 00/35] x86/bugs: Attack vector controls David Kaplan
                   ` (9 preceding siblings ...)
  2024-11-05 21:54 ` [PATCH v2 10/35] x86/bugs: Restructure gds mitigation David Kaplan
@ 2024-11-05 21:54 ` David Kaplan
  2024-11-14  6:57   ` Pawan Gupta
  2024-11-05 21:54 ` [PATCH v2 12/35] x86/bugs: Restructure retbleed mitigation David Kaplan
                   ` (23 subsequent siblings)
  34 siblings, 1 reply; 78+ messages in thread
From: David Kaplan @ 2024-11-05 21:54 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Restructure the spectre_v1 mitigation to use select/apply functions to create
consistent vulnerability handling.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 37056bdd3a9b..ea50c77ccb70 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -54,6 +54,7 @@
  */
 
 static void __init spectre_v1_select_mitigation(void);
+static void __init spectre_v1_apply_mitigation(void);
 static void __init spectre_v2_select_mitigation(void);
 static void __init retbleed_select_mitigation(void);
 static void __init spectre_v2_user_select_mitigation(void);
@@ -216,6 +217,7 @@ void __init cpu_select_mitigations(void)
 	mmio_update_mitigation();
 	rfds_update_mitigation();
 
+	spectre_v1_apply_mitigation();
 	mds_apply_mitigation();
 	taa_apply_mitigation();
 	mmio_apply_mitigation();
@@ -989,10 +991,14 @@ static bool smap_works_speculatively(void)
 
 static void __init spectre_v1_select_mitigation(void)
 {
-	if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1) || cpu_mitigations_off()) {
+	if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1) || cpu_mitigations_off())
 		spectre_v1_mitigation = SPECTRE_V1_MITIGATION_NONE;
+}
+
+static void __init spectre_v1_apply_mitigation(void)
+{
+	if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1) || cpu_mitigations_off())
 		return;
-	}
 
 	if (spectre_v1_mitigation == SPECTRE_V1_MITIGATION_AUTO) {
 		/*
-- 
2.34.1



* [PATCH v2 12/35] x86/bugs: Restructure retbleed mitigation
  2024-11-05 21:54 [PATCH v2 00/35] x86/bugs: Attack vector controls David Kaplan
                   ` (10 preceding siblings ...)
  2024-11-05 21:54 ` [PATCH v2 11/35] x86/bugs: Restructure spectre_v1 mitigation David Kaplan
@ 2024-11-05 21:54 ` David Kaplan
  2024-11-05 21:54 ` [PATCH v2 13/35] x86/bugs: Restructure spectre_v2_user mitigation David Kaplan
                   ` (22 subsequent siblings)
  34 siblings, 0 replies; 78+ messages in thread
From: David Kaplan @ 2024-11-05 21:54 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Restructure the retbleed mitigation to use select/update/apply functions to
create consistent vulnerability handling.  The new
retbleed_update_mitigation() simplifies handling of the dependency between
spectre_v2 and retbleed.

The command line options now directly select a preferred mitigation
which simplifies the logic.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 168 +++++++++++++++++--------------------
 1 file changed, 75 insertions(+), 93 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index ea50c77ccb70..36657bd7143b 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -57,6 +57,8 @@ static void __init spectre_v1_select_mitigation(void);
 static void __init spectre_v1_apply_mitigation(void);
 static void __init spectre_v2_select_mitigation(void);
 static void __init retbleed_select_mitigation(void);
+static void __init retbleed_update_mitigation(void);
+static void __init retbleed_apply_mitigation(void);
 static void __init spectre_v2_user_select_mitigation(void);
 static void __init ssb_select_mitigation(void);
 static void __init l1tf_select_mitigation(void);
@@ -180,11 +182,6 @@ void __init cpu_select_mitigations(void)
 	/* Select the proper CPU mitigations before patching alternatives: */
 	spectre_v1_select_mitigation();
 	spectre_v2_select_mitigation();
-	/*
-	 * retbleed_select_mitigation() relies on the state set by
-	 * spectre_v2_select_mitigation(); specifically it wants to know about
-	 * spectre_v2=ibrs.
-	 */
 	retbleed_select_mitigation();
 	/*
 	 * spectre_v2_user_select_mitigation() relies on the state set by
@@ -212,12 +209,14 @@ void __init cpu_select_mitigations(void)
 	 * After mitigations are selected, some may need to update their
 	 * choices.
 	 */
+	retbleed_update_mitigation();
 	mds_update_mitigation();
 	taa_update_mitigation();
 	mmio_update_mitigation();
 	rfds_update_mitigation();
 
 	spectre_v1_apply_mitigation();
+	retbleed_apply_mitigation();
 	mds_apply_mitigation();
 	taa_apply_mitigation();
 	mmio_apply_mitigation();
@@ -1053,6 +1052,7 @@ enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init = SPECTRE_V2_NONE;
 
 enum retbleed_mitigation {
 	RETBLEED_MITIGATION_NONE,
+	RETBLEED_MITIGATION_AUTO,
 	RETBLEED_MITIGATION_UNRET,
 	RETBLEED_MITIGATION_IBPB,
 	RETBLEED_MITIGATION_IBRS,
@@ -1060,14 +1060,6 @@ enum retbleed_mitigation {
 	RETBLEED_MITIGATION_STUFF,
 };
 
-enum retbleed_mitigation_cmd {
-	RETBLEED_CMD_OFF,
-	RETBLEED_CMD_AUTO,
-	RETBLEED_CMD_UNRET,
-	RETBLEED_CMD_IBPB,
-	RETBLEED_CMD_STUFF,
-};
-
 static const char * const retbleed_strings[] = {
 	[RETBLEED_MITIGATION_NONE]	= "Vulnerable",
 	[RETBLEED_MITIGATION_UNRET]	= "Mitigation: untrained return thunk",
@@ -1078,9 +1070,7 @@ static const char * const retbleed_strings[] = {
 };
 
 static enum retbleed_mitigation retbleed_mitigation __ro_after_init =
-	RETBLEED_MITIGATION_NONE;
-static enum retbleed_mitigation_cmd retbleed_cmd __ro_after_init =
-	IS_ENABLED(CONFIG_MITIGATION_RETBLEED) ? RETBLEED_CMD_AUTO : RETBLEED_CMD_OFF;
+	IS_ENABLED(CONFIG_MITIGATION_RETBLEED) ? RETBLEED_MITIGATION_AUTO : RETBLEED_MITIGATION_NONE;
 
 static int __ro_after_init retbleed_nosmt = false;
 
@@ -1097,15 +1087,15 @@ static int __init retbleed_parse_cmdline(char *str)
 		}
 
 		if (!strcmp(str, "off")) {
-			retbleed_cmd = RETBLEED_CMD_OFF;
+			retbleed_mitigation = RETBLEED_MITIGATION_NONE;
 		} else if (!strcmp(str, "auto")) {
-			retbleed_cmd = RETBLEED_CMD_AUTO;
+			retbleed_mitigation = RETBLEED_MITIGATION_AUTO;
 		} else if (!strcmp(str, "unret")) {
-			retbleed_cmd = RETBLEED_CMD_UNRET;
+			retbleed_mitigation = RETBLEED_MITIGATION_UNRET;
 		} else if (!strcmp(str, "ibpb")) {
-			retbleed_cmd = RETBLEED_CMD_IBPB;
+			retbleed_mitigation = RETBLEED_MITIGATION_IBPB;
 		} else if (!strcmp(str, "stuff")) {
-			retbleed_cmd = RETBLEED_CMD_STUFF;
+			retbleed_mitigation = RETBLEED_MITIGATION_STUFF;
 		} else if (!strcmp(str, "nosmt")) {
 			retbleed_nosmt = true;
 		} else if (!strcmp(str, "force")) {
@@ -1126,53 +1116,38 @@ early_param("retbleed", retbleed_parse_cmdline);
 
 static void __init retbleed_select_mitigation(void)
 {
-	bool mitigate_smt = false;
-
-	if (!boot_cpu_has_bug(X86_BUG_RETBLEED) || cpu_mitigations_off())
-		return;
-
-	switch (retbleed_cmd) {
-	case RETBLEED_CMD_OFF:
+	if (!boot_cpu_has_bug(X86_BUG_RETBLEED) || cpu_mitigations_off()) {
+		retbleed_mitigation = RETBLEED_MITIGATION_NONE;
 		return;
+	}
 
-	case RETBLEED_CMD_UNRET:
-		if (IS_ENABLED(CONFIG_MITIGATION_UNRET_ENTRY)) {
-			retbleed_mitigation = RETBLEED_MITIGATION_UNRET;
-		} else {
+	switch (retbleed_mitigation) {
+	case RETBLEED_MITIGATION_UNRET:
+		if (!IS_ENABLED(CONFIG_MITIGATION_UNRET_ENTRY)) {
+			retbleed_mitigation = RETBLEED_MITIGATION_AUTO;
 			pr_err("WARNING: kernel not compiled with MITIGATION_UNRET_ENTRY.\n");
-			goto do_cmd_auto;
 		}
 		break;
-
-	case RETBLEED_CMD_IBPB:
+	case RETBLEED_MITIGATION_IBPB:
 		if (!boot_cpu_has(X86_FEATURE_IBPB)) {
 			pr_err("WARNING: CPU does not support IBPB.\n");
-			goto do_cmd_auto;
-		} else if (IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY)) {
-			retbleed_mitigation = RETBLEED_MITIGATION_IBPB;
-		} else {
+			retbleed_mitigation = RETBLEED_MITIGATION_AUTO;
+		} else if (!IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY)) {
 			pr_err("WARNING: kernel not compiled with MITIGATION_IBPB_ENTRY.\n");
-			goto do_cmd_auto;
+			retbleed_mitigation = RETBLEED_MITIGATION_AUTO;
 		}
 		break;
-
-	case RETBLEED_CMD_STUFF:
-		if (IS_ENABLED(CONFIG_MITIGATION_CALL_DEPTH_TRACKING) &&
-		    spectre_v2_enabled == SPECTRE_V2_RETPOLINE) {
-			retbleed_mitigation = RETBLEED_MITIGATION_STUFF;
-
-		} else {
-			if (IS_ENABLED(CONFIG_MITIGATION_CALL_DEPTH_TRACKING))
-				pr_err("WARNING: retbleed=stuff depends on spectre_v2=retpoline\n");
-			else
-				pr_err("WARNING: kernel not compiled with MITIGATION_CALL_DEPTH_TRACKING.\n");
-
-			goto do_cmd_auto;
+	case RETBLEED_MITIGATION_STUFF:
+		if (!IS_ENABLED(CONFIG_MITIGATION_CALL_DEPTH_TRACKING)) {
+			pr_err("WARNING: kernel not compiled with MITIGATION_CALL_DEPTH_TRACKING.\n");
+			retbleed_mitigation = RETBLEED_MITIGATION_AUTO;
 		}
 		break;
+	default:
+		break;
+	}
 
-do_cmd_auto:
-	case RETBLEED_CMD_AUTO:
+	if (retbleed_mitigation == RETBLEED_MITIGATION_AUTO) {
 		if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
 		    boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) {
 			if (IS_ENABLED(CONFIG_MITIGATION_UNRET_ENTRY))
@@ -1181,17 +1156,55 @@ static void __init retbleed_select_mitigation(void)
 				 boot_cpu_has(X86_FEATURE_IBPB))
 				retbleed_mitigation = RETBLEED_MITIGATION_IBPB;
 		}
+	}
+}
 
-		/*
-		 * The Intel mitigation (IBRS or eIBRS) was already selected in
-		 * spectre_v2_select_mitigation().  'retbleed_mitigation' will
-		 * be set accordingly below.
-		 */
+static void __init retbleed_update_mitigation(void)
+{
+	if (!boot_cpu_has_bug(X86_BUG_RETBLEED) ||
+	    retbleed_mitigation == RETBLEED_MITIGATION_NONE)
+		return;
+	/*
+	 * Let IBRS trump all on Intel without affecting the effects of the
+	 * retbleed= cmdline option except for call depth based stuffing
+	 */
+	if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) {
+		switch (spectre_v2_enabled) {
+		case SPECTRE_V2_IBRS:
+			retbleed_mitigation = RETBLEED_MITIGATION_IBRS;
+			break;
+		case SPECTRE_V2_EIBRS:
+		case SPECTRE_V2_EIBRS_RETPOLINE:
+		case SPECTRE_V2_EIBRS_LFENCE:
+			retbleed_mitigation = RETBLEED_MITIGATION_EIBRS;
+			break;
+		default:
+			if (retbleed_mitigation != RETBLEED_MITIGATION_STUFF)
+				pr_err(RETBLEED_INTEL_MSG);
+		}
+	}
 
-		break;
+	if (retbleed_mitigation == RETBLEED_MITIGATION_STUFF) {
+		if (spectre_v2_enabled != SPECTRE_V2_RETPOLINE) {
+			pr_err("WARNING: retbleed=stuff depends on spectre_v2=retpoline\n");
+			retbleed_mitigation = RETBLEED_MITIGATION_AUTO;
+			/* Try again */
+			retbleed_select_mitigation();
+		}
 	}
 
+	pr_info("%s\n", retbleed_strings[retbleed_mitigation]);
+}
+
+
+static void __init retbleed_apply_mitigation(void)
+{
+	bool mitigate_smt = false;
+
 	switch (retbleed_mitigation) {
+	case RETBLEED_MITIGATION_NONE:
+		return;
+
 	case RETBLEED_MITIGATION_UNRET:
 		setup_force_cpu_cap(X86_FEATURE_RETHUNK);
 		setup_force_cpu_cap(X86_FEATURE_UNRET);
@@ -1243,27 +1256,6 @@ static void __init retbleed_select_mitigation(void)
 	    (retbleed_nosmt || cpu_mitigations_auto_nosmt()))
 		cpu_smt_disable(false);
 
-	/*
-	 * Let IBRS trump all on Intel without affecting the effects of the
-	 * retbleed= cmdline option except for call depth based stuffing
-	 */
-	if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) {
-		switch (spectre_v2_enabled) {
-		case SPECTRE_V2_IBRS:
-			retbleed_mitigation = RETBLEED_MITIGATION_IBRS;
-			break;
-		case SPECTRE_V2_EIBRS:
-		case SPECTRE_V2_EIBRS_RETPOLINE:
-		case SPECTRE_V2_EIBRS_LFENCE:
-			retbleed_mitigation = RETBLEED_MITIGATION_EIBRS;
-			break;
-		default:
-			if (retbleed_mitigation != RETBLEED_MITIGATION_STUFF)
-				pr_err(RETBLEED_INTEL_MSG);
-		}
-	}
-
-	pr_info("%s\n", retbleed_strings[retbleed_mitigation]);
 }
 
 #undef pr_fmt
@@ -1816,16 +1808,6 @@ static void __init spectre_v2_select_mitigation(void)
 			break;
 		}
 
-		if (IS_ENABLED(CONFIG_MITIGATION_IBRS_ENTRY) &&
-		    boot_cpu_has_bug(X86_BUG_RETBLEED) &&
-		    retbleed_cmd != RETBLEED_CMD_OFF &&
-		    retbleed_cmd != RETBLEED_CMD_STUFF &&
-		    boot_cpu_has(X86_FEATURE_IBRS) &&
-		    boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) {
-			mode = SPECTRE_V2_IBRS;
-			break;
-		}
-
 		mode = spectre_v2_select_retpoline();
 		break;
 
@@ -1981,7 +1963,7 @@ static void __init spectre_v2_select_mitigation(void)
 	    (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
 	     boot_cpu_data.x86_vendor == X86_VENDOR_HYGON)) {
 
-		if (retbleed_cmd != RETBLEED_CMD_IBPB) {
+		if (retbleed_mitigation != RETBLEED_MITIGATION_IBPB) {
 			setup_force_cpu_cap(X86_FEATURE_USE_IBPB_FW);
 			pr_info("Enabling Speculation Barrier for firmware calls\n");
 		}
-- 
2.34.1



* [PATCH v2 13/35] x86/bugs: Restructure spectre_v2_user mitigation
  2024-11-05 21:54 [PATCH v2 00/35] x86/bugs: Attack vector controls David Kaplan
                   ` (11 preceding siblings ...)
  2024-11-05 21:54 ` [PATCH v2 12/35] x86/bugs: Restructure retbleed mitigation David Kaplan
@ 2024-11-05 21:54 ` David Kaplan
  2024-11-06 18:56   ` kernel test robot
  2024-11-05 21:54 ` [PATCH v2 14/35] x86/bugs: Restructure bhi mitigation David Kaplan
                   ` (21 subsequent siblings)
  34 siblings, 1 reply; 78+ messages in thread
From: David Kaplan @ 2024-11-05 21:54 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Restructure spectre_v2_user to use select/update/apply functions to
create consistent vulnerability handling.

The ibpb/stibp choices are first decided based on the spectre_v2_user command
line option, but can also be modified by the spectre_v2 command line option.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 147 ++++++++++++++++++++-----------------
 1 file changed, 81 insertions(+), 66 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 36657bd7143b..9a41fd121b71 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -60,6 +60,8 @@ static void __init retbleed_select_mitigation(void);
 static void __init retbleed_update_mitigation(void);
 static void __init retbleed_apply_mitigation(void);
 static void __init spectre_v2_user_select_mitigation(void);
+static void __init spectre_v2_user_update_mitigation(void);
+static void __init spectre_v2_user_apply_mitigation(void);
 static void __init ssb_select_mitigation(void);
 static void __init l1tf_select_mitigation(void);
 static void __init mds_select_mitigation(void);
@@ -183,11 +185,6 @@ void __init cpu_select_mitigations(void)
 	spectre_v1_select_mitigation();
 	spectre_v2_select_mitigation();
 	retbleed_select_mitigation();
-	/*
-	 * spectre_v2_user_select_mitigation() relies on the state set by
-	 * retbleed_select_mitigation(); specifically the STIBP selection is
-	 * forced for UNRET or IBPB.
-	 */
 	spectre_v2_user_select_mitigation();
 	ssb_select_mitigation();
 	l1tf_select_mitigation();
@@ -210,6 +207,7 @@ void __init cpu_select_mitigations(void)
 	 * choices.
 	 */
 	retbleed_update_mitigation();
+	spectre_v2_user_update_mitigation();
 	mds_update_mitigation();
 	taa_update_mitigation();
 	mmio_update_mitigation();
@@ -217,6 +215,7 @@ void __init cpu_select_mitigations(void)
 
 	spectre_v1_apply_mitigation();
 	retbleed_apply_mitigation();
+	spectre_v2_user_apply_mitigation();
 	mds_apply_mitigation();
 	taa_apply_mitigation();
 	mmio_apply_mitigation();
@@ -1335,6 +1334,8 @@ enum spectre_v2_mitigation_cmd {
 	SPECTRE_V2_CMD_IBRS,
 };
 
+enum spectre_v2_mitigation_cmd spectre_v2_cmd __ro_after_init = SPECTRE_V2_CMD_AUTO;
+
 enum spectre_v2_user_cmd {
 	SPECTRE_V2_USER_CMD_NONE,
 	SPECTRE_V2_USER_CMD_AUTO,
@@ -1373,22 +1374,14 @@ static void __init spec_v2_user_print_cond(const char *reason, bool secure)
 		pr_info("spectre_v2_user=%s forced on command line.\n", reason);
 }
 
-static __ro_after_init enum spectre_v2_mitigation_cmd spectre_v2_cmd;
-
 static enum spectre_v2_user_cmd __init
 spectre_v2_parse_user_cmdline(void)
 {
 	char arg[20];
 	int ret, i;
 
-	switch (spectre_v2_cmd) {
-	case SPECTRE_V2_CMD_NONE:
+	if (cpu_mitigations_off())
 		return SPECTRE_V2_USER_CMD_NONE;
-	case SPECTRE_V2_CMD_FORCE:
-		return SPECTRE_V2_USER_CMD_FORCE;
-	default:
-		break;
-	}
 
 	ret = cmdline_find_option(boot_command_line, "spectre_v2_user",
 				  arg, sizeof(arg));
@@ -1412,65 +1405,70 @@ static inline bool spectre_v2_in_ibrs_mode(enum spectre_v2_mitigation mode)
 	return spectre_v2_in_eibrs_mode(mode) || mode == SPECTRE_V2_IBRS;
 }
 
+
 static void __init
 spectre_v2_user_select_mitigation(void)
 {
-	enum spectre_v2_user_mitigation mode = SPECTRE_V2_USER_NONE;
-	bool smt_possible = IS_ENABLED(CONFIG_SMP);
 	enum spectre_v2_user_cmd cmd;
 
 	if (!boot_cpu_has(X86_FEATURE_IBPB) && !boot_cpu_has(X86_FEATURE_STIBP))
 		return;
 
-	if (cpu_smt_control == CPU_SMT_FORCE_DISABLED ||
-	    cpu_smt_control == CPU_SMT_NOT_SUPPORTED)
-		smt_possible = false;
-
 	cmd = spectre_v2_parse_user_cmdline();
 	switch (cmd) {
 	case SPECTRE_V2_USER_CMD_NONE:
-		goto set_mode;
+		return;
 	case SPECTRE_V2_USER_CMD_FORCE:
-		mode = SPECTRE_V2_USER_STRICT;
+		spectre_v2_user_ibpb = SPECTRE_V2_USER_STRICT;
+		spectre_v2_user_stibp = SPECTRE_V2_USER_STRICT;
 		break;
 	case SPECTRE_V2_USER_CMD_AUTO:
 	case SPECTRE_V2_USER_CMD_PRCTL:
+		spectre_v2_user_ibpb = SPECTRE_V2_USER_PRCTL;
+		spectre_v2_user_stibp = SPECTRE_V2_USER_PRCTL;
+		break;
 	case SPECTRE_V2_USER_CMD_PRCTL_IBPB:
-		mode = SPECTRE_V2_USER_PRCTL;
+		spectre_v2_user_ibpb = SPECTRE_V2_USER_STRICT;
+		spectre_v2_user_stibp = SPECTRE_V2_USER_PRCTL;
 		break;
 	case SPECTRE_V2_USER_CMD_SECCOMP:
-	case SPECTRE_V2_USER_CMD_SECCOMP_IBPB:
 		if (IS_ENABLED(CONFIG_SECCOMP))
-			mode = SPECTRE_V2_USER_SECCOMP;
+			spectre_v2_user_ibpb = SPECTRE_V2_USER_SECCOMP;
 		else
-			mode = SPECTRE_V2_USER_PRCTL;
+			spectre_v2_user_ibpb = SPECTRE_V2_USER_PRCTL;
+		spectre_v2_user_stibp = spectre_v2_user_ibpb;
+		break;
+	case SPECTRE_V2_USER_CMD_SECCOMP_IBPB:
+		spectre_v2_user_ibpb = SPECTRE_V2_USER_STRICT;
+		spectre_v2_user_stibp = SPECTRE_V2_USER_PRCTL;
 		break;
 	}
 
-	/* Initialize Indirect Branch Prediction Barrier */
-	if (boot_cpu_has(X86_FEATURE_IBPB)) {
-		setup_force_cpu_cap(X86_FEATURE_USE_IBPB);
+	/*
+	 * At this point, an STIBP mode other than "off" has been set.
+	 * If STIBP support is not being forced, check if STIBP always-on
+	 * is preferred.
+	 */
+	if (spectre_v2_user_stibp != SPECTRE_V2_USER_STRICT &&
+	    boot_cpu_has(X86_FEATURE_AMD_STIBP_ALWAYS_ON))
+		spectre_v2_user_stibp = SPECTRE_V2_USER_STRICT_PREFERRED;
+}
 
-		spectre_v2_user_ibpb = mode;
-		switch (cmd) {
-		case SPECTRE_V2_USER_CMD_NONE:
-			break;
-		case SPECTRE_V2_USER_CMD_FORCE:
-		case SPECTRE_V2_USER_CMD_PRCTL_IBPB:
-		case SPECTRE_V2_USER_CMD_SECCOMP_IBPB:
-			static_branch_enable(&switch_mm_always_ibpb);
-			spectre_v2_user_ibpb = SPECTRE_V2_USER_STRICT;
-			break;
-		case SPECTRE_V2_USER_CMD_PRCTL:
-		case SPECTRE_V2_USER_CMD_AUTO:
-		case SPECTRE_V2_USER_CMD_SECCOMP:
-			static_branch_enable(&switch_mm_cond_ibpb);
-			break;
-		}
+static void __init spectre_v2_user_update_mitigation(void)
+{
+	bool smt_possible = IS_ENABLED(CONFIG_SMP);
 
-		pr_info("mitigation: Enabling %s Indirect Branch Prediction Barrier\n",
-			static_key_enabled(&switch_mm_always_ibpb) ?
-			"always-on" : "conditional");
+	if (cpu_smt_control == CPU_SMT_FORCE_DISABLED ||
+	    cpu_smt_control == CPU_SMT_NOT_SUPPORTED)
+		smt_possible = false;
+
+	/* The spectre_v2 cmd line can override spectre_v2_user options */
+	if (spectre_v2_cmd == SPECTRE_V2_CMD_NONE) {
+		spectre_v2_user_ibpb = SPECTRE_V2_USER_NONE;
+		spectre_v2_user_stibp = SPECTRE_V2_USER_NONE;
+	} else if (spectre_v2_cmd == SPECTRE_V2_CMD_FORCE) {
+		spectre_v2_user_ibpb = SPECTRE_V2_USER_STRICT;
+		spectre_v2_user_stibp = SPECTRE_V2_USER_STRICT;
 	}
 
 	/*
@@ -1488,30 +1486,47 @@ spectre_v2_user_select_mitigation(void)
 	if (!boot_cpu_has(X86_FEATURE_STIBP) ||
 	    !smt_possible ||
 	    (spectre_v2_in_eibrs_mode(spectre_v2_enabled) &&
-	     !boot_cpu_has(X86_FEATURE_AUTOIBRS)))
+	     !boot_cpu_has(X86_FEATURE_AUTOIBRS))) {
+		spectre_v2_user_stibp = SPECTRE_V2_USER_NONE;
 		return;
+	}
 
-	/*
-	 * At this point, an STIBP mode other than "off" has been set.
-	 * If STIBP support is not being forced, check if STIBP always-on
-	 * is preferred.
-	 */
-	if (mode != SPECTRE_V2_USER_STRICT &&
-	    boot_cpu_has(X86_FEATURE_AMD_STIBP_ALWAYS_ON))
-		mode = SPECTRE_V2_USER_STRICT_PREFERRED;
-
-	if (retbleed_mitigation == RETBLEED_MITIGATION_UNRET ||
-	    retbleed_mitigation == RETBLEED_MITIGATION_IBPB) {
-		if (mode != SPECTRE_V2_USER_STRICT &&
-		    mode != SPECTRE_V2_USER_STRICT_PREFERRED)
+	if (spectre_v2_user_stibp != SPECTRE_V2_USER_NONE &&
+	    (retbleed_mitigation == RETBLEED_MITIGATION_UNRET ||
+	    retbleed_mitigation == RETBLEED_MITIGATION_IBPB)) {
+		if (spectre_v2_user_stibp != SPECTRE_V2_USER_STRICT &&
+		    spectre_v2_user_stibp != SPECTRE_V2_USER_STRICT_PREFERRED)
 			pr_info("Selecting STIBP always-on mode to complement retbleed mitigation\n");
-		mode = SPECTRE_V2_USER_STRICT_PREFERRED;
+		spectre_v2_user_stibp = SPECTRE_V2_USER_STRICT_PREFERRED;
 	}
+	pr_info("%s\n", spectre_v2_user_strings[spectre_v2_user_stibp]);
+}
 
-	spectre_v2_user_stibp = mode;
+static void __init spectre_v2_user_apply_mitigation(void)
+{
+	/* Initialize Indirect Branch Prediction Barrier */
+	if (boot_cpu_has(X86_FEATURE_IBPB) &&
+	    spectre_v2_user_ibpb != SPECTRE_V2_USER_NONE) {
+		setup_force_cpu_cap(X86_FEATURE_USE_IBPB);
+
+		switch (spectre_v2_user_ibpb) {
+		case SPECTRE_V2_USER_NONE:
+			break;
+		case SPECTRE_V2_USER_STRICT:
+			static_branch_enable(&switch_mm_always_ibpb);
+			break;
+		case SPECTRE_V2_USER_PRCTL:
+		case SPECTRE_V2_USER_SECCOMP:
+			static_branch_enable(&switch_mm_cond_ibpb);
+			break;
+		default:
+			break;
+		}
 
-set_mode:
-	pr_info("%s\n", spectre_v2_user_strings[mode]);
+		pr_info("mitigation: Enabling %s Indirect Branch Prediction Barrier\n",
+			static_key_enabled(&switch_mm_always_ibpb) ?
+			"always-on" : "conditional");
+	}
 }
 
 static const char * const spectre_v2_strings[] = {
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v2 14/35] x86/bugs: Restructure bhi mitigation
  2024-11-05 21:54 [PATCH v2 00/35] x86/bugs: Attack vector controls David Kaplan
                   ` (12 preceding siblings ...)
  2024-11-05 21:54 ` [PATCH v2 13/35] x86/bugs: Restructure spectre_v2_user mitigation David Kaplan
@ 2024-11-05 21:54 ` David Kaplan
  2024-11-05 21:54 ` [PATCH v2 15/35] x86/bugs: Restructure spectre_v2 mitigation David Kaplan
                   ` (20 subsequent siblings)
  34 siblings, 0 replies; 78+ messages in thread
From: David Kaplan @ 2024-11-05 21:54 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Restructure bhi mitigation to use select/apply functions to create
consistent vulnerability handling.

Define new AUTO mitigation for bhi.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 22 ++++++++++++++++++----
 1 file changed, 18 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 9a41fd121b71..62ba49062182 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -82,6 +82,8 @@ static void __init l1d_flush_select_mitigation(void);
 static void __init srso_select_mitigation(void);
 static void __init gds_select_mitigation(void);
 static void __init gds_apply_mitigation(void);
+static void __init bhi_select_mitigation(void);
+static void __init bhi_apply_mitigation(void);
 
 /* The base value of the SPEC_CTRL MSR without task-specific bits set */
 u64 x86_spec_ctrl_base;
@@ -201,6 +203,7 @@ void __init cpu_select_mitigations(void)
 	 */
 	srso_select_mitigation();
 	gds_select_mitigation();
+	bhi_select_mitigation();
 
 	/*
 	 * After mitigations are selected, some may need to update their
@@ -222,6 +225,7 @@ void __init cpu_select_mitigations(void)
 	rfds_apply_mitigation();
 	srbds_apply_mitigation();
 	gds_apply_mitigation();
+	bhi_apply_mitigation();
 }
 
 /*
@@ -1743,12 +1747,13 @@ static bool __init spec_ctrl_bhi_dis(void)
 
 enum bhi_mitigations {
 	BHI_MITIGATION_OFF,
+	BHI_MITIGATION_AUTO,
 	BHI_MITIGATION_ON,
 	BHI_MITIGATION_VMEXIT_ONLY,
 };
 
 static enum bhi_mitigations bhi_mitigation __ro_after_init =
-	IS_ENABLED(CONFIG_MITIGATION_SPECTRE_BHI) ? BHI_MITIGATION_ON : BHI_MITIGATION_OFF;
+	IS_ENABLED(CONFIG_MITIGATION_SPECTRE_BHI) ? BHI_MITIGATION_AUTO : BHI_MITIGATION_OFF;
 
 static int __init spectre_bhi_parse_cmdline(char *str)
 {
@@ -1769,6 +1774,18 @@ static int __init spectre_bhi_parse_cmdline(char *str)
 early_param("spectre_bhi", spectre_bhi_parse_cmdline);
 
 static void __init bhi_select_mitigation(void)
+{
+	if (!boot_cpu_has(X86_BUG_BHI) || cpu_mitigations_off())
+		bhi_mitigation = BHI_MITIGATION_OFF;
+
+	if (bhi_mitigation == BHI_MITIGATION_OFF)
+		return;
+
+	if (bhi_mitigation == BHI_MITIGATION_AUTO)
+		bhi_mitigation = BHI_MITIGATION_ON;
+}
+
+static void __init bhi_apply_mitigation(void)
 {
 	if (bhi_mitigation == BHI_MITIGATION_OFF)
 		return;
@@ -1900,9 +1917,6 @@ static void __init spectre_v2_select_mitigation(void)
 	    mode == SPECTRE_V2_RETPOLINE)
 		spec_ctrl_disable_kernel_rrsba();
 
-	if (boot_cpu_has(X86_BUG_BHI))
-		bhi_select_mitigation();
-
 	spectre_v2_enabled = mode;
 	pr_info("%s\n", spectre_v2_strings[mode]);
 
-- 
2.34.1



* [PATCH v2 15/35] x86/bugs: Restructure spectre_v2 mitigation
  2024-11-05 21:54 [PATCH v2 00/35] x86/bugs: Attack vector controls David Kaplan
                   ` (13 preceding siblings ...)
  2024-11-05 21:54 ` [PATCH v2 14/35] x86/bugs: Restructure bhi mitigation David Kaplan
@ 2024-11-05 21:54 ` David Kaplan
  2024-11-05 21:54 ` [PATCH v2 16/35] x86/bugs: Restructure ssb mitigation David Kaplan
                   ` (19 subsequent siblings)
  34 siblings, 0 replies; 78+ messages in thread
From: David Kaplan @ 2024-11-05 21:54 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Restructure spectre_v2 to use select/update/apply functions to create
consistent vulnerability handling.

The spectre_v2 mitigation may be updated based on the selected retbleed
mitigation.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 62 ++++++++++++++++++++++++++------------
 1 file changed, 42 insertions(+), 20 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 62ba49062182..ec5cc66513bd 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -56,6 +56,8 @@
 static void __init spectre_v1_select_mitigation(void);
 static void __init spectre_v1_apply_mitigation(void);
 static void __init spectre_v2_select_mitigation(void);
+static void __init spectre_v2_update_mitigation(void);
+static void __init spectre_v2_apply_mitigation(void);
 static void __init retbleed_select_mitigation(void);
 static void __init retbleed_update_mitigation(void);
 static void __init retbleed_apply_mitigation(void);
@@ -209,6 +211,7 @@ void __init cpu_select_mitigations(void)
 	 * After mitigations are selected, some may need to update their
 	 * choices.
 	 */
+	spectre_v2_update_mitigation();
 	retbleed_update_mitigation();
 	spectre_v2_user_update_mitigation();
 	mds_update_mitigation();
@@ -217,6 +220,7 @@ void __init cpu_select_mitigations(void)
 	rfds_update_mitigation();
 
 	spectre_v1_apply_mitigation();
+	spectre_v2_apply_mitigation();
 	retbleed_apply_mitigation();
 	spectre_v2_user_apply_mitigation();
 	mds_apply_mitigation();
@@ -1818,18 +1822,18 @@ static void __init bhi_apply_mitigation(void)
 
 static void __init spectre_v2_select_mitigation(void)
 {
-	enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
 	enum spectre_v2_mitigation mode = SPECTRE_V2_NONE;
+	spectre_v2_cmd = spectre_v2_parse_cmdline();
 
 	/*
 	 * If the CPU is not affected and the command line mode is NONE or AUTO
 	 * then nothing to do.
 	 */
 	if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V2) &&
-	    (cmd == SPECTRE_V2_CMD_NONE || cmd == SPECTRE_V2_CMD_AUTO))
+	    (spectre_v2_cmd == SPECTRE_V2_CMD_NONE || spectre_v2_cmd == SPECTRE_V2_CMD_AUTO))
 		return;
 
-	switch (cmd) {
+	switch (spectre_v2_cmd) {
 	case SPECTRE_V2_CMD_NONE:
 		return;
 
@@ -1873,10 +1877,29 @@ static void __init spectre_v2_select_mitigation(void)
 		break;
 	}
 
-	if (mode == SPECTRE_V2_EIBRS && unprivileged_ebpf_enabled())
+	spectre_v2_enabled = mode;
+}
+
+static void __init spectre_v2_update_mitigation(void)
+{
+	if (spectre_v2_cmd == SPECTRE_V2_CMD_AUTO) {
+		if (IS_ENABLED(CONFIG_MITIGATION_IBRS_ENTRY) &&
+		    boot_cpu_has_bug(X86_BUG_RETBLEED) &&
+		    retbleed_mitigation != RETBLEED_MITIGATION_NONE &&
+		    retbleed_mitigation != RETBLEED_MITIGATION_STUFF &&
+		    boot_cpu_has(X86_FEATURE_IBRS) &&
+		    boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) {
+			spectre_v2_enabled = SPECTRE_V2_IBRS;
+		}
+	}
+}
+
+static void __init spectre_v2_apply_mitigation(void)
+{
+	if (spectre_v2_enabled == SPECTRE_V2_EIBRS && unprivileged_ebpf_enabled())
 		pr_err(SPECTRE_V2_EIBRS_EBPF_MSG);
 
-	if (spectre_v2_in_ibrs_mode(mode)) {
+	if (spectre_v2_in_ibrs_mode(spectre_v2_enabled)) {
 		if (boot_cpu_has(X86_FEATURE_AUTOIBRS)) {
 			msr_set_bit(MSR_EFER, _EFER_AUTOIBRS);
 		} else {
@@ -1885,8 +1908,10 @@ static void __init spectre_v2_select_mitigation(void)
 		}
 	}
 
-	switch (mode) {
+	switch (spectre_v2_enabled) {
 	case SPECTRE_V2_NONE:
+		return;
+
 	case SPECTRE_V2_EIBRS:
 		break;
 
@@ -1912,13 +1937,12 @@ static void __init spectre_v2_select_mitigation(void)
 	 * JMPs gets protection against BHI and Intramode-BTI, but RET
 	 * prediction from a non-RSB predictor is still a risk.
 	 */
-	if (mode == SPECTRE_V2_EIBRS_LFENCE ||
-	    mode == SPECTRE_V2_EIBRS_RETPOLINE ||
-	    mode == SPECTRE_V2_RETPOLINE)
+	if (spectre_v2_enabled == SPECTRE_V2_EIBRS_LFENCE ||
+	    spectre_v2_enabled == SPECTRE_V2_EIBRS_RETPOLINE ||
+	    spectre_v2_enabled == SPECTRE_V2_RETPOLINE)
 		spec_ctrl_disable_kernel_rrsba();
 
-	spectre_v2_enabled = mode;
-	pr_info("%s\n", spectre_v2_strings[mode]);
+	pr_info("%s\n", spectre_v2_strings[spectre_v2_enabled]);
 
 	/*
 	 * If Spectre v2 protection has been enabled, fill the RSB during a
@@ -1973,7 +1997,7 @@ static void __init spectre_v2_select_mitigation(void)
 		 * the host nor the guest have to clear or fill RSB entries to
 		 * avoid poisoning, skip RSB filling at VMEXIT in that case.
 		 */
-		spectre_v2_determine_rsb_fill_type_at_vmexit(mode);
+		spectre_v2_determine_rsb_fill_type_at_vmexit(spectre_v2_enabled);
 	}
 
 	/*
@@ -1982,10 +2006,10 @@ static void __init spectre_v2_select_mitigation(void)
 	 * firmware calls only when IBRS / Enhanced / Automatic IBRS aren't
 	 * otherwise enabled.
 	 *
-	 * Use "mode" to check Enhanced IBRS instead of boot_cpu_has(), because
-	 * the user might select retpoline on the kernel command line and if
-	 * the CPU supports Enhanced IBRS, kernel might un-intentionally not
-	 * enable IBRS around firmware calls.
+	 * Use "spectre_v2_enabled" to check Enhanced IBRS instead of
+	 * boot_cpu_has(), because the user might select retpoline on the kernel
+	 * command line and if the CPU supports Enhanced IBRS, kernel might
+	 * un-intentionally not enable IBRS around firmware calls.
 	 */
 	if (boot_cpu_has_bug(X86_BUG_RETBLEED) &&
 	    boot_cpu_has(X86_FEATURE_IBPB) &&
@@ -1997,13 +2021,11 @@ static void __init spectre_v2_select_mitigation(void)
 			pr_info("Enabling Speculation Barrier for firmware calls\n");
 		}
 
-	} else if (boot_cpu_has(X86_FEATURE_IBRS) && !spectre_v2_in_ibrs_mode(mode)) {
+	} else if (boot_cpu_has(X86_FEATURE_IBRS) &&
+		   !spectre_v2_in_ibrs_mode(spectre_v2_enabled)) {
 		setup_force_cpu_cap(X86_FEATURE_USE_IBRS_FW);
 		pr_info("Enabling Restricted Speculation for firmware calls\n");
 	}
-
-	/* Set up IBPB and STIBP depending on the general spectre V2 command */
-	spectre_v2_cmd = cmd;
 }
 
 static void update_stibp_msr(void * __unused)
-- 
2.34.1



* [PATCH v2 16/35] x86/bugs: Restructure ssb mitigation
  2024-11-05 21:54 [PATCH v2 00/35] x86/bugs: Attack vector controls David Kaplan
                   ` (14 preceding siblings ...)
  2024-11-05 21:54 ` [PATCH v2 15/35] x86/bugs: Restructure spectre_v2 mitigation David Kaplan
@ 2024-11-05 21:54 ` David Kaplan
  2024-11-05 21:54 ` [PATCH v2 17/35] x86/bugs: Restructure l1tf mitigation David Kaplan
                   ` (18 subsequent siblings)
  34 siblings, 0 replies; 78+ messages in thread
From: David Kaplan @ 2024-11-05 21:54 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Restructure ssb to use select/apply functions to create consistent
vulnerability handling.

Remove __ssb_select_mitigation() and split the functionality between the
select/apply functions.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 34 ++++++++++++++++------------------
 1 file changed, 16 insertions(+), 18 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index ec5cc66513bd..a3bbb0831845 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -65,6 +65,7 @@ static void __init spectre_v2_user_select_mitigation(void);
 static void __init spectre_v2_user_update_mitigation(void);
 static void __init spectre_v2_user_apply_mitigation(void);
 static void __init ssb_select_mitigation(void);
+static void __init ssb_apply_mitigation(void);
 static void __init l1tf_select_mitigation(void);
 static void __init mds_select_mitigation(void);
 static void __init mds_update_mitigation(void);
@@ -223,6 +224,7 @@ void __init cpu_select_mitigations(void)
 	spectre_v2_apply_mitigation();
 	retbleed_apply_mitigation();
 	spectre_v2_user_apply_mitigation();
+	ssb_apply_mitigation();
 	mds_apply_mitigation();
 	taa_apply_mitigation();
 	mmio_apply_mitigation();
@@ -2214,19 +2216,18 @@ static enum ssb_mitigation_cmd __init ssb_parse_cmdline(void)
 	return cmd;
 }
 
-static enum ssb_mitigation __init __ssb_select_mitigation(void)
+static void ssb_select_mitigation(void)
 {
-	enum ssb_mitigation mode = SPEC_STORE_BYPASS_NONE;
 	enum ssb_mitigation_cmd cmd;
 
 	if (!boot_cpu_has(X86_FEATURE_SSBD))
-		return mode;
+		return;
 
 	cmd = ssb_parse_cmdline();
 	if (!boot_cpu_has_bug(X86_BUG_SPEC_STORE_BYPASS) &&
 	    (cmd == SPEC_STORE_BYPASS_CMD_NONE ||
 	     cmd == SPEC_STORE_BYPASS_CMD_AUTO))
-		return mode;
+		return;
 
 	switch (cmd) {
 	case SPEC_STORE_BYPASS_CMD_SECCOMP:
@@ -2235,28 +2236,34 @@ static enum ssb_mitigation __init __ssb_select_mitigation(void)
 		 * enabled.
 		 */
 		if (IS_ENABLED(CONFIG_SECCOMP))
-			mode = SPEC_STORE_BYPASS_SECCOMP;
+			ssb_mode = SPEC_STORE_BYPASS_SECCOMP;
 		else
-			mode = SPEC_STORE_BYPASS_PRCTL;
+			ssb_mode = SPEC_STORE_BYPASS_PRCTL;
 		break;
 	case SPEC_STORE_BYPASS_CMD_ON:
-		mode = SPEC_STORE_BYPASS_DISABLE;
+		ssb_mode = SPEC_STORE_BYPASS_DISABLE;
 		break;
 	case SPEC_STORE_BYPASS_CMD_AUTO:
 	case SPEC_STORE_BYPASS_CMD_PRCTL:
-		mode = SPEC_STORE_BYPASS_PRCTL;
+		ssb_mode = SPEC_STORE_BYPASS_PRCTL;
 		break;
 	case SPEC_STORE_BYPASS_CMD_NONE:
 		break;
 	}
 
+	if (boot_cpu_has_bug(X86_BUG_SPEC_STORE_BYPASS))
+		pr_info("%s\n", ssb_strings[ssb_mode]);
+}
+
+static void __init ssb_apply_mitigation(void)
+{
 	/*
 	 * We have three CPU feature flags that are in play here:
 	 *  - X86_BUG_SPEC_STORE_BYPASS - CPU is susceptible.
 	 *  - X86_FEATURE_SSBD - CPU is able to turn off speculative store bypass
 	 *  - X86_FEATURE_SPEC_STORE_BYPASS_DISABLE - engage the mitigation
 	 */
-	if (mode == SPEC_STORE_BYPASS_DISABLE) {
+	if (ssb_mode == SPEC_STORE_BYPASS_DISABLE) {
 		setup_force_cpu_cap(X86_FEATURE_SPEC_STORE_BYPASS_DISABLE);
 		/*
 		 * Intel uses the SPEC CTRL MSR Bit(2) for this, while AMD may
@@ -2271,15 +2278,6 @@ static enum ssb_mitigation __init __ssb_select_mitigation(void)
 		}
 	}
 
-	return mode;
-}
-
-static void ssb_select_mitigation(void)
-{
-	ssb_mode = __ssb_select_mitigation();
-
-	if (boot_cpu_has_bug(X86_BUG_SPEC_STORE_BYPASS))
-		pr_info("%s\n", ssb_strings[ssb_mode]);
 }
 
 #undef pr_fmt
-- 
2.34.1



* [PATCH v2 17/35] x86/bugs: Restructure l1tf mitigation
  2024-11-05 21:54 [PATCH v2 00/35] x86/bugs: Attack vector controls David Kaplan
                   ` (15 preceding siblings ...)
  2024-11-05 21:54 ` [PATCH v2 16/35] x86/bugs: Restructure ssb mitigation David Kaplan
@ 2024-11-05 21:54 ` David Kaplan
  2024-11-05 21:54 ` [PATCH v2 18/35] x86/bugs: Restructure srso mitigation David Kaplan
                   ` (17 subsequent siblings)
  34 siblings, 0 replies; 78+ messages in thread
From: David Kaplan @ 2024-11-05 21:54 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Restructure l1tf to use select/apply functions to create consistent
vulnerability handling.

Define new AUTO mitigation for l1tf.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/include/asm/processor.h |  1 +
 arch/x86/kernel/cpu/bugs.c       | 27 +++++++++++++++++++++------
 arch/x86/kvm/vmx/vmx.c           |  2 ++
 3 files changed, 24 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index ea4b87b44455..49da4636ce5a 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -743,6 +743,7 @@ void store_cpu_caps(struct cpuinfo_x86 *info);
 
 enum l1tf_mitigations {
 	L1TF_MITIGATION_OFF,
+	L1TF_MITIGATION_AUTO,
 	L1TF_MITIGATION_FLUSH_NOWARN,
 	L1TF_MITIGATION_FLUSH,
 	L1TF_MITIGATION_FLUSH_NOSMT,
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index a3bbb0831845..98ef1cbc9e2a 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -67,6 +67,7 @@ static void __init spectre_v2_user_apply_mitigation(void);
 static void __init ssb_select_mitigation(void);
 static void __init ssb_apply_mitigation(void);
 static void __init l1tf_select_mitigation(void);
+static void __init l1tf_apply_mitigation(void);
 static void __init mds_select_mitigation(void);
 static void __init mds_update_mitigation(void);
 static void __init mds_apply_mitigation(void);
@@ -225,6 +226,7 @@ void __init cpu_select_mitigations(void)
 	retbleed_apply_mitigation();
 	spectre_v2_user_apply_mitigation();
 	ssb_apply_mitigation();
+	l1tf_apply_mitigation();
 	mds_apply_mitigation();
 	taa_apply_mitigation();
 	mmio_apply_mitigation();
@@ -2533,7 +2535,7 @@ EXPORT_SYMBOL_GPL(itlb_multihit_kvm_mitigation);
 
 /* Default mitigation for L1TF-affected CPUs */
 enum l1tf_mitigations l1tf_mitigation __ro_after_init =
-	IS_ENABLED(CONFIG_MITIGATION_L1TF) ? L1TF_MITIGATION_FLUSH : L1TF_MITIGATION_OFF;
+	IS_ENABLED(CONFIG_MITIGATION_L1TF) ? L1TF_MITIGATION_AUTO : L1TF_MITIGATION_OFF;
 #if IS_ENABLED(CONFIG_KVM_INTEL)
 EXPORT_SYMBOL_GPL(l1tf_mitigation);
 #endif
@@ -2580,23 +2582,36 @@ static void override_cache_bits(struct cpuinfo_x86 *c)
 }
 
 static void __init l1tf_select_mitigation(void)
+{
+	if (!boot_cpu_has_bug(X86_BUG_L1TF) || cpu_mitigations_off()) {
+		l1tf_mitigation = L1TF_MITIGATION_OFF;
+		return;
+	}
+
+	if (l1tf_mitigation == L1TF_MITIGATION_AUTO) {
+		if (cpu_mitigations_auto_nosmt())
+			l1tf_mitigation = L1TF_MITIGATION_FLUSH_NOSMT;
+		else
+			l1tf_mitigation = L1TF_MITIGATION_FLUSH;
+	}
+
+}
+
+static void __init l1tf_apply_mitigation(void)
 {
 	u64 half_pa;
 
 	if (!boot_cpu_has_bug(X86_BUG_L1TF))
 		return;
 
-	if (cpu_mitigations_off())
-		l1tf_mitigation = L1TF_MITIGATION_OFF;
-	else if (cpu_mitigations_auto_nosmt())
-		l1tf_mitigation = L1TF_MITIGATION_FLUSH_NOSMT;
-
 	override_cache_bits(&boot_cpu_data);
 
 	switch (l1tf_mitigation) {
 	case L1TF_MITIGATION_OFF:
+		return;
 	case L1TF_MITIGATION_FLUSH_NOWARN:
 	case L1TF_MITIGATION_FLUSH:
+	case L1TF_MITIGATION_AUTO:
 		break;
 	case L1TF_MITIGATION_FLUSH_NOSMT:
 	case L1TF_MITIGATION_FULL:
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 81ed596e4454..fe99022d14c7 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -271,6 +271,7 @@ static int vmx_setup_l1d_flush(enum vmx_l1d_flush_state l1tf)
 		case L1TF_MITIGATION_OFF:
 			l1tf = VMENTER_L1D_FLUSH_NEVER;
 			break;
+		case L1TF_MITIGATION_AUTO:
 		case L1TF_MITIGATION_FLUSH_NOWARN:
 		case L1TF_MITIGATION_FLUSH:
 		case L1TF_MITIGATION_FLUSH_NOSMT:
@@ -7634,6 +7635,7 @@ int vmx_vm_init(struct kvm *kvm)
 		case L1TF_MITIGATION_FLUSH_NOWARN:
 			/* 'I explicitly don't care' is set */
 			break;
+		case L1TF_MITIGATION_AUTO:
 		case L1TF_MITIGATION_FLUSH:
 		case L1TF_MITIGATION_FLUSH_NOSMT:
 		case L1TF_MITIGATION_FULL:
-- 
2.34.1



* [PATCH v2 18/35] x86/bugs: Restructure srso mitigation
  2024-11-05 21:54 [PATCH v2 00/35] x86/bugs: Attack vector controls David Kaplan
                   ` (16 preceding siblings ...)
  2024-11-05 21:54 ` [PATCH v2 17/35] x86/bugs: Restructure l1tf mitigation David Kaplan
@ 2024-11-05 21:54 ` David Kaplan
  2025-01-02 14:55   ` Borislav Petkov
  2024-11-05 21:54 ` [PATCH v2 19/35] Documentation/x86: Document the new attack vector controls David Kaplan
                   ` (16 subsequent siblings)
  34 siblings, 1 reply; 78+ messages in thread
From: David Kaplan @ 2024-11-05 21:54 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Restructure srso to use select/update/apply functions to create
consistent vulnerability handling.  As with retbleed, the command-line
options directly select mitigations, which can later be modified.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 182 ++++++++++++++++++-------------------
 1 file changed, 89 insertions(+), 93 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 98ef1cbc9e2a..178415d8026a 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -84,6 +84,8 @@ static void __init srbds_select_mitigation(void);
 static void __init srbds_apply_mitigation(void);
 static void __init l1d_flush_select_mitigation(void);
 static void __init srso_select_mitigation(void);
+static void __init srso_update_mitigation(void);
+static void __init srso_apply_mitigation(void);
 static void __init gds_select_mitigation(void);
 static void __init gds_apply_mitigation(void);
 static void __init bhi_select_mitigation(void);
@@ -200,11 +202,6 @@ void __init cpu_select_mitigations(void)
 	rfds_select_mitigation();
 	srbds_select_mitigation();
 	l1d_flush_select_mitigation();
-
-	/*
-	 * srso_select_mitigation() depends and must run after
-	 * retbleed_select_mitigation().
-	 */
 	srso_select_mitigation();
 	gds_select_mitigation();
 	bhi_select_mitigation();
@@ -220,6 +217,7 @@ void __init cpu_select_mitigations(void)
 	taa_update_mitigation();
 	mmio_update_mitigation();
 	rfds_update_mitigation();
+	srso_update_mitigation();
 
 	spectre_v1_apply_mitigation();
 	spectre_v2_apply_mitigation();
@@ -232,6 +230,7 @@ void __init cpu_select_mitigations(void)
 	mmio_apply_mitigation();
 	rfds_apply_mitigation();
 	srbds_apply_mitigation();
+	srso_apply_mitigation();
 	gds_apply_mitigation();
 	bhi_apply_mitigation();
 }
@@ -2671,6 +2670,7 @@ early_param("l1tf", l1tf_cmdline);
 
 enum srso_mitigation {
 	SRSO_MITIGATION_NONE,
+	SRSO_MITIGATION_AUTO,
 	SRSO_MITIGATION_UCODE_NEEDED,
 	SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED,
 	SRSO_MITIGATION_MICROCODE,
@@ -2679,14 +2679,6 @@ enum srso_mitigation {
 	SRSO_MITIGATION_IBPB_ON_VMEXIT,
 };
 
-enum srso_mitigation_cmd {
-	SRSO_CMD_OFF,
-	SRSO_CMD_MICROCODE,
-	SRSO_CMD_SAFE_RET,
-	SRSO_CMD_IBPB,
-	SRSO_CMD_IBPB_ON_VMEXIT,
-};
-
 static const char * const srso_strings[] = {
 	[SRSO_MITIGATION_NONE]			= "Vulnerable",
 	[SRSO_MITIGATION_UCODE_NEEDED]		= "Vulnerable: No microcode",
@@ -2697,8 +2689,7 @@ static const char * const srso_strings[] = {
 	[SRSO_MITIGATION_IBPB_ON_VMEXIT]	= "Mitigation: IBPB on VMEXIT only"
 };
 
-static enum srso_mitigation srso_mitigation __ro_after_init = SRSO_MITIGATION_NONE;
-static enum srso_mitigation_cmd srso_cmd __ro_after_init = SRSO_CMD_SAFE_RET;
+static enum srso_mitigation srso_mitigation __ro_after_init = SRSO_MITIGATION_AUTO;
 
 static int __init srso_parse_cmdline(char *str)
 {
@@ -2706,15 +2697,15 @@ static int __init srso_parse_cmdline(char *str)
 		return -EINVAL;
 
 	if (!strcmp(str, "off"))
-		srso_cmd = SRSO_CMD_OFF;
+		srso_mitigation = SRSO_MITIGATION_NONE;
 	else if (!strcmp(str, "microcode"))
-		srso_cmd = SRSO_CMD_MICROCODE;
+		srso_mitigation = SRSO_MITIGATION_MICROCODE;
 	else if (!strcmp(str, "safe-ret"))
-		srso_cmd = SRSO_CMD_SAFE_RET;
+		srso_mitigation = SRSO_MITIGATION_SAFE_RET;
 	else if (!strcmp(str, "ibpb"))
-		srso_cmd = SRSO_CMD_IBPB;
+		srso_mitigation = SRSO_MITIGATION_IBPB;
 	else if (!strcmp(str, "ibpb-vmexit"))
-		srso_cmd = SRSO_CMD_IBPB_ON_VMEXIT;
+		srso_mitigation = SRSO_MITIGATION_IBPB_ON_VMEXIT;
 	else
 		pr_err("Ignoring unknown SRSO option (%s).", str);
 
@@ -2728,13 +2719,15 @@ static void __init srso_select_mitigation(void)
 {
 	bool has_microcode = boot_cpu_has(X86_FEATURE_IBPB_BRTYPE);
 
-	if (!boot_cpu_has_bug(X86_BUG_SRSO) ||
-	    cpu_mitigations_off() ||
-	    srso_cmd == SRSO_CMD_OFF) {
-		if (boot_cpu_has(X86_FEATURE_SBPB))
-			x86_pred_cmd = PRED_CMD_SBPB;
+	if (!boot_cpu_has_bug(X86_BUG_SRSO) || cpu_mitigations_off())
+		srso_mitigation = SRSO_MITIGATION_NONE;
+
+	if (srso_mitigation == SRSO_MITIGATION_NONE)
 		return;
-	}
+
+	/* Default mitigation */
+	if (srso_mitigation == SRSO_MITIGATION_AUTO)
+		srso_mitigation = SRSO_MITIGATION_SAFE_RET;
 
 	if (has_microcode) {
 		/*
@@ -2747,94 +2740,97 @@ static void __init srso_select_mitigation(void)
 			setup_force_cpu_cap(X86_FEATURE_SRSO_NO);
 			return;
 		}
-
-		if (retbleed_mitigation == RETBLEED_MITIGATION_IBPB) {
-			srso_mitigation = SRSO_MITIGATION_IBPB;
-			goto out;
-		}
 	} else {
 		pr_warn("IBPB-extending microcode not applied!\n");
 		pr_warn(SRSO_NOTICE);
 
-		/* may be overwritten by SRSO_CMD_SAFE_RET below */
-		srso_mitigation = SRSO_MITIGATION_UCODE_NEEDED;
+		/* Fall-back to Safe-RET */
+		srso_mitigation = SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED;
 	}
 
-	switch (srso_cmd) {
-	case SRSO_CMD_MICROCODE:
-		if (has_microcode) {
-			srso_mitigation = SRSO_MITIGATION_MICROCODE;
-			pr_warn(SRSO_NOTICE);
-		}
+	switch (srso_mitigation) {
+	case SRSO_MITIGATION_MICROCODE:
+		pr_warn(SRSO_NOTICE);
 		break;
 
-	case SRSO_CMD_SAFE_RET:
-		if (IS_ENABLED(CONFIG_MITIGATION_SRSO)) {
-			/*
-			 * Enable the return thunk for generated code
-			 * like ftrace, static_call, etc.
-			 */
-			setup_force_cpu_cap(X86_FEATURE_RETHUNK);
-			setup_force_cpu_cap(X86_FEATURE_UNRET);
-
-			if (boot_cpu_data.x86 == 0x19) {
-				setup_force_cpu_cap(X86_FEATURE_SRSO_ALIAS);
-				x86_return_thunk = srso_alias_return_thunk;
-			} else {
-				setup_force_cpu_cap(X86_FEATURE_SRSO);
-				x86_return_thunk = srso_return_thunk;
-			}
-			if (has_microcode)
-				srso_mitigation = SRSO_MITIGATION_SAFE_RET;
-			else
-				srso_mitigation = SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED;
-		} else {
+	case SRSO_MITIGATION_SAFE_RET:
+	case SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED:
+		if (!IS_ENABLED(CONFIG_MITIGATION_SRSO))
 			pr_err("WARNING: kernel not compiled with MITIGATION_SRSO.\n");
-		}
 		break;
 
-	case SRSO_CMD_IBPB:
-		if (IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY)) {
-			if (has_microcode) {
-				setup_force_cpu_cap(X86_FEATURE_ENTRY_IBPB);
-				srso_mitigation = SRSO_MITIGATION_IBPB;
-
-				/*
-				 * IBPB on entry already obviates the need for
-				 * software-based untraining so clear those in case some
-				 * other mitigation like Retbleed has selected them.
-				 */
-				setup_clear_cpu_cap(X86_FEATURE_UNRET);
-				setup_clear_cpu_cap(X86_FEATURE_RETHUNK);
-			}
-		} else {
+	case SRSO_MITIGATION_IBPB:
+		if (!IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY))
 			pr_err("WARNING: kernel not compiled with MITIGATION_IBPB_ENTRY.\n");
-		}
 		break;
 
-	case SRSO_CMD_IBPB_ON_VMEXIT:
-		if (IS_ENABLED(CONFIG_MITIGATION_SRSO)) {
-			if (!boot_cpu_has(X86_FEATURE_ENTRY_IBPB) && has_microcode) {
-				setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT);
-				srso_mitigation = SRSO_MITIGATION_IBPB_ON_VMEXIT;
+	case SRSO_MITIGATION_IBPB_ON_VMEXIT:
+		if (!IS_ENABLED(CONFIG_MITIGATION_SRSO))
+			pr_err("WARNING: kernel not compiled with MITIGATION_SRSO.\n");
+		break;
+	default:
+		break;
+	}
+}
+
+static void __init srso_update_mitigation(void)
+{
+	/* If retbleed is using IBPB, that works for SRSO as well */
+	if (retbleed_mitigation == RETBLEED_MITIGATION_IBPB)
+		srso_mitigation = SRSO_MITIGATION_IBPB;
+
+	if (srso_mitigation != SRSO_MITIGATION_NONE)
+		pr_info("%s\n", srso_strings[srso_mitigation]);
+}
 
-				/*
-				 * There is no need for RSB filling: entry_ibpb() ensures
-				 * all predictions, including the RSB, are invalidated,
-				 * regardless of IBPB implementation.
-				 */
-				setup_clear_cpu_cap(X86_FEATURE_RSB_VMEXIT);
-			}
+static void __init srso_apply_mitigation(void)
+{
+	if (srso_mitigation == SRSO_MITIGATION_NONE) {
+		if (boot_cpu_has(X86_FEATURE_SBPB))
+			x86_pred_cmd = PRED_CMD_SBPB;
+		return;
+	}
+	switch (srso_mitigation) {
+	case SRSO_MITIGATION_SAFE_RET:
+	case SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED:
+		/*
+		 * Enable the return thunk for generated code
+		 * like ftrace, static_call, etc.
+		 */
+		setup_force_cpu_cap(X86_FEATURE_RETHUNK);
+		setup_force_cpu_cap(X86_FEATURE_UNRET);
+
+		if (boot_cpu_data.x86 == 0x19) {
+			setup_force_cpu_cap(X86_FEATURE_SRSO_ALIAS);
+			x86_return_thunk = srso_alias_return_thunk;
 		} else {
-			pr_err("WARNING: kernel not compiled with MITIGATION_SRSO.\n");
-                }
+			setup_force_cpu_cap(X86_FEATURE_SRSO);
+			x86_return_thunk = srso_return_thunk;
+		}
+		break;
+	case SRSO_MITIGATION_IBPB:
+		setup_force_cpu_cap(X86_FEATURE_ENTRY_IBPB);
+		/*
+		 * IBPB on entry already obviates the need for
+		 * software-based untraining so clear those in case some
+		 * other mitigation like Retbleed has selected them.
+		 */
+		setup_clear_cpu_cap(X86_FEATURE_UNRET);
+		setup_clear_cpu_cap(X86_FEATURE_RETHUNK);
+		break;
+	case SRSO_MITIGATION_IBPB_ON_VMEXIT:
+		setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT);
+		/*
+		 * There is no need for RSB filling: entry_ibpb() ensures
+		 * all predictions, including the RSB, are invalidated,
+		 * regardless of IBPB implementation.
+		 */
+		setup_clear_cpu_cap(X86_FEATURE_RSB_VMEXIT);
 		break;
 	default:
 		break;
 	}
 
-out:
-	pr_info("%s\n", srso_strings[srso_mitigation]);
 }
 
 #undef pr_fmt
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v2 19/35] Documentation/x86: Document the new attack vector controls
  2024-11-05 21:54 [PATCH v2 00/35] x86/bugs: Attack vector controls David Kaplan
                   ` (17 preceding siblings ...)
  2024-11-05 21:54 ` [PATCH v2 18/35] x86/bugs: Restructure srso mitigation David Kaplan
@ 2024-11-05 21:54 ` David Kaplan
  2024-11-06 10:39   ` Borislav Petkov
  2024-11-13 14:15   ` Brendan Jackman
  2024-11-05 21:54 ` [PATCH v2 20/35] x86/bugs: Define attack vectors David Kaplan
                   ` (15 subsequent siblings)
  34 siblings, 2 replies; 78+ messages in thread
From: David Kaplan @ 2024-11-05 21:54 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Document the 5 new attack vector command line options, how they
interact with existing vulnerability controls, and recommendations on
when they can be disabled.

Note that while mitigating against untrusted userspace requires both
mitigate_user_kernel and mitigate_user_user, these are kept separate.
The kernel can control what code executes inside of it, and that may
affect the risk associated with vulnerabilities, especially if new
kernel mitigations are implemented.  The same isn't typically true of
userspace.

In other words, the risk associated with user_user or guest_guest
attacks is unlikely to change over time, while the risk associated with
user_kernel or guest_host attacks may change.  Therefore, these controls
are separated.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 .../hw-vuln/attack_vector_controls.rst        | 172 ++++++++++++++++++
 Documentation/admin-guide/hw-vuln/index.rst   |   1 +
 2 files changed, 173 insertions(+)
 create mode 100644 Documentation/admin-guide/hw-vuln/attack_vector_controls.rst

diff --git a/Documentation/admin-guide/hw-vuln/attack_vector_controls.rst b/Documentation/admin-guide/hw-vuln/attack_vector_controls.rst
new file mode 100644
index 000000000000..541c8a3cac13
--- /dev/null
+++ b/Documentation/admin-guide/hw-vuln/attack_vector_controls.rst
@@ -0,0 +1,172 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+Attack Vector Controls
+======================
+
+Attack vector controls provide a simple method to configure only the mitigations
+for CPU vulnerabilities which are relevant given the intended use of a system.
+Administrators are encouraged to consider which attack vectors are relevant and
+disable all others in order to recoup system performance.
+
+When new relevant CPU vulnerabilities are found, they will be added to these
+attack vector controls, so administrators will likely not need to reconfigure
+their command line parameters; mitigations will continue to be applied
+correctly based on the chosen attack vector controls.
+
+Attack Vectors
+--------------
+
+There are 5 sets of attack-vector mitigations currently supported by the kernel:
+
+#. :ref:`user_kernel` (mitigate_user_kernel=)
+#. :ref:`user_user` (mitigate_user_user=)
+#. :ref:`guest_host` (mitigate_guest_host=)
+#. :ref:`guest_guest` (mitigate_guest_guest=)
+#. :ref:`cross_thread` (mitigate_cross_thread=)
+
+Each control may either be specified as 'off' or 'on'.
+
+.. _user_kernel:
+
+User-to-Kernel
+^^^^^^^^^^^^^^
+
+The user-to-kernel attack vector involves a malicious userspace program
+attempting to leak kernel data into userspace by exploiting a CPU vulnerability.
+The kernel data involved might be limited to certain kernel memory, or include
+all memory in the system, depending on the vulnerability exploited.
+
+If no untrusted userspace applications are being run, such as with single-user
+systems, consider disabling user-to-kernel mitigations.
+
+Note that the CPU vulnerabilities mitigated by Linux have generally not been
+shown to be exploitable from browser-based sandboxes.  User-to-kernel
+mitigations are therefore mostly relevant if unknown userspace applications may
+be run by untrusted users.
+
+*mitigate_user_kernel defaults to 'on'*
+
+.. _user_user:
+
+User-to-User
+^^^^^^^^^^^^
+
+The user-to-user attack vector involves a malicious userspace program attempting
+to influence the behavior of another unsuspecting userspace program in order to
+exfiltrate data.  The vulnerability of a userspace program is based on the
+program itself and the interfaces it provides.
+
+If no untrusted userspace applications are being run, consider disabling
+user-to-user mitigations.
+
+Note that because the Linux kernel contains a mapping of all physical memory,
+preventing a malicious userspace program from leaking data from another
+userspace program requires mitigating user-to-kernel attacks as well for
+complete protection.
+
+*mitigate_user_user defaults to 'on'*
+
+.. _guest_host:
+
+Guest-to-Host
+^^^^^^^^^^^^^
+
+The guest-to-host attack vector involves a malicious VM attempting to leak
+hypervisor data into the VM.  The data involved may be limited, or may
+potentially include all memory in the system, depending on the vulnerability
+exploited.
+
+If no untrusted VMs are being run, consider disabling guest-to-host mitigations.
+
+*mitigate_guest_host defaults to 'on' if KVM support is present*
+
+.. _guest_guest:
+
+Guest-to-Guest
+^^^^^^^^^^^^^^
+
+The guest-to-guest attack vector involves a malicious VM attempting to influence
+the behavior of another unsuspecting VM in order to exfiltrate data.  The
+vulnerability of a VM is based on the code inside the VM itself and the
+interfaces it provides.
+
+If no untrusted VMs are being run, or only a single VM is run, consider
+disabling guest-to-guest mitigations.
+
+Similar to the user-to-user attack vector, preventing a malicious VM from
+leaking data from another VM requires mitigating guest-to-host attacks as well
+due to the Linux kernel phys map.
+
+*mitigate_guest_guest defaults to 'on' if KVM support is present*
+
+.. _cross_thread:
+
+Cross-Thread
+^^^^^^^^^^^^
+
+The cross-thread attack vector involves a malicious userspace program or
+malicious VM either observing or attempting to influence the behavior of code
+running on the SMT sibling thread in order to exfiltrate data.
+
+Many cross-thread attacks can only be mitigated if SMT is disabled, which will
+result in reduced CPU core count and reduced performance.  Enabling mitigations
+for the cross-thread attack vector may result in SMT being disabled, depending
+on the CPU vulnerabilities detected.
+
+*mitigate_cross_thread defaults to 'off'*
+
+Interactions with command-line options
+--------------------------------------
+
+The global 'mitigations=off' command line option takes precedence over all
+attack vector controls and will disable all mitigations.
+
+Vulnerability-specific controls (e.g. "retbleed=off") take precedence over all
+attack vector controls.  Mitigations for individual vulnerabilities may be
+turned on or off via their command-line options regardless of the attack vector
+controls.
+
+Summary of attack-vector mitigations
+------------------------------------
+
+When a vulnerability is mitigated due to an attack-vector control, the default
+mitigation option for that particular vulnerability is used.  To use a different
+mitigation, please use the vulnerability-specific command line option.
+
+The table below summarizes which vulnerabilities are mitigated when different
+attack vectors are enabled and assuming the CPU is vulnerable.
+
+=============== ============== ============ ============= ============== ============
+Vulnerability   User-to-Kernel User-to-User Guest-to-Host Guest-to-Guest Cross-Thread
+=============== ============== ============ ============= ============== ============
+BHI                   X                           X
+GDS                   X              X            X              X            X
+L1TF                                              X                       (Note 1)
+MDS                   X              X            X              X        (Note 1)
+MMIO                  X              X            X              X        (Note 1)
+Meltdown              X
+Retbleed              X                           X                       (Note 2)
+RFDS                  X              X            X              X
+Spectre_v1            X
+Spectre_v2            X                           X
+Spectre_v2_user                      X                           X
+SRBDS                 X              X            X              X
+SRSO                  X                           X
+SSB (Note 3)
+TAA                   X              X            X              X        (Note 1)
+=============== ============== ============ ============= ============== ============
+
+Notes:
+   1 --  Disables SMT if cross-thread mitigations are selected and CPU is vulnerable
+
+   2 --  Disables SMT if cross-thread mitigations are selected, CPU is vulnerable,
+   and STIBP is not supported
+
+   3 --  Speculative store bypass is always enabled by default (no kernel
+   mitigation applied) unless overridden with spec_store_bypass_disable option
+
+When an attack vector is disabled (e.g., *mitigate_user_kernel=off*), all
+mitigations for the vulnerabilities listed in the above table are disabled,
+unless a mitigation is required for a different enabled attack vector or a
+mitigation is explicitly selected via a vulnerability-specific command line
+option.
diff --git a/Documentation/admin-guide/hw-vuln/index.rst b/Documentation/admin-guide/hw-vuln/index.rst
index ff0b440ef2dc..1add4a0baeb0 100644
--- a/Documentation/admin-guide/hw-vuln/index.rst
+++ b/Documentation/admin-guide/hw-vuln/index.rst
@@ -9,6 +9,7 @@ are configurable at compile, boot or run time.
 .. toctree::
    :maxdepth: 1
 
+   attack_vector_controls
    spectre
    l1tf
    mds
-- 
2.34.1



* [PATCH v2 20/35] x86/bugs: Define attack vectors
  2024-11-05 21:54 [PATCH v2 00/35] x86/bugs: Attack vector controls David Kaplan
                   ` (18 preceding siblings ...)
  2024-11-05 21:54 ` [PATCH v2 19/35] Documentation/x86: Document the new attack vector controls David Kaplan
@ 2024-11-05 21:54 ` David Kaplan
  2025-01-03 15:19   ` Borislav Petkov
  2024-11-05 21:54 ` [PATCH v2 21/35] x86/bugs: Determine relevant vulnerabilities based on attack vector controls David Kaplan
                   ` (14 subsequent siblings)
  34 siblings, 1 reply; 78+ messages in thread
From: David Kaplan @ 2024-11-05 21:54 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Define 5 new attack vectors that are used for controlling CPU
speculation mitigations and associated command line options.  Each
attack vector may be enabled or disabled, which affects the CPU
mitigations enabled.

The default settings for these attack vectors are consistent with
existing kernel defaults, other than the automatic disabling of VM-based
attack vectors if KVM support is not present.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 include/linux/cpu.h | 11 +++++++++
 kernel/cpu.c        | 58 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 69 insertions(+)

diff --git a/include/linux/cpu.h b/include/linux/cpu.h
index bdcec1732445..b25566e1fb04 100644
--- a/include/linux/cpu.h
+++ b/include/linux/cpu.h
@@ -189,6 +189,17 @@ void cpuhp_report_idle_dead(void);
 static inline void cpuhp_report_idle_dead(void) { }
 #endif /* #ifdef CONFIG_HOTPLUG_CPU */
 
+enum cpu_attack_vectors {
+	CPU_MITIGATE_USER_KERNEL,
+	CPU_MITIGATE_USER_USER,
+	CPU_MITIGATE_GUEST_HOST,
+	CPU_MITIGATE_GUEST_GUEST,
+	CPU_MITIGATE_CROSS_THREAD,
+	NR_CPU_ATTACK_VECTORS,
+};
+
+bool cpu_mitigate_attack_vector(enum cpu_attack_vectors v);
+
 #ifdef CONFIG_CPU_MITIGATIONS
 extern bool cpu_mitigations_off(void);
 extern bool cpu_mitigations_auto_nosmt(void);
diff --git a/kernel/cpu.c b/kernel/cpu.c
index d0699e47178b..841bcffee5d3 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -3200,6 +3200,22 @@ enum cpu_mitigations {
 
 static enum cpu_mitigations cpu_mitigations __ro_after_init = CPU_MITIGATIONS_AUTO;
 
+/*
+ * All except the cross-thread attack vector are mitigated by default.
+ * Cross-thread mitigation often requires disabling SMT which is too expensive
+ * to be enabled by default.
+ *
+ * Guest-to-Host and Guest-to-Guest vectors are only needed if KVM support is
+ * present.
+ */
+static bool cpu_mitigate_attack_vectors[NR_CPU_ATTACK_VECTORS] __ro_after_init = {
+	[CPU_MITIGATE_USER_KERNEL] = true,
+	[CPU_MITIGATE_USER_USER] = true,
+	[CPU_MITIGATE_GUEST_HOST] = IS_ENABLED(CONFIG_KVM),
+	[CPU_MITIGATE_GUEST_GUEST] = IS_ENABLED(CONFIG_KVM),
+	[CPU_MITIGATE_CROSS_THREAD] = false
+};
+
 static int __init mitigations_parse_cmdline(char *arg)
 {
 	if (!strcmp(arg, "off"))
@@ -3228,11 +3244,53 @@ bool cpu_mitigations_auto_nosmt(void)
 	return cpu_mitigations == CPU_MITIGATIONS_AUTO_NOSMT;
 }
 EXPORT_SYMBOL_GPL(cpu_mitigations_auto_nosmt);
+
+#define DEFINE_ATTACK_VECTOR(opt, v) \
+static int __init v##_parse_cmdline(char *arg) \
+{ \
+	if (!strcmp(arg, "off")) \
+		cpu_mitigate_attack_vectors[v] = false; \
+	else if (!strcmp(arg, "on")) \
+		cpu_mitigate_attack_vectors[v] = true; \
+	else \
+		pr_warn("Unsupported " opt "=%s\n", arg); \
+	return 0; \
+} \
+early_param(opt, v##_parse_cmdline)
+
+bool cpu_mitigate_attack_vector(enum cpu_attack_vectors v)
+{
+	BUG_ON(v >= NR_CPU_ATTACK_VECTORS);
+	return cpu_mitigate_attack_vectors[v];
+}
+EXPORT_SYMBOL_GPL(cpu_mitigate_attack_vector);
+
 #else
 static int __init mitigations_parse_cmdline(char *arg)
 {
 	pr_crit("Kernel compiled without mitigations, ignoring 'mitigations'; system may still be vulnerable\n");
 	return 0;
 }
+
+#define DEFINE_ATTACK_VECTOR(opt, v) \
+static int __init v##_parse_cmdline(char *arg) \
+{ \
+	pr_crit("Kernel compiled without mitigations, ignoring %s; system may still be vulnerable\n", opt); \
+	return 0; \
+} \
+early_param(opt, v##_parse_cmdline)
+
+bool cpu_mitigate_attack_vector(enum cpu_attack_vectors v)
+{
+	return false;
+}
+EXPORT_SYMBOL_GPL(cpu_mitigate_attack_vector);
+
 #endif
 early_param("mitigations", mitigations_parse_cmdline);
+
+DEFINE_ATTACK_VECTOR("mitigate_user_kernel", CPU_MITIGATE_USER_KERNEL);
+DEFINE_ATTACK_VECTOR("mitigate_user_user", CPU_MITIGATE_USER_USER);
+DEFINE_ATTACK_VECTOR("mitigate_guest_host", CPU_MITIGATE_GUEST_HOST);
+DEFINE_ATTACK_VECTOR("mitigate_guest_guest", CPU_MITIGATE_GUEST_GUEST);
+DEFINE_ATTACK_VECTOR("mitigate_cross_thread", CPU_MITIGATE_CROSS_THREAD);
-- 
2.34.1



* [PATCH v2 21/35] x86/bugs: Determine relevant vulnerabilities based on attack vector controls
  2024-11-05 21:54 [PATCH v2 00/35] x86/bugs: Attack vector controls David Kaplan
                   ` (19 preceding siblings ...)
  2024-11-05 21:54 ` [PATCH v2 20/35] x86/bugs: Define attack vectors David Kaplan
@ 2024-11-05 21:54 ` David Kaplan
  2024-11-05 21:54 ` [PATCH v2 22/35] x86/bugs: Add attack vector controls for mds David Kaplan
                   ` (13 subsequent siblings)
  34 siblings, 0 replies; 78+ messages in thread
From: David Kaplan @ 2024-11-05 21:54 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

The function should_mitigate_vuln() defines which vulnerabilities should
be mitigated based on the selected attack vector controls.  The
selections here are based on the individual characteristics of each
vulnerability.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 69 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 69 insertions(+)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 178415d8026a..6a5996d3b324 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -287,6 +287,75 @@ static void x86_amd_ssb_disable(void)
 		wrmsrl(MSR_AMD64_LS_CFG, msrval);
 }
 
+/*
+ * Returns true if vulnerability should be mitigated based on the
+ * selected attack vector controls
+ *
+ * See Documentation/admin-guide/hw-vuln/attack_vector_controls.rst
+ */
+static bool __init should_mitigate_vuln(unsigned int bug)
+{
+	switch (bug) {
+	/*
+	 * The only spectre_v1 mitigations in the kernel are related to
+	 * SWAPGS protection on kernel entry.  Therefore, protection is
+	 * only required for the user->kernel attack vector.
+	 */
+	case X86_BUG_SPECTRE_V1:
+		return cpu_mitigate_attack_vector(CPU_MITIGATE_USER_KERNEL);
+
+	/*
+	 * Both spectre_v2 and srso may allow user->kernel or
+	 * guest->host attacks through branch predictor manipulation.
+	 */
+	case X86_BUG_SPECTRE_V2:
+	case X86_BUG_SRSO:
+		return cpu_mitigate_attack_vector(CPU_MITIGATE_USER_KERNEL) ||
+			cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_HOST);
+
+	/*
+	 * spectre_v2_user refers to user->user or guest->guest branch
+	 * predictor attacks only.  Other indirect branch predictor attacks
+	 * are covered by the spectre_v2 vulnerability.
+	 */
+	case X86_BUG_SPECTRE_V2_USER:
+		return cpu_mitigate_attack_vector(CPU_MITIGATE_USER_USER) ||
+			cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_GUEST);
+
+	/* L1TF is only possible as a guest->host attack */
+	case X86_BUG_L1TF:
+		return cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_HOST);
+
+	/*
+	 * All the vulnerabilities below allow potentially leaking data
+	 * across address spaces.  Therefore, mitigation is required for
+	 * any of these 4 attack vectors.
+	 */
+	case X86_BUG_MDS:
+	case X86_BUG_TAA:
+	case X86_BUG_MMIO_STALE_DATA:
+	case X86_BUG_RFDS:
+	case X86_BUG_SRBDS:
+		return cpu_mitigate_attack_vector(CPU_MITIGATE_USER_KERNEL) ||
+			cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_HOST) ||
+			cpu_mitigate_attack_vector(CPU_MITIGATE_USER_USER) ||
+			cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_GUEST);
+	/*
+	 * GDS can potentially leak data across address spaces and
+	 * threads.  Mitigation is required under all attack vectors.
+	 */
+	case X86_BUG_GDS:
+		return cpu_mitigate_attack_vector(CPU_MITIGATE_USER_KERNEL) ||
+			cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_HOST) ||
+			cpu_mitigate_attack_vector(CPU_MITIGATE_USER_USER) ||
+			cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_GUEST) ||
+			cpu_mitigate_attack_vector(CPU_MITIGATE_CROSS_THREAD);
+	default:
+		return false;
+	}
+}
+
+
 /* Default mitigation for MDS-affected CPUs */
 static enum mds_mitigations mds_mitigation __ro_after_init =
 	IS_ENABLED(CONFIG_MITIGATION_MDS) ? MDS_MITIGATION_AUTO : MDS_MITIGATION_OFF;
-- 
2.34.1



* [PATCH v2 22/35] x86/bugs: Add attack vector controls for mds
  2024-11-05 21:54 [PATCH v2 00/35] x86/bugs: Attack vector controls David Kaplan
                   ` (20 preceding siblings ...)
  2024-11-05 21:54 ` [PATCH v2 21/35] x86/bugs: Determine relevant vulnerabilities based on attack vector controls David Kaplan
@ 2024-11-05 21:54 ` David Kaplan
  2024-11-05 21:54 ` [PATCH v2 23/35] x86/bugs: Add attack vector controls for taa David Kaplan
                   ` (12 subsequent siblings)
  34 siblings, 0 replies; 78+ messages in thread
From: David Kaplan @ 2024-11-05 21:54 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Use attack vector controls to determine if mds mitigation is required.

If cross-thread attack mitigations are required, disable SMT.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 6a5996d3b324..aa916e1af0b9 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -415,8 +415,12 @@ static void __init mds_select_mitigation(void)
 	if (!boot_cpu_has_bug(X86_BUG_MDS) || cpu_mitigations_off())
 		mds_mitigation = MDS_MITIGATION_OFF;
 
-	if (mds_mitigation == MDS_MITIGATION_AUTO)
-		mds_mitigation = MDS_MITIGATION_FULL;
+	if (mds_mitigation == MDS_MITIGATION_AUTO) {
+		if (should_mitigate_vuln(X86_BUG_MDS))
+			mds_mitigation = MDS_MITIGATION_FULL;
+		else
+			mds_mitigation = MDS_MITIGATION_OFF;
+	}
 
 	if (mds_mitigation == MDS_MITIGATION_FULL) {
 		if (!boot_cpu_has(X86_FEATURE_MD_CLEAR))
@@ -445,7 +449,8 @@ static void __init mds_apply_mitigation(void)
 	if (mds_mitigation == MDS_MITIGATION_FULL) {
 		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
 		if (!boot_cpu_has(X86_BUG_MSBDS_ONLY) &&
-		    (mds_nosmt || cpu_mitigations_auto_nosmt()))
+		    (mds_nosmt || cpu_mitigations_auto_nosmt() ||
+		     cpu_mitigate_attack_vector(CPU_MITIGATE_CROSS_THREAD)))
 			cpu_smt_disable(false);
 	}
 }
-- 
2.34.1



* [PATCH v2 23/35] x86/bugs: Add attack vector controls for taa
  2024-11-05 21:54 [PATCH v2 00/35] x86/bugs: Attack vector controls David Kaplan
                   ` (21 preceding siblings ...)
  2024-11-05 21:54 ` [PATCH v2 22/35] x86/bugs: Add attack vector controls for mds David Kaplan
@ 2024-11-05 21:54 ` David Kaplan
  2024-11-05 21:54 ` [PATCH v2 24/35] x86/bugs: Add attack vector controls for mmio David Kaplan
                   ` (11 subsequent siblings)
  34 siblings, 0 replies; 78+ messages in thread
From: David Kaplan @ 2024-11-05 21:54 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Use attack vector controls to determine if taa mitigation is required.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index aa916e1af0b9..431182a0ecc5 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -510,11 +510,17 @@ static void __init taa_select_mitigation(void)
 	if (taa_mitigation == TAA_MITIGATION_OFF)
 		return;
 
-	/* This handles the AUTO case. */
-	if (boot_cpu_has(X86_FEATURE_MD_CLEAR))
-		taa_mitigation = TAA_MITIGATION_VERW;
-	else
-		taa_mitigation = TAA_MITIGATION_UCODE_NEEDED;
+	if (taa_mitigation == TAA_MITIGATION_AUTO) {
+		if (should_mitigate_vuln(X86_BUG_TAA)) {
+			if (boot_cpu_has(X86_FEATURE_MD_CLEAR))
+				taa_mitigation = TAA_MITIGATION_VERW;
+			else
+				taa_mitigation = TAA_MITIGATION_UCODE_NEEDED;
+		} else {
+			taa_mitigation = TAA_MITIGATION_OFF;
+			return;
+		}
+	}
 
 	/*
 	 * VERW doesn't clear the CPU buffers when MD_CLEAR=1 and MDS_NO=1.
@@ -555,7 +561,8 @@ static void __init taa_apply_mitigation(void)
 		 */
 		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
 
-		if (taa_nosmt || cpu_mitigations_auto_nosmt())
+		if (taa_nosmt || cpu_mitigations_auto_nosmt() ||
+		    cpu_mitigate_attack_vector(CPU_MITIGATE_CROSS_THREAD))
 			cpu_smt_disable(false);
 	}
 
-- 
2.34.1



* [PATCH v2 24/35] x86/bugs: Add attack vector controls for mmio
  2024-11-05 21:54 [PATCH v2 00/35] x86/bugs: Attack vector controls David Kaplan
                   ` (22 preceding siblings ...)
  2024-11-05 21:54 ` [PATCH v2 23/35] x86/bugs: Add attack vector controls for taa David Kaplan
@ 2024-11-05 21:54 ` David Kaplan
  2024-11-05 21:54 ` [PATCH v2 25/35] x86/bugs: Add attack vector controls for rfds David Kaplan
                   ` (10 subsequent siblings)
  34 siblings, 0 replies; 78+ messages in thread
From: David Kaplan @ 2024-11-05 21:54 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Use attack vector controls to determine if mmio mitigation is required.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 37 ++++++++++++++++++++++---------------
 1 file changed, 22 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 431182a0ecc5..ab1a8ae31588 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -612,20 +612,26 @@ static void __init mmio_select_mitigation(void)
 	if (mmio_mitigation == MMIO_MITIGATION_OFF)
 		return;
 
-	/*
-	 * Check if the system has the right microcode.
-	 *
-	 * CPU Fill buffer clear mitigation is enumerated by either an explicit
-	 * FB_CLEAR or by the presence of both MD_CLEAR and L1D_FLUSH on MDS
-	 * affected systems.
-	 */
-	if ((x86_arch_cap_msr & ARCH_CAP_FB_CLEAR) ||
-	    (boot_cpu_has(X86_FEATURE_MD_CLEAR) &&
-	     boot_cpu_has(X86_FEATURE_FLUSH_L1D) &&
-	     !(x86_arch_cap_msr & ARCH_CAP_MDS_NO)))
-		mmio_mitigation = MMIO_MITIGATION_VERW;
-	else
-		mmio_mitigation = MMIO_MITIGATION_UCODE_NEEDED;
+	if (mmio_mitigation == MMIO_MITIGATION_AUTO) {
+		if (should_mitigate_vuln(X86_BUG_MMIO_STALE_DATA)) {
+			/*
+			 * Check if the system has the right microcode.
+			 *
+			 * CPU Fill buffer clear mitigation is enumerated by either an explicit
+			 * FB_CLEAR or by the presence of both MD_CLEAR and L1D_FLUSH on MDS
+			 * affected systems.
+			 */
+			if ((x86_arch_cap_msr & ARCH_CAP_FB_CLEAR) ||
+			    (boot_cpu_has(X86_FEATURE_MD_CLEAR) &&
+			     boot_cpu_has(X86_FEATURE_FLUSH_L1D) &&
+			     !(x86_arch_cap_msr & ARCH_CAP_MDS_NO)))
+				mmio_mitigation = MMIO_MITIGATION_VERW;
+			else
+				mmio_mitigation = MMIO_MITIGATION_UCODE_NEEDED;
+		} else {
+			mmio_mitigation = MMIO_MITIGATION_OFF;
+		}
+	}
 }
 
 static void __init mmio_update_mitigation(void)
@@ -671,7 +677,8 @@ static void __init mmio_apply_mitigation(void)
 	if (!(x86_arch_cap_msr & ARCH_CAP_FBSDP_NO))
 		static_branch_enable(&mds_idle_clear);
 
-	if (mmio_nosmt || cpu_mitigations_auto_nosmt())
+	if (mmio_nosmt || cpu_mitigations_auto_nosmt() ||
+	    cpu_mitigate_attack_vector(CPU_MITIGATE_CROSS_THREAD))
 		cpu_smt_disable(false);
 }
 
-- 
2.34.1



* [PATCH v2 25/35] x86/bugs: Add attack vector controls for rfds
  2024-11-05 21:54 [PATCH v2 00/35] x86/bugs: Attack vector controls David Kaplan
                   ` (23 preceding siblings ...)
  2024-11-05 21:54 ` [PATCH v2 24/35] x86/bugs: Add attack vector controls for mmio David Kaplan
@ 2024-11-05 21:54 ` David Kaplan
  2024-11-05 21:54 ` [PATCH v2 26/35] x86/bugs: Add attack vector controls for srbds David Kaplan
                   ` (9 subsequent siblings)
  34 siblings, 0 replies; 78+ messages in thread
From: David Kaplan @ 2024-11-05 21:54 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Use attack vector controls to determine if rfds mitigation is required.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index ab1a8ae31588..ecf7046673b9 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -718,8 +718,14 @@ static void __init rfds_select_mitigation(void)
 	if (rfds_mitigation == RFDS_MITIGATION_OFF)
 		return;
 
-	if (rfds_mitigation == RFDS_MITIGATION_AUTO)
-		rfds_mitigation = RFDS_MITIGATION_VERW;
+	if (rfds_mitigation == RFDS_MITIGATION_AUTO) {
+		if (should_mitigate_vuln(X86_BUG_RFDS))
+			rfds_mitigation = RFDS_MITIGATION_VERW;
+		else {
+			rfds_mitigation = RFDS_MITIGATION_OFF;
+			return;
+		}
+	}
 
 	if (!(x86_arch_cap_msr & ARCH_CAP_RFDS_CLEAR))
 		rfds_mitigation = RFDS_MITIGATION_UCODE_NEEDED;
-- 
2.34.1



* [PATCH v2 26/35] x86/bugs: Add attack vector controls for srbds
  2024-11-05 21:54 [PATCH v2 00/35] x86/bugs: Attack vector controls David Kaplan
                   ` (24 preceding siblings ...)
  2024-11-05 21:54 ` [PATCH v2 25/35] x86/bugs: Add attack vector controls for rfds David Kaplan
@ 2024-11-05 21:54 ` David Kaplan
  2024-11-05 21:54 ` [PATCH v2 27/35] x86/bugs: Add attack vector controls for gds David Kaplan
                   ` (8 subsequent siblings)
  34 siblings, 0 replies; 78+ messages in thread
From: David Kaplan @ 2024-11-05 21:54 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Use attack vector controls to determine if srbds mitigation is required.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index ecf7046673b9..083452942264 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -834,8 +834,14 @@ static void __init srbds_select_mitigation(void)
 	if (!boot_cpu_has_bug(X86_BUG_SRBDS))
 		return;
 
-	if (srbds_mitigation == SRBDS_MITIGATION_AUTO)
-		srbds_mitigation = SRBDS_MITIGATION_FULL;
+	if (srbds_mitigation == SRBDS_MITIGATION_AUTO) {
+		if (should_mitigate_vuln(X86_BUG_SRBDS))
+			srbds_mitigation = SRBDS_MITIGATION_FULL;
+		else {
+			srbds_mitigation = SRBDS_MITIGATION_OFF;
+			return;
+		}
+	}
 
 	/*
 	 * Check to see if this is one of the MDS_NO systems supporting TSX that
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v2 27/35] x86/bugs: Add attack vector controls for gds
  2024-11-05 21:54 [PATCH v2 00/35] x86/bugs: Attack vector controls David Kaplan
                   ` (25 preceding siblings ...)
  2024-11-05 21:54 ` [PATCH v2 26/35] x86/bugs: Add attack vector controls for srbds David Kaplan
@ 2024-11-05 21:54 ` David Kaplan
  2024-11-05 21:54 ` [PATCH v2 28/35] x86/bugs: Add attack vector controls for spectre_v1 David Kaplan
                   ` (7 subsequent siblings)
  34 siblings, 0 replies; 78+ messages in thread
From: David Kaplan @ 2024-11-05 21:54 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Use attack vector controls to determine if gds mitigation is required.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 083452942264..8612be5445ba 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -995,8 +995,14 @@ static void __init gds_select_mitigation(void)
 		gds_mitigation = GDS_MITIGATION_OFF;
 	/* Will verify below that mitigation _can_ be disabled */
 
-	if (gds_mitigation == GDS_MITIGATION_AUTO)
-		gds_mitigation = GDS_MITIGATION_FULL;
+	if (gds_mitigation == GDS_MITIGATION_AUTO) {
+		if (should_mitigate_vuln(X86_BUG_GDS))
+			gds_mitigation = GDS_MITIGATION_FULL;
+		else {
+			gds_mitigation = GDS_MITIGATION_OFF;
+			return;
+		}
+	}
 
 	/* No microcode */
 	if (!(x86_arch_cap_msr & ARCH_CAP_GDS_CTRL)) {
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v2 28/35] x86/bugs: Add attack vector controls for spectre_v1
  2024-11-05 21:54 [PATCH v2 00/35] x86/bugs: Attack vector controls David Kaplan
                   ` (26 preceding siblings ...)
  2024-11-05 21:54 ` [PATCH v2 27/35] x86/bugs: Add attack vector controls for gds David Kaplan
@ 2024-11-05 21:54 ` David Kaplan
  2024-11-05 21:54 ` [PATCH v2 29/35] x86/bugs: Add attack vector controls for retbleed David Kaplan
                   ` (6 subsequent siblings)
  34 siblings, 0 replies; 78+ messages in thread
From: David Kaplan @ 2024-11-05 21:54 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Use attack vector controls to determine if spectre_v1 mitigation is
required.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 8612be5445ba..f63fa8a3b9ee 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1108,6 +1108,9 @@ static void __init spectre_v1_select_mitigation(void)
 {
 	if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1) || cpu_mitigations_off())
 		spectre_v1_mitigation = SPECTRE_V1_MITIGATION_NONE;
+
+	if (!should_mitigate_vuln(X86_BUG_SPECTRE_V1))
+		spectre_v1_mitigation = SPECTRE_V1_MITIGATION_NONE;
 }
 
 static void __init spectre_v1_apply_mitigation(void)
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v2 29/35] x86/bugs: Add attack vector controls for retbleed
  2024-11-05 21:54 [PATCH v2 00/35] x86/bugs: Attack vector controls David Kaplan
                   ` (27 preceding siblings ...)
  2024-11-05 21:54 ` [PATCH v2 28/35] x86/bugs: Add attack vector controls for spectre_v1 David Kaplan
@ 2024-11-05 21:54 ` David Kaplan
  2024-11-05 21:54 ` [PATCH v2 30/35] x86/bugs: Add attack vector controls for spectre_v2_user David Kaplan
                   ` (5 subsequent siblings)
  34 siblings, 0 replies; 78+ messages in thread
From: David Kaplan @ 2024-11-05 21:54 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Use attack vector controls to determine if retbleed mitigation is
required.

Disable SMT if cross-thread protection is desired and STIBP is not
available.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 21 +++++++++++++--------
 1 file changed, 13 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index f63fa8a3b9ee..545151114947 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1267,13 +1267,17 @@ static void __init retbleed_select_mitigation(void)
 	}
 
 	if (retbleed_mitigation == RETBLEED_MITIGATION_AUTO) {
-		if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
-		    boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) {
-			if (IS_ENABLED(CONFIG_MITIGATION_UNRET_ENTRY))
-				retbleed_mitigation = RETBLEED_MITIGATION_UNRET;
-			else if (IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY) &&
-				 boot_cpu_has(X86_FEATURE_IBPB))
-				retbleed_mitigation = RETBLEED_MITIGATION_IBPB;
+		if (should_mitigate_vuln(X86_BUG_RETBLEED)) {
+			if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
+			    boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) {
+				if (IS_ENABLED(CONFIG_MITIGATION_UNRET_ENTRY))
+					retbleed_mitigation = RETBLEED_MITIGATION_UNRET;
+				else if (IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY) &&
+					 boot_cpu_has(X86_FEATURE_IBPB))
+					retbleed_mitigation = RETBLEED_MITIGATION_IBPB;
+			}
+		} else {
+			retbleed_mitigation = RETBLEED_MITIGATION_NONE;
 		}
 	}
 }
@@ -1372,7 +1376,8 @@ static void __init retbleed_apply_mitigation(void)
 	}
 
 	if (mitigate_smt && !boot_cpu_has(X86_FEATURE_STIBP) &&
-	    (retbleed_nosmt || cpu_mitigations_auto_nosmt()))
+	    (retbleed_nosmt || cpu_mitigations_auto_nosmt() ||
+	     cpu_mitigate_attack_vector(CPU_MITIGATE_CROSS_THREAD)))
 		cpu_smt_disable(false);
 
 }
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v2 30/35] x86/bugs: Add attack vector controls for spectre_v2_user
  2024-11-05 21:54 [PATCH v2 00/35] x86/bugs: Attack vector controls David Kaplan
                   ` (28 preceding siblings ...)
  2024-11-05 21:54 ` [PATCH v2 29/35] x86/bugs: Add attack vector controls for retbleed David Kaplan
@ 2024-11-05 21:54 ` David Kaplan
  2024-11-05 21:54 ` [PATCH v2 31/35] x86/bugs: Add attack vector controls for bhi David Kaplan
                   ` (4 subsequent siblings)
  34 siblings, 0 replies; 78+ messages in thread
From: David Kaplan @ 2024-11-05 21:54 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Use attack vector controls to determine if spectre_v2_user mitigation is
required.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 545151114947..6479c800e973 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1548,6 +1548,13 @@ spectre_v2_user_select_mitigation(void)
 		spectre_v2_user_stibp = SPECTRE_V2_USER_STRICT;
 		break;
 	case SPECTRE_V2_USER_CMD_AUTO:
+		if (should_mitigate_vuln(X86_BUG_SPECTRE_V2_USER)) {
+			spectre_v2_user_ibpb = SPECTRE_V2_USER_PRCTL;
+			spectre_v2_user_stibp = SPECTRE_V2_USER_PRCTL;
+		} else {
+			return;
+		}
+		break;
 	case SPECTRE_V2_USER_CMD_PRCTL:
 		spectre_v2_user_ibpb = SPECTRE_V2_USER_PRCTL;
 		spectre_v2_user_stibp = SPECTRE_V2_USER_PRCTL;
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v2 31/35] x86/bugs: Add attack vector controls for bhi
  2024-11-05 21:54 [PATCH v2 00/35] x86/bugs: Attack vector controls David Kaplan
                   ` (29 preceding siblings ...)
  2024-11-05 21:54 ` [PATCH v2 30/35] x86/bugs: Add attack vector controls for spectre_v2_user David Kaplan
@ 2024-11-05 21:54 ` David Kaplan
  2024-11-05 21:54 ` [PATCH v2 32/35] x86/bugs: Add attack vector controls for spectre_v2 David Kaplan
                   ` (3 subsequent siblings)
  34 siblings, 0 replies; 78+ messages in thread
From: David Kaplan @ 2024-11-05 21:54 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

There are two BHI mitigations, one for SYSCALL and one for VMEXIT.
Split these up so they can be selected individually based on attack
vector.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 38 ++++++++++++++++++++++++++------------
 1 file changed, 26 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 6479c800e973..cc5248cdfe6f 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1876,8 +1876,9 @@ static bool __init spec_ctrl_bhi_dis(void)
 enum bhi_mitigations {
 	BHI_MITIGATION_OFF,
 	BHI_MITIGATION_AUTO,
-	BHI_MITIGATION_ON,
-	BHI_MITIGATION_VMEXIT_ONLY,
+	BHI_MITIGATION_FULL,
+	BHI_MITIGATION_VMEXIT,
+	BHI_MITIGATION_SYSCALL
 };
 
 static enum bhi_mitigations bhi_mitigation __ro_after_init =
@@ -1891,9 +1892,9 @@ static int __init spectre_bhi_parse_cmdline(char *str)
 	if (!strcmp(str, "off"))
 		bhi_mitigation = BHI_MITIGATION_OFF;
 	else if (!strcmp(str, "on"))
-		bhi_mitigation = BHI_MITIGATION_ON;
+		bhi_mitigation = BHI_MITIGATION_FULL;
 	else if (!strcmp(str, "vmexit"))
-		bhi_mitigation = BHI_MITIGATION_VMEXIT_ONLY;
+		bhi_mitigation = BHI_MITIGATION_VMEXIT;
 	else
 		pr_err("Ignoring unknown spectre_bhi option (%s)", str);
 
@@ -1909,8 +1910,17 @@ static void __init bhi_select_mitigation(void)
 	if (bhi_mitigation == BHI_MITIGATION_OFF)
 		return;
 
-	if (bhi_mitigation == BHI_MITIGATION_AUTO)
-		bhi_mitigation = BHI_MITIGATION_ON;
+	if (bhi_mitigation == BHI_MITIGATION_AUTO) {
+		if (cpu_mitigate_attack_vector(CPU_MITIGATE_USER_KERNEL)) {
+			if (cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_HOST))
+				bhi_mitigation = BHI_MITIGATION_FULL;
+			else
+				bhi_mitigation = BHI_MITIGATION_SYSCALL;
+		} else if (cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_HOST))
+			bhi_mitigation = BHI_MITIGATION_VMEXIT;
+		else
+			bhi_mitigation = BHI_MITIGATION_OFF;
+	}
 }
 
 static void __init bhi_apply_mitigation(void)
@@ -1933,15 +1943,19 @@ static void __init bhi_apply_mitigation(void)
 	if (!IS_ENABLED(CONFIG_X86_64))
 		return;
 
-	if (bhi_mitigation == BHI_MITIGATION_VMEXIT_ONLY) {
-		pr_info("Spectre BHI mitigation: SW BHB clearing on VM exit only\n");
+	/* Mitigate KVM if guest->host protection is desired */
+	if (bhi_mitigation == BHI_MITIGATION_FULL ||
+	    bhi_mitigation == BHI_MITIGATION_VMEXIT) {
 		setup_force_cpu_cap(X86_FEATURE_CLEAR_BHB_LOOP_ON_VMEXIT);
-		return;
+		pr_info("Spectre BHI mitigation: SW BHB clearing on VM exit\n");
 	}
 
-	pr_info("Spectre BHI mitigation: SW BHB clearing on syscall and VM exit\n");
-	setup_force_cpu_cap(X86_FEATURE_CLEAR_BHB_LOOP);
-	setup_force_cpu_cap(X86_FEATURE_CLEAR_BHB_LOOP_ON_VMEXIT);
+	/* Mitigate syscalls if user->kernel protection is desired */
+	if (bhi_mitigation == BHI_MITIGATION_FULL ||
+	    bhi_mitigation == BHI_MITIGATION_SYSCALL) {
+		setup_force_cpu_cap(X86_FEATURE_CLEAR_BHB_LOOP);
+		pr_info("Spectre BHI mitigation: SW BHB clearing on syscall\n");
+	}
 }
 
 static void __init spectre_v2_select_mitigation(void)
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v2 32/35] x86/bugs: Add attack vector controls for spectre_v2
  2024-11-05 21:54 [PATCH v2 00/35] x86/bugs: Attack vector controls David Kaplan
                   ` (30 preceding siblings ...)
  2024-11-05 21:54 ` [PATCH v2 31/35] x86/bugs: Add attack vector controls for bhi David Kaplan
@ 2024-11-05 21:54 ` David Kaplan
  2024-11-05 21:54 ` [PATCH v2 33/35] x86/bugs: Add attack vector controls for l1tf David Kaplan
                   ` (2 subsequent siblings)
  34 siblings, 0 replies; 78+ messages in thread
From: David Kaplan @ 2024-11-05 21:54 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Use attack vector controls to determine if spectre_v2 mitigation is
required.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index cc5248cdfe6f..4d71b4f969dc 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1975,13 +1975,15 @@ static void __init spectre_v2_select_mitigation(void)
 	case SPECTRE_V2_CMD_NONE:
 		return;
 
-	case SPECTRE_V2_CMD_FORCE:
 	case SPECTRE_V2_CMD_AUTO:
+		if (!should_mitigate_vuln(X86_BUG_SPECTRE_V2))
+			break;
+		fallthrough;
+	case SPECTRE_V2_CMD_FORCE:
 		if (boot_cpu_has(X86_FEATURE_IBRS_ENHANCED)) {
 			mode = SPECTRE_V2_EIBRS;
 			break;
 		}
-
 		mode = spectre_v2_select_retpoline();
 		break;
 
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v2 33/35] x86/bugs: Add attack vector controls for l1tf
  2024-11-05 21:54 [PATCH v2 00/35] x86/bugs: Attack vector controls David Kaplan
                   ` (31 preceding siblings ...)
  2024-11-05 21:54 ` [PATCH v2 32/35] x86/bugs: Add attack vector controls for spectre_v2 David Kaplan
@ 2024-11-05 21:54 ` David Kaplan
  2024-11-05 21:54 ` [PATCH v2 34/35] x86/bugs: Add attack vector controls for srso David Kaplan
  2024-11-05 21:54 ` [PATCH v2 35/35] x86/pti: Add attack vector controls for pti David Kaplan
  34 siblings, 0 replies; 78+ messages in thread
From: David Kaplan @ 2024-11-05 21:54 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Use attack vector controls to determine if l1tf mitigation is required.

Disable SMT if cross-thread attack vector option is selected.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 4d71b4f969dc..81876a24c83c 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2725,10 +2725,15 @@ static void __init l1tf_select_mitigation(void)
 	}
 
 	if (l1tf_mitigation == L1TF_MITIGATION_AUTO) {
-		if (cpu_mitigations_auto_nosmt())
-			l1tf_mitigation = L1TF_MITIGATION_FLUSH_NOSMT;
-		else
-			l1tf_mitigation = L1TF_MITIGATION_FLUSH;
+		if (!should_mitigate_vuln(X86_BUG_L1TF))
+			l1tf_mitigation = L1TF_MITIGATION_OFF;
+		else {
+			if (cpu_mitigations_auto_nosmt() ||
+			    cpu_mitigate_attack_vector(CPU_MITIGATE_CROSS_THREAD))
+				l1tf_mitigation = L1TF_MITIGATION_FLUSH_NOSMT;
+			else
+				l1tf_mitigation = L1TF_MITIGATION_FLUSH;
+		}
 	}
 
 }
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v2 34/35] x86/bugs: Add attack vector controls for srso
  2024-11-05 21:54 [PATCH v2 00/35] x86/bugs: Attack vector controls David Kaplan
                   ` (32 preceding siblings ...)
  2024-11-05 21:54 ` [PATCH v2 33/35] x86/bugs: Add attack vector controls for l1tf David Kaplan
@ 2024-11-05 21:54 ` David Kaplan
  2024-11-05 21:54 ` [PATCH v2 35/35] x86/pti: Add attack vector controls for pti David Kaplan
  34 siblings, 0 replies; 78+ messages in thread
From: David Kaplan @ 2024-11-05 21:54 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Use attack vector controls to determine if srso mitigation is required.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 81876a24c83c..8552666c1b64 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2868,8 +2868,14 @@ static void __init srso_select_mitigation(void)
 		return;
 
 	/* Default mitigation */
-	if (srso_mitigation == SRSO_MITIGATION_AUTO)
-		srso_mitigation = SRSO_MITIGATION_SAFE_RET;
+	if (srso_mitigation == SRSO_MITIGATION_AUTO) {
+		if (should_mitigate_vuln(X86_BUG_SRSO))
+			srso_mitigation = SRSO_MITIGATION_SAFE_RET;
+		else {
+			srso_mitigation = SRSO_MITIGATION_NONE;
+			return;
+		}
+	}
 
 	if (has_microcode) {
 		/*
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v2 35/35] x86/pti: Add attack vector controls for pti
  2024-11-05 21:54 [PATCH v2 00/35] x86/bugs: Attack vector controls David Kaplan
                   ` (33 preceding siblings ...)
  2024-11-05 21:54 ` [PATCH v2 34/35] x86/bugs: Add attack vector controls for srso David Kaplan
@ 2024-11-05 21:54 ` David Kaplan
  34 siblings, 0 replies; 78+ messages in thread
From: David Kaplan @ 2024-11-05 21:54 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Disable PTI mitigation if user->kernel attack vector mitigations are
disabled.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/mm/pti.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
index 851ec8f1363a..9e1ed3df04e8 100644
--- a/arch/x86/mm/pti.c
+++ b/arch/x86/mm/pti.c
@@ -94,7 +94,8 @@ void __init pti_check_boottime_disable(void)
 	if (pti_mode == PTI_FORCE_ON)
 		pti_print_if_secure("force enabled on command line.");
 
-	if (pti_mode == PTI_AUTO && !boot_cpu_has_bug(X86_BUG_CPU_MELTDOWN))
+	if (pti_mode == PTI_AUTO && (!boot_cpu_has_bug(X86_BUG_CPU_MELTDOWN) ||
+				     !cpu_mitigate_attack_vector(CPU_MITIGATE_USER_KERNEL)))
 		return;
 
 	setup_force_cpu_cap(X86_FEATURE_PTI);
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 19/35] Documentation/x86: Document the new attack vector controls
  2024-11-05 21:54 ` [PATCH v2 19/35] Documentation/x86: Document the new attack vector controls David Kaplan
@ 2024-11-06 10:39   ` Borislav Petkov
  2024-11-06 14:49     ` Kaplan, David
  2024-11-13 14:15   ` Brendan Jackman
  1 sibling, 1 reply; 78+ messages in thread
From: Borislav Petkov @ 2024-11-06 10:39 UTC (permalink / raw)
  To: David Kaplan
  Cc: Thomas Gleixner, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
	Ingo Molnar, Dave Hansen, x86, H . Peter Anvin, linux-kernel

On Tue, Nov 05, 2024 at 03:54:39PM -0600, David Kaplan wrote:
> Document the 5 new attack vector command line options, how they
> interact with existing vulnerability controls, and recommendations on
> when they can be disabled.
> 
> Note that while mitigating against untrusted userspace requires both
> mitigate_user_kernel and mitigate_user_user, these are kept separate.
> The kernel can control what code executes inside of it and that may
> affect the risk associated with vulnerabilities especially if new kernel
> mitigations are implemented.  The same isn't typically true of userspace.
> 
> In other words, the risk associated with user_user or guest_guest
> attacks is unlikely to change over time.  While the risk associated with
> user_kernel or guest_host attacks may change.  Therefore, these controls
> are separated.

Right, and this is one of the things David and I have been bikeshedding on
recently so perhaps it'll be cool to hear some more opinions.

My issue with this is, because I always try to make the user interface as
simple as possible, I'm wondering if we should merge

	user_kernel and user_user

and

	guest_host and guest_guest

each into a single option.

Because user_user and guest_guest each pull in user_kernel and guest_host
respectively, due to how the protections work.

As David says, what user_kernel and guest_host enable for mitigating the
respective vector, will change when we add more involved kernel protection
schemes so their overhead should potentially go down.

While the user_user and guest_guest things should not change that much.

So, provided we always DTRT, what gets enabled behind those vectors will
change but still be sufficient depending on the kernel and its protections.

One of the arguments against those getting merged is, those are not going to
be *vector* controls anymore but something else:

mitigate_user - that will mitigate everything that has to do with executing
user processes

mitigate_guest - same but when running guests

The third one will be the SMT off: mitigate_cross_thread.

Oh and whatever we do now, we can always change it later but that means an
additional change.

Anyway, this should be the gist of our bikeshedding.

Thoughts? Ideas?

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 78+ messages in thread

* RE: [PATCH v2 19/35] Documentation/x86: Document the new attack vector controls
  2024-11-06 10:39   ` Borislav Petkov
@ 2024-11-06 14:49     ` Kaplan, David
  2024-11-13  3:58       ` Manwaring, Derek
  0 siblings, 1 reply; 78+ messages in thread
From: Kaplan, David @ 2024-11-06 14:49 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Thomas Gleixner, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
	Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
	linux-kernel@vger.kernel.org

> -----Original Message-----
> From: Borislav Petkov <bp@alien8.de>
> Sent: Wednesday, November 6, 2024 4:39 AM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Peter Zijlstra <peterz@infradead.org>;
> Josh Poimboeuf <jpoimboe@kernel.org>; Pawan Gupta
> <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar <mingo@redhat.com>; Dave
> Hansen <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v2 19/35] Documentation/x86: Document the new attack
> vector controls
>
> On Tue, Nov 05, 2024 at 03:54:39PM -0600, David Kaplan wrote:
> > Document the 5 new attack vector command line options, how they
> > interact with existing vulnerability controls, and recommendations on
> > when they can be disabled.
> >
> > Note that while mitigating against untrusted userspace requires both
> > mitigate_user_kernel and mitigate_user_user, these are kept separate.
> > The kernel can control what code executes inside of it and that may
> > affect the risk associated with vulnerabilities especially if new
> > kernel mitigations are implemented.  The same isn't typically true of userspace.
> >
> > In other words, the risk associated with user_user or guest_guest
> > attacks is unlikely to change over time.  While the risk associated
> > with user_kernel or guest_host attacks may change.  Therefore, these
> > controls are separated.
>
> Right, and this is one of the things David and I have been bikeshedding on recently
> so perhaps it'll be cool to hear some more opinions.
>
> My issue with this is, because I always try to make the user interface as simple as
> possible, I'm wondering if we should merge
>
>         user_kernel and user_user
>
> and
>
>         guest_host and guest_guest
>
> each into a single option.
>
> Because user_user and guest_guest each pull in user_kernel and guest_host
> respectively, due to how the protections work.

To be clear, in the current patch series the user_user and guest_guest protections only turn on the mitigations required for those specific vectors, as noted in this patch.  They do not automatically enable the same protections as user_kernel or guest_host.

However due to how Linux works (at least today), if you have one trusted userspace process and another untrusted one, you really should enable both user_kernel and user_user controls to have complete protection.

>
> As David says, what user_kernel and guest_host enable for mitigating the
> respective vector, will change when we add more involved kernel protection
> schemes so their overhead should potentially go down.
>
> While the user_user and guest_guest things should not change that much.
>
> So, provided we always DTRT, what gets enabled behind those vectors will change
> but still be sufficient depending on the kernel and its protections.

One key point is noted in the commit message: the kernel can control how it behaves, what it executes, its address space, etc.  But it cannot control how userspace works.  Therefore, the risk associated with attacks that target the kernel might change over time, while the same can't really be said for userspace.  Risk is not always a black-and-white decision, and there could be a point where kernel defense-in-depth measures may be deemed sufficient by some, but not all, admins to eliminate the need for more expensive mitigations.

>
> One of the arguments against those getting merged is, those are not going to be
> *vector* controls anymore but something else:
>
> mitigate_user - that will mitigate everything that has to do with executing user
> processes
>
> mitigate_guest - same but when running guests
>
> The third one will be the SMT off: mitigate_cross_thread.
>
> Oh and whatever we do now, we can always change it later but that means an
> additional change.
>
> Anyway, this should be the gist of our bikeshedding.
>
> Thoughts? Ideas?
>

Right, so the way I think of this is that there is a cognitive process that administrators must go through:

1. Determine how the system will be used (e.g., am I running untrusted VMs?)
2. Determine the attack vectors relevant for that configuration (e.g., I need guest->host and guest->guest protection)
3. Determine which mitigations are required to enable the desired level of security (e.g., enable vulnerability X mitigation but not Y)

Today, the administrator must do all 3 of these, which requires in-depth knowledge of all these bugs, and isn't forward compatible.  The proposed patch series has the kernel take care of step 3, but still requires the administrator to do steps 1 and 2.  The provided documentation helps with step 2, but ultimately the admin must decide which attack vectors they want to turn on/off.  But the attack vectors are also forward compatible in case new bugs show up in the future.

What you've proposed is up-leveling things a bit further and trying to have the kernel do both steps 2 and 3 in the above flow.  That is, the admin decides for example they have untrusted userspace, and the kernel then determines they need user->kernel and user->user protection, and then which bug fixes to enable.

I'm not necessarily opposed to that, and welcome feedback on this.  But as you said, that is not an attack-vector control anymore, it is more of an end-use control.  It is possible to do both...we could also create end-use options like the ones you mention, and just map those in a pretty trivial way to the attack vector controls.

I'm nervous about only doing the end-use controls and not the attack vector controls because of the reasons outlined above.  For example, I'm thinking about some proposed technologies such as KVM Address Space Isolation which may go a long way to reducing the risk of guest->host attacks but may not be fully secure yet (where the kernel would feel comfortable disabling a bunch of guest->host protections automatically).  With attack vector-level controls, it would be easier to turn off guest->host protection if the admin is comfortable with this technology, but still leaving the (almost certainly needed) guest->guest protections in place.

--David Kaplan

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 13/35] x86/bugs: Restructure spectre_v2_user mitigation
  2024-11-05 21:54 ` [PATCH v2 13/35] x86/bugs: Restructure spectre_v2_user mitigation David Kaplan
@ 2024-11-06 18:56   ` kernel test robot
  0 siblings, 0 replies; 78+ messages in thread
From: kernel test robot @ 2024-11-06 18:56 UTC (permalink / raw)
  To: David Kaplan, Thomas Gleixner, Borislav Petkov, Peter Zijlstra,
	Josh Poimboeuf, Pawan Gupta, Ingo Molnar, Dave Hansen, x86,
	H . Peter Anvin
  Cc: oe-kbuild-all, linux-kernel

Hi David,

kernel test robot noticed the following build warnings:

[auto build test WARNING on tip/master]
[also build test WARNING on linus/master v6.12-rc6 next-20241106]
[cannot apply to tip/smp/core tip/x86/core tip/auto-latest]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/David-Kaplan/x86-bugs-Add-X86_BUG_SPECTRE_V2_USER/20241106-060512
base:   tip/master
patch link:    https://lore.kernel.org/r/20241105215455.359471-14-david.kaplan%40amd.com
patch subject: [PATCH v2 13/35] x86/bugs: Restructure spectre_v2_user mitigation
config: x86_64-randconfig-121-20241106 (https://download.01.org/0day-ci/archive/20241107/202411070218.ge5HHTv1-lkp@intel.com/config)
compiler: gcc-12 (Debian 12.2.0-14) 12.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20241107/202411070218.ge5HHTv1-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202411070218.ge5HHTv1-lkp@intel.com/

sparse warnings: (new ones prefixed by >>)
>> arch/x86/kernel/cpu/bugs.c:1337:32: sparse: sparse: symbol 'spectre_v2_cmd' was not declared. Should it be static?

vim +/spectre_v2_cmd +1337 arch/x86/kernel/cpu/bugs.c

  1336	
> 1337	enum spectre_v2_mitigation_cmd spectre_v2_cmd __ro_after_init = SPECTRE_V2_CMD_AUTO;
  1338	

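A minimal sketch of the fix the sparse warning points at, assuming the symbol is only referenced from bugs.c (if it were needed from another translation unit, the alternative would be a declaration in a shared header instead):

```diff
-enum spectre_v2_mitigation_cmd spectre_v2_cmd __ro_after_init = SPECTRE_V2_CMD_AUTO;
+static enum spectre_v2_mitigation_cmd spectre_v2_cmd __ro_after_init = SPECTRE_V2_CMD_AUTO;
```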
-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

^ permalink raw reply	[flat|nested] 78+ messages in thread

* RE: [PATCH v2 19/35] Documentation/x86: Document the new attack vector controls
  2024-11-06 14:49     ` Kaplan, David
@ 2024-11-13  3:58       ` Manwaring, Derek
  2024-11-13 14:15         ` Brendan Jackman
  2024-11-13 14:49         ` Kaplan, David
  0 siblings, 2 replies; 78+ messages in thread
From: Manwaring, Derek @ 2024-11-13  3:58 UTC (permalink / raw)
  To: david.kaplan, jackmanb
  Cc: bp, dave.hansen, hpa, jpoimboe, linux-kernel, mingo,
	pawan.kumar.gupta, peterz, tglx, x86, mlipp, canellac

+Brendan

On 2024-11-06 at 14:49+0000, David Kaplan wrote:
> On 2024-11-06 at 10:39+0000, Borislav Petkov wrote:
> > One of the arguments against those getting merged is, those are not going to be
> > *vector* controls anymore but something else:
> >
> > mitigate_user - that will mitigate everything that has to do with executing user
> > processes
> >
> > mitigate_guest - same but when running guests
> >
> > The third one will be the SMT off: mitigate_cross_thread.
>
> Right, so the way I think of this is that there is a cognitive process
> that administrators must go through:
>
> 1. Determine how the system will be used (e.g., am I running untrusted
>    VMs?)
> 2. Determine the attack vectors relevant for that configuration (e.g., I
>    need guest->host and guest->guest protection)
> 3. Determine which mitigations are required to enable the desired level
>    of security (e.g., enable vulnerability X mitigation but not Y)
>
> Today, the administrator must do all 3 of these, which requires in-depth
> knowledge of all these bugs, and isn't forward compatible.  The proposed
> patch series has the kernel take care of step 3, but still requires the
> administrator to do steps 1 and 2.  The provided documentation helps
> with step 2, but ultimately the admin must decide which attack vectors
> they want to turn on/off.  But the attack vectors are also forward
> compatible in case new bugs show up in the future.
>
> What you've proposed is up-leveling things a bit further and trying to
> have the kernel do both steps 2 and 3 in the above flow.  That is, the
> admin decides for example they have untrusted userspace, and the kernel
> then determines they need user->kernel and user->user protection, and
> then which bug fixes to enable.
>
> I'm not necessarily opposed to that, and welcome feedback on this.  But
> as you said, that is not an attack-vector control anymore, it is more of
> an end-use control.  It is possible to do both...we could also create
> end-use options like the ones you mention, and just map those in a
> pretty trivial way to the attack vector controls.

I think the further simplification makes sense (merge to mitigate_user
or mitigate_guest). I would say definitely don't do both (ending up with
end-use, vector controls, *and* existing parameters). Both just seems
like more confusion rather than simplification overall.

For me the major dissonance in all of this remains cross_thread. Based
on either approach (end-use or vector), SMT should be disabled unless
the admin explicitly asks to keep it (presumably because they are
running with core scheduling correctly configured).

What if mitigate_user_user defaulted to 'defaults' instead of 'on'? I'm
thinking 'defaults' meaning "do the things the kernel normally did
before thinking in these attack-vector terms." That way we could
differentiate between "admin didn't specify anything" and "admin said
they cared about mitigating this vector (or case)." That should make it
reasonable to disable SMT when mitigate_user_user=on is supplied, yeah?

> I'm nervous about only doing the end-use controls and not the attack
> vector controls because of the reasons outlined above.  For example, I'm
> thinking about some proposed technologies such as KVM Address Space
> Isolation which may go a long way to reducing the risk of guest->host
> attacks but may not be fully secure yet (where the kernel would feel
> comfortable disabling a bunch of guest->host protections automatically).
> With attack vector-level controls, it would be easier to turn off
> guest->host protection if the admin is comfortable with this technology,
> but still leaving the (almost certainly needed) guest->guest protections
> in place.

Personally I wouldn't put too much weight on the possibility of
disabling kernel mitigations with these future approaches. For what
we're looking at with direct map removal, I would still keep kernel
mitigations on unless we really needed one off. Brendan, I know you were
looking at this differently though for ASI. What are your thoughts?

Derek

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 19/35] Documentation/x86: Document the new attack vector controls
  2024-11-05 21:54 ` [PATCH v2 19/35] Documentation/x86: Document the new attack vector controls David Kaplan
  2024-11-06 10:39   ` Borislav Petkov
@ 2024-11-13 14:15   ` Brendan Jackman
  2024-11-13 15:42     ` Kaplan, David
  1 sibling, 1 reply; 78+ messages in thread
From: Brendan Jackman @ 2024-11-13 14:15 UTC (permalink / raw)
  To: David Kaplan
  Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin,
	linux-kernel

Hi David,

I'll respond separately to the more interesting thread of discussion
w/ Boris & Derek but here are some more specific comments:

On Tue, 5 Nov 2024 at 23:00, David Kaplan <david.kaplan@amd.com> wrote:
> +User-to-User
Sorry for the annoying bikeshedding, but this naming could be
misconstrued by a hasty reader as being concerned with the boundary
between Unix users. I wonder if we should say "userspace-to-userspace"
or "process-to-process" or something? The latter would be confusing
because it doesn't imply any exclusion of KVM guests though.

At first I thought "don't be so pedantic, users really need to RTFM here
regardless, at most we just need a 'note this has nothing to do with
Unix users'"

But... actually isn't it conceivable that one day we could want
"mitigate attacks between userspace processes, unless they have the
same effective UID" or something? So then the naming would be really
confusing.

On the other hand, your response to my "but my trust domains are way
more complex than that" comment at LPC was "Google aren't the target
audience for this mechanism". Maybe anyone who knows that their unix
users are meaningful trust domains (probably: someone building a
specialised distro (Android?)) is similarly outside the target
audience here.

> +The user-to-user attack vector involves a malicious userspace program attempting
> +to influence the behavior of another unsuspecting userspace program in order to
> +exfiltrate data.  The vulnerability of a userspace program is based on the
> +program itself and the interfaces it provides.

I find this confusing. "Influence the behaviour" sounds like it's
talking specifically about attacks that drive mis-speculation
(Spectre-type), but shouldn't this also include stuff more in the vein
of L1TF and MDS? I also don't really understand "the interface it
provides".

To be concrete, imagine a system where process A just sits in a loop
reading a secret. With mitigations=off there's a vuln: if a userspace
attacker can preempt process A, it can leak the secret from some uarch
buffer it had naturally got into. I expect
mitigate_user_user=on to prevent that, but this wording doesn't really
sound like it does. I guess the vulnerability is "based on the program
itself" in that it took advantage of the fact it read the data into
the buffer, but there's no "interface" and no "influenced the
behaviour".

I think the best I can come up with to describe what I expect this
flag to mitigate is: "a malicious userspace program attempting to leak
data directly from the address space of the victim program". It's a
bit unfortunate that "directly from the address space" implies there's
some other avenue to leak that data, which might not always be the
case and kinda gets back to the other thread about the
user-to-user/user-to-kernel split. Maybe this is a point against that
split.

> +The guest-to-guest attack vector involves a malicious VM [...]
(Ditto of course)

> +Summary of attack-vector mitigations
> +------------------------------------
> +
> +When a vulnerability is mitigated due to an attack-vector control, the default
> +mitigation option for that particular vulnerability is used.  To use a different
> +mitigation, please use the vulnerability-specific command line option.
> +
> +The table below summarizes which vulnerabilities are mitigated when different
> +attack vectors are enabled and assuming the CPU is vulnerable.
> +
> +=============== ============== ============ ============= ============== ============
> +Vulnerability   User-to-Kernel User-to-User Guest-to-Host Guest-to-Guest Cross-Thread
> +=============== ============== ============ ============= ============== ============
> +BHI                   X                           X
> +GDS                   X              X            X              X            X
> +L1TF                                              X                       (Note 1)
> +MDS                   X              X            X              X        (Note 1)
> +MMIO                  X              X            X              X        (Note 1)
> +Meltdown              X
> +Retbleed              X                           X                       (Note 2)
> +RFDS                  X              X            X              X
> +Spectre_v1            X
> +Spectre_v2            X                           X
> +Spectre_v2_user                      X                           X
> +SRBDS                 X              X            X              X
> +SRSO                  X                           X
> +SSB (Note 3)
> +TAA                   X              X            X              X        (Note 1)
> +=============== ============== ============ ============= ============== ============

Hmm, I'm also confused by this. This mechanism is supposed to be about
mitigating attack vectors, but now suddenly we've gone back to talking
about vulnerabilities. Vulns and vectors are obviously not totally
orthogonal but there isn't a 1:1 mapping.

To be concrete again: this seems to say if mitigate_guest_host=off
there's no L1TF mitigation? (I think in fact it's mitigated by PTE
inversion).

And I don't understand why Retbleed doesn't require user_user or
guest_guest mitigation. I assume I can figure this out by reading
about the details of Retbleed exploitation or looking in bugs.c. But
given this is about making things easier I think we should probably
assume the reader is as ignorant about that as I am.

I dunno exactly what to suggest here, but maybe instead of "we
do/don't mitigate these vulns" this would be more helpful as "we
do/don't enable this specific mitigation action". So instead of a row
for L1TF, there would be a row for the flush that l1tf=flush gives you
(basically, like this kinda already has for spectre_v2_user). Maybe
again I'm just being pedantic and making life difficult here though.

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 19/35] Documentation/x86: Document the new attack vector controls
  2024-11-13  3:58       ` Manwaring, Derek
@ 2024-11-13 14:15         ` Brendan Jackman
  2024-11-13 15:05           ` Kaplan, David
  2024-11-20  0:14           ` Manwaring, Derek
  2024-11-13 14:49         ` Kaplan, David
  1 sibling, 2 replies; 78+ messages in thread
From: Brendan Jackman @ 2024-11-13 14:15 UTC (permalink / raw)
  To: Manwaring, Derek
  Cc: david.kaplan, bp, dave.hansen, hpa, jpoimboe, linux-kernel, mingo,
	pawan.kumar.gupta, peterz, tglx, x86, mlipp, canellac

On Wed, 13 Nov 2024 at 04:58, Manwaring, Derek <derekmn@amazon.com> wrote:
> > I'm nervous about only doing the end-use controls and not the attack
> > vector controls because of the reasons outlined above.  For example, I'm
> > thinking about some proposed technologies such as KVM Address Space
> > Isolation which may go a long way to reducing the risk of guest->host
> > attacks but may not be fully secure yet (where the kernel would feel
> > comfortable disabling a bunch of guest->host protections automatically).
> > With attack vector-level controls, it would be easier to turn off
> > guest->host protection if the admin is comfortable with this technology,
> > but still leaving the (almost certainly needed) guest->guest protections
> > in place.
>
> Personally I wouldn't put too much weight on the possibility of
> disabling kernel mitigations with these future approaches. For what
> we're looking at with direct map removal, I would still keep kernel
> mitigations on unless we really needed one off. Brendan, I know you were
> looking at this differently though for ASI. What are your thoughts?

Yeah, personally my vision for ASI is more that it _is_ the
guest_host/guest_user mitigation and for the RFCv2 (long-awaited,
sorry) it will be the user_user/user_kernel mitigation too. If we
decide we wanna keep existing mitigations in place once ASI is at full
strength then ASI mostly failed. (Or perhaps to be more charitable to
ASI's strategic prospects, I'd feel OK if people said "I want ASI, but
I'll keep the old mitigations for defence-in-depth" as long as we
usually don't need to develop _new_ mitigations for those people).

So rather than saying "I have ASI, I can turn guest->host mitigation
off", you say "I have ASI, guest->host mitigation is very cheap, let's
have some champagne". In the utopian champagne future only very
advanced users will have any interest in more fine-grained questions
than "do I trust my KVM guests".

I guess another way to look at that is: the distinction in these pairs
of attack vectors is quite subtle. The fact that we are even
considering exposing users to that awkward subtle distinction
highlights a weakness of Linux's current mitigation posture and I
think ASI reduces that weakness. The weakness is: the cost of
mitigating attack vectors doesn't line up very well with the degree of
threat they present. We think it's kinda tricky in practice to steal
interesting data just by going into the kernel, but it's probably
possible, so we have to pay mitigation costs every time we go into the
kernel. Relatively low risk, relatively high cost. So we're having to
say to the user "do you wanna pay that high cost for this low risk? We
can't tell you how low it is though, we can only start rambling
incomprehensibly about something called the 'fizz map'".

At first I wanted to say the same thing about your work to remove
stuff from the direct map. Basically that's about architecting
ourselves towards a world where the "guest->kernel" attack vector just
isn't meaningful, right?

^ permalink raw reply	[flat|nested] 78+ messages in thread

* RE: [PATCH v2 19/35] Documentation/x86: Document the new attack vector controls
  2024-11-13  3:58       ` Manwaring, Derek
  2024-11-13 14:15         ` Brendan Jackman
@ 2024-11-13 14:49         ` Kaplan, David
  1 sibling, 0 replies; 78+ messages in thread
From: Kaplan, David @ 2024-11-13 14:49 UTC (permalink / raw)
  To: Manwaring, Derek, jackmanb@google.com
  Cc: bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com,
	jpoimboe@kernel.org, linux-kernel@vger.kernel.org,
	mingo@redhat.com, pawan.kumar.gupta@linux.intel.com,
	peterz@infradead.org, tglx@linutronix.de, x86@kernel.org,
	mlipp@amazon.at, canellac@amazon.at

[AMD Official Use Only - AMD Internal Distribution Only]

> -----Original Message-----
> From: Manwaring, Derek <derekmn@amazon.com>
> Sent: Tuesday, November 12, 2024 9:58 PM
> To: Kaplan, David <David.Kaplan@amd.com>; jackmanb@google.com
> Cc: bp@alien8.de; dave.hansen@linux.intel.com; hpa@zytor.com;
> jpoimboe@kernel.org; linux-kernel@vger.kernel.org; mingo@redhat.com;
> pawan.kumar.gupta@linux.intel.com; peterz@infradead.org; tglx@linutronix.de;
> x86@kernel.org; mlipp@amazon.at; canellac@amazon.at
> Subject: RE: [PATCH v2 19/35] Documentation/x86: Document the new attack
> vector controls
>
> Caution: This message originated from an External Source. Use proper caution
> when opening attachments, clicking links, or responding.
>
>
> +Brendan
>
> On 2024-11-06 at 14:49+0000, David Kaplan wrote:
> > On 2024-11-06 at 10:39+0000, Borislav Petkov wrote:
> > > One of the arguments against those getting merged is, those are not
> > > going to be
> > > *vector* controls anymore but something else:
> > >
> > > mitigate_user - that will mitigate everything that has to do with
> > > executing user processes
> > >
> > > mitigate_guest - same but when running guests
> > >
> > > The third one will be the SMT off: mitigate_cross_thread.
> >
> > Right, so the way I think of this is that there is a cognitive process
> > that administrators must go through:
> >
> > 1. Determine how the system will be used (e.g., am I running untrusted
> >    VMs?)
> > 2. Determine the attack vectors relevant for that configuration (e.g.,
> >I
> >    need guest->host and guest->guest protection)  3. Determine which
> >mitigations are required to enable the desired level
> >    of security (e.g., enable vulnerability X mitigation but not Y)
> >
> > Today, the administrator must do all 3 of these, which requires
> > in-depth knowledge of all these bugs, and isn't forward compatible.
> > The proposed patch series has the kernel take care of step 3, but
> > still requires the administrator to do steps 1 and 2.  The provided
> > documentation helps with step 2, but ultimately the admin must decide
> > which attack vectors they want to turn on/off.  But the attack vectors
> > are also forward compatible in case new bugs show up in the future.
> >
> > What you've proposed is up-leveling things a bit further and trying to
> > have the kernel do both steps 2 and 3 in the above flow.  That is, the
> > admin decides for example they have untrusted userspace, and the
> > kernel then determines they need user->kernel and user->user
> > protection, and then which bug fixes to enable.
> >
> > I'm not necessarily opposed to that, and welcome feedback on this.
> > But as you said, that is not an attack-vector control anymore, it is
> > more of an end-use control.  It is possible to do both...we could also
> > create end-use options like the ones you mention, and just map those
> > in a pretty trivial way to the attack vector controls.
>
> I think the further simplification makes sense (merge to mitigate_user or
> mitigate_guest). I would say definitely don't do both (ending up with end-use, vector
> controls, *and* existing parameters). Both just seems like more confusion rather
> than simplification overall.
>
> For me the major dissonance in all of this remains cross_thread. Based on either
> approach (end-use or vector), SMT should be disabled unless the admin explicitly
> asks to keep it (presumably because they are running with core scheduling
> correctly configured).

Cross_thread is certainly a unique one.  The philosophy Linux appears to
have taken in general is to always mitigate these kinds of bugs by
default, unless doing so requires disabling SMT.  Others here may know
the history better, but I presume that decision was made because of the
performance impact of disabling SMT, and the fact that it would be
highly disruptive to update your kernel and find half your cores have
disappeared.  Still, it creates an incomplete security story.

But you do raise an important point, which is that the relevance of
cross-thread protection also depends on the scheduling policy, since
these attacks require the victim and attacker to be running on sibling
threads.  If scheduling policy prohibits that, then disabling SMT is not
required.  But the kernel doesn't know if that will be adhered to, hence
I think cross-thread has to be handled separately.  It would arguably
have made sense to disable SMT unless the admin asks to keep it, but
that ship I think has sailed and this doesn't seem like something we can
change now.

>
> What if mitigate_user_user defaulted to 'defaults' instead of 'on'? I'm thinking
> 'defaults' meaning "do the things the kernel normally did before thinking in these
> attack-vector terms." That way we could differentiate between "admin didn't specify
> anything" and "admin said they cared about mitigating this vector (or case)." That
> should make it reasonable to disable SMT when mitigate_user_user=on is supplied,
> yeah?
>

Hmm.  I don't really like the name 'defaults', although I could envision
something like 'partial' meaning do what we do today, while 'on' means
disable SMT.  But I do worry that if there are too many options that
secretly disable SMT under the hood, it will be confusing for users.
Plus you have the forward compatibility worry... the attack vectors are
designed to be stable even as new bugs appear.  I could imagine users
today choosing to enable mitigate_user_user, but if a new bug shows up
in the future that requires disabling SMT, all of a sudden they lose
half their cores overnight again.

Keeping the SMT disablement unique to the mitigate_cross_thread control
I think makes it more obvious to users whether there is a chance SMT
could get turned off.

Thanks
--David Kaplan

^ permalink raw reply	[flat|nested] 78+ messages in thread

* RE: [PATCH v2 19/35] Documentation/x86: Document the new attack vector controls
  2024-11-13 14:15         ` Brendan Jackman
@ 2024-11-13 15:05           ` Kaplan, David
  2024-11-13 15:31             ` Brendan Jackman
  2024-11-20  0:14           ` Manwaring, Derek
  1 sibling, 1 reply; 78+ messages in thread
From: Kaplan, David @ 2024-11-13 15:05 UTC (permalink / raw)
  To: Brendan Jackman, Manwaring, Derek
  Cc: bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com,
	jpoimboe@kernel.org, linux-kernel@vger.kernel.org,
	mingo@redhat.com, pawan.kumar.gupta@linux.intel.com,
	peterz@infradead.org, tglx@linutronix.de, x86@kernel.org,
	mlipp@amazon.at, canellac@amazon.at

> -----Original Message-----
> From: Brendan Jackman <jackmanb@google.com>
> Sent: Wednesday, November 13, 2024 8:16 AM
> To: Manwaring, Derek <derekmn@amazon.com>
> Cc: Kaplan, David <David.Kaplan@amd.com>; bp@alien8.de;
> dave.hansen@linux.intel.com; hpa@zytor.com; jpoimboe@kernel.org; linux-
> kernel@vger.kernel.org; mingo@redhat.com; pawan.kumar.gupta@linux.intel.com;
> peterz@infradead.org; tglx@linutronix.de; x86@kernel.org; mlipp@amazon.at;
> canellac@amazon.at
> Subject: Re: [PATCH v2 19/35] Documentation/x86: Document the new attack
> vector controls
>
>
>
> On Wed, 13 Nov 2024 at 04:58, Manwaring, Derek <derekmn@amazon.com>
> wrote:
> > > I'm nervous about only doing the end-use controls and not the attack
> > > vector controls because of the reasons outlined above.  For example,
> > > I'm thinking about some proposed technologies such as KVM Address
> > > Space Isolation which may go a long way to reducing the risk of
> > > guest->host attacks but may not be fully secure yet (where the
> > > kernel would feel comfortable disabling a bunch of guest->host protections
> automatically).
> > > With attack vector-level controls, it would be easier to turn off
> > > guest->host protection if the admin is comfortable with this
> > > guest->technology,
> > > but still leaving the (almost certainly needed) guest->guest
> > > protections in place.
> >
> > Personally I wouldn't put too much weight on the possibility of
> > disabling kernel mitigations with these future approaches. For what
> > we're looking at with direct map removal, I would still keep kernel
> > mitigations on unless we really needed one off. Brendan, I know you
> > were looking at this differently though for ASI. What are your thoughts?
>
> Yeah, personally my vision for ASI is more that it _is_ the guest_host/guest_user
> mitigation and for the RFCv2 (long-awaited,
> sorry) it will be the user_user/user_kernel mitigation too.

I don't see how ASI can ever be a user_user mitigation.  User_user
attacks are things like influencing the indirect predictions used by
another process, causing that process to leak data from its address
space.  User_user mitigations are things like doing an IBPB when
switching tasks.

Also guest_user mitigation is not a thing, did you mean guest_guest?  If
so, the same argument applies.


> If we decide we wanna
> keep existing mitigations in place once ASI is at full strength then ASI mostly failed.
> (Or perhaps to be more charitable to ASI's strategic prospects, I'd feel OK if people
> said "I want ASI, but I'll keep the old mitigations for defence-in-depth" as long as we
> usually don't need to develop _new_ mitigations for those people).
>
> So rather than saying "I have ASI, I can turn guest->host mitigation off", you say "I
> have ASI, guest->host mitigation is very cheap, let's have some champagne". In the
> utopian champagne future only very advanced users will have any interest in more
> fine-grained questions than "do I trust my KVM guests".
>
> I guess another way to look at that is: the distinction in these pairs of attack vectors
> is quite subtle. The fact that we are even considering exposing users to that
> awkward subtle distinction highlights a weakness of Linux's current mitigation
> posture and I think ASI reduces that weakness. The weakness is: the cost of
> mitigating attack vectors doesn't line up very well with the degree of threat they
> present. We think it's kinda tricky in practice to steal interesting data just by going
> into the kernel, but it's probably possible, so we have to pay mitigation costs every
> time we go into the kernel. Relatively low risk, relatively high cost. So we're having
> to say to the user "do you wanna pay that high cost for this low risk? We can't tell
> you how low it is though, we can only start rambling incomprehensibly about
> something called the 'fizz map'".

I disagree somewhat.  The distinction between say guest->host or
guest->guest attacks is what is being targeted.  This is especially easy
to see in the case of the branch-based side channels.  It's about
whether you're influencing the predictions in the host or another guest.
The subtle part is that today, guest secrets exist both in the victim
guest's address space and in the host's address space, meaning that if
you want to prevent those secrets from leaking, you need to protect both
places.  ASI can potentially solve one of those problems, but not the
other.

I'm also nervous about asserting that because certain attack vectors are
cheap to mitigate today, they always will be.

--David Kaplan

>
> At first I wanted to say the same thing about your work to remove stuff from the
> direct map. Basically that's about architecting ourselves towards a world where the
> "guest->kernel" attack vector just isn't meaningful, right?

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 19/35] Documentation/x86: Document the new attack vector controls
  2024-11-13 15:05           ` Kaplan, David
@ 2024-11-13 15:31             ` Brendan Jackman
  2024-11-13 16:00               ` Kaplan, David
  0 siblings, 1 reply; 78+ messages in thread
From: Brendan Jackman @ 2024-11-13 15:31 UTC (permalink / raw)
  To: Kaplan, David
  Cc: Manwaring, Derek, bp@alien8.de, dave.hansen@linux.intel.com,
	hpa@zytor.com, jpoimboe@kernel.org, linux-kernel@vger.kernel.org,
	mingo@redhat.com, pawan.kumar.gupta@linux.intel.com,
	peterz@infradead.org, tglx@linutronix.de, x86@kernel.org,
	mlipp@amazon.at, canellac@amazon.at

On Wed, 13 Nov 2024 at 16:05, Kaplan, David <David.Kaplan@amd.com> wrote:
>
> I don't see how ASI can ever be a user_user mitigation.  User_user
> attacks are things like influencing the indirect predictions used by
> another process, causing that process to leak data from its address
> space.  User_user mitigations are things like doing an IBPB when
> switching tasks.

Well, in the RFC I'm currently (painfully slowly, sorry again) working
on, that IBPB is provided by ASI. Each process has an ASI domain, ASI
ensures there's an IBPB before we transition into any other domain
that doesn't trust it (VMs trust their VMM, but all other transitions
out of the userpace domain will flush).

In practice, this is just provided by the fact that context switching
currently incurs an asi_exit(), but that's an implementation detail,
if we transitioned directly from one process' domain to another that
would also create a flush.

(But yes, maybe that being "part of ASI" is just my very ASI-centric
perspective).

> Also guest_user mitigation is not a thing, did you mean guest_guest?  If so, the same argument applies.

Oh yep, sorry, and yep same response.

^ permalink raw reply	[flat|nested] 78+ messages in thread

* RE: [PATCH v2 19/35] Documentation/x86: Document the new attack vector controls
  2024-11-13 14:15   ` Brendan Jackman
@ 2024-11-13 15:42     ` Kaplan, David
  0 siblings, 0 replies; 78+ messages in thread
From: Kaplan, David @ 2024-11-13 15:42 UTC (permalink / raw)
  To: Brendan Jackman
  Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86@kernel.org,
	H . Peter Anvin, linux-kernel@vger.kernel.org

> -----Original Message-----
> From: Brendan Jackman <jackmanb@google.com>
> Sent: Wednesday, November 13, 2024 8:16 AM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Borislav Petkov <bp@alien8.de>; Peter
> Zijlstra <peterz@infradead.org>; Josh Poimboeuf <jpoimboe@kernel.org>; Pawan
> Gupta <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar <mingo@redhat.com>;
> Dave Hansen <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v2 19/35] Documentation/x86: Document the new attack
> vector controls
>
>
>
> Hi David,
>
> I'll respond separately to the more interesting thread of discussion w/ Boris & Derek
> but here are some more specific comments:
>
> On Tue, 5 Nov 2024 at 23:00, David Kaplan <david.kaplan@amd.com> wrote:
> > +User-to-User
> Sorry for the annoying bikeshedding, but this naming could be misconstrued by a
> hasty reader as being concerned with the boundary between Unix users. I wonder if
> we should say "userspace-to-userspace"
> or "process-to-process" or something? The latter would be confusing because it
> doesn't imply any exclusion of KVM guests though.

This was a concern that crossed my mind.  mitigate_process_kernel or
mitigate_process_process might be alternatives, while still keeping
guest_host and guest_guest (I think those names are less likely to be
misunderstood).  Curious if other people have opinions here.

>
> At first I thought "don't be so pedantic, users really need to RTFM here regardless,
> at most we just need a 'note this has nothing to do with Unix users'"
>
> But... actually isn't it conceivable that one day we could want "mitigate attacks
> between userspace processes, unless they have the same effective UID" or
> something? So then the naming would be really confusing.
>
> On the other hand, your response to my "but my trust domains are way more
> complex than that" comment at LPC was "Google aren't the target audience for this
> mechanism". Maybe anyone who knows that their unix users are meaningful trust
> domains (probably: someone building a specialised distro (Android?)) is similarly
> outside the target audience here.
>
> > +The user-to-user attack vector involves a malicious userspace program
> > +attempting to influence the behavior of another unsuspecting
> > +userspace program in order to exfiltrate data.  The vulnerability of
> > +a userspace program is based on the program itself and the interfaces it
> provides.
>
> I find this confusing. "Influence the behaviour" sounds like it's talking specifically
> about attacks that drive mis-speculation (Spectre-type), but shouldn't this also
> include stuff more in the vein of L1TF and MDS? I also don't really understand "the
> interface it provides".

You're right, this is talking more about mis-speculation type attacks.
The interface point is that generally the attacker needs some way to
invoke the victim and provide it some inputs to cause it to do something
which results in leakage.

Even for something like MDS, I think in order to have any control over
what data is being leaked, the attacker arguably needs some way to
influence victim behavior.  Of course that is not always required if the
victim just reads the secret all the time on its own (like your
following example).

>
> To be concrete, imagine there's a system where process A just sits in a loop
> reading a secret, and on a system with mitigations=off there's a vuln if a userspace
> attacker can preempt process A, it can leak the secret from some uarch buffer it
> had naturally got into. I expect mitigate_user_user=on to prevent that, but this
> wording doesn't really sound like it does. I guess the vulnerability is "based on the
> program itself" in that it took advantage of the fact it read the data into the buffer, but
> there's no "interface" and no "influenced the behaviour".

In the current patch series, it would be prevented because mitigate_user_user mitigates things like MDS.  I think I can update the documentation to be clearer on this point.

>
> I think the best I can come up with to describe what I expect this flag to mitigate is:
> "a malicious userspace program attempting to leak data directly from the address
> space of the victim program". It's a bit unfortunate that "directly from the address
> space" implies there's some other avenue to leak that data, which might not always
> be the case and kinda gets back to the other thread about the user-to-user/user-to-
> kernel split. Maybe this is a point against that split.

What you are pointing out is that a user->user attack can either be due to influencing the speculative behavior of another process, or through microarchitectural buffer leakage that may occur while that user process is running.  That's a valid point.

>
> > +The guest-to-guest attack vector involves a malicious VM [...]
> (Ditto of course)
>
> > +Summary of attack-vector mitigations
> > +------------------------------------
> > +
> > +When a vulnerability is mitigated due to an attack-vector control,
> > +the default mitigation option for that particular vulnerability is
> > +used.  To use a different mitigation, please use the vulnerability-specific command line option.
> > +
> > +The table below summarizes which vulnerabilities are mitigated when
> > +different attack vectors are enabled and assuming the CPU is vulnerable.
> > +
> > +=============== ============== ============ ============= ============== ============
> > +Vulnerability   User-to-Kernel User-to-User Guest-to-Host Guest-to-Guest Cross-Thread
> > +=============== ============== ============ ============= ============== ============
> > +BHI                   X                           X
> > +GDS                   X              X            X              X            X
> > +L1TF                                              X                       (Note 1)
> > +MDS                   X              X            X              X        (Note 1)
> > +MMIO                  X              X            X              X        (Note 1)
> > +Meltdown              X
> > +Retbleed              X                           X                       (Note 2)
> > +RFDS                  X              X            X              X
> > +Spectre_v1            X
> > +Spectre_v2            X                           X
> > +Spectre_v2_user                      X                           X
> > +SRBDS                 X              X            X              X
> > +SRSO                  X                           X
> > +SSB (Note 3)
> > +TAA                   X              X            X              X        (Note 1)
> > +=============== ============== ============ ============= ============== ============
>
> Hmm, I'm also confused by this. This mechanism is supposed to be about mitigating
> attack vectors, but now suddenly we've gone back to talking about vulnerabilities.
> Vulns and vectors are obviously not totally orthogonal but there isn't a 1:1 mapping.

I did simplify things a bit for this table, but I think it’s the mitigations where there isn't a 1:1 mapping.  In reality, a single vulnerability may have different mitigations for different vectors.  BHI is a great example of this where there is one option to clear the BHB on syscall and another to do it on VMEXIT.  And in the patches in this series, these are applied differently depending on the selected attack vectors.  So this table really was meant to indicate which bugs are relevant for each attack vector, and which require some form of mitigation.

>
> To be concrete again: this seems to say if mitigate_guest_host=off there's no L1TF
> mitigation? (I think in fact it's mitigated by PTE inversion).

Correct

>
> And I don't understand why Retbleed doesn't require user_user or guest_guest
> mitigation. I assume I can figure this out by reading about the details of Retbleed
> exploitation or looking in bugs.c. But given this is about making things easier I think
> we should probably assume the reader is as ignorant about that as I am.

Retbleed is a complex example because it is different on AMD vs Intel, but let me take SRSO instead which is a bit simpler to talk about.

SRSO involves the attacker poisoning the BTB so it can influence return instructions executed by the victim (AMD retbleed is similar).  Preventing this requires mitigation in the kernel.

User->user or guest->guest are possible attack vectors for SRSO too; however, they are mitigated by the fact that IBPB is done on context switches (through spectre_v2_user).

(That said, I think this points out something in the upstream code...if spectre_v2_user is disabled but SRSO/retbleed are enabled then I think we don't do the IBPB on context switch.  Not sure if that's a bug)

>
> I dunno exactly what to suggest here, but maybe instead of "we do/don't mitigate
> these vulns" this would be more helpful as "we do/don't enable this specific
> mitigation action". So instead of a row for L1TF, there would be a row for the flush
> that l1tf=flush gives you (basically, like this kinda already has for spectre_v2_user).
> Maybe again I'm just being pedantic and making life difficult here though.

I see your point...and in fact the example I just outlined above is a good argument in your favor.  It really should be attack vectors mapping to mitigations, not necessarily to vulnerabilities.  Many of the vulnerabilities (other than BHI) don't do a good job of differentiating this though...so it simply becomes "turn on the mitigation if a relevant attack vector is enabled".  That could be improved in the future (and that might be good to see), but as you'll see in the later patches in the series, in most cases the attack vectors currently map directly to vulnerabilities.

One concern I had is that the specific mitigations are very bug-specific, and some bugs have their own documentation files about them.  I don't know if the attack vector documentation is the place to go into the exact mitigation choice being used for each bug, or if that is something that the bug-specific documentation should be updated to discuss.

--David Kaplan

^ permalink raw reply	[flat|nested] 78+ messages in thread

* RE: [PATCH v2 19/35] Documentation/x86: Document the new attack vector controls
  2024-11-13 15:31             ` Brendan Jackman
@ 2024-11-13 16:00               ` Kaplan, David
  2024-11-13 16:19                 ` Brendan Jackman
  0 siblings, 1 reply; 78+ messages in thread
From: Kaplan, David @ 2024-11-13 16:00 UTC (permalink / raw)
  To: Brendan Jackman
  Cc: Manwaring, Derek, bp@alien8.de, dave.hansen@linux.intel.com,
	hpa@zytor.com, jpoimboe@kernel.org, linux-kernel@vger.kernel.org,
	mingo@redhat.com, pawan.kumar.gupta@linux.intel.com,
	peterz@infradead.org, tglx@linutronix.de, x86@kernel.org,
	mlipp@amazon.at, canellac@amazon.at

[AMD Official Use Only - AMD Internal Distribution Only]

> -----Original Message-----
> From: Brendan Jackman <jackmanb@google.com>
> Sent: Wednesday, November 13, 2024 9:32 AM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Manwaring, Derek <derekmn@amazon.com>; bp@alien8.de;
> dave.hansen@linux.intel.com; hpa@zytor.com; jpoimboe@kernel.org; linux-
> kernel@vger.kernel.org; mingo@redhat.com; pawan.kumar.gupta@linux.intel.com;
> peterz@infradead.org; tglx@linutronix.de; x86@kernel.org; mlipp@amazon.at;
> canellac@amazon.at
> Subject: Re: [PATCH v2 19/35] Documentation/x86: Document the new attack
> vector controls
>
> Caution: This message originated from an External Source. Use proper caution
> when opening attachments, clicking links, or responding.
>
>
> On Wed, 13 Nov 2024 at 16:05, Kaplan, David <David.Kaplan@amd.com> wrote:
> >
> > I don't see how ASI can ever be a user_user mitigation.  User_user attacks are
> things like influencing the indirect predictions used by another process, causing that
> process to leak data from its address space.  User_user mitigations are things like
> doing an IBPB when switching tasks.
>
> Well, in the RFC I'm currently (painfully slowly, sorry again) working on, that IBPB is
> provided by ASI. Each process has an ASI domain, ASI ensures there's an IBPB
> before we transition into any other domain that doesn't trust it (VMs trust their VMM,
> but all other transitions out of the userpace domain will flush).
>
> In practice, this is just provided by the fact that context switching currently incurs an
> asi_exit(), but that's an implementation detail, if we transitioned directly from one
> process' domain to another that would also create a flush.
>
> (But yes, maybe that being "part of ASI" is just my very ASI-centric perspective).

Ah yes, this makes sense.  As you said though, this is due to the fact that context switch incurs an asi_exit.  As a thought exercise, I wonder what would happen if there were a mitigation that was required when switching to another guest, but not to the broader host address space.  I think that would mean there would still be a need for extra flushes when switching guest->guest that may not be covered by asi_exit.  (That's theoretical, but it could be a reason not to tie these together forever.)

And yes, please do keep up the ASI work, I'm very much looking forward to it, it should be a big improvement.

--David Kaplan

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 19/35] Documentation/x86: Document the new attack vector controls
  2024-11-13 16:00               ` Kaplan, David
@ 2024-11-13 16:19                 ` Brendan Jackman
  2024-11-14  9:32                   ` Brendan Jackman
  0 siblings, 1 reply; 78+ messages in thread
From: Brendan Jackman @ 2024-11-13 16:19 UTC (permalink / raw)
  To: Kaplan, David
  Cc: Manwaring, Derek, bp@alien8.de, dave.hansen@linux.intel.com,
	hpa@zytor.com, jpoimboe@kernel.org, linux-kernel@vger.kernel.org,
	mingo@redhat.com, pawan.kumar.gupta@linux.intel.com,
	peterz@infradead.org, tglx@linutronix.de, x86@kernel.org,
	mlipp@amazon.at, canellac@amazon.at

On Wed, 13 Nov 2024 at 17:00, Kaplan, David <David.Kaplan@amd.com> wrote:
> I wonder what would happen if there was a mitigation that was required when switching to another guest, but not to the broader host address space.

This is already the case for the mitigations that "go the other way":
IBPB protects the incoming domain from the outgoing one, but L1D flush
protects the outgoing from the incoming. So when you exit to the
unrestricted address space it never makes sense to flush L1D (everyone
trusts the kernel) but e.g. guest->guest still needs one.

> that may not be covered by asi_exit.

That's right, these other mitigations are part of asi_enter.

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 03/35] x86/bugs: Add AUTO mitigations for mds/taa/mmio/rfds
  2024-11-05 21:54 ` [PATCH v2 03/35] x86/bugs: Add AUTO mitigations for mds/taa/mmio/rfds David Kaplan
@ 2024-11-14  2:26   ` Pawan Gupta
  2024-11-14 14:59     ` Kaplan, David
  0 siblings, 1 reply; 78+ messages in thread
From: Pawan Gupta @ 2024-11-14  2:26 UTC (permalink / raw)
  To: David Kaplan
  Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Ingo Molnar, Dave Hansen, x86, H . Peter Anvin, linux-kernel

On Tue, Nov 05, 2024 at 03:54:23PM -0600, David Kaplan wrote:
> @@ -1995,6 +2004,7 @@ void cpu_bugs_smt_update(void)
>  		update_mds_branch_idle();
>  		break;
>  	case MDS_MITIGATION_OFF:
> +	case MDS_MITIGATION_AUTO:

This implies AUTO and OFF are similar, which is counterintuitive.
While mitigation selection code ...

> +	if (mds_mitigation == MDS_MITIGATION_AUTO)
> +		mds_mitigation = MDS_MITIGATION_FULL;
> +

... indicates that AUTO is equivalent to FULL. So, I think AUTO should be
handled the same way as FULL in cpu_bugs_smt_update() as well.

Same for TAA and MMIO below.

>  		break;
>  	}
>  
> @@ -2006,6 +2016,7 @@ void cpu_bugs_smt_update(void) break;
>  	case TAA_MITIGATION_TSX_DISABLED:
>  	case TAA_MITIGATION_OFF:
> +	case TAA_MITIGATION_AUTO:
>  		break;
>  	}
>  
> @@ -2016,6 +2027,7 @@ void cpu_bugs_smt_update(void)
>  			pr_warn_once(MMIO_MSG_SMT);
>  		break;
>  	case MMIO_MITIGATION_OFF:
> +	case MMIO_MITIGATION_AUTO:
>  		break;
>  	}

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 04/35] x86/bugs: Restructure mds mitigation
  2024-11-05 21:54 ` [PATCH v2 04/35] x86/bugs: Restructure mds mitigation David Kaplan
@ 2024-11-14  3:03   ` Pawan Gupta
  2024-11-14 15:01     ` Kaplan, David
  0 siblings, 1 reply; 78+ messages in thread
From: Pawan Gupta @ 2024-11-14  3:03 UTC (permalink / raw)
  To: David Kaplan
  Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Ingo Molnar, Dave Hansen, x86, H . Peter Anvin, linux-kernel

On Tue, Nov 05, 2024 at 03:54:24PM -0600, David Kaplan wrote:
> @@ -277,12 +304,19 @@ enum rfds_mitigations {
>  static enum rfds_mitigations rfds_mitigation __ro_after_init =
>  	IS_ENABLED(CONFIG_MITIGATION_RFDS) ? RFDS_MITIGATION_AUTO : RFDS_MITIGATION_OFF;
>  
> +/* Return TRUE if any VERW-based mitigation is enabled. */
> +static bool __init mitigate_any_verw(void)

s/mitigate_any_verw/verw_enabled/ ?

> +{
> +	return (mds_mitigation != MDS_MITIGATION_OFF ||
> +		taa_mitigation != TAA_MITIGATION_OFF ||

TAA_MITIGATION_TSX_DISABLED does not require VERW, this should be:

		(taa_mitigation != TAA_MITIGATION_OFF &&
		 taa_mitigation != TAA_MITIGATION_TSX_DISABLED) ||

> +		mmio_mitigation != MMIO_MITIGATION_OFF ||
> +		rfds_mitigation != RFDS_MITIGATION_OFF);
> +}

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 05/35] x86/bugs: Restructure taa mitigation
  2024-11-05 21:54 ` [PATCH v2 05/35] x86/bugs: Restructure taa mitigation David Kaplan
@ 2024-11-14  4:43   ` Pawan Gupta
  2024-11-14 15:08     ` Kaplan, David
  0 siblings, 1 reply; 78+ messages in thread
From: Pawan Gupta @ 2024-11-14  4:43 UTC (permalink / raw)
  To: David Kaplan
  Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Ingo Molnar, Dave Hansen, x86, H . Peter Anvin, linux-kernel

On Tue, Nov 05, 2024 at 03:54:25PM -0600, David Kaplan wrote:
> +static void __init taa_update_mitigation(void)
> +{
> +	if (!boot_cpu_has_bug(X86_BUG_TAA))
> +		return;
> +
> +	if (mitigate_any_verw())
> +		taa_mitigation = TAA_MITIGATION_VERW;

This isn't right, TAA_MITIGATION_UCODE_NEEDED can never get set
irrespective of microcode.

The UCODE_NEEDED checks in taa_select_mitigation() actually belong here.

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 06/35] x86/bugs: Restructure mmio mitigation
  2024-11-05 21:54 ` [PATCH v2 06/35] x86/bugs: Restructure mmio mitigation David Kaplan
@ 2024-11-14  5:03   ` Pawan Gupta
  0 siblings, 0 replies; 78+ messages in thread
From: Pawan Gupta @ 2024-11-14  5:03 UTC (permalink / raw)
  To: David Kaplan
  Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Ingo Molnar, Dave Hansen, x86, H . Peter Anvin, linux-kernel

On Tue, Nov 05, 2024 at 03:54:26PM -0600, David Kaplan wrote:
> +static void __init mmio_update_mitigation(void)
> +{
> +	if (!boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA))
> +		return;
> +
> +	if (mitigate_any_verw())
> +		mmio_mitigation = MMIO_MITIGATION_VERW;

Same as TAA, UCODE_NEEDED can't be set irrespective of microcode.

> +
> +	pr_info("%s\n", mmio_strings[mmio_mitigation]);

This should be in the 'else' part of the condition below, otherwise they can
print conflicting mitigation status.

> +	if (boot_cpu_has_bug(X86_BUG_MMIO_UNKNOWN))
> +		pr_info("Unknown: No mitigations\n");
> +}


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 07/35] x86/bugs: Restructure rfds mitigation
  2024-11-05 21:54 ` [PATCH v2 07/35] x86/bugs: Restructure rfds mitigation David Kaplan
@ 2024-11-14  5:55   ` Pawan Gupta
  0 siblings, 0 replies; 78+ messages in thread
From: Pawan Gupta @ 2024-11-14  5:55 UTC (permalink / raw)
  To: David Kaplan
  Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Ingo Molnar, Dave Hansen, x86, H . Peter Anvin, linux-kernel

On Tue, Nov 05, 2024 at 03:54:27PM -0600, David Kaplan wrote:
> +static void __init rfds_update_mitigation(void)
> +{
> +	if (!boot_cpu_has_bug(X86_BUG_RFDS))
> +		return;
> +
> +	if (mitigate_any_verw())
> +		rfds_mitigation = RFDS_MITIGATION_VERW;

Ditto as TAA and MMIO.

> +	pr_info("Register File Data Sampling: %s\n", rfds_strings[rfds_mitigation]);
> +}

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 10/35] x86/bugs: Restructure gds mitigation
  2024-11-05 21:54 ` [PATCH v2 10/35] x86/bugs: Restructure gds mitigation David Kaplan
@ 2024-11-14  6:21   ` Pawan Gupta
  0 siblings, 0 replies; 78+ messages in thread
From: Pawan Gupta @ 2024-11-14  6:21 UTC (permalink / raw)
  To: David Kaplan
  Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Ingo Molnar, Dave Hansen, x86, H . Peter Anvin, linux-kernel

On Tue, Nov 05, 2024 at 03:54:30PM -0600, David Kaplan wrote:
> @@ -892,7 +899,7 @@ static void __init gds_select_mitigation(void)
>  		} else {
>  			gds_mitigation = GDS_MITIGATION_UCODE_NEEDED;
>  		}
> -		goto out;
> +		return;
>  	}
>  
>  	/* Microcode has mitigation, use it */
> @@ -914,8 +921,14 @@ static void __init gds_select_mitigation(void)
>  		gds_mitigation = GDS_MITIGATION_FULL_LOCKED;
>  	}
>  

Nit, extra newline.

> +}

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 11/35] x86/bugs: Restructure spectre_v1 mitigation
  2024-11-05 21:54 ` [PATCH v2 11/35] x86/bugs: Restructure spectre_v1 mitigation David Kaplan
@ 2024-11-14  6:57   ` Pawan Gupta
  2024-11-14 15:36     ` Kaplan, David
  0 siblings, 1 reply; 78+ messages in thread
From: Pawan Gupta @ 2024-11-14  6:57 UTC (permalink / raw)
  To: David Kaplan
  Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Ingo Molnar, Dave Hansen, x86, H . Peter Anvin, linux-kernel

On Tue, Nov 05, 2024 at 03:54:31PM -0600, David Kaplan wrote:
>  static void __init spectre_v1_select_mitigation(void)
>  {
> -	if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1) || cpu_mitigations_off()) {
> +	if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1) || cpu_mitigations_off())
>  		spectre_v1_mitigation = SPECTRE_V1_MITIGATION_NONE;
> +}
> +
> +static void __init spectre_v1_apply_mitigation(void)
> +{
> +	if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1) || cpu_mitigations_off())

We probably don't need to repeat this check, is this okay:

	if (spectre_v1_mitigation == SPECTRE_V1_MITIGATION_NONE)
>  		return;
> -	}
>  
>  	if (spectre_v1_mitigation == SPECTRE_V1_MITIGATION_AUTO) {

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 19/35] Documentation/x86: Document the new attack vector controls
  2024-11-13 16:19                 ` Brendan Jackman
@ 2024-11-14  9:32                   ` Brendan Jackman
  2024-11-22 16:15                     ` Manwaring, Derek
  0 siblings, 1 reply; 78+ messages in thread
From: Brendan Jackman @ 2024-11-14  9:32 UTC (permalink / raw)
  To: Kaplan, David
  Cc: Manwaring, Derek, bp@alien8.de, dave.hansen@linux.intel.com,
	hpa@zytor.com, jpoimboe@kernel.org, linux-kernel@vger.kernel.org,
	mingo@redhat.com, pawan.kumar.gupta@linux.intel.com,
	peterz@infradead.org, tglx@linutronix.de, x86@kernel.org,
	mlipp@amazon.at, canellac@amazon.at

On Wed, 13 Nov 2024 at 17:19, Brendan Jackman <jackmanb@google.com> wrote:
>
> On Wed, 13 Nov 2024 at 17:00, Kaplan, David <David.Kaplan@amd.com> wrote:
> > I wonder what would happen if there was a mitigation that was required when switching to another guest, but not to the broader host address space.
>
> This is already the case for the mitigations that "go the other way":
> IBPB protects the incoming domain from the outgoing one, but L1D flush
> protects the outgoing from the incoming. So when you exit to the
> unrestricted address space it never makes sense to flush L1D (everyone
> trusts the kernel) but e.g. guest->guest still needs one.

I'm straying quite far from the actual topic now but to avoid
confusion for anyone reading later:

A discussion off-list led me to realise that the specifics of this
comment are nonsensical, I had L1TF in mind but I don't think you can
exploit L1TF in a direct guest->guest attack (I'm probably still
missing some nuance there). We wouldn't need to flush L1D there unless
there's a new vuln.

I made a similar mistake in [1] where I had forgotten that you can't
really do direct user->user L1TF attacks either. I was thinking of
"Foreshadow-OS" [2] which is not really user->user.

[1] https://lore.kernel.org/linux-kernel/CA+i-1C38hxnCGC=Zr=hNFzJBceYoOHfixhpL3xiXEg3hcdgWUg@mail.gmail.com/

[2] https://foreshadowattack.eu/foreshadow-NG.pdf

Anyway, the underlying point I'm making is still valid, I think. In
RFCv1, ASI has flushes on both transitions. In RFCv2, as well as the
two transitions into and out of the restricted address space, there
are transitions between different restricted address spaces, and we
can do flushes there too.

^ permalink raw reply	[flat|nested] 78+ messages in thread

* RE: [PATCH v2 03/35] x86/bugs: Add AUTO mitigations for mds/taa/mmio/rfds
  2024-11-14  2:26   ` Pawan Gupta
@ 2024-11-14 14:59     ` Kaplan, David
  2024-11-14 17:14       ` Pawan Gupta
  0 siblings, 1 reply; 78+ messages in thread
From: Kaplan, David @ 2024-11-14 14:59 UTC (permalink / raw)
  To: Pawan Gupta
  Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
	linux-kernel@vger.kernel.org

> -----Original Message-----
> From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> Sent: Wednesday, November 13, 2024 8:27 PM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Borislav Petkov <bp@alien8.de>; Peter
> Zijlstra <peterz@infradead.org>; Josh Poimboeuf <jpoimboe@kernel.org>; Ingo
> Molnar <mingo@redhat.com>; Dave Hansen <dave.hansen@linux.intel.com>;
> x86@kernel.org; H . Peter Anvin <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v2 03/35] x86/bugs: Add AUTO mitigations for
> mds/taa/mmio/rfds
>
> On Tue, Nov 05, 2024 at 03:54:23PM -0600, David Kaplan wrote:
> > @@ -1995,6 +2004,7 @@ void cpu_bugs_smt_update(void)
> >               update_mds_branch_idle();
> >               break;
> >       case MDS_MITIGATION_OFF:
> > +     case MDS_MITIGATION_AUTO:
>
> This implies AUTO and OFF are similar, which is counter intuitive.
> While mitigation selection code ...
>
> > +     if (mds_mitigation == MDS_MITIGATION_AUTO)
> > +             mds_mitigation = MDS_MITIGATION_FULL;
> > +
>
> ... indicates that AUTO is equivalent to FULL. So, I think AUTO should be handled
> the same way as FULL in cpu_bugs_smt_update() as well.
>
> Same for TAA and MMIO below.
>

The mitigation is never actually AUTO by the time we call cpu_bugs_smt_update(), since this happens after cpu_select_mitigations().  I had to add the case statement here so the switch statement was complete, but this case will never be hit.

Should I put a comment here about that?  Or is a default case the better way to handle this?

--David Kaplan

^ permalink raw reply	[flat|nested] 78+ messages in thread

* RE: [PATCH v2 04/35] x86/bugs: Restructure mds mitigation
  2024-11-14  3:03   ` Pawan Gupta
@ 2024-11-14 15:01     ` Kaplan, David
  2024-12-10 15:24       ` Borislav Petkov
  0 siblings, 1 reply; 78+ messages in thread
From: Kaplan, David @ 2024-11-14 15:01 UTC (permalink / raw)
  To: Pawan Gupta
  Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
	linux-kernel@vger.kernel.org

> -----Original Message-----
> From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> Sent: Wednesday, November 13, 2024 9:04 PM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Borislav Petkov <bp@alien8.de>; Peter
> Zijlstra <peterz@infradead.org>; Josh Poimboeuf <jpoimboe@kernel.org>; Ingo
> Molnar <mingo@redhat.com>; Dave Hansen <dave.hansen@linux.intel.com>;
> x86@kernel.org; H . Peter Anvin <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v2 04/35] x86/bugs: Restructure mds mitigation
>
> On Tue, Nov 05, 2024 at 03:54:24PM -0600, David Kaplan wrote:
> > @@ -277,12 +304,19 @@ enum rfds_mitigations {  static enum
> > rfds_mitigations rfds_mitigation __ro_after_init =
> >       IS_ENABLED(CONFIG_MITIGATION_RFDS) ? RFDS_MITIGATION_AUTO
> :
> > RFDS_MITIGATION_OFF;
> >
> > +/* Return TRUE if any VERW-based mitigation is enabled. */ static
> > +bool __init mitigate_any_verw(void)
>
> s/mitigate_any_verw/verw_enabled/ ?

Ok

>
> > +{
> > +     return (mds_mitigation != MDS_MITIGATION_OFF ||
> > +             taa_mitigation != TAA_MITIGATION_OFF ||
>
> TAA_MITIGATION_TSX_DISABLED does not require VERW, this should be:
>
>                 (taa_mitigation != TAA_MITIGATION_OFF &&
>                  taa_mitigation != TAA_MITIGATION_TSX_DISABLED) ||
>
> > +             mmio_mitigation != MMIO_MITIGATION_OFF ||
> > +             rfds_mitigation != RFDS_MITIGATION_OFF); }

Good catch, will fix

--David Kaplan

^ permalink raw reply	[flat|nested] 78+ messages in thread

* RE: [PATCH v2 05/35] x86/bugs: Restructure taa mitigation
  2024-11-14  4:43   ` Pawan Gupta
@ 2024-11-14 15:08     ` Kaplan, David
  0 siblings, 0 replies; 78+ messages in thread
From: Kaplan, David @ 2024-11-14 15:08 UTC (permalink / raw)
  To: Pawan Gupta
  Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
	linux-kernel@vger.kernel.org

> -----Original Message-----
> From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> Sent: Wednesday, November 13, 2024 10:43 PM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Borislav Petkov <bp@alien8.de>; Peter
> Zijlstra <peterz@infradead.org>; Josh Poimboeuf <jpoimboe@kernel.org>; Ingo
> Molnar <mingo@redhat.com>; Dave Hansen <dave.hansen@linux.intel.com>;
> x86@kernel.org; H . Peter Anvin <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v2 05/35] x86/bugs: Restructure taa mitigation
>
> On Tue, Nov 05, 2024 at 03:54:25PM -0600, David Kaplan wrote:
> > +static void __init taa_update_mitigation(void) {
> > +     if (!boot_cpu_has_bug(X86_BUG_TAA))
> > +             return;
> > +
> > +     if (mitigate_any_verw())
> > +             taa_mitigation = TAA_MITIGATION_VERW;
>
> This isn't right, TAA_MITIGATION_UCODE_NEEDED can never get set
> irrespective of microcode.
>
> The UCODE_NEEDED checks in taa_select_mitigation() actually belongs here.

Ah, I see your point.  I'll fix this, and for mmio/rfds too.

Thanks --David Kaplan

^ permalink raw reply	[flat|nested] 78+ messages in thread

* RE: [PATCH v2 11/35] x86/bugs: Restructure spectre_v1 mitigation
  2024-11-14  6:57   ` Pawan Gupta
@ 2024-11-14 15:36     ` Kaplan, David
  2024-11-14 15:49       ` Kaplan, David
  2024-11-14 17:41       ` Pawan Gupta
  0 siblings, 2 replies; 78+ messages in thread
From: Kaplan, David @ 2024-11-14 15:36 UTC (permalink / raw)
  To: Pawan Gupta
  Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
	linux-kernel@vger.kernel.org

> -----Original Message-----
> From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> Sent: Thursday, November 14, 2024 12:57 AM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Borislav Petkov <bp@alien8.de>; Peter
> Zijlstra <peterz@infradead.org>; Josh Poimboeuf <jpoimboe@kernel.org>; Ingo
> Molnar <mingo@redhat.com>; Dave Hansen <dave.hansen@linux.intel.com>;
> x86@kernel.org; H . Peter Anvin <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v2 11/35] x86/bugs: Restructure spectre_v1 mitigation
>
> On Tue, Nov 05, 2024 at 03:54:31PM -0600, David Kaplan wrote:
> >  static void __init spectre_v1_select_mitigation(void)
> >  {
> > -     if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1) || cpu_mitigations_off()) {
> > +     if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1) ||
> > + cpu_mitigations_off())
> >               spectre_v1_mitigation = SPECTRE_V1_MITIGATION_NONE;
> > +}
> > +
> > +static void __init spectre_v1_apply_mitigation(void) {
> > +     if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1) ||
> > +cpu_mitigations_off())
>
> We probably don't need to repeat this check, is this okay:
>
>         if (spectre_v1_mitigation == SPECTRE_V1_MITIGATION_NONE)
> >               return;
> > -     }
> >
> >       if (spectre_v1_mitigation == SPECTRE_V1_MITIGATION_AUTO) {

I don't think so.  That would stop us from printing the message about the system being vulnerable at the end of the function.

I believe we should only skip printing the message if the CPU is actually not vulnerable or mitigations are globally disabled.  Although now I realize my patches may not always be suppressing the print statements if cpu_mitigations_off(), so I need to go and fix that.

--David Kaplan

^ permalink raw reply	[flat|nested] 78+ messages in thread

* RE: [PATCH v2 11/35] x86/bugs: Restructure spectre_v1 mitigation
  2024-11-14 15:36     ` Kaplan, David
@ 2024-11-14 15:49       ` Kaplan, David
  2024-11-14 16:19         ` Borislav Petkov
  2024-11-14 17:41       ` Pawan Gupta
  1 sibling, 1 reply; 78+ messages in thread
From: Kaplan, David @ 2024-11-14 15:49 UTC (permalink / raw)
  To: Pawan Gupta
  Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
	linux-kernel@vger.kernel.org

> -----Original Message-----
> From: Kaplan, David
> Sent: Thursday, November 14, 2024 9:37 AM
> To: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Borislav Petkov <bp@alien8.de>; Peter
> Zijlstra <peterz@infradead.org>; Josh Poimboeuf <jpoimboe@kernel.org>; Ingo
> Molnar <mingo@redhat.com>; Dave Hansen <dave.hansen@linux.intel.com>;
> x86@kernel.org; H . Peter Anvin <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: RE: [PATCH v2 11/35] x86/bugs: Restructure spectre_v1 mitigation
>
>
>
> > -----Original Message-----
> > From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> > Sent: Thursday, November 14, 2024 12:57 AM
> > To: Kaplan, David <David.Kaplan@amd.com>
> > Cc: Thomas Gleixner <tglx@linutronix.de>; Borislav Petkov
> > <bp@alien8.de>; Peter Zijlstra <peterz@infradead.org>; Josh Poimboeuf
> > <jpoimboe@kernel.org>; Ingo Molnar <mingo@redhat.com>; Dave Hansen
> > <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> > <hpa@zytor.com>; linux-kernel@vger.kernel.org
> > Subject: Re: [PATCH v2 11/35] x86/bugs: Restructure spectre_v1
> > mitigation
> >
> > On Tue, Nov 05, 2024 at 03:54:31PM -0600, David Kaplan wrote:
> > >  static void __init spectre_v1_select_mitigation(void)
> > >  {
> > > -     if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1) || cpu_mitigations_off()) {
> > > +     if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1) ||
> > > + cpu_mitigations_off())
> > >               spectre_v1_mitigation = SPECTRE_V1_MITIGATION_NONE;
> > > +}
> > > +
> > > +static void __init spectre_v1_apply_mitigation(void) {
> > > +     if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1) ||
> > > +cpu_mitigations_off())
> >
> > We probably don't need to repeat this check, is this okay:
> >
> >         if (spectre_v1_mitigation == SPECTRE_V1_MITIGATION_NONE)
> > >               return;
> > > -     }
> > >
> > >       if (spectre_v1_mitigation == SPECTRE_V1_MITIGATION_AUTO) {
>
> I don't think so.  That would stop us from printing the message about the system
> being vulnerable at the end of the function.
>
> We should only not print the message I believe if the CPU is actually not vulnerable
> or mitigations are globally disabled.  Although now I realize my patches may not be
> suppressing the print statements always if cpu_mitigations_off(), so I need to go
> and fix that.
>

Actually, it looks like the existing code wasn't always consistent here.  For srbds, ssb, and gds, it would still print a message about the system being vulnerable even if mitigations=off was passed, but for the others it would not print a message.  I think I'm going to suppress the message in all cases, but if people feel it should be the other way, let me know.

--David Kaplan

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 11/35] x86/bugs: Restructure spectre_v1 mitigation
  2024-11-14 15:49       ` Kaplan, David
@ 2024-11-14 16:19         ` Borislav Petkov
  2024-11-14 16:45           ` Kaplan, David
  0 siblings, 1 reply; 78+ messages in thread
From: Borislav Petkov @ 2024-11-14 16:19 UTC (permalink / raw)
  To: Kaplan, David
  Cc: Pawan Gupta, Thomas Gleixner, Peter Zijlstra, Josh Poimboeuf,
	Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
	linux-kernel@vger.kernel.org

On Thu, Nov 14, 2024 at 03:49:42PM +0000, Kaplan, David wrote:
> Actually looks like the existing code wasn't always consistent here.  For
> srbds, ssb, and gds, it would still print a message about the system being
> vulnerable even if mitigations=off was passed.  But for the others it would
> not print a message.  I think I'm going to suppress the message for all
> cases, but if people feel it should be the other way, let me know.

Yeah, we probably should fix this in a pre-patch. I.e., if mitigations=off,
not issue any "Vulnerable" message because this is the "master switch", so to
speak.

Or do we want to issue a bunch of "Vulnerable" in dmesg?

I gravitate towards former because if user supplies mitigations=off, then she
probably knows what she's doing...?

Hmm.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 78+ messages in thread

* RE: [PATCH v2 11/35] x86/bugs: Restructure spectre_v1 mitigation
  2024-11-14 16:19         ` Borislav Petkov
@ 2024-11-14 16:45           ` Kaplan, David
  2024-11-14 23:33             ` Josh Poimboeuf
  0 siblings, 1 reply; 78+ messages in thread
From: Kaplan, David @ 2024-11-14 16:45 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Pawan Gupta, Thomas Gleixner, Peter Zijlstra, Josh Poimboeuf,
	Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
	linux-kernel@vger.kernel.org

> -----Original Message-----
> From: Borislav Petkov <bp@alien8.de>
> Sent: Thursday, November 14, 2024 10:20 AM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>; Thomas Gleixner
> <tglx@linutronix.de>; Peter Zijlstra <peterz@infradead.org>; Josh Poimboeuf
> <jpoimboe@kernel.org>; Ingo Molnar <mingo@redhat.com>; Dave Hansen
> <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v2 11/35] x86/bugs: Restructure spectre_v1 mitigation
>
> On Thu, Nov 14, 2024 at 03:49:42PM +0000, Kaplan, David wrote:
> > Actually looks like the existing code wasn't always consistent here.
> > For srbds, ssb, and gds, it would still print a message about the
> > system being vulnerable even if mitigations=off was passed.  But for
> > the others it would not print a message.  I think I'm going to
> > suppress the message for all cases, but if people feel it should be the other way,
> let me know.
>
> Yeah, we probably should fix this in a pre-patch. I.e., if mitigations=off, not issue any
> "Vulnerable" message because this is the "master switch", so to speak.
>

I would prefer to fix it with these restructuring patches since they're moving around a lot of the code in this area anyway and putting these print messages in more consistent places.  Otherwise I have to do it twice...

> Or do we want to issue a bunch of "Vulnerable" in dmesg?
>
> I gravitate towards former because if user supplies mitigations=off, then she
> probably knows what she's doing...?
>
> Hmm.
>

Right, there are 4 cases I think:
1) the CPU is not vulnerable (it doesn't have the bug)
2) the CPU is vulnerable but mitigations=off was passed
3) the CPU is vulnerable but the bug-specific mitigation was disabled (e.g., retbleed=off)
4) the CPU is vulnerable, mitigations were not disabled, but no mitigation is available (perhaps it wasn't compiled in)

We absolutely should not print a message in case 1, because the CPU isn't vulnerable.  And we should probably always print a message in case 4 to warn the user.  The question is really about cases 2 and 3.

Today, some bugs print a message saying the CPU is vulnerable in cases 2 and 3 (e.g., gds).
Some bugs don't print a message in case 2, but do in case 3 (e.g., spectre_v1).
Some don't print a message in either case 2 or case 3 (e.g., retbleed).

Case 4 is things like where you need SRSO mitigation but CONFIG_MITIGATION_SRSO was disabled.

So which do we want?  It would be nice to be consistent and I can do that while reworking these functions.

If we're going to argue that command line options mean the user knows what they're doing, that's probably an argument for saying do not print anything in cases 2 and 3 (since both relate to explicit command line options).  I'm not sure if it really makes sense to differentiate these cases.

Thoughts?

--David Kaplan

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 03/35] x86/bugs: Add AUTO mitigations for mds/taa/mmio/rfds
  2024-11-14 14:59     ` Kaplan, David
@ 2024-11-14 17:14       ` Pawan Gupta
  2024-11-14 17:17         ` Kaplan, David
  0 siblings, 1 reply; 78+ messages in thread
From: Pawan Gupta @ 2024-11-14 17:14 UTC (permalink / raw)
  To: Kaplan, David
  Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
	linux-kernel@vger.kernel.org

On Thu, Nov 14, 2024 at 02:59:34PM +0000, Kaplan, David wrote:
> [AMD Official Use Only - AMD Internal Distribution Only]
> 
> > -----Original Message-----
> > From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> > Sent: Wednesday, November 13, 2024 8:27 PM
> > To: Kaplan, David <David.Kaplan@amd.com>
> > Cc: Thomas Gleixner <tglx@linutronix.de>; Borislav Petkov <bp@alien8.de>; Peter
> > Zijlstra <peterz@infradead.org>; Josh Poimboeuf <jpoimboe@kernel.org>; Ingo
> > Molnar <mingo@redhat.com>; Dave Hansen <dave.hansen@linux.intel.com>;
> > x86@kernel.org; H . Peter Anvin <hpa@zytor.com>; linux-kernel@vger.kernel.org
> > Subject: Re: [PATCH v2 03/35] x86/bugs: Add AUTO mitigations for
> > mds/taa/mmio/rfds
> >
> > On Tue, Nov 05, 2024 at 03:54:23PM -0600, David Kaplan wrote:
> > > @@ -1995,6 +2004,7 @@ void cpu_bugs_smt_update(void)
> > >               update_mds_branch_idle();
> > >               break;
> > >       case MDS_MITIGATION_OFF:
> > > +     case MDS_MITIGATION_AUTO:
> >
> > This implies AUTO and OFF are similar, which is counter intuitive.
> > While mitigation selection code ...
> >
> > > +     if (mds_mitigation == MDS_MITIGATION_AUTO)
> > > +             mds_mitigation = MDS_MITIGATION_FULL;
> > > +
> >
> > ... indicates that AUTO is equivalent to FULL. So, I think AUTO should be handled
> > the same way as FULL in cpu_bugs_smt_update() as well.
> >
> > Same for TAA and MMIO below.
> >
> 
> The mitigation is never actually AUTO by the time we call
> cpu_bugs_smt_update(), since this happens after cpu_select_mitigations().
> I had to add the case statement here so the switch statement was
> complete, but this case will never be hit.
> 
> Should I put a comment here about that?  Or is a default case the better
> way to handle this?

My suggestion would be to treat AUTO as FULL, and move it up with FULL:

         switch (mds_mitigation) {
         case MDS_MITIGATION_FULL:
+        case MDS_MITIGATION_AUTO:
         case MDS_MITIGATION_VMWERV:
                 if (sched_smt_active() && !boot_cpu_has(X86_BUG_MSBDS_ONLY))
                         pr_warn_once(MDS_MSG_SMT);
                 update_mds_branch_idle();
                 break;
         case MDS_MITIGATION_OFF:
                 break;
         }

^ permalink raw reply	[flat|nested] 78+ messages in thread

* RE: [PATCH v2 03/35] x86/bugs: Add AUTO mitigations for mds/taa/mmio/rfds
  2024-11-14 17:14       ` Pawan Gupta
@ 2024-11-14 17:17         ` Kaplan, David
  0 siblings, 0 replies; 78+ messages in thread
From: Kaplan, David @ 2024-11-14 17:17 UTC (permalink / raw)
  To: Pawan Gupta
  Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
	linux-kernel@vger.kernel.org

> -----Original Message-----
> From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> Sent: Thursday, November 14, 2024 11:14 AM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Borislav Petkov <bp@alien8.de>; Peter
> Zijlstra <peterz@infradead.org>; Josh Poimboeuf <jpoimboe@kernel.org>; Ingo
> Molnar <mingo@redhat.com>; Dave Hansen <dave.hansen@linux.intel.com>;
> x86@kernel.org; H . Peter Anvin <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v2 03/35] x86/bugs: Add AUTO mitigations for
> mds/taa/mmio/rfds
>
> On Thu, Nov 14, 2024 at 02:59:34PM +0000, Kaplan, David wrote:
> > [AMD Official Use Only - AMD Internal Distribution Only]
> >
> > > -----Original Message-----
> > > From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> > > Sent: Wednesday, November 13, 2024 8:27 PM
> > > To: Kaplan, David <David.Kaplan@amd.com>
> > > Cc: Thomas Gleixner <tglx@linutronix.de>; Borislav Petkov
> > > <bp@alien8.de>; Peter Zijlstra <peterz@infradead.org>; Josh
> > > Poimboeuf <jpoimboe@kernel.org>; Ingo Molnar <mingo@redhat.com>;
> > > Dave Hansen <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter
> > > Anvin <hpa@zytor.com>; linux-kernel@vger.kernel.org
> > > Subject: Re: [PATCH v2 03/35] x86/bugs: Add AUTO mitigations for
> > > mds/taa/mmio/rfds
> > >
> > > On Tue, Nov 05, 2024 at 03:54:23PM -0600, David Kaplan wrote:
> > > > @@ -1995,6 +2004,7 @@ void cpu_bugs_smt_update(void)
> > > >               update_mds_branch_idle();
> > > >               break;
> > > >       case MDS_MITIGATION_OFF:
> > > > +     case MDS_MITIGATION_AUTO:
> > >
> > > This implies AUTO and OFF are similar, which is counter intuitive.
> > > While mitigation selection code ...
> > >
> > > > +     if (mds_mitigation == MDS_MITIGATION_AUTO)
> > > > +             mds_mitigation = MDS_MITIGATION_FULL;
> > > > +
> > >
> > > ... indicates that AUTO is equivalent to FULL. So, I think AUTO
> > > should be handled the same way as FULL in cpu_bugs_smt_update() as well.
> > >
> > > Same for TAA and MMIO below.
> > >
> >
> > The mitigation is never actually AUTO by the time we call
> > cpu_bugs_smt_update(), since this happens after cpu_select_mitigations().
> > I had to add the case statement here so the switch statement was
> > complete, but this case will never be hit.
> >
> > Should I put a comment here about that?  Or is a default case the
> > better way to handle this?
>
> My suggestion would be to treat AUTO as FULL, and move it up with FULL:
>
>          switch (mds_mitigation) {
>          case MDS_MITIGATION_FULL:
> +        case MDS_MITIGATION_AUTO:
>          case MDS_MITIGATION_VMWERV:
>                  if (sched_smt_active() && !boot_cpu_has(X86_BUG_MSBDS_ONLY))
>                          pr_warn_once(MDS_MSG_SMT);
>                  update_mds_branch_idle();
>                  break;
>          case MDS_MITIGATION_OFF:
>                  break;
>          }

Ok, I can do that

Thanks --David Kaplan

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 11/35] x86/bugs: Restructure spectre_v1 mitigation
  2024-11-14 15:36     ` Kaplan, David
  2024-11-14 15:49       ` Kaplan, David
@ 2024-11-14 17:41       ` Pawan Gupta
  2024-11-14 17:48         ` Kaplan, David
  1 sibling, 1 reply; 78+ messages in thread
From: Pawan Gupta @ 2024-11-14 17:41 UTC (permalink / raw)
  To: Kaplan, David
  Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
	linux-kernel@vger.kernel.org

On Thu, Nov 14, 2024 at 03:36:44PM +0000, Kaplan, David wrote:
> [AMD Official Use Only - AMD Internal Distribution Only]
> 
> > -----Original Message-----
> > From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> > Sent: Thursday, November 14, 2024 12:57 AM
> > To: Kaplan, David <David.Kaplan@amd.com>
> > Cc: Thomas Gleixner <tglx@linutronix.de>; Borislav Petkov <bp@alien8.de>; Peter
> > Zijlstra <peterz@infradead.org>; Josh Poimboeuf <jpoimboe@kernel.org>; Ingo
> > Molnar <mingo@redhat.com>; Dave Hansen <dave.hansen@linux.intel.com>;
> > x86@kernel.org; H . Peter Anvin <hpa@zytor.com>; linux-kernel@vger.kernel.org
> > Subject: Re: [PATCH v2 11/35] x86/bugs: Restructure spectre_v1 mitigation
> >
> > On Tue, Nov 05, 2024 at 03:54:31PM -0600, David Kaplan wrote:
> > >  static void __init spectre_v1_select_mitigation(void)
> > >  {
> > > -     if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1) || cpu_mitigations_off()) {
> > > +     if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1) ||
> > > + cpu_mitigations_off())
> > >               spectre_v1_mitigation = SPECTRE_V1_MITIGATION_NONE;
> > > +}
> > > +
> > > +static void __init spectre_v1_apply_mitigation(void) {
> > > +     if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1) ||
> > > +cpu_mitigations_off())
> >
> > We probably don't need to repeat this check, is this okay:
> >
> >         if (spectre_v1_mitigation == SPECTRE_V1_MITIGATION_NONE)
> > >               return;
> > > -     }
> > >
> > >       if (spectre_v1_mitigation == SPECTRE_V1_MITIGATION_AUTO) {
> 
> I don't think so.  That would stop us from printing the message about the
> system being vulnerable at the end of the function.

Sorry it wasn't clear, my comment was not about the return, but about
simplifying the check:

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 22fdaaac2d21..e8c481c7a590 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1115,7 +1115,7 @@ static void __init spectre_v1_select_mitigation(void)
 
 static void __init spectre_v1_apply_mitigation(void)
 {
-	if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1) || cpu_mitigations_off())
+	if (spectre_v1_mitigation == SPECTRE_V1_MITIGATION_NONE)
 		return;
 
 	if (spectre_v1_mitigation == SPECTRE_V1_MITIGATION_AUTO) {

Since we already set spectre_v1_mitigation to SPECTRE_V1_MITIGATION_NONE
for that exact condition.

> We should only not print the message I believe if the CPU is actually not
> vulnerable or mitigations are globally disabled.  Although now I realize
> my patches may not be suppressing the print statements always if
> cpu_mitigations_off(), so I need to go and fix that.

^ permalink raw reply related	[flat|nested] 78+ messages in thread

* RE: [PATCH v2 11/35] x86/bugs: Restructure spectre_v1 mitigation
  2024-11-14 17:41       ` Pawan Gupta
@ 2024-11-14 17:48         ` Kaplan, David
  0 siblings, 0 replies; 78+ messages in thread
From: Kaplan, David @ 2024-11-14 17:48 UTC (permalink / raw)
  To: Pawan Gupta
  Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
	linux-kernel@vger.kernel.org

> -----Original Message-----
> From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> Sent: Thursday, November 14, 2024 11:41 AM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Borislav Petkov <bp@alien8.de>; Peter
> Zijlstra <peterz@infradead.org>; Josh Poimboeuf <jpoimboe@kernel.org>; Ingo
> Molnar <mingo@redhat.com>; Dave Hansen <dave.hansen@linux.intel.com>;
> x86@kernel.org; H . Peter Anvin <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v2 11/35] x86/bugs: Restructure spectre_v1 mitigation
>
> On Thu, Nov 14, 2024 at 03:36:44PM +0000, Kaplan, David wrote:
> > [AMD Official Use Only - AMD Internal Distribution Only]
> >
> > > -----Original Message-----
> > > From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> > > Sent: Thursday, November 14, 2024 12:57 AM
> > > To: Kaplan, David <David.Kaplan@amd.com>
> > > Cc: Thomas Gleixner <tglx@linutronix.de>; Borislav Petkov
> > > <bp@alien8.de>; Peter Zijlstra <peterz@infradead.org>; Josh
> > > Poimboeuf <jpoimboe@kernel.org>; Ingo Molnar <mingo@redhat.com>;
> > > Dave Hansen <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter
> > > Anvin <hpa@zytor.com>; linux-kernel@vger.kernel.org
> > > Subject: Re: [PATCH v2 11/35] x86/bugs: Restructure spectre_v1
> > > mitigation
> > >
> > > On Tue, Nov 05, 2024 at 03:54:31PM -0600, David Kaplan wrote:
> > > >  static void __init spectre_v1_select_mitigation(void)
> > > >  {
> > > > -     if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1) ||
> cpu_mitigations_off()) {
> > > > +     if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1) ||
> > > > + cpu_mitigations_off())
> > > >               spectre_v1_mitigation = SPECTRE_V1_MITIGATION_NONE;
> > > > +}
> > > > +
> > > > +static void __init spectre_v1_apply_mitigation(void) {
> > > > +     if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1) ||
> > > > +cpu_mitigations_off())
> > >
> > > We probably don't need to repeat this check, is this okay:
> > >
> > >         if (spectre_v1_mitigation == SPECTRE_V1_MITIGATION_NONE)
> > > >               return;
> > > > -     }
> > > >
> > > >       if (spectre_v1_mitigation == SPECTRE_V1_MITIGATION_AUTO) {
> >
> > I don't think so.  That would stop us from printing the message about
> > the system being vulnerable at the end of the function.
>
> Sorry it wasn't clear, my comment was not about the return, but about simplifying
> the check:
>
> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index
> 22fdaaac2d21..e8c481c7a590 100644
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -1115,7 +1115,7 @@ static void __init spectre_v1_select_mitigation(void)
>
>  static void __init spectre_v1_apply_mitigation(void)  {
> -       if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1) || cpu_mitigations_off())
> +       if (spectre_v1_mitigation == SPECTRE_V1_MITIGATION_NONE)
>                 return;
>
>         if (spectre_v1_mitigation == SPECTRE_V1_MITIGATION_AUTO) {
>
> Since we already set spectre_v1_mitigation to
> SPECTRE_V1_MITIGATION_NONE for that exact condition.

Right, but this gets back to my point: this changes whether we issue the print statement or not.  In the current upstream code, it will print a message saying the CPU is vulnerable if 'nospectre_v1' is passed, but not if mitigations=off is passed.

So in the patch, it was keeping the same behavior.

However as noted in my latest reply, there is a lot of inconsistency in bugs.c about behavior here and whether a message is printed when mitigations are disabled in various ways.  I think we need to resolve that, and then I can make it consistent.  If the decision is to not print messages if the bug is explicitly disabled (either via the global or bug-specific option), then I agree with your diff.

--David Kaplan

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 11/35] x86/bugs: Restructure spectre_v1 mitigation
  2024-11-14 16:45           ` Kaplan, David
@ 2024-11-14 23:33             ` Josh Poimboeuf
  2024-12-12 10:41               ` Borislav Petkov
  0 siblings, 1 reply; 78+ messages in thread
From: Josh Poimboeuf @ 2024-11-14 23:33 UTC (permalink / raw)
  To: Kaplan, David
  Cc: Borislav Petkov, Pawan Gupta, Thomas Gleixner, Peter Zijlstra,
	Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
	linux-kernel@vger.kernel.org

On Thu, Nov 14, 2024 at 04:45:37PM +0000, Kaplan, David wrote:
> 1) the CPU is not vulnerable (it doesn't have the bug)
> 2) the CPU is vulnerable but mitigations=off was passed
> 3) the CPU is vulnerable but the bug-specific mitigation was disabled (e.g., retbleed=off)
> 4) the CPU is vulnerable, mitigations were not disabled, but no mitigation is available (perhaps it wasn't compiled in)
> 
> We absolutely should not print a message in case #1, because the CPU isn't vulnerable.  And we should probably always print a message in case 4 to warn the user.  Question is really about cases 2 and 3.
> 
> Today, some bugs print a message saying the CPU is vulnerable in case 2 and 3 (e.g., gds)
> Some bugs don't print a message in case 2, but do in case 3 (e.g., spectre_v1)
> Some don't print a message in case 2 or 3 (e.g., retbleed)
> 
> Case 4 is things like where you need SRSO mitigation but CONFIG_MITIGATION_SRSO was disabled.
> 
> So which do we want?  It would be nice to be consistent and I can do that while reworking these functions.
> 
> If we're going to argue that command line options mean the user knows
> what they're doing, that's probably an argument for saying do not
> print anything in cases 2 and 3 (since both relate to explicit command
> line options).  I'm not sure if it really makes sense to differentiate
> these cases.

IMO, mitigations=off shouldn't show any bug-specific messages, as the user
doesn't care about the specifics; they just want everything off.

That said, they still might want to see some kind of "all mitigations
disabled" message to indicate the option actually worked.

For similar reasons I'd argue the bug-specific toggle should show a
bug-specific vulnerable message.

-- 
Josh

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 19/35] Documentation/x86: Document the new attack vector controls
  2024-11-13 14:15         ` Brendan Jackman
  2024-11-13 15:05           ` Kaplan, David
@ 2024-11-20  0:14           ` Manwaring, Derek
  1 sibling, 0 replies; 78+ messages in thread
From: Manwaring, Derek @ 2024-11-20  0:14 UTC (permalink / raw)
  To: jackmanb
  Cc: bp, canellac, dave.hansen, david.kaplan, derekmn, hpa, jpoimboe,
	linux-kernel, mingo, mlipp, pawan.kumar.gupta, peterz, tglx, x86

On 2024-11-13 at 14:15+0000, Brendan Jackman wrote:
> On Wed, 13 Nov 2024 at 04:58, Manwaring, Derek <derekmn@amazon.com> wrote:
> > Personally I wouldn't put too much weight on the possibility of
> > disabling kernel mitigations with these future approaches. For what
> > we're looking at with direct map removal, I would still keep kernel
> > mitigations on unless we really needed one off. Brendan, I know you were
> > looking at this differently though for ASI. What are your thoughts?
>
> [...]
>
> At first I wanted to say the same thing about your work to remove
> stuff from the direct map. Basically that's about architecting
> ourselves towards a world where the "guest->kernel" attack vector just
> isn't meaningful, right?

Right, that is definitely the goal.  The approach is like the one Microsoft
describes in the Secret-Free Hypervisor paper [1].

Call me belt-and-suspenders, but I prefer to leave mitigations in place
as well unless the performance is terrible. Like rappelling with a good
harness but still pad the fall zone.

Derek


[1] https://www.microsoft.com/en-us/research/uploads/prod/2022/07/sf-hypervisor.pdf

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 19/35] Documentation/x86: Document the new attack vector controls
  2024-11-14  9:32                   ` Brendan Jackman
@ 2024-11-22 16:15                     ` Manwaring, Derek
  2024-11-22 16:36                       ` Brendan Jackman
  0 siblings, 1 reply; 78+ messages in thread
From: Manwaring, Derek @ 2024-11-22 16:15 UTC (permalink / raw)
  To: jackmanb
  Cc: David.Kaplan, bp, canellac, dave.hansen, derekmn, hpa, jpoimboe,
	linux-kernel, mingo, mlipp, pawan.kumar.gupta, peterz, tglx, x86

On 2024-11-14 at 9:32+0000, Brendan Jackman wrote:
> On Wed, 13 Nov 2024 at 17:19, Brendan Jackman <jackmanb@google.com> wrote:
> >
> > On Wed, 13 Nov 2024 at 17:00, Kaplan, David <David.Kaplan@amd.com> wrote:
> > > I wonder what would happen if there was a mitigation that was required
> > > when switching to another guest, but not to the broader host address
> > > space.
> >
> > This is already the case for the mitigations that "go the other way":
> > IBPB protects the incoming domain from the outgoing one, but L1D flush
> > protects the outgoing from the incoming. So when you exit to the
> > unrestricted address space it never makes sense to flush L1D (everyone
> > trusts the kernel) but e.g. guest->guest still needs one.
>
> I'm straying quite far from the actual topic now but to avoid
> confusion for anyone reading later:
>
> A discussion off-list led me to realise that the specifics of this
> comment are nonsensical, I had L1TF in mind but I don't think you can
> exploit L1TF in a direct guest->guest attack (I'm probably still
> missing some nuance there). We wouldn't need to flush L1D there unless
> there's a new vuln.

With Foreshadow-VMM/CVE-2018-3646 I thought you could do guest->guest?
Since guest completely controls the physical address which ends up
probing L1D (as if it were a host physical address).

And agree with the flushes between different restricted address spaces
(e.g. context switch between guests, right?).

Derek

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 19/35] Documentation/x86: Document the new attack vector controls
  2024-11-22 16:15                     ` Manwaring, Derek
@ 2024-11-22 16:36                       ` Brendan Jackman
  2024-11-22 17:23                         ` Kaplan, David
  0 siblings, 1 reply; 78+ messages in thread
From: Brendan Jackman @ 2024-11-22 16:36 UTC (permalink / raw)
  To: Manwaring, Derek
  Cc: David.Kaplan, bp, canellac, dave.hansen, hpa, jpoimboe,
	linux-kernel, mingo, mlipp, pawan.kumar.gupta, peterz, tglx, x86

On Fri, 22 Nov 2024 at 17:15, Manwaring, Derek <derekmn@amazon.com> wrote:
>
> On 2024-11-14 at 9:32+0000, Brendan Jackman wrote:
> > On Wed, 13 Nov 2024 at 17:19, Brendan Jackman <jackmanb@google.com> wrote:

> > A discussion off-list led me to realise that the specifics of this
> > comment are nonsensical, I had L1TF in mind but I don't think you can
> > exploit L1TF in a direct guest->guest attack (I'm probably still
> > missing some nuance there). We wouldn't need to flush L1D there unless
> > there's a new vuln.
>
> With Foreshadow-VMM/CVE-2018-3646 I thought you can do guest->guest?
> Since guest completely controls the physical address which ends up
> probing L1D (as if it were a host physical address).

You are almost certainly right!

> And agree with the flushes between different restricted address spaces
> (e.g. context switch between guests, right?).

Yeah basically, although with the RFCv2 I'm gonna be proposing this
"tainting" model where instead of having to flush, in context switch
we just set a flag to say "another MM* might have left data in a
sidechannel". Then if we have that flag set on an asi_enter we flush
at that point.

*We could break that down further too, e.g. whether the thing that
left data behind was a VM guest or a userspace task, if that ever
influences what caches/buffers we wanna burn.

^ permalink raw reply	[flat|nested] 78+ messages in thread

* RE: [PATCH v2 19/35] Documentation/x86: Document the new attack vector controls
  2024-11-22 16:36                       ` Brendan Jackman
@ 2024-11-22 17:23                         ` Kaplan, David
  0 siblings, 0 replies; 78+ messages in thread
From: Kaplan, David @ 2024-11-22 17:23 UTC (permalink / raw)
  To: Brendan Jackman, Manwaring, Derek
  Cc: bp@alien8.de, canellac@amazon.at, dave.hansen@linux.intel.com,
	hpa@zytor.com, jpoimboe@kernel.org, linux-kernel@vger.kernel.org,
	mingo@redhat.com, mlipp@amazon.at,
	pawan.kumar.gupta@linux.intel.com, peterz@infradead.org,
	tglx@linutronix.de, x86@kernel.org

> -----Original Message-----
> From: Brendan Jackman <jackmanb@google.com>
> Sent: Friday, November 22, 2024 10:37 AM
> To: Manwaring, Derek <derekmn@amazon.com>
> Cc: Kaplan, David <David.Kaplan@amd.com>; bp@alien8.de;
> canellac@amazon.at; dave.hansen@linux.intel.com; hpa@zytor.com;
> jpoimboe@kernel.org; linux-kernel@vger.kernel.org; mingo@redhat.com;
> mlipp@amazon.at; pawan.kumar.gupta@linux.intel.com;
> peterz@infradead.org; tglx@linutronix.de; x86@kernel.org
> Subject: Re: [PATCH v2 19/35] Documentation/x86: Document the new attack
> vector controls
>
> On Fri, 22 Nov 2024 at 17:15, Manwaring, Derek <derekmn@amazon.com>
> wrote:
> >
> > On 2024-11-14 at 9:32+0000, Brendan Jackman wrote:
> > > On Wed, 13 Nov 2024 at 17:19, Brendan Jackman
> <jackmanb@google.com> wrote:
>
> > > A discussion off-list led me to realise that the specifics of this
> > > comment are nonsensical, I had L1TF in mind but I don't think you
> > > can exploit L1TF in a direct guest->guest attack (I'm probably still
> > > missing some nuance there). We wouldn't need to flush L1D there
> > > unless there's a new vuln.
> >
> > With Foreshadow-VMM/CVE-2018-3646 I thought you can do guest-
> >guest?
> > Since guest completely controls the physical address which ends up
> > probing L1D (as if it were a host physical address).
>
> You are almost certainly right!

Agreed, I will update my patches to fix this, so that the mitigation is applied if guest->guest protection is requested.

Thanks
--David Kaplan

>
> > And agree with the flushes between different restricted address spaces
> > (e.g. context switch between guests, right?).
>
> Yeah basically, although with the RFCv2 I'm gonna be proposing this
> "tainting" model where instead of having to flush, in context switch we just
> set a flag to say "another MM* might have left data in a sidechannel". Then if
> we have that flag set on an asi_enter we flush at that point.
>
> *We could break that down further too, e.g. whether the thing that left data
> behind was a VM guest or a userspace task, if that ever influences what
> caches/buffers we wanna burn.

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v2 04/35] x86/bugs: Restructure mds mitigation
  2024-11-14 15:01     ` Kaplan, David
@ 2024-12-10 15:24       ` Borislav Petkov
  0 siblings, 0 replies; 78+ messages in thread
From: Borislav Petkov @ 2024-12-10 15:24 UTC (permalink / raw)
  To: Kaplan, David
  Cc: Pawan Gupta, Thomas Gleixner, Peter Zijlstra, Josh Poimboeuf,
	Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
	linux-kernel@vger.kernel.org

On Thu, Nov 14, 2024 at 03:01:54PM +0000, Kaplan, David wrote:
> > > +/* Return TRUE if any VERW-based mitigation is enabled. */
> > > +static bool __init mitigate_any_verw(void)
> >
> > s/mitigate_any_verw/verw_enabled/ ?
> 
> Ok

Right, except "verw_enabled" asks whether VERW is enabled while what we want
to ask here is whether mitigation through VERW is enabled.

So verw_mitigation_enabled() perhaps?
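As a userspace sketch of the naming Boris proposes: the helper answers "is mitigation through VERW enabled for any bug?", not "is VERW enabled?". The enums and their initial values below are simplified stand-ins for the real per-bug state in bugs.c, not the actual kernel definitions.

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-ins for the per-bug mitigation state in bugs.c. */
enum mds_mitigations  { MDS_MITIGATION_OFF,  MDS_MITIGATION_FULL };
enum taa_mitigations  { TAA_MITIGATION_OFF,  TAA_MITIGATION_VERW };
enum mmio_mitigations { MMIO_MITIGATION_OFF, MMIO_MITIGATION_VERW };
enum rfds_mitigations { RFDS_MITIGATION_OFF, RFDS_MITIGATION_VERW };

enum mds_mitigations  mds_mitigation  = MDS_MITIGATION_FULL;
enum taa_mitigations  taa_mitigation  = TAA_MITIGATION_OFF;
enum mmio_mitigations mmio_mitigation = MMIO_MITIGATION_OFF;
enum rfds_mitigations rfds_mitigation = RFDS_MITIGATION_OFF;

/* Return true if mitigation through VERW is enabled for any of the bugs. */
bool verw_mitigation_enabled(void)
{
	return mds_mitigation  != MDS_MITIGATION_OFF  ||
	       taa_mitigation  == TAA_MITIGATION_VERW ||
	       mmio_mitigation == MMIO_MITIGATION_VERW ||
	       rfds_mitigation == RFDS_MITIGATION_VERW;
}
```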

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

* Re: [PATCH v2 11/35] x86/bugs: Restructure spectre_v1 mitigation
  2024-11-14 23:33             ` Josh Poimboeuf
@ 2024-12-12 10:41               ` Borislav Petkov
  0 siblings, 0 replies; 78+ messages in thread
From: Borislav Petkov @ 2024-12-12 10:41 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: Kaplan, David, Pawan Gupta, Thomas Gleixner, Peter Zijlstra,
	Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
	linux-kernel@vger.kernel.org

On Thu, Nov 14, 2024 at 03:33:39PM -0800, Josh Poimboeuf wrote:
> On Thu, Nov 14, 2024 at 04:45:37PM +0000, Kaplan, David wrote:
> > 1) the CPU is not vulnerable (it doesn't have the bug)
> > 2) the CPU is vulnerable but mitigations=off was passed
> > 3) the CPU is vulnerable but the bug-specific mitigation was disabled (e.g., retbleed=off)
> > 4) the CPU is vulnerable, mitigations were not disabled, but no mitigation is available (perhaps it wasn't compiled in)
> > 
> > We absolutely should not print a message in case #1, because the CPU isn't vulnerable.  And we should probably always print a message in case 4 to warn the user.  Question is really about cases 2 and 3.
> > 
> > Today, some bugs print a message saying the CPU is vulnerable in case 2 and 3 (e.g., gds)
> > Some bugs don't print a message in case 2, but do in case 3 (e.g., spectre_v1)
> > Some don't print a message in case 2 or 3 (e.g., retbleed)
> > 
> > Case 4 is things like where you need SRSO mitigation but CONFIG_MITIGATION_SRSO was disabled.
> > 
> > So which do we want?  It would be nice to be consistent and I can do that while reworking these functions.
> > 
> > If we're going to argue that command line options mean the user knows
> > what they're doing, that's probably an argument for saying do not
> > print anything in cases 2 and 3 (since both relate to explicit command
> > line options).  I'm not sure if it really makes sense to differentiate
> > these cases.
> 
> IMO, mitigations=off shouldn't show any bug-specific messages, as user
> doesn't care about the specifics, they just want everything off.
> 
> That said, they still might want to see some kind of "all mitigations
> disabled" message to indicate the option actually worked.
> 
> For similar reasons I'd argue the bug-specific toggle should show a
> bug-specific vulnerable message.

I guess that makes sense, and the bikeshed is already painted. :)

I mean, there's always

$ grep -r . /sys/devices/system/cpu/vulnerabilities/

so it's not like we don't have that info anywhere...
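The policy the thread converges on — silent for case 1, a single global notice for case 2 (handled elsewhere), and a bug-specific "Vulnerable" message for cases 3 and 4 — can be captured in an illustrative helper. `bug_state` and `should_print_vulnerable` are invented for this sketch and appear nowhere in the patch set.

```c
#include <assert.h>
#include <stdbool.h>

/* Classification of David's four cases, for illustration only. */
enum bug_state {
	BUG_NOT_VULNERABLE,	/* case 1: CPU doesn't have the bug */
	BUG_GLOBAL_OFF,		/* case 2: mitigations=off passed */
	BUG_SPECIFIC_OFF,	/* case 3: bug-specific toggle, e.g. retbleed=off */
	BUG_NO_MITIGATION,	/* case 4: mitigation not compiled in */
};

/*
 * Per the discussion: never print for a non-vulnerable CPU; leave
 * mitigations=off to one global "all mitigations disabled" message;
 * print a bug-specific message for the per-bug toggle and for a
 * missing (not compiled in) mitigation.
 */
bool should_print_vulnerable(enum bug_state s)
{
	switch (s) {
	case BUG_NOT_VULNERABLE:
	case BUG_GLOBAL_OFF:
		return false;
	case BUG_SPECIFIC_OFF:
	case BUG_NO_MITIGATION:
		return true;
	}
	return false;
}
```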

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

* Re: [PATCH v2 18/35] x86/bugs: Restructure srso mitigation
  2024-11-05 21:54 ` [PATCH v2 18/35] x86/bugs: Restructure srso mitigation David Kaplan
@ 2025-01-02 14:55   ` Borislav Petkov
  0 siblings, 0 replies; 78+ messages in thread
From: Borislav Petkov @ 2025-01-02 14:55 UTC (permalink / raw)
  To: David Kaplan
  Cc: Thomas Gleixner, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
	Ingo Molnar, Dave Hansen, x86, H . Peter Anvin, linux-kernel

On Tue, Nov 05, 2024 at 03:54:38PM -0600, David Kaplan wrote:
> @@ -2747,94 +2740,97 @@ static void __init srso_select_mitigation(void)
>  			setup_force_cpu_cap(X86_FEATURE_SRSO_NO);
>  			return;
>  		}
> -
> -		if (retbleed_mitigation == RETBLEED_MITIGATION_IBPB) {
> -			srso_mitigation = SRSO_MITIGATION_IBPB;
> -			goto out;
> -		}
>  	} else {
>  		pr_warn("IBPB-extending microcode not applied!\n");
>  		pr_warn(SRSO_NOTICE);
>  
> -		/* may be overwritten by SRSO_CMD_SAFE_RET below */
> -		srso_mitigation = SRSO_MITIGATION_UCODE_NEEDED;
> +		/* Fall-back to Safe-RET */
> +		srso_mitigation = SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED;
>  	}
>  
> -	switch (srso_cmd) {
> -	case SRSO_CMD_MICROCODE:
> -		if (has_microcode) {
> -			srso_mitigation = SRSO_MITIGATION_MICROCODE;
> -			pr_warn(SRSO_NOTICE);
> -		}
> +	switch (srso_mitigation) {
> +	case SRSO_MITIGATION_MICROCODE:
> +		pr_warn(SRSO_NOTICE);

We're already dumping this message above in the else-branch of
(has_microcode).

Btw, here's a refreshed diff to accommodate the SRSO updates I did recently.

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 396d7c75fc2d..39ea02983b9e 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -84,6 +84,8 @@ static void __init srbds_select_mitigation(void);
 static void __init srbds_apply_mitigation(void);
 static void __init l1d_flush_select_mitigation(void);
 static void __init srso_select_mitigation(void);
+static void __init srso_update_mitigation(void);
+static void __init srso_apply_mitigation(void);
 static void __init gds_select_mitigation(void);
 static void __init gds_apply_mitigation(void);
 static void __init bhi_select_mitigation(void);
@@ -200,11 +202,6 @@ void __init cpu_select_mitigations(void)
 	rfds_select_mitigation();
 	srbds_select_mitigation();
 	l1d_flush_select_mitigation();
-
-	/*
-	 * srso_select_mitigation() depends and must run after
-	 * retbleed_select_mitigation().
-	 */
 	srso_select_mitigation();
 	gds_select_mitigation();
 	bhi_select_mitigation();
@@ -220,6 +217,7 @@ void __init cpu_select_mitigations(void)
 	taa_update_mitigation();
 	mmio_update_mitigation();
 	rfds_update_mitigation();
+	srso_update_mitigation();
 
 	spectre_v1_apply_mitigation();
 	spectre_v2_apply_mitigation();
@@ -232,6 +230,7 @@ void __init cpu_select_mitigations(void)
 	mmio_apply_mitigation();
 	rfds_apply_mitigation();
 	srbds_apply_mitigation();
+	srso_apply_mitigation();
 	gds_apply_mitigation();
 	bhi_apply_mitigation();
 }
@@ -2658,6 +2657,7 @@ early_param("l1tf", l1tf_cmdline);
 
 enum srso_mitigation {
 	SRSO_MITIGATION_NONE,
+	SRSO_MITIGATION_AUTO,
 	SRSO_MITIGATION_UCODE_NEEDED,
 	SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED,
 	SRSO_MITIGATION_MICROCODE,
@@ -2666,14 +2666,6 @@ enum srso_mitigation {
 	SRSO_MITIGATION_IBPB_ON_VMEXIT,
 };
 
-enum srso_mitigation_cmd {
-	SRSO_CMD_OFF,
-	SRSO_CMD_MICROCODE,
-	SRSO_CMD_SAFE_RET,
-	SRSO_CMD_IBPB,
-	SRSO_CMD_IBPB_ON_VMEXIT,
-};
-
 static const char * const srso_strings[] = {
 	[SRSO_MITIGATION_NONE]			= "Vulnerable",
 	[SRSO_MITIGATION_UCODE_NEEDED]		= "Vulnerable: No microcode",
@@ -2684,8 +2676,7 @@ static const char * const srso_strings[] = {
 	[SRSO_MITIGATION_IBPB_ON_VMEXIT]	= "Mitigation: IBPB on VMEXIT only"
 };
 
-static enum srso_mitigation srso_mitigation __ro_after_init = SRSO_MITIGATION_NONE;
-static enum srso_mitigation_cmd srso_cmd __ro_after_init = SRSO_CMD_SAFE_RET;
+static enum srso_mitigation srso_mitigation __ro_after_init = SRSO_MITIGATION_AUTO;
 
 static int __init srso_parse_cmdline(char *str)
 {
@@ -2693,15 +2684,15 @@ static int __init srso_parse_cmdline(char *str)
 		return -EINVAL;
 
 	if (!strcmp(str, "off"))
-		srso_cmd = SRSO_CMD_OFF;
+		srso_mitigation = SRSO_MITIGATION_NONE;
 	else if (!strcmp(str, "microcode"))
-		srso_cmd = SRSO_CMD_MICROCODE;
+		srso_mitigation = SRSO_MITIGATION_MICROCODE;
 	else if (!strcmp(str, "safe-ret"))
-		srso_cmd = SRSO_CMD_SAFE_RET;
+		srso_mitigation = SRSO_MITIGATION_SAFE_RET;
 	else if (!strcmp(str, "ibpb"))
-		srso_cmd = SRSO_CMD_IBPB;
+		srso_mitigation = SRSO_MITIGATION_IBPB;
 	else if (!strcmp(str, "ibpb-vmexit"))
-		srso_cmd = SRSO_CMD_IBPB_ON_VMEXIT;
+		srso_mitigation = SRSO_MITIGATION_IBPB_ON_VMEXIT;
 	else
 		pr_err("Ignoring unknown SRSO option (%s).", str);
 
@@ -2715,13 +2706,15 @@ static void __init srso_select_mitigation(void)
 {
 	bool has_microcode = boot_cpu_has(X86_FEATURE_IBPB_BRTYPE);
 
-	if (!boot_cpu_has_bug(X86_BUG_SRSO) ||
-	    cpu_mitigations_off() ||
-	    srso_cmd == SRSO_CMD_OFF) {
-		if (boot_cpu_has(X86_FEATURE_SBPB))
-			x86_pred_cmd = PRED_CMD_SBPB;
+	if (!boot_cpu_has_bug(X86_BUG_SRSO) || cpu_mitigations_off())
+		srso_mitigation = SRSO_MITIGATION_NONE;
+
+	if (srso_mitigation == SRSO_MITIGATION_NONE)
 		return;
-	}
+
+	/* Default mitigation */
+	if (srso_mitigation == SRSO_MITIGATION_AUTO)
+		srso_mitigation = SRSO_MITIGATION_SAFE_RET;
 
 	if (has_microcode) {
 		/*
@@ -2734,98 +2727,104 @@ static void __init srso_select_mitigation(void)
 			setup_force_cpu_cap(X86_FEATURE_SRSO_NO);
 			return;
 		}
-
-		if (retbleed_mitigation == RETBLEED_MITIGATION_IBPB) {
-			srso_mitigation = SRSO_MITIGATION_IBPB;
-			goto out;
-		}
 	} else {
 		pr_warn("IBPB-extending microcode not applied!\n");
 		pr_warn(SRSO_NOTICE);
 
-		/* may be overwritten by SRSO_CMD_SAFE_RET below */
-		srso_mitigation = SRSO_MITIGATION_UCODE_NEEDED;
+		/* Fall-back to Safe-RET */
+		srso_mitigation = SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED;
 	}
 
-	switch (srso_cmd) {
-	case SRSO_CMD_MICROCODE:
-		if (has_microcode) {
-			srso_mitigation = SRSO_MITIGATION_MICROCODE;
-			pr_warn(SRSO_NOTICE);
-		}
+	switch (srso_mitigation) {
+	case SRSO_MITIGATION_MICROCODE:
 		break;
 
-	case SRSO_CMD_SAFE_RET:
+	case SRSO_MITIGATION_SAFE_RET:
+	case SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED:
 		if (boot_cpu_has(X86_FEATURE_SRSO_USER_KERNEL_NO))
 			goto ibpb_on_vmexit;
 
-		if (IS_ENABLED(CONFIG_MITIGATION_SRSO)) {
-			/*
-			 * Enable the return thunk for generated code
-			 * like ftrace, static_call, etc.
-			 */
-			setup_force_cpu_cap(X86_FEATURE_RETHUNK);
-			setup_force_cpu_cap(X86_FEATURE_UNRET);
-
-			if (boot_cpu_data.x86 == 0x19) {
-				setup_force_cpu_cap(X86_FEATURE_SRSO_ALIAS);
-				x86_return_thunk = srso_alias_return_thunk;
-			} else {
-				setup_force_cpu_cap(X86_FEATURE_SRSO);
-				x86_return_thunk = srso_return_thunk;
-			}
-			if (has_microcode)
-				srso_mitigation = SRSO_MITIGATION_SAFE_RET;
-			else
-				srso_mitigation = SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED;
-		} else {
+		if (!IS_ENABLED(CONFIG_MITIGATION_SRSO))
 			pr_err("WARNING: kernel not compiled with MITIGATION_SRSO.\n");
-		}
 		break;
 
-	case SRSO_CMD_IBPB:
-		if (IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY)) {
-			if (has_microcode) {
-				setup_force_cpu_cap(X86_FEATURE_ENTRY_IBPB);
-				srso_mitigation = SRSO_MITIGATION_IBPB;
-
-				/*
-				 * IBPB on entry already obviates the need for
-				 * software-based untraining so clear those in case some
-				 * other mitigation like Retbleed has selected them.
-				 */
-				setup_clear_cpu_cap(X86_FEATURE_UNRET);
-				setup_clear_cpu_cap(X86_FEATURE_RETHUNK);
-			}
-		} else {
+	case SRSO_MITIGATION_IBPB:
+		if (!IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY))
 			pr_err("WARNING: kernel not compiled with MITIGATION_IBPB_ENTRY.\n");
-		}
 		break;
 
 ibpb_on_vmexit:
-	case SRSO_CMD_IBPB_ON_VMEXIT:
-		if (IS_ENABLED(CONFIG_MITIGATION_SRSO)) {
-			if (!boot_cpu_has(X86_FEATURE_ENTRY_IBPB) && has_microcode) {
-				setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT);
-				srso_mitigation = SRSO_MITIGATION_IBPB_ON_VMEXIT;
-
-				/*
-				 * There is no need for RSB filling: entry_ibpb() ensures
-				 * all predictions, including the RSB, are invalidated,
-				 * regardless of IBPB implementation.
-				 */
-				setup_clear_cpu_cap(X86_FEATURE_RSB_VMEXIT);
-			}
-		} else {
+	case SRSO_MITIGATION_IBPB_ON_VMEXIT:
+		if (!IS_ENABLED(CONFIG_MITIGATION_SRSO))
 			pr_err("WARNING: kernel not compiled with MITIGATION_SRSO.\n");
-                }
 		break;
 	default:
 		break;
 	}
+}
+
+static void __init srso_update_mitigation(void)
+{
+	/* If retbleed is using IBPB, that works for SRSO as well */
+	if (retbleed_mitigation == RETBLEED_MITIGATION_IBPB)
+		srso_mitigation = SRSO_MITIGATION_IBPB;
+
+	if (srso_mitigation != SRSO_MITIGATION_NONE)
+		pr_info("%s\n", srso_strings[srso_mitigation]);
+}
+
+static void __init srso_apply_mitigation(void)
+{
+	if (srso_mitigation == SRSO_MITIGATION_NONE) {
+		if (boot_cpu_has(X86_FEATURE_SBPB))
+			x86_pred_cmd = PRED_CMD_SBPB;
+		return;
+	}
+
+	switch (srso_mitigation) {
+	case SRSO_MITIGATION_SAFE_RET:
+	case SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED:
+		/*
+		 * Enable the return thunk for generated code
+		 * like ftrace, static_call, etc.
+		 */
+		setup_force_cpu_cap(X86_FEATURE_RETHUNK);
+		setup_force_cpu_cap(X86_FEATURE_UNRET);
+
+		if (boot_cpu_data.x86 == 0x19) {
+			setup_force_cpu_cap(X86_FEATURE_SRSO_ALIAS);
+			x86_return_thunk = srso_alias_return_thunk;
+		} else {
+			setup_force_cpu_cap(X86_FEATURE_SRSO);
+			x86_return_thunk = srso_return_thunk;
+		}
+		break;
+
+	case SRSO_MITIGATION_IBPB:
+		setup_force_cpu_cap(X86_FEATURE_ENTRY_IBPB);
+		/*
+		 * IBPB on entry already obviates the need for
+		 * software-based untraining so clear those in case some
+		 * other mitigation like Retbleed has selected them.
+		 */
+		setup_clear_cpu_cap(X86_FEATURE_UNRET);
+		setup_clear_cpu_cap(X86_FEATURE_RETHUNK);
+		break;
+
+	case SRSO_MITIGATION_IBPB_ON_VMEXIT:
+		setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT);
+		/*
+		 * There is no need for RSB filling: entry_ibpb() ensures
+		 * all predictions, including the RSB, are invalidated,
+		 * regardless of IBPB implementation.
+		 */
+		setup_clear_cpu_cap(X86_FEATURE_RSB_VMEXIT);
+		break;
+
+	default:
+		break;
+	}
 
-out:
-	pr_info("%s\n", srso_strings[srso_mitigation]);
 }
 
 #undef pr_fmt


-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

* Re: [PATCH v2 20/35] x86/bugs: Define attack vectors
  2024-11-05 21:54 ` [PATCH v2 20/35] x86/bugs: Define attack vectors David Kaplan
@ 2025-01-03 15:19   ` Borislav Petkov
  2025-01-03 15:29     ` Kaplan, David
  0 siblings, 1 reply; 78+ messages in thread
From: Borislav Petkov @ 2025-01-03 15:19 UTC (permalink / raw)
  To: David Kaplan
  Cc: Thomas Gleixner, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
	Ingo Molnar, Dave Hansen, x86, H . Peter Anvin, linux-kernel

On Tue, Nov 05, 2024 at 03:54:40PM -0600, David Kaplan wrote:
> diff --git a/kernel/cpu.c b/kernel/cpu.c
> index d0699e47178b..841bcffee5d3 100644
> --- a/kernel/cpu.c
> +++ b/kernel/cpu.c

I faintly remember talking about this but shouldn't we make those vectors
x86-only for now?

No need to have them in generic code as other arches will need to get enabled
first anyway so they can lift them here when they need them.

> @@ -3200,6 +3200,22 @@ enum cpu_mitigations {
>  
>  static enum cpu_mitigations cpu_mitigations __ro_after_init = CPU_MITIGATIONS_AUTO;
>  
> +/*
> + * All except the cross-thread attack vector are mitigated by default.
> + * Cross-thread mitigation often requires disabling SMT which is too expensive
> + * to be enabled by default.
> + *
> + * Guest-to-Host and Guest-to-Guest vectors are only needed if KVM support is
> + * present.
> + */
> +static bool cpu_mitigate_attack_vectors[NR_CPU_ATTACK_VECTORS] __ro_after_init = {
> +	[CPU_MITIGATE_USER_KERNEL] = true,
> +	[CPU_MITIGATE_USER_USER] = true,
> +	[CPU_MITIGATE_GUEST_HOST] = IS_ENABLED(CONFIG_KVM),
> +	[CPU_MITIGATE_GUEST_GUEST] = IS_ENABLED(CONFIG_KVM),
> +	[CPU_MITIGATE_CROSS_THREAD] = false
> +};
> +
>  static int __init mitigations_parse_cmdline(char *arg)
>  {
>  	if (!strcmp(arg, "off"))
> @@ -3228,11 +3244,53 @@ bool cpu_mitigations_auto_nosmt(void)
>  	return cpu_mitigations == CPU_MITIGATIONS_AUTO_NOSMT;
>  }
>  EXPORT_SYMBOL_GPL(cpu_mitigations_auto_nosmt);
> +
> +#define DEFINE_ATTACK_VECTOR(opt, v) \
> +static int __init v##_parse_cmdline(char *arg) \
> +{ \
> +	if (!strcmp(arg, "off")) \
> +		cpu_mitigate_attack_vectors[v] = false; \
> +	else if (!strcmp(arg, "on")) \
> +		cpu_mitigate_attack_vectors[v] = true; \
> +	else \
> +		pr_warn("Unsupported " opt "=%s\n", arg); \
> +	return 0; \
> +} \
> +early_param(opt, v##_parse_cmdline)
> +
> +bool cpu_mitigate_attack_vector(enum cpu_attack_vectors v)
> +{
> +	BUG_ON(v >= NR_CPU_ATTACK_VECTORS);

Yeah, we don't love BUG* at all if it can be helped without it. And here it
can. You can simply return false for out-of-range vector and WARN_ON_ONCE.

Thx.
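A compilable userspace approximation of that suggestion might look like the sketch below. Here `warn_on_once()` merely stands in for the kernel's WARN_ON_ONCE() macro, and the two guest-vector defaults are hard-coded to true rather than IS_ENABLED(CONFIG_KVM).

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

enum cpu_attack_vectors {
	CPU_MITIGATE_USER_KERNEL,
	CPU_MITIGATE_USER_USER,
	CPU_MITIGATE_GUEST_HOST,
	CPU_MITIGATE_GUEST_GUEST,
	CPU_MITIGATE_CROSS_THREAD,
	NR_CPU_ATTACK_VECTORS
};

static bool cpu_mitigate_attack_vectors[NR_CPU_ATTACK_VECTORS] = {
	[CPU_MITIGATE_USER_KERNEL] = true,
	[CPU_MITIGATE_USER_USER] = true,
	[CPU_MITIGATE_GUEST_HOST] = true,	/* IS_ENABLED(CONFIG_KVM) in the patch */
	[CPU_MITIGATE_GUEST_GUEST] = true,	/* likewise */
	[CPU_MITIGATE_CROSS_THREAD] = false,
};

static bool warned;

/* Userspace stand-in for WARN_ON_ONCE(): complain once, return the condition. */
static bool warn_on_once(bool cond)
{
	if (cond && !warned) {
		warned = true;
		fprintf(stderr, "WARNING: attack vector out of range\n");
	}
	return cond;
}

/* An out-of-range vector warns and reads as "not mitigated" instead of BUG(). */
bool cpu_mitigate_attack_vector(enum cpu_attack_vectors v)
{
	if (warn_on_once(v >= NR_CPU_ATTACK_VECTORS))
		return false;
	return cpu_mitigate_attack_vectors[v];
}
```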

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

* RE: [PATCH v2 20/35] x86/bugs: Define attack vectors
  2025-01-03 15:19   ` Borislav Petkov
@ 2025-01-03 15:29     ` Kaplan, David
  2025-01-03 15:51       ` Borislav Petkov
  0 siblings, 1 reply; 78+ messages in thread
From: Kaplan, David @ 2025-01-03 15:29 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Thomas Gleixner, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
	Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
	linux-kernel@vger.kernel.org

> -----Original Message-----
> From: Borislav Petkov <bp@alien8.de>
> Sent: Friday, January 3, 2025 9:19 AM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Peter Zijlstra <peterz@infradead.org>;
> Josh Poimboeuf <jpoimboe@kernel.org>; Pawan Gupta
> <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar <mingo@redhat.com>; Dave
> Hansen <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v2 20/35] x86/bugs: Define attack vectors
>
> On Tue, Nov 05, 2024 at 03:54:40PM -0600, David Kaplan wrote:
> > diff --git a/kernel/cpu.c b/kernel/cpu.c
> > index d0699e47178b..841bcffee5d3 100644
> > --- a/kernel/cpu.c
> > +++ b/kernel/cpu.c
>
> I faintly remember talking about this but shouldn't we make those vectors x86-only
> for now?
>
> No need to have them in generic code as other arches will need to get enabled first
> anyway so they can lift them here when they need them.

The concept of attack vectors is generic (like how mitigations=off is generic), while the bugs involved are arch-specific.  Other architectures support speculative mitigations too (for many of the same bugs), but I'm not enough of an expert in those architectures personally to implement/document attack vector controls for them.  It shouldn't be too hard though for someone who knows them better.

Imo, keeping them in generic code is more forward-looking and prevents the next developer from having to move them back here once another architecture implements them.  But I can move them to bugs.c if that is the preference...

>
> > @@ -3200,6 +3200,22 @@ enum cpu_mitigations {
> >
> >  static enum cpu_mitigations cpu_mitigations __ro_after_init = CPU_MITIGATIONS_AUTO;
> >
> > +/*
> > + * All except the cross-thread attack vector are mitigated by default.
> > + * Cross-thread mitigation often requires disabling SMT which is too expensive
> > + * to be enabled by default.
> > + *
> > + * Guest-to-Host and Guest-to-Guest vectors are only needed if KVM support is
> > + * present.
> > + */
> > +static bool cpu_mitigate_attack_vectors[NR_CPU_ATTACK_VECTORS] __ro_after_init = {
> > +	[CPU_MITIGATE_USER_KERNEL] = true,
> > +	[CPU_MITIGATE_USER_USER] = true,
> > +	[CPU_MITIGATE_GUEST_HOST] = IS_ENABLED(CONFIG_KVM),
> > +	[CPU_MITIGATE_GUEST_GUEST] = IS_ENABLED(CONFIG_KVM),
> > +	[CPU_MITIGATE_CROSS_THREAD] = false
> > +};
> > +
> >  static int __init mitigations_parse_cmdline(char *arg)
> >  {
> >  	if (!strcmp(arg, "off"))
> > @@ -3228,11 +3244,53 @@ bool cpu_mitigations_auto_nosmt(void)
> >  	return cpu_mitigations == CPU_MITIGATIONS_AUTO_NOSMT;
> >  }
> >  EXPORT_SYMBOL_GPL(cpu_mitigations_auto_nosmt);
> > +
> > +#define DEFINE_ATTACK_VECTOR(opt, v) \
> > +static int __init v##_parse_cmdline(char *arg) \
> > +{ \
> > +	if (!strcmp(arg, "off")) \
> > +		cpu_mitigate_attack_vectors[v] = false; \
> > +	else if (!strcmp(arg, "on")) \
> > +		cpu_mitigate_attack_vectors[v] = true; \
> > +	else \
> > +		pr_warn("Unsupported " opt "=%s\n", arg); \
> > +	return 0; \
> > +} \
> > +early_param(opt, v##_parse_cmdline)
> > +
> > +bool cpu_mitigate_attack_vector(enum cpu_attack_vectors v)
> > +{
> > +	BUG_ON(v >= NR_CPU_ATTACK_VECTORS);
>
> Yeah, we don't love BUG* at all if it can be helped without it. And here it can. You
> can simply return false for out-of-range vector and WARN_ON_ONCE.
>

Ack.

--David Kaplan

* Re: [PATCH v2 20/35] x86/bugs: Define attack vectors
  2025-01-03 15:29     ` Kaplan, David
@ 2025-01-03 15:51       ` Borislav Petkov
  0 siblings, 0 replies; 78+ messages in thread
From: Borislav Petkov @ 2025-01-03 15:51 UTC (permalink / raw)
  To: Kaplan, David
  Cc: Thomas Gleixner, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
	Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
	linux-kernel@vger.kernel.org

On Fri, Jan 03, 2025 at 03:29:03PM +0000, Kaplan, David wrote:
> The concept of attack vectors are generic (like how mitigations=off is
> generic), while the bugs involved are arch-specific.  Other architectures
> support speculative mitigations too (for many of the same bugs), but I'm not
> enough of an expert in those architectures personally to implement/document
> attack vector controls for them.  It shouldn't be too hard though for
> someone who knows them better.
> 
> Imo, keeping them in generic code is more forward-looking and prevents the
> next developer from having to move them back here once another architecture
> implements them.  But I can move them to bugs.c if that is the preference...

Right, the intent is for other arches to move them themselves *only* when they
wanna use them. Otherwise, this could remain in generic code and if other arches
don't, then it'll be at the wrong place.

So I'd like for them to do that explicitly and not someone else to start the
work with the hope that others will take it up.

And this is the usual process anyway when other arches want to reuse x86 code
- stuff gets moved to arch-agnostic place first and then shared.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

end of thread, other threads:[~2025-01-03 15:52 UTC | newest]

Thread overview: 78+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-11-05 21:54 [PATCH v2 00/35] x86/bugs: Attack vector controls David Kaplan
2024-11-05 21:54 ` [PATCH v2 01/35] x86/bugs: Add X86_BUG_SPECTRE_V2_USER David Kaplan
2024-11-05 21:54 ` [PATCH v2 02/35] x86/bugs: Relocate mds/taa/mmio/rfds defines David Kaplan
2024-11-05 21:54 ` [PATCH v2 03/35] x86/bugs: Add AUTO mitigations for mds/taa/mmio/rfds David Kaplan
2024-11-14  2:26   ` Pawan Gupta
2024-11-14 14:59     ` Kaplan, David
2024-11-14 17:14       ` Pawan Gupta
2024-11-14 17:17         ` Kaplan, David
2024-11-05 21:54 ` [PATCH v2 04/35] x86/bugs: Restructure mds mitigation David Kaplan
2024-11-14  3:03   ` Pawan Gupta
2024-11-14 15:01     ` Kaplan, David
2024-12-10 15:24       ` Borislav Petkov
2024-11-05 21:54 ` [PATCH v2 05/35] x86/bugs: Restructure taa mitigation David Kaplan
2024-11-14  4:43   ` Pawan Gupta
2024-11-14 15:08     ` Kaplan, David
2024-11-05 21:54 ` [PATCH v2 06/35] x86/bugs: Restructure mmio mitigation David Kaplan
2024-11-14  5:03   ` Pawan Gupta
2024-11-05 21:54 ` [PATCH v2 07/35] x86/bugs: Restructure rfds mitigation David Kaplan
2024-11-14  5:55   ` Pawan Gupta
2024-11-05 21:54 ` [PATCH v2 08/35] x86/bugs: Remove md_clear_*_mitigation() David Kaplan
2024-11-05 21:54 ` [PATCH v2 09/35] x86/bugs: Restructure srbds mitigation David Kaplan
2024-11-05 21:54 ` [PATCH v2 10/35] x86/bugs: Restructure gds mitigation David Kaplan
2024-11-14  6:21   ` Pawan Gupta
2024-11-05 21:54 ` [PATCH v2 11/35] x86/bugs: Restructure spectre_v1 mitigation David Kaplan
2024-11-14  6:57   ` Pawan Gupta
2024-11-14 15:36     ` Kaplan, David
2024-11-14 15:49       ` Kaplan, David
2024-11-14 16:19         ` Borislav Petkov
2024-11-14 16:45           ` Kaplan, David
2024-11-14 23:33             ` Josh Poimboeuf
2024-12-12 10:41               ` Borislav Petkov
2024-11-14 17:41       ` Pawan Gupta
2024-11-14 17:48         ` Kaplan, David
2024-11-05 21:54 ` [PATCH v2 12/35] x86/bugs: Restructure retbleed mitigation David Kaplan
2024-11-05 21:54 ` [PATCH v2 13/35] x86/bugs: Restructure spectre_v2_user mitigation David Kaplan
2024-11-06 18:56   ` kernel test robot
2024-11-05 21:54 ` [PATCH v2 14/35] x86/bugs: Restructure bhi mitigation David Kaplan
2024-11-05 21:54 ` [PATCH v2 15/35] x86/bugs: Restructure spectre_v2 mitigation David Kaplan
2024-11-05 21:54 ` [PATCH v2 16/35] x86/bugs: Restructure ssb mitigation David Kaplan
2024-11-05 21:54 ` [PATCH v2 17/35] x86/bugs: Restructure l1tf mitigation David Kaplan
2024-11-05 21:54 ` [PATCH v2 18/35] x86/bugs: Restructure srso mitigation David Kaplan
2025-01-02 14:55   ` Borislav Petkov
2024-11-05 21:54 ` [PATCH v2 19/35] Documentation/x86: Document the new attack vector controls David Kaplan
2024-11-06 10:39   ` Borislav Petkov
2024-11-06 14:49     ` Kaplan, David
2024-11-13  3:58       ` Manwaring, Derek
2024-11-13 14:15         ` Brendan Jackman
2024-11-13 15:05           ` Kaplan, David
2024-11-13 15:31             ` Brendan Jackman
2024-11-13 16:00               ` Kaplan, David
2024-11-13 16:19                 ` Brendan Jackman
2024-11-14  9:32                   ` Brendan Jackman
2024-11-22 16:15                     ` Manwaring, Derek
2024-11-22 16:36                       ` Brendan Jackman
2024-11-22 17:23                         ` Kaplan, David
2024-11-20  0:14           ` Manwaring, Derek
2024-11-13 14:49         ` Kaplan, David
2024-11-13 14:15   ` Brendan Jackman
2024-11-13 15:42     ` Kaplan, David
2024-11-05 21:54 ` [PATCH v2 20/35] x86/bugs: Define attack vectors David Kaplan
2025-01-03 15:19   ` Borislav Petkov
2025-01-03 15:29     ` Kaplan, David
2025-01-03 15:51       ` Borislav Petkov
2024-11-05 21:54 ` [PATCH v2 21/35] x86/bugs: Determine relevant vulnerabilities based on attack vector controls David Kaplan
2024-11-05 21:54 ` [PATCH v2 22/35] x86/bugs: Add attack vector controls for mds David Kaplan
2024-11-05 21:54 ` [PATCH v2 23/35] x86/bugs: Add attack vector controls for taa David Kaplan
2024-11-05 21:54 ` [PATCH v2 24/35] x86/bugs: Add attack vector controls for mmio David Kaplan
2024-11-05 21:54 ` [PATCH v2 25/35] x86/bugs: Add attack vector controls for rfds David Kaplan
2024-11-05 21:54 ` [PATCH v2 26/35] x86/bugs: Add attack vector controls for srbds David Kaplan
2024-11-05 21:54 ` [PATCH v2 27/35] x86/bugs: Add attack vector controls for gds David Kaplan
2024-11-05 21:54 ` [PATCH v2 28/35] x86/bugs: Add attack vector controls for spectre_v1 David Kaplan
2024-11-05 21:54 ` [PATCH v2 29/35] x86/bugs: Add attack vector controls for retbleed David Kaplan
2024-11-05 21:54 ` [PATCH v2 30/35] x86/bugs: Add attack vector controls for spectre_v2_user David Kaplan
2024-11-05 21:54 ` [PATCH v2 31/35] x86/bugs: Add attack vector controls for bhi David Kaplan
2024-11-05 21:54 ` [PATCH v2 32/35] x86/bugs: Add attack vector controls for spectre_v2 David Kaplan
2024-11-05 21:54 ` [PATCH v2 33/35] x86/bugs: Add attack vector controls for l1tf David Kaplan
2024-11-05 21:54 ` [PATCH v2 34/35] x86/bugs: Add attack vector controls for srso David Kaplan
2024-11-05 21:54 ` [PATCH v2 35/35] x86/pti: Add attack vector controls for pti David Kaplan

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox