public inbox for linux-kernel@vger.kernel.org
 help / color / mirror / Atom feed
* [RFC PATCH 00/34] x86/bugs: Attack vector controls
@ 2024-09-12 19:08 David Kaplan
  2024-09-12 19:08 ` [RFC PATCH 01/34] x86/bugs: Relocate mds/taa/mmio/rfds defines David Kaplan
                   ` (34 more replies)
  0 siblings, 35 replies; 63+ messages in thread
From: David Kaplan @ 2024-09-12 19:08 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

This RFC restructures arch/x86/kernel/cpu/bugs.c and proposes new
command line options to make it easier to control which CPU mitigations
are applied.  These options select relevant mitigations based on chosen
attack vectors, which are hopefully easier for users to understand.

This patch series will also be part of a discussion at the LPC x86
microconference next week.

There are two parts to this patch series:

The first 17 patches restructure the existing mitigation selection logic
to use a uniform set of functions.  First, the "select" function is
called for each mitigation to select an appropriate mitigation.  Unless
a mitigation is explicitly selected or disabled with a command line
option, the default mitigation is AUTO and the "select" function will
then choose the best mitigation.  After the "select" function is called
for each mitigation, some mitigations define an "update" function which
can be used to update the selection, based on the choices made by other
mitigations.  Finally, the "apply" function is called which enables the
chosen mitigation.

This structure simplifies the mitigation control logic, especially when
there are dependencies between multiple vulnerabilities.  It also
prepares the code for the second set of patches.

The rest of the patches define new "attack vector" command line options
to make it easier to select appropriate mitigations based on the usage
of the system.  While many users may not be intimately familiar with the
details of these CPU vulnerabilities, they are likely better able to
understand the intended usage of their system.  As a result, unneeded
mitigations may be disabled, allowing users to recoup more performance.
New documentation is included with recommendations on what to consider
when choosing which attack vectors to enable/disable.

Note that this patch series does not change any of the existing
mitigation defaults.

David Kaplan (34):
  x86/bugs: Relocate mds/taa/mmio/rfds defines
  x86/bugs: Add AUTO mitigations for mds/taa/mmio/rfds
  x86/bugs: Restructure mds mitigation
  x86/bugs: Restructure taa mitigation
  x86/bugs: Restructure mmio mitigation
  x86/bugs: Restructure rfds mitigation
  x86/bugs: Remove md_clear_*_mitigation()
  x86/bugs: Restructure srbds mitigation
  x86/bugs: Restructure gds mitigation
  x86/bugs: Restructure spectre_v1 mitigation
  x86/bugs: Restructure retbleed mitigation
  x86/bugs: Restructure spectre_v2_user mitigation
  x86/bugs: Restructure bhi mitigation
  x86/bugs: Restructure spectre_v2 mitigation
  x86/bugs: Restructure ssb mitigation
  x86/bugs: Restructure l1tf mitigation
  x86/bugs: Restructure srso mitigation
  Documentation/x86: Document the new attack vector controls
  x86/bugs: Define attack vectors
  x86/bugs: Determine relevant vulnerabilities based on attack vector
    controls.
  x86/bugs: Add attack vector controls for mds
  x86/bugs: Add attack vector controls for taa
  x86/bugs: Add attack vector controls for mmio
  x86/bugs: Add attack vector controls for rfds
  x86/bugs: Add attack vector controls for srbds
  x86/bugs: Add attack vector controls for gds
  x86/bugs: Add attack vector controls for spectre_v1
  x86/bugs: Add attack vector controls for retbleed
  x86/bugs: Add attack vector controls for spectre_v2_user
  x86/bugs: Add attack vector controls for bhi
  x86/bugs: Add attack vector controls for spectre_v2
  x86/bugs: Add attack vector controls for l1tf
  x86/bugs: Add attack vector controls for srso
  x86/pti: Add attack vector controls for pti

 .../hw-vuln/attack_vector_controls.rst        |  172 +++
 Documentation/admin-guide/hw-vuln/index.rst   |    1 +
 arch/x86/include/asm/processor.h              |    2 +
 arch/x86/kernel/cpu/bugs.c                    | 1171 ++++++++++-------
 arch/x86/mm/pti.c                             |    3 +-
 include/linux/cpu.h                           |   11 +
 kernel/cpu.c                                  |   58 +
 7 files changed, 977 insertions(+), 441 deletions(-)
 create mode 100644 Documentation/admin-guide/hw-vuln/attack_vector_controls.rst

-- 
2.34.1


^ permalink raw reply	[flat|nested] 63+ messages in thread

* [RFC PATCH 01/34] x86/bugs: Relocate mds/taa/mmio/rfds defines
  2024-09-12 19:08 [RFC PATCH 00/34] x86/bugs: Attack vector controls David Kaplan
@ 2024-09-12 19:08 ` David Kaplan
  2024-10-24 13:07   ` Borislav Petkov
  2024-09-12 19:08 ` [RFC PATCH 02/34] x86/bugs: Add AUTO mitigations for mds/taa/mmio/rfds David Kaplan
                   ` (33 subsequent siblings)
  34 siblings, 1 reply; 63+ messages in thread
From: David Kaplan @ 2024-09-12 19:08 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Move the mds, taa, mmio, and rfds mitigation enums earlier in the file
to prepare for restructuring of these mitigations as they are all
inter-related.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 60 ++++++++++++++++++++------------------
 1 file changed, 31 insertions(+), 29 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index d1915427b4ff..ee89e6676107 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -243,6 +243,37 @@ static const char * const mds_strings[] = {
 	[MDS_MITIGATION_VMWERV]	= "Vulnerable: Clear CPU buffers attempted, no microcode",
 };
 
+enum taa_mitigations {
+	TAA_MITIGATION_OFF,
+	TAA_MITIGATION_UCODE_NEEDED,
+	TAA_MITIGATION_VERW,
+	TAA_MITIGATION_TSX_DISABLED,
+};
+
+/* Default mitigation for TAA-affected CPUs */
+static enum taa_mitigations taa_mitigation __ro_after_init =
+	IS_ENABLED(CONFIG_MITIGATION_TAA) ? TAA_MITIGATION_VERW : TAA_MITIGATION_OFF;
+
+enum mmio_mitigations {
+	MMIO_MITIGATION_OFF,
+	MMIO_MITIGATION_UCODE_NEEDED,
+	MMIO_MITIGATION_VERW,
+};
+
+/* Default mitigation for Processor MMIO Stale Data vulnerabilities */
+static enum mmio_mitigations mmio_mitigation __ro_after_init =
+	IS_ENABLED(CONFIG_MITIGATION_MMIO_STALE_DATA) ? MMIO_MITIGATION_VERW : MMIO_MITIGATION_OFF;
+
+enum rfds_mitigations {
+	RFDS_MITIGATION_OFF,
+	RFDS_MITIGATION_VERW,
+	RFDS_MITIGATION_UCODE_NEEDED,
+};
+
+/* Default mitigation for Register File Data Sampling */
+static enum rfds_mitigations rfds_mitigation __ro_after_init =
+	IS_ENABLED(CONFIG_MITIGATION_RFDS) ? RFDS_MITIGATION_VERW : RFDS_MITIGATION_OFF;
+
 static void __init mds_select_mitigation(void)
 {
 	if (!boot_cpu_has_bug(X86_BUG_MDS) || cpu_mitigations_off()) {
@@ -286,16 +317,6 @@ early_param("mds", mds_cmdline);
 #undef pr_fmt
 #define pr_fmt(fmt)	"TAA: " fmt
 
-enum taa_mitigations {
-	TAA_MITIGATION_OFF,
-	TAA_MITIGATION_UCODE_NEEDED,
-	TAA_MITIGATION_VERW,
-	TAA_MITIGATION_TSX_DISABLED,
-};
-
-/* Default mitigation for TAA-affected CPUs */
-static enum taa_mitigations taa_mitigation __ro_after_init =
-	IS_ENABLED(CONFIG_MITIGATION_TAA) ? TAA_MITIGATION_VERW : TAA_MITIGATION_OFF;
 static bool taa_nosmt __ro_after_init;
 
 static const char * const taa_strings[] = {
@@ -386,15 +407,6 @@ early_param("tsx_async_abort", tsx_async_abort_parse_cmdline);
 #undef pr_fmt
 #define pr_fmt(fmt)	"MMIO Stale Data: " fmt
 
-enum mmio_mitigations {
-	MMIO_MITIGATION_OFF,
-	MMIO_MITIGATION_UCODE_NEEDED,
-	MMIO_MITIGATION_VERW,
-};
-
-/* Default mitigation for Processor MMIO Stale Data vulnerabilities */
-static enum mmio_mitigations mmio_mitigation __ro_after_init =
-	IS_ENABLED(CONFIG_MITIGATION_MMIO_STALE_DATA) ? MMIO_MITIGATION_VERW : MMIO_MITIGATION_OFF;
 static bool mmio_nosmt __ro_after_init = false;
 
 static const char * const mmio_strings[] = {
@@ -483,16 +495,6 @@ early_param("mmio_stale_data", mmio_stale_data_parse_cmdline);
 #undef pr_fmt
 #define pr_fmt(fmt)	"Register File Data Sampling: " fmt
 
-enum rfds_mitigations {
-	RFDS_MITIGATION_OFF,
-	RFDS_MITIGATION_VERW,
-	RFDS_MITIGATION_UCODE_NEEDED,
-};
-
-/* Default mitigation for Register File Data Sampling */
-static enum rfds_mitigations rfds_mitigation __ro_after_init =
-	IS_ENABLED(CONFIG_MITIGATION_RFDS) ? RFDS_MITIGATION_VERW : RFDS_MITIGATION_OFF;
-
 static const char * const rfds_strings[] = {
 	[RFDS_MITIGATION_OFF]			= "Vulnerable",
 	[RFDS_MITIGATION_VERW]			= "Mitigation: Clear Register File",
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [RFC PATCH 02/34] x86/bugs: Add AUTO mitigations for mds/taa/mmio/rfds
  2024-09-12 19:08 [RFC PATCH 00/34] x86/bugs: Attack vector controls David Kaplan
  2024-09-12 19:08 ` [RFC PATCH 01/34] x86/bugs: Relocate mds/taa/mmio/rfds defines David Kaplan
@ 2024-09-12 19:08 ` David Kaplan
  2024-09-12 19:08 ` [RFC PATCH 03/34] x86/bugs: Restructure mds mitigation David Kaplan
                   ` (32 subsequent siblings)
  34 siblings, 0 replies; 63+ messages in thread
From: David Kaplan @ 2024-09-12 19:08 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Add AUTO mitigations for mds/taa/mmio/rfds to create consistent
vulnerability handling.  These AUTO mitigations will be turned into the
appropriate default mitigations in the <vuln>_select_mitigation()
functions.  In a later patch, these will be used with the new attack
vector controls to help select appropriate mitigations.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/include/asm/processor.h |  1 +
 arch/x86/kernel/cpu/bugs.c       | 17 +++++++++++++----
 2 files changed, 14 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 399f7d1c4c61..187805f7db3f 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -739,6 +739,7 @@ extern enum l1tf_mitigations l1tf_mitigation;
 
 enum mds_mitigations {
 	MDS_MITIGATION_OFF,
+	MDS_MITIGATION_AUTO,
 	MDS_MITIGATION_FULL,
 	MDS_MITIGATION_VMWERV,
 };
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index ee89e6676107..1cf5a8edec53 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -234,7 +234,7 @@ static void x86_amd_ssb_disable(void)
 
 /* Default mitigation for MDS-affected CPUs */
 static enum mds_mitigations mds_mitigation __ro_after_init =
-	IS_ENABLED(CONFIG_MITIGATION_MDS) ? MDS_MITIGATION_FULL : MDS_MITIGATION_OFF;
+	IS_ENABLED(CONFIG_MITIGATION_MDS) ? MDS_MITIGATION_AUTO : MDS_MITIGATION_OFF;
 static bool mds_nosmt __ro_after_init = false;
 
 static const char * const mds_strings[] = {
@@ -245,6 +245,7 @@ static const char * const mds_strings[] = {
 
 enum taa_mitigations {
 	TAA_MITIGATION_OFF,
+	TAA_MITIGATION_AUTO,
 	TAA_MITIGATION_UCODE_NEEDED,
 	TAA_MITIGATION_VERW,
 	TAA_MITIGATION_TSX_DISABLED,
@@ -252,27 +253,29 @@ enum taa_mitigations {
 
 /* Default mitigation for TAA-affected CPUs */
 static enum taa_mitigations taa_mitigation __ro_after_init =
-	IS_ENABLED(CONFIG_MITIGATION_TAA) ? TAA_MITIGATION_VERW : TAA_MITIGATION_OFF;
+	IS_ENABLED(CONFIG_MITIGATION_TAA) ? TAA_MITIGATION_AUTO : TAA_MITIGATION_OFF;
 
 enum mmio_mitigations {
 	MMIO_MITIGATION_OFF,
+	MMIO_MITIGATION_AUTO,
 	MMIO_MITIGATION_UCODE_NEEDED,
 	MMIO_MITIGATION_VERW,
 };
 
 /* Default mitigation for Processor MMIO Stale Data vulnerabilities */
 static enum mmio_mitigations mmio_mitigation __ro_after_init =
-	IS_ENABLED(CONFIG_MITIGATION_MMIO_STALE_DATA) ? MMIO_MITIGATION_VERW : MMIO_MITIGATION_OFF;
+	IS_ENABLED(CONFIG_MITIGATION_MMIO_STALE_DATA) ?	MMIO_MITIGATION_AUTO : MMIO_MITIGATION_OFF;
 
 enum rfds_mitigations {
 	RFDS_MITIGATION_OFF,
+	RFDS_MITIGATION_AUTO,
 	RFDS_MITIGATION_VERW,
 	RFDS_MITIGATION_UCODE_NEEDED,
 };
 
 /* Default mitigation for Register File Data Sampling */
 static enum rfds_mitigations rfds_mitigation __ro_after_init =
-	IS_ENABLED(CONFIG_MITIGATION_RFDS) ? RFDS_MITIGATION_VERW : RFDS_MITIGATION_OFF;
+	IS_ENABLED(CONFIG_MITIGATION_RFDS) ? RFDS_MITIGATION_AUTO : RFDS_MITIGATION_OFF;
 
 static void __init mds_select_mitigation(void)
 {
@@ -281,6 +284,9 @@ static void __init mds_select_mitigation(void)
 		return;
 	}
 
+	if (mds_mitigation == MDS_MITIGATION_AUTO)
+		mds_mitigation = MDS_MITIGATION_FULL;
+
 	if (mds_mitigation == MDS_MITIGATION_FULL) {
 		if (!boot_cpu_has(X86_FEATURE_MD_CLEAR))
 			mds_mitigation = MDS_MITIGATION_VMWERV;
@@ -1965,6 +1971,7 @@ void cpu_bugs_smt_update(void)
 		update_mds_branch_idle();
 		break;
 	case MDS_MITIGATION_OFF:
+	case MDS_MITIGATION_AUTO:
 		break;
 	}
 
@@ -1976,6 +1983,7 @@ void cpu_bugs_smt_update(void)
 		break;
 	case TAA_MITIGATION_TSX_DISABLED:
 	case TAA_MITIGATION_OFF:
+	case TAA_MITIGATION_AUTO:
 		break;
 	}
 
@@ -1986,6 +1994,7 @@ void cpu_bugs_smt_update(void)
 			pr_warn_once(MMIO_MSG_SMT);
 		break;
 	case MMIO_MITIGATION_OFF:
+	case MMIO_MITIGATION_AUTO:
 		break;
 	}
 
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [RFC PATCH 03/34] x86/bugs: Restructure mds mitigation
  2024-09-12 19:08 [RFC PATCH 00/34] x86/bugs: Attack vector controls David Kaplan
  2024-09-12 19:08 ` [RFC PATCH 01/34] x86/bugs: Relocate mds/taa/mmio/rfds defines David Kaplan
  2024-09-12 19:08 ` [RFC PATCH 02/34] x86/bugs: Add AUTO mitigations for mds/taa/mmio/rfds David Kaplan
@ 2024-09-12 19:08 ` David Kaplan
  2024-09-12 19:08 ` [RFC PATCH 04/34] x86/bugs: Restructure taa mitigation David Kaplan
                   ` (31 subsequent siblings)
  34 siblings, 0 replies; 63+ messages in thread
From: David Kaplan @ 2024-09-12 19:08 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Restructure mds mitigation selection to use select/update/apply
functions to create consistent vulnerability handling.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 58 ++++++++++++++++++++++++++++++++++----
 1 file changed, 53 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 1cf5a8edec53..0bdd4e5b8fc1 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -34,6 +34,25 @@
 
 #include "cpu.h"
 
+/*
+ * Speculation Vulnerability Handling
+ *
+ * Each vulnerability is handled with the following functions:
+ *   <vuln>_select_mitigation() -- Selects a mitigation to use.  This should
+ *				   take into account all relevant command line
+ *				   options.
+ *   <vuln>_update_mitigation() -- This is called after all vulnerabilities have
+ *				   selected a mitigation, in case the selection
+ *				   may want to change based on other choices
+ *				   made.  This function is optional.
+ *   <vuln>_apply_mitigation() -- Enable the selected mitigation.
+ *
+ * The compile-time mitigation in all cases should be AUTO.  An explicit
+ * command-line option can override AUTO.  If no such option is
+ * provided, <vuln>_select_mitigation() will override AUTO to the best
+ * mitigation option.
+ */
+
 static void __init spectre_v1_select_mitigation(void);
 static void __init spectre_v2_select_mitigation(void);
 static void __init retbleed_select_mitigation(void);
@@ -41,6 +60,8 @@ static void __init spectre_v2_user_select_mitigation(void);
 static void __init ssb_select_mitigation(void);
 static void __init l1tf_select_mitigation(void);
 static void __init mds_select_mitigation(void);
+static void __init mds_update_mitigation(void);
+static void __init mds_apply_mitigation(void);
 static void __init md_clear_update_mitigation(void);
 static void __init md_clear_select_mitigation(void);
 static void __init taa_select_mitigation(void);
@@ -165,6 +186,7 @@ void __init cpu_select_mitigations(void)
 	spectre_v2_user_select_mitigation();
 	ssb_select_mitigation();
 	l1tf_select_mitigation();
+	mds_select_mitigation();
 	md_clear_select_mitigation();
 	srbds_select_mitigation();
 	l1d_flush_select_mitigation();
@@ -175,6 +197,14 @@ void __init cpu_select_mitigations(void)
 	 */
 	srso_select_mitigation();
 	gds_select_mitigation();
+
+	/*
+	 * After mitigations are selected, some may need to update their
+	 * choices.
+	 */
+	mds_update_mitigation();
+
+	mds_apply_mitigation();
 }
 
 /*
@@ -229,9 +259,6 @@ static void x86_amd_ssb_disable(void)
 		wrmsrl(MSR_AMD64_LS_CFG, msrval);
 }
 
-#undef pr_fmt
-#define pr_fmt(fmt)	"MDS: " fmt
-
 /* Default mitigation for MDS-affected CPUs */
 static enum mds_mitigations mds_mitigation __ro_after_init =
 	IS_ENABLED(CONFIG_MITIGATION_MDS) ? MDS_MITIGATION_AUTO : MDS_MITIGATION_OFF;
@@ -290,9 +317,31 @@ static void __init mds_select_mitigation(void)
 	if (mds_mitigation == MDS_MITIGATION_FULL) {
 		if (!boot_cpu_has(X86_FEATURE_MD_CLEAR))
 			mds_mitigation = MDS_MITIGATION_VMWERV;
+	}
+}
+
+static void __init mds_update_mitigation(void)
+{
+	if (!boot_cpu_has_bug(X86_BUG_MDS))
+		return;
+
+	/* If TAA, MMIO, or RFDS are being mitigated, MDS gets mitigated too. */
+	if (taa_mitigation != TAA_MITIGATION_OFF ||
+	    mmio_mitigation != MMIO_MITIGATION_OFF ||
+	    rfds_mitigation != RFDS_MITIGATION_OFF) {
+		if (boot_cpu_has(X86_FEATURE_MD_CLEAR))
+			mds_mitigation = MDS_MITIGATION_FULL;
+		else
+			mds_mitigation = MDS_MITIGATION_VMWERV;
+	}
 
-		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
+	pr_info("MDS: %s\n", mds_strings[mds_mitigation]);
+}
 
+static void __init mds_apply_mitigation(void)
+{
+	if (mds_mitigation == MDS_MITIGATION_FULL) {
+		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
 		if (!boot_cpu_has(X86_BUG_MSBDS_ONLY) &&
 		    (mds_nosmt || cpu_mitigations_auto_nosmt()))
 			cpu_smt_disable(false);
@@ -592,7 +641,6 @@ static void __init md_clear_update_mitigation(void)
 
 static void __init md_clear_select_mitigation(void)
 {
-	mds_select_mitigation();
 	taa_select_mitigation();
 	mmio_select_mitigation();
 	rfds_select_mitigation();
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [RFC PATCH 04/34] x86/bugs: Restructure taa mitigation
  2024-09-12 19:08 [RFC PATCH 00/34] x86/bugs: Attack vector controls David Kaplan
                   ` (2 preceding siblings ...)
  2024-09-12 19:08 ` [RFC PATCH 03/34] x86/bugs: Restructure mds mitigation David Kaplan
@ 2024-09-12 19:08 ` David Kaplan
  2024-09-12 19:08 ` [RFC PATCH 05/34] x86/bugs: Restructure mmio mitigation David Kaplan
                   ` (30 subsequent siblings)
  34 siblings, 0 replies; 63+ messages in thread
From: David Kaplan @ 2024-09-12 19:08 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Restructure taa mitigation to use select/update/apply functions to
create consistent vulnerability handling.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 56 +++++++++++++++++++++++++++-----------
 1 file changed, 40 insertions(+), 16 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 0bdd4e5b8fc1..3c0a0890d382 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -65,6 +65,8 @@ static void __init mds_apply_mitigation(void);
 static void __init md_clear_update_mitigation(void);
 static void __init md_clear_select_mitigation(void);
 static void __init taa_select_mitigation(void);
+static void __init taa_update_mitigation(void);
+static void __init taa_apply_mitigation(void);
 static void __init mmio_select_mitigation(void);
 static void __init srbds_select_mitigation(void);
 static void __init l1d_flush_select_mitigation(void);
@@ -187,6 +189,7 @@ void __init cpu_select_mitigations(void)
 	ssb_select_mitigation();
 	l1tf_select_mitigation();
 	mds_select_mitigation();
+	taa_select_mitigation();
 	md_clear_select_mitigation();
 	srbds_select_mitigation();
 	l1d_flush_select_mitigation();
@@ -203,8 +206,10 @@ void __init cpu_select_mitigations(void)
 	 * choices.
 	 */
 	mds_update_mitigation();
+	taa_update_mitigation();
 
 	mds_apply_mitigation();
+	taa_apply_mitigation();
 }
 
 /*
@@ -369,9 +374,6 @@ static int __init mds_cmdline(char *str)
 }
 early_param("mds", mds_cmdline);
 
-#undef pr_fmt
-#define pr_fmt(fmt)	"TAA: " fmt
-
 static bool taa_nosmt __ro_after_init;
 
 static const char * const taa_strings[] = {
@@ -402,11 +404,13 @@ static void __init taa_select_mitigation(void)
 	/*
 	 * TAA mitigation via VERW is turned off if both
 	 * tsx_async_abort=off and mds=off are specified.
+	 *
+	 * mds mitigation will be checked in taa_update_mitigation()
 	 */
-	if (taa_mitigation == TAA_MITIGATION_OFF &&
-	    mds_mitigation == MDS_MITIGATION_OFF)
+	if (taa_mitigation == TAA_MITIGATION_OFF)
 		return;
 
+	/* This handles the AUTO case. */
 	if (boot_cpu_has(X86_FEATURE_MD_CLEAR))
 		taa_mitigation = TAA_MITIGATION_VERW;
 	else
@@ -425,17 +429,38 @@ static void __init taa_select_mitigation(void)
 	    !(x86_arch_cap_msr & ARCH_CAP_TSX_CTRL_MSR))
 		taa_mitigation = TAA_MITIGATION_UCODE_NEEDED;
 
-	/*
-	 * TSX is enabled, select alternate mitigation for TAA which is
-	 * the same as MDS. Enable MDS static branch to clear CPU buffers.
-	 *
-	 * For guests that can't determine whether the correct microcode is
-	 * present on host, enable the mitigation for UCODE_NEEDED as well.
-	 */
-	setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
+}
+
+static void __init taa_update_mitigation(void)
+{
+	if (!boot_cpu_has_bug(X86_BUG_TAA))
+		return;
+
+	if (mds_mitigation != MDS_MITIGATION_OFF ||
+	    mmio_mitigation != MMIO_MITIGATION_OFF ||
+	    rfds_mitigation != RFDS_MITIGATION_OFF)
+		taa_mitigation = TAA_MITIGATION_VERW;
+
+	pr_info("TAA: %s\n", taa_strings[taa_mitigation]);
+}
+
+static void __init taa_apply_mitigation(void)
+{
+	if (taa_mitigation == TAA_MITIGATION_VERW ||
+	    taa_mitigation == TAA_MITIGATION_UCODE_NEEDED) {
+		/*
+		 * TSX is enabled, select alternate mitigation for TAA which is
+		 * the same as MDS. Enable MDS static branch to clear CPU buffers.
+		 *
+		 * For guests that can't determine whether the correct microcode is
+		 * present on host, enable the mitigation for UCODE_NEEDED as well.
+		 */
+		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
+
+		if (taa_nosmt || cpu_mitigations_auto_nosmt())
+			cpu_smt_disable(false);
+	}
 
-	if (taa_nosmt || cpu_mitigations_auto_nosmt())
-		cpu_smt_disable(false);
 }
 
 static int __init tsx_async_abort_parse_cmdline(char *str)
@@ -641,7 +666,6 @@ static void __init md_clear_update_mitigation(void)
 
 static void __init md_clear_select_mitigation(void)
 {
-	taa_select_mitigation();
 	mmio_select_mitigation();
 	rfds_select_mitigation();
 
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [RFC PATCH 05/34] x86/bugs: Restructure mmio mitigation
  2024-09-12 19:08 [RFC PATCH 00/34] x86/bugs: Attack vector controls David Kaplan
                   ` (3 preceding siblings ...)
  2024-09-12 19:08 ` [RFC PATCH 04/34] x86/bugs: Restructure taa mitigation David Kaplan
@ 2024-09-12 19:08 ` David Kaplan
  2024-09-12 19:08 ` [RFC PATCH 06/34] x86/bugs: Restructure rfds mitigation David Kaplan
                   ` (29 subsequent siblings)
  34 siblings, 0 replies; 63+ messages in thread
From: David Kaplan @ 2024-09-12 19:08 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Restructure mmio mitigation to use select/update/apply functions to
create consistent vulnerability handling.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 60 ++++++++++++++++++++++++++------------
 1 file changed, 41 insertions(+), 19 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 3c0a0890d382..0b93a0f030b7 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -68,6 +68,8 @@ static void __init taa_select_mitigation(void);
 static void __init taa_update_mitigation(void);
 static void __init taa_apply_mitigation(void);
 static void __init mmio_select_mitigation(void);
+static void __init mmio_update_mitigation(void);
+static void __init mmio_apply_mitigation(void);
 static void __init srbds_select_mitigation(void);
 static void __init l1d_flush_select_mitigation(void);
 static void __init srso_select_mitigation(void);
@@ -190,6 +192,7 @@ void __init cpu_select_mitigations(void)
 	l1tf_select_mitigation();
 	mds_select_mitigation();
 	taa_select_mitigation();
+	mmio_select_mitigation();
 	md_clear_select_mitigation();
 	srbds_select_mitigation();
 	l1d_flush_select_mitigation();
@@ -207,9 +210,11 @@ void __init cpu_select_mitigations(void)
 	 */
 	mds_update_mitigation();
 	taa_update_mitigation();
+	mmio_update_mitigation();
 
 	mds_apply_mitigation();
 	taa_apply_mitigation();
+	mmio_apply_mitigation();
 }
 
 /*
@@ -484,9 +489,6 @@ static int __init tsx_async_abort_parse_cmdline(char *str)
 }
 early_param("tsx_async_abort", tsx_async_abort_parse_cmdline);
 
-#undef pr_fmt
-#define pr_fmt(fmt)	"MMIO Stale Data: " fmt
-
 static bool mmio_nosmt __ro_after_init = false;
 
 static const char * const mmio_strings[] = {
@@ -504,6 +506,42 @@ static void __init mmio_select_mitigation(void)
 		return;
 	}
 
+	if (mmio_mitigation == MMIO_MITIGATION_OFF)
+		return;
+
+	/*
+	 * Check if the system has the right microcode.
+	 *
+	 * CPU Fill buffer clear mitigation is enumerated by either an explicit
+	 * FB_CLEAR or by the presence of both MD_CLEAR and L1D_FLUSH on MDS
+	 * affected systems.
+	 */
+	if ((x86_arch_cap_msr & ARCH_CAP_FB_CLEAR) ||
+	    (boot_cpu_has(X86_FEATURE_MD_CLEAR) &&
+	     boot_cpu_has(X86_FEATURE_FLUSH_L1D) &&
+	     !(x86_arch_cap_msr & ARCH_CAP_MDS_NO)))
+		mmio_mitigation = MMIO_MITIGATION_VERW;
+	else
+		mmio_mitigation = MMIO_MITIGATION_UCODE_NEEDED;
+}
+
+static void __init mmio_update_mitigation(void)
+{
+	if (!boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA))
+		return;
+
+	if (mds_mitigation != MDS_MITIGATION_OFF ||
+	    taa_mitigation != TAA_MITIGATION_OFF ||
+	    rfds_mitigation != RFDS_MITIGATION_OFF)
+		mmio_mitigation = MMIO_MITIGATION_VERW;
+
+	pr_info("MMIO Stale Data: %s\n", mmio_strings[mmio_mitigation]);
+	if (boot_cpu_has_bug(X86_BUG_MMIO_UNKNOWN))
+		pr_info("MMIO Stale Data: Unknown: No mitigations\n");
+}
+
+static void __init mmio_apply_mitigation(void)
+{
 	if (mmio_mitigation == MMIO_MITIGATION_OFF)
 		return;
 
@@ -532,21 +570,6 @@ static void __init mmio_select_mitigation(void)
 	if (!(x86_arch_cap_msr & ARCH_CAP_FBSDP_NO))
 		static_branch_enable(&mds_idle_clear);
 
-	/*
-	 * Check if the system has the right microcode.
-	 *
-	 * CPU Fill buffer clear mitigation is enumerated by either an explicit
-	 * FB_CLEAR or by the presence of both MD_CLEAR and L1D_FLUSH on MDS
-	 * affected systems.
-	 */
-	if ((x86_arch_cap_msr & ARCH_CAP_FB_CLEAR) ||
-	    (boot_cpu_has(X86_FEATURE_MD_CLEAR) &&
-	     boot_cpu_has(X86_FEATURE_FLUSH_L1D) &&
-	     !(x86_arch_cap_msr & ARCH_CAP_MDS_NO)))
-		mmio_mitigation = MMIO_MITIGATION_VERW;
-	else
-		mmio_mitigation = MMIO_MITIGATION_UCODE_NEEDED;
-
 	if (mmio_nosmt || cpu_mitigations_auto_nosmt())
 		cpu_smt_disable(false);
 }
@@ -666,7 +689,6 @@ static void __init md_clear_update_mitigation(void)
 
 static void __init md_clear_select_mitigation(void)
 {
-	mmio_select_mitigation();
 	rfds_select_mitigation();
 
 	/*
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [RFC PATCH 06/34] x86/bugs: Restructure rfds mitigation
  2024-09-12 19:08 [RFC PATCH 00/34] x86/bugs: Attack vector controls David Kaplan
                   ` (4 preceding siblings ...)
  2024-09-12 19:08 ` [RFC PATCH 05/34] x86/bugs: Restructure mmio mitigation David Kaplan
@ 2024-09-12 19:08 ` David Kaplan
  2024-09-12 19:08 ` [RFC PATCH 07/34] x86/bugs: Remove md_clear_*_mitigation() David Kaplan
                   ` (28 subsequent siblings)
  34 siblings, 0 replies; 63+ messages in thread
From: David Kaplan @ 2024-09-12 19:08 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Restructure rfds mitigation to use select/update/apply functions to
create consistent vulnerability handling.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 38 +++++++++++++++++++++++++++++++-------
 1 file changed, 31 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 0b93a0f030b7..d3e6ce7238e4 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -70,6 +70,9 @@ static void __init taa_apply_mitigation(void);
 static void __init mmio_select_mitigation(void);
 static void __init mmio_update_mitigation(void);
 static void __init mmio_apply_mitigation(void);
+static void __init rfds_select_mitigation(void);
+static void __init rfds_update_mitigation(void);
+static void __init rfds_apply_mitigation(void);
 static void __init srbds_select_mitigation(void);
 static void __init l1d_flush_select_mitigation(void);
 static void __init srso_select_mitigation(void);
@@ -193,6 +196,7 @@ void __init cpu_select_mitigations(void)
 	mds_select_mitigation();
 	taa_select_mitigation();
 	mmio_select_mitigation();
+	rfds_select_mitigation();
 	md_clear_select_mitigation();
 	srbds_select_mitigation();
 	l1d_flush_select_mitigation();
@@ -211,10 +215,12 @@ void __init cpu_select_mitigations(void)
 	mds_update_mitigation();
 	taa_update_mitigation();
 	mmio_update_mitigation();
+	rfds_update_mitigation();
 
 	mds_apply_mitigation();
 	taa_apply_mitigation();
 	mmio_apply_mitigation();
+	rfds_apply_mitigation();
 }
 
 /*
@@ -595,9 +601,6 @@ static int __init mmio_stale_data_parse_cmdline(char *str)
 }
 early_param("mmio_stale_data", mmio_stale_data_parse_cmdline);
 
-#undef pr_fmt
-#define pr_fmt(fmt)	"Register File Data Sampling: " fmt
-
 static const char * const rfds_strings[] = {
 	[RFDS_MITIGATION_OFF]			= "Vulnerable",
 	[RFDS_MITIGATION_VERW]			= "Mitigation: Clear Register File",
@@ -613,12 +616,34 @@ static void __init rfds_select_mitigation(void)
 	if (rfds_mitigation == RFDS_MITIGATION_OFF)
 		return;
 
-	if (x86_arch_cap_msr & ARCH_CAP_RFDS_CLEAR)
-		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
-	else
+	if (rfds_mitigation == RFDS_MITIGATION_AUTO)
+		rfds_mitigation = RFDS_MITIGATION_VERW;
+
+	if (!(x86_arch_cap_msr & ARCH_CAP_RFDS_CLEAR))
 		rfds_mitigation = RFDS_MITIGATION_UCODE_NEEDED;
 }
 
+static void __init rfds_update_mitigation(void)
+{
+	if (!boot_cpu_has_bug(X86_BUG_RFDS))
+		return;
+
+	if (mds_mitigation != MDS_MITIGATION_OFF ||
+	    taa_mitigation != TAA_MITIGATION_OFF ||
+	    mmio_mitigation != MMIO_MITIGATION_OFF)
+		rfds_mitigation = RFDS_MITIGATION_VERW;
+
+	pr_info("Register File Data Sampling: %s\n", rfds_strings[rfds_mitigation]);
+}
+
+static void __init rfds_apply_mitigation(void)
+{
+	if (rfds_mitigation == RFDS_MITIGATION_VERW) {
+		if (x86_arch_cap_msr & ARCH_CAP_RFDS_CLEAR)
+			setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
+	}
+}
+
 static __init int rfds_parse_cmdline(char *str)
 {
 	if (!str)
@@ -689,7 +714,6 @@ static void __init md_clear_update_mitigation(void)
 
 static void __init md_clear_select_mitigation(void)
 {
-	rfds_select_mitigation();
 
 	/*
 	 * As these mitigations are inter-related and rely on VERW instruction
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [RFC PATCH 07/34] x86/bugs: Remove md_clear_*_mitigation()
  2024-09-12 19:08 [RFC PATCH 00/34] x86/bugs: Attack vector controls David Kaplan
                   ` (5 preceding siblings ...)
  2024-09-12 19:08 ` [RFC PATCH 06/34] x86/bugs: Restructure rfds mitigation David Kaplan
@ 2024-09-12 19:08 ` David Kaplan
  2024-10-08  8:40   ` Nikolay Borisov
  2024-09-12 19:08 ` [RFC PATCH 08/34] x86/bugs: Restructure srbds mitigation David Kaplan
                   ` (27 subsequent siblings)
  34 siblings, 1 reply; 63+ messages in thread
From: David Kaplan @ 2024-09-12 19:08 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

The functionality in md_clear_update_mitigation() and
md_clear_select_mitigation() is now integrated into the select/update
functions for the MDS, TAA, MMIO, and RFDS vulnerabilities.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 65 --------------------------------------
 1 file changed, 65 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index d3e6ce7238e4..df41572c5d10 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -62,8 +62,6 @@ static void __init l1tf_select_mitigation(void);
 static void __init mds_select_mitigation(void);
 static void __init mds_update_mitigation(void);
 static void __init mds_apply_mitigation(void);
-static void __init md_clear_update_mitigation(void);
-static void __init md_clear_select_mitigation(void);
 static void __init taa_select_mitigation(void);
 static void __init taa_update_mitigation(void);
 static void __init taa_apply_mitigation(void);
@@ -197,7 +195,6 @@ void __init cpu_select_mitigations(void)
 	taa_select_mitigation();
 	mmio_select_mitigation();
 	rfds_select_mitigation();
-	md_clear_select_mitigation();
 	srbds_select_mitigation();
 	l1d_flush_select_mitigation();
 
@@ -661,68 +658,6 @@ static __init int rfds_parse_cmdline(char *str)
 }
 early_param("reg_file_data_sampling", rfds_parse_cmdline);
 
-#undef pr_fmt
-#define pr_fmt(fmt)     "" fmt
-
-static void __init md_clear_update_mitigation(void)
-{
-	if (cpu_mitigations_off())
-		return;
-
-	if (!boot_cpu_has(X86_FEATURE_CLEAR_CPU_BUF))
-		goto out;
-
-	/*
-	 * X86_FEATURE_CLEAR_CPU_BUF is now enabled. Update MDS, TAA and MMIO
-	 * Stale Data mitigation, if necessary.
-	 */
-	if (mds_mitigation == MDS_MITIGATION_OFF &&
-	    boot_cpu_has_bug(X86_BUG_MDS)) {
-		mds_mitigation = MDS_MITIGATION_FULL;
-		mds_select_mitigation();
-	}
-	if (taa_mitigation == TAA_MITIGATION_OFF &&
-	    boot_cpu_has_bug(X86_BUG_TAA)) {
-		taa_mitigation = TAA_MITIGATION_VERW;
-		taa_select_mitigation();
-	}
-	/*
-	 * MMIO_MITIGATION_OFF is not checked here so that mmio_stale_data_clear
-	 * gets updated correctly as per X86_FEATURE_CLEAR_CPU_BUF state.
-	 */
-	if (boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA)) {
-		mmio_mitigation = MMIO_MITIGATION_VERW;
-		mmio_select_mitigation();
-	}
-	if (rfds_mitigation == RFDS_MITIGATION_OFF &&
-	    boot_cpu_has_bug(X86_BUG_RFDS)) {
-		rfds_mitigation = RFDS_MITIGATION_VERW;
-		rfds_select_mitigation();
-	}
-out:
-	if (boot_cpu_has_bug(X86_BUG_MDS))
-		pr_info("MDS: %s\n", mds_strings[mds_mitigation]);
-	if (boot_cpu_has_bug(X86_BUG_TAA))
-		pr_info("TAA: %s\n", taa_strings[taa_mitigation]);
-	if (boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA))
-		pr_info("MMIO Stale Data: %s\n", mmio_strings[mmio_mitigation]);
-	else if (boot_cpu_has_bug(X86_BUG_MMIO_UNKNOWN))
-		pr_info("MMIO Stale Data: Unknown: No mitigations\n");
-	if (boot_cpu_has_bug(X86_BUG_RFDS))
-		pr_info("Register File Data Sampling: %s\n", rfds_strings[rfds_mitigation]);
-}
-
-static void __init md_clear_select_mitigation(void)
-{
-
-	/*
-	 * As these mitigations are inter-related and rely on VERW instruction
-	 * to clear the microarchitural buffers, update and print their status
-	 * after mitigation selection is done for each of these vulnerabilities.
-	 */
-	md_clear_update_mitigation();
-}
-
 #undef pr_fmt
 #define pr_fmt(fmt)	"SRBDS: " fmt
 
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [RFC PATCH 08/34] x86/bugs: Restructure srbds mitigation
  2024-09-12 19:08 [RFC PATCH 00/34] x86/bugs: Attack vector controls David Kaplan
                   ` (6 preceding siblings ...)
  2024-09-12 19:08 ` [RFC PATCH 07/34] x86/bugs: Remove md_clear_*_mitigation() David Kaplan
@ 2024-09-12 19:08 ` David Kaplan
  2024-09-12 19:08 ` [RFC PATCH 09/34] x86/bugs: Restructure gds mitigation David Kaplan
                   ` (26 subsequent siblings)
  34 siblings, 0 replies; 63+ messages in thread
From: David Kaplan @ 2024-09-12 19:08 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Restructure srbds to use select/apply functions to create consistent
vulnerability handling.

Define new AUTO mitigation for SRBDS.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index df41572c5d10..0fb97b94f5b9 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -72,6 +72,7 @@ static void __init rfds_select_mitigation(void);
 static void __init rfds_update_mitigation(void);
 static void __init rfds_apply_mitigation(void);
 static void __init srbds_select_mitigation(void);
+static void __init srbds_apply_mitigation(void);
 static void __init l1d_flush_select_mitigation(void);
 static void __init srso_select_mitigation(void);
 static void __init gds_select_mitigation(void);
@@ -218,6 +219,7 @@ void __init cpu_select_mitigations(void)
 	taa_apply_mitigation();
 	mmio_apply_mitigation();
 	rfds_apply_mitigation();
+	srbds_apply_mitigation();
 }
 
 /*
@@ -663,6 +665,7 @@ early_param("reg_file_data_sampling", rfds_parse_cmdline);
 
 enum srbds_mitigations {
 	SRBDS_MITIGATION_OFF,
+	SRBDS_MITIGATION_AUTO,
 	SRBDS_MITIGATION_UCODE_NEEDED,
 	SRBDS_MITIGATION_FULL,
 	SRBDS_MITIGATION_TSX_OFF,
@@ -670,7 +673,7 @@ enum srbds_mitigations {
 };
 
 static enum srbds_mitigations srbds_mitigation __ro_after_init =
-	IS_ENABLED(CONFIG_MITIGATION_SRBDS) ? SRBDS_MITIGATION_FULL : SRBDS_MITIGATION_OFF;
+	IS_ENABLED(CONFIG_MITIGATION_SRBDS) ? SRBDS_MITIGATION_AUTO : SRBDS_MITIGATION_OFF;
 
 static const char * const srbds_strings[] = {
 	[SRBDS_MITIGATION_OFF]		= "Vulnerable",
@@ -724,6 +727,9 @@ static void __init srbds_select_mitigation(void)
 	if (!boot_cpu_has_bug(X86_BUG_SRBDS))
 		return;
 
+	if (srbds_mitigation == SRBDS_MITIGATION_AUTO)
+		srbds_mitigation = SRBDS_MITIGATION_FULL;
+
 	/*
 	 * Check to see if this is one of the MDS_NO systems supporting TSX that
 	 * are only exposed to SRBDS when TSX is enabled or when CPU is affected
@@ -738,6 +744,12 @@ static void __init srbds_select_mitigation(void)
 		srbds_mitigation = SRBDS_MITIGATION_UCODE_NEEDED;
 	else if (cpu_mitigations_off() || srbds_off)
 		srbds_mitigation = SRBDS_MITIGATION_OFF;
+}
+
+static void __init srbds_apply_mitigation(void)
+{
+	if (!boot_cpu_has_bug(X86_BUG_SRBDS))
+		return;
 
 	update_srbds_msr();
 	pr_info("%s\n", srbds_strings[srbds_mitigation]);
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [RFC PATCH 09/34] x86/bugs: Restructure gds mitigation
  2024-09-12 19:08 [RFC PATCH 00/34] x86/bugs: Attack vector controls David Kaplan
                   ` (7 preceding siblings ...)
  2024-09-12 19:08 ` [RFC PATCH 08/34] x86/bugs: Restructure srbds mitigation David Kaplan
@ 2024-09-12 19:08 ` David Kaplan
  2024-09-12 19:08 ` [RFC PATCH 10/34] x86/bugs: Restructure spectre_v1 mitigation David Kaplan
                   ` (25 subsequent siblings)
  34 siblings, 0 replies; 63+ messages in thread
From: David Kaplan @ 2024-09-12 19:08 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Restructure gds mitigation to use select/apply functions to create
consistent vulnerability handling.

Define new AUTO mitigation for gds.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 21 +++++++++++++++++----
 1 file changed, 17 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 0fb97b94f5b9..7fee5c3de135 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -76,6 +76,7 @@ static void __init srbds_apply_mitigation(void);
 static void __init l1d_flush_select_mitigation(void);
 static void __init srso_select_mitigation(void);
 static void __init gds_select_mitigation(void);
+static void __init gds_apply_mitigation(void);
 
 /* The base value of the SPEC_CTRL MSR without task-specific bits set */
 u64 x86_spec_ctrl_base;
@@ -220,6 +221,7 @@ void __init cpu_select_mitigations(void)
 	mmio_apply_mitigation();
 	rfds_apply_mitigation();
 	srbds_apply_mitigation();
+	gds_apply_mitigation();
 }
 
 /*
@@ -801,6 +803,7 @@ early_param("l1d_flush", l1d_flush_parse_cmdline);
 
 enum gds_mitigations {
 	GDS_MITIGATION_OFF,
+	GDS_MITIGATION_AUTO,
 	GDS_MITIGATION_UCODE_NEEDED,
 	GDS_MITIGATION_FORCE,
 	GDS_MITIGATION_FULL,
@@ -809,7 +812,7 @@ enum gds_mitigations {
 };
 
 static enum gds_mitigations gds_mitigation __ro_after_init =
-	IS_ENABLED(CONFIG_MITIGATION_GDS) ? GDS_MITIGATION_FULL : GDS_MITIGATION_OFF;
+	IS_ENABLED(CONFIG_MITIGATION_GDS) ? GDS_MITIGATION_AUTO : GDS_MITIGATION_OFF;
 
 static const char * const gds_strings[] = {
 	[GDS_MITIGATION_OFF]		= "Vulnerable",
@@ -850,6 +853,7 @@ void update_gds_msr(void)
 	case GDS_MITIGATION_FORCE:
 	case GDS_MITIGATION_UCODE_NEEDED:
 	case GDS_MITIGATION_HYPERVISOR:
+	case GDS_MITIGATION_AUTO:
 		return;
 	}
 
@@ -873,13 +877,16 @@ static void __init gds_select_mitigation(void)
 
 	if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) {
 		gds_mitigation = GDS_MITIGATION_HYPERVISOR;
-		goto out;
+		return;
 	}
 
 	if (cpu_mitigations_off())
 		gds_mitigation = GDS_MITIGATION_OFF;
 	/* Will verify below that mitigation _can_ be disabled */
 
+	if (gds_mitigation == GDS_MITIGATION_AUTO)
+		gds_mitigation = GDS_MITIGATION_FULL;
+
 	/* No microcode */
 	if (!(x86_arch_cap_msr & ARCH_CAP_GDS_CTRL)) {
 		if (gds_mitigation == GDS_MITIGATION_FORCE) {
@@ -892,7 +899,7 @@ static void __init gds_select_mitigation(void)
 		} else {
 			gds_mitigation = GDS_MITIGATION_UCODE_NEEDED;
 		}
-		goto out;
+		return;
 	}
 
 	/* Microcode has mitigation, use it */
@@ -914,8 +921,14 @@ static void __init gds_select_mitigation(void)
 		gds_mitigation = GDS_MITIGATION_FULL_LOCKED;
 	}
 
+}
+
+static void __init gds_apply_mitigation(void)
+{
+	if (!boot_cpu_has_bug(X86_BUG_GDS))
+		return;
+
 	update_gds_msr();
-out:
 	pr_info("%s\n", gds_strings[gds_mitigation]);
 }
 
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [RFC PATCH 10/34] x86/bugs: Restructure spectre_v1 mitigation
  2024-09-12 19:08 [RFC PATCH 00/34] x86/bugs: Attack vector controls David Kaplan
                   ` (8 preceding siblings ...)
  2024-09-12 19:08 ` [RFC PATCH 09/34] x86/bugs: Restructure gds mitigation David Kaplan
@ 2024-09-12 19:08 ` David Kaplan
  2024-09-12 19:08 ` [RFC PATCH 11/34] x86/bugs: Restructure retbleed mitigation David Kaplan
                   ` (24 subsequent siblings)
  34 siblings, 0 replies; 63+ messages in thread
From: David Kaplan @ 2024-09-12 19:08 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Restructure spectre_v1 to use select/apply functions to create
consistent vulnerability handling.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 7fee5c3de135..ab49205ebb15 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -54,6 +54,7 @@
  */
 
 static void __init spectre_v1_select_mitigation(void);
+static void __init spectre_v1_apply_mitigation(void);
 static void __init spectre_v2_select_mitigation(void);
 static void __init retbleed_select_mitigation(void);
 static void __init spectre_v2_user_select_mitigation(void);
@@ -216,6 +217,7 @@ void __init cpu_select_mitigations(void)
 	mmio_update_mitigation();
 	rfds_update_mitigation();
 
+	spectre_v1_apply_mitigation();
 	mds_apply_mitigation();
 	taa_apply_mitigation();
 	mmio_apply_mitigation();
@@ -989,11 +991,12 @@ static bool smap_works_speculatively(void)
 
 static void __init spectre_v1_select_mitigation(void)
 {
-	if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1) || cpu_mitigations_off()) {
+	if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1) || cpu_mitigations_off())
 		spectre_v1_mitigation = SPECTRE_V1_MITIGATION_NONE;
-		return;
-	}
+}
 
+static void __init spectre_v1_apply_mitigation(void)
+{
 	if (spectre_v1_mitigation == SPECTRE_V1_MITIGATION_AUTO) {
 		/*
 		 * With Spectre v1, a user can speculatively control either
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [RFC PATCH 11/34] x86/bugs: Restructure retbleed mitigation
  2024-09-12 19:08 [RFC PATCH 00/34] x86/bugs: Attack vector controls David Kaplan
                   ` (9 preceding siblings ...)
  2024-09-12 19:08 ` [RFC PATCH 10/34] x86/bugs: Restructure spectre_v1 mitigation David Kaplan
@ 2024-09-12 19:08 ` David Kaplan
  2024-10-08  8:32   ` Nikolay Borisov
  2024-09-12 19:08 ` [RFC PATCH 12/34] x86/bugs: Restructure spectre_v2_user mitigation David Kaplan
                   ` (23 subsequent siblings)
  34 siblings, 1 reply; 63+ messages in thread
From: David Kaplan @ 2024-09-12 19:08 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Restructure retbleed mitigation to use select/update/apply functions to
create consistent vulnerability handling.  The retbleed_update_mitigation()
simplifies the dependency between spectre_v2 and retbleed.

The command line options now directly select a preferred mitigation
which simplifies the logic.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 168 ++++++++++++++++---------------------
 1 file changed, 73 insertions(+), 95 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index ab49205ebb15..13143854ca42 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -57,6 +57,8 @@ static void __init spectre_v1_select_mitigation(void);
 static void __init spectre_v1_apply_mitigation(void);
 static void __init spectre_v2_select_mitigation(void);
 static void __init retbleed_select_mitigation(void);
+static void __init retbleed_update_mitigation(void);
+static void __init retbleed_apply_mitigation(void);
 static void __init spectre_v2_user_select_mitigation(void);
 static void __init ssb_select_mitigation(void);
 static void __init l1tf_select_mitigation(void);
@@ -180,11 +182,6 @@ void __init cpu_select_mitigations(void)
 	/* Select the proper CPU mitigations before patching alternatives: */
 	spectre_v1_select_mitigation();
 	spectre_v2_select_mitigation();
-	/*
-	 * retbleed_select_mitigation() relies on the state set by
-	 * spectre_v2_select_mitigation(); specifically it wants to know about
-	 * spectre_v2=ibrs.
-	 */
 	retbleed_select_mitigation();
 	/*
 	 * spectre_v2_user_select_mitigation() relies on the state set by
@@ -212,12 +209,14 @@ void __init cpu_select_mitigations(void)
 	 * After mitigations are selected, some may need to update their
 	 * choices.
 	 */
+	retbleed_update_mitigation();
 	mds_update_mitigation();
 	taa_update_mitigation();
 	mmio_update_mitigation();
 	rfds_update_mitigation();
 
 	spectre_v1_apply_mitigation();
+	retbleed_apply_mitigation();
 	mds_apply_mitigation();
 	taa_apply_mitigation();
 	mmio_apply_mitigation();
@@ -1050,6 +1049,7 @@ enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init = SPECTRE_V2_NONE;
 
 enum retbleed_mitigation {
 	RETBLEED_MITIGATION_NONE,
+	RETBLEED_MITIGATION_AUTO,
 	RETBLEED_MITIGATION_UNRET,
 	RETBLEED_MITIGATION_IBPB,
 	RETBLEED_MITIGATION_IBRS,
@@ -1057,14 +1057,6 @@ enum retbleed_mitigation {
 	RETBLEED_MITIGATION_STUFF,
 };
 
-enum retbleed_mitigation_cmd {
-	RETBLEED_CMD_OFF,
-	RETBLEED_CMD_AUTO,
-	RETBLEED_CMD_UNRET,
-	RETBLEED_CMD_IBPB,
-	RETBLEED_CMD_STUFF,
-};
-
 static const char * const retbleed_strings[] = {
 	[RETBLEED_MITIGATION_NONE]	= "Vulnerable",
 	[RETBLEED_MITIGATION_UNRET]	= "Mitigation: untrained return thunk",
@@ -1075,9 +1067,7 @@ static const char * const retbleed_strings[] = {
 };
 
 static enum retbleed_mitigation retbleed_mitigation __ro_after_init =
-	RETBLEED_MITIGATION_NONE;
-static enum retbleed_mitigation_cmd retbleed_cmd __ro_after_init =
-	IS_ENABLED(CONFIG_MITIGATION_RETBLEED) ? RETBLEED_CMD_AUTO : RETBLEED_CMD_OFF;
+	IS_ENABLED(CONFIG_MITIGATION_RETBLEED) ? RETBLEED_MITIGATION_AUTO : RETBLEED_MITIGATION_NONE;
 
 static int __ro_after_init retbleed_nosmt = false;
 
@@ -1094,15 +1084,15 @@ static int __init retbleed_parse_cmdline(char *str)
 		}
 
 		if (!strcmp(str, "off")) {
-			retbleed_cmd = RETBLEED_CMD_OFF;
+			retbleed_mitigation = RETBLEED_MITIGATION_NONE;
 		} else if (!strcmp(str, "auto")) {
-			retbleed_cmd = RETBLEED_CMD_AUTO;
+			retbleed_mitigation = RETBLEED_MITIGATION_AUTO;
 		} else if (!strcmp(str, "unret")) {
-			retbleed_cmd = RETBLEED_CMD_UNRET;
+			retbleed_mitigation = RETBLEED_MITIGATION_UNRET;
 		} else if (!strcmp(str, "ibpb")) {
-			retbleed_cmd = RETBLEED_CMD_IBPB;
+			retbleed_mitigation = RETBLEED_MITIGATION_IBPB;
 		} else if (!strcmp(str, "stuff")) {
-			retbleed_cmd = RETBLEED_CMD_STUFF;
+			retbleed_mitigation = RETBLEED_MITIGATION_STUFF;
 		} else if (!strcmp(str, "nosmt")) {
 			retbleed_nosmt = true;
 		} else if (!strcmp(str, "force")) {
@@ -1123,53 +1113,38 @@ early_param("retbleed", retbleed_parse_cmdline);
 
 static void __init retbleed_select_mitigation(void)
 {
-	bool mitigate_smt = false;
-
 	if (!boot_cpu_has_bug(X86_BUG_RETBLEED) || cpu_mitigations_off())
 		return;
 
-	switch (retbleed_cmd) {
-	case RETBLEED_CMD_OFF:
-		return;
-
-	case RETBLEED_CMD_UNRET:
-		if (IS_ENABLED(CONFIG_MITIGATION_UNRET_ENTRY)) {
-			retbleed_mitigation = RETBLEED_MITIGATION_UNRET;
-		} else {
+	switch (retbleed_mitigation) {
+	case RETBLEED_MITIGATION_UNRET:
+		if (!IS_ENABLED(CONFIG_MITIGATION_UNRET_ENTRY)) {
+			retbleed_mitigation = RETBLEED_MITIGATION_AUTO;
 			pr_err("WARNING: kernel not compiled with MITIGATION_UNRET_ENTRY.\n");
-			goto do_cmd_auto;
 		}
 		break;
-
-	case RETBLEED_CMD_IBPB:
-		if (!boot_cpu_has(X86_FEATURE_IBPB)) {
-			pr_err("WARNING: CPU does not support IBPB.\n");
-			goto do_cmd_auto;
-		} else if (IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY)) {
-			retbleed_mitigation = RETBLEED_MITIGATION_IBPB;
-		} else {
-			pr_err("WARNING: kernel not compiled with MITIGATION_IBPB_ENTRY.\n");
-			goto do_cmd_auto;
+	case RETBLEED_MITIGATION_IBPB:
+		if (retbleed_mitigation == RETBLEED_MITIGATION_IBPB) {
+			if (!boot_cpu_has(X86_FEATURE_IBPB)) {
+				pr_err("WARNING: CPU does not support IBPB.\n");
+				retbleed_mitigation = RETBLEED_MITIGATION_AUTO;
+			} else if (!IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY)) {
+				pr_err("WARNING: kernel not compiled with MITIGATION_IBPB_ENTRY.\n");
+				retbleed_mitigation = RETBLEED_MITIGATION_AUTO;
+			}
 		}
 		break;
-
-	case RETBLEED_CMD_STUFF:
-		if (IS_ENABLED(CONFIG_MITIGATION_CALL_DEPTH_TRACKING) &&
-		    spectre_v2_enabled == SPECTRE_V2_RETPOLINE) {
-			retbleed_mitigation = RETBLEED_MITIGATION_STUFF;
-
-		} else {
-			if (IS_ENABLED(CONFIG_MITIGATION_CALL_DEPTH_TRACKING))
-				pr_err("WARNING: retbleed=stuff depends on spectre_v2=retpoline\n");
-			else
-				pr_err("WARNING: kernel not compiled with MITIGATION_CALL_DEPTH_TRACKING.\n");
-
-			goto do_cmd_auto;
+	case RETBLEED_MITIGATION_STUFF:
+		if (!IS_ENABLED(CONFIG_MITIGATION_CALL_DEPTH_TRACKING)) {
+			pr_err("WARNING: kernel not compiled with MITIGATION_CALL_DEPTH_TRACKING.\n");
+			retbleed_mitigation = RETBLEED_MITIGATION_AUTO;
 		}
 		break;
+	default:
+		break;
+	}
 
-do_cmd_auto:
-	case RETBLEED_CMD_AUTO:
+	if (retbleed_mitigation == RETBLEED_MITIGATION_AUTO) {
 		if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
 		    boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) {
 			if (IS_ENABLED(CONFIG_MITIGATION_UNRET_ENTRY))
@@ -1178,16 +1153,50 @@ static void __init retbleed_select_mitigation(void)
 				 boot_cpu_has(X86_FEATURE_IBPB))
 				retbleed_mitigation = RETBLEED_MITIGATION_IBPB;
 		}
+	}
+}
 
-		/*
-		 * The Intel mitigation (IBRS or eIBRS) was already selected in
-		 * spectre_v2_select_mitigation().  'retbleed_mitigation' will
-		 * be set accordingly below.
-		 */
+static void __init retbleed_update_mitigation(void)
+{
+	if (!boot_cpu_has_bug(X86_BUG_RETBLEED))
+		return;
+	/*
+	 * Let IBRS trump all on Intel without affecting the effects of the
+	 * retbleed= cmdline option except for call depth based stuffing
+	 */
+	if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) {
+		switch (spectre_v2_enabled) {
+		case SPECTRE_V2_IBRS:
+			retbleed_mitigation = RETBLEED_MITIGATION_IBRS;
+			break;
+		case SPECTRE_V2_EIBRS:
+		case SPECTRE_V2_EIBRS_RETPOLINE:
+		case SPECTRE_V2_EIBRS_LFENCE:
+			retbleed_mitigation = RETBLEED_MITIGATION_EIBRS;
+			break;
+		default:
+			if (retbleed_mitigation != RETBLEED_MITIGATION_STUFF)
+				pr_err(RETBLEED_INTEL_MSG);
+		}
+	}
 
-		break;
+	if (retbleed_mitigation == RETBLEED_MITIGATION_STUFF) {
+		if (spectre_v2_enabled != SPECTRE_V2_RETPOLINE) {
+			pr_err("WARNING: retbleed=stuff depends on spectre_v2=retpoline\n");
+			retbleed_mitigation = RETBLEED_MITIGATION_AUTO;
+			/* Try again */
+			retbleed_select_mitigation();
+		}
 	}
 
+	pr_info("%s\n", retbleed_strings[retbleed_mitigation]);
+}
+
+
+static void __init retbleed_apply_mitigation(void)
+{
+	bool mitigate_smt = false;
+
 	switch (retbleed_mitigation) {
 	case RETBLEED_MITIGATION_UNRET:
 		setup_force_cpu_cap(X86_FEATURE_RETHUNK);
@@ -1223,27 +1232,6 @@ static void __init retbleed_select_mitigation(void)
 	    (retbleed_nosmt || cpu_mitigations_auto_nosmt()))
 		cpu_smt_disable(false);
 
-	/*
-	 * Let IBRS trump all on Intel without affecting the effects of the
-	 * retbleed= cmdline option except for call depth based stuffing
-	 */
-	if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) {
-		switch (spectre_v2_enabled) {
-		case SPECTRE_V2_IBRS:
-			retbleed_mitigation = RETBLEED_MITIGATION_IBRS;
-			break;
-		case SPECTRE_V2_EIBRS:
-		case SPECTRE_V2_EIBRS_RETPOLINE:
-		case SPECTRE_V2_EIBRS_LFENCE:
-			retbleed_mitigation = RETBLEED_MITIGATION_EIBRS;
-			break;
-		default:
-			if (retbleed_mitigation != RETBLEED_MITIGATION_STUFF)
-				pr_err(RETBLEED_INTEL_MSG);
-		}
-	}
-
-	pr_info("%s\n", retbleed_strings[retbleed_mitigation]);
 }
 
 #undef pr_fmt
@@ -1796,16 +1784,6 @@ static void __init spectre_v2_select_mitigation(void)
 			break;
 		}
 
-		if (IS_ENABLED(CONFIG_MITIGATION_IBRS_ENTRY) &&
-		    boot_cpu_has_bug(X86_BUG_RETBLEED) &&
-		    retbleed_cmd != RETBLEED_CMD_OFF &&
-		    retbleed_cmd != RETBLEED_CMD_STUFF &&
-		    boot_cpu_has(X86_FEATURE_IBRS) &&
-		    boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) {
-			mode = SPECTRE_V2_IBRS;
-			break;
-		}
-
 		mode = spectre_v2_select_retpoline();
 		break;
 
@@ -1948,7 +1926,7 @@ static void __init spectre_v2_select_mitigation(void)
 	    (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
 	     boot_cpu_data.x86_vendor == X86_VENDOR_HYGON)) {
 
-		if (retbleed_cmd != RETBLEED_CMD_IBPB) {
+		if (retbleed_mitigation != RETBLEED_MITIGATION_IBPB) {
 			setup_force_cpu_cap(X86_FEATURE_USE_IBPB_FW);
 			pr_info("Enabling Speculation Barrier for firmware calls\n");
 		}
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [RFC PATCH 12/34] x86/bugs: Restructure spectre_v2_user mitigation
  2024-09-12 19:08 [RFC PATCH 00/34] x86/bugs: Attack vector controls David Kaplan
                   ` (10 preceding siblings ...)
  2024-09-12 19:08 ` [RFC PATCH 11/34] x86/bugs: Restructure retbleed mitigation David Kaplan
@ 2024-09-12 19:08 ` David Kaplan
  2024-09-12 19:08 ` [RFC PATCH 13/34] x86/bugs: Restructure bhi mitigation David Kaplan
                   ` (22 subsequent siblings)
  34 siblings, 0 replies; 63+ messages in thread
From: David Kaplan @ 2024-09-12 19:08 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Restructure spectre_v2_user to use select/update/apply functions to
create consistent vulnerability handling.

The ibpb/stibp choices are first decided based on the spectre_v2_user
command line but can be modified by the spectre_v2 command line option
as well.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 143 ++++++++++++++++++++-----------------
 1 file changed, 79 insertions(+), 64 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 13143854ca42..eaef5a1cb4a3 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -60,6 +60,8 @@ static void __init retbleed_select_mitigation(void);
 static void __init retbleed_update_mitigation(void);
 static void __init retbleed_apply_mitigation(void);
 static void __init spectre_v2_user_select_mitigation(void);
+static void __init spectre_v2_user_update_mitigation(void);
+static void __init spectre_v2_user_apply_mitigation(void);
 static void __init ssb_select_mitigation(void);
 static void __init l1tf_select_mitigation(void);
 static void __init mds_select_mitigation(void);
@@ -183,11 +185,6 @@ void __init cpu_select_mitigations(void)
 	spectre_v1_select_mitigation();
 	spectre_v2_select_mitigation();
 	retbleed_select_mitigation();
-	/*
-	 * spectre_v2_user_select_mitigation() relies on the state set by
-	 * retbleed_select_mitigation(); specifically the STIBP selection is
-	 * forced for UNRET or IBPB.
-	 */
 	spectre_v2_user_select_mitigation();
 	ssb_select_mitigation();
 	l1tf_select_mitigation();
@@ -210,6 +207,7 @@ void __init cpu_select_mitigations(void)
 	 * choices.
 	 */
 	retbleed_update_mitigation();
+	spectre_v2_user_update_mitigation();
 	mds_update_mitigation();
 	taa_update_mitigation();
 	mmio_update_mitigation();
@@ -217,6 +215,7 @@ void __init cpu_select_mitigations(void)
 
 	spectre_v1_apply_mitigation();
 	retbleed_apply_mitigation();
+	spectre_v2_user_apply_mitigation();
 	mds_apply_mitigation();
 	taa_apply_mitigation();
 	mmio_apply_mitigation();
@@ -1311,6 +1310,8 @@ enum spectre_v2_mitigation_cmd {
 	SPECTRE_V2_CMD_IBRS,
 };
 
+enum spectre_v2_mitigation_cmd spectre_v2_cmd __ro_after_init = SPECTRE_V2_CMD_AUTO;
+
 enum spectre_v2_user_cmd {
 	SPECTRE_V2_USER_CMD_NONE,
 	SPECTRE_V2_USER_CMD_AUTO,
@@ -1349,22 +1350,14 @@ static void __init spec_v2_user_print_cond(const char *reason, bool secure)
 		pr_info("spectre_v2_user=%s forced on command line.\n", reason);
 }
 
-static __ro_after_init enum spectre_v2_mitigation_cmd spectre_v2_cmd;
-
 static enum spectre_v2_user_cmd __init
 spectre_v2_parse_user_cmdline(void)
 {
 	char arg[20];
 	int ret, i;
 
-	switch (spectre_v2_cmd) {
-	case SPECTRE_V2_CMD_NONE:
+	if (cpu_mitigations_off())
 		return SPECTRE_V2_USER_CMD_NONE;
-	case SPECTRE_V2_CMD_FORCE:
-		return SPECTRE_V2_USER_CMD_FORCE;
-	default:
-		break;
-	}
 
 	ret = cmdline_find_option(boot_command_line, "spectre_v2_user",
 				  arg, sizeof(arg));
@@ -1388,65 +1381,70 @@ static inline bool spectre_v2_in_ibrs_mode(enum spectre_v2_mitigation mode)
 	return spectre_v2_in_eibrs_mode(mode) || mode == SPECTRE_V2_IBRS;
 }
 
+
 static void __init
 spectre_v2_user_select_mitigation(void)
 {
-	enum spectre_v2_user_mitigation mode = SPECTRE_V2_USER_NONE;
-	bool smt_possible = IS_ENABLED(CONFIG_SMP);
 	enum spectre_v2_user_cmd cmd;
 
 	if (!boot_cpu_has(X86_FEATURE_IBPB) && !boot_cpu_has(X86_FEATURE_STIBP))
 		return;
 
-	if (cpu_smt_control == CPU_SMT_FORCE_DISABLED ||
-	    cpu_smt_control == CPU_SMT_NOT_SUPPORTED)
-		smt_possible = false;
-
 	cmd = spectre_v2_parse_user_cmdline();
 	switch (cmd) {
 	case SPECTRE_V2_USER_CMD_NONE:
-		goto set_mode;
+		return;
 	case SPECTRE_V2_USER_CMD_FORCE:
-		mode = SPECTRE_V2_USER_STRICT;
+		spectre_v2_user_ibpb = SPECTRE_V2_USER_STRICT;
+		spectre_v2_user_stibp = SPECTRE_V2_USER_STRICT;
 		break;
 	case SPECTRE_V2_USER_CMD_AUTO:
 	case SPECTRE_V2_USER_CMD_PRCTL:
+		spectre_v2_user_ibpb = SPECTRE_V2_USER_PRCTL;
+		spectre_v2_user_stibp = SPECTRE_V2_USER_PRCTL;
+		break;
 	case SPECTRE_V2_USER_CMD_PRCTL_IBPB:
-		mode = SPECTRE_V2_USER_PRCTL;
+		spectre_v2_user_ibpb = SPECTRE_V2_USER_STRICT;
+		spectre_v2_user_stibp = SPECTRE_V2_USER_PRCTL;
 		break;
 	case SPECTRE_V2_USER_CMD_SECCOMP:
-	case SPECTRE_V2_USER_CMD_SECCOMP_IBPB:
 		if (IS_ENABLED(CONFIG_SECCOMP))
-			mode = SPECTRE_V2_USER_SECCOMP;
+			spectre_v2_user_ibpb = SPECTRE_V2_USER_SECCOMP;
 		else
-			mode = SPECTRE_V2_USER_PRCTL;
+			spectre_v2_user_ibpb = SPECTRE_V2_USER_PRCTL;
+		spectre_v2_user_stibp = spectre_v2_user_ibpb;
+		break;
+	case SPECTRE_V2_USER_CMD_SECCOMP_IBPB:
+		spectre_v2_user_ibpb = SPECTRE_V2_USER_STRICT;
+		spectre_v2_user_stibp = SPECTRE_V2_USER_PRCTL;
 		break;
 	}
 
-	/* Initialize Indirect Branch Prediction Barrier */
-	if (boot_cpu_has(X86_FEATURE_IBPB)) {
-		setup_force_cpu_cap(X86_FEATURE_USE_IBPB);
+	/*
+	 * At this point, an STIBP mode other than "off" has been set.
+	 * If STIBP support is not being forced, check if STIBP always-on
+	 * is preferred.
+	 */
+	if (spectre_v2_user_stibp != SPECTRE_V2_USER_STRICT &&
+	    boot_cpu_has(X86_FEATURE_AMD_STIBP_ALWAYS_ON))
+		spectre_v2_user_stibp = SPECTRE_V2_USER_STRICT_PREFERRED;
+}
 
-		spectre_v2_user_ibpb = mode;
-		switch (cmd) {
-		case SPECTRE_V2_USER_CMD_NONE:
-			break;
-		case SPECTRE_V2_USER_CMD_FORCE:
-		case SPECTRE_V2_USER_CMD_PRCTL_IBPB:
-		case SPECTRE_V2_USER_CMD_SECCOMP_IBPB:
-			static_branch_enable(&switch_mm_always_ibpb);
-			spectre_v2_user_ibpb = SPECTRE_V2_USER_STRICT;
-			break;
-		case SPECTRE_V2_USER_CMD_PRCTL:
-		case SPECTRE_V2_USER_CMD_AUTO:
-		case SPECTRE_V2_USER_CMD_SECCOMP:
-			static_branch_enable(&switch_mm_cond_ibpb);
-			break;
-		}
+static void __init spectre_v2_user_update_mitigation(void)
+{
+	bool smt_possible = IS_ENABLED(CONFIG_SMP);
 
-		pr_info("mitigation: Enabling %s Indirect Branch Prediction Barrier\n",
-			static_key_enabled(&switch_mm_always_ibpb) ?
-			"always-on" : "conditional");
+	if (cpu_smt_control == CPU_SMT_FORCE_DISABLED ||
+	    cpu_smt_control == CPU_SMT_NOT_SUPPORTED)
+		smt_possible = false;
+
+	/* The spectre_v2 cmd line can override spectre_v2_user options */
+	if (spectre_v2_cmd == SPECTRE_V2_CMD_NONE) {
+		spectre_v2_user_ibpb = SPECTRE_V2_USER_NONE;
+		spectre_v2_user_stibp = SPECTRE_V2_USER_NONE;
+	} else if (spectre_v2_cmd == SPECTRE_V2_CMD_FORCE) {
+		spectre_v2_user_ibpb = SPECTRE_V2_USER_STRICT;
+		spectre_v2_user_stibp = SPECTRE_V2_USER_STRICT;
 	}
 
 	/*
@@ -1464,30 +1462,47 @@ spectre_v2_user_select_mitigation(void)
 	if (!boot_cpu_has(X86_FEATURE_STIBP) ||
 	    !smt_possible ||
 	    (spectre_v2_in_eibrs_mode(spectre_v2_enabled) &&
-	     !boot_cpu_has(X86_FEATURE_AUTOIBRS)))
+	     !boot_cpu_has(X86_FEATURE_AUTOIBRS))) {
+		spectre_v2_user_stibp = SPECTRE_V2_USER_NONE;
 		return;
-
-	/*
-	 * At this point, an STIBP mode other than "off" has been set.
-	 * If STIBP support is not being forced, check if STIBP always-on
-	 * is preferred.
-	 */
-	if (mode != SPECTRE_V2_USER_STRICT &&
-	    boot_cpu_has(X86_FEATURE_AMD_STIBP_ALWAYS_ON))
-		mode = SPECTRE_V2_USER_STRICT_PREFERRED;
+	}
 
 	if (retbleed_mitigation == RETBLEED_MITIGATION_UNRET ||
 	    retbleed_mitigation == RETBLEED_MITIGATION_IBPB) {
-		if (mode != SPECTRE_V2_USER_STRICT &&
-		    mode != SPECTRE_V2_USER_STRICT_PREFERRED)
+		if (spectre_v2_user_stibp != SPECTRE_V2_USER_NONE &&
+		    spectre_v2_user_stibp != SPECTRE_V2_USER_STRICT &&
+		    spectre_v2_user_stibp != SPECTRE_V2_USER_STRICT_PREFERRED)
 			pr_info("Selecting STIBP always-on mode to complement retbleed mitigation\n");
-		mode = SPECTRE_V2_USER_STRICT_PREFERRED;
+		spectre_v2_user_stibp = SPECTRE_V2_USER_STRICT_PREFERRED;
 	}
+	pr_info("%s\n", spectre_v2_user_strings[spectre_v2_user_stibp]);
+}
 
-	spectre_v2_user_stibp = mode;
+static void __init spectre_v2_user_apply_mitigation(void)
+{
+	/* Initialize Indirect Branch Prediction Barrier */
+	if (boot_cpu_has(X86_FEATURE_IBPB) &&
+	    spectre_v2_user_ibpb != SPECTRE_V2_USER_NONE) {
+		setup_force_cpu_cap(X86_FEATURE_USE_IBPB);
 
-set_mode:
-	pr_info("%s\n", spectre_v2_user_strings[mode]);
+		switch (spectre_v2_user_ibpb) {
+		case SPECTRE_V2_USER_NONE:
+			break;
+		case SPECTRE_V2_USER_STRICT:
+			static_branch_enable(&switch_mm_always_ibpb);
+			break;
+		case SPECTRE_V2_USER_PRCTL:
+		case SPECTRE_V2_USER_SECCOMP:
+			static_branch_enable(&switch_mm_cond_ibpb);
+			break;
+		default:
+			break;
+		}
+
+		pr_info("mitigation: Enabling %s Indirect Branch Prediction Barrier\n",
+			static_key_enabled(&switch_mm_always_ibpb) ?
+			"always-on" : "conditional");
+	}
 }
 
 static const char * const spectre_v2_strings[] = {
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [RFC PATCH 13/34] x86/bugs: Restructure bhi mitigation
  2024-09-12 19:08 [RFC PATCH 00/34] x86/bugs: Attack vector controls David Kaplan
                   ` (11 preceding siblings ...)
  2024-09-12 19:08 ` [RFC PATCH 12/34] x86/bugs: Restructure spectre_v2_user mitigation David Kaplan
@ 2024-09-12 19:08 ` David Kaplan
  2024-10-08 12:41   ` Nikolay Borisov
  2024-09-12 19:08 ` [RFC PATCH 14/34] x86/bugs: Restructure spectre_v2 mitigation David Kaplan
                   ` (21 subsequent siblings)
  34 siblings, 1 reply; 63+ messages in thread
From: David Kaplan @ 2024-09-12 19:08 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Restructure bhi mitigation to use select/apply functions to create
consistent vulnerability handling.

Define new AUTO mitigation for bhi.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 22 ++++++++++++++++++----
 1 file changed, 18 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index eaef5a1cb4a3..da6ca2fc939d 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -82,6 +82,8 @@ static void __init l1d_flush_select_mitigation(void);
 static void __init srso_select_mitigation(void);
 static void __init gds_select_mitigation(void);
 static void __init gds_apply_mitigation(void);
+static void __init bhi_select_mitigation(void);
+static void __init bhi_apply_mitigation(void);
 
 /* The base value of the SPEC_CTRL MSR without task-specific bits set */
 u64 x86_spec_ctrl_base;
@@ -201,6 +203,7 @@ void __init cpu_select_mitigations(void)
 	 */
 	srso_select_mitigation();
 	gds_select_mitigation();
+	bhi_select_mitigation();
 
 	/*
 	 * After mitigations are selected, some may need to update their
@@ -222,6 +225,7 @@ void __init cpu_select_mitigations(void)
 	rfds_apply_mitigation();
 	srbds_apply_mitigation();
 	gds_apply_mitigation();
+	bhi_apply_mitigation();
 }
 
 /*
@@ -1719,12 +1723,13 @@ static bool __init spec_ctrl_bhi_dis(void)
 
 enum bhi_mitigations {
 	BHI_MITIGATION_OFF,
+	BHI_MITIGATION_AUTO,
 	BHI_MITIGATION_ON,
 	BHI_MITIGATION_VMEXIT_ONLY,
 };
 
 static enum bhi_mitigations bhi_mitigation __ro_after_init =
-	IS_ENABLED(CONFIG_MITIGATION_SPECTRE_BHI) ? BHI_MITIGATION_ON : BHI_MITIGATION_OFF;
+	IS_ENABLED(CONFIG_MITIGATION_SPECTRE_BHI) ? BHI_MITIGATION_AUTO : BHI_MITIGATION_OFF;
 
 static int __init spectre_bhi_parse_cmdline(char *str)
 {
@@ -1745,6 +1750,18 @@ static int __init spectre_bhi_parse_cmdline(char *str)
 early_param("spectre_bhi", spectre_bhi_parse_cmdline);
 
 static void __init bhi_select_mitigation(void)
+{
+	if (!boot_cpu_has(X86_BUG_BHI) || cpu_mitigations_off())
+		return;
+
+	if (bhi_mitigation == BHI_MITIGATION_OFF)
+		return;
+
+	if (bhi_mitigation == BHI_MITIGATION_AUTO)
+		bhi_mitigation = BHI_MITIGATION_ON;
+}
+
+static void __init bhi_apply_mitigation(void)
 {
 	if (bhi_mitigation == BHI_MITIGATION_OFF)
 		return;
@@ -1876,9 +1893,6 @@ static void __init spectre_v2_select_mitigation(void)
 	    mode == SPECTRE_V2_RETPOLINE)
 		spec_ctrl_disable_kernel_rrsba();
 
-	if (boot_cpu_has(X86_BUG_BHI))
-		bhi_select_mitigation();
-
 	spectre_v2_enabled = mode;
 	pr_info("%s\n", spectre_v2_strings[mode]);
 
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [RFC PATCH 14/34] x86/bugs: Restructure spectre_v2 mitigation
  2024-09-12 19:08 [RFC PATCH 00/34] x86/bugs: Attack vector controls David Kaplan
                   ` (12 preceding siblings ...)
  2024-09-12 19:08 ` [RFC PATCH 13/34] x86/bugs: Restructure bhi mitigation David Kaplan
@ 2024-09-12 19:08 ` David Kaplan
  2024-09-12 19:08 ` [RFC PATCH 15/34] x86/bugs: Restructure ssb mitigation David Kaplan
                   ` (20 subsequent siblings)
  34 siblings, 0 replies; 63+ messages in thread
From: David Kaplan @ 2024-09-12 19:08 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Restructure spectre_v2 to use select/update/apply functions to create
consistent vulnerability handling.

The spectre_v2 mitigation may be updated based on the selected retbleed
mitigation.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 52 ++++++++++++++++++++++++++------------
 1 file changed, 36 insertions(+), 16 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index da6ca2fc939d..32ebe9e934fe 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -56,6 +56,8 @@
 static void __init spectre_v1_select_mitigation(void);
 static void __init spectre_v1_apply_mitigation(void);
 static void __init spectre_v2_select_mitigation(void);
+static void __init spectre_v2_update_mitigation(void);
+static void __init spectre_v2_apply_mitigation(void);
 static void __init retbleed_select_mitigation(void);
 static void __init retbleed_update_mitigation(void);
 static void __init retbleed_apply_mitigation(void);
@@ -209,6 +211,7 @@ void __init cpu_select_mitigations(void)
 	 * After mitigations are selected, some may need to update their
 	 * choices.
 	 */
+	spectre_v2_update_mitigation();
 	retbleed_update_mitigation();
 	spectre_v2_user_update_mitigation();
 	mds_update_mitigation();
@@ -217,6 +220,7 @@ void __init cpu_select_mitigations(void)
 	rfds_update_mitigation();
 
 	spectre_v1_apply_mitigation();
+	spectre_v2_apply_mitigation();
 	retbleed_apply_mitigation();
 	spectre_v2_user_apply_mitigation();
 	mds_apply_mitigation();
@@ -1794,18 +1798,18 @@ static void __init bhi_apply_mitigation(void)
 
 static void __init spectre_v2_select_mitigation(void)
 {
-	enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
 	enum spectre_v2_mitigation mode = SPECTRE_V2_NONE;
+	spectre_v2_cmd = spectre_v2_parse_cmdline();
 
 	/*
 	 * If the CPU is not affected and the command line mode is NONE or AUTO
 	 * then nothing to do.
 	 */
 	if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V2) &&
-	    (cmd == SPECTRE_V2_CMD_NONE || cmd == SPECTRE_V2_CMD_AUTO))
+	    (spectre_v2_cmd == SPECTRE_V2_CMD_NONE || spectre_v2_cmd == SPECTRE_V2_CMD_AUTO))
 		return;
 
-	switch (cmd) {
+	switch (spectre_v2_cmd) {
 	case SPECTRE_V2_CMD_NONE:
 		return;
 
@@ -1849,10 +1853,29 @@ static void __init spectre_v2_select_mitigation(void)
 		break;
 	}
 
-	if (mode == SPECTRE_V2_EIBRS && unprivileged_ebpf_enabled())
+	spectre_v2_enabled = mode;
+}
+
+static void __init spectre_v2_update_mitigation(void)
+{
+	if (spectre_v2_cmd == SPECTRE_V2_CMD_AUTO) {
+		if (IS_ENABLED(CONFIG_MITIGATION_IBRS_ENTRY) &&
+		    boot_cpu_has_bug(X86_BUG_RETBLEED) &&
+		    retbleed_mitigation != RETBLEED_MITIGATION_NONE &&
+		    retbleed_mitigation != RETBLEED_MITIGATION_STUFF &&
+		    boot_cpu_has(X86_FEATURE_IBRS) &&
+		    boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) {
+			spectre_v2_enabled = SPECTRE_V2_IBRS;
+		}
+	}
+}
+
+static void __init spectre_v2_apply_mitigation(void)
+{
+	if (spectre_v2_enabled == SPECTRE_V2_EIBRS && unprivileged_ebpf_enabled())
 		pr_err(SPECTRE_V2_EIBRS_EBPF_MSG);
 
-	if (spectre_v2_in_ibrs_mode(mode)) {
+	if (spectre_v2_in_ibrs_mode(spectre_v2_enabled)) {
 		if (boot_cpu_has(X86_FEATURE_AUTOIBRS)) {
 			msr_set_bit(MSR_EFER, _EFER_AUTOIBRS);
 		} else {
@@ -1861,7 +1884,7 @@ static void __init spectre_v2_select_mitigation(void)
 		}
 	}
 
-	switch (mode) {
+	switch (spectre_v2_enabled) {
 	case SPECTRE_V2_NONE:
 	case SPECTRE_V2_EIBRS:
 		break;
@@ -1888,13 +1911,12 @@ static void __init spectre_v2_select_mitigation(void)
 	 * JMPs gets protection against BHI and Intramode-BTI, but RET
 	 * prediction from a non-RSB predictor is still a risk.
 	 */
-	if (mode == SPECTRE_V2_EIBRS_LFENCE ||
-	    mode == SPECTRE_V2_EIBRS_RETPOLINE ||
-	    mode == SPECTRE_V2_RETPOLINE)
+	if (spectre_v2_enabled == SPECTRE_V2_EIBRS_LFENCE ||
+	    spectre_v2_enabled == SPECTRE_V2_EIBRS_RETPOLINE ||
+	    spectre_v2_enabled == SPECTRE_V2_RETPOLINE)
 		spec_ctrl_disable_kernel_rrsba();
 
-	spectre_v2_enabled = mode;
-	pr_info("%s\n", spectre_v2_strings[mode]);
+	pr_info("%s\n", spectre_v2_strings[spectre_v2_enabled]);
 
 	/*
 	 * If Spectre v2 protection has been enabled, fill the RSB during a
@@ -1937,7 +1959,7 @@ static void __init spectre_v2_select_mitigation(void)
 	setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW);
 	pr_info("Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch\n");
 
-	spectre_v2_determine_rsb_fill_type_at_vmexit(mode);
+	spectre_v2_determine_rsb_fill_type_at_vmexit(spectre_v2_enabled);
 
 	/*
 	 * Retpoline protects the kernel, but doesn't protect firmware.  IBRS
@@ -1960,13 +1982,11 @@ static void __init spectre_v2_select_mitigation(void)
 			pr_info("Enabling Speculation Barrier for firmware calls\n");
 		}
 
-	} else if (boot_cpu_has(X86_FEATURE_IBRS) && !spectre_v2_in_ibrs_mode(mode)) {
+	} else if (boot_cpu_has(X86_FEATURE_IBRS) &&
+		   !spectre_v2_in_ibrs_mode(spectre_v2_enabled)) {
 		setup_force_cpu_cap(X86_FEATURE_USE_IBRS_FW);
 		pr_info("Enabling Restricted Speculation for firmware calls\n");
 	}
-
-	/* Set up IBPB and STIBP depending on the general spectre V2 command */
-	spectre_v2_cmd = cmd;
 }
 
 static void update_stibp_msr(void * __unused)
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [RFC PATCH 15/34] x86/bugs: Restructure ssb mitigation
  2024-09-12 19:08 [RFC PATCH 00/34] x86/bugs: Attack vector controls David Kaplan
                   ` (13 preceding siblings ...)
  2024-09-12 19:08 ` [RFC PATCH 14/34] x86/bugs: Restructure spectre_v2 mitigation David Kaplan
@ 2024-09-12 19:08 ` David Kaplan
  2024-10-08 15:21   ` Nikolay Borisov
  2024-09-12 19:08 ` [RFC PATCH 16/34] x86/bugs: Restructure l1tf mitigation David Kaplan
                   ` (19 subsequent siblings)
  34 siblings, 1 reply; 63+ messages in thread
From: David Kaplan @ 2024-09-12 19:08 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Restructure ssb to use select/apply functions to create consistent
vulnerability handling.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 26 ++++++++++++++++----------
 1 file changed, 16 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 32ebe9e934fe..c996c1521851 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -65,6 +65,7 @@ static void __init spectre_v2_user_select_mitigation(void);
 static void __init spectre_v2_user_update_mitigation(void);
 static void __init spectre_v2_user_apply_mitigation(void);
 static void __init ssb_select_mitigation(void);
+static void __init ssb_apply_mitigation(void);
 static void __init l1tf_select_mitigation(void);
 static void __init mds_select_mitigation(void);
 static void __init mds_update_mitigation(void);
@@ -223,6 +224,7 @@ void __init cpu_select_mitigations(void)
 	spectre_v2_apply_mitigation();
 	retbleed_apply_mitigation();
 	spectre_v2_user_apply_mitigation();
+	ssb_apply_mitigation();
 	mds_apply_mitigation();
 	taa_apply_mitigation();
 	mmio_apply_mitigation();
@@ -2211,13 +2213,26 @@ static enum ssb_mitigation __init __ssb_select_mitigation(void)
 		break;
 	}
 
+	return mode;
+}
+
+static void ssb_select_mitigation(void)
+{
+	ssb_mode = __ssb_select_mitigation();
+
+	if (boot_cpu_has_bug(X86_BUG_SPEC_STORE_BYPASS))
+		pr_info("%s\n", ssb_strings[ssb_mode]);
+}
+
+static void __init ssb_apply_mitigation(void)
+{
 	/*
 	 * We have three CPU feature flags that are in play here:
 	 *  - X86_BUG_SPEC_STORE_BYPASS - CPU is susceptible.
 	 *  - X86_FEATURE_SSBD - CPU is able to turn off speculative store bypass
 	 *  - X86_FEATURE_SPEC_STORE_BYPASS_DISABLE - engage the mitigation
 	 */
-	if (mode == SPEC_STORE_BYPASS_DISABLE) {
+	if (ssb_mode == SPEC_STORE_BYPASS_DISABLE) {
 		setup_force_cpu_cap(X86_FEATURE_SPEC_STORE_BYPASS_DISABLE);
 		/*
 		 * Intel uses the SPEC CTRL MSR Bit(2) for this, while AMD may
@@ -2232,15 +2247,6 @@ static enum ssb_mitigation __init __ssb_select_mitigation(void)
 		}
 	}
 
-	return mode;
-}
-
-static void ssb_select_mitigation(void)
-{
-	ssb_mode = __ssb_select_mitigation();
-
-	if (boot_cpu_has_bug(X86_BUG_SPEC_STORE_BYPASS))
-		pr_info("%s\n", ssb_strings[ssb_mode]);
 }
 
 #undef pr_fmt
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [RFC PATCH 16/34] x86/bugs: Restructure l1tf mitigation
  2024-09-12 19:08 [RFC PATCH 00/34] x86/bugs: Attack vector controls David Kaplan
                   ` (14 preceding siblings ...)
  2024-09-12 19:08 ` [RFC PATCH 15/34] x86/bugs: Restructure ssb mitigation David Kaplan
@ 2024-09-12 19:08 ` David Kaplan
  2024-09-12 19:08 ` [RFC PATCH 17/34] x86/bugs: Restructure srso mitigation David Kaplan
                   ` (18 subsequent siblings)
  34 siblings, 0 replies; 63+ messages in thread
From: David Kaplan @ 2024-09-12 19:08 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Restructure l1tf to use select/apply functions to create consistent
vulnerability handling.

Define new AUTO mitigation for l1tf.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/include/asm/processor.h |  1 +
 arch/x86/kernel/cpu/bugs.c       | 28 ++++++++++++++++++++--------
 2 files changed, 21 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 187805f7db3f..ba4005a7c0e3 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -728,6 +728,7 @@ void store_cpu_caps(struct cpuinfo_x86 *info);
 
 enum l1tf_mitigations {
 	L1TF_MITIGATION_OFF,
+	L1TF_MITIGATION_AUTO,
 	L1TF_MITIGATION_FLUSH_NOWARN,
 	L1TF_MITIGATION_FLUSH,
 	L1TF_MITIGATION_FLUSH_NOSMT,
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index c996c1521851..ba10aa37d949 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -67,6 +67,7 @@ static void __init spectre_v2_user_apply_mitigation(void);
 static void __init ssb_select_mitigation(void);
 static void __init ssb_apply_mitigation(void);
 static void __init l1tf_select_mitigation(void);
+static void __init l1tf_apply_mitigation(void);
 static void __init mds_select_mitigation(void);
 static void __init mds_update_mitigation(void);
 static void __init mds_apply_mitigation(void);
@@ -225,6 +226,7 @@ void __init cpu_select_mitigations(void)
 	retbleed_apply_mitigation();
 	spectre_v2_user_apply_mitigation();
 	ssb_apply_mitigation();
+	l1tf_apply_mitigation();
 	mds_apply_mitigation();
 	taa_apply_mitigation();
 	mmio_apply_mitigation();
@@ -2502,7 +2504,7 @@ EXPORT_SYMBOL_GPL(itlb_multihit_kvm_mitigation);
 
 /* Default mitigation for L1TF-affected CPUs */
 enum l1tf_mitigations l1tf_mitigation __ro_after_init =
-	IS_ENABLED(CONFIG_MITIGATION_L1TF) ? L1TF_MITIGATION_FLUSH : L1TF_MITIGATION_OFF;
+	IS_ENABLED(CONFIG_MITIGATION_L1TF) ? L1TF_MITIGATION_AUTO : L1TF_MITIGATION_OFF;
 #if IS_ENABLED(CONFIG_KVM_INTEL)
 EXPORT_SYMBOL_GPL(l1tf_mitigation);
 #endif
@@ -2550,22 +2552,32 @@ static void override_cache_bits(struct cpuinfo_x86 *c)
 
 static void __init l1tf_select_mitigation(void)
 {
-	u64 half_pa;
-
-	if (!boot_cpu_has_bug(X86_BUG_L1TF))
+	if (!boot_cpu_has_bug(X86_BUG_L1TF) || cpu_mitigations_off()) {
+		l1tf_mitigation = L1TF_MITIGATION_OFF;
 		return;
+	}
 
-	if (cpu_mitigations_off())
-		l1tf_mitigation = L1TF_MITIGATION_OFF;
-	else if (cpu_mitigations_auto_nosmt())
-		l1tf_mitigation = L1TF_MITIGATION_FLUSH_NOSMT;
+	if (l1tf_mitigation == L1TF_MITIGATION_AUTO) {
+		if (cpu_mitigations_auto_nosmt())
+			l1tf_mitigation = L1TF_MITIGATION_FLUSH_NOSMT;
+		else
+			l1tf_mitigation = L1TF_MITIGATION_FLUSH;
+	}
+
+}
+
+static void __init l1tf_apply_mitigation(void)
+{
+	u64 half_pa;
 
 	override_cache_bits(&boot_cpu_data);
 
 	switch (l1tf_mitigation) {
 	case L1TF_MITIGATION_OFF:
+		return;
 	case L1TF_MITIGATION_FLUSH_NOWARN:
 	case L1TF_MITIGATION_FLUSH:
+	case L1TF_MITIGATION_AUTO:
 		break;
 	case L1TF_MITIGATION_FLUSH_NOSMT:
 	case L1TF_MITIGATION_FULL:
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [RFC PATCH 17/34] x86/bugs: Restructure srso mitigation
  2024-09-12 19:08 [RFC PATCH 00/34] x86/bugs: Attack vector controls David Kaplan
                   ` (15 preceding siblings ...)
  2024-09-12 19:08 ` [RFC PATCH 16/34] x86/bugs: Restructure l1tf mitigation David Kaplan
@ 2024-09-12 19:08 ` David Kaplan
  2024-09-12 19:08 ` [RFC PATCH 18/34] Documentation/x86: Document the new attack vector controls David Kaplan
                   ` (17 subsequent siblings)
  34 siblings, 0 replies; 63+ messages in thread
From: David Kaplan @ 2024-09-12 19:08 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Restructure srso to use select/update/apply functions to create
consistent vulnerability handling.  Like with retbleed, the command line
options directly select mitigations which can later be modified.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 136 ++++++++++++++++++-------------------
 1 file changed, 68 insertions(+), 68 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index ba10aa37d949..334fd2c5251d 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -84,6 +84,8 @@ static void __init srbds_select_mitigation(void);
 static void __init srbds_apply_mitigation(void);
 static void __init l1d_flush_select_mitigation(void);
 static void __init srso_select_mitigation(void);
+static void __init srso_update_mitigation(void);
+static void __init srso_apply_mitigation(void);
 static void __init gds_select_mitigation(void);
 static void __init gds_apply_mitigation(void);
 static void __init bhi_select_mitigation(void);
@@ -200,11 +202,6 @@ void __init cpu_select_mitigations(void)
 	rfds_select_mitigation();
 	srbds_select_mitigation();
 	l1d_flush_select_mitigation();
-
-	/*
-	 * srso_select_mitigation() depends and must run after
-	 * retbleed_select_mitigation().
-	 */
 	srso_select_mitigation();
 	gds_select_mitigation();
 	bhi_select_mitigation();
@@ -220,6 +217,7 @@ void __init cpu_select_mitigations(void)
 	taa_update_mitigation();
 	mmio_update_mitigation();
 	rfds_update_mitigation();
+	srso_update_mitigation();
 
 	spectre_v1_apply_mitigation();
 	spectre_v2_apply_mitigation();
@@ -232,6 +230,7 @@ void __init cpu_select_mitigations(void)
 	mmio_apply_mitigation();
 	rfds_apply_mitigation();
 	srbds_apply_mitigation();
+	srso_apply_mitigation();
 	gds_apply_mitigation();
 	bhi_apply_mitigation();
 }
@@ -2637,6 +2636,7 @@ early_param("l1tf", l1tf_cmdline);
 
 enum srso_mitigation {
 	SRSO_MITIGATION_NONE,
+	SRSO_MITIGATION_AUTO,
 	SRSO_MITIGATION_UCODE_NEEDED,
 	SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED,
 	SRSO_MITIGATION_MICROCODE,
@@ -2645,14 +2645,6 @@ enum srso_mitigation {
 	SRSO_MITIGATION_IBPB_ON_VMEXIT,
 };
 
-enum srso_mitigation_cmd {
-	SRSO_CMD_OFF,
-	SRSO_CMD_MICROCODE,
-	SRSO_CMD_SAFE_RET,
-	SRSO_CMD_IBPB,
-	SRSO_CMD_IBPB_ON_VMEXIT,
-};
-
 static const char * const srso_strings[] = {
 	[SRSO_MITIGATION_NONE]			= "Vulnerable",
 	[SRSO_MITIGATION_UCODE_NEEDED]		= "Vulnerable: No microcode",
@@ -2663,8 +2655,7 @@ static const char * const srso_strings[] = {
 	[SRSO_MITIGATION_IBPB_ON_VMEXIT]	= "Mitigation: IBPB on VMEXIT only"
 };
 
-static enum srso_mitigation srso_mitigation __ro_after_init = SRSO_MITIGATION_NONE;
-static enum srso_mitigation_cmd srso_cmd __ro_after_init = SRSO_CMD_SAFE_RET;
+static enum srso_mitigation srso_mitigation __ro_after_init = SRSO_MITIGATION_AUTO;
 
 static int __init srso_parse_cmdline(char *str)
 {
@@ -2672,15 +2663,15 @@ static int __init srso_parse_cmdline(char *str)
 		return -EINVAL;
 
 	if (!strcmp(str, "off"))
-		srso_cmd = SRSO_CMD_OFF;
+		srso_mitigation = SRSO_MITIGATION_NONE;
 	else if (!strcmp(str, "microcode"))
-		srso_cmd = SRSO_CMD_MICROCODE;
+		srso_mitigation = SRSO_MITIGATION_MICROCODE;
 	else if (!strcmp(str, "safe-ret"))
-		srso_cmd = SRSO_CMD_SAFE_RET;
+		srso_mitigation = SRSO_MITIGATION_SAFE_RET;
 	else if (!strcmp(str, "ibpb"))
-		srso_cmd = SRSO_CMD_IBPB;
+		srso_mitigation = SRSO_MITIGATION_IBPB;
 	else if (!strcmp(str, "ibpb-vmexit"))
-		srso_cmd = SRSO_CMD_IBPB_ON_VMEXIT;
+		srso_mitigation = SRSO_MITIGATION_IBPB_ON_VMEXIT;
 	else
 		pr_err("Ignoring unknown SRSO option (%s).", str);
 
@@ -2696,12 +2687,16 @@ static void __init srso_select_mitigation(void)
 
 	if (!boot_cpu_has_bug(X86_BUG_SRSO) ||
 	    cpu_mitigations_off() ||
-	    srso_cmd == SRSO_CMD_OFF) {
+	    srso_mitigation == SRSO_MITIGATION_NONE) {
 		if (boot_cpu_has(X86_FEATURE_SBPB))
 			x86_pred_cmd = PRED_CMD_SBPB;
 		return;
 	}
 
+	/* Default mitigation */
+	if (srso_mitigation == SRSO_MITIGATION_AUTO)
+		srso_mitigation = SRSO_MITIGATION_SAFE_RET;
+
 	if (has_microcode) {
 		/*
 		 * Zen1/2 with SMT off aren't vulnerable after the right
@@ -2713,29 +2708,59 @@ static void __init srso_select_mitigation(void)
 			setup_force_cpu_cap(X86_FEATURE_SRSO_NO);
 			return;
 		}
-
-		if (retbleed_mitigation == RETBLEED_MITIGATION_IBPB) {
-			srso_mitigation = SRSO_MITIGATION_IBPB;
-			goto out;
-		}
 	} else {
 		pr_warn("IBPB-extending microcode not applied!\n");
 		pr_warn(SRSO_NOTICE);
 
-		/* may be overwritten by SRSO_CMD_SAFE_RET below */
-		srso_mitigation = SRSO_MITIGATION_UCODE_NEEDED;
+		/* Fall-back to Safe-RET */
+		srso_mitigation = SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED;
 	}
 
-	switch (srso_cmd) {
-	case SRSO_CMD_MICROCODE:
-		if (has_microcode) {
-			srso_mitigation = SRSO_MITIGATION_MICROCODE;
-			pr_warn(SRSO_NOTICE);
-		}
+	switch (srso_mitigation) {
+	case SRSO_MITIGATION_MICROCODE:
+		pr_warn(SRSO_NOTICE);
+		break;
+
+	case SRSO_MITIGATION_SAFE_RET:
+	case SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED:
+		if (!IS_ENABLED(CONFIG_MITIGATION_SRSO))
+			pr_err("WARNING: kernel not compiled with MITIGATION_SRSO.\n");
 		break;
 
-	case SRSO_CMD_SAFE_RET:
-		if (IS_ENABLED(CONFIG_MITIGATION_SRSO)) {
+	case SRSO_MITIGATION_IBPB:
+		if (!IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY))
+			pr_err("WARNING: kernel not compiled with MITIGATION_IBPB_ENTRY.\n");
+		break;
+
+	case SRSO_MITIGATION_IBPB_ON_VMEXIT:
+		if (!IS_ENABLED(CONFIG_MITIGATION_SRSO))
+			pr_err("WARNING: kernel not compiled with MITIGATION_SRSO.\n");
+		break;
+	default:
+		break;
+	}
+}
+
+static void __init srso_update_mitigation(void)
+{
+	/* If retbleed is using IBPB, that works for SRSO as well */
+	if (retbleed_mitigation == RETBLEED_MITIGATION_IBPB)
+		srso_mitigation = SRSO_MITIGATION_IBPB;
+
+	pr_info("%s\n", srso_strings[srso_mitigation]);
+}
+
+static void __init srso_apply_mitigation(void)
+{
+	if (!boot_cpu_has_bug(X86_BUG_SRSO) ||
+	     srso_mitigation == SRSO_MITIGATION_NONE) {
+		if (boot_cpu_has(X86_FEATURE_SBPB))
+			x86_pred_cmd = PRED_CMD_SBPB;
+		return;
+	}
+	switch (srso_mitigation) {
+	case SRSO_MITIGATION_SAFE_RET:
+	case SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED:
 			/*
 			 * Enable the return thunk for generated code
 			 * like ftrace, static_call, etc.
@@ -2750,42 +2775,17 @@ static void __init srso_select_mitigation(void)
 				setup_force_cpu_cap(X86_FEATURE_SRSO);
 				x86_return_thunk = srso_return_thunk;
 			}
-			if (has_microcode)
-				srso_mitigation = SRSO_MITIGATION_SAFE_RET;
-			else
-				srso_mitigation = SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED;
-		} else {
-			pr_err("WARNING: kernel not compiled with MITIGATION_SRSO.\n");
-		}
-		break;
-
-	case SRSO_CMD_IBPB:
-		if (IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY)) {
-			if (has_microcode) {
-				setup_force_cpu_cap(X86_FEATURE_ENTRY_IBPB);
-				srso_mitigation = SRSO_MITIGATION_IBPB;
-			}
-		} else {
-			pr_err("WARNING: kernel not compiled with MITIGATION_IBPB_ENTRY.\n");
-		}
-		break;
-
-	case SRSO_CMD_IBPB_ON_VMEXIT:
-		if (IS_ENABLED(CONFIG_MITIGATION_SRSO)) {
-			if (!boot_cpu_has(X86_FEATURE_ENTRY_IBPB) && has_microcode) {
-				setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT);
-				srso_mitigation = SRSO_MITIGATION_IBPB_ON_VMEXIT;
-			}
-		} else {
-			pr_err("WARNING: kernel not compiled with MITIGATION_SRSO.\n");
-                }
 		break;
+	case SRSO_MITIGATION_IBPB:
+			setup_force_cpu_cap(X86_FEATURE_ENTRY_IBPB);
+			break;
+	case SRSO_MITIGATION_IBPB_ON_VMEXIT:
+			setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT);
+			break;
 	default:
-		break;
+			break;
 	}
 
-out:
-	pr_info("%s\n", srso_strings[srso_mitigation]);
 }
 
 #undef pr_fmt
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [RFC PATCH 18/34] Documentation/x86: Document the new attack vector controls
  2024-09-12 19:08 [RFC PATCH 00/34] x86/bugs: Attack vector controls David Kaplan
                   ` (16 preceding siblings ...)
  2024-09-12 19:08 ` [RFC PATCH 17/34] x86/bugs: Restructure srso mitigation David Kaplan
@ 2024-09-12 19:08 ` David Kaplan
  2024-10-01  0:43   ` Manwaring, Derek
  2024-09-12 19:08 ` [RFC PATCH 19/34] x86/bugs: Define attack vectors David Kaplan
                   ` (16 subsequent siblings)
  34 siblings, 1 reply; 63+ messages in thread
From: David Kaplan @ 2024-09-12 19:08 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Document the 5 new attack vector command line options, how they
interact with existing vulnerability controls, and recommendations on
when they can be disabled.

Note that while mitigating against untrusted userspace requires both
mitigate_user_kernel and mitigate_user_user, these are kept separate.
The kernel can control what code executes inside of it and that may
affect the risk associated with vulnerabilities especially if new kernel
mitigations are implemented.  The same isn't typically true of userspace.

In other words, the risk associated with user_user or guest_guest
attacks is unlikely to change over time.  While the risk associated with
user_kernel or guest_host attacks may change.  Therefore, these controls
are separated.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 .../hw-vuln/attack_vector_controls.rst        | 172 ++++++++++++++++++
 Documentation/admin-guide/hw-vuln/index.rst   |   1 +
 2 files changed, 173 insertions(+)
 create mode 100644 Documentation/admin-guide/hw-vuln/attack_vector_controls.rst

diff --git a/Documentation/admin-guide/hw-vuln/attack_vector_controls.rst b/Documentation/admin-guide/hw-vuln/attack_vector_controls.rst
new file mode 100644
index 000000000000..4f77e1e69090
--- /dev/null
+++ b/Documentation/admin-guide/hw-vuln/attack_vector_controls.rst
@@ -0,0 +1,172 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+Attack Vector Controls
+======================
+
+Attack vector controls provide a simple method to configure only the mitigations
+for CPU vulnerabilities which are relevant given the intended use of a system.
+Administrators are encouraged to consider which attack vectors are relevant and
+disable all others in order to recoup system performance.
+
+When new relevant CPU vulnerabilities are found, they will be added to these
+attack vector controls so administrators will likely not need to reconfigure
+their command line parameters as mitigations will continue to be correctly
+applied based on the chosen attack vector controls.
+
+Attack Vectors
+--------------
+
+There are 5 sets of attack-vector mitigations currently supported by the kernel:
+
+#. :ref:`user_kernel` (mitigate_user_kernel= )
+#. :ref:`user_user` (mitigate_user_user= )
+#. :ref:`guest_host` (mitigate_guest_host= )
+#. :ref:`guest_guest` (mitigate_guest_guest=)
+#. :ref:`cross_thread` (mitigate_cross_thread= )
+
+Each control may either be specified as 'off' or 'on'.
+
+.. _user_kernel:
+
+User-to-Kernel
+^^^^^^^^^^^^^^
+
+The user-to-kernel attack vector involves a malicious userspace program
+attempting to leak kernel data into userspace by exploiting a CPU vulnerability.
+The kernel data involved might be limited to certain kernel memory, or include
+all memory in the system, depending on the vulnerability exploited.
+
+If no untrusted userspace applications are being run, such as with single-user
+systems, consider disabling user-to-kernel mitigations.
+
+Note that the CPU vulnerabilities mitigated by Linux have generally not been
+shown to be exploitable from browser-based sandboxes.  User-to-kernel
+mitigations are therefore mostly relevant if unknown userspace applications may
+be run by untrusted users.
+
+*mitigate_user_kernel defaults to 'on'*
+
+.. _user_user:
+
+User-to-User
+^^^^^^^^^^^^
+
+The user-to-user attack vector involves a malicious userspace program attempting
+to influence the behavior of another unsuspecting userspace program in order to
+exfiltrate data.  The vulnerability of a userspace program is based on the
+program itself and the interfaces it provides.
+
+If no untrusted userspace applications are being run, consider disabling
+user-to-user mitigations.
+
+Note that because the Linux kernel contains a mapping of all physical memory,
+preventing a malicious userspace program from leaking data from another
+userspace program requires mitigating user-to-kernel attacks as well for
+complete protection.
+
+*mitigate_user_user defaults to 'on'*
+
+.. _guest_host:
+
+Guest-to-Host
+^^^^^^^^^^^^^
+
+The guest-to-host attack vector involves a malicious VM attempting to leak
+hypervisor data into the VM.  The data involved may be limited, or may
+potentially include all memory in the system, depending on the vulnerability
+exploited.
+
+If no untrusted VMs are being run, consider disabling guest-to-host mitigations.
+
+*mitigate_guest_host defaults to 'on' if KVM support is present*
+
+.. _guest_guest:
+
+Guest-to-Guest
+^^^^^^^^^^^^^^
+
+The guest-to-guest attack vector involves a malicious VM attempting to influence
+the behavior of another unsuspecting VM in order to exfiltrate data.  The
+vulnerability of a VM is based on the code inside the VM itself and the
+interfaces it provides.
+
+If no untrusted VMs, or only a single VM is being run, consider disabling
+guest-to-guest mitigations.
+
+Similar to the user-to-user attack vector, preventing a malicious VM from
+leaking data from another VM requires mitigating guest-to-host attacks as well
+due to the Linux kernel phys map.
+
+*mitigate_guest_guest defaults to 'on' if KVM support is present*
+
+.. _cross_thread:
+
+Cross-Thread
+^^^^^^^^^^^^
+
+The cross-thread attack vector involves a malicious userspace program or
+malicious VM either observing or attempting to influence the behavior of code
+running on the SMT sibling thread in order to exfiltrate data.
+
+Many cross-thread attacks can only be mitigated if SMT is disabled, which will
+result in reduced CPU core count and reduced performance.  Enabling mitigations
+for the cross-thread attack vector may result in SMT being disabled, depending
+on the CPU vulnerabilities detected.
+
+*mitigate_cross_thread defaults to 'off'*
+
+Interactions with command-line options
+--------------------------------------
+
+The global 'mitigations=off' command line takes precedence over all attack
+vector controls and will disable all mitigations.
+
+Vulnerability-specific controls (e.g. "retbleed=off") take precedence over all
+attack vector controls.  Mitigations for individual vulnerabilities may be
+turned on or off via their command-line options regardless of the attack vector
+controls.
+
+Summary of attack-vector mitigations
+------------------------------------
+
+When a vulnerability is mitigated due to an attack-vector control, the default
+mitigation option for that particular vulnerability is used.  To use a different
+mitigation, please use the vulnerability-specific command line option.
+
+The table below summarizes which vulnerabilities are mitigated when different
+attack vectors are enabled and assuming the CPU is vulnerable.
+
+=============== ============== ============ ============= ============== ============
+Vulnerability   User-to-Kernel User-to-User Guest-to-Host Guest-to-Guest Cross-Thread
+=============== ============== ============ ============= ============== ============
+BHI                   X                           X
+GDS                   X              X            X              X
+L1TF                                              X                       (Note 1)
+MDS                   X              X            X              X        (Note 1)
+MMIO                  X              X            X              X        (Note 1)
+Meltdown              X
+Retbleed              X                           X                       (Note 2)
+RFDS                  X              X            X              X
+Spectre_v1            X
+Spectre_v2            X                           X
+Spectre_v2_user                      X                           X
+SRBDS                 X              X            X              X
+SRSO                  X                           X
+SSB (Note 3)
+TAA                   X              X            X              X        (Note 1)
+=============== ============== ============ ============= ============== ============
+
+Notes:
+   1 --  Disables SMT if cross-thread mitigations are selected and CPU is vulnerable
+
+   2 --  Disables SMT if cross-thread mitigations are selected, CPU is vulnerable,
+   and STIBP is not supported
+
+   3 --  Speculative store bypass is always enabled by default (no kernel
+   mitigation applied) unless overridden with spec_store_bypass_disable option
+
+When an attack-vector is disabled (e.g., *mitigate_user_kernel=off*), all
+mitigations for the vulnerabilities listed in the above table are disabled,
+unless mitigation is required for a different enabled attack-vector or a
+mitigation is explicitly selected via a vulnerability-specific command line
+option.
diff --git a/Documentation/admin-guide/hw-vuln/index.rst b/Documentation/admin-guide/hw-vuln/index.rst
index ff0b440ef2dc..1add4a0baeb0 100644
--- a/Documentation/admin-guide/hw-vuln/index.rst
+++ b/Documentation/admin-guide/hw-vuln/index.rst
@@ -9,6 +9,7 @@ are configurable at compile, boot or run time.
 .. toctree::
    :maxdepth: 1
 
+   attack_vector_controls
    spectre
    l1tf
    mds
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [RFC PATCH 19/34] x86/bugs: Define attack vectors
  2024-09-12 19:08 [RFC PATCH 00/34] x86/bugs: Attack vector controls David Kaplan
                   ` (17 preceding siblings ...)
  2024-09-12 19:08 ` [RFC PATCH 18/34] Documentation/x86: Document the new attack vector controls David Kaplan
@ 2024-09-12 19:08 ` David Kaplan
  2024-09-12 19:08 ` [RFC PATCH 20/34] x86/bugs: Determine relevant vulnerabilities based on attack vector controls David Kaplan
                   ` (15 subsequent siblings)
  34 siblings, 0 replies; 63+ messages in thread
From: David Kaplan @ 2024-09-12 19:08 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Define 5 new attack vectors that are used for controlling CPU
speculation mitigations and associated command line options.  Each
attack vector may be enabled or disabled, which affects the CPU
mitigations enabled.

The default settings for these attack vectors are consistent with
existing kernel defaults, other than the automatic disabling of VM-based
attack vectors if KVM support is not present.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 include/linux/cpu.h | 11 +++++++++
 kernel/cpu.c        | 58 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 69 insertions(+)

diff --git a/include/linux/cpu.h b/include/linux/cpu.h
index bdcec1732445..b25566e1fb04 100644
--- a/include/linux/cpu.h
+++ b/include/linux/cpu.h
@@ -189,6 +189,17 @@ void cpuhp_report_idle_dead(void);
 static inline void cpuhp_report_idle_dead(void) { }
 #endif /* #ifdef CONFIG_HOTPLUG_CPU */
 
+enum cpu_attack_vectors {
+	CPU_MITIGATE_USER_KERNEL,
+	CPU_MITIGATE_USER_USER,
+	CPU_MITIGATE_GUEST_HOST,
+	CPU_MITIGATE_GUEST_GUEST,
+	CPU_MITIGATE_CROSS_THREAD,
+	NR_CPU_ATTACK_VECTORS,
+};
+
+bool cpu_mitigate_attack_vector(enum cpu_attack_vectors v);
+
 #ifdef CONFIG_CPU_MITIGATIONS
 extern bool cpu_mitigations_off(void);
 extern bool cpu_mitigations_auto_nosmt(void);
diff --git a/kernel/cpu.c b/kernel/cpu.c
index d293d52a3e00..980653a55d9c 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -3201,6 +3201,22 @@ enum cpu_mitigations {
 
 static enum cpu_mitigations cpu_mitigations __ro_after_init = CPU_MITIGATIONS_AUTO;
 
+/*
+ * All except the cross-thread attack vector are mitigated by default.
+ * Cross-thread mitigation often requires disabling SMT which is too expensive
+ * to be enabled by default.
+ *
+ * Guest-to-Host and Guest-to-Guest vectors are only needed if KVM support is
+ * present.
+ */
+static bool cpu_mitigate_attack_vectors[NR_CPU_ATTACK_VECTORS] __ro_after_init = {
+	[CPU_MITIGATE_USER_KERNEL] = true,
+	[CPU_MITIGATE_USER_USER] = true,
+	[CPU_MITIGATE_GUEST_HOST] = IS_ENABLED(CONFIG_KVM),
+	[CPU_MITIGATE_GUEST_GUEST] = IS_ENABLED(CONFIG_KVM),
+	[CPU_MITIGATE_CROSS_THREAD] = false
+};
+
 static int __init mitigations_parse_cmdline(char *arg)
 {
 	if (!strcmp(arg, "off"))
@@ -3229,11 +3245,53 @@ bool cpu_mitigations_auto_nosmt(void)
 	return cpu_mitigations == CPU_MITIGATIONS_AUTO_NOSMT;
 }
 EXPORT_SYMBOL_GPL(cpu_mitigations_auto_nosmt);
+
+#define DEFINE_ATTACK_VECTOR(opt, v) \
+static int __init v##_parse_cmdline(char *arg) \
+{ \
+	if (!strcmp(arg, "off")) \
+		cpu_mitigate_attack_vectors[v] = false; \
+	else if (!strcmp(arg, "on")) \
+		cpu_mitigate_attack_vectors[v] = true; \
+	else \
+		pr_warn("Unsupported " opt "=%s\n", arg); \
+	return 0; \
+} \
+early_param(opt, v##_parse_cmdline)
+
+bool cpu_mitigate_attack_vector(enum cpu_attack_vectors v)
+{
+	BUG_ON(v >= NR_CPU_ATTACK_VECTORS);
+	return cpu_mitigate_attack_vectors[v];
+}
+EXPORT_SYMBOL_GPL(cpu_mitigate_attack_vector);
+
 #else
 static int __init mitigations_parse_cmdline(char *arg)
 {
 	pr_crit("Kernel compiled without mitigations, ignoring 'mitigations'; system may still be vulnerable\n");
 	return 0;
 }
+
+#define DEFINE_ATTACK_VECTOR(opt, v) \
+static int __init v##_parse_cmdline(char *arg) \
+{ \
+	pr_crit("Kernel compiled without mitigations, ignoring %s; system may still be vulnerable\n", opt); \
+	return 0; \
+} \
+early_param(opt, v##_parse_cmdline)
+
+bool cpu_mitigate_attack_vector(enum cpu_attack_vectors v)
+{
+	return false;
+}
+EXPORT_SYMBOL_GPL(cpu_mitigate_attack_vector);
+
 #endif
 early_param("mitigations", mitigations_parse_cmdline);
+
+DEFINE_ATTACK_VECTOR("mitigate_user_kernel", CPU_MITIGATE_USER_KERNEL);
+DEFINE_ATTACK_VECTOR("mitigate_user_user", CPU_MITIGATE_USER_USER);
+DEFINE_ATTACK_VECTOR("mitigate_guest_host", CPU_MITIGATE_GUEST_HOST);
+DEFINE_ATTACK_VECTOR("mitigate_guest_guest", CPU_MITIGATE_GUEST_GUEST);
+DEFINE_ATTACK_VECTOR("mitigate_cross_thread", CPU_MITIGATE_CROSS_THREAD);
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [RFC PATCH 20/34] x86/bugs: Determine relevant vulnerabilities based on attack vector controls.
  2024-09-12 19:08 [RFC PATCH 00/34] x86/bugs: Attack vector controls David Kaplan
                   ` (18 preceding siblings ...)
  2024-09-12 19:08 ` [RFC PATCH 19/34] x86/bugs: Define attack vectors David Kaplan
@ 2024-09-12 19:08 ` David Kaplan
  2024-09-12 19:08 ` [RFC PATCH 21/34] x86/bugs: Add attack vector controls for mds David Kaplan
                   ` (14 subsequent siblings)
  34 siblings, 0 replies; 63+ messages in thread
From: David Kaplan @ 2024-09-12 19:08 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

The function should_mitigate_vuln() defines which vulnerabilities should
be mitigated based on the selected attack vector controls.  The
selections here are based on the individual characteristics of each
vulnerability.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 75 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 75 insertions(+)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 334fd2c5251d..a50c7cf2975d 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -287,6 +287,81 @@ static void x86_amd_ssb_disable(void)
 		wrmsrl(MSR_AMD64_LS_CFG, msrval);
 }
 
+enum vulnerabilities {
+	SPECTRE_V1,
+	SPECTRE_V2,
+	RETBLEED,
+	SPECTRE_V2_USER,
+	L1TF,
+	MDS,
+	TAA,
+	MMIO,
+	RFDS,
+	SRBDS,
+	SRSO,
+	GDS,
+};
+
+/*
+ * Returns true if vulnerability should be mitigated based on the
+ * selected attack vector controls
+ *
+ * See Documentation/admin-guide/hw-vuln/attack_vector_controls.rst
+ */
+static bool __init should_mitigate_vuln(enum vulnerabilities vuln)
+{
+	switch (vuln) {
+	/*
+	 * The only spectre_v1 mitigations in the kernel are related to
+	 * SWAPGS protection on kernel entry.  Therefore, protection is
+	 * only required for the user->kernel attack vector.
+	 */
+	case SPECTRE_V1:
+		return cpu_mitigate_attack_vector(CPU_MITIGATE_USER_KERNEL);
+
+	/*
+	 * Both spectre_v2 and srso may allow user->kernel or
+	 * guest->host attacks through branch predictor manipulation.
+	 */
+	case SPECTRE_V2:
+	case SRSO:
+		return cpu_mitigate_attack_vector(CPU_MITIGATE_USER_KERNEL) ||
+			cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_HOST);
+
+	/*
+	 * spectre_v2_user refers to user->user or guest->guest branch
+	 * predictor attacks only.  Other indirect branch predictor attacks
+	 * are covered by the spectre_v2 vulnerability.
+	 */
+	case SPECTRE_V2_USER:
+		return cpu_mitigate_attack_vector(CPU_MITIGATE_USER_USER) ||
+			cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_GUEST);
+
+	/* L1TF is only possible as a guest->host attack */
+	case L1TF:
+		return cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_HOST);
+
+	/*
+	 * All the vulnerabilities below allow potentially leaking data
+	 * across address spaces.  Therefore, mitigation is required for
+	 * any of these 4 attack vectors.
+	 */
+	case MDS:
+	case TAA:
+	case MMIO:
+	case RFDS:
+	case SRBDS:
+	case GDS:
+		return cpu_mitigate_attack_vector(CPU_MITIGATE_USER_KERNEL) ||
+			cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_HOST) ||
+			cpu_mitigate_attack_vector(CPU_MITIGATE_USER_USER) ||
+			cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_GUEST);
+	default:
+		return false;
+	}
+}
+
+
 /* Default mitigation for MDS-affected CPUs */
 static enum mds_mitigations mds_mitigation __ro_after_init =
 	IS_ENABLED(CONFIG_MITIGATION_MDS) ? MDS_MITIGATION_AUTO : MDS_MITIGATION_OFF;
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [RFC PATCH 21/34] x86/bugs: Add attack vector controls for mds
  2024-09-12 19:08 [RFC PATCH 00/34] x86/bugs: Attack vector controls David Kaplan
                   ` (19 preceding siblings ...)
  2024-09-12 19:08 ` [RFC PATCH 20/34] x86/bugs: Determine relevant vulnerabilities based on attack vector controls David Kaplan
@ 2024-09-12 19:08 ` David Kaplan
  2024-10-01  0:50   ` Manwaring, Derek
  2024-09-12 19:08 ` [RFC PATCH 22/34] x86/bugs: Add attack vector controls for taa David Kaplan
                   ` (13 subsequent siblings)
  34 siblings, 1 reply; 63+ messages in thread
From: David Kaplan @ 2024-09-12 19:08 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Use attack vector controls to determine if mds mitigation is required.

If cross-thread attack mitigations are required, disable SMT.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index a50c7cf2975d..a5fbd7cc9e25 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -414,8 +414,12 @@ static void __init mds_select_mitigation(void)
 		return;
 	}
 
-	if (mds_mitigation == MDS_MITIGATION_AUTO)
-		mds_mitigation = MDS_MITIGATION_FULL;
+	if (mds_mitigation == MDS_MITIGATION_AUTO) {
+		if (should_mitigate_vuln(MDS))
+			mds_mitigation = MDS_MITIGATION_FULL;
+		else
+			mds_mitigation = MDS_MITIGATION_OFF;
+	}
 
 	if (mds_mitigation == MDS_MITIGATION_FULL) {
 		if (!boot_cpu_has(X86_FEATURE_MD_CLEAR))
@@ -446,7 +450,8 @@ static void __init mds_apply_mitigation(void)
 	if (mds_mitigation == MDS_MITIGATION_FULL) {
 		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
 		if (!boot_cpu_has(X86_BUG_MSBDS_ONLY) &&
-		    (mds_nosmt || cpu_mitigations_auto_nosmt()))
+		    (mds_nosmt || cpu_mitigations_auto_nosmt() ||
+		     cpu_mitigate_attack_vector(CPU_MITIGATE_CROSS_THREAD)))
 			cpu_smt_disable(false);
 	}
 }
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [RFC PATCH 22/34] x86/bugs: Add attack vector controls for taa
  2024-09-12 19:08 [RFC PATCH 00/34] x86/bugs: Attack vector controls David Kaplan
                   ` (20 preceding siblings ...)
  2024-09-12 19:08 ` [RFC PATCH 21/34] x86/bugs: Add attack vector controls for mds David Kaplan
@ 2024-09-12 19:08 ` David Kaplan
  2024-09-12 19:08 ` [RFC PATCH 23/34] x86/bugs: Add attack vector controls for mmio David Kaplan
                   ` (12 subsequent siblings)
  34 siblings, 0 replies; 63+ messages in thread
From: David Kaplan @ 2024-09-12 19:08 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Use attack vector controls to determine if taa mitigation is required.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index a5fbd7cc9e25..f042c5595463 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -513,11 +513,17 @@ static void __init taa_select_mitigation(void)
 	if (taa_mitigation == TAA_MITIGATION_OFF)
 		return;
 
-	/* This handles the AUTO case. */
-	if (boot_cpu_has(X86_FEATURE_MD_CLEAR))
-		taa_mitigation = TAA_MITIGATION_VERW;
-	else
-		taa_mitigation = TAA_MITIGATION_UCODE_NEEDED;
+	if (taa_mitigation == TAA_MITIGATION_AUTO) {
+		if (should_mitigate_vuln(TAA)) {
+			if (boot_cpu_has(X86_FEATURE_MD_CLEAR))
+				taa_mitigation = TAA_MITIGATION_VERW;
+			else
+				taa_mitigation = TAA_MITIGATION_UCODE_NEEDED;
+		} else {
+			taa_mitigation = TAA_MITIGATION_OFF;
+			return;
+		}
+	}
 
 	/*
 	 * VERW doesn't clear the CPU buffers when MD_CLEAR=1 and MDS_NO=1.
@@ -560,7 +566,8 @@ static void __init taa_apply_mitigation(void)
 		 */
 		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
 
-		if (taa_nosmt || cpu_mitigations_auto_nosmt())
+		if (taa_nosmt || cpu_mitigations_auto_nosmt() ||
+		    cpu_mitigate_attack_vector(CPU_MITIGATE_CROSS_THREAD))
 			cpu_smt_disable(false);
 	}
 
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [RFC PATCH 23/34] x86/bugs: Add attack vector controls for mmio
  2024-09-12 19:08 [RFC PATCH 00/34] x86/bugs: Attack vector controls David Kaplan
                   ` (21 preceding siblings ...)
  2024-09-12 19:08 ` [RFC PATCH 22/34] x86/bugs: Add attack vector controls for taa David Kaplan
@ 2024-09-12 19:08 ` David Kaplan
  2024-09-12 19:08 ` [RFC PATCH 24/34] x86/bugs: Add attack vector controls for rfds David Kaplan
                   ` (11 subsequent siblings)
  34 siblings, 0 replies; 63+ messages in thread
From: David Kaplan @ 2024-09-12 19:08 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Use attack vectors controls to determine if mmio mitigation is required.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 37 ++++++++++++++++++++++---------------
 1 file changed, 22 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index f042c5595463..87ddf0b67d45 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -614,20 +614,26 @@ static void __init mmio_select_mitigation(void)
 	if (mmio_mitigation == MMIO_MITIGATION_OFF)
 		return;
 
-	/*
-	 * Check if the system has the right microcode.
-	 *
-	 * CPU Fill buffer clear mitigation is enumerated by either an explicit
-	 * FB_CLEAR or by the presence of both MD_CLEAR and L1D_FLUSH on MDS
-	 * affected systems.
-	 */
-	if ((x86_arch_cap_msr & ARCH_CAP_FB_CLEAR) ||
-	    (boot_cpu_has(X86_FEATURE_MD_CLEAR) &&
-	     boot_cpu_has(X86_FEATURE_FLUSH_L1D) &&
-	     !(x86_arch_cap_msr & ARCH_CAP_MDS_NO)))
-		mmio_mitigation = MMIO_MITIGATION_VERW;
-	else
-		mmio_mitigation = MMIO_MITIGATION_UCODE_NEEDED;
+	if (mmio_mitigation == MMIO_MITIGATION_AUTO) {
+		if (should_mitigate_vuln(MMIO)) {
+			/*
+			 * Check if the system has the right microcode.
+			 *
+			 * CPU Fill buffer clear mitigation is enumerated by either an explicit
+			 * FB_CLEAR or by the presence of both MD_CLEAR and L1D_FLUSH on MDS
+			 * affected systems.
+			 */
+			if ((x86_arch_cap_msr & ARCH_CAP_FB_CLEAR) ||
+			    (boot_cpu_has(X86_FEATURE_MD_CLEAR) &&
+			     boot_cpu_has(X86_FEATURE_FLUSH_L1D) &&
+			     !(x86_arch_cap_msr & ARCH_CAP_MDS_NO)))
+				mmio_mitigation = MMIO_MITIGATION_VERW;
+			else
+				mmio_mitigation = MMIO_MITIGATION_UCODE_NEEDED;
+		} else {
+			mmio_mitigation = MMIO_MITIGATION_OFF;
+		}
+	}
 }
 
 static void __init mmio_update_mitigation(void)
@@ -675,7 +681,8 @@ static void __init mmio_apply_mitigation(void)
 	if (!(x86_arch_cap_msr & ARCH_CAP_FBSDP_NO))
 		static_branch_enable(&mds_idle_clear);
 
-	if (mmio_nosmt || cpu_mitigations_auto_nosmt())
+	if (mmio_nosmt || cpu_mitigations_auto_nosmt() ||
+	    cpu_mitigate_attack_vector(CPU_MITIGATE_CROSS_THREAD))
 		cpu_smt_disable(false);
 }
 
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [RFC PATCH 24/34] x86/bugs: Add attack vector controls for rfds
  2024-09-12 19:08 [RFC PATCH 00/34] x86/bugs: Attack vector controls David Kaplan
                   ` (22 preceding siblings ...)
  2024-09-12 19:08 ` [RFC PATCH 23/34] x86/bugs: Add attack vector controls for mmio David Kaplan
@ 2024-09-12 19:08 ` David Kaplan
  2024-09-12 19:08 ` [RFC PATCH 25/34] x86/bugs: Add attack vector controls for srbds David Kaplan
                   ` (10 subsequent siblings)
  34 siblings, 0 replies; 63+ messages in thread
From: David Kaplan @ 2024-09-12 19:08 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Use attack vector controls to determine if rfds mitigation is required.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 87ddf0b67d45..75ac56cd0e21 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -722,8 +722,14 @@ static void __init rfds_select_mitigation(void)
 	if (rfds_mitigation == RFDS_MITIGATION_OFF)
 		return;
 
-	if (rfds_mitigation == RFDS_MITIGATION_AUTO)
-		rfds_mitigation = RFDS_MITIGATION_VERW;
+	if (rfds_mitigation == RFDS_MITIGATION_AUTO) {
+		if (should_mitigate_vuln(RFDS))
+			rfds_mitigation = RFDS_MITIGATION_VERW;
+		else {
+			rfds_mitigation = RFDS_MITIGATION_OFF;
+			return;
+		}
+	}
 
 	if (!(x86_arch_cap_msr & ARCH_CAP_RFDS_CLEAR))
 		rfds_mitigation = RFDS_MITIGATION_UCODE_NEEDED;
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [RFC PATCH 25/34] x86/bugs: Add attack vector controls for srbds
  2024-09-12 19:08 [RFC PATCH 00/34] x86/bugs: Attack vector controls David Kaplan
                   ` (23 preceding siblings ...)
  2024-09-12 19:08 ` [RFC PATCH 24/34] x86/bugs: Add attack vector controls for rfds David Kaplan
@ 2024-09-12 19:08 ` David Kaplan
  2024-09-12 19:08 ` [RFC PATCH 26/34] x86/bugs: Add attack vector controls for gds David Kaplan
                   ` (9 subsequent siblings)
  34 siblings, 0 replies; 63+ messages in thread
From: David Kaplan @ 2024-09-12 19:08 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Use attack vector controls to determine if srbds mitigation is required.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 75ac56cd0e21..d86755218c72 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -840,8 +840,14 @@ static void __init srbds_select_mitigation(void)
 	if (!boot_cpu_has_bug(X86_BUG_SRBDS))
 		return;
 
-	if (srbds_mitigation == SRBDS_MITIGATION_AUTO)
-		srbds_mitigation = SRBDS_MITIGATION_FULL;
+	if (srbds_mitigation == SRBDS_MITIGATION_AUTO) {
+		if (should_mitigate_vuln(SRBDS))
+			srbds_mitigation = SRBDS_MITIGATION_FULL;
+		else {
+			srbds_mitigation = SRBDS_MITIGATION_OFF;
+			return;
+		}
+	}
 
 	/*
 	 * Check to see if this is one of the MDS_NO systems supporting TSX that
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [RFC PATCH 26/34] x86/bugs: Add attack vector controls for gds
  2024-09-12 19:08 [RFC PATCH 00/34] x86/bugs: Attack vector controls David Kaplan
                   ` (24 preceding siblings ...)
  2024-09-12 19:08 ` [RFC PATCH 25/34] x86/bugs: Add attack vector controls for srbds David Kaplan
@ 2024-09-12 19:08 ` David Kaplan
  2024-09-12 19:08 ` [RFC PATCH 27/34] x86/bugs: Add attack vector controls for spectre_v1 David Kaplan
                   ` (8 subsequent siblings)
  34 siblings, 0 replies; 63+ messages in thread
From: David Kaplan @ 2024-09-12 19:08 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Use attack vector controls to determine if gds mitigation is required.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index d86755218c72..5fbf5a274c9f 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1001,8 +1001,14 @@ static void __init gds_select_mitigation(void)
 		gds_mitigation = GDS_MITIGATION_OFF;
 	/* Will verify below that mitigation _can_ be disabled */
 
-	if (gds_mitigation == GDS_MITIGATION_AUTO)
-		gds_mitigation = GDS_MITIGATION_FULL;
+	if (gds_mitigation == GDS_MITIGATION_AUTO) {
+		if (should_mitigate_vuln(GDS))
+			gds_mitigation = GDS_MITIGATION_FULL;
+		else {
+			gds_mitigation = GDS_MITIGATION_OFF;
+			return;
+		}
+	}
 
 	/* No microcode */
 	if (!(x86_arch_cap_msr & ARCH_CAP_GDS_CTRL)) {
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [RFC PATCH 27/34] x86/bugs: Add attack vector controls for spectre_v1
  2024-09-12 19:08 [RFC PATCH 00/34] x86/bugs: Attack vector controls David Kaplan
                   ` (25 preceding siblings ...)
  2024-09-12 19:08 ` [RFC PATCH 26/34] x86/bugs: Add attack vector controls for gds David Kaplan
@ 2024-09-12 19:08 ` David Kaplan
  2024-09-12 19:37   ` Dave Hansen
  2024-09-12 19:08 ` [RFC PATCH 28/34] x86/bugs: Add attack vector controls for retbleed David Kaplan
                   ` (7 subsequent siblings)
  34 siblings, 1 reply; 63+ messages in thread
From: David Kaplan @ 2024-09-12 19:08 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Use attack vector controls to determine if spectre_v1 mitigation is
required.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 5fbf5a274c9f..d7e154031c93 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1114,6 +1114,9 @@ static void __init spectre_v1_select_mitigation(void)
 {
 	if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1) || cpu_mitigations_off())
 		spectre_v1_mitigation = SPECTRE_V1_MITIGATION_NONE;
+
+	if (!should_mitigate_vuln(SPECTRE_V1))
+		spectre_v1_mitigation = SPECTRE_V1_MITIGATION_NONE;
 }
 
 static void __init spectre_v1_apply_mitigation(void)
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [RFC PATCH 28/34] x86/bugs: Add attack vector controls for retbleed
  2024-09-12 19:08 [RFC PATCH 00/34] x86/bugs: Attack vector controls David Kaplan
                   ` (26 preceding siblings ...)
  2024-09-12 19:08 ` [RFC PATCH 27/34] x86/bugs: Add attack vector controls for spectre_v1 David Kaplan
@ 2024-09-12 19:08 ` David Kaplan
  2024-09-12 19:08 ` [RFC PATCH 29/34] x86/bugs: Add attack vector controls for spectre_v2_user David Kaplan
                   ` (6 subsequent siblings)
  34 siblings, 0 replies; 63+ messages in thread
From: David Kaplan @ 2024-09-12 19:08 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Use attack vector controls to determine if retbleed mitigation is
required.

Disable SMT if cross-thread protection is desired and STIBP is not
available.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 21 +++++++++++++--------
 1 file changed, 13 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index d7e154031c93..2659feb33090 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1270,13 +1270,17 @@ static void __init retbleed_select_mitigation(void)
 	}
 
 	if (retbleed_mitigation == RETBLEED_MITIGATION_AUTO) {
-		if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
-		    boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) {
-			if (IS_ENABLED(CONFIG_MITIGATION_UNRET_ENTRY))
-				retbleed_mitigation = RETBLEED_MITIGATION_UNRET;
-			else if (IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY) &&
-				 boot_cpu_has(X86_FEATURE_IBPB))
-				retbleed_mitigation = RETBLEED_MITIGATION_IBPB;
+		if (should_mitigate_vuln(RETBLEED)) {
+			if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
+			    boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) {
+				if (IS_ENABLED(CONFIG_MITIGATION_UNRET_ENTRY))
+					retbleed_mitigation = RETBLEED_MITIGATION_UNRET;
+				else if (IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY) &&
+					 boot_cpu_has(X86_FEATURE_IBPB))
+					retbleed_mitigation = RETBLEED_MITIGATION_IBPB;
+			}
+		} else {
+			retbleed_mitigation = RETBLEED_MITIGATION_NONE;
 		}
 	}
 }
@@ -1354,7 +1358,8 @@ static void __init retbleed_apply_mitigation(void)
 	}
 
 	if (mitigate_smt && !boot_cpu_has(X86_FEATURE_STIBP) &&
-	    (retbleed_nosmt || cpu_mitigations_auto_nosmt()))
+	    (retbleed_nosmt || cpu_mitigations_auto_nosmt() ||
+	     cpu_mitigate_attack_vector(CPU_MITIGATE_CROSS_THREAD)))
 		cpu_smt_disable(false);
 
 }
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [RFC PATCH 29/34] x86/bugs: Add attack vector controls for spectre_v2_user
  2024-09-12 19:08 [RFC PATCH 00/34] x86/bugs: Attack vector controls David Kaplan
                   ` (27 preceding siblings ...)
  2024-09-12 19:08 ` [RFC PATCH 28/34] x86/bugs: Add attack vector controls for retbleed David Kaplan
@ 2024-09-12 19:08 ` David Kaplan
  2024-09-12 19:08 ` [RFC PATCH 30/34] x86/bugs: Add attack vector controls for bhi David Kaplan
                   ` (5 subsequent siblings)
  34 siblings, 0 replies; 63+ messages in thread
From: David Kaplan @ 2024-09-12 19:08 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Use attack vector controls to determine if spectre_v2_user mitigation is
required.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 2659feb33090..9859f650f25f 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1530,6 +1530,13 @@ spectre_v2_user_select_mitigation(void)
 		spectre_v2_user_stibp = SPECTRE_V2_USER_STRICT;
 		break;
 	case SPECTRE_V2_USER_CMD_AUTO:
+		if (should_mitigate_vuln(SPECTRE_V2_USER)) {
+			spectre_v2_user_ibpb = SPECTRE_V2_USER_PRCTL;
+			spectre_v2_user_stibp = SPECTRE_V2_USER_PRCTL;
+		} else {
+			return;
+		}
+		break;
 	case SPECTRE_V2_USER_CMD_PRCTL:
 		spectre_v2_user_ibpb = SPECTRE_V2_USER_PRCTL;
 		spectre_v2_user_stibp = SPECTRE_V2_USER_PRCTL;
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [RFC PATCH 30/34] x86/bugs: Add attack vector controls for bhi
  2024-09-12 19:08 [RFC PATCH 00/34] x86/bugs: Attack vector controls David Kaplan
                   ` (28 preceding siblings ...)
  2024-09-12 19:08 ` [RFC PATCH 29/34] x86/bugs: Add attack vector controls for spectre_v2_user David Kaplan
@ 2024-09-12 19:08 ` David Kaplan
  2024-09-12 19:08 ` [RFC PATCH 31/34] x86/bugs: Add attack vector controls for spectre_v2 David Kaplan
                   ` (4 subsequent siblings)
  34 siblings, 0 replies; 63+ messages in thread
From: David Kaplan @ 2024-09-12 19:08 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

There are two BHI mitigations, one for SYSCALL and one for VMEXIT.
Split these up so they can be selected individually based on attack
vector.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 38 ++++++++++++++++++++++++++------------
 1 file changed, 26 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 9859f650f25f..cc26f5680523 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1858,8 +1858,9 @@ static bool __init spec_ctrl_bhi_dis(void)
 enum bhi_mitigations {
 	BHI_MITIGATION_OFF,
 	BHI_MITIGATION_AUTO,
-	BHI_MITIGATION_ON,
-	BHI_MITIGATION_VMEXIT_ONLY,
+	BHI_MITIGATION_FULL,
+	BHI_MITIGATION_VMEXIT,
+	BHI_MITIGATION_SYSCALL
 };
 
 static enum bhi_mitigations bhi_mitigation __ro_after_init =
@@ -1873,9 +1874,9 @@ static int __init spectre_bhi_parse_cmdline(char *str)
 	if (!strcmp(str, "off"))
 		bhi_mitigation = BHI_MITIGATION_OFF;
 	else if (!strcmp(str, "on"))
-		bhi_mitigation = BHI_MITIGATION_ON;
+		bhi_mitigation = BHI_MITIGATION_FULL;
 	else if (!strcmp(str, "vmexit"))
-		bhi_mitigation = BHI_MITIGATION_VMEXIT_ONLY;
+		bhi_mitigation = BHI_MITIGATION_VMEXIT;
 	else
 		pr_err("Ignoring unknown spectre_bhi option (%s)", str);
 
@@ -1891,8 +1892,17 @@ static void __init bhi_select_mitigation(void)
 	if (bhi_mitigation == BHI_MITIGATION_OFF)
 		return;
 
-	if (bhi_mitigation == BHI_MITIGATION_AUTO)
-		bhi_mitigation = BHI_MITIGATION_ON;
+	if (bhi_mitigation == BHI_MITIGATION_AUTO) {
+		if (cpu_mitigate_attack_vector(CPU_MITIGATE_USER_KERNEL)) {
+			if (cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_HOST))
+				bhi_mitigation = BHI_MITIGATION_FULL;
+			else
+				bhi_mitigation = BHI_MITIGATION_SYSCALL;
+		} else if (cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_HOST))
+			bhi_mitigation = BHI_MITIGATION_VMEXIT;
+		else
+			bhi_mitigation = BHI_MITIGATION_OFF;
+	}
 }
 
 static void __init bhi_apply_mitigation(void)
@@ -1915,15 +1925,19 @@ static void __init bhi_apply_mitigation(void)
 	if (!IS_ENABLED(CONFIG_X86_64))
 		return;
 
-	if (bhi_mitigation == BHI_MITIGATION_VMEXIT_ONLY) {
-		pr_info("Spectre BHI mitigation: SW BHB clearing on VM exit only\n");
+	/* Mitigate KVM if guest->host protection is desired */
+	if (bhi_mitigation == BHI_MITIGATION_FULL ||
+	    bhi_mitigation == BHI_MITIGATION_VMEXIT) {
 		setup_force_cpu_cap(X86_FEATURE_CLEAR_BHB_LOOP_ON_VMEXIT);
-		return;
+		pr_info("Spectre BHI mitigation: SW BHB clearing on VM exit\n");
 	}
 
-	pr_info("Spectre BHI mitigation: SW BHB clearing on syscall and VM exit\n");
-	setup_force_cpu_cap(X86_FEATURE_CLEAR_BHB_LOOP);
-	setup_force_cpu_cap(X86_FEATURE_CLEAR_BHB_LOOP_ON_VMEXIT);
+	/* Mitigate syscalls if user->kernel protection is desired */
+	if (bhi_mitigation == BHI_MITIGATION_FULL ||
+	    bhi_mitigation == BHI_MITIGATION_SYSCALL) {
+		setup_force_cpu_cap(X86_FEATURE_CLEAR_BHB_LOOP);
+		pr_info("Spectre BHI mitigation: SW BHB clearing on syscall\n");
+	}
 }
 
 static void __init spectre_v2_select_mitigation(void)
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [RFC PATCH 31/34] x86/bugs: Add attack vector controls for spectre_v2
  2024-09-12 19:08 [RFC PATCH 00/34] x86/bugs: Attack vector controls David Kaplan
                   ` (29 preceding siblings ...)
  2024-09-12 19:08 ` [RFC PATCH 30/34] x86/bugs: Add attack vector controls for bhi David Kaplan
@ 2024-09-12 19:08 ` David Kaplan
  2024-09-12 19:08 ` [RFC PATCH 32/34] x86/bugs: Add attack vector controls for l1tf David Kaplan
                   ` (3 subsequent siblings)
  34 siblings, 0 replies; 63+ messages in thread
From: David Kaplan @ 2024-09-12 19:08 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Use attack vector controls to determine if spectre_v2 mitigation is
required.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index cc26f5680523..9c920e2b4f33 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1957,13 +1957,15 @@ static void __init spectre_v2_select_mitigation(void)
 	case SPECTRE_V2_CMD_NONE:
 		return;
 
-	case SPECTRE_V2_CMD_FORCE:
 	case SPECTRE_V2_CMD_AUTO:
+		if (!should_mitigate_vuln(SPECTRE_V2))
+			break;
+		fallthrough;
+	case SPECTRE_V2_CMD_FORCE:
 		if (boot_cpu_has(X86_FEATURE_IBRS_ENHANCED)) {
 			mode = SPECTRE_V2_EIBRS;
 			break;
 		}
-
 		mode = spectre_v2_select_retpoline();
 		break;
 
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [RFC PATCH 32/34] x86/bugs: Add attack vector controls for l1tf
  2024-09-12 19:08 [RFC PATCH 00/34] x86/bugs: Attack vector controls David Kaplan
                   ` (30 preceding siblings ...)
  2024-09-12 19:08 ` [RFC PATCH 31/34] x86/bugs: Add attack vector controls for spectre_v2 David Kaplan
@ 2024-09-12 19:08 ` David Kaplan
  2024-09-12 19:08 ` [RFC PATCH 33/34] x86/bugs: Add attack vector controls for srso David Kaplan
                   ` (2 subsequent siblings)
  34 siblings, 0 replies; 63+ messages in thread
From: David Kaplan @ 2024-09-12 19:08 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Use attack vector controls to determine if l1tf mitigation is required.

Disable SMT if cross-thread attack vector option is selected.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 9c920e2b4f33..3be3431c20c0 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2700,10 +2700,15 @@ static void __init l1tf_select_mitigation(void)
 	}
 
 	if (l1tf_mitigation == L1TF_MITIGATION_AUTO) {
-		if (cpu_mitigations_auto_nosmt())
-			l1tf_mitigation = L1TF_MITIGATION_FLUSH_NOSMT;
-		else
-			l1tf_mitigation = L1TF_MITIGATION_FLUSH;
+		if (!should_mitigate_vuln(L1TF))
+			l1tf_mitigation = L1TF_MITIGATION_OFF;
+		else {
+			if (cpu_mitigations_auto_nosmt() ||
+			    cpu_mitigate_attack_vector(CPU_MITIGATE_CROSS_THREAD))
+				l1tf_mitigation = L1TF_MITIGATION_FLUSH_NOSMT;
+			else
+				l1tf_mitigation = L1TF_MITIGATION_FLUSH;
+		}
 	}
 
 }
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [RFC PATCH 33/34] x86/bugs: Add attack vector controls for srso
  2024-09-12 19:08 [RFC PATCH 00/34] x86/bugs: Attack vector controls David Kaplan
                   ` (31 preceding siblings ...)
  2024-09-12 19:08 ` [RFC PATCH 32/34] x86/bugs: Add attack vector controls for l1tf David Kaplan
@ 2024-09-12 19:08 ` David Kaplan
  2024-09-12 19:08 ` [RFC PATCH 34/34] x86/pti: Add attack vector controls for pti David Kaplan
  2024-09-17 17:04 ` [RFC PATCH 00/34] x86/bugs: Attack vector controls Pawan Gupta
  34 siblings, 0 replies; 63+ messages in thread
From: David Kaplan @ 2024-09-12 19:08 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Use attack vector controls to determine if srso mitigation is required.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/kernel/cpu/bugs.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 3be3431c20c0..ddade7d6d539 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2842,8 +2842,14 @@ static void __init srso_select_mitigation(void)
 	}
 
 	/* Default mitigation */
-	if (srso_mitigation == SRSO_MITIGATION_AUTO)
-		srso_mitigation = SRSO_MITIGATION_SAFE_RET;
+	if (srso_mitigation == SRSO_MITIGATION_AUTO) {
+		if (should_mitigate_vuln(SRSO))
+			srso_mitigation = SRSO_MITIGATION_SAFE_RET;
+		else {
+			srso_mitigation = SRSO_MITIGATION_NONE;
+			return;
+		}
+	}
 
 	if (has_microcode) {
 		/*
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [RFC PATCH 34/34] x86/pti: Add attack vector controls for pti
  2024-09-12 19:08 [RFC PATCH 00/34] x86/bugs: Attack vector controls David Kaplan
                   ` (32 preceding siblings ...)
  2024-09-12 19:08 ` [RFC PATCH 33/34] x86/bugs: Add attack vector controls for srso David Kaplan
@ 2024-09-12 19:08 ` David Kaplan
  2024-09-17 17:04 ` [RFC PATCH 00/34] x86/bugs: Attack vector controls Pawan Gupta
  34 siblings, 0 replies; 63+ messages in thread
From: David Kaplan @ 2024-09-12 19:08 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
  Cc: linux-kernel

Disable PTI mitigation if user->kernel attack vector mitigations are
disabled.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
 arch/x86/mm/pti.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
index 851ec8f1363a..9e1ed3df04e8 100644
--- a/arch/x86/mm/pti.c
+++ b/arch/x86/mm/pti.c
@@ -94,7 +94,8 @@ void __init pti_check_boottime_disable(void)
 	if (pti_mode == PTI_FORCE_ON)
 		pti_print_if_secure("force enabled on command line.");
 
-	if (pti_mode == PTI_AUTO && !boot_cpu_has_bug(X86_BUG_CPU_MELTDOWN))
+	if (pti_mode == PTI_AUTO && (!boot_cpu_has_bug(X86_BUG_CPU_MELTDOWN) ||
+				     !cpu_mitigate_attack_vector(CPU_MITIGATE_USER_KERNEL)))
 		return;
 
 	setup_force_cpu_cap(X86_FEATURE_PTI);
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* Re: [RFC PATCH 27/34] x86/bugs: Add attack vector controls for spectre_v1
  2024-09-12 19:08 ` [RFC PATCH 27/34] x86/bugs: Add attack vector controls for spectre_v1 David Kaplan
@ 2024-09-12 19:37   ` Dave Hansen
  2024-09-12 19:57     ` Kaplan, David
  0 siblings, 1 reply; 63+ messages in thread
From: Dave Hansen @ 2024-09-12 19:37 UTC (permalink / raw)
  To: David Kaplan, Thomas Gleixner, Borislav Petkov, Peter Zijlstra,
	Josh Poimboeuf, Pawan Gupta, Ingo Molnar, Dave Hansen, x86,
	H . Peter Anvin
  Cc: linux-kernel

On 9/12/24 12:08, David Kaplan wrote:
> @@ -1114,6 +1114,9 @@ static void __init spectre_v1_select_mitigation(void)
>  {
>  	if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1) || cpu_mitigations_off())
>  		spectre_v1_mitigation = SPECTRE_V1_MITIGATION_NONE;
> +
> +	if (!should_mitigate_vuln(SPECTRE_V1))
> +		spectre_v1_mitigation = SPECTRE_V1_MITIGATION_NONE;
>  }

Just a high-level comment on this: usually in a well-structured series
that has sufficient refactoring, if you start to look at the end of the
series, things start to fall into place.  The series (at some point)
stops adding complexity things get simpler.

I don't really see that inflection point here.

For instance, I would have expected cpu_mitigations_off() to be
consulted in should_mitigate_vuln() so that some of the individual sites
can go away.

There's also added complexity from having 'enum vulnerabilities' which
basically duplicate the X86_BUG_* space.  If the infrastructure was, for
instance, built around X86_BUG bits, it might have enabled this patch to
be something like:

-  	if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1) ||
-	    cpu_mitigations_off())
+	if (!should_mitigate_vuln(X86_BUG_SPECTRE_V1))
		spectre_v1_mitigation = SPECTRE_V1_MITIGATION_NONE;

I'm also not sure this series takes the right approach in representing
logic in data structures versus code.

For instance, this:

> +	case MDS:
> +	case TAA:
> +	case MMIO:
> +	case RFDS:
> +	case SRBDS:
> +	case GDS:
> +		return cpu_mitigate_attack_vector(CPU_MITIGATE_USER_KERNEL) ||
> +			cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_HOST) ||
> +			cpu_mitigate_attack_vector(CPU_MITIGATE_USER_USER) ||
> +			cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_GUEST);

We've _tended_ to represent these in data structure like cpu_vuln_whitelist.

struct whatever var[] =
   MACRO(MDS,  USER_KERNEL | GUEST_HOST | USER_USER | GUEST_GUEST)
   MACRO(MMIO, USER_KERNEL | GUEST_HOST | USER_USER | GUEST_GUEST)
   ...
};

But I do like the concept of users being focused on the attack vectors
in general.  That part is really nice.

As we talk about this at Plumbers, we probably need to be focused on
whether users want this new attack-vector-based selection mechanism or
the old style.  Because adding the attack-vector style is going to add
complexity any way we do it.

^ permalink raw reply	[flat|nested] 63+ messages in thread

* RE: [RFC PATCH 27/34] x86/bugs: Add attack vector controls for spectre_v1
  2024-09-12 19:37   ` Dave Hansen
@ 2024-09-12 19:57     ` Kaplan, David
  2024-09-12 20:16       ` Dave Hansen
  2024-09-13 14:20       ` Borislav Petkov
  0 siblings, 2 replies; 63+ messages in thread
From: Kaplan, David @ 2024-09-12 19:57 UTC (permalink / raw)
  To: Dave Hansen, Thomas Gleixner, Borislav Petkov, Peter Zijlstra,
	Josh Poimboeuf, Pawan Gupta, Ingo Molnar, Dave Hansen,
	x86@kernel.org, H . Peter Anvin
  Cc: linux-kernel@vger.kernel.org

[AMD Official Use Only - AMD Internal Distribution Only]

> -----Original Message-----
> From: Dave Hansen <dave.hansen@intel.com>
> Sent: Thursday, September 12, 2024 2:37 PM
> To: Kaplan, David <David.Kaplan@amd.com>; Thomas Gleixner
> <tglx@linutronix.de>; Borislav Petkov <bp@alien8.de>; Peter Zijlstra
> <peterz@infradead.org>; Josh Poimboeuf <jpoimboe@kernel.org>; Pawan
> Gupta <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar
> <mingo@redhat.com>; Dave Hansen <dave.hansen@linux.intel.com>;
> x86@kernel.org; H . Peter Anvin <hpa@zytor.com>
> Cc: linux-kernel@vger.kernel.org
> Subject: Re: [RFC PATCH 27/34] x86/bugs: Add attack vector controls for
> spectre_v1
>
> Caution: This message originated from an External Source. Use proper caution
> when opening attachments, clicking links, or responding.
>
>
> On 9/12/24 12:08, David Kaplan wrote:
> > @@ -1114,6 +1114,9 @@ static void __init
> > spectre_v1_select_mitigation(void)
> >  {
> >       if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1) ||
> cpu_mitigations_off())
> >               spectre_v1_mitigation = SPECTRE_V1_MITIGATION_NONE;
> > +
> > +     if (!should_mitigate_vuln(SPECTRE_V1))
> > +             spectre_v1_mitigation = SPECTRE_V1_MITIGATION_NONE;
> >  }
>
> Just a high-level comment on this: usually in a well-structured series that has
> sufficient refactoring, if you start to look at the end of the series, things start
> to fall into place.  The series (at some point) stops adding complexity things get
> simpler.
>
> I don't really see that inflection point here.
>
> For instance, I would have expected cpu_mitigations_off() to be consulted in
> should_mitigate_vuln() so that some of the individual sites can go away.

In the existing functionality, mitigations=off overrides everything, even other bug-specific command line options.  While the should_mitigate_vuln() is only called if the mitigation remains as AUTO (meaning no bug-specific command line option was passed).  So moving the cpu_mitigations_off() check into should_mitigate_vuln() would be a functional change to current behavior.

Feedback on that is certainly welcome, I was trying to be cautious about not changing any existing command line behavior or interactions.

>
> There's also added complexity from having 'enum vulnerabilities' which
> basically duplicate the X86_BUG_* space.  If the infrastructure was, for
> instance, built around X86_BUG bits, it might have enabled this patch to be
> something like:
>
> -       if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1) ||
> -           cpu_mitigations_off())
> +       if (!should_mitigate_vuln(X86_BUG_SPECTRE_V1))
>                 spectre_v1_mitigation = SPECTRE_V1_MITIGATION_NONE;

That's a reasonable idea.  One issue I see is that there is no separation in the X86_BUG* space for spectre_v2 vs spectre_v2_user, but they do have separate mitigations.  But I think that is the only missing one, so maybe it just makes sense to add a X86_BUG bit for that?

>
> I'm also not sure this series takes the right approach in representing logic in
> data structures versus code.
>
> For instance, this:
>
> > +     case MDS:
> > +     case TAA:
> > +     case MMIO:
> > +     case RFDS:
> > +     case SRBDS:
> > +     case GDS:
> > +             return cpu_mitigate_attack_vector(CPU_MITIGATE_USER_KERNEL)
> ||
> > +                     cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_HOST) ||
> > +                     cpu_mitigate_attack_vector(CPU_MITIGATE_USER_USER) ||
> > +
> > + cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_GUEST);
>
> We've _tended_ to represent these in data structure like cpu_vuln_whitelist.
>
> struct whatever var[] =
>    MACRO(MDS,  USER_KERNEL | GUEST_HOST | USER_USER | GUEST_GUEST)
>    MACRO(MMIO, USER_KERNEL | GUEST_HOST | USER_USER |
> GUEST_GUEST)
>    ...
> };

Ah, yeah I could do that.  I think the case statement makes it a bit easier to see groupings of which issues involve the same attack vectors, although that's also covered in the documentation file.

I'm not opposed to using a data structure for this if that’s more consistent with other areas.

>
> But I do like the concept of users being focused on the attack vectors in
> general.  That part is really nice.
>
> As we talk about this at Plumbers, we probably need to be focused on
> whether users want this new attack-vector-based selection mechanism or the
> old style.  Because adding the attack-vector style is going to add complexity
> any way we do it.

And to be clear, I was trying to continue to support both.  But the attack-vector style is also more future-proof because when new issues arise, they would get added to the appropriate vectors and users wouldn't have to do anything ideally.

Thanks
--David Kaplan

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [RFC PATCH 27/34] x86/bugs: Add attack vector controls for spectre_v1
  2024-09-12 19:57     ` Kaplan, David
@ 2024-09-12 20:16       ` Dave Hansen
  2024-09-12 21:15         ` Kaplan, David
  2024-09-13 14:20       ` Borislav Petkov
  1 sibling, 1 reply; 63+ messages in thread
From: Dave Hansen @ 2024-09-12 20:16 UTC (permalink / raw)
  To: Kaplan, David, Thomas Gleixner, Borislav Petkov, Peter Zijlstra,
	Josh Poimboeuf, Pawan Gupta, Ingo Molnar, Dave Hansen,
	x86@kernel.org, H . Peter Anvin
  Cc: linux-kernel@vger.kernel.org

On 9/12/24 12:57, Kaplan, David wrote:
> And to be clear, I was trying to continue to support both.  But the 
> attack-vector style is also more future-proof because when new issues 
> arise, they would get added to the appropriate vectors and users 
> wouldn't have to do anything ideally.

That's a good point.  Do you have any inkling about how static folks'
vector selection would have been over time?

For instance, if someone cared about CPU_MITIGATE_GUEST_HOST at the
original spectre_v2 time, did that carry forward to L1TF and all the way
into 2024?

Or would they have had to shift their vector selection over time?

^ permalink raw reply	[flat|nested] 63+ messages in thread

* RE: [RFC PATCH 27/34] x86/bugs: Add attack vector controls for spectre_v1
  2024-09-12 20:16       ` Dave Hansen
@ 2024-09-12 21:15         ` Kaplan, David
  2024-10-01  0:39           ` Manwaring, Derek
  0 siblings, 1 reply; 63+ messages in thread
From: Kaplan, David @ 2024-09-12 21:15 UTC (permalink / raw)
  To: Dave Hansen, Thomas Gleixner, Borislav Petkov, Peter Zijlstra,
	Josh Poimboeuf, Pawan Gupta, Ingo Molnar, Dave Hansen,
	x86@kernel.org, H . Peter Anvin
  Cc: linux-kernel@vger.kernel.org

[AMD Official Use Only - AMD Internal Distribution Only]

> -----Original Message-----
> From: Dave Hansen <dave.hansen@intel.com>
> Sent: Thursday, September 12, 2024 3:17 PM
> To: Kaplan, David <David.Kaplan@amd.com>; Thomas Gleixner
> <tglx@linutronix.de>; Borislav Petkov <bp@alien8.de>; Peter Zijlstra
> <peterz@infradead.org>; Josh Poimboeuf <jpoimboe@kernel.org>; Pawan
> Gupta <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar
> <mingo@redhat.com>; Dave Hansen <dave.hansen@linux.intel.com>;
> x86@kernel.org; H . Peter Anvin <hpa@zytor.com>
> Cc: linux-kernel@vger.kernel.org
> Subject: Re: [RFC PATCH 27/34] x86/bugs: Add attack vector controls for
> spectre_v1
>
> Caution: This message originated from an External Source. Use proper caution
> when opening attachments, clicking links, or responding.
>
>
> On 9/12/24 12:57, Kaplan, David wrote:
> > And to be clear, I was trying to continue to support both.  But the
> > attack-vector style is also more future-proof because when new issues
> > arise, they would get added to the appropriate vectors and users
> > wouldn't have to do anything ideally.
>
> That's a good point.  Do you have any inkling about how static folks'
> vector selection would have been over time?
>
> For instance, if someone cared about CPU_MITIGATE_GUEST_HOST at the
> original spectre_v2 time, did that carry forward to L1TF and all the way into
> 2024?
>
> Or would they have had to shift their vector selection over time?

In my view, the attack vector selection is a function of how the system is being used.  A system that runs untrusted guests and cared about spectre_v2 I would think also cares about L1TF, Retbleed, etc. They're all attacks that can leak the same kind of data, although the mechanisms of exploit are different.  In what I've personally seen, if you care about one attack along a certain attack vector, you tend to care about all of them.

Now that said, there could be risk decisions based on the characteristics of individual bugs.  And that’s one reason why this RFC proposes that the bug-specific options always override the attack vector selection (in either direction).

--David Kaplan

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [RFC PATCH 27/34] x86/bugs: Add attack vector controls for spectre_v1
  2024-09-12 19:57     ` Kaplan, David
  2024-09-12 20:16       ` Dave Hansen
@ 2024-09-13 14:20       ` Borislav Petkov
  1 sibling, 0 replies; 63+ messages in thread
From: Borislav Petkov @ 2024-09-13 14:20 UTC (permalink / raw)
  To: Kaplan, David
  Cc: Dave Hansen, Thomas Gleixner, Peter Zijlstra, Josh Poimboeuf,
	Pawan Gupta, Ingo Molnar, Dave Hansen, x86@kernel.org,
	H . Peter Anvin, linux-kernel@vger.kernel.org

On Thu, Sep 12, 2024 at 07:57:50PM +0000, Kaplan, David wrote:
> > There's also added complexity from having 'enum vulnerabilities' which
> > basically duplicate the X86_BUG_* space.  If the infrastructure was, for
> > instance, built around X86_BUG bits, it might have enabled this patch to be
> > something like:
> >
> > -       if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1) ||
> > -           cpu_mitigations_off())
> > +       if (!should_mitigate_vuln(X86_BUG_SPECTRE_V1))
> >                 spectre_v1_mitigation = SPECTRE_V1_MITIGATION_NONE;
> 
> That's a reasonable idea.  One issue I see is that there is no separation in
> the X86_BUG* space for spectre_v2 vs spectre_v2_user, but they do have
> separate mitigations.  But I think that is the only missing one, so maybe it
> just makes sense to add a X86_BUG bit for that?

I think we should do that. That's less complexity in the mitigations area and
those are always welcome.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [RFC PATCH 00/34] x86/bugs: Attack vector controls
  2024-09-12 19:08 [RFC PATCH 00/34] x86/bugs: Attack vector controls David Kaplan
                   ` (33 preceding siblings ...)
  2024-09-12 19:08 ` [RFC PATCH 34/34] x86/pti: Add attack vector controls for pti David Kaplan
@ 2024-09-17 17:04 ` Pawan Gupta
  2024-09-18  6:29   ` Kaplan, David
  34 siblings, 1 reply; 63+ messages in thread
From: Pawan Gupta @ 2024-09-17 17:04 UTC (permalink / raw)
  To: David Kaplan
  Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Ingo Molnar, Dave Hansen, x86, H . Peter Anvin, linux-kernel

On Thu, Sep 12, 2024 at 02:08:23PM -0500, David Kaplan wrote:
> The rest of the patches define new "attack vector" command line options
> to make it easier to select appropriate mitigations based on the usage
> of the system.  While many users may not be intimately familiar with the
> details of these CPU vulnerabilities, they are likely better able to
> understand the intended usage of their system.  As a result, unneeded
> mitigations may be disabled, allowing users to recoup more performance.

How much performance improvement are you seeing with each of the attack
vector?

There aren't many vulnerabilities that only affect a single attack vector.
So, selecting to mitigate single attack vector mitigates a lot more than
that.

We may be able to get better performance improvement by adding vector-based
switches at the mitigation points. And only enable them if user asked for it.

^ permalink raw reply	[flat|nested] 63+ messages in thread

* RE: [RFC PATCH 00/34] x86/bugs: Attack vector controls
  2024-09-17 17:04 ` [RFC PATCH 00/34] x86/bugs: Attack vector controls Pawan Gupta
@ 2024-09-18  6:29   ` Kaplan, David
  0 siblings, 0 replies; 63+ messages in thread
From: Kaplan, David @ 2024-09-18  6:29 UTC (permalink / raw)
  To: Pawan Gupta
  Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
	linux-kernel@vger.kernel.org

[AMD Official Use Only - AMD Internal Distribution Only]

> -----Original Message-----
> From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> Sent: Tuesday, September 17, 2024 7:04 PM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Borislav Petkov <bp@alien8.de>;
> Peter Zijlstra <peterz@infradead.org>; Josh Poimboeuf
> <jpoimboe@kernel.org>; Ingo Molnar <mingo@redhat.com>; Dave Hansen
> <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [RFC PATCH 00/34] x86/bugs: Attack vector controls
>
> Caution: This message originated from an External Source. Use proper caution
> when opening attachments, clicking links, or responding.
>
>
> On Thu, Sep 12, 2024 at 02:08:23PM -0500, David Kaplan wrote:
> > The rest of the patches define new "attack vector" command line
> > options to make it easier to select appropriate mitigations based on
> > the usage of the system.  While many users may not be intimately
> > familiar with the details of these CPU vulnerabilities, they are
> > likely better able to understand the intended usage of their system.
> > As a result, unneeded mitigations may be disabled, allowing users to recoup
> more performance.
>
> How much performance improvement are you seeing with each of the attack
> vector?
>
> There aren't many vulnerabilities that only affect a single attack vector.
> So, selecting to mitigate single attack vector mitigates a lot more than that.

I think any performance discussion will of course vary significantly by microarchitecture, workload, etc.  Several vulnerabilities are known to have non-trivial performance impacts.

Of course it's worth noting that several of the attack vectors likely go hand-in-hand...like if you trust userspace you would disable user_kernel and user_user.  I discuss in patch 18 why these are separated, but at least for now they'd likely be configured in sync.

>
> We may be able to get better performance improvement by adding vector-
> based switches at the mitigation points. And only enable them if user asked for
> it.

Right, and some mitigations might chose to now support different mitigations for each attack vector.  This was already the case with bhi (see patch 30) where the syscall mitigation was enabled only for the user_kernel vector and the vmexit mitigation only for the guest_host vector.  I could imagine other mitigations choosing to support similar separation which could lead to improved performance if mitigations on only certain vectors is required.

--David Kaplan


^ permalink raw reply	[flat|nested] 63+ messages in thread

* RE: [RFC PATCH 27/34] x86/bugs: Add attack vector controls for spectre_v1
  2024-09-12 21:15         ` Kaplan, David
@ 2024-10-01  0:39           ` Manwaring, Derek
  2024-10-01  1:46             ` Kaplan, David
  0 siblings, 1 reply; 63+ messages in thread
From: Manwaring, Derek @ 2024-10-01  0:39 UTC (permalink / raw)
  To: david.kaplan
  Cc: bp, dave.hansen, dave.hansen, hpa, jpoimboe, linux-kernel, mingo,
	pawan.kumar.gupta, peterz, tglx, x86

On 2024-09-12 21:15+0000 David Kaplan wrote:
> On 2024-09-12 13:16-0700 Dave Hansen wrote:
> > On 9/12/24 12:57, Kaplan, David wrote:
> > > And to be clear, I was trying to continue to support both.  But the
> > > attack-vector style is also more future-proof because when new issues
> > > arise, they would get added to the appropriate vectors and users
> > > wouldn't have to do anything ideally.
> >
> > That's a good point.  Do you have any inkling about how static folks'
> > vector selection would have been over time?
> >
> > For instance, if someone cared about CPU_MITIGATE_GUEST_HOST at the
> > original spectre_v2 time, did that carry forward to L1TF and all the way into
> > 2024?
> >
> > Or would they have had to shift their vector selection over time?
>
> In my view, the attack vector selection is a function of how the system
> is being used.  A system that runs untrusted guests and cared about
> spectre_v2 I would think also cares about L1TF, Retbleed, etc. They're
> all attacks that can leak the same kind of data, although the mechanisms
> of exploit are different.  In what I've personally seen, if you care
> about one attack along a certain attack vector, you tend to care about
> all of them.

This makes sense, but I'm not sure it is a meaningful simplification for
users. I think it'd be helpful if we had a few samples of how users
normally configure their systems. My hunch would be there are three main
camps:
  1) default for everything
  2) mitigations=off
  3) specify at least one mitigation individually.

I think you're saying group (3) is helped most because now they don't
have to understand each individual mitigation. But (3) is perhaps
already a very small group of users? Maybe it would help (1) as well
because they would get performance gains, but I'm skeptical of how many
would feel safe switching from defaults to a vector specification. If
they do feel comfortable doing that, they're probably closer to (3). Is
that fair?

Derek

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [RFC PATCH 18/34] Documentation/x86: Document the new attack vector controls
  2024-09-12 19:08 ` [RFC PATCH 18/34] Documentation/x86: Document the new attack vector controls David Kaplan
@ 2024-10-01  0:43   ` Manwaring, Derek
  2024-10-01  1:53     ` Kaplan, David
  0 siblings, 1 reply; 63+ messages in thread
From: Manwaring, Derek @ 2024-10-01  0:43 UTC (permalink / raw)
  To: david.kaplan
  Cc: bp, dave.hansen, hpa, jpoimboe, linux-kernel, mingo,
	pawan.kumar.gupta, peterz, tglx, x86

On 2024-09-12 14:08-0500 David Kaplan wrote:
> +
> +Summary of attack-vector mitigations
> +------------------------------------
> +
> +When a vulnerability is mitigated due to an attack-vector control, the default
> +mitigation option for that particular vulnerability is used.  To use a different
> +mitigation, please use the vulnerability-specific command line option.
> +
> +The table below summarizes which vulnerabilities are mitigated when different
> +attack vectors are enabled and assuming the CPU is vulnerable.

Really excited to see this breakdown of which attacks matter when. I
think this will help demystify the space generally. I am tempted to add
even more issues to the table, but I suppose the idea is to limit only
to issues for which there is a kernel parameter, is that right?

I think it'd be useful to get to a point that if someone comes across
one of the many papers & issue names, they could find it here and have
an idea of how it impacts their workload. Maybe this isn't the place for
that kind of a glossary, but interested in hearing where you see
something like that fitting in. If we could at least add a column or
footnote for each to capture something like "SRSO is also known as
Inception and CVE-2023-20569," I think that would go a long way to
reduce confusion.

Derek

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [RFC PATCH 21/34] x86/bugs: Add attack vector controls for mds
  2024-09-12 19:08 ` [RFC PATCH 21/34] x86/bugs: Add attack vector controls for mds David Kaplan
@ 2024-10-01  0:50   ` Manwaring, Derek
  2024-10-01  1:58     ` Kaplan, David
  0 siblings, 1 reply; 63+ messages in thread
From: Manwaring, Derek @ 2024-10-01  0:50 UTC (permalink / raw)
  To: david.kaplan
  Cc: bp, dave.hansen, hpa, jpoimboe, linux-kernel, mingo,
	pawan.kumar.gupta, peterz, tglx, x86

On 2024-09-12 14:08-0500 David Kaplan wrote:
> @@ -446,7 +450,8 @@ static void __init mds_apply_mitigation(void)
>      if (mds_mitigation == MDS_MITIGATION_FULL) {
>          setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
>          if (!boot_cpu_has(X86_BUG_MSBDS_ONLY) &&
> -            (mds_nosmt || cpu_mitigations_auto_nosmt()))
> +            (mds_nosmt || cpu_mitigations_auto_nosmt() ||
> +             cpu_mitigate_attack_vector(CPU_MITIGATE_CROSS_THREAD)))
>              cpu_smt_disable(false);
>      }
>  }

Maybe I'm missing something here - if you care about user/user, why
would you not care about cross-thread? It seems to me SMT should be
turned off for all of the vectors.

Derek

^ permalink raw reply	[flat|nested] 63+ messages in thread

* RE: [RFC PATCH 27/34] x86/bugs: Add attack vector controls for spectre_v1
  2024-10-01  0:39           ` Manwaring, Derek
@ 2024-10-01  1:46             ` Kaplan, David
  2024-10-01 22:18               ` Manwaring, Derek
  0 siblings, 1 reply; 63+ messages in thread
From: Kaplan, David @ 2024-10-01  1:46 UTC (permalink / raw)
  To: Manwaring, Derek
  Cc: bp@alien8.de, dave.hansen@intel.com, dave.hansen@linux.intel.com,
	hpa@zytor.com, jpoimboe@kernel.org, linux-kernel@vger.kernel.org,
	mingo@redhat.com, pawan.kumar.gupta@linux.intel.com,
	peterz@infradead.org, tglx@linutronix.de, x86@kernel.org

[AMD Official Use Only - AMD Internal Distribution Only]

> -----Original Message-----
> From: Manwaring, Derek <derekmn@amazon.com>
> Sent: Monday, September 30, 2024 7:40 PM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: bp@alien8.de; dave.hansen@intel.com; dave.hansen@linux.intel.com;
> hpa@zytor.com; jpoimboe@kernel.org; linux-kernel@vger.kernel.org;
> mingo@redhat.com; pawan.kumar.gupta@linux.intel.com;
> peterz@infradead.org; tglx@linutronix.de; x86@kernel.org
> Subject: RE: [RFC PATCH 27/34] x86/bugs: Add attack vector controls for
> spectre_v1
>
> Caution: This message originated from an External Source. Use proper
> caution when opening attachments, clicking links, or responding.
>
>
> On 2024-09-12 21:15+0000 David Kaplan wrote:
> > On 2024-09-12 13:16-0700 Dave Hansen wrote:
> > > On 9/12/24 12:57, Kaplan, David wrote:
> > > > And to be clear, I was trying to continue to support both.  But
> > > > the attack-vector style is also more future-proof because when new
> > > > issues arise, they would get added to the appropriate vectors and
> > > > users wouldn't have to do anything ideally.
> > >
> > > That's a good point.  Do you have any inkling about how static folks'
> > > vector selection would have been over time?
> > >
> > > For instance, if someone cared about CPU_MITIGATE_GUEST_HOST at the
> > > original spectre_v2 time, did that carry forward to L1TF and all the
> > > way into 2024?
> > >
> > > Or would they have had to shift their vector selection over time?
> >
> > In my view, the attack vector selection is a function of how the
> > system is being used.  A system that runs untrusted guests and cared
> > about
> > spectre_v2 I would think also cares about L1TF, Retbleed, etc. They're
> > all attacks that can leak the same kind of data, although the
> > mechanisms of exploit are different.  In what I've personally seen, if
> > you care about one attack along a certain attack vector, you tend to
> > care about all of them.
>
> This makes sense, but I'm not sure it is a meaningful simplification for users. I
> think it'd be helpful if we had a few samples of how users normally configure
> their systems. My hunch would be there are three main
> camps:
>   1) default for everything
>   2) mitigations=off
>   3) specify at least one mitigation individually.
>
> I think you're saying group (3) is helped most because now they don't have to
> understand each individual mitigation. But (3) is perhaps already a very small
> group of users? Maybe it would help (1) as well because they would get
> performance gains, but I'm skeptical of how many would feel safe switching
> from defaults to a vector specification. If they do feel comfortable doing that,
> they're probably closer to (3). Is that fair?
>

I think these attack vector controls make it easier to configure say a system where userspace is trusted by VMs are not (such as with a cloud node).  Or a shared system where userspace is untrusted but only trusted users are allowed to run VMs, so the VMs are trusted.  I see those as potentially being more likely vs specifying mitigations individually (which I suspect very few people do).

If it was helpful, I could perhaps include these as examples in the documentation file.

--David Kaplan

^ permalink raw reply	[flat|nested] 63+ messages in thread

* RE: [RFC PATCH 18/34] Documentation/x86: Document the new attack vector controls
  2024-10-01  0:43   ` Manwaring, Derek
@ 2024-10-01  1:53     ` Kaplan, David
  2024-10-01 22:21       ` Manwaring, Derek
  0 siblings, 1 reply; 63+ messages in thread
From: Kaplan, David @ 2024-10-01  1:53 UTC (permalink / raw)
  To: Manwaring, Derek
  Cc: bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com,
	jpoimboe@kernel.org, linux-kernel@vger.kernel.org,
	mingo@redhat.com, pawan.kumar.gupta@linux.intel.com,
	peterz@infradead.org, tglx@linutronix.de, x86@kernel.org

[AMD Official Use Only - AMD Internal Distribution Only]

> -----Original Message-----
> From: Manwaring, Derek <derekmn@amazon.com>
> Sent: Monday, September 30, 2024 7:44 PM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: bp@alien8.de; dave.hansen@linux.intel.com; hpa@zytor.com;
> jpoimboe@kernel.org; linux-kernel@vger.kernel.org; mingo@redhat.com;
> pawan.kumar.gupta@linux.intel.com; peterz@infradead.org;
> tglx@linutronix.de; x86@kernel.org
> Subject: Re: [RFC PATCH 18/34] Documentation/x86: Document the new
> attack vector controls
>
> Caution: This message originated from an External Source. Use proper
> caution when opening attachments, clicking links, or responding.
>
>
> On 2024-09-12 14:08-0500 David Kaplan wrote:
> > +
> > +Summary of attack-vector mitigations
> > +------------------------------------
> > +
> > +When a vulnerability is mitigated due to an attack-vector control,
> > +the default mitigation option for that particular vulnerability is
> > +used.  To use a different mitigation, please use the vulnerability-specific
> command line option.
> > +
> > +The table below summarizes which vulnerabilities are mitigated when
> > +different attack vectors are enabled and assuming the CPU is vulnerable.
>
> Really excited to see this breakdown of which attacks matter when. I think
> this will help demystify the space generally. I am tempted to add even more
> issues to the table, but I suppose the idea is to limit only to issues for which
> there is a kernel parameter, is that right?

Right.

>
> I think it'd be useful to get to a point that if someone comes across one of the
> many papers & issue names, they could find it here and have an idea of how
> it impacts their workload. Maybe this isn't the place for that kind of a
> glossary, but interested in hearing where you see something like that fitting
> in. If we could at least add a column or footnote for each to capture
> something like "SRSO is also known as Inception and CVE-2023-20569," I
> think that would go a long way to reduce confusion.
>

That's a good idea.  One thought could be a new documentation file which could map CVE numbers to vendor/researcher names, kernel options, and related documentation.  Some of the issues already have their own documentation files with more details, but not all do.  I tend to agree it would be nice to have something easily searchable to help navigate all the names/acronyms.

Open to other ideas on how to present the info, but this seems like a good thing to add somewhere.

--David Kaplan

^ permalink raw reply	[flat|nested] 63+ messages in thread

* RE: [RFC PATCH 21/34] x86/bugs: Add attack vector controls for mds
  2024-10-01  0:50   ` Manwaring, Derek
@ 2024-10-01  1:58     ` Kaplan, David
  2024-10-01 22:37       ` Manwaring, Derek
  0 siblings, 1 reply; 63+ messages in thread
From: Kaplan, David @ 2024-10-01  1:58 UTC (permalink / raw)
  To: Manwaring, Derek
  Cc: bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com,
	jpoimboe@kernel.org, linux-kernel@vger.kernel.org,
	mingo@redhat.com, pawan.kumar.gupta@linux.intel.com,
	peterz@infradead.org, tglx@linutronix.de, x86@kernel.org

[AMD Official Use Only - AMD Internal Distribution Only]

> -----Original Message-----
> From: Manwaring, Derek <derekmn@amazon.com>
> Sent: Monday, September 30, 2024 7:50 PM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: bp@alien8.de; dave.hansen@linux.intel.com; hpa@zytor.com;
> jpoimboe@kernel.org; linux-kernel@vger.kernel.org; mingo@redhat.com;
> pawan.kumar.gupta@linux.intel.com; peterz@infradead.org;
> tglx@linutronix.de; x86@kernel.org
> Subject: Re: [RFC PATCH 21/34] x86/bugs: Add attack vector controls for mds
>
> Caution: This message originated from an External Source. Use proper
> caution when opening attachments, clicking links, or responding.
>
>
> On 2024-09-12 14:08-0500 David Kaplan wrote:
> > @@ -446,7 +450,8 @@ static void __init mds_apply_mitigation(void)
> >      if (mds_mitigation == MDS_MITIGATION_FULL) {
> >          setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
> >          if (!boot_cpu_has(X86_BUG_MSBDS_ONLY) &&
> > -            (mds_nosmt || cpu_mitigations_auto_nosmt()))
> > +            (mds_nosmt || cpu_mitigations_auto_nosmt() ||
> > +             cpu_mitigate_attack_vector(CPU_MITIGATE_CROSS_THREAD)))
> >              cpu_smt_disable(false);
> >      }
> >  }
>
> Maybe I'm missing something here - if you care about user/user, why would
> you not care about cross-thread? It seems to me SMT should be turned off
> for all of the vectors.
>
> Derek

I broke out cross-thread separately to maintain the existing kernel defaults, which does not disable SMT by default even if full mitigation requires it.

In theory, cross-thread protection is only required if there is a risk that untrusted workloads might run as siblings.  If techniques like core scheduling are used, this might be able to be prevented I suppose.

--David Kaplan

^ permalink raw reply	[flat|nested] 63+ messages in thread

* RE: [RFC PATCH 27/34] x86/bugs: Add attack vector controls for spectre_v1
  2024-10-01  1:46             ` Kaplan, David
@ 2024-10-01 22:18               ` Manwaring, Derek
  0 siblings, 0 replies; 63+ messages in thread
From: Manwaring, Derek @ 2024-10-01 22:18 UTC (permalink / raw)
  To: david.kaplan
  Cc: bp, dave.hansen, dave.hansen, derekmn, hpa, jpoimboe,
	linux-kernel, mingo, pawan.kumar.gupta, peterz, tglx, x86

On 2024-10-01 01:45+0000 David Kaplan wrote:
> On 2024-09-30 17:39-0700 Derek Manwaring wrote:
> > On 2024-09-12 21:15+0000 David Kaplan wrote:
> > > On 2024-09-12 13:16-0700 Dave Hansen wrote:
> > > > On 9/12/24 12:57, Kaplan, David wrote:
> > > > > And to be clear, I was trying to continue to support both.  But
> > > > > the attack-vector style is also more future-proof because when new
> > > > > issues arise, they would get added to the appropriate vectors and
> > > > > users wouldn't have to do anything ideally.
> > > >
> > > > That's a good point.  Do you have any inkling about how static folks'
> > > > vector selection would have been over time?
> > > >
> > > > For instance, if someone cared about CPU_MITIGATE_GUEST_HOST at the
> > > > original spectre_v2 time, did that carry forward to L1TF and all the
> > > > way into 2024?
> > > >
> > > > Or would they have had to shift their vector selection over time?
> > >
> > > In my view, the attack vector selection is a function of how the
> > > system is being used.  A system that runs untrusted guests and cared
> > > about
> > > spectre_v2 I would think also cares about L1TF, Retbleed, etc. They're
> > > all attacks that can leak the same kind of data, although the
> > > mechanisms of exploit are different.  In what I've personally seen, if
> > > you care about one attack along a certain attack vector, you tend to
> > > care about all of them.
> >
> > This makes sense, but I'm not sure it is a meaningful simplification for users. I
> > think it'd be helpful if we had a few samples of how users normally configure
> > their systems. My hunch would be there are three main
> > camps:
> >   1) default for everything
> >   2) mitigations=off
> >   3) specify at least one mitigation individually.
> >
> > I think you're saying group (3) is helped most because now they don't have to
> > understand each individual mitigation. But (3) is perhaps already a very small
> > group of users? Maybe it would help (1) as well because they would get
> > performance gains, but I'm skeptical of how many would feel safe switching
> > from defaults to a vector specification. If they do feel comfortable doing that,
> > they're probably closer to (3). Is that fair?
>
> I think these attack vector controls make it easier to configure say a
> system where userspace is trusted by VMs are not (such as with a cloud
> node).  Or a shared system where userspace is untrusted but only trusted
> users are allowed to run VMs, so the VMs are trusted.  I see those as
> potentially being more likely vs specifying mitigations individually
> (which I suspect very few people do).

Ok I see the potential for those cases. I still wonder whether the extra
complexity for everyone is worth the benefits to those users.

> If it was helpful, I could perhaps include these as examples in the
> documentation file.

I think the examples would help, yeah.

Derek

^ permalink raw reply	[flat|nested] 63+ messages in thread

* RE: [RFC PATCH 18/34] Documentation/x86: Document the new attack vector controls
  2024-10-01  1:53     ` Kaplan, David
@ 2024-10-01 22:21       ` Manwaring, Derek
  0 siblings, 0 replies; 63+ messages in thread
From: Manwaring, Derek @ 2024-10-01 22:21 UTC (permalink / raw)
  To: david.kaplan
  Cc: bp, dave.hansen, derekmn, hpa, jpoimboe, linux-kernel, mingo,
	pawan.kumar.gupta, peterz, tglx, x86

On 2024-10-01 01:53+0000 David Kaplan wrote:
> On 2024-09-30 17:43-0700 Derek Manwaring wrote:
> > I think it'd be useful to get to a point that if someone comes across one of the
> > many papers & issue names, they could find it here and have an idea of how
> > it impacts their workload. Maybe this isn't the place for that kind of a
> > glossary, but interested in hearing where you see something like that fitting
> > in. If we could at least add a column or footnote for each to capture
> > something like "SRSO is also known as Inception and CVE-2023-20569," I
> > think that would go a long way to reduce confusion.
>
> That's a good idea.  One thought could be a new documentation file which could
> map CVE numbers to vendor/researcher names, kernel options, and related
> documentation.  Some of the issues already have their own documentation files
> with more details, but not all do.  I tend to agree it would be nice to have
> something easily searchable to help navigate all the names/acronyms.
>
> Open to other ideas on how to present the info, but this seems like a good
> thing to add somewhere.

Great, yeah if not as an addition to "Summary of attack-vector mitigations,"
maybe a new table in hw-vuln/index would be a good place.

Derek

^ permalink raw reply	[flat|nested] 63+ messages in thread

* RE: [RFC PATCH 21/34] x86/bugs: Add attack vector controls for mds
  2024-10-01  1:58     ` Kaplan, David
@ 2024-10-01 22:37       ` Manwaring, Derek
  2024-10-02 14:28         ` Kaplan, David
  2024-10-02 15:50         ` Pawan Gupta
  0 siblings, 2 replies; 63+ messages in thread
From: Manwaring, Derek @ 2024-10-01 22:37 UTC (permalink / raw)
  To: david.kaplan
  Cc: bp, dave.hansen, derekmn, hpa, jpoimboe, linux-kernel, mingo,
	pawan.kumar.gupta, peterz, tglx, x86

On 2024-10-01 01:56+0000 David Kaplan wrote:
> On 2024-09-30 17:50-0700 Derek Manwaring wrote:
> > Maybe I'm missing something here - if you care about user/user, why would
> > you not care about cross-thread? It seems to me SMT should be turned off
> > for all of the vectors.
>
> I broke out cross-thread separately to maintain the existing kernel
> defaults, which does not disable SMT by default even if full mitigation
> requires it.

Ok that makes a lot of sense. My bias would be to use the vector
parameters as an opportunity to make the SMT stance more obvious. So
kernel's position becomes more of "I disabled SMT because you asked for
protection with mitigate_user_user" (or any other vector). If no vector
parameters are specified, SMT default would be maintained. What are your
thoughts on disabling SMT if a vector parameter is explicitly supplied?

> In theory, cross-thread protection is only required if there is a risk
> that untrusted workloads might run as siblings.  If techniques like core
> scheduling are used, this might be able to be prevented I suppose.

True, though I think it's worth making clear that doing core scheduling
correctly is the user's responsibility, and the vector protection they
asked for may be incomplete if there are mistakes in how they manage
process cookies. Just an idea, but what if users had to ask for SMT to
remain enabled if they had also asked for protection from one of these
vectors?

Derek

^ permalink raw reply	[flat|nested] 63+ messages in thread

* RE: [RFC PATCH 21/34] x86/bugs: Add attack vector controls for mds
  2024-10-01 22:37       ` Manwaring, Derek
@ 2024-10-02 14:28         ` Kaplan, David
  2024-10-02 20:11           ` Manwaring, Derek
  2024-10-02 15:50         ` Pawan Gupta
  1 sibling, 1 reply; 63+ messages in thread
From: Kaplan, David @ 2024-10-02 14:28 UTC (permalink / raw)
  To: Manwaring, Derek
  Cc: bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com,
	jpoimboe@kernel.org, linux-kernel@vger.kernel.org,
	mingo@redhat.com, pawan.kumar.gupta@linux.intel.com,
	peterz@infradead.org, tglx@linutronix.de, x86@kernel.org

[AMD Official Use Only - AMD Internal Distribution Only]

> -----Original Message-----
> From: Manwaring, Derek <derekmn@amazon.com>
> Sent: Tuesday, October 1, 2024 5:37 PM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: bp@alien8.de; dave.hansen@linux.intel.com; derekmn@amazon.com;
> hpa@zytor.com; jpoimboe@kernel.org; linux-kernel@vger.kernel.org;
> mingo@redhat.com; pawan.kumar.gupta@linux.intel.com; peterz@infradead.org;
> tglx@linutronix.de; x86@kernel.org
> Subject: RE: [RFC PATCH 21/34] x86/bugs: Add attack vector controls for mds
>
> Caution: This message originated from an External Source. Use proper caution
> when opening attachments, clicking links, or responding.
>
>
> On 2024-10-01 01:56+0000 David Kaplan wrote:
> > On 2024-09-30 17:50-0700 Derek Manwaring wrote:
> > > Maybe I'm missing something here - if you care about user/user, why
> > > would you not care about cross-thread? It seems to me SMT should be
> > > turned off for all of the vectors.
> >
> > I broke out cross-thread separately to maintain the existing kernel
> > defaults, which does not disable SMT by default even if full
> > mitigation requires it.
>
> Ok that makes a lot of sense. My bias would be to use the vector parameters as an
> opportunity to make the SMT stance more obvious. So kernel's position becomes
> more of "I disabled SMT because you asked for protection with mitigate_user_user"
> (or any other vector). If no vector parameters are specified, SMT default would be
> maintained. What are your thoughts on disabling SMT if a vector parameter is
> explicitly supplied?

Hmm, I'm not quite sure how to do that because mitigate_user_user defaults to 'on' (again, to maintain the existing kernel defaults).  It seems odd to me that explicitly specifying 'mitigate_user_user=on' would result in different behavior.  And I think many vulnerabilities that require SMT disabled will already print out a warning if mitigation is requested and SMT is enabled.  Open to ideas here...

>
> > In theory, cross-thread protection is only required if there is a risk
> > that untrusted workloads might run as siblings.  If techniques like
> > core scheduling are used, this might be able to be prevented I suppose.
>
> True, though I think it's worth making clear that doing core scheduling correctly is
> the user's responsibility, and the vector protection they asked for may be incomplete
> if there are mistakes in how they manage process cookies. Just an idea, but what if
> users had to ask for SMT to remain enabled if they had also asked for protection
> from one of these vectors?
>
> Derek

I think the fact that some mitigations will print warnings if SMT is enabled might be sufficient here.  I can also add something more about core scheduling in the documentation file.

--David Kaplan

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [RFC PATCH 21/34] x86/bugs: Add attack vector controls for mds
  2024-10-01 22:37       ` Manwaring, Derek
  2024-10-02 14:28         ` Kaplan, David
@ 2024-10-02 15:50         ` Pawan Gupta
  2024-10-02 19:40           ` Manwaring, Derek
  1 sibling, 1 reply; 63+ messages in thread
From: Pawan Gupta @ 2024-10-02 15:50 UTC (permalink / raw)
  To: Manwaring, Derek
  Cc: david.kaplan, bp, dave.hansen, hpa, jpoimboe, linux-kernel, mingo,
	peterz, tglx, x86

On Tue, Oct 01, 2024 at 03:37:13PM -0700, Manwaring, Derek wrote:
> On 2024-10-01 01:56+0000 David Kaplan wrote:
> > On 2024-09-30 17:50-0700 Derek Manwaring wrote:
> > > Maybe I'm missing something here - if you care about user/user, why would
> > > you not care about cross-thread? It seems to me SMT should be turned off
> > > for all of the vectors.
> >
> > I broke out cross-thread separately to maintain the existing kernel
> > defaults, which does not disable SMT by default even if full mitigation
> > requires it.
> 
> Ok that makes a lot of sense. My bias would be to use the vector
> parameters as an opportunity to make the SMT stance more obvious. So
> kernel's position becomes more of "I disabled SMT because you asked for
> protection with mitigate_user_user" (or any other vector). If no vector
> parameters are specified, SMT default would be maintained. What are your
> thoughts on disabling SMT if a vector parameter is explicitly supplied?

I think attack vector mitigation like user-user does not necessarily mean
SMT needs to be disabled. For example, for a system only affected by
Spectre-v2, selecting user-user mitigation should deploy STIBP and IBPB,
rather than disabling SMT.

IMO, unless explicitly asked by a user, the decision to disable SMT should
be left to individual mitigations.

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [RFC PATCH 21/34] x86/bugs: Add attack vector controls for mds
  2024-10-02 15:50         ` Pawan Gupta
@ 2024-10-02 19:40           ` Manwaring, Derek
  0 siblings, 0 replies; 63+ messages in thread
From: Manwaring, Derek @ 2024-10-02 19:40 UTC (permalink / raw)
  To: pawan.kumar.gupta
  Cc: bp, dave.hansen, david.kaplan, derekmn, hpa, jpoimboe,
	linux-kernel, mingo, peterz, tglx, x86

On 2024-10-02  08:50-0700 Pawan Gupta wrote:
> On Tue, Oct 01, 2024 at 03:37:13PM -0700, Manwaring, Derek wrote:
> > On 2024-10-01 01:56+0000 David Kaplan wrote:
> > > On 2024-09-30 17:50-0700 Derek Manwaring wrote:
> > > > Maybe I'm missing something here - if you care about user/user, why would
> > > > you not care about cross-thread? It seems to me SMT should be turned off
> > > > for all of the vectors.
> > >
> > > I broke out cross-thread separately to maintain the existing kernel
> > > defaults, which does not disable SMT by default even if full mitigation
> > > requires it.
> >
> > Ok that makes a lot of sense. My bias would be to use the vector
> > parameters as an opportunity to make the SMT stance more obvious. So
> > kernel's position becomes more of "I disabled SMT because you asked for
> > protection with mitigate_user_user" (or any other vector). If no vector
> > parameters are specified, SMT default would be maintained. What are your
> > thoughts on disabling SMT if a vector parameter is explicitly supplied?
>
> I think attack vector mitigation like user-user does not necessarily mean
> SMT needs to be disabled. For example, for a system only affected by
> Spectre-v2, selecting user-user mitigation should deploy STIBP and IBPB,
> rather than disabling SMT.
>
> IMO, unless explicitly asked by a user, the decision to disable SMT should
> be left to individual mitigations.

Maybe so. Agree on preferring targeted mitigations rather than
disabling SMT where possible.

Derek

^ permalink raw reply	[flat|nested] 63+ messages in thread

* RE: [RFC PATCH 21/34] x86/bugs: Add attack vector controls for mds
  2024-10-02 14:28         ` Kaplan, David
@ 2024-10-02 20:11           ` Manwaring, Derek
  2024-10-02 20:26             ` Kaplan, David
  0 siblings, 1 reply; 63+ messages in thread
From: Manwaring, Derek @ 2024-10-02 20:11 UTC (permalink / raw)
  To: david.kaplan
  Cc: bp, dave.hansen, derekmn, hpa, jpoimboe, linux-kernel, mingo,
	pawan.kumar.gupta, peterz, tglx, x86

On 2024-10-02 14:28+0000 David Kaplan wrote:
> On 2024-10-01 22:37+0000 Derek Manwaring wrote:
> > On 2024-10-01 01:56+0000 David Kaplan wrote:
> > > On 2024-09-30 17:50-0700 Derek Manwaring wrote:
> > > > Maybe I'm missing something here - if you care about user/user, why
> > > > would you not care about cross-thread? It seems to me SMT should be
> > > > turned off for all of the vectors.
> > >
> > > I broke out cross-thread separately to maintain the existing kernel
> > > defaults, which does not disable SMT by default even if full
> > > mitigation requires it.
> >
> > Ok that makes a lot of sense. My bias would be to use the vector parameters as an
> > opportunity to make the SMT stance more obvious. So kernel's position becomes
> > more of "I disabled SMT because you asked for protection with mitigate_user_user"
> > (or any other vector). If no vector parameters are specified, SMT default would be
> > maintained. What are your thoughts on disabling SMT if a vector parameter is
> > explicitly supplied?
>
> Hmm, I'm not quite sure how to do that because mitigate_user_user
> defaults to 'on' (again, to maintain the existing kernel defaults).  It
> seems odd to me that explicitly specifying 'mitigate_user_user=on' would
> result in different behavior.  And I think many vulnerabilities that
> require SMT disabled will already print out a warning if mitigation is
> requested and SMT is enabled.  Open to ideas here...

Yeah this would be awkward. Maybe the warning is enough. It just makes
SMT such an exception.

> > > In theory, cross-thread protection is only required if there is a risk
> > > that untrusted workloads might run as siblings.  If techniques like
> > > core scheduling are used, this might be able to be prevented I suppose.
> >
> > True, though I think it's worth making clear that doing core scheduling correctly is
> > the user's responsibility, and the vector protection they asked for may be incomplete
> > if there are mistakes in how they manage process cookies. Just an idea, but what if
> > users had to ask for SMT to remain enabled if they had also asked for protection
> > from one of these vectors?
>
> I think the fact that some mitigations will print warnings if SMT is
> enabled might be sufficient here.  I can also add something more about
> core scheduling in the documentation file.

That works. Personally I would say make the SMT and core scheduling bits
clear in the documentation and remove mitigate_cross_threads since it's
not inherently two separate domains like the others are
(user/kernel/guest/host).

It should be clear that SMT is the one case where specifying a vector
will not necessarily give you sufficient protection (unless we can find
an intuitive/low-surprise way to disable SMT when required to mitigate
certain vulnerabilities for the configured vector on affected parts).

Derek

^ permalink raw reply	[flat|nested] 63+ messages in thread

* RE: [RFC PATCH 21/34] x86/bugs: Add attack vector controls for mds
  2024-10-02 20:11           ` Manwaring, Derek
@ 2024-10-02 20:26             ` Kaplan, David
  0 siblings, 0 replies; 63+ messages in thread
From: Kaplan, David @ 2024-10-02 20:26 UTC (permalink / raw)
  To: Manwaring, Derek
  Cc: bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com,
	jpoimboe@kernel.org, linux-kernel@vger.kernel.org,
	mingo@redhat.com, pawan.kumar.gupta@linux.intel.com,
	peterz@infradead.org, tglx@linutronix.de, x86@kernel.org

[AMD Official Use Only - AMD Internal Distribution Only]

> -----Original Message-----
> From: Manwaring, Derek <derekmn@amazon.com>
> Sent: Wednesday, October 2, 2024 3:12 PM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: bp@alien8.de; dave.hansen@linux.intel.com; derekmn@amazon.com;
> hpa@zytor.com; jpoimboe@kernel.org; linux-kernel@vger.kernel.org;
> mingo@redhat.com; pawan.kumar.gupta@linux.intel.com; peterz@infradead.org;
> tglx@linutronix.de; x86@kernel.org
> Subject: RE: [RFC PATCH 21/34] x86/bugs: Add attack vector controls for mds
>
> Caution: This message originated from an External Source. Use proper caution
> when opening attachments, clicking links, or responding.
>
>
> On 2024-10-02 14:28+0000 David Kaplan wrote:
> > On 2024-10-01 22:37+0000 Derek Manwaring wrote:
> > > On 2024-10-01 01:56+0000 David Kaplan wrote:
> > > > On 2024-09-30 17:50-0700 Derek Manwaring wrote:
> > > > > Maybe I'm missing something here - if you care about user/user,
> > > > > why would you not care about cross-thread? It seems to me SMT
> > > > > should be turned off for all of the vectors.
> > > >
> > > > I broke out cross-thread separately to maintain the existing
> > > > kernel defaults, which does not disable SMT by default even if
> > > > full mitigation requires it.
> > >
> > > Ok that makes a lot of sense. My bias would be to use the vector
> > > parameters as an opportunity to make the SMT stance more obvious. So
> > > kernel's position becomes more of "I disabled SMT because you asked for
> protection with mitigate_user_user"
> > > (or any other vector). If no vector parameters are specified, SMT
> > > default would be maintained. What are your thoughts on disabling SMT
> > > if a vector parameter is explicitly supplied?
> >
> > Hmm, I'm not quite sure how to do that because mitigate_user_user
> > defaults to 'on' (again, to maintain the existing kernel defaults).
> > It seems odd to me that explicitly specifying 'mitigate_user_user=on'
> > would result in different behavior.  And I think many vulnerabilities
> > that require SMT disabled will already print out a warning if
> > mitigation is requested and SMT is enabled.  Open to ideas here...
>
> Yeah this would be awkward. Maybe the warning is enough. It just makes SMT
> such an exception.
>
> > > > In theory, cross-thread protection is only required if there is a
> > > > risk that untrusted workloads might run as siblings.  If
> > > > techniques like core scheduling are used, this might be able to be prevented I
> suppose.
> > >
> > > True, though I think it's worth making clear that doing core
> > > scheduling correctly is the user's responsibility, and the vector
> > > protection they asked for may be incomplete if there are mistakes in
> > > how they manage process cookies. Just an idea, but what if users had
> > > to ask for SMT to remain enabled if they had also asked for protection from one
> of these vectors?
> >
> > I think the fact that some mitigations will print warnings if SMT is
> > enabled might be sufficient here.  I can also add something more about
> > core scheduling in the documentation file.
>
> That works. Personally I would say make the SMT and core scheduling bits clear in
> the documentation and remove mitigate_cross_threads since it's not inherently two
> separate domains like the others are (user/kernel/guest/host).

I wanted to keep mitigate_cross_thread because a paranoid user could simply set all attack vector controls to 'on' and know they are fully mitigated against everything without having to worry about core scheduling.  Mitigating cross thread attacks doesn't always disable SMT, it depends on what CPU you're running on.  If I removed that vector, then you'd have to boot up and see if you got any warnings, and then reboot without SMT.

Part of the goal with the attack vector stuff is to be future-compatible.  So even if you're currently running on HW that doesn't require disabling SMT for mitigation, but then you move to new HW that does, you don't have to change anything as the system will remain fully mitigated.

But I do agree that cross-thread is a bit different than the other vectors, both because it's not inherently across domains but also because its relevance may be related to scheduling policy that the kernel knows nothing about (at boot time at least).

--David Kaplan

>
> It should be clear that SMT is the one case where specifying a vector will not
> necessarily give you sufficient protection (unless we can find an intuitive/low-
> surprise way to disable SMT when required to mitigate certain vulnerabilities for the
> configured vector on affected parts).
>


^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [RFC PATCH 11/34] x86/bugs: Restructure retbleed mitigation
  2024-09-12 19:08 ` [RFC PATCH 11/34] x86/bugs: Restructure retbleed mitigation David Kaplan
@ 2024-10-08  8:32   ` Nikolay Borisov
  2024-10-08 14:28     ` Kaplan, David
  0 siblings, 1 reply; 63+ messages in thread
From: Nikolay Borisov @ 2024-10-08  8:32 UTC (permalink / raw)
  To: David Kaplan, Thomas Gleixner, Borislav Petkov, Peter Zijlstra,
	Josh Poimboeuf, Pawan Gupta, Ingo Molnar, Dave Hansen, x86,
	H . Peter Anvin
  Cc: linux-kernel



On 12.09.24 г. 22:08 ч., David Kaplan wrote:
> Restructure retbleed mitigation to use select/update/apply functions to
> create consistent vulnerability handling.  The retbleed_update_mitigation()
> simplifies the dependency between spectre_v2 and retbleed.
> 
> The command line options now directly select a preferred mitigation
> which simplifies the logic.
> 
> Signed-off-by: David Kaplan <david.kaplan@amd.com>
> ---
>   arch/x86/kernel/cpu/bugs.c | 168 ++++++++++++++++---------------------
>   1 file changed, 73 insertions(+), 95 deletions(-)
> 

<snip>

>   static void __init retbleed_select_mitigation(void)
>   {
> -	bool mitigate_smt = false;
> -
>   	if (!boot_cpu_has_bug(X86_BUG_RETBLEED) || cpu_mitigations_off())
>   		return;
>   
> -	switch (retbleed_cmd) {
> -	case RETBLEED_CMD_OFF:
> -		return;
> -
> -	case RETBLEED_CMD_UNRET:
> -		if (IS_ENABLED(CONFIG_MITIGATION_UNRET_ENTRY)) {
> -			retbleed_mitigation = RETBLEED_MITIGATION_UNRET;
> -		} else {
> +	switch (retbleed_mitigation) {
> +	case RETBLEED_MITIGATION_UNRET:
> +		if (!IS_ENABLED(CONFIG_MITIGATION_UNRET_ENTRY)) {
> +			retbleed_mitigation = RETBLEED_MITIGATION_AUTO;
>   			pr_err("WARNING: kernel not compiled with MITIGATION_UNRET_ENTRY.\n");
> -			goto do_cmd_auto;
>   		}
>   		break;
> -
> -	case RETBLEED_CMD_IBPB:
> -		if (!boot_cpu_has(X86_FEATURE_IBPB)) {
> -			pr_err("WARNING: CPU does not support IBPB.\n");
> -			goto do_cmd_auto;
> -		} else if (IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY)) {
> -			retbleed_mitigation = RETBLEED_MITIGATION_IBPB;
> -		} else {
> -			pr_err("WARNING: kernel not compiled with MITIGATION_IBPB_ENTRY.\n");
> -			goto do_cmd_auto;
> +	case RETBLEED_MITIGATION_IBPB:
> +		if (retbleed_mitigation == RETBLEED_MITIGATION_IBPB) {

This check is redundant, if this leg of the switch is executed it's 
because retbleed_mitigation is already RETBLEED_MITIGATIOB_IBPB.

> +			if (!boot_cpu_has(X86_FEATURE_IBPB)) {
> +				pr_err("WARNING: CPU does not support IBPB.\n");
> +				retbleed_mitigation = RETBLEED_MITIGATION_AUTO;
> +			} else if (!IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY)) {
> +				pr_err("WARNING: kernel not compiled with MITIGATION_IBPB_ENTRY.\n");
> +				retbleed_mitigation = RETBLEED_MITIGATION_AUTO;
> +			}
>   		}
>   		break;
> -

<snip>

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [RFC PATCH 07/34] x86/bugs: Remove md_clear_*_mitigation()
  2024-09-12 19:08 ` [RFC PATCH 07/34] x86/bugs: Remove md_clear_*_mitigation() David Kaplan
@ 2024-10-08  8:40   ` Nikolay Borisov
  0 siblings, 0 replies; 63+ messages in thread
From: Nikolay Borisov @ 2024-10-08  8:40 UTC (permalink / raw)
  To: David Kaplan, Thomas Gleixner, Borislav Petkov, Peter Zijlstra,
	Josh Poimboeuf, Pawan Gupta, Ingo Molnar, Dave Hansen, x86,
	H . Peter Anvin
  Cc: linux-kernel



On 12.09.24 г. 22:08 ч., David Kaplan wrote:
> The functionality in md_clear_update_mitigation() and
> md_clear_select_mitigation() is now integrated into the select/update
> functions for the MDS, TAA, MMIO, and RFDS vulnerabilities.
> 
> Signed-off-by: David Kaplan <david.kaplan@amd.com>


I think the previous 4 patches are better replaced with the series that 
Daniel Sneddon has sent: 
20240924223140.1054918-1-daniel.sneddon@linux.intel.com

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [RFC PATCH 13/34] x86/bugs: Restructure bhi mitigation
  2024-09-12 19:08 ` [RFC PATCH 13/34] x86/bugs: Restructure bhi mitigation David Kaplan
@ 2024-10-08 12:41   ` Nikolay Borisov
  2024-10-08 14:25     ` Kaplan, David
  0 siblings, 1 reply; 63+ messages in thread
From: Nikolay Borisov @ 2024-10-08 12:41 UTC (permalink / raw)
  To: David Kaplan, Thomas Gleixner, Borislav Petkov, Peter Zijlstra,
	Josh Poimboeuf, Pawan Gupta, Ingo Molnar, Dave Hansen, x86,
	H . Peter Anvin
  Cc: linux-kernel



On 12.09.24 г. 22:08 ч., David Kaplan wrote:
> Restructure bhi mitigation to use select/apply functions to create
> consistent vulnerability handling.
> 
> Define new AUTO mitigation for bhi.
> 
> Signed-off-by: David Kaplan <david.kaplan@amd.com>
> ---
>   arch/x86/kernel/cpu/bugs.c | 22 ++++++++++++++++++----
>   1 file changed, 18 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> index eaef5a1cb4a3..da6ca2fc939d 100644
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -82,6 +82,8 @@ static void __init l1d_flush_select_mitigation(void);
>   static void __init srso_select_mitigation(void);
>   static void __init gds_select_mitigation(void);
>   static void __init gds_apply_mitigation(void);
> +static void __init bhi_select_mitigation(void);
> +static void __init bhi_apply_mitigation(void);
>   
>   /* The base value of the SPEC_CTRL MSR without task-specific bits set */
>   u64 x86_spec_ctrl_base;
> @@ -201,6 +203,7 @@ void __init cpu_select_mitigations(void)
>   	 */
>   	srso_select_mitigation();
>   	gds_select_mitigation();
> +	bhi_select_mitigation();
>   
>   	/*
>   	 * After mitigations are selected, some may need to update their
> @@ -222,6 +225,7 @@ void __init cpu_select_mitigations(void)
>   	rfds_apply_mitigation();
>   	srbds_apply_mitigation();
>   	gds_apply_mitigation();
> +	bhi_apply_mitigation();
>   }
>   
>   /*
> @@ -1719,12 +1723,13 @@ static bool __init spec_ctrl_bhi_dis(void)
>   
>   enum bhi_mitigations {
>   	BHI_MITIGATION_OFF,
> +	BHI_MITIGATION_AUTO,
>   	BHI_MITIGATION_ON,
>   	BHI_MITIGATION_VMEXIT_ONLY,
>   };


Since this series refactors all mitigations how about taking ON to mean 
AUTO which would result in overall less states for the various 
mitigations. If we take BHI as an example I don't see what value does 
_AUTO bring here.

^ permalink raw reply	[flat|nested] 63+ messages in thread

* RE: [RFC PATCH 13/34] x86/bugs: Restructure bhi mitigation
  2024-10-08 12:41   ` Nikolay Borisov
@ 2024-10-08 14:25     ` Kaplan, David
  0 siblings, 0 replies; 63+ messages in thread
From: Kaplan, David @ 2024-10-08 14:25 UTC (permalink / raw)
  To: Nikolay Borisov, Thomas Gleixner, Borislav Petkov, Peter Zijlstra,
	Josh Poimboeuf, Pawan Gupta, Ingo Molnar, Dave Hansen,
	x86@kernel.org, H . Peter Anvin
  Cc: linux-kernel@vger.kernel.org

[AMD Official Use Only - AMD Internal Distribution Only]

> -----Original Message-----
> From: Nikolay Borisov <nik.borisov@suse.com>
> Sent: Tuesday, October 8, 2024 7:42 AM
> To: Kaplan, David <David.Kaplan@amd.com>; Thomas Gleixner
> <tglx@linutronix.de>; Borislav Petkov <bp@alien8.de>; Peter Zijlstra
> <peterz@infradead.org>; Josh Poimboeuf <jpoimboe@kernel.org>; Pawan Gupta
> <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar <mingo@redhat.com>; Dave
> Hansen <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> <hpa@zytor.com>
> Cc: linux-kernel@vger.kernel.org
> Subject: Re: [RFC PATCH 13/34] x86/bugs: Restructure bhi mitigation
>
> Caution: This message originated from an External Source. Use proper caution
> when opening attachments, clicking links, or responding.
>
>
> On 12.09.24 г. 22:08 ч., David Kaplan wrote:
> > Restructure bhi mitigation to use select/apply functions to create
> > consistent vulnerability handling.
> >
> > Define new AUTO mitigation for bhi.
> >
> > Signed-off-by: David Kaplan <david.kaplan@amd.com>
> > ---
> >   arch/x86/kernel/cpu/bugs.c | 22 ++++++++++++++++++----
> >   1 file changed, 18 insertions(+), 4 deletions(-)
> >
> > diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> > index eaef5a1cb4a3..da6ca2fc939d 100644
> > --- a/arch/x86/kernel/cpu/bugs.c
> > +++ b/arch/x86/kernel/cpu/bugs.c
> > @@ -82,6 +82,8 @@ static void __init l1d_flush_select_mitigation(void);
> >   static void __init srso_select_mitigation(void);
> >   static void __init gds_select_mitigation(void);
> >   static void __init gds_apply_mitigation(void);
> > +static void __init bhi_select_mitigation(void); static void __init
> > +bhi_apply_mitigation(void);
> >
> >   /* The base value of the SPEC_CTRL MSR without task-specific bits set */
> >   u64 x86_spec_ctrl_base;
> > @@ -201,6 +203,7 @@ void __init cpu_select_mitigations(void)
> >        */
> >       srso_select_mitigation();
> >       gds_select_mitigation();
> > +     bhi_select_mitigation();
> >
> >       /*
> >        * After mitigations are selected, some may need to update their
> > @@ -222,6 +225,7 @@ void __init cpu_select_mitigations(void)
> >       rfds_apply_mitigation();
> >       srbds_apply_mitigation();
> >       gds_apply_mitigation();
> > +     bhi_apply_mitigation();
> >   }
> >
> >   /*
> > @@ -1719,12 +1723,13 @@ static bool __init spec_ctrl_bhi_dis(void)
> >
> >   enum bhi_mitigations {
> >       BHI_MITIGATION_OFF,
> > +     BHI_MITIGATION_AUTO,
> >       BHI_MITIGATION_ON,
> >       BHI_MITIGATION_VMEXIT_ONLY,
> >   };
>
>
> Since this series refactors all mitigations how about taking ON to mean AUTO
> which would result in overall less states for the various mitigations. If we take BHI
> as an example I don't see what value does _AUTO bring here.

In this (and the other bugs), AUTO means that no bug-specific command line option was provided.  In this way we can differentiate between no option provided (in which case attack vector controls will decide if mitigation is needed) or "bhi=on" which will force the bhi mitigation on even if the attack vector controls would otherwise leave it disabled.

--David Kaplan

^ permalink raw reply	[flat|nested] 63+ messages in thread

* RE: [RFC PATCH 11/34] x86/bugs: Restructure retbleed mitigation
  2024-10-08  8:32   ` Nikolay Borisov
@ 2024-10-08 14:28     ` Kaplan, David
  0 siblings, 0 replies; 63+ messages in thread
From: Kaplan, David @ 2024-10-08 14:28 UTC (permalink / raw)
  To: Nikolay Borisov, Thomas Gleixner, Borislav Petkov, Peter Zijlstra,
	Josh Poimboeuf, Pawan Gupta, Ingo Molnar, Dave Hansen,
	x86@kernel.org, H . Peter Anvin
  Cc: linux-kernel@vger.kernel.org

[AMD Official Use Only - AMD Internal Distribution Only]

> -----Original Message-----
> From: Nikolay Borisov <nik.borisov@suse.com>
> Sent: Tuesday, October 8, 2024 3:33 AM
> To: Kaplan, David <David.Kaplan@amd.com>; Thomas Gleixner
> <tglx@linutronix.de>; Borislav Petkov <bp@alien8.de>; Peter Zijlstra
> <peterz@infradead.org>; Josh Poimboeuf <jpoimboe@kernel.org>; Pawan Gupta
> <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar <mingo@redhat.com>; Dave
> Hansen <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> <hpa@zytor.com>
> Cc: linux-kernel@vger.kernel.org
> Subject: Re: [RFC PATCH 11/34] x86/bugs: Restructure retbleed mitigation
>
> Caution: This message originated from an External Source. Use proper caution
> when opening attachments, clicking links, or responding.
>
>
> On 12.09.24 г. 22:08 ч., David Kaplan wrote:
> > Restructure retbleed mitigation to use select/update/apply functions
> > to create consistent vulnerability handling.  The
> > retbleed_update_mitigation() simplifies the dependency between spectre_v2 and
> retbleed.
> >
> > The command line options now directly select a preferred mitigation
> > which simplifies the logic.
> >
> > Signed-off-by: David Kaplan <david.kaplan@amd.com>
> > ---
> >   arch/x86/kernel/cpu/bugs.c | 168 ++++++++++++++++---------------------
> >   1 file changed, 73 insertions(+), 95 deletions(-)
> >
>
> <snip>
>
> >   static void __init retbleed_select_mitigation(void)
> >   {
> > -     bool mitigate_smt = false;
> > -
> >       if (!boot_cpu_has_bug(X86_BUG_RETBLEED) || cpu_mitigations_off())
> >               return;
> >
> > -     switch (retbleed_cmd) {
> > -     case RETBLEED_CMD_OFF:
> > -             return;
> > -
> > -     case RETBLEED_CMD_UNRET:
> > -             if (IS_ENABLED(CONFIG_MITIGATION_UNRET_ENTRY)) {
> > -                     retbleed_mitigation = RETBLEED_MITIGATION_UNRET;
> > -             } else {
> > +     switch (retbleed_mitigation) {
> > +     case RETBLEED_MITIGATION_UNRET:
> > +             if (!IS_ENABLED(CONFIG_MITIGATION_UNRET_ENTRY)) {
> > +                     retbleed_mitigation = RETBLEED_MITIGATION_AUTO;
> >                       pr_err("WARNING: kernel not compiled with
> MITIGATION_UNRET_ENTRY.\n");
> > -                     goto do_cmd_auto;
> >               }
> >               break;
> > -
> > -     case RETBLEED_CMD_IBPB:
> > -             if (!boot_cpu_has(X86_FEATURE_IBPB)) {
> > -                     pr_err("WARNING: CPU does not support IBPB.\n");
> > -                     goto do_cmd_auto;
> > -             } else if (IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY)) {
> > -                     retbleed_mitigation = RETBLEED_MITIGATION_IBPB;
> > -             } else {
> > -                     pr_err("WARNING: kernel not compiled with
> MITIGATION_IBPB_ENTRY.\n");
> > -                     goto do_cmd_auto;
> > +     case RETBLEED_MITIGATION_IBPB:
> > +             if (retbleed_mitigation == RETBLEED_MITIGATION_IBPB) {
>
> This check is redundant, if this leg of the switch is executed it's because
> retbleed_mitigation is already RETBLEED_MITIGATIOB_IBPB.

Yes, thanks for catching that.  Will fix.

--David Kaplan

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [RFC PATCH 15/34] x86/bugs: Restructure ssb mitigation
  2024-09-12 19:08 ` [RFC PATCH 15/34] x86/bugs: Restructure ssb mitigation David Kaplan
@ 2024-10-08 15:21   ` Nikolay Borisov
  0 siblings, 0 replies; 63+ messages in thread
From: Nikolay Borisov @ 2024-10-08 15:21 UTC (permalink / raw)
  To: David Kaplan, Thomas Gleixner, Borislav Petkov, Peter Zijlstra,
	Josh Poimboeuf, Pawan Gupta, Ingo Molnar, Dave Hansen, x86,
	H . Peter Anvin
  Cc: linux-kernel



On 12.09.24 г. 22:08 ч., David Kaplan wrote:
> Restructure ssb to use select/apply functions to create consistent
> vulnerability handling.
> 
> Signed-off-by: David Kaplan <david.kaplan@amd.com>
> ---
>   arch/x86/kernel/cpu/bugs.c | 26 ++++++++++++++++----------
>   1 file changed, 16 insertions(+), 10 deletions(-)
> 
> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> index 32ebe9e934fe..c996c1521851 100644
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -65,6 +65,7 @@ static void __init spectre_v2_user_select_mitigation(void);
>   static void __init spectre_v2_user_update_mitigation(void);
>   static void __init spectre_v2_user_apply_mitigation(void);
>   static void __init ssb_select_mitigation(void);
> +static void __init ssb_apply_mitigation(void);
>   static void __init l1tf_select_mitigation(void);
>   static void __init mds_select_mitigation(void);
>   static void __init mds_update_mitigation(void);
> @@ -223,6 +224,7 @@ void __init cpu_select_mitigations(void)
>   	spectre_v2_apply_mitigation();
>   	retbleed_apply_mitigation();
>   	spectre_v2_user_apply_mitigation();
> +	ssb_apply_mitigation();
>   	mds_apply_mitigation();
>   	taa_apply_mitigation();
>   	mmio_apply_mitigation();
> @@ -2211,13 +2213,26 @@ static enum ssb_mitigation __init __ssb_select_mitigation(void)
>   		break;
>   	}
>   
> +	return mode;
> +}
> +
> +static void ssb_select_mitigation(void)
> +{
> +	ssb_mode = __ssb_select_mitigation();
> +
> +	if (boot_cpu_has_bug(X86_BUG_SPEC_STORE_BYPASS))
> +		pr_info("%s\n", ssb_strings[ssb_mode]);
> +}

nit: While at it simply open code __ssb_select_mitigation() here.

<snip>

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [RFC PATCH 01/34] x86/bugs: Relocate mds/taa/mmio/rfds defines
  2024-09-12 19:08 ` [RFC PATCH 01/34] x86/bugs: Relocate mds/taa/mmio/rfds defines David Kaplan
@ 2024-10-24 13:07   ` Borislav Petkov
  0 siblings, 0 replies; 63+ messages in thread
From: Borislav Petkov @ 2024-10-24 13:07 UTC (permalink / raw)
  To: David Kaplan
  Cc: Thomas Gleixner, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
	Ingo Molnar, Dave Hansen, x86, H . Peter Anvin, linux-kernel

On Thu, Sep 12, 2024 at 02:08:24PM -0500, David Kaplan wrote:
> Move the mds, taa, mmio, and rfds mitigation enums earlier in the file
> to prepare for restructuring of these mitigations as they are all
> inter-related.

Add here

"No functional changes."

to denote that the patch is solely doing code movement.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 63+ messages in thread

end of thread, other threads:[~2024-10-24 13:07 UTC | newest]

Thread overview: 63+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2024-09-12 19:08 [RFC PATCH 00/34] x86/bugs: Attack vector controls David Kaplan
2024-09-12 19:08 ` [RFC PATCH 01/34] x86/bugs: Relocate mds/taa/mmio/rfds defines David Kaplan
2024-10-24 13:07   ` Borislav Petkov
2024-09-12 19:08 ` [RFC PATCH 02/34] x86/bugs: Add AUTO mitigations for mds/taa/mmio/rfds David Kaplan
2024-09-12 19:08 ` [RFC PATCH 03/34] x86/bugs: Restructure mds mitigation David Kaplan
2024-09-12 19:08 ` [RFC PATCH 04/34] x86/bugs: Restructure taa mitigation David Kaplan
2024-09-12 19:08 ` [RFC PATCH 05/34] x86/bugs: Restructure mmio mitigation David Kaplan
2024-09-12 19:08 ` [RFC PATCH 06/34] x86/bugs: Restructure rfds mitigation David Kaplan
2024-09-12 19:08 ` [RFC PATCH 07/34] x86/bugs: Remove md_clear_*_mitigation() David Kaplan
2024-10-08  8:40   ` Nikolay Borisov
2024-09-12 19:08 ` [RFC PATCH 08/34] x86/bugs: Restructure srbds mitigation David Kaplan
2024-09-12 19:08 ` [RFC PATCH 09/34] x86/bugs: Restructure gds mitigation David Kaplan
2024-09-12 19:08 ` [RFC PATCH 10/34] x86/bugs: Restructure spectre_v1 mitigation David Kaplan
2024-09-12 19:08 ` [RFC PATCH 11/34] x86/bugs: Restructure retbleed mitigation David Kaplan
2024-10-08  8:32   ` Nikolay Borisov
2024-10-08 14:28     ` Kaplan, David
2024-09-12 19:08 ` [RFC PATCH 12/34] x86/bugs: Restructure spectre_v2_user mitigation David Kaplan
2024-09-12 19:08 ` [RFC PATCH 13/34] x86/bugs: Restructure bhi mitigation David Kaplan
2024-10-08 12:41   ` Nikolay Borisov
2024-10-08 14:25     ` Kaplan, David
2024-09-12 19:08 ` [RFC PATCH 14/34] x86/bugs: Restructure spectre_v2 mitigation David Kaplan
2024-09-12 19:08 ` [RFC PATCH 15/34] x86/bugs: Restructure ssb mitigation David Kaplan
2024-10-08 15:21   ` Nikolay Borisov
2024-09-12 19:08 ` [RFC PATCH 16/34] x86/bugs: Restructure l1tf mitigation David Kaplan
2024-09-12 19:08 ` [RFC PATCH 17/34] x86/bugs: Restructure srso mitigation David Kaplan
2024-09-12 19:08 ` [RFC PATCH 18/34] Documentation/x86: Document the new attack vector controls David Kaplan
2024-10-01  0:43   ` Manwaring, Derek
2024-10-01  1:53     ` Kaplan, David
2024-10-01 22:21       ` Manwaring, Derek
2024-09-12 19:08 ` [RFC PATCH 19/34] x86/bugs: Define attack vectors David Kaplan
2024-09-12 19:08 ` [RFC PATCH 20/34] x86/bugs: Determine relevant vulnerabilities based on attack vector controls David Kaplan
2024-09-12 19:08 ` [RFC PATCH 21/34] x86/bugs: Add attack vector controls for mds David Kaplan
2024-10-01  0:50   ` Manwaring, Derek
2024-10-01  1:58     ` Kaplan, David
2024-10-01 22:37       ` Manwaring, Derek
2024-10-02 14:28         ` Kaplan, David
2024-10-02 20:11           ` Manwaring, Derek
2024-10-02 20:26             ` Kaplan, David
2024-10-02 15:50         ` Pawan Gupta
2024-10-02 19:40           ` Manwaring, Derek
2024-09-12 19:08 ` [RFC PATCH 22/34] x86/bugs: Add attack vector controls for taa David Kaplan
2024-09-12 19:08 ` [RFC PATCH 23/34] x86/bugs: Add attack vector controls for mmio David Kaplan
2024-09-12 19:08 ` [RFC PATCH 24/34] x86/bugs: Add attack vector controls for rfds David Kaplan
2024-09-12 19:08 ` [RFC PATCH 25/34] x86/bugs: Add attack vector controls for srbds David Kaplan
2024-09-12 19:08 ` [RFC PATCH 26/34] x86/bugs: Add attack vector controls for gds David Kaplan
2024-09-12 19:08 ` [RFC PATCH 27/34] x86/bugs: Add attack vector controls for spectre_v1 David Kaplan
2024-09-12 19:37   ` Dave Hansen
2024-09-12 19:57     ` Kaplan, David
2024-09-12 20:16       ` Dave Hansen
2024-09-12 21:15         ` Kaplan, David
2024-10-01  0:39           ` Manwaring, Derek
2024-10-01  1:46             ` Kaplan, David
2024-10-01 22:18               ` Manwaring, Derek
2024-09-13 14:20       ` Borislav Petkov
2024-09-12 19:08 ` [RFC PATCH 28/34] x86/bugs: Add attack vector controls for retbleed David Kaplan
2024-09-12 19:08 ` [RFC PATCH 29/34] x86/bugs: Add attack vector controls for spectre_v2_user David Kaplan
2024-09-12 19:08 ` [RFC PATCH 30/34] x86/bugs: Add attack vector controls for bhi David Kaplan
2024-09-12 19:08 ` [RFC PATCH 31/34] x86/bugs: Add attack vector controls for spectre_v2 David Kaplan
2024-09-12 19:08 ` [RFC PATCH 32/34] x86/bugs: Add attack vector controls for l1tf David Kaplan
2024-09-12 19:08 ` [RFC PATCH 33/34] x86/bugs: Add attack vector controls for srso David Kaplan
2024-09-12 19:08 ` [RFC PATCH 34/34] x86/pti: Add attack vector controls for pti David Kaplan
2024-09-17 17:04 ` [RFC PATCH 00/34] x86/bugs: Attack vector controls Pawan Gupta
2024-09-18  6:29   ` Kaplan, David

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox