public inbox for linux-kernel@vger.kernel.org
* [PATCH v2 00/23] SRSO fixes/cleanups
@ 2023-08-25  7:01 Josh Poimboeuf
  2023-08-25  7:01 ` [PATCH 01/23] x86/srso: Fix srso_show_state() side effect Josh Poimboeuf
                   ` (24 more replies)
  0 siblings, 25 replies; 71+ messages in thread
From: Josh Poimboeuf @ 2023-08-25  7:01 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

v2:
- reorder everything: fixes/functionality before cleanups
- split up KVM patch, add Sean's changes
- add patch to support live migration
- remove "default:" case for enums throughout bugs.c
- various minor tweaks based on v1 discussions with Boris
- add Reviewed-by's

Josh Poimboeuf (23):
  x86/srso: Fix srso_show_state() side effect
  x86/srso: Set CPUID feature bits independently of bug or mitigation
    status
  x86/srso: Don't probe microcode in a guest
  KVM: x86: Add IBPB_BRTYPE support
  KVM: x86: Add SBPB support
  x86/srso: Fix SBPB enablement for spec_rstack_overflow=off
  x86/srso: Fix SBPB enablement for (possible) future fixed HW
  x86/srso: Print actual mitigation if requested mitigation isn't
    possible
  x86/srso: Print mitigation for retbleed IBPB case
  x86/srso: Fix vulnerability reporting for missing microcode
  x86/srso: Fix unret validation dependencies
  x86/alternatives: Remove faulty optimization
  x86/srso: Improve i-cache locality for alias mitigation
  x86/srso: Unexport untraining functions
  x86/srso: Remove 'pred_cmd' label
  x86/bugs: Remove default case for fully switched enums
  x86/srso: Move retbleed IBPB check into existing 'has_microcode' code
    block
  x86/srso: Remove redundant X86_FEATURE_ENTRY_IBPB check
  x86/srso: Disentangle rethunk-dependent options
  x86/rethunk: Use SYM_CODE_START[_LOCAL]_NOALIGN macros
  x86/retpoline: Remove .text..__x86.return_thunk section
  x86/nospec: Refactor UNTRAIN_RET[_*]
  x86/calldepth: Rename __x86_return_skl() to call_depth_return_thunk()

 Documentation/admin-guide/hw-vuln/srso.rst |  22 ++-
 arch/x86/include/asm/nospec-branch.h       |  69 ++++-----
 arch/x86/include/asm/processor.h           |   2 -
 arch/x86/kernel/alternative.c              |   8 -
 arch/x86/kernel/cpu/amd.c                  |  28 ++--
 arch/x86/kernel/cpu/bugs.c                 | 104 ++++++-------
 arch/x86/kernel/vmlinux.lds.S              |  10 +-
 arch/x86/kvm/cpuid.c                       |   5 +-
 arch/x86/kvm/cpuid.h                       |   3 +-
 arch/x86/kvm/x86.c                         |  29 +++-
 arch/x86/lib/retpoline.S                   | 171 +++++++++++----------
 include/linux/objtool.h                    |   3 +-
 scripts/Makefile.vmlinux_o                 |   3 +-
 13 files changed, 230 insertions(+), 227 deletions(-)

-- 
2.41.0


^ permalink raw reply	[flat|nested] 71+ messages in thread

* [PATCH 01/23] x86/srso: Fix srso_show_state() side effect
  2023-08-25  7:01 [PATCH v2 00/23] SRSO fixes/cleanups Josh Poimboeuf
@ 2023-08-25  7:01 ` Josh Poimboeuf
  2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
  2023-08-25  7:01 ` [PATCH 02/23] x86/srso: Set CPUID feature bits independently of bug or mitigation status Josh Poimboeuf
                   ` (23 subsequent siblings)
  24 siblings, 1 reply; 71+ messages in thread
From: Josh Poimboeuf @ 2023-08-25  7:01 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

Reading the 'spec_rstack_overflow' sysfs file can trigger an unnecessary
MSR write, and possibly even a (handled) exception if the microcode
hasn't been updated.

Avoid all that by just checking X86_FEATURE_IBPB_BRTYPE instead, which
gets set by srso_select_mitigation() if the updated microcode exists.
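As a user-space sketch (illustrative names only, not the kernel's actual helpers), the fixed srso_show_state() amounts to a plain cached-bit test feeding a format string, with no MSR access on the sysfs read path:

```c
/* Stand-alone sketch of the fixed srso_show_state() logic.  The
 * has_ibpb_brtype flag stands in for boot_cpu_has(X86_FEATURE_IBPB_BRTYPE),
 * which is just a bit test -- no MSR write, hence no side effect. */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static int show_state(char *buf, size_t len, const char *mitigation,
		      bool has_ibpb_brtype)
{
	return snprintf(buf, len, "%s%s\n", mitigation,
			has_ibpb_brtype ? "" : ", no microcode");
}
```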

Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
Reviewed-by: Nikolay Borisov <nik.borisov@suse.com>
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 arch/x86/kernel/cpu/bugs.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index f081d26616ac..bdd3e296f72b 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2717,7 +2717,7 @@ static ssize_t srso_show_state(char *buf)
 
 	return sysfs_emit(buf, "%s%s\n",
 			  srso_strings[srso_mitigation],
-			  (cpu_has_ibpb_brtype_microcode() ? "" : ", no microcode"));
+			  boot_cpu_has(X86_FEATURE_IBPB_BRTYPE) ? "" : ", no microcode");
 }
 
 static ssize_t gds_show_state(char *buf)
-- 
2.41.0



* [PATCH 02/23] x86/srso: Set CPUID feature bits independently of bug or mitigation status
  2023-08-25  7:01 [PATCH v2 00/23] SRSO fixes/cleanups Josh Poimboeuf
  2023-08-25  7:01 ` [PATCH 01/23] x86/srso: Fix srso_show_state() side effect Josh Poimboeuf
@ 2023-08-25  7:01 ` Josh Poimboeuf
  2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
  2023-08-25  7:01 ` [PATCH 03/23] x86/srso: Don't probe microcode in a guest Josh Poimboeuf
                   ` (22 subsequent siblings)
  24 siblings, 1 reply; 71+ messages in thread
From: Josh Poimboeuf @ 2023-08-25  7:01 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

Booting with mitigations=off incorrectly prevents the
X86_FEATURE_{IBPB_BRTYPE,SBPB} CPUID bits from getting set.

Also, future CPUs without X86_BUG_SRSO might still have IBPB with branch
type prediction flushing, in which case SBPB should be used instead of
IBPB.  The current code doesn't allow for that.

Also, cpu_has_ibpb_brtype_microcode() has some surprising side effects,
and the setting of these feature bits really doesn't belong in the
mitigation code anyway.  Move it earlier, into early_init_amd().
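In stand-alone C, the family-based detection this patch moves into early_init_amd() can be sketched as below.  msr_probe_ok stands in for a successful wrmsrl_safe(MSR_IA32_PRED_CMD, PRED_CMD_SBPB); the helper and flag names are illustrative, not kernel APIs:

```c
/* Sketch of the SRSO feature-bit detection, independent of bug or
 * mitigation status.  Returns the capabilities to force-set. */
#include <assert.h>
#include <stdbool.h>

#define CAP_IBPB_BRTYPE	(1u << 0)
#define CAP_SBPB	(1u << 1)

static unsigned int srso_caps(unsigned int family, bool has_amd_ibpb,
			      bool msr_probe_ok)
{
	/* Zen1/2 (family 0x17): IBPB already flushes branch type predictions */
	if (family == 0x17 && has_amd_ibpb)
		return CAP_IBPB_BRTYPE;
	/* Zen3/4+ (family >= 0x19): poke the MSR bit to check its presence */
	if (family >= 0x19 && msr_probe_ok)
		return CAP_IBPB_BRTYPE | CAP_SBPB;
	return 0;
}
```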

Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
Reviewed-by: Nikolay Borisov <nik.borisov@suse.com>
Reviewed-by: Borislav Petkov (AMD) <bp@alien8.de>
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 arch/x86/include/asm/processor.h |  2 --
 arch/x86/kernel/cpu/amd.c        | 28 +++++++++-------------------
 arch/x86/kernel/cpu/bugs.c       | 13 +------------
 3 files changed, 10 insertions(+), 33 deletions(-)

diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index fd750247ca89..9e26294e415c 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -676,12 +676,10 @@ extern u16 get_llc_id(unsigned int cpu);
 #ifdef CONFIG_CPU_SUP_AMD
 extern u32 amd_get_nodes_per_socket(void);
 extern u32 amd_get_highest_perf(void);
-extern bool cpu_has_ibpb_brtype_microcode(void);
 extern void amd_clear_divider(void);
 #else
 static inline u32 amd_get_nodes_per_socket(void)	{ return 0; }
 static inline u32 amd_get_highest_perf(void)		{ return 0; }
-static inline bool cpu_has_ibpb_brtype_microcode(void)	{ return false; }
 static inline void amd_clear_divider(void)		{ }
 #endif
 
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index 7eca6a8abbb1..b08af929135d 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -766,6 +766,15 @@ static void early_init_amd(struct cpuinfo_x86 *c)
 
 	if (cpu_has(c, X86_FEATURE_TOPOEXT))
 		smp_num_siblings = ((cpuid_ebx(0x8000001e) >> 8) & 0xff) + 1;
+
+	if (!cpu_has(c, X86_FEATURE_IBPB_BRTYPE)) {
+		if (c->x86 == 0x17 && boot_cpu_has(X86_FEATURE_AMD_IBPB))
+			setup_force_cpu_cap(X86_FEATURE_IBPB_BRTYPE);
+		else if (c->x86 >= 0x19 && !wrmsrl_safe(MSR_IA32_PRED_CMD, PRED_CMD_SBPB)) {
+			setup_force_cpu_cap(X86_FEATURE_IBPB_BRTYPE);
+			setup_force_cpu_cap(X86_FEATURE_SBPB);
+		}
+	}
 }
 
 static void init_amd_k8(struct cpuinfo_x86 *c)
@@ -1301,25 +1310,6 @@ void amd_check_microcode(void)
 	on_each_cpu(zenbleed_check_cpu, NULL, 1);
 }
 
-bool cpu_has_ibpb_brtype_microcode(void)
-{
-	switch (boot_cpu_data.x86) {
-	/* Zen1/2 IBPB flushes branch type predictions too. */
-	case 0x17:
-		return boot_cpu_has(X86_FEATURE_AMD_IBPB);
-	case 0x19:
-		/* Poke the MSR bit on Zen3/4 to check its presence. */
-		if (!wrmsrl_safe(MSR_IA32_PRED_CMD, PRED_CMD_SBPB)) {
-			setup_force_cpu_cap(X86_FEATURE_SBPB);
-			return true;
-		} else {
-			return false;
-		}
-	default:
-		return false;
-	}
-}
-
 /*
  * Issue a DIV 0/1 insn to clear any division data from previous DIV
  * operations.
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index bdd3e296f72b..b0ae985aa6a4 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2404,26 +2404,15 @@ early_param("spec_rstack_overflow", srso_parse_cmdline);
 
 static void __init srso_select_mitigation(void)
 {
-	bool has_microcode;
+	bool has_microcode = boot_cpu_has(X86_FEATURE_IBPB_BRTYPE);
 
 	if (!boot_cpu_has_bug(X86_BUG_SRSO) || cpu_mitigations_off())
 		goto pred_cmd;
 
-	/*
-	 * The first check is for the kernel running as a guest in order
-	 * for guests to verify whether IBPB is a viable mitigation.
-	 */
-	has_microcode = boot_cpu_has(X86_FEATURE_IBPB_BRTYPE) || cpu_has_ibpb_brtype_microcode();
 	if (!has_microcode) {
 		pr_warn("IBPB-extending microcode not applied!\n");
 		pr_warn(SRSO_NOTICE);
 	} else {
-		/*
-		 * Enable the synthetic (even if in a real CPUID leaf)
-		 * flags for guests.
-		 */
-		setup_force_cpu_cap(X86_FEATURE_IBPB_BRTYPE);
-
 		/*
 		 * Zen1/2 with SMT off aren't vulnerable after the right
 		 * IBPB microcode has been applied.
-- 
2.41.0



* [PATCH 03/23] x86/srso: Don't probe microcode in a guest
  2023-08-25  7:01 [PATCH v2 00/23] SRSO fixes/cleanups Josh Poimboeuf
  2023-08-25  7:01 ` [PATCH 01/23] x86/srso: Fix srso_show_state() side effect Josh Poimboeuf
  2023-08-25  7:01 ` [PATCH 02/23] x86/srso: Set CPUID feature bits independently of bug or mitigation status Josh Poimboeuf
@ 2023-08-25  7:01 ` Josh Poimboeuf
  2023-08-25  7:52   ` Andrew Cooper
  2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
  2023-08-25  7:01 ` [PATCH 04/23] KVM: x86: Add IBPB_BRTYPE support Josh Poimboeuf
                   ` (21 subsequent siblings)
  24 siblings, 2 replies; 71+ messages in thread
From: Josh Poimboeuf @ 2023-08-25  7:01 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

To support live migration, the hypervisor sets the "lowest common
denominator" of features.  Probing the microcode isn't allowed because
any detected features might go away after a migration.

As Andy Cooper states:

  "Linux must not probe microcode when virtualised.  What it may see
  instantaneously on boot (owing to MSR_PRED_CMD being fully passed
  through) is not accurate for the lifetime of the VM."

Rely on the hypervisor to set the needed IBPB_BRTYPE and SBPB bits.
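The guard added by the diff reduces to a small predicate (a sketch with illustrative names, not kernel helpers):

```c
/* Probe the microcode only on bare metal, and only when the feature bit
 * wasn't already set (e.g. via CPUID, or by the hypervisor for guests). */
#include <assert.h>
#include <stdbool.h>

static bool may_probe_microcode(bool is_guest, bool has_ibpb_brtype)
{
	return !is_guest && !has_ibpb_brtype;
}
```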

Fixes: 1b5277c0ea0b ("x86/srso: Add SRSO_NO support")
Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 arch/x86/kernel/cpu/amd.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index b08af929135d..28e77c5d6484 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -767,7 +767,7 @@ static void early_init_amd(struct cpuinfo_x86 *c)
 	if (cpu_has(c, X86_FEATURE_TOPOEXT))
 		smp_num_siblings = ((cpuid_ebx(0x8000001e) >> 8) & 0xff) + 1;
 
-	if (!cpu_has(c, X86_FEATURE_IBPB_BRTYPE)) {
+	if (!cpu_has(c, X86_FEATURE_HYPERVISOR) && !cpu_has(c, X86_FEATURE_IBPB_BRTYPE)) {
 		if (c->x86 == 0x17 && boot_cpu_has(X86_FEATURE_AMD_IBPB))
 			setup_force_cpu_cap(X86_FEATURE_IBPB_BRTYPE);
 		else if (c->x86 >= 0x19 && !wrmsrl_safe(MSR_IA32_PRED_CMD, PRED_CMD_SBPB)) {
-- 
2.41.0



* [PATCH 04/23] KVM: x86: Add IBPB_BRTYPE support
  2023-08-25  7:01 [PATCH v2 00/23] SRSO fixes/cleanups Josh Poimboeuf
                   ` (2 preceding siblings ...)
  2023-08-25  7:01 ` [PATCH 03/23] x86/srso: Don't probe microcode in a guest Josh Poimboeuf
@ 2023-08-25  7:01 ` Josh Poimboeuf
  2023-08-25 18:15   ` Sean Christopherson
  2023-08-25  7:01 ` [PATCH 05/23] KVM: x86: Add SBPB support Josh Poimboeuf
                   ` (20 subsequent siblings)
  24 siblings, 1 reply; 71+ messages in thread
From: Josh Poimboeuf @ 2023-08-25  7:01 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

Add support for the IBPB_BRTYPE CPUID flag, which indicates that IBPB
includes branch type prediction flushing.

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 arch/x86/kvm/cpuid.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index d3432687c9e6..c65f3ff1c79d 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -729,8 +729,8 @@ void kvm_set_cpu_caps(void)
 		F(NULL_SEL_CLR_BASE) | F(AUTOIBRS) | 0 /* PrefetchCtlMsr */
 	);
 
-	if (cpu_feature_enabled(X86_FEATURE_SRSO_NO))
-		kvm_cpu_cap_set(X86_FEATURE_SRSO_NO);
+	kvm_cpu_cap_check_and_set(X86_FEATURE_IBPB_BRTYPE);
+	kvm_cpu_cap_check_and_set(X86_FEATURE_SRSO_NO);
 
 	kvm_cpu_cap_init_kvm_defined(CPUID_8000_0022_EAX,
 		F(PERFMON_V2)
-- 
2.41.0



* [PATCH 05/23] KVM: x86: Add SBPB support
  2023-08-25  7:01 [PATCH v2 00/23] SRSO fixes/cleanups Josh Poimboeuf
                   ` (3 preceding siblings ...)
  2023-08-25  7:01 ` [PATCH 04/23] KVM: x86: Add IBPB_BRTYPE support Josh Poimboeuf
@ 2023-08-25  7:01 ` Josh Poimboeuf
  2023-08-25 18:20   ` Sean Christopherson
  2023-08-25  7:01 ` [PATCH 06/23] x86/srso: Fix SBPB enablement for spec_rstack_overflow=off Josh Poimboeuf
                   ` (19 subsequent siblings)
  24 siblings, 1 reply; 71+ messages in thread
From: Josh Poimboeuf @ 2023-08-25  7:01 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

Add support for the AMD Selective Branch Predictor Barrier (SBPB) by
advertising the CPUID bit and handling PRED_CMD writes accordingly.
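The PRED_CMD reserved-bit handling in the diff can be sketched stand-alone as below: guest-visible CPUID bits gate which command bits a guest may write, and host support gates what reaches the MSR at all (struct and helper names are illustrative; the bit positions match the kernel's PRED_CMD definitions):

```c
/* Sketch of the reserved-bit calculation for guest PRED_CMD writes. */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PRED_CMD_IBPB	(1ull << 0)
#define PRED_CMD_SBPB	(1ull << 7)

struct guest_caps { bool spec_ctrl, amd_ibpb, sbpb; };

static uint64_t pred_cmd_reserved(bool host_initiated, struct guest_caps g,
				  bool host_ibpb, bool host_sbpb)
{
	uint64_t reserved = ~(PRED_CMD_IBPB | PRED_CMD_SBPB);

	if (!host_initiated) {
		if (!g.spec_ctrl && !g.amd_ibpb)
			reserved |= PRED_CMD_IBPB;
		if (!g.sbpb)
			reserved |= PRED_CMD_SBPB;
	}
	if (!host_ibpb)
		reserved |= PRED_CMD_IBPB;
	if (!host_sbpb)
		reserved |= PRED_CMD_SBPB;
	return reserved;
}
```

A write of `data` is rejected when `data & reserved` is nonzero, mirroring the `return 1` in the diff.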

Co-developed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 arch/x86/kvm/cpuid.c |  1 +
 arch/x86/kvm/cpuid.h |  3 ++-
 arch/x86/kvm/x86.c   | 29 ++++++++++++++++++++++++-----
 3 files changed, 27 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index c65f3ff1c79d..3a9879605513 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -729,6 +729,7 @@ void kvm_set_cpu_caps(void)
 		F(NULL_SEL_CLR_BASE) | F(AUTOIBRS) | 0 /* PrefetchCtlMsr */
 	);
 
+	kvm_cpu_cap_check_and_set(X86_FEATURE_SBPB);
 	kvm_cpu_cap_check_and_set(X86_FEATURE_IBPB_BRTYPE);
 	kvm_cpu_cap_check_and_set(X86_FEATURE_SRSO_NO);
 
diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
index b1658c0de847..e4db844a58fe 100644
--- a/arch/x86/kvm/cpuid.h
+++ b/arch/x86/kvm/cpuid.h
@@ -174,7 +174,8 @@ static inline bool guest_has_spec_ctrl_msr(struct kvm_vcpu *vcpu)
 static inline bool guest_has_pred_cmd_msr(struct kvm_vcpu *vcpu)
 {
 	return (guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL) ||
-		guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBPB));
+		guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBPB) ||
+		guest_cpuid_has(vcpu, X86_FEATURE_SBPB));
 }
 
 static inline bool supports_cpuid_fault(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index c381770bcbf1..0af7d4484435 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3672,17 +3672,36 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		vcpu->arch.perf_capabilities = data;
 		kvm_pmu_refresh(vcpu);
 		break;
-	case MSR_IA32_PRED_CMD:
-		if (!msr_info->host_initiated && !guest_has_pred_cmd_msr(vcpu))
+	case MSR_IA32_PRED_CMD: {
+		u64 reserved_bits = ~(PRED_CMD_IBPB | PRED_CMD_SBPB);
+
+		if (!msr_info->host_initiated) {
+			if ((!guest_has_pred_cmd_msr(vcpu)))
+				return 1;
+
+			if (!guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL) &&
+			    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBPB))
+				reserved_bits |= PRED_CMD_IBPB;
+
+			if (!guest_cpuid_has(vcpu, X86_FEATURE_SBPB))
+				reserved_bits |= PRED_CMD_SBPB;
+		}
+
+		if (!boot_cpu_has(X86_FEATURE_IBPB))
+			reserved_bits |= PRED_CMD_IBPB;
+
+		if (!boot_cpu_has(X86_FEATURE_SBPB))
+			reserved_bits |= PRED_CMD_SBPB;
+
+		if (data & reserved_bits)
 			return 1;
 
-		if (!boot_cpu_has(X86_FEATURE_IBPB) || (data & ~PRED_CMD_IBPB))
-			return 1;
 		if (!data)
 			break;
 
-		wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
+		wrmsrl(MSR_IA32_PRED_CMD, data);
 		break;
+	}
 	case MSR_IA32_FLUSH_CMD:
 		if (!msr_info->host_initiated &&
 		    !guest_cpuid_has(vcpu, X86_FEATURE_FLUSH_L1D))
-- 
2.41.0



* [PATCH 06/23] x86/srso: Fix SBPB enablement for spec_rstack_overflow=off
  2023-08-25  7:01 [PATCH v2 00/23] SRSO fixes/cleanups Josh Poimboeuf
                   ` (4 preceding siblings ...)
  2023-08-25  7:01 ` [PATCH 05/23] KVM: x86: Add SBPB support Josh Poimboeuf
@ 2023-08-25  7:01 ` Josh Poimboeuf
  2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
  2023-08-25  7:01 ` [PATCH 07/23] x86/srso: Fix SBPB enablement for (possible) future fixed HW Josh Poimboeuf
                   ` (18 subsequent siblings)
  24 siblings, 1 reply; 71+ messages in thread
From: Josh Poimboeuf @ 2023-08-25  7:01 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

If the user has requested no SRSO mitigation, other mitigations can use
the lighter-weight SBPB instead of IBPB.

Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 arch/x86/kernel/cpu/bugs.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index b0ae985aa6a4..10499bcd4e39 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2433,7 +2433,7 @@ static void __init srso_select_mitigation(void)
 
 	switch (srso_cmd) {
 	case SRSO_CMD_OFF:
-		return;
+		goto pred_cmd;
 
 	case SRSO_CMD_MICROCODE:
 		if (has_microcode) {
-- 
2.41.0



* [PATCH 07/23] x86/srso: Fix SBPB enablement for (possible) future fixed HW
  2023-08-25  7:01 [PATCH v2 00/23] SRSO fixes/cleanups Josh Poimboeuf
                   ` (5 preceding siblings ...)
  2023-08-25  7:01 ` [PATCH 06/23] x86/srso: Fix SBPB enablement for spec_rstack_overflow=off Josh Poimboeuf
@ 2023-08-25  7:01 ` Josh Poimboeuf
  2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
  2023-08-25  7:01 ` [PATCH 08/23] x86/srso: Print actual mitigation if requested mitigation isn't possible Josh Poimboeuf
                   ` (17 subsequent siblings)
  24 siblings, 1 reply; 71+ messages in thread
From: Josh Poimboeuf @ 2023-08-25  7:01 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

Make the SBPB check more robust against the (possible) case where future
HW has SRSO fixed but doesn't have the SRSO_NO bit set.
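The corrected selection keys off the bug flag rather than the SRSO_NO feature bit, so fixed hardware that doesn't advertise SRSO_NO still gets SBPB.  As a sketch (illustrative names, not kernel helpers):

```c
/* Sketch of the x86_pred_cmd selection after this patch. */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PRED_CMD_IBPB	(1ull << 0)
#define PRED_CMD_SBPB	(1ull << 7)

static uint64_t pick_pred_cmd(bool bug_srso, bool cmd_off, bool has_sbpb)
{
	/* No SRSO bug (including fixed HW without SRSO_NO), or mitigation
	 * explicitly off: the lighter-weight SBPB suffices. */
	if ((!bug_srso || cmd_off) && has_sbpb)
		return PRED_CMD_SBPB;
	return PRED_CMD_IBPB;
}
```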

Fixes: 1b5277c0ea0b ("x86/srso: Add SRSO_NO support")
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 arch/x86/kernel/cpu/bugs.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 10499bcd4e39..2859a54660a2 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2496,7 +2496,7 @@ static void __init srso_select_mitigation(void)
 	pr_info("%s%s\n", srso_strings[srso_mitigation], (has_microcode ? "" : ", no microcode"));
 
 pred_cmd:
-	if ((boot_cpu_has(X86_FEATURE_SRSO_NO) || srso_cmd == SRSO_CMD_OFF) &&
+	if ((!boot_cpu_has_bug(X86_BUG_SRSO) || srso_cmd == SRSO_CMD_OFF) &&
 	     boot_cpu_has(X86_FEATURE_SBPB))
 		x86_pred_cmd = PRED_CMD_SBPB;
 }
-- 
2.41.0



* [PATCH 08/23] x86/srso: Print actual mitigation if requested mitigation isn't possible
  2023-08-25  7:01 [PATCH v2 00/23] SRSO fixes/cleanups Josh Poimboeuf
                   ` (6 preceding siblings ...)
  2023-08-25  7:01 ` [PATCH 07/23] x86/srso: Fix SBPB enablement for (possible) future fixed HW Josh Poimboeuf
@ 2023-08-25  7:01 ` Josh Poimboeuf
  2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
  2023-08-25  7:01 ` [PATCH 09/23] x86/srso: Print mitigation for retbleed IBPB case Josh Poimboeuf
                   ` (16 subsequent siblings)
  24 siblings, 1 reply; 71+ messages in thread
From: Josh Poimboeuf @ 2023-08-25  7:01 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

If the kernel wasn't compiled to support the requested option, print the
actual option that ends up getting used.

Reviewed-by: Borislav Petkov (AMD) <bp@alien8.de>
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 arch/x86/kernel/cpu/bugs.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 2859a54660a2..235c0e00ae51 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2461,7 +2461,6 @@ static void __init srso_select_mitigation(void)
 			srso_mitigation = SRSO_MITIGATION_SAFE_RET;
 		} else {
 			pr_err("WARNING: kernel not compiled with CPU_SRSO.\n");
-			goto pred_cmd;
 		}
 		break;
 
@@ -2473,7 +2472,6 @@ static void __init srso_select_mitigation(void)
 			}
 		} else {
 			pr_err("WARNING: kernel not compiled with CPU_IBPB_ENTRY.\n");
-			goto pred_cmd;
 		}
 		break;
 
@@ -2485,7 +2483,6 @@ static void __init srso_select_mitigation(void)
 			}
 		} else {
 			pr_err("WARNING: kernel not compiled with CPU_SRSO.\n");
-			goto pred_cmd;
                 }
 		break;
 
-- 
2.41.0



* [PATCH 09/23] x86/srso: Print mitigation for retbleed IBPB case
  2023-08-25  7:01 [PATCH v2 00/23] SRSO fixes/cleanups Josh Poimboeuf
                   ` (7 preceding siblings ...)
  2023-08-25  7:01 ` [PATCH 08/23] x86/srso: Print actual mitigation if requested mitigation isn't possible Josh Poimboeuf
@ 2023-08-25  7:01 ` Josh Poimboeuf
  2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
  2023-08-25  7:01 ` [PATCH 10/23] x86/srso: Fix vulnerability reporting for missing microcode Josh Poimboeuf
                   ` (15 subsequent siblings)
  24 siblings, 1 reply; 71+ messages in thread
From: Josh Poimboeuf @ 2023-08-25  7:01 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

When overriding the requested mitigation with IBPB due to retbleed=ibpb,
print the mitigation in the usual format instead of a custom error
message.

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 arch/x86/kernel/cpu/bugs.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 235c0e00ae51..6c47f37515b8 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2425,9 +2425,8 @@ static void __init srso_select_mitigation(void)
 
 	if (retbleed_mitigation == RETBLEED_MITIGATION_IBPB) {
 		if (has_microcode) {
-			pr_err("Retbleed IBPB mitigation enabled, using same for SRSO\n");
 			srso_mitigation = SRSO_MITIGATION_IBPB;
-			goto pred_cmd;
+			goto out;
 		}
 	}
 
@@ -2490,7 +2489,8 @@ static void __init srso_select_mitigation(void)
 		break;
 	}
 
-	pr_info("%s%s\n", srso_strings[srso_mitigation], (has_microcode ? "" : ", no microcode"));
+out:
+	pr_info("%s%s\n", srso_strings[srso_mitigation], has_microcode ? "" : ", no microcode");
 
 pred_cmd:
 	if ((!boot_cpu_has_bug(X86_BUG_SRSO) || srso_cmd == SRSO_CMD_OFF) &&
-- 
2.41.0



* [PATCH 10/23] x86/srso: Fix vulnerability reporting for missing microcode
  2023-08-25  7:01 [PATCH v2 00/23] SRSO fixes/cleanups Josh Poimboeuf
                   ` (8 preceding siblings ...)
  2023-08-25  7:01 ` [PATCH 09/23] x86/srso: Print mitigation for retbleed IBPB case Josh Poimboeuf
@ 2023-08-25  7:01 ` Josh Poimboeuf
  2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
  2023-08-25  7:01 ` [PATCH 11/23] x86/srso: Fix unret validation dependencies Josh Poimboeuf
                   ` (14 subsequent siblings)
  24 siblings, 1 reply; 71+ messages in thread
From: Josh Poimboeuf @ 2023-08-25  7:01 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

The SRSO default safe-ret mitigation is reported as "mitigated" even if
microcode hasn't been updated.  That's wrong because userspace may still
be vulnerable to SRSO attacks due to IBPB not flushing branch type
predictions.

Report the safe-ret + !microcode case as vulnerable.

Also report the microcode-only case as vulnerable as it leaves the
kernel open to attacks.
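The reporting change makes the partially-mitigated states distinct enum entries rather than a ", no microcode" suffix bolted on at print time.  A stand-alone sketch of the resulting table (a subset of the enum, matching the strings added by the diff):

```c
/* Sketch of the expanded SRSO mitigation string table. */
#include <assert.h>
#include <string.h>

enum srso_mitigation {
	SRSO_MITIGATION_NONE,
	SRSO_MITIGATION_UCODE_NEEDED,
	SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED,
	SRSO_MITIGATION_MICROCODE,
	SRSO_MITIGATION_SAFE_RET,
};

static const char * const srso_strings[] = {
	[SRSO_MITIGATION_NONE]			= "Vulnerable",
	[SRSO_MITIGATION_UCODE_NEEDED]		= "Vulnerable: No microcode",
	[SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED]	= "Vulnerable: Safe RET, no microcode",
	[SRSO_MITIGATION_MICROCODE]		= "Vulnerable: Microcode, no safe RET",
	[SRSO_MITIGATION_SAFE_RET]		= "Mitigation: Safe RET",
};
```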

Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 Documentation/admin-guide/hw-vuln/srso.rst | 22 ++++++++++----
 arch/x86/kernel/cpu/bugs.c                 | 34 +++++++++++++---------
 2 files changed, 37 insertions(+), 19 deletions(-)

diff --git a/Documentation/admin-guide/hw-vuln/srso.rst b/Documentation/admin-guide/hw-vuln/srso.rst
index b6cfb51cb0b4..4516719e00b5 100644
--- a/Documentation/admin-guide/hw-vuln/srso.rst
+++ b/Documentation/admin-guide/hw-vuln/srso.rst
@@ -46,12 +46,22 @@ The possible values in this file are:
 
    The processor is not vulnerable
 
- * 'Vulnerable: no microcode':
+* 'Vulnerable':
+
+   The processor is vulnerable and no mitigations have been applied.
+
+ * 'Vulnerable: No microcode':
 
    The processor is vulnerable, no microcode extending IBPB
    functionality to address the vulnerability has been applied.
 
- * 'Mitigation: microcode':
+ * 'Vulnerable: Safe RET, no microcode':
+
+   The "Safe Ret" mitigation (see below) has been applied to protect the
+   kernel, but the IBPB-extending microcode has not been applied.  User
+   space tasks may still be vulnerable.
+
+ * 'Vulnerable: Microcode, no safe RET':
 
    Extended IBPB functionality microcode patch has been applied. It does
    not address User->Kernel and Guest->Host transitions protection but it
@@ -72,11 +82,11 @@ The possible values in this file are:
 
    (spec_rstack_overflow=microcode)
 
- * 'Mitigation: safe RET':
+ * 'Mitigation: Safe RET':
 
-   Software-only mitigation. It complements the extended IBPB microcode
-   patch functionality by addressing User->Kernel and Guest->Host
-   transitions protection.
+   Combined microcode/software mitigation. It complements the
+   extended IBPB microcode patch functionality by addressing
+   User->Kernel and Guest->Host transitions protection.
 
    Selected by default or by spec_rstack_overflow=safe-ret
 
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 6c47f37515b8..d883d1c38f7f 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2353,6 +2353,8 @@ early_param("l1tf", l1tf_cmdline);
 
 enum srso_mitigation {
 	SRSO_MITIGATION_NONE,
+	SRSO_MITIGATION_UCODE_NEEDED,
+	SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED,
 	SRSO_MITIGATION_MICROCODE,
 	SRSO_MITIGATION_SAFE_RET,
 	SRSO_MITIGATION_IBPB,
@@ -2368,11 +2370,13 @@ enum srso_mitigation_cmd {
 };
 
 static const char * const srso_strings[] = {
-	[SRSO_MITIGATION_NONE]           = "Vulnerable",
-	[SRSO_MITIGATION_MICROCODE]      = "Mitigation: microcode",
-	[SRSO_MITIGATION_SAFE_RET]	 = "Mitigation: safe RET",
-	[SRSO_MITIGATION_IBPB]		 = "Mitigation: IBPB",
-	[SRSO_MITIGATION_IBPB_ON_VMEXIT] = "Mitigation: IBPB on VMEXIT only"
+	[SRSO_MITIGATION_NONE]			= "Vulnerable",
+	[SRSO_MITIGATION_UCODE_NEEDED]		= "Vulnerable: No microcode",
+	[SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED]	= "Vulnerable: Safe RET, no microcode",
+	[SRSO_MITIGATION_MICROCODE]		= "Vulnerable: Microcode, no safe RET",
+	[SRSO_MITIGATION_SAFE_RET]		= "Mitigation: Safe RET",
+	[SRSO_MITIGATION_IBPB]			= "Mitigation: IBPB",
+	[SRSO_MITIGATION_IBPB_ON_VMEXIT]	= "Mitigation: IBPB on VMEXIT only"
 };
 
 static enum srso_mitigation srso_mitigation __ro_after_init = SRSO_MITIGATION_NONE;
@@ -2409,10 +2413,7 @@ static void __init srso_select_mitigation(void)
 	if (!boot_cpu_has_bug(X86_BUG_SRSO) || cpu_mitigations_off())
 		goto pred_cmd;
 
-	if (!has_microcode) {
-		pr_warn("IBPB-extending microcode not applied!\n");
-		pr_warn(SRSO_NOTICE);
-	} else {
+	if (has_microcode) {
 		/*
 		 * Zen1/2 with SMT off aren't vulnerable after the right
 		 * IBPB microcode has been applied.
@@ -2428,6 +2429,12 @@ static void __init srso_select_mitigation(void)
 			srso_mitigation = SRSO_MITIGATION_IBPB;
 			goto out;
 		}
+	} else {
+		pr_warn("IBPB-extending microcode not applied!\n");
+		pr_warn(SRSO_NOTICE);
+
+		/* may be overwritten by SRSO_CMD_SAFE_RET below */
+		srso_mitigation = SRSO_MITIGATION_UCODE_NEEDED;
 	}
 
 	switch (srso_cmd) {
@@ -2457,7 +2464,10 @@ static void __init srso_select_mitigation(void)
 				setup_force_cpu_cap(X86_FEATURE_SRSO);
 				x86_return_thunk = srso_return_thunk;
 			}
-			srso_mitigation = SRSO_MITIGATION_SAFE_RET;
+			if (has_microcode)
+				srso_mitigation = SRSO_MITIGATION_SAFE_RET;
+			else
+				srso_mitigation = SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED;
 		} else {
 			pr_err("WARNING: kernel not compiled with CPU_SRSO.\n");
 		}
@@ -2701,9 +2711,7 @@ static ssize_t srso_show_state(char *buf)
 	if (boot_cpu_has(X86_FEATURE_SRSO_NO))
 		return sysfs_emit(buf, "Mitigation: SMT disabled\n");
 
-	return sysfs_emit(buf, "%s%s\n",
-			  srso_strings[srso_mitigation],
-			  boot_cpu_has(X86_FEATURE_IBPB_BRTYPE) ? "" : ", no microcode");
+	return sysfs_emit(buf, "%s\n", srso_strings[srso_mitigation]);
 }
 
 static ssize_t gds_show_state(char *buf)
-- 
2.41.0



* [PATCH 11/23] x86/srso: Fix unret validation dependencies
  2023-08-25  7:01 [PATCH v2 00/23] SRSO fixes/cleanups Josh Poimboeuf
                   ` (9 preceding siblings ...)
  2023-08-25  7:01 ` [PATCH 10/23] x86/srso: Fix vulnerability reporting for missing microcode Josh Poimboeuf
@ 2023-08-25  7:01 ` Josh Poimboeuf
  2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
  2023-08-25  7:01 ` [PATCH 12/23] x86/alternatives: Remove faulty optimization Josh Poimboeuf
                   ` (13 subsequent siblings)
  24 siblings, 1 reply; 71+ messages in thread
From: Josh Poimboeuf @ 2023-08-25  7:01 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

CONFIG_CPU_SRSO isn't dependent on CONFIG_CPU_UNRET_ENTRY (AMD
Retbleed), so the two features are independently configurable.  Fix
several issues for the (presumably rare) case where CONFIG_CPU_SRSO is
enabled but CONFIG_CPU_UNRET_ENTRY isn't.

Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 arch/x86/include/asm/nospec-branch.h | 4 ++--
 include/linux/objtool.h              | 3 ++-
 scripts/Makefile.vmlinux_o           | 3 ++-
 3 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index c55cc243592e..197ff4f4d1ce 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -271,7 +271,7 @@
 .Lskip_rsb_\@:
 .endm
 
-#ifdef CONFIG_CPU_UNRET_ENTRY
+#if defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_SRSO)
 #define CALL_UNTRAIN_RET	"call entry_untrain_ret"
 #else
 #define CALL_UNTRAIN_RET	""
@@ -312,7 +312,7 @@
 
 .macro UNTRAIN_RET_FROM_CALL
 #if defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_IBPB_ENTRY) || \
-	defined(CONFIG_CALL_DEPTH_TRACKING)
+	defined(CONFIG_CALL_DEPTH_TRACKING) || defined(CONFIG_CPU_SRSO)
 	VALIDATE_UNRET_END
 	ALTERNATIVE_3 "",						\
 		      CALL_UNTRAIN_RET, X86_FEATURE_UNRET,		\
diff --git a/include/linux/objtool.h b/include/linux/objtool.h
index 03f82c2c2ebf..b5440e7da55b 100644
--- a/include/linux/objtool.h
+++ b/include/linux/objtool.h
@@ -130,7 +130,8 @@
  * it will be ignored.
  */
 .macro VALIDATE_UNRET_BEGIN
-#if defined(CONFIG_NOINSTR_VALIDATION) && defined(CONFIG_CPU_UNRET_ENTRY)
+#if defined(CONFIG_NOINSTR_VALIDATION) && \
+	(defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_SRSO))
 .Lhere_\@:
 	.pushsection .discard.validate_unret
 	.long	.Lhere_\@ - .
diff --git a/scripts/Makefile.vmlinux_o b/scripts/Makefile.vmlinux_o
index 0edfdb40364b..25b3b587d37c 100644
--- a/scripts/Makefile.vmlinux_o
+++ b/scripts/Makefile.vmlinux_o
@@ -37,7 +37,8 @@ objtool-enabled := $(or $(delay-objtool),$(CONFIG_NOINSTR_VALIDATION))
 
 vmlinux-objtool-args-$(delay-objtool)			+= $(objtool-args-y)
 vmlinux-objtool-args-$(CONFIG_GCOV_KERNEL)		+= --no-unreachable
-vmlinux-objtool-args-$(CONFIG_NOINSTR_VALIDATION)	+= --noinstr $(if $(CONFIG_CPU_UNRET_ENTRY), --unret)
+vmlinux-objtool-args-$(CONFIG_NOINSTR_VALIDATION)	+= --noinstr \
+							   $(if $(or $(CONFIG_CPU_UNRET_ENTRY),$(CONFIG_CPU_SRSO)), --unret)
 
 objtool-args = $(vmlinux-objtool-args-y) --link
 
-- 
2.41.0



* [PATCH 12/23] x86/alternatives: Remove faulty optimization
  2023-08-25  7:01 [PATCH v2 00/23] SRSO fixes/cleanups Josh Poimboeuf
                   ` (10 preceding siblings ...)
  2023-08-25  7:01 ` [PATCH 11/23] x86/srso: Fix unret validation dependencies Josh Poimboeuf
@ 2023-08-25  7:01 ` Josh Poimboeuf
  2023-08-25  9:20   ` Ingo Molnar
                     ` (2 more replies)
  2023-08-25  7:01 ` [PATCH 13/23] x86/srso: Improve i-cache locality for alias mitigation Josh Poimboeuf
                   ` (12 subsequent siblings)
  24 siblings, 3 replies; 71+ messages in thread
From: Josh Poimboeuf @ 2023-08-25  7:01 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

The following commit:

  095b8303f383 ("x86/alternative: Make custom return thunk unconditional")

made '__x86_return_thunk' a placeholder value.  All code setting
X86_FEATURE_RETHUNK also changes the value of 'x86_return_thunk'.  So
the optimization at the beginning of apply_returns() is dead code.

Also, before the above-mentioned commit, the optimization actually had a
bug: it bypassed __static_call_fixup(), causing some raw returns to
remain unpatched in static call trampolines.  Thus the 'Fixes' tag.

Fixes: d2408e043e72 ("x86/alternative: Optimize returns patching")
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 arch/x86/kernel/alternative.c | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 099d58d02a26..34be5fbaf41e 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -720,14 +720,6 @@ void __init_or_module noinline apply_returns(s32 *start, s32 *end)
 {
 	s32 *s;
 
-	/*
-	 * Do not patch out the default return thunks if those needed are the
-	 * ones generated by the compiler.
-	 */
-	if (cpu_feature_enabled(X86_FEATURE_RETHUNK) &&
-	    (x86_return_thunk == __x86_return_thunk))
-		return;
-
 	for (s = start; s < end; s++) {
 		void *dest = NULL, *addr = (void *)s + *s;
 		struct insn insn;
-- 
2.41.0



* [PATCH 13/23] x86/srso: Improve i-cache locality for alias mitigation
  2023-08-25  7:01 [PATCH v2 00/23] SRSO fixes/cleanups Josh Poimboeuf
                   ` (11 preceding siblings ...)
  2023-08-25  7:01 ` [PATCH 12/23] x86/alternatives: Remove faulty optimization Josh Poimboeuf
@ 2023-08-25  7:01 ` Josh Poimboeuf
  2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
  2023-08-25  7:01 ` [PATCH 14/23] x86/srso: Unexport untraining functions Josh Poimboeuf
                   ` (11 subsequent siblings)
  24 siblings, 1 reply; 71+ messages in thread
From: Josh Poimboeuf @ 2023-08-25  7:01 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

Move srso_alias_return_thunk() to the same section as
srso_alias_safe_ret() so they can share a cache line.

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 arch/x86/lib/retpoline.S | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
index cd86aeb5fdd3..9ab634f0b5d2 100644
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -177,15 +177,14 @@ SYM_START(srso_alias_safe_ret, SYM_L_GLOBAL, SYM_A_NONE)
 	int3
 SYM_FUNC_END(srso_alias_safe_ret)
 
-	.section .text..__x86.return_thunk
-
-SYM_CODE_START(srso_alias_return_thunk)
+SYM_CODE_START_NOALIGN(srso_alias_return_thunk)
 	UNWIND_HINT_FUNC
 	ANNOTATE_NOENDBR
 	call srso_alias_safe_ret
 	ud2
 SYM_CODE_END(srso_alias_return_thunk)
 
+	.section .text..__x86.return_thunk
 /*
  * Some generic notes on the untraining sequences:
  *
-- 
2.41.0



* [PATCH 14/23] x86/srso: Unexport untraining functions
  2023-08-25  7:01 [PATCH v2 00/23] SRSO fixes/cleanups Josh Poimboeuf
                   ` (12 preceding siblings ...)
  2023-08-25  7:01 ` [PATCH 13/23] x86/srso: Improve i-cache locality for alias mitigation Josh Poimboeuf
@ 2023-08-25  7:01 ` Josh Poimboeuf
  2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
  2023-08-25  7:01 ` [PATCH 15/23] x86/srso: Remove 'pred_cmd' label Josh Poimboeuf
                   ` (10 subsequent siblings)
  24 siblings, 1 reply; 71+ messages in thread
From: Josh Poimboeuf @ 2023-08-25  7:01 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

These functions aren't called outside of retpoline.S.

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 arch/x86/include/asm/nospec-branch.h | 4 ----
 arch/x86/lib/retpoline.S             | 7 ++-----
 2 files changed, 2 insertions(+), 9 deletions(-)

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 197ff4f4d1ce..6c14fd1f5912 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -352,10 +352,6 @@ extern void retbleed_return_thunk(void);
 extern void srso_return_thunk(void);
 extern void srso_alias_return_thunk(void);
 
-extern void retbleed_untrain_ret(void);
-extern void srso_untrain_ret(void);
-extern void srso_alias_untrain_ret(void);
-
 extern void entry_untrain_ret(void);
 extern void entry_ibpb(void);
 
diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
index 9ab634f0b5d2..a40ba18610d8 100644
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -157,7 +157,6 @@ SYM_START(srso_alias_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
 	lfence
 	jmp srso_alias_return_thunk
 SYM_FUNC_END(srso_alias_untrain_ret)
-__EXPORT_THUNK(srso_alias_untrain_ret)
 
 	.section .text..__x86.rethunk_safe
 #else
@@ -215,7 +214,7 @@ SYM_CODE_END(srso_alias_return_thunk)
  */
 	.align 64
 	.skip 64 - (retbleed_return_thunk - retbleed_untrain_ret), 0xcc
-SYM_START(retbleed_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
+SYM_START(retbleed_untrain_ret, SYM_L_LOCAL, SYM_A_NONE)
 	ANNOTATE_NOENDBR
 	/*
 	 * As executed from retbleed_untrain_ret, this is:
@@ -263,7 +262,6 @@ SYM_CODE_END(retbleed_return_thunk)
 	jmp retbleed_return_thunk
 	int3
 SYM_FUNC_END(retbleed_untrain_ret)
-__EXPORT_THUNK(retbleed_untrain_ret)
 
 /*
  * SRSO untraining sequence for Zen1/2, similar to retbleed_untrain_ret()
@@ -277,7 +275,7 @@ __EXPORT_THUNK(retbleed_untrain_ret)
  */
 	.align 64
 	.skip 64 - (srso_safe_ret - srso_untrain_ret), 0xcc
-SYM_START(srso_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
+SYM_START(srso_untrain_ret, SYM_L_LOCAL, SYM_A_NONE)
 	ANNOTATE_NOENDBR
 	.byte 0x48, 0xb8
 
@@ -298,7 +296,6 @@ SYM_INNER_LABEL(srso_safe_ret, SYM_L_GLOBAL)
 	ud2
 SYM_CODE_END(srso_safe_ret)
 SYM_FUNC_END(srso_untrain_ret)
-__EXPORT_THUNK(srso_untrain_ret)
 
 SYM_CODE_START(srso_return_thunk)
 	UNWIND_HINT_FUNC
-- 
2.41.0



* [PATCH 15/23] x86/srso: Remove 'pred_cmd' label
  2023-08-25  7:01 [PATCH v2 00/23] SRSO fixes/cleanups Josh Poimboeuf
                   ` (13 preceding siblings ...)
  2023-08-25  7:01 ` [PATCH 14/23] x86/srso: Unexport untraining functions Josh Poimboeuf
@ 2023-08-25  7:01 ` Josh Poimboeuf
  2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
  2023-08-25 19:51   ` [PATCH 15/23] " Nikolay Borisov
  2023-08-25  7:01 ` [PATCH 16/23] x86/bugs: Remove default case for fully switched enums Josh Poimboeuf
                   ` (9 subsequent siblings)
  24 siblings, 2 replies; 71+ messages in thread
From: Josh Poimboeuf @ 2023-08-25  7:01 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

SBPB is only enabled in two distinct cases:

1) when SRSO has been disabled with srso=off

2) when SRSO has been fixed (in future HW)

Simplify the control flow by getting rid of the 'pred_cmd' label and
moving the SBPB enablement check to the two corresponding code sites.
This makes it clearer when exactly SBPB gets enabled.

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 arch/x86/kernel/cpu/bugs.c | 21 +++++++++++++--------
 1 file changed, 13 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index d883d1c38f7f..3c7f634b6148 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2410,13 +2410,21 @@ static void __init srso_select_mitigation(void)
 {
 	bool has_microcode = boot_cpu_has(X86_FEATURE_IBPB_BRTYPE);
 
-	if (!boot_cpu_has_bug(X86_BUG_SRSO) || cpu_mitigations_off())
-		goto pred_cmd;
+	if (cpu_mitigations_off())
+		return;
+
+	if (!boot_cpu_has_bug(X86_BUG_SRSO)) {
+		if (boot_cpu_has(X86_FEATURE_SBPB))
+			x86_pred_cmd = PRED_CMD_SBPB;
+		return;
+	}
 
 	if (has_microcode) {
 		/*
 		 * Zen1/2 with SMT off aren't vulnerable after the right
 		 * IBPB microcode has been applied.
+		 *
+		 * Zen1/2 don't have SBPB, no need to try to enable it here.
 		 */
 		if (boot_cpu_data.x86 < 0x19 && !cpu_smt_possible()) {
 			setup_force_cpu_cap(X86_FEATURE_SRSO_NO);
@@ -2439,7 +2447,9 @@ static void __init srso_select_mitigation(void)
 
 	switch (srso_cmd) {
 	case SRSO_CMD_OFF:
-		goto pred_cmd;
+		if (boot_cpu_has(X86_FEATURE_SBPB))
+			x86_pred_cmd = PRED_CMD_SBPB;
+		return;
 
 	case SRSO_CMD_MICROCODE:
 		if (has_microcode) {
@@ -2501,11 +2511,6 @@ static void __init srso_select_mitigation(void)
 
 out:
 	pr_info("%s%s\n", srso_strings[srso_mitigation], has_microcode ? "" : ", no microcode");
-
-pred_cmd:
-	if ((!boot_cpu_has_bug(X86_BUG_SRSO) || srso_cmd == SRSO_CMD_OFF) &&
-	     boot_cpu_has(X86_FEATURE_SBPB))
-		x86_pred_cmd = PRED_CMD_SBPB;
 }
 
 #undef pr_fmt
-- 
2.41.0



* [PATCH 16/23] x86/bugs: Remove default case for fully switched enums
  2023-08-25  7:01 [PATCH v2 00/23] SRSO fixes/cleanups Josh Poimboeuf
                   ` (14 preceding siblings ...)
  2023-08-25  7:01 ` [PATCH 15/23] x86/srso: Remove 'pred_cmd' label Josh Poimboeuf
@ 2023-08-25  7:01 ` Josh Poimboeuf
  2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
  2023-09-02  9:02   ` [PATCH 16/23] " Borislav Petkov
  2023-08-25  7:01 ` [PATCH 17/23] x86/srso: Move retbleed IBPB check into existing 'has_microcode' code block Josh Poimboeuf
                   ` (8 subsequent siblings)
  24 siblings, 2 replies; 71+ messages in thread
From: Josh Poimboeuf @ 2023-08-25  7:01 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

For enum switch statements which handle all possible cases, remove the
default case so that a compiler warning gets printed if one of the enum
values is accidentally omitted from the switch statement.

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 arch/x86/kernel/cpu/bugs.c | 17 +++++++----------
 1 file changed, 7 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 3c7f634b6148..06216159d7fc 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1019,7 +1019,6 @@ static void __init retbleed_select_mitigation(void)
 
 do_cmd_auto:
 	case RETBLEED_CMD_AUTO:
-	default:
 		if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
 		    boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) {
 			if (IS_ENABLED(CONFIG_CPU_UNRET_ENTRY))
@@ -1290,6 +1289,8 @@ spectre_v2_user_select_mitigation(void)
 
 		spectre_v2_user_ibpb = mode;
 		switch (cmd) {
+		case SPECTRE_V2_USER_CMD_NONE:
+			break;
 		case SPECTRE_V2_USER_CMD_FORCE:
 		case SPECTRE_V2_USER_CMD_PRCTL_IBPB:
 		case SPECTRE_V2_USER_CMD_SECCOMP_IBPB:
@@ -1301,8 +1302,6 @@ spectre_v2_user_select_mitigation(void)
 		case SPECTRE_V2_USER_CMD_SECCOMP:
 			static_branch_enable(&switch_mm_cond_ibpb);
 			break;
-		default:
-			break;
 		}
 
 		pr_info("mitigation: Enabling %s Indirect Branch Prediction Barrier\n",
@@ -2160,6 +2159,10 @@ static int l1d_flush_prctl_get(struct task_struct *task)
 static int ssb_prctl_get(struct task_struct *task)
 {
 	switch (ssb_mode) {
+	case SPEC_STORE_BYPASS_NONE:
+		if (boot_cpu_has_bug(X86_BUG_SPEC_STORE_BYPASS))
+			return PR_SPEC_ENABLE;
+		return PR_SPEC_NOT_AFFECTED;
 	case SPEC_STORE_BYPASS_DISABLE:
 		return PR_SPEC_DISABLE;
 	case SPEC_STORE_BYPASS_SECCOMP:
@@ -2171,11 +2174,8 @@ static int ssb_prctl_get(struct task_struct *task)
 		if (task_spec_ssb_disable(task))
 			return PR_SPEC_PRCTL | PR_SPEC_DISABLE;
 		return PR_SPEC_PRCTL | PR_SPEC_ENABLE;
-	default:
-		if (boot_cpu_has_bug(X86_BUG_SPEC_STORE_BYPASS))
-			return PR_SPEC_ENABLE;
-		return PR_SPEC_NOT_AFFECTED;
 	}
+	BUG();
 }
 
 static int ib_prctl_get(struct task_struct *task)
@@ -2504,9 +2504,6 @@ static void __init srso_select_mitigation(void)
 			pr_err("WARNING: kernel not compiled with CPU_SRSO.\n");
                 }
 		break;
-
-	default:
-		break;
 	}
 
 out:
-- 
2.41.0



* [PATCH 17/23] x86/srso: Move retbleed IBPB check into existing 'has_microcode' code block
  2023-08-25  7:01 [PATCH v2 00/23] SRSO fixes/cleanups Josh Poimboeuf
                   ` (15 preceding siblings ...)
  2023-08-25  7:01 ` [PATCH 16/23] x86/bugs: Remove default case for fully switched enums Josh Poimboeuf
@ 2023-08-25  7:01 ` Josh Poimboeuf
  2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
  2023-08-25  7:01 ` [PATCH 18/23] x86/srso: Remove redundant X86_FEATURE_ENTRY_IBPB check Josh Poimboeuf
                   ` (7 subsequent siblings)
  24 siblings, 1 reply; 71+ messages in thread
From: Josh Poimboeuf @ 2023-08-25  7:01 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

Simplify the code flow a bit by moving the retbleed IBPB check into the
existing 'has_microcode' block.

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 arch/x86/kernel/cpu/bugs.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 06216159d7fc..b086fd46fa1b 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2430,10 +2430,8 @@ static void __init srso_select_mitigation(void)
 			setup_force_cpu_cap(X86_FEATURE_SRSO_NO);
 			return;
 		}
-	}
 
-	if (retbleed_mitigation == RETBLEED_MITIGATION_IBPB) {
-		if (has_microcode) {
+		if (retbleed_mitigation == RETBLEED_MITIGATION_IBPB) {
 			srso_mitigation = SRSO_MITIGATION_IBPB;
 			goto out;
 		}
-- 
2.41.0



* [PATCH 18/23] x86/srso: Remove redundant X86_FEATURE_ENTRY_IBPB check
  2023-08-25  7:01 [PATCH v2 00/23] SRSO fixes/cleanups Josh Poimboeuf
                   ` (16 preceding siblings ...)
  2023-08-25  7:01 ` [PATCH 17/23] x86/srso: Move retbleed IBPB check into existing 'has_microcode' code block Josh Poimboeuf
@ 2023-08-25  7:01 ` Josh Poimboeuf
  2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
  2023-09-02  9:10   ` [PATCH 18/23] " Borislav Petkov
  2023-08-25  7:01 ` [PATCH 19/23] x86/srso: Disentangle rethunk-dependent options Josh Poimboeuf
                   ` (6 subsequent siblings)
  24 siblings, 2 replies; 71+ messages in thread
From: Josh Poimboeuf @ 2023-08-25  7:01 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

The X86_FEATURE_ENTRY_IBPB check is redundant here due to the above
RETBLEED_MITIGATION_IBPB check.  RETBLEED_MITIGATION_IBPB already
implies X86_FEATURE_ENTRY_IBPB.  So if we got here and 'has_microcode'
is true, it means X86_FEATURE_ENTRY_IBPB is not set.

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 arch/x86/kernel/cpu/bugs.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index b086fd46fa1b..563f09ba6446 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2494,7 +2494,7 @@ static void __init srso_select_mitigation(void)
 
 	case SRSO_CMD_IBPB_ON_VMEXIT:
 		if (IS_ENABLED(CONFIG_CPU_SRSO)) {
-			if (!boot_cpu_has(X86_FEATURE_ENTRY_IBPB) && has_microcode) {
+			if (has_microcode) {
 				setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT);
 				srso_mitigation = SRSO_MITIGATION_IBPB_ON_VMEXIT;
 			}
-- 
2.41.0



* [PATCH 19/23] x86/srso: Disentangle rethunk-dependent options
  2023-08-25  7:01 [PATCH v2 00/23] SRSO fixes/cleanups Josh Poimboeuf
                   ` (17 preceding siblings ...)
  2023-08-25  7:01 ` [PATCH 18/23] x86/srso: Remove redundant X86_FEATURE_ENTRY_IBPB check Josh Poimboeuf
@ 2023-08-25  7:01 ` Josh Poimboeuf
  2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
  2023-08-25  7:01 ` [PATCH 20/23] x86/rethunk: Use SYM_CODE_START[_LOCAL]_NOALIGN macros Josh Poimboeuf
                   ` (5 subsequent siblings)
  24 siblings, 1 reply; 71+ messages in thread
From: Josh Poimboeuf @ 2023-08-25  7:01 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

CONFIG_RETHUNK, CONFIG_CPU_UNRET_ENTRY and CONFIG_CPU_SRSO are all
tangled up.  De-spaghettify the code a bit.

Some of the rethunk-related code has been shuffled around within the
'.text..__x86.return_thunk' section, but otherwise there are no
functional changes.  srso_alias_untrain_ret() and srso_alias_safe_ret()
(which are very address-sensitive) haven't moved.

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 arch/x86/include/asm/nospec-branch.h |  25 +++--
 arch/x86/kernel/cpu/bugs.c           |   5 +-
 arch/x86/kernel/vmlinux.lds.S        |   7 +-
 arch/x86/lib/retpoline.S             | 157 +++++++++++++++------------
 4 files changed, 109 insertions(+), 85 deletions(-)

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 6c14fd1f5912..51e3f1a287d2 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -289,19 +289,17 @@
  * where we have a stack but before any RET instruction.
  */
 .macro UNTRAIN_RET
-#if defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_IBPB_ENTRY) || \
-	defined(CONFIG_CALL_DEPTH_TRACKING) || defined(CONFIG_CPU_SRSO)
+#if defined(CONFIG_RETHUNK) || defined(CONFIG_CPU_IBPB_ENTRY)
 	VALIDATE_UNRET_END
 	ALTERNATIVE_3 "",						\
 		      CALL_UNTRAIN_RET, X86_FEATURE_UNRET,		\
 		      "call entry_ibpb", X86_FEATURE_ENTRY_IBPB,	\
-		      __stringify(RESET_CALL_DEPTH), X86_FEATURE_CALL_DEPTH
+		     __stringify(RESET_CALL_DEPTH), X86_FEATURE_CALL_DEPTH
 #endif
 .endm
 
 .macro UNTRAIN_RET_VM
-#if defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_IBPB_ENTRY) || \
-	defined(CONFIG_CALL_DEPTH_TRACKING) || defined(CONFIG_CPU_SRSO)
+#if defined(CONFIG_RETHUNK) || defined(CONFIG_CPU_IBPB_ENTRY)
 	VALIDATE_UNRET_END
 	ALTERNATIVE_3 "",						\
 		      CALL_UNTRAIN_RET, X86_FEATURE_UNRET,		\
@@ -311,8 +309,7 @@
 .endm
 
 .macro UNTRAIN_RET_FROM_CALL
-#if defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_IBPB_ENTRY) || \
-	defined(CONFIG_CALL_DEPTH_TRACKING) || defined(CONFIG_CPU_SRSO)
+#if defined(CONFIG_RETHUNK) || defined(CONFIG_CPU_IBPB_ENTRY)
 	VALIDATE_UNRET_END
 	ALTERNATIVE_3 "",						\
 		      CALL_UNTRAIN_RET, X86_FEATURE_UNRET,		\
@@ -348,6 +345,20 @@ extern void __x86_return_thunk(void);
 static inline void __x86_return_thunk(void) {}
 #endif
 
+#ifdef CONFIG_CPU_UNRET_ENTRY
+extern void retbleed_return_thunk(void);
+#else
+static inline void retbleed_return_thunk(void) {}
+#endif
+
+#ifdef CONFIG_CPU_SRSO
+extern void srso_return_thunk(void);
+extern void srso_alias_return_thunk(void);
+#else
+static inline void srso_return_thunk(void) {}
+static inline void srso_alias_return_thunk(void) {}
+#endif
+
 extern void retbleed_return_thunk(void);
 extern void srso_return_thunk(void);
 extern void srso_alias_return_thunk(void);
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 563f09ba6446..0ebdaa734e33 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -63,7 +63,7 @@ EXPORT_SYMBOL_GPL(x86_pred_cmd);
 
 static DEFINE_MUTEX(spec_ctrl_mutex);
 
-void (*x86_return_thunk)(void) __ro_after_init = &__x86_return_thunk;
+void (*x86_return_thunk)(void) __ro_after_init = __x86_return_thunk;
 
 /* Update SPEC_CTRL MSR and its cached copy unconditionally */
 static void update_spec_ctrl(u64 val)
@@ -1041,8 +1041,7 @@ static void __init retbleed_select_mitigation(void)
 		setup_force_cpu_cap(X86_FEATURE_RETHUNK);
 		setup_force_cpu_cap(X86_FEATURE_UNRET);
 
-		if (IS_ENABLED(CONFIG_RETHUNK))
-			x86_return_thunk = retbleed_return_thunk;
+		x86_return_thunk = retbleed_return_thunk;
 
 		if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD &&
 		    boot_cpu_data.x86_vendor != X86_VENDOR_HYGON)
diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index 83d41c2601d7..9188834e56c9 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -139,10 +139,7 @@ SECTIONS
 		STATIC_CALL_TEXT
 
 		ALIGN_ENTRY_TEXT_BEGIN
-#ifdef CONFIG_CPU_SRSO
 		*(.text..__x86.rethunk_untrain)
-#endif
-
 		ENTRY_TEXT
 
 #ifdef CONFIG_CPU_SRSO
@@ -520,12 +517,12 @@ INIT_PER_CPU(irq_stack_backing_store);
            "fixed_percpu_data is not at start of per-cpu area");
 #endif
 
-#ifdef CONFIG_RETHUNK
+#ifdef CONFIG_CPU_UNRET_ENTRY
 . = ASSERT((retbleed_return_thunk & 0x3f) == 0, "retbleed_return_thunk not cacheline-aligned");
-. = ASSERT((srso_safe_ret & 0x3f) == 0, "srso_safe_ret not cacheline-aligned");
 #endif
 
 #ifdef CONFIG_CPU_SRSO
+. = ASSERT((srso_safe_ret & 0x3f) == 0, "srso_safe_ret not cacheline-aligned");
 /*
  * GNU ld cannot do XOR until 2.41.
  * https://sourceware.org/git/?p=binutils-gdb.git;a=commit;h=f6f78318fca803c4907fb8d7f6ded8295f1947b1
diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
index a40ba18610d8..8ba79d2b8997 100644
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -126,12 +126,13 @@ SYM_CODE_END(__x86_indirect_jump_thunk_array)
 #include <asm/GEN-for-each-reg.h>
 #undef GEN
 #endif
-/*
- * This function name is magical and is used by -mfunction-return=thunk-extern
- * for the compiler to generate JMPs to it.
- */
+
 #ifdef CONFIG_RETHUNK
 
+	.section .text..__x86.return_thunk
+
+#ifdef CONFIG_CPU_SRSO
+
 /*
  * srso_alias_untrain_ret() and srso_alias_safe_ret() are placed at
  * special addresses:
@@ -147,9 +148,7 @@ SYM_CODE_END(__x86_indirect_jump_thunk_array)
  *
  * As a result, srso_alias_safe_ret() becomes a safe return.
  */
-#ifdef CONFIG_CPU_SRSO
-	.section .text..__x86.rethunk_untrain
-
+	.pushsection .text..__x86.rethunk_untrain
 SYM_START(srso_alias_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
 	UNWIND_HINT_FUNC
 	ANNOTATE_NOENDBR
@@ -157,17 +156,9 @@ SYM_START(srso_alias_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
 	lfence
 	jmp srso_alias_return_thunk
 SYM_FUNC_END(srso_alias_untrain_ret)
+	.popsection
 
-	.section .text..__x86.rethunk_safe
-#else
-/* dummy definition for alternatives */
-SYM_START(srso_alias_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
-	ANNOTATE_UNRET_SAFE
-	ret
-	int3
-SYM_FUNC_END(srso_alias_untrain_ret)
-#endif
-
+	.pushsection .text..__x86.rethunk_safe
 SYM_START(srso_alias_safe_ret, SYM_L_GLOBAL, SYM_A_NONE)
 	lea 8(%_ASM_SP), %_ASM_SP
 	UNWIND_HINT_FUNC
@@ -182,8 +173,58 @@ SYM_CODE_START_NOALIGN(srso_alias_return_thunk)
 	call srso_alias_safe_ret
 	ud2
 SYM_CODE_END(srso_alias_return_thunk)
+	.popsection
+
+/*
+ * SRSO untraining sequence for Zen1/2, similar to retbleed_untrain_ret()
+ * above. On kernel entry, srso_untrain_ret() is executed which is a
+ *
+ * movabs $0xccccc30824648d48,%rax
+ *
+ * and when the return thunk executes the inner label srso_safe_ret()
+ * later, it is a stack manipulation and a RET which is mispredicted and
+ * thus a "safe" one to use.
+ */
+	.align 64
+	.skip 64 - (srso_safe_ret - srso_untrain_ret), 0xcc
+SYM_START(srso_untrain_ret, SYM_L_LOCAL, SYM_A_NONE)
+	ANNOTATE_NOENDBR
+	.byte 0x48, 0xb8
+
+/*
+ * This forces the function return instruction to speculate into a trap
+ * (UD2 in srso_return_thunk() below).  This RET will then mispredict
+ * and execution will continue at the return site read from the top of
+ * the stack.
+ */
+SYM_INNER_LABEL(srso_safe_ret, SYM_L_GLOBAL)
+	lea 8(%_ASM_SP), %_ASM_SP
+	ret
+	int3
+	int3
+	/* end of movabs */
+	lfence
+	call srso_safe_ret
+	ud2
+SYM_CODE_END(srso_safe_ret)
+SYM_FUNC_END(srso_untrain_ret)
+
+SYM_CODE_START(srso_return_thunk)
+	UNWIND_HINT_FUNC
+	ANNOTATE_NOENDBR
+	call srso_safe_ret
+	ud2
+SYM_CODE_END(srso_return_thunk)
+
+#define JMP_SRSO_UNTRAIN_RET "jmp srso_untrain_ret"
+#define JMP_SRSO_ALIAS_UNTRAIN_RET "jmp srso_alias_untrain_ret"
+#else /* !CONFIG_CPU_SRSO */
+#define JMP_SRSO_UNTRAIN_RET "ud2"
+#define JMP_SRSO_ALIAS_UNTRAIN_RET "ud2"
+#endif /* CONFIG_CPU_SRSO */
+
+#ifdef CONFIG_CPU_UNRET_ENTRY
 
-	.section .text..__x86.return_thunk
 /*
  * Some generic notes on the untraining sequences:
  *
@@ -263,64 +304,21 @@ SYM_CODE_END(retbleed_return_thunk)
 	int3
 SYM_FUNC_END(retbleed_untrain_ret)
 
-/*
- * SRSO untraining sequence for Zen1/2, similar to retbleed_untrain_ret()
- * above. On kernel entry, srso_untrain_ret() is executed which is a
- *
- * movabs $0xccccc30824648d48,%rax
- *
- * and when the return thunk executes the inner label srso_safe_ret()
- * later, it is a stack manipulation and a RET which is mispredicted and
- * thus a "safe" one to use.
- */
-	.align 64
-	.skip 64 - (srso_safe_ret - srso_untrain_ret), 0xcc
-SYM_START(srso_untrain_ret, SYM_L_LOCAL, SYM_A_NONE)
-	ANNOTATE_NOENDBR
-	.byte 0x48, 0xb8
+#define JMP_RETBLEED_UNTRAIN_RET "jmp retbleed_untrain_ret"
+#else /* !CONFIG_CPU_UNRET_ENTRY */
+#define JMP_RETBLEED_UNTRAIN_RET "ud2"
+#endif /* CONFIG_CPU_UNRET_ENTRY */
 
-/*
- * This forces the function return instruction to speculate into a trap
- * (UD2 in srso_return_thunk() below).  This RET will then mispredict
- * and execution will continue at the return site read from the top of
- * the stack.
- */
-SYM_INNER_LABEL(srso_safe_ret, SYM_L_GLOBAL)
-	lea 8(%_ASM_SP), %_ASM_SP
-	ret
-	int3
-	int3
-	/* end of movabs */
-	lfence
-	call srso_safe_ret
-	ud2
-SYM_CODE_END(srso_safe_ret)
-SYM_FUNC_END(srso_untrain_ret)
-
-SYM_CODE_START(srso_return_thunk)
-	UNWIND_HINT_FUNC
-	ANNOTATE_NOENDBR
-	call srso_safe_ret
-	ud2
-SYM_CODE_END(srso_return_thunk)
+#if defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_SRSO)
 
 SYM_FUNC_START(entry_untrain_ret)
-	ALTERNATIVE_2 "jmp retbleed_untrain_ret", \
-		      "jmp srso_untrain_ret", X86_FEATURE_SRSO, \
-		      "jmp srso_alias_untrain_ret", X86_FEATURE_SRSO_ALIAS
+	ALTERNATIVE_2 JMP_RETBLEED_UNTRAIN_RET,				\
+		      JMP_SRSO_UNTRAIN_RET, X86_FEATURE_SRSO,		\
+		      JMP_SRSO_ALIAS_UNTRAIN_RET, X86_FEATURE_SRSO_ALIAS
 SYM_FUNC_END(entry_untrain_ret)
 __EXPORT_THUNK(entry_untrain_ret)
 
-SYM_CODE_START(__x86_return_thunk)
-	UNWIND_HINT_FUNC
-	ANNOTATE_NOENDBR
-	ANNOTATE_UNRET_SAFE
-	ret
-	int3
-SYM_CODE_END(__x86_return_thunk)
-EXPORT_SYMBOL(__x86_return_thunk)
-
-#endif /* CONFIG_RETHUNK */
+#endif /* CONFIG_CPU_UNRET_ENTRY || CONFIG_CPU_SRSO */
 
 #ifdef CONFIG_CALL_DEPTH_TRACKING
 
@@ -355,3 +353,22 @@ SYM_FUNC_START(__x86_return_skl)
 SYM_FUNC_END(__x86_return_skl)
 
 #endif /* CONFIG_CALL_DEPTH_TRACKING */
+
+/*
+ * This function name is magical and is used by -mfunction-return=thunk-extern
+ * for the compiler to generate JMPs to it.
+ *
+ * This code is only used during kernel boot or module init.  All
+ * 'JMP __x86_return_thunk' sites are changed to something else by
+ * apply_returns().
+ */
+SYM_CODE_START(__x86_return_thunk)
+	UNWIND_HINT_FUNC
+	ANNOTATE_NOENDBR
+	ANNOTATE_UNRET_SAFE
+	ret
+	int3
+SYM_CODE_END(__x86_return_thunk)
+EXPORT_SYMBOL(__x86_return_thunk)
+
+#endif /* CONFIG_RETHUNK */
-- 
2.41.0



* [PATCH 20/23] x86/rethunk: Use SYM_CODE_START[_LOCAL]_NOALIGN macros
  2023-08-25  7:01 [PATCH v2 00/23] SRSO fixes/cleanups Josh Poimboeuf
                   ` (18 preceding siblings ...)
  2023-08-25  7:01 ` [PATCH 19/23] x86/srso: Disentangle rethunk-dependent options Josh Poimboeuf
@ 2023-08-25  7:01 ` Josh Poimboeuf
  2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
  2023-08-25  7:01 ` [PATCH 21/23] x86/retpoline: Remove .text..__x86.return_thunk section Josh Poimboeuf
                   ` (4 subsequent siblings)
  24 siblings, 1 reply; 71+ messages in thread
From: Josh Poimboeuf @ 2023-08-25  7:01 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

Macros already exist for unaligned code block symbols.  Use them.

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 arch/x86/lib/retpoline.S | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
index 8ba79d2b8997..415521dbe15e 100644
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -149,7 +149,7 @@ SYM_CODE_END(__x86_indirect_jump_thunk_array)
  * As a result, srso_alias_safe_ret() becomes a safe return.
  */
 	.pushsection .text..__x86.rethunk_untrain
-SYM_START(srso_alias_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
+SYM_CODE_START_NOALIGN(srso_alias_untrain_ret)
 	UNWIND_HINT_FUNC
 	ANNOTATE_NOENDBR
 	ASM_NOP2
@@ -159,7 +159,7 @@ SYM_FUNC_END(srso_alias_untrain_ret)
 	.popsection
 
 	.pushsection .text..__x86.rethunk_safe
-SYM_START(srso_alias_safe_ret, SYM_L_GLOBAL, SYM_A_NONE)
+SYM_CODE_START_NOALIGN(srso_alias_safe_ret)
 	lea 8(%_ASM_SP), %_ASM_SP
 	UNWIND_HINT_FUNC
 	ANNOTATE_UNRET_SAFE
@@ -187,7 +187,7 @@ SYM_CODE_END(srso_alias_return_thunk)
  */
 	.align 64
 	.skip 64 - (srso_safe_ret - srso_untrain_ret), 0xcc
-SYM_START(srso_untrain_ret, SYM_L_LOCAL, SYM_A_NONE)
+SYM_CODE_START_LOCAL_NOALIGN(srso_untrain_ret)
 	ANNOTATE_NOENDBR
 	.byte 0x48, 0xb8
 
@@ -255,7 +255,7 @@ SYM_CODE_END(srso_return_thunk)
  */
 	.align 64
 	.skip 64 - (retbleed_return_thunk - retbleed_untrain_ret), 0xcc
-SYM_START(retbleed_untrain_ret, SYM_L_LOCAL, SYM_A_NONE)
+SYM_CODE_START_LOCAL_NOALIGN(retbleed_untrain_ret)
 	ANNOTATE_NOENDBR
 	/*
 	 * As executed from retbleed_untrain_ret, this is:
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 71+ messages in thread

* [PATCH 21/23] x86/retpoline: Remove .text..__x86.return_thunk section
  2023-08-25  7:01 [PATCH v2 00/23] SRSO fixes/cleanups Josh Poimboeuf
                   ` (19 preceding siblings ...)
  2023-08-25  7:01 ` [PATCH 20/23] x86/rethunk: Use SYM_CODE_START[_LOCAL]_NOALIGN macros Josh Poimboeuf
@ 2023-08-25  7:01 ` Josh Poimboeuf
  2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
  2023-08-25  7:01 ` [PATCH 22/23] x86/nospec: Refactor UNTRAIN_RET[_*] Josh Poimboeuf
                   ` (3 subsequent siblings)
  24 siblings, 1 reply; 71+ messages in thread
From: Josh Poimboeuf @ 2023-08-25  7:01 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

The '.text..__x86.return_thunk' section has no purpose.  Remove it and
let the return thunk code live in '.text..__x86.indirect_thunk'.

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 arch/x86/kernel/vmlinux.lds.S | 3 ---
 arch/x86/lib/retpoline.S      | 2 --
 2 files changed, 5 deletions(-)

diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index 9188834e56c9..f1c3516d356d 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -132,10 +132,7 @@ SECTIONS
 		LOCK_TEXT
 		KPROBES_TEXT
 		SOFTIRQENTRY_TEXT
-#ifdef CONFIG_RETPOLINE
 		*(.text..__x86.indirect_thunk)
-		*(.text..__x86.return_thunk)
-#endif
 		STATIC_CALL_TEXT
 
 		ALIGN_ENTRY_TEXT_BEGIN
diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
index 415521dbe15e..49f2be7c7b35 100644
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -129,8 +129,6 @@ SYM_CODE_END(__x86_indirect_jump_thunk_array)
 
 #ifdef CONFIG_RETHUNK
 
-	.section .text..__x86.return_thunk
-
 #ifdef CONFIG_CPU_SRSO
 
 /*
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 71+ messages in thread

* [PATCH 22/23] x86/nospec: Refactor UNTRAIN_RET[_*]
  2023-08-25  7:01 [PATCH v2 00/23] SRSO fixes/cleanups Josh Poimboeuf
                   ` (20 preceding siblings ...)
  2023-08-25  7:01 ` [PATCH 21/23] x86/retpoline: Remove .text..__x86.return_thunk section Josh Poimboeuf
@ 2023-08-25  7:01 ` Josh Poimboeuf
  2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
  2023-08-25 18:22   ` [PATCH 22/23] " Nikolay Borisov
  2023-08-25  7:01 ` [PATCH 23/23] x86/calldepth: Rename __x86_return_skl() to call_depth_return_thunk() Josh Poimboeuf
                   ` (2 subsequent siblings)
  24 siblings, 2 replies; 71+ messages in thread
From: Josh Poimboeuf @ 2023-08-25  7:01 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

Factor out the UNTRAIN_RET[_*] common bits into a helper macro.
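The idea can be illustrated with a rough user-space analogue (all names and strings below are hypothetical, not the kernel's actual expansion): three near-identical macro bodies collapse into one helper that takes the two varying operands as parameters.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Rough analogue of __UNTRAIN_RET: the common body is written once and
 * the two varying operands (IBPB feature flag, call-depth reset insns)
 * are passed in as parameters. Everything here is illustrative only. */
#define EMIT_UNTRAIN(buf, ibpb_feature, depth_insns) \
	snprintf(buf, sizeof(buf), "unret; ibpb:%s; depth:%s", \
		 ibpb_feature, depth_insns)

#define UNTRAIN_RET(buf)           EMIT_UNTRAIN(buf, "ENTRY_IBPB", "RESET_CALL_DEPTH")
#define UNTRAIN_RET_VM(buf)        EMIT_UNTRAIN(buf, "IBPB_ON_VMEXIT", "RESET_CALL_DEPTH")
#define UNTRAIN_RET_FROM_CALL(buf) EMIT_UNTRAIN(buf, "ENTRY_IBPB", "RESET_CALL_DEPTH_FROM_CALL")
```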

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 arch/x86/include/asm/nospec-branch.h | 31 +++++++++-------------------
 1 file changed, 10 insertions(+), 21 deletions(-)

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 51e3f1a287d2..dcc78477a38d 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -288,35 +288,24 @@
  * As such, this must be placed after every *SWITCH_TO_KERNEL_CR3 at a point
  * where we have a stack but before any RET instruction.
  */
-.macro UNTRAIN_RET
+.macro __UNTRAIN_RET ibpb_feature, call_depth_insns
 #if defined(CONFIG_RETHUNK) || defined(CONFIG_CPU_IBPB_ENTRY)
 	VALIDATE_UNRET_END
 	ALTERNATIVE_3 "",						\
 		      CALL_UNTRAIN_RET, X86_FEATURE_UNRET,		\
-		      "call entry_ibpb", X86_FEATURE_ENTRY_IBPB,	\
-		     __stringify(RESET_CALL_DEPTH), X86_FEATURE_CALL_DEPTH
+		      "call entry_ibpb", \ibpb_feature,			\
+		     __stringify(\call_depth_insns), X86_FEATURE_CALL_DEPTH
 #endif
 .endm
 
-.macro UNTRAIN_RET_VM
-#if defined(CONFIG_RETHUNK) || defined(CONFIG_CPU_IBPB_ENTRY)
-	VALIDATE_UNRET_END
-	ALTERNATIVE_3 "",						\
-		      CALL_UNTRAIN_RET, X86_FEATURE_UNRET,		\
-		      "call entry_ibpb", X86_FEATURE_IBPB_ON_VMEXIT,	\
-		      __stringify(RESET_CALL_DEPTH), X86_FEATURE_CALL_DEPTH
-#endif
-.endm
+#define UNTRAIN_RET \
+	__UNTRAIN_RET X86_FEATURE_ENTRY_IBPB, __stringify(RESET_CALL_DEPTH)
 
-.macro UNTRAIN_RET_FROM_CALL
-#if defined(CONFIG_RETHUNK) || defined(CONFIG_CPU_IBPB_ENTRY)
-	VALIDATE_UNRET_END
-	ALTERNATIVE_3 "",						\
-		      CALL_UNTRAIN_RET, X86_FEATURE_UNRET,		\
-		      "call entry_ibpb", X86_FEATURE_ENTRY_IBPB,	\
-		      __stringify(RESET_CALL_DEPTH_FROM_CALL), X86_FEATURE_CALL_DEPTH
-#endif
-.endm
+#define UNTRAIN_RET_VM \
+	__UNTRAIN_RET X86_FEATURE_IBPB_ON_VMEXIT, __stringify(RESET_CALL_DEPTH)
+
+#define UNTRAIN_RET_FROM_CALL \
+	__UNTRAIN_RET X86_FEATURE_ENTRY_IBPB, __stringify(RESET_CALL_DEPTH_FROM_CALL)
 
 
 .macro CALL_DEPTH_ACCOUNT
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 71+ messages in thread

* [PATCH 23/23] x86/calldepth: Rename __x86_return_skl() to call_depth_return_thunk()
  2023-08-25  7:01 [PATCH v2 00/23] SRSO fixes/cleanups Josh Poimboeuf
                   ` (21 preceding siblings ...)
  2023-08-25  7:01 ` [PATCH 22/23] x86/nospec: Refactor UNTRAIN_RET[_*] Josh Poimboeuf
@ 2023-08-25  7:01 ` Josh Poimboeuf
  2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
  2023-08-25 10:38 ` [PATCH v2 00/23] SRSO fixes/cleanups Ingo Molnar
  2023-10-05  1:29 ` Sean Christopherson
  24 siblings, 1 reply; 71+ messages in thread
From: Josh Poimboeuf @ 2023-08-25  7:01 UTC (permalink / raw)
  To: x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

For consistency with the other return thunks, rename __x86_return_skl()
to call_depth_return_thunk().
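Beyond the rename, the one-line setter helper goes away: x86_return_thunk is an ordinary function pointer, so callers can assign it directly. A minimal user-space sketch of the pattern (names here are illustrative, not the kernel's exact ones):

```c
#include <assert.h>

/* Sketch: a default return thunk, overridden at boot by plain
 * function-pointer assignment instead of a dedicated setter. */
static void default_return_thunk(void) {}
static void call_depth_return_thunk(void) {}

static void (*x86_return_thunk)(void) = default_return_thunk;

static void select_call_depth_mitigation(void)
{
	/* replaces the old x86_set_skl_return_thunk() wrapper */
	x86_return_thunk = call_depth_return_thunk;
}
```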

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
 arch/x86/include/asm/nospec-branch.h | 13 ++++---------
 arch/x86/kernel/cpu/bugs.c           |  3 ++-
 arch/x86/lib/retpoline.S             |  4 ++--
 3 files changed, 8 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index dcc78477a38d..14cd3cd5f85a 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -358,12 +358,7 @@ extern void entry_ibpb(void);
 extern void (*x86_return_thunk)(void);
 
 #ifdef CONFIG_CALL_DEPTH_TRACKING
-extern void __x86_return_skl(void);
-
-static inline void x86_set_skl_return_thunk(void)
-{
-	x86_return_thunk = &__x86_return_skl;
-}
+extern void call_depth_return_thunk(void);
 
 #define CALL_DEPTH_ACCOUNT					\
 	ALTERNATIVE("",						\
@@ -376,12 +371,12 @@ DECLARE_PER_CPU(u64, __x86_ret_count);
 DECLARE_PER_CPU(u64, __x86_stuffs_count);
 DECLARE_PER_CPU(u64, __x86_ctxsw_count);
 #endif
-#else
-static inline void x86_set_skl_return_thunk(void) {}
+#else /* !CONFIG_CALL_DEPTH_TRACKING */
 
+static inline void call_depth_return_thunk(void) {}
 #define CALL_DEPTH_ACCOUNT ""
 
-#endif
+#endif /* CONFIG_CALL_DEPTH_TRACKING */
 
 #ifdef CONFIG_RETPOLINE
 
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 0ebdaa734e33..d538043c776d 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1059,7 +1059,8 @@ static void __init retbleed_select_mitigation(void)
 	case RETBLEED_MITIGATION_STUFF:
 		setup_force_cpu_cap(X86_FEATURE_RETHUNK);
 		setup_force_cpu_cap(X86_FEATURE_CALL_DEPTH);
-		x86_set_skl_return_thunk();
+
+		x86_return_thunk = call_depth_return_thunk;
 		break;
 
 	default:
diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
index 49f2be7c7b35..6376d0164395 100644
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -321,7 +321,7 @@ __EXPORT_THUNK(entry_untrain_ret)
 #ifdef CONFIG_CALL_DEPTH_TRACKING
 
 	.align 64
-SYM_FUNC_START(__x86_return_skl)
+SYM_FUNC_START(call_depth_return_thunk)
 	ANNOTATE_NOENDBR
 	/*
 	 * Keep the hotpath in a 16byte I-fetch for the non-debug
@@ -348,7 +348,7 @@ SYM_FUNC_START(__x86_return_skl)
 	ANNOTATE_UNRET_SAFE
 	ret
 	int3
-SYM_FUNC_END(__x86_return_skl)
+SYM_FUNC_END(call_depth_return_thunk)
 
 #endif /* CONFIG_CALL_DEPTH_TRACKING */
 
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 71+ messages in thread

* Re: [PATCH 03/23] x86/srso: Don't probe microcode in a guest
  2023-08-25  7:01 ` [PATCH 03/23] x86/srso: Don't probe microcode in a guest Josh Poimboeuf
@ 2023-08-25  7:52   ` Andrew Cooper
  2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
  1 sibling, 0 replies; 71+ messages in thread
From: Andrew Cooper @ 2023-08-25  7:52 UTC (permalink / raw)
  To: Josh Poimboeuf, x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Nikolay Borisov,
	gregkh, Thomas Gleixner

On 25/08/2023 8:01 am, Josh Poimboeuf wrote:
> To support live migration, the hypervisor sets the "lowest common
> denominator" of features.  Probing the microcode isn't allowed because
> any detected features might go away after a migration.
>
> As Andy Cooper states:
>
>   "Linux must not probe microcode when virtualised.  What it may see
>   instantaneously on boot (owing to MSR_PRED_CMD being fully passed
>   through) is not accurate for the lifetime of the VM."
>
> Rely on the hypervisor to set the needed IBPB_BRTYPE and SBPB bits.
>
> Fixes: 1b5277c0ea0b ("x86/srso: Add SRSO_NO support")
> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

Thank you for doing this patch.
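For illustration, the rule in the quoted patch can be sketched in plain C (the helper and flag names below are hypothetical, not the kernel's):

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch: on bare metal the kernel may probe the microcode directly,
 * but in a guest it must trust only the CPUID bits the hypervisor
 * exposes, because probed microcode features can vanish across a
 * live migration. */
static bool srso_has_microcode(bool virtualized, bool cpuid_ibpb_brtype,
			       bool probed_ibpb_brtype)
{
	if (virtualized)
		return cpuid_ibpb_brtype;	/* never probe in a guest */

	return cpuid_ibpb_brtype || probed_ibpb_brtype;
}
```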

^ permalink raw reply	[flat|nested] 71+ messages in thread

* Re: [PATCH 12/23] x86/alternatives: Remove faulty optimization
  2023-08-25  7:01 ` [PATCH 12/23] x86/alternatives: Remove faulty optimization Josh Poimboeuf
@ 2023-08-25  9:20   ` Ingo Molnar
  2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
  2023-08-25 10:27   ` [tip: x86/urgent] " tip-bot2 for Josh Poimboeuf
  2 siblings, 0 replies; 71+ messages in thread
From: Ingo Molnar @ 2023-08-25  9:20 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: x86, linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner


* Josh Poimboeuf <jpoimboe@kernel.org> wrote:

> The following commit
> 
>   095b8303f383 ("x86/alternative: Make custom return thunk

End of line got chopped here, I extended it to:

    095b8303f383 ("x86/alternative: Make custom return thunk unconditional")

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 71+ messages in thread

* [tip: x86/bugs] x86/calldepth: Rename __x86_return_skl() to call_depth_return_thunk()
  2023-08-25  7:01 ` [PATCH 23/23] x86/calldepth: Rename __x86_return_skl() to call_depth_return_thunk() Josh Poimboeuf
@ 2023-08-25 10:19   ` tip-bot2 for Josh Poimboeuf
  0 siblings, 0 replies; 71+ messages in thread
From: tip-bot2 for Josh Poimboeuf @ 2023-08-25 10:19 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Josh Poimboeuf, Ingo Molnar, x86, linux-kernel

The following commit has been merged into the x86/bugs branch of tip:

Commit-ID:     8e6dc5f993a23b40dac1f26abab6a980913c1d24
Gitweb:        https://git.kernel.org/tip/8e6dc5f993a23b40dac1f26abab6a980913c1d24
Author:        Josh Poimboeuf <jpoimboe@kernel.org>
AuthorDate:    Fri, 25 Aug 2023 00:01:54 -07:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Fri, 25 Aug 2023 11:22:02 +02:00

x86/calldepth: Rename __x86_return_skl() to call_depth_return_thunk()

For consistency with the other return thunks, rename __x86_return_skl()
to call_depth_return_thunk().

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/c8a6f5e4e62300d30c829af28789a958e10277ba.1692919072.git.jpoimboe@kernel.org
---
 arch/x86/include/asm/nospec-branch.h | 13 ++++---------
 arch/x86/kernel/cpu/bugs.c           |  3 ++-
 arch/x86/lib/retpoline.S             |  4 ++--
 3 files changed, 8 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index dcc7847..14cd3cd 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -358,12 +358,7 @@ extern void entry_ibpb(void);
 extern void (*x86_return_thunk)(void);
 
 #ifdef CONFIG_CALL_DEPTH_TRACKING
-extern void __x86_return_skl(void);
-
-static inline void x86_set_skl_return_thunk(void)
-{
-	x86_return_thunk = &__x86_return_skl;
-}
+extern void call_depth_return_thunk(void);
 
 #define CALL_DEPTH_ACCOUNT					\
 	ALTERNATIVE("",						\
@@ -376,12 +371,12 @@ DECLARE_PER_CPU(u64, __x86_ret_count);
 DECLARE_PER_CPU(u64, __x86_stuffs_count);
 DECLARE_PER_CPU(u64, __x86_ctxsw_count);
 #endif
-#else
-static inline void x86_set_skl_return_thunk(void) {}
+#else /* !CONFIG_CALL_DEPTH_TRACKING */
 
+static inline void call_depth_return_thunk(void) {}
 #define CALL_DEPTH_ACCOUNT ""
 
-#endif
+#endif /* CONFIG_CALL_DEPTH_TRACKING */
 
 #ifdef CONFIG_RETPOLINE
 
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 0ebdaa7..d538043 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1059,7 +1059,8 @@ do_cmd_auto:
 	case RETBLEED_MITIGATION_STUFF:
 		setup_force_cpu_cap(X86_FEATURE_RETHUNK);
 		setup_force_cpu_cap(X86_FEATURE_CALL_DEPTH);
-		x86_set_skl_return_thunk();
+
+		x86_return_thunk = call_depth_return_thunk;
 		break;
 
 	default:
diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
index 49f2be7..6376d01 100644
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -321,7 +321,7 @@ __EXPORT_THUNK(entry_untrain_ret)
 #ifdef CONFIG_CALL_DEPTH_TRACKING
 
 	.align 64
-SYM_FUNC_START(__x86_return_skl)
+SYM_FUNC_START(call_depth_return_thunk)
 	ANNOTATE_NOENDBR
 	/*
 	 * Keep the hotpath in a 16byte I-fetch for the non-debug
@@ -348,7 +348,7 @@ SYM_FUNC_START(__x86_return_skl)
 	ANNOTATE_UNRET_SAFE
 	ret
 	int3
-SYM_FUNC_END(__x86_return_skl)
+SYM_FUNC_END(call_depth_return_thunk)
 
 #endif /* CONFIG_CALL_DEPTH_TRACKING */
 

^ permalink raw reply related	[flat|nested] 71+ messages in thread

* [tip: x86/bugs] x86/retpoline: Remove .text..__x86.return_thunk section
  2023-08-25  7:01 ` [PATCH 21/23] x86/retpoline: Remove .text..__x86.return_thunk section Josh Poimboeuf
@ 2023-08-25 10:19   ` tip-bot2 for Josh Poimboeuf
  0 siblings, 0 replies; 71+ messages in thread
From: tip-bot2 for Josh Poimboeuf @ 2023-08-25 10:19 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Josh Poimboeuf, Ingo Molnar, x86, linux-kernel

The following commit has been merged into the x86/bugs branch of tip:

Commit-ID:     dc184c7c1fe9670148fd74d9d5d7cf8894a65e64
Gitweb:        https://git.kernel.org/tip/dc184c7c1fe9670148fd74d9d5d7cf8894a65e64
Author:        Josh Poimboeuf <jpoimboe@kernel.org>
AuthorDate:    Fri, 25 Aug 2023 00:01:52 -07:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Fri, 25 Aug 2023 11:22:02 +02:00

x86/retpoline: Remove .text..__x86.return_thunk section

The '.text..__x86.return_thunk' section has no purpose.  Remove it and
let the return thunk code live in '.text..__x86.indirect_thunk'.

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/34947acf1c8a1be2d3ba9a4d0dd8a3001ae3c0db.1692919072.git.jpoimboe@kernel.org
---
 arch/x86/kernel/vmlinux.lds.S | 3 ---
 arch/x86/lib/retpoline.S      | 2 --
 2 files changed, 5 deletions(-)

diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index 9188834..f1c3516 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -132,10 +132,7 @@ SECTIONS
 		LOCK_TEXT
 		KPROBES_TEXT
 		SOFTIRQENTRY_TEXT
-#ifdef CONFIG_RETPOLINE
 		*(.text..__x86.indirect_thunk)
-		*(.text..__x86.return_thunk)
-#endif
 		STATIC_CALL_TEXT
 
 		ALIGN_ENTRY_TEXT_BEGIN
diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
index 415521d..49f2be7 100644
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -129,8 +129,6 @@ SYM_CODE_END(__x86_indirect_jump_thunk_array)
 
 #ifdef CONFIG_RETHUNK
 
-	.section .text..__x86.return_thunk
-
 #ifdef CONFIG_CPU_SRSO
 
 /*

^ permalink raw reply related	[flat|nested] 71+ messages in thread

* [tip: x86/bugs] x86/nospec: Refactor UNTRAIN_RET[_*]
  2023-08-25  7:01 ` [PATCH 22/23] x86/nospec: Refactor UNTRAIN_RET[_*] Josh Poimboeuf
@ 2023-08-25 10:19   ` tip-bot2 for Josh Poimboeuf
  2023-08-25 18:22   ` [PATCH 22/23] " Nikolay Borisov
  1 sibling, 0 replies; 71+ messages in thread
From: tip-bot2 for Josh Poimboeuf @ 2023-08-25 10:19 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Josh Poimboeuf, Ingo Molnar, x86, linux-kernel

The following commit has been merged into the x86/bugs branch of tip:

Commit-ID:     e94280f458d27cf0ef1afc29557e841f03ec7476
Gitweb:        https://git.kernel.org/tip/e94280f458d27cf0ef1afc29557e841f03ec7476
Author:        Josh Poimboeuf <jpoimboe@kernel.org>
AuthorDate:    Fri, 25 Aug 2023 00:01:53 -07:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Fri, 25 Aug 2023 11:22:02 +02:00

x86/nospec: Refactor UNTRAIN_RET[_*]

Factor out the UNTRAIN_RET[_*] common bits into a helper macro.

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/d9ad341e6ce84ccdbd3924615f4a47b3d7b19942.1692919072.git.jpoimboe@kernel.org
---
 arch/x86/include/asm/nospec-branch.h | 31 ++++++++-------------------
 1 file changed, 10 insertions(+), 21 deletions(-)

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 51e3f1a..dcc7847 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -288,35 +288,24 @@
  * As such, this must be placed after every *SWITCH_TO_KERNEL_CR3 at a point
  * where we have a stack but before any RET instruction.
  */
-.macro UNTRAIN_RET
+.macro __UNTRAIN_RET ibpb_feature, call_depth_insns
 #if defined(CONFIG_RETHUNK) || defined(CONFIG_CPU_IBPB_ENTRY)
 	VALIDATE_UNRET_END
 	ALTERNATIVE_3 "",						\
 		      CALL_UNTRAIN_RET, X86_FEATURE_UNRET,		\
-		      "call entry_ibpb", X86_FEATURE_ENTRY_IBPB,	\
-		     __stringify(RESET_CALL_DEPTH), X86_FEATURE_CALL_DEPTH
+		      "call entry_ibpb", \ibpb_feature,			\
+		     __stringify(\call_depth_insns), X86_FEATURE_CALL_DEPTH
 #endif
 .endm
 
-.macro UNTRAIN_RET_VM
-#if defined(CONFIG_RETHUNK) || defined(CONFIG_CPU_IBPB_ENTRY)
-	VALIDATE_UNRET_END
-	ALTERNATIVE_3 "",						\
-		      CALL_UNTRAIN_RET, X86_FEATURE_UNRET,		\
-		      "call entry_ibpb", X86_FEATURE_IBPB_ON_VMEXIT,	\
-		      __stringify(RESET_CALL_DEPTH), X86_FEATURE_CALL_DEPTH
-#endif
-.endm
+#define UNTRAIN_RET \
+	__UNTRAIN_RET X86_FEATURE_ENTRY_IBPB, __stringify(RESET_CALL_DEPTH)
 
-.macro UNTRAIN_RET_FROM_CALL
-#if defined(CONFIG_RETHUNK) || defined(CONFIG_CPU_IBPB_ENTRY)
-	VALIDATE_UNRET_END
-	ALTERNATIVE_3 "",						\
-		      CALL_UNTRAIN_RET, X86_FEATURE_UNRET,		\
-		      "call entry_ibpb", X86_FEATURE_ENTRY_IBPB,	\
-		      __stringify(RESET_CALL_DEPTH_FROM_CALL), X86_FEATURE_CALL_DEPTH
-#endif
-.endm
+#define UNTRAIN_RET_VM \
+	__UNTRAIN_RET X86_FEATURE_IBPB_ON_VMEXIT, __stringify(RESET_CALL_DEPTH)
+
+#define UNTRAIN_RET_FROM_CALL \
+	__UNTRAIN_RET X86_FEATURE_ENTRY_IBPB, __stringify(RESET_CALL_DEPTH_FROM_CALL)
 
 
 .macro CALL_DEPTH_ACCOUNT

^ permalink raw reply related	[flat|nested] 71+ messages in thread

* [tip: x86/bugs] x86/rethunk: Use SYM_CODE_START[_LOCAL]_NOALIGN macros
  2023-08-25  7:01 ` [PATCH 20/23] x86/rethunk: Use SYM_CODE_START[_LOCAL]_NOALIGN macros Josh Poimboeuf
@ 2023-08-25 10:19   ` tip-bot2 for Josh Poimboeuf
  0 siblings, 0 replies; 71+ messages in thread
From: tip-bot2 for Josh Poimboeuf @ 2023-08-25 10:19 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Josh Poimboeuf, Ingo Molnar, x86, linux-kernel

The following commit has been merged into the x86/bugs branch of tip:

Commit-ID:     03edd09ff88a4ecb6fd5b8c7999e62bf475b4e22
Gitweb:        https://git.kernel.org/tip/03edd09ff88a4ecb6fd5b8c7999e62bf475b4e22
Author:        Josh Poimboeuf <jpoimboe@kernel.org>
AuthorDate:    Fri, 25 Aug 2023 00:01:51 -07:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Fri, 25 Aug 2023 11:22:01 +02:00

x86/rethunk: Use SYM_CODE_START[_LOCAL]_NOALIGN macros

Macros already exist for unaligned code block symbols.  Use them.

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/1ae65b98ddc256ebc446768d9d0c461675dd0437.1692919072.git.jpoimboe@kernel.org
---
 arch/x86/lib/retpoline.S | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
index 8ba79d2..415521d 100644
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -149,7 +149,7 @@ SYM_CODE_END(__x86_indirect_jump_thunk_array)
  * As a result, srso_alias_safe_ret() becomes a safe return.
  */
 	.pushsection .text..__x86.rethunk_untrain
-SYM_START(srso_alias_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
+SYM_CODE_START_NOALIGN(srso_alias_untrain_ret)
 	UNWIND_HINT_FUNC
 	ANNOTATE_NOENDBR
 	ASM_NOP2
@@ -159,7 +159,7 @@ SYM_FUNC_END(srso_alias_untrain_ret)
 	.popsection
 
 	.pushsection .text..__x86.rethunk_safe
-SYM_START(srso_alias_safe_ret, SYM_L_GLOBAL, SYM_A_NONE)
+SYM_CODE_START_NOALIGN(srso_alias_safe_ret)
 	lea 8(%_ASM_SP), %_ASM_SP
 	UNWIND_HINT_FUNC
 	ANNOTATE_UNRET_SAFE
@@ -187,7 +187,7 @@ SYM_CODE_END(srso_alias_return_thunk)
  */
 	.align 64
 	.skip 64 - (srso_safe_ret - srso_untrain_ret), 0xcc
-SYM_START(srso_untrain_ret, SYM_L_LOCAL, SYM_A_NONE)
+SYM_CODE_START_LOCAL_NOALIGN(srso_untrain_ret)
 	ANNOTATE_NOENDBR
 	.byte 0x48, 0xb8
 
@@ -255,7 +255,7 @@ SYM_CODE_END(srso_return_thunk)
  */
 	.align 64
 	.skip 64 - (retbleed_return_thunk - retbleed_untrain_ret), 0xcc
-SYM_START(retbleed_untrain_ret, SYM_L_LOCAL, SYM_A_NONE)
+SYM_CODE_START_LOCAL_NOALIGN(retbleed_untrain_ret)
 	ANNOTATE_NOENDBR
 	/*
 	 * As executed from retbleed_untrain_ret, this is:

^ permalink raw reply related	[flat|nested] 71+ messages in thread

* [tip: x86/bugs] x86/srso: Disentangle rethunk-dependent options
  2023-08-25  7:01 ` [PATCH 19/23] x86/srso: Disentangle rethunk-dependent options Josh Poimboeuf
@ 2023-08-25 10:19   ` tip-bot2 for Josh Poimboeuf
  0 siblings, 0 replies; 71+ messages in thread
From: tip-bot2 for Josh Poimboeuf @ 2023-08-25 10:19 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Josh Poimboeuf, Ingo Molnar, x86, linux-kernel

The following commit has been merged into the x86/bugs branch of tip:

Commit-ID:     6f72cbba797955f87200941c72131c840f005c64
Gitweb:        https://git.kernel.org/tip/6f72cbba797955f87200941c72131c840f005c64
Author:        Josh Poimboeuf <jpoimboe@kernel.org>
AuthorDate:    Fri, 25 Aug 2023 00:01:50 -07:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Fri, 25 Aug 2023 11:22:01 +02:00

x86/srso: Disentangle rethunk-dependent options

CONFIG_RETHUNK, CONFIG_CPU_UNRET_ENTRY and CONFIG_CPU_SRSO are all
tangled up.  De-spaghettify the code a bit.

Some of the rethunk-related code has been shuffled around within the
'.text..__x86.return_thunk' section, but otherwise there are no
functional changes.  srso_alias_untrain_ret() and srso_alias_safe_ret()
(which are very address-sensitive) haven't moved.
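One piece of the disentangling: when a mitigation is compiled out, its untraining-sequence macro degrades to a trapping 'ud2' placeholder so the shared alternatives site in entry_untrain_ret() still assembles. The preprocessor pattern, reduced to a standalone C snippet (CONFIG_CPU_SRSO is deliberately left undefined here to exercise the fallback path):

```c
#include <assert.h>
#include <string.h>

/* Reduced form of the JMP_SRSO_*_UNTRAIN_RET pattern from the patch:
 * with CONFIG_CPU_SRSO undefined, both macros fall back to a trapping
 * placeholder instead of a jump to code that doesn't exist. */
#ifdef CONFIG_CPU_SRSO
#define JMP_SRSO_UNTRAIN_RET       "jmp srso_untrain_ret"
#define JMP_SRSO_ALIAS_UNTRAIN_RET "jmp srso_alias_untrain_ret"
#else /* !CONFIG_CPU_SRSO */
#define JMP_SRSO_UNTRAIN_RET       "ud2"
#define JMP_SRSO_ALIAS_UNTRAIN_RET "ud2"
#endif /* CONFIG_CPU_SRSO */
```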

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20377aee28715a70ab6ca4dd187460ca7f56ac86.1692919072.git.jpoimboe@kernel.org
---
 arch/x86/include/asm/nospec-branch.h |  25 ++--
 arch/x86/kernel/cpu/bugs.c           |   5 +-
 arch/x86/kernel/vmlinux.lds.S        |   7 +-
 arch/x86/lib/retpoline.S             | 157 ++++++++++++++------------
 4 files changed, 109 insertions(+), 85 deletions(-)

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 6c14fd1..51e3f1a 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -289,19 +289,17 @@
  * where we have a stack but before any RET instruction.
  */
 .macro UNTRAIN_RET
-#if defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_IBPB_ENTRY) || \
-	defined(CONFIG_CALL_DEPTH_TRACKING) || defined(CONFIG_CPU_SRSO)
+#if defined(CONFIG_RETHUNK) || defined(CONFIG_CPU_IBPB_ENTRY)
 	VALIDATE_UNRET_END
 	ALTERNATIVE_3 "",						\
 		      CALL_UNTRAIN_RET, X86_FEATURE_UNRET,		\
 		      "call entry_ibpb", X86_FEATURE_ENTRY_IBPB,	\
-		      __stringify(RESET_CALL_DEPTH), X86_FEATURE_CALL_DEPTH
+		     __stringify(RESET_CALL_DEPTH), X86_FEATURE_CALL_DEPTH
 #endif
 .endm
 
 .macro UNTRAIN_RET_VM
-#if defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_IBPB_ENTRY) || \
-	defined(CONFIG_CALL_DEPTH_TRACKING) || defined(CONFIG_CPU_SRSO)
+#if defined(CONFIG_RETHUNK) || defined(CONFIG_CPU_IBPB_ENTRY)
 	VALIDATE_UNRET_END
 	ALTERNATIVE_3 "",						\
 		      CALL_UNTRAIN_RET, X86_FEATURE_UNRET,		\
@@ -311,8 +309,7 @@
 .endm
 
 .macro UNTRAIN_RET_FROM_CALL
-#if defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_IBPB_ENTRY) || \
-	defined(CONFIG_CALL_DEPTH_TRACKING) || defined(CONFIG_CPU_SRSO)
+#if defined(CONFIG_RETHUNK) || defined(CONFIG_CPU_IBPB_ENTRY)
 	VALIDATE_UNRET_END
 	ALTERNATIVE_3 "",						\
 		      CALL_UNTRAIN_RET, X86_FEATURE_UNRET,		\
@@ -348,6 +345,20 @@ extern void __x86_return_thunk(void);
 static inline void __x86_return_thunk(void) {}
 #endif
 
+#ifdef CONFIG_CPU_UNRET_ENTRY
+extern void retbleed_return_thunk(void);
+#else
+static inline void retbleed_return_thunk(void) {}
+#endif
+
+#ifdef CONFIG_CPU_SRSO
+extern void srso_return_thunk(void);
+extern void srso_alias_return_thunk(void);
+#else
+static inline void srso_return_thunk(void) {}
+static inline void srso_alias_return_thunk(void) {}
+#endif
+
 extern void retbleed_return_thunk(void);
 extern void srso_return_thunk(void);
 extern void srso_alias_return_thunk(void);
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 563f09b..0ebdaa7 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -63,7 +63,7 @@ EXPORT_SYMBOL_GPL(x86_pred_cmd);
 
 static DEFINE_MUTEX(spec_ctrl_mutex);
 
-void (*x86_return_thunk)(void) __ro_after_init = &__x86_return_thunk;
+void (*x86_return_thunk)(void) __ro_after_init = __x86_return_thunk;
 
 /* Update SPEC_CTRL MSR and its cached copy unconditionally */
 static void update_spec_ctrl(u64 val)
@@ -1041,8 +1041,7 @@ do_cmd_auto:
 		setup_force_cpu_cap(X86_FEATURE_RETHUNK);
 		setup_force_cpu_cap(X86_FEATURE_UNRET);
 
-		if (IS_ENABLED(CONFIG_RETHUNK))
-			x86_return_thunk = retbleed_return_thunk;
+		x86_return_thunk = retbleed_return_thunk;
 
 		if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD &&
 		    boot_cpu_data.x86_vendor != X86_VENDOR_HYGON)
diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index 83d41c2..9188834 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -139,10 +139,7 @@ SECTIONS
 		STATIC_CALL_TEXT
 
 		ALIGN_ENTRY_TEXT_BEGIN
-#ifdef CONFIG_CPU_SRSO
 		*(.text..__x86.rethunk_untrain)
-#endif
-
 		ENTRY_TEXT
 
 #ifdef CONFIG_CPU_SRSO
@@ -520,12 +517,12 @@ INIT_PER_CPU(irq_stack_backing_store);
            "fixed_percpu_data is not at start of per-cpu area");
 #endif
 
-#ifdef CONFIG_RETHUNK
+#ifdef CONFIG_CPU_UNRET_ENTRY
 . = ASSERT((retbleed_return_thunk & 0x3f) == 0, "retbleed_return_thunk not cacheline-aligned");
-. = ASSERT((srso_safe_ret & 0x3f) == 0, "srso_safe_ret not cacheline-aligned");
 #endif
 
 #ifdef CONFIG_CPU_SRSO
+. = ASSERT((srso_safe_ret & 0x3f) == 0, "srso_safe_ret not cacheline-aligned");
 /*
  * GNU ld cannot do XOR until 2.41.
  * https://sourceware.org/git/?p=binutils-gdb.git;a=commit;h=f6f78318fca803c4907fb8d7f6ded8295f1947b1
diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
index a40ba18..8ba79d2 100644
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -126,12 +126,13 @@ SYM_CODE_END(__x86_indirect_jump_thunk_array)
 #include <asm/GEN-for-each-reg.h>
 #undef GEN
 #endif
-/*
- * This function name is magical and is used by -mfunction-return=thunk-extern
- * for the compiler to generate JMPs to it.
- */
+
 #ifdef CONFIG_RETHUNK
 
+	.section .text..__x86.return_thunk
+
+#ifdef CONFIG_CPU_SRSO
+
 /*
  * srso_alias_untrain_ret() and srso_alias_safe_ret() are placed at
  * special addresses:
@@ -147,9 +148,7 @@ SYM_CODE_END(__x86_indirect_jump_thunk_array)
  *
  * As a result, srso_alias_safe_ret() becomes a safe return.
  */
-#ifdef CONFIG_CPU_SRSO
-	.section .text..__x86.rethunk_untrain
-
+	.pushsection .text..__x86.rethunk_untrain
 SYM_START(srso_alias_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
 	UNWIND_HINT_FUNC
 	ANNOTATE_NOENDBR
@@ -157,17 +156,9 @@ SYM_START(srso_alias_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
 	lfence
 	jmp srso_alias_return_thunk
 SYM_FUNC_END(srso_alias_untrain_ret)
+	.popsection
 
-	.section .text..__x86.rethunk_safe
-#else
-/* dummy definition for alternatives */
-SYM_START(srso_alias_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
-	ANNOTATE_UNRET_SAFE
-	ret
-	int3
-SYM_FUNC_END(srso_alias_untrain_ret)
-#endif
-
+	.pushsection .text..__x86.rethunk_safe
 SYM_START(srso_alias_safe_ret, SYM_L_GLOBAL, SYM_A_NONE)
 	lea 8(%_ASM_SP), %_ASM_SP
 	UNWIND_HINT_FUNC
@@ -182,8 +173,58 @@ SYM_CODE_START_NOALIGN(srso_alias_return_thunk)
 	call srso_alias_safe_ret
 	ud2
 SYM_CODE_END(srso_alias_return_thunk)
+	.popsection
+
+/*
+ * SRSO untraining sequence for Zen1/2, similar to retbleed_untrain_ret()
+ * above. On kernel entry, srso_untrain_ret() is executed which is a
+ *
+ * movabs $0xccccc30824648d48,%rax
+ *
+ * and when the return thunk executes the inner label srso_safe_ret()
+ * later, it is a stack manipulation and a RET which is mispredicted and
+ * thus a "safe" one to use.
+ */
+	.align 64
+	.skip 64 - (srso_safe_ret - srso_untrain_ret), 0xcc
+SYM_START(srso_untrain_ret, SYM_L_LOCAL, SYM_A_NONE)
+	ANNOTATE_NOENDBR
+	.byte 0x48, 0xb8
+
+/*
+ * This forces the function return instruction to speculate into a trap
+ * (UD2 in srso_return_thunk() below).  This RET will then mispredict
+ * and execution will continue at the return site read from the top of
+ * the stack.
+ */
+SYM_INNER_LABEL(srso_safe_ret, SYM_L_GLOBAL)
+	lea 8(%_ASM_SP), %_ASM_SP
+	ret
+	int3
+	int3
+	/* end of movabs */
+	lfence
+	call srso_safe_ret
+	ud2
+SYM_CODE_END(srso_safe_ret)
+SYM_FUNC_END(srso_untrain_ret)
+
+SYM_CODE_START(srso_return_thunk)
+	UNWIND_HINT_FUNC
+	ANNOTATE_NOENDBR
+	call srso_safe_ret
+	ud2
+SYM_CODE_END(srso_return_thunk)
+
+#define JMP_SRSO_UNTRAIN_RET "jmp srso_untrain_ret"
+#define JMP_SRSO_ALIAS_UNTRAIN_RET "jmp srso_alias_untrain_ret"
+#else /* !CONFIG_CPU_SRSO */
+#define JMP_SRSO_UNTRAIN_RET "ud2"
+#define JMP_SRSO_ALIAS_UNTRAIN_RET "ud2"
+#endif /* CONFIG_CPU_SRSO */
+
+#ifdef CONFIG_CPU_UNRET_ENTRY
 
-	.section .text..__x86.return_thunk
 /*
  * Some generic notes on the untraining sequences:
  *
@@ -263,64 +304,21 @@ SYM_CODE_END(retbleed_return_thunk)
 	int3
 SYM_FUNC_END(retbleed_untrain_ret)
 
-/*
- * SRSO untraining sequence for Zen1/2, similar to retbleed_untrain_ret()
- * above. On kernel entry, srso_untrain_ret() is executed which is a
- *
- * movabs $0xccccc30824648d48,%rax
- *
- * and when the return thunk executes the inner label srso_safe_ret()
- * later, it is a stack manipulation and a RET which is mispredicted and
- * thus a "safe" one to use.
- */
-	.align 64
-	.skip 64 - (srso_safe_ret - srso_untrain_ret), 0xcc
-SYM_START(srso_untrain_ret, SYM_L_LOCAL, SYM_A_NONE)
-	ANNOTATE_NOENDBR
-	.byte 0x48, 0xb8
+#define JMP_RETBLEED_UNTRAIN_RET "jmp retbleed_untrain_ret"
+#else /* !CONFIG_CPU_UNRET_ENTRY */
+#define JMP_RETBLEED_UNTRAIN_RET "ud2"
+#endif /* CONFIG_CPU_UNRET_ENTRY */
 
-/*
- * This forces the function return instruction to speculate into a trap
- * (UD2 in srso_return_thunk() below).  This RET will then mispredict
- * and execution will continue at the return site read from the top of
- * the stack.
- */
-SYM_INNER_LABEL(srso_safe_ret, SYM_L_GLOBAL)
-	lea 8(%_ASM_SP), %_ASM_SP
-	ret
-	int3
-	int3
-	/* end of movabs */
-	lfence
-	call srso_safe_ret
-	ud2
-SYM_CODE_END(srso_safe_ret)
-SYM_FUNC_END(srso_untrain_ret)
-
-SYM_CODE_START(srso_return_thunk)
-	UNWIND_HINT_FUNC
-	ANNOTATE_NOENDBR
-	call srso_safe_ret
-	ud2
-SYM_CODE_END(srso_return_thunk)
+#if defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_SRSO)
 
 SYM_FUNC_START(entry_untrain_ret)
-	ALTERNATIVE_2 "jmp retbleed_untrain_ret", \
-		      "jmp srso_untrain_ret", X86_FEATURE_SRSO, \
-		      "jmp srso_alias_untrain_ret", X86_FEATURE_SRSO_ALIAS
+	ALTERNATIVE_2 JMP_RETBLEED_UNTRAIN_RET,				\
+		      JMP_SRSO_UNTRAIN_RET, X86_FEATURE_SRSO,		\
+		      JMP_SRSO_ALIAS_UNTRAIN_RET, X86_FEATURE_SRSO_ALIAS
 SYM_FUNC_END(entry_untrain_ret)
 __EXPORT_THUNK(entry_untrain_ret)
 
-SYM_CODE_START(__x86_return_thunk)
-	UNWIND_HINT_FUNC
-	ANNOTATE_NOENDBR
-	ANNOTATE_UNRET_SAFE
-	ret
-	int3
-SYM_CODE_END(__x86_return_thunk)
-EXPORT_SYMBOL(__x86_return_thunk)
-
-#endif /* CONFIG_RETHUNK */
+#endif /* CONFIG_CPU_UNRET_ENTRY || CONFIG_CPU_SRSO */
 
 #ifdef CONFIG_CALL_DEPTH_TRACKING
 
@@ -355,3 +353,22 @@ SYM_FUNC_START(__x86_return_skl)
 SYM_FUNC_END(__x86_return_skl)
 
 #endif /* CONFIG_CALL_DEPTH_TRACKING */
+
+/*
+ * This function name is magical and is used by -mfunction-return=thunk-extern
+ * for the compiler to generate JMPs to it.
+ *
+ * This code is only used during kernel boot or module init.  All
+ * 'JMP __x86_return_thunk' sites are changed to something else by
+ * apply_returns().
+ */
+SYM_CODE_START(__x86_return_thunk)
+	UNWIND_HINT_FUNC
+	ANNOTATE_NOENDBR
+	ANNOTATE_UNRET_SAFE
+	ret
+	int3
+SYM_CODE_END(__x86_return_thunk)
+EXPORT_SYMBOL(__x86_return_thunk)
+
+#endif /* CONFIG_RETHUNK */

^ permalink raw reply related	[flat|nested] 71+ messages in thread

* [tip: x86/bugs] x86/srso: Remove redundant X86_FEATURE_ENTRY_IBPB check
  2023-08-25  7:01 ` [PATCH 18/23] x86/srso: Remove redundant X86_FEATURE_ENTRY_IBPB check Josh Poimboeuf
@ 2023-08-25 10:19   ` tip-bot2 for Josh Poimboeuf
  2023-09-02  9:10   ` [PATCH 18/23] " Borislav Petkov
  1 sibling, 0 replies; 71+ messages in thread
From: tip-bot2 for Josh Poimboeuf @ 2023-08-25 10:19 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Josh Poimboeuf, Ingo Molnar, x86, linux-kernel

The following commit has been merged into the x86/bugs branch of tip:

Commit-ID:     071fcf6c28adcee293addd7258097949b8b54819
Gitweb:        https://git.kernel.org/tip/071fcf6c28adcee293addd7258097949b8b54819
Author:        Josh Poimboeuf <jpoimboe@kernel.org>
AuthorDate:    Fri, 25 Aug 2023 00:01:49 -07:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Fri, 25 Aug 2023 11:22:01 +02:00

x86/srso: Remove redundant X86_FEATURE_ENTRY_IBPB check

The X86_FEATURE_ENTRY_IBPB check is redundant here due to the above
RETBLEED_MITIGATION_IBPB check.  RETBLEED_MITIGATION_IBPB already
implies X86_FEATURE_ENTRY_IBPB.  So if we got here and 'has_microcode'
is true, it means X86_FEATURE_ENTRY_IBPB is not set.

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/9b671422643939792afe05c625e93ef40d9b57b5.1692919072.git.jpoimboe@kernel.org
---
 arch/x86/kernel/cpu/bugs.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index b086fd4..563f09b 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2494,7 +2494,7 @@ static void __init srso_select_mitigation(void)
 
 	case SRSO_CMD_IBPB_ON_VMEXIT:
 		if (IS_ENABLED(CONFIG_CPU_SRSO)) {
-			if (!boot_cpu_has(X86_FEATURE_ENTRY_IBPB) && has_microcode) {
+			if (has_microcode) {
 				setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT);
 				srso_mitigation = SRSO_MITIGATION_IBPB_ON_VMEXIT;
 			}

^ permalink raw reply related	[flat|nested] 71+ messages in thread

* [tip: x86/bugs] x86/srso: Move retbleed IBPB check into existing 'has_microcode' code block
  2023-08-25  7:01 ` [PATCH 17/23] x86/srso: Move retbleed IBPB check into existing 'has_microcode' code block Josh Poimboeuf
@ 2023-08-25 10:19   ` tip-bot2 for Josh Poimboeuf
  0 siblings, 0 replies; 71+ messages in thread
From: tip-bot2 for Josh Poimboeuf @ 2023-08-25 10:19 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Josh Poimboeuf, Ingo Molnar, x86, linux-kernel

The following commit has been merged into the x86/bugs branch of tip:

Commit-ID:     a542794756c50e0b339a751635cdbbd5893f2b4e
Gitweb:        https://git.kernel.org/tip/a542794756c50e0b339a751635cdbbd5893f2b4e
Author:        Josh Poimboeuf <jpoimboe@kernel.org>
AuthorDate:    Fri, 25 Aug 2023 00:01:48 -07:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Fri, 25 Aug 2023 11:22:01 +02:00

x86/srso: Move retbleed IBPB check into existing 'has_microcode' code block

Simplify the code flow a bit by moving the retbleed IBPB check into the
existing 'has_microcode' block.

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/c64b84b6df4e82423abe2441a1a088a8c7f1ae14.1692919072.git.jpoimboe@kernel.org
---
 arch/x86/kernel/cpu/bugs.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 0621615..b086fd4 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2430,10 +2430,8 @@ static void __init srso_select_mitigation(void)
 			setup_force_cpu_cap(X86_FEATURE_SRSO_NO);
 			return;
 		}
-	}
 
-	if (retbleed_mitigation == RETBLEED_MITIGATION_IBPB) {
-		if (has_microcode) {
+		if (retbleed_mitigation == RETBLEED_MITIGATION_IBPB) {
 			srso_mitigation = SRSO_MITIGATION_IBPB;
 			goto out;
 		}


* [tip: x86/bugs] x86/bugs: Remove default case for fully switched enums
  2023-08-25  7:01 ` [PATCH 16/23] x86/bugs: Remove default case for fully switched enums Josh Poimboeuf
@ 2023-08-25 10:19   ` tip-bot2 for Josh Poimboeuf
  2023-09-02  9:02   ` [PATCH 16/23] " Borislav Petkov
  1 sibling, 0 replies; 71+ messages in thread
From: tip-bot2 for Josh Poimboeuf @ 2023-08-25 10:19 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Josh Poimboeuf, Ingo Molnar, x86, linux-kernel

The following commit has been merged into the x86/bugs branch of tip:

Commit-ID:     e1894bd679d20539119ab3b1b61f44e8ac722ba8
Gitweb:        https://git.kernel.org/tip/e1894bd679d20539119ab3b1b61f44e8ac722ba8
Author:        Josh Poimboeuf <jpoimboe@kernel.org>
AuthorDate:    Fri, 25 Aug 2023 00:01:47 -07:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Fri, 25 Aug 2023 11:22:01 +02:00

x86/bugs: Remove default case for fully switched enums

For enum switch statements which handle all possible cases, remove the
default case so a compiler warning gets printed if one of the enums gets
accidentally omitted from the switch statement.

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/858e6f4ef71cd531e64db2903d8ac4763bec0af4.1692919072.git.jpoimboe@kernel.org
---
 arch/x86/kernel/cpu/bugs.c | 17 +++++++----------
 1 file changed, 7 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 3c7f634..0621615 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1019,7 +1019,6 @@ static void __init retbleed_select_mitigation(void)
 
 do_cmd_auto:
 	case RETBLEED_CMD_AUTO:
-	default:
 		if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
 		    boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) {
 			if (IS_ENABLED(CONFIG_CPU_UNRET_ENTRY))
@@ -1290,6 +1289,8 @@ spectre_v2_user_select_mitigation(void)
 
 		spectre_v2_user_ibpb = mode;
 		switch (cmd) {
+		case SPECTRE_V2_USER_CMD_NONE:
+			break;
 		case SPECTRE_V2_USER_CMD_FORCE:
 		case SPECTRE_V2_USER_CMD_PRCTL_IBPB:
 		case SPECTRE_V2_USER_CMD_SECCOMP_IBPB:
@@ -1301,8 +1302,6 @@ spectre_v2_user_select_mitigation(void)
 		case SPECTRE_V2_USER_CMD_SECCOMP:
 			static_branch_enable(&switch_mm_cond_ibpb);
 			break;
-		default:
-			break;
 		}
 
 		pr_info("mitigation: Enabling %s Indirect Branch Prediction Barrier\n",
@@ -2160,6 +2159,10 @@ static int l1d_flush_prctl_get(struct task_struct *task)
 static int ssb_prctl_get(struct task_struct *task)
 {
 	switch (ssb_mode) {
+	case SPEC_STORE_BYPASS_NONE:
+		if (boot_cpu_has_bug(X86_BUG_SPEC_STORE_BYPASS))
+			return PR_SPEC_ENABLE;
+		return PR_SPEC_NOT_AFFECTED;
 	case SPEC_STORE_BYPASS_DISABLE:
 		return PR_SPEC_DISABLE;
 	case SPEC_STORE_BYPASS_SECCOMP:
@@ -2171,11 +2174,8 @@ static int ssb_prctl_get(struct task_struct *task)
 		if (task_spec_ssb_disable(task))
 			return PR_SPEC_PRCTL | PR_SPEC_DISABLE;
 		return PR_SPEC_PRCTL | PR_SPEC_ENABLE;
-	default:
-		if (boot_cpu_has_bug(X86_BUG_SPEC_STORE_BYPASS))
-			return PR_SPEC_ENABLE;
-		return PR_SPEC_NOT_AFFECTED;
 	}
+	BUG();
 }
 
 static int ib_prctl_get(struct task_struct *task)
@@ -2504,9 +2504,6 @@ static void __init srso_select_mitigation(void)
 			pr_err("WARNING: kernel not compiled with CPU_SRSO.\n");
                 }
 		break;
-
-	default:
-		break;
 	}
 
 out:

^ permalink raw reply related	[flat|nested] 71+ messages in thread

* [tip: x86/bugs] x86/srso: Remove 'pred_cmd' label
  2023-08-25  7:01 ` [PATCH 15/23] x86/srso: Remove 'pred_cmd' label Josh Poimboeuf
@ 2023-08-25 10:19   ` tip-bot2 for Josh Poimboeuf
  2023-08-25 19:51   ` [PATCH 15/23] " Nikolay Borisov
  1 sibling, 0 replies; 71+ messages in thread
From: tip-bot2 for Josh Poimboeuf @ 2023-08-25 10:19 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Josh Poimboeuf, Ingo Molnar, x86, linux-kernel

The following commit has been merged into the x86/bugs branch of tip:

Commit-ID:     2685c96f0cd51e56a1bad4d08d41eddf8f0f5890
Gitweb:        https://git.kernel.org/tip/2685c96f0cd51e56a1bad4d08d41eddf8f0f5890
Author:        Josh Poimboeuf <jpoimboe@kernel.org>
AuthorDate:    Fri, 25 Aug 2023 00:01:46 -07:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Fri, 25 Aug 2023 11:22:00 +02:00

x86/srso: Remove 'pred_cmd' label

SBPB is only enabled in two distinct cases:

  1) when SRSO has been disabled with srso=off

  2) when SRSO has been fixed (in future HW)

Simplify the control flow by getting rid of the 'pred_cmd' label and
moving the SBPB enablement check to the two corresponding code sites.
This makes it more clear when exactly SBPB gets enabled.

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/ec18b04787fc21874303f29746a49847751eddd6.1692919072.git.jpoimboe@kernel.org
---
 arch/x86/kernel/cpu/bugs.c | 21 +++++++++++++--------
 1 file changed, 13 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index d883d1c..3c7f634 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2410,13 +2410,21 @@ static void __init srso_select_mitigation(void)
 {
 	bool has_microcode = boot_cpu_has(X86_FEATURE_IBPB_BRTYPE);
 
-	if (!boot_cpu_has_bug(X86_BUG_SRSO) || cpu_mitigations_off())
-		goto pred_cmd;
+	if (cpu_mitigations_off())
+		return;
+
+	if (!boot_cpu_has_bug(X86_BUG_SRSO)) {
+		if (boot_cpu_has(X86_FEATURE_SBPB))
+			x86_pred_cmd = PRED_CMD_SBPB;
+		return;
+	}
 
 	if (has_microcode) {
 		/*
 		 * Zen1/2 with SMT off aren't vulnerable after the right
 		 * IBPB microcode has been applied.
+		 *
+		 * Zen1/2 don't have SBPB, no need to try to enable it here.
 		 */
 		if (boot_cpu_data.x86 < 0x19 && !cpu_smt_possible()) {
 			setup_force_cpu_cap(X86_FEATURE_SRSO_NO);
@@ -2439,7 +2447,9 @@ static void __init srso_select_mitigation(void)
 
 	switch (srso_cmd) {
 	case SRSO_CMD_OFF:
-		goto pred_cmd;
+		if (boot_cpu_has(X86_FEATURE_SBPB))
+			x86_pred_cmd = PRED_CMD_SBPB;
+		return;
 
 	case SRSO_CMD_MICROCODE:
 		if (has_microcode) {
@@ -2501,11 +2511,6 @@ static void __init srso_select_mitigation(void)
 
 out:
 	pr_info("%s%s\n", srso_strings[srso_mitigation], has_microcode ? "" : ", no microcode");
-
-pred_cmd:
-	if ((!boot_cpu_has_bug(X86_BUG_SRSO) || srso_cmd == SRSO_CMD_OFF) &&
-	     boot_cpu_has(X86_FEATURE_SBPB))
-		x86_pred_cmd = PRED_CMD_SBPB;
 }
 
 #undef pr_fmt

^ permalink raw reply related	[flat|nested] 71+ messages in thread

* [tip: x86/bugs] x86/srso: Unexport untraining functions
  2023-08-25  7:01 ` [PATCH 14/23] x86/srso: Unexport untraining functions Josh Poimboeuf
@ 2023-08-25 10:19   ` tip-bot2 for Josh Poimboeuf
  0 siblings, 0 replies; 71+ messages in thread
From: tip-bot2 for Josh Poimboeuf @ 2023-08-25 10:19 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Josh Poimboeuf, Ingo Molnar, x86, linux-kernel

The following commit has been merged into the x86/bugs branch of tip:

Commit-ID:     c55f644e022eb9ddc199bcca6d945d820bce4c06
Gitweb:        https://git.kernel.org/tip/c55f644e022eb9ddc199bcca6d945d820bce4c06
Author:        Josh Poimboeuf <jpoimboe@kernel.org>
AuthorDate:    Fri, 25 Aug 2023 00:01:45 -07:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Fri, 25 Aug 2023 11:22:00 +02:00

x86/srso: Unexport untraining functions

These functions aren't called outside of retpoline.S.

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/94caf421d80924666be921e387851665054ba9b7.1692919072.git.jpoimboe@kernel.org
---
 arch/x86/include/asm/nospec-branch.h | 4 ----
 arch/x86/lib/retpoline.S             | 7 ++-----
 2 files changed, 2 insertions(+), 9 deletions(-)

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 197ff4f..6c14fd1 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -352,10 +352,6 @@ extern void retbleed_return_thunk(void);
 extern void srso_return_thunk(void);
 extern void srso_alias_return_thunk(void);
 
-extern void retbleed_untrain_ret(void);
-extern void srso_untrain_ret(void);
-extern void srso_alias_untrain_ret(void);
-
 extern void entry_untrain_ret(void);
 extern void entry_ibpb(void);
 
diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
index 9ab634f..a40ba18 100644
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -157,7 +157,6 @@ SYM_START(srso_alias_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
 	lfence
 	jmp srso_alias_return_thunk
 SYM_FUNC_END(srso_alias_untrain_ret)
-__EXPORT_THUNK(srso_alias_untrain_ret)
 
 	.section .text..__x86.rethunk_safe
 #else
@@ -215,7 +214,7 @@ SYM_CODE_END(srso_alias_return_thunk)
  */
 	.align 64
 	.skip 64 - (retbleed_return_thunk - retbleed_untrain_ret), 0xcc
-SYM_START(retbleed_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
+SYM_START(retbleed_untrain_ret, SYM_L_LOCAL, SYM_A_NONE)
 	ANNOTATE_NOENDBR
 	/*
 	 * As executed from retbleed_untrain_ret, this is:
@@ -263,7 +262,6 @@ SYM_CODE_END(retbleed_return_thunk)
 	jmp retbleed_return_thunk
 	int3
 SYM_FUNC_END(retbleed_untrain_ret)
-__EXPORT_THUNK(retbleed_untrain_ret)
 
 /*
  * SRSO untraining sequence for Zen1/2, similar to retbleed_untrain_ret()
@@ -277,7 +275,7 @@ __EXPORT_THUNK(retbleed_untrain_ret)
  */
 	.align 64
 	.skip 64 - (srso_safe_ret - srso_untrain_ret), 0xcc
-SYM_START(srso_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
+SYM_START(srso_untrain_ret, SYM_L_LOCAL, SYM_A_NONE)
 	ANNOTATE_NOENDBR
 	.byte 0x48, 0xb8
 
@@ -298,7 +296,6 @@ SYM_INNER_LABEL(srso_safe_ret, SYM_L_GLOBAL)
 	ud2
 SYM_CODE_END(srso_safe_ret)
 SYM_FUNC_END(srso_untrain_ret)
-__EXPORT_THUNK(srso_untrain_ret)
 
 SYM_CODE_START(srso_return_thunk)
 	UNWIND_HINT_FUNC


* [tip: x86/bugs] x86/alternatives: Remove faulty optimization
  2023-08-25  7:01 ` [PATCH 12/23] x86/alternatives: Remove faulty optimization Josh Poimboeuf
  2023-08-25  9:20   ` Ingo Molnar
@ 2023-08-25 10:19   ` tip-bot2 for Josh Poimboeuf
  2023-08-25 10:27   ` [tip: x86/urgent] " tip-bot2 for Josh Poimboeuf
  2 siblings, 0 replies; 71+ messages in thread
From: tip-bot2 for Josh Poimboeuf @ 2023-08-25 10:19 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Josh Poimboeuf, Ingo Molnar, x86, linux-kernel

The following commit has been merged into the x86/bugs branch of tip:

Commit-ID:     4f643529501794ef9baabfe65612da8a2a8eff5b
Gitweb:        https://git.kernel.org/tip/4f643529501794ef9baabfe65612da8a2a8eff5b
Author:        Josh Poimboeuf <jpoimboe@kernel.org>
AuthorDate:    Fri, 25 Aug 2023 00:01:43 -07:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Fri, 25 Aug 2023 11:22:00 +02:00

x86/alternatives: Remove faulty optimization

The following commit:

  095b8303f383 ("x86/alternative: Make custom return thunk unconditional")

made '__x86_return_thunk' a placeholder value.  All code setting
X86_FEATURE_RETHUNK also changes the value of 'x86_return_thunk'.  So
the optimization at the beginning of apply_returns() is dead code.

Also, before the above-mentioned commit, the optimization actually had a
bug: it bypassed __static_call_fixup(), causing some raw returns to
remain unpatched in static call trampolines.  Thus the 'Fixes' tag.

Fixes: d2408e043e72 ("x86/alternative: Optimize returns patching")
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/ca76a2e94217d6fc8e007d2ca79fee219f3168f8.1692919072.git.jpoimboe@kernel.org
---
 arch/x86/kernel/alternative.c | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 099d58d..34be5fb 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -720,14 +720,6 @@ void __init_or_module noinline apply_returns(s32 *start, s32 *end)
 {
 	s32 *s;
 
-	/*
-	 * Do not patch out the default return thunks if those needed are the
-	 * ones generated by the compiler.
-	 */
-	if (cpu_feature_enabled(X86_FEATURE_RETHUNK) &&
-	    (x86_return_thunk == __x86_return_thunk))
-		return;
-
 	for (s = start; s < end; s++) {
 		void *dest = NULL, *addr = (void *)s + *s;
 		struct insn insn;


* [tip: x86/bugs] x86/srso: Improve i-cache locality for alias mitigation
  2023-08-25  7:01 ` [PATCH 13/23] x86/srso: Improve i-cache locality for alias mitigation Josh Poimboeuf
@ 2023-08-25 10:19   ` tip-bot2 for Josh Poimboeuf
  0 siblings, 0 replies; 71+ messages in thread
From: tip-bot2 for Josh Poimboeuf @ 2023-08-25 10:19 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Josh Poimboeuf, Ingo Molnar, x86, linux-kernel

The following commit has been merged into the x86/bugs branch of tip:

Commit-ID:     3033c66da13ca363ad2f10dcd858c994e2f4bd00
Gitweb:        https://git.kernel.org/tip/3033c66da13ca363ad2f10dcd858c994e2f4bd00
Author:        Josh Poimboeuf <jpoimboe@kernel.org>
AuthorDate:    Fri, 25 Aug 2023 00:01:44 -07:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Fri, 25 Aug 2023 11:22:00 +02:00

x86/srso: Improve i-cache locality for alias mitigation

Move srso_alias_return_thunk() to the same section as
srso_alias_safe_ret() so they can share a cache line.

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/4f975cf178ab641d3720362f244694408d85ecca.1692919072.git.jpoimboe@kernel.org
---
 arch/x86/lib/retpoline.S | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
index cd86aeb..9ab634f 100644
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -177,15 +177,14 @@ SYM_START(srso_alias_safe_ret, SYM_L_GLOBAL, SYM_A_NONE)
 	int3
 SYM_FUNC_END(srso_alias_safe_ret)
 
-	.section .text..__x86.return_thunk
-
-SYM_CODE_START(srso_alias_return_thunk)
+SYM_CODE_START_NOALIGN(srso_alias_return_thunk)
 	UNWIND_HINT_FUNC
 	ANNOTATE_NOENDBR
 	call srso_alias_safe_ret
 	ud2
 SYM_CODE_END(srso_alias_return_thunk)
 
+	.section .text..__x86.return_thunk
 /*
  * Some generic notes on the untraining sequences:
  *


* [tip: x86/bugs] x86/srso: Print mitigation for retbleed IBPB case
  2023-08-25  7:01 ` [PATCH 09/23] x86/srso: Print mitigation for retbleed IBPB case Josh Poimboeuf
@ 2023-08-25 10:19   ` tip-bot2 for Josh Poimboeuf
  0 siblings, 0 replies; 71+ messages in thread
From: tip-bot2 for Josh Poimboeuf @ 2023-08-25 10:19 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Josh Poimboeuf, Ingo Molnar, x86, linux-kernel

The following commit has been merged into the x86/bugs branch of tip:

Commit-ID:     ff32aa3d112156877c2ebc931099cda1c973fa6a
Gitweb:        https://git.kernel.org/tip/ff32aa3d112156877c2ebc931099cda1c973fa6a
Author:        Josh Poimboeuf <jpoimboe@kernel.org>
AuthorDate:    Fri, 25 Aug 2023 00:01:40 -07:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Fri, 25 Aug 2023 11:21:59 +02:00

x86/srso: Print mitigation for retbleed IBPB case

When overriding the requested mitigation with IBPB due to retbleed=ibpb,
print the mitigation in the usual format instead of a custom error
message.

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/1f2f2257abc4047c707d0b8dbbd8e796730597f6.1692919072.git.jpoimboe@kernel.org
---
 arch/x86/kernel/cpu/bugs.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 235c0e0..6c47f37 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2425,9 +2425,8 @@ static void __init srso_select_mitigation(void)
 
 	if (retbleed_mitigation == RETBLEED_MITIGATION_IBPB) {
 		if (has_microcode) {
-			pr_err("Retbleed IBPB mitigation enabled, using same for SRSO\n");
 			srso_mitigation = SRSO_MITIGATION_IBPB;
-			goto pred_cmd;
+			goto out;
 		}
 	}
 
@@ -2490,7 +2489,8 @@ static void __init srso_select_mitigation(void)
 		break;
 	}
 
-	pr_info("%s%s\n", srso_strings[srso_mitigation], (has_microcode ? "" : ", no microcode"));
+out:
+	pr_info("%s%s\n", srso_strings[srso_mitigation], has_microcode ? "" : ", no microcode");
 
 pred_cmd:
 	if ((!boot_cpu_has_bug(X86_BUG_SRSO) || srso_cmd == SRSO_CMD_OFF) &&


* [tip: x86/bugs] x86/srso: Fix unret validation dependencies
  2023-08-25  7:01 ` [PATCH 11/23] x86/srso: Fix unret validation dependencies Josh Poimboeuf
@ 2023-08-25 10:19   ` tip-bot2 for Josh Poimboeuf
  0 siblings, 0 replies; 71+ messages in thread
From: tip-bot2 for Josh Poimboeuf @ 2023-08-25 10:19 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Josh Poimboeuf, Ingo Molnar, x86, linux-kernel

The following commit has been merged into the x86/bugs branch of tip:

Commit-ID:     a31dddd6a76dcfb65940d1c3252df771f850e547
Gitweb:        https://git.kernel.org/tip/a31dddd6a76dcfb65940d1c3252df771f850e547
Author:        Josh Poimboeuf <jpoimboe@kernel.org>
AuthorDate:    Fri, 25 Aug 2023 00:01:42 -07:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Fri, 25 Aug 2023 11:21:59 +02:00

x86/srso: Fix unret validation dependencies

CONFIG_CPU_SRSO isn't dependent on CONFIG_CPU_UNRET_ENTRY (AMD
Retbleed), so the two features are independently configurable.  Fix
several issues for the (presumably rare) case where CONFIG_CPU_SRSO is
enabled but CONFIG_CPU_UNRET_ENTRY isn't.

Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/6d3818c914d04684ec9a01397b0ef229c93d5fdf.1692919072.git.jpoimboe@kernel.org
---
 arch/x86/include/asm/nospec-branch.h | 4 ++--
 include/linux/objtool.h              | 3 ++-
 scripts/Makefile.vmlinux_o           | 3 ++-
 3 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index c55cc24..197ff4f 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -271,7 +271,7 @@
 .Lskip_rsb_\@:
 .endm
 
-#ifdef CONFIG_CPU_UNRET_ENTRY
+#if defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_SRSO)
 #define CALL_UNTRAIN_RET	"call entry_untrain_ret"
 #else
 #define CALL_UNTRAIN_RET	""
@@ -312,7 +312,7 @@
 
 .macro UNTRAIN_RET_FROM_CALL
 #if defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_IBPB_ENTRY) || \
-	defined(CONFIG_CALL_DEPTH_TRACKING)
+	defined(CONFIG_CALL_DEPTH_TRACKING) || defined(CONFIG_CPU_SRSO)
 	VALIDATE_UNRET_END
 	ALTERNATIVE_3 "",						\
 		      CALL_UNTRAIN_RET, X86_FEATURE_UNRET,		\
diff --git a/include/linux/objtool.h b/include/linux/objtool.h
index 03f82c2..b5440e7 100644
--- a/include/linux/objtool.h
+++ b/include/linux/objtool.h
@@ -130,7 +130,8 @@
  * it will be ignored.
  */
 .macro VALIDATE_UNRET_BEGIN
-#if defined(CONFIG_NOINSTR_VALIDATION) && defined(CONFIG_CPU_UNRET_ENTRY)
+#if defined(CONFIG_NOINSTR_VALIDATION) && \
+	(defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_SRSO))
 .Lhere_\@:
 	.pushsection .discard.validate_unret
 	.long	.Lhere_\@ - .
diff --git a/scripts/Makefile.vmlinux_o b/scripts/Makefile.vmlinux_o
index 0edfdb4..25b3b58 100644
--- a/scripts/Makefile.vmlinux_o
+++ b/scripts/Makefile.vmlinux_o
@@ -37,7 +37,8 @@ objtool-enabled := $(or $(delay-objtool),$(CONFIG_NOINSTR_VALIDATION))
 
 vmlinux-objtool-args-$(delay-objtool)			+= $(objtool-args-y)
 vmlinux-objtool-args-$(CONFIG_GCOV_KERNEL)		+= --no-unreachable
-vmlinux-objtool-args-$(CONFIG_NOINSTR_VALIDATION)	+= --noinstr $(if $(CONFIG_CPU_UNRET_ENTRY), --unret)
+vmlinux-objtool-args-$(CONFIG_NOINSTR_VALIDATION)	+= --noinstr \
+							   $(if $(or $(CONFIG_CPU_UNRET_ENTRY),$(CONFIG_CPU_SRSO)), --unret)
 
 objtool-args = $(vmlinux-objtool-args-y) --link
 


* [tip: x86/bugs] x86/srso: Fix vulnerability reporting for missing microcode
  2023-08-25  7:01 ` [PATCH 10/23] x86/srso: Fix vulnerability reporting for missing microcode Josh Poimboeuf
@ 2023-08-25 10:19   ` tip-bot2 for Josh Poimboeuf
  2023-09-01  9:40     ` Borislav Petkov
  0 siblings, 1 reply; 71+ messages in thread
From: tip-bot2 for Josh Poimboeuf @ 2023-08-25 10:19 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Josh Poimboeuf, Ingo Molnar, x86, linux-kernel

The following commit has been merged into the x86/bugs branch of tip:

Commit-ID:     b3be1397be0340b2c30b2dcd7339dbfaa5563e2b
Gitweb:        https://git.kernel.org/tip/b3be1397be0340b2c30b2dcd7339dbfaa5563e2b
Author:        Josh Poimboeuf <jpoimboe@kernel.org>
AuthorDate:    Fri, 25 Aug 2023 00:01:41 -07:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Fri, 25 Aug 2023 11:21:59 +02:00

x86/srso: Fix vulnerability reporting for missing microcode

The SRSO default safe-ret mitigation is reported as "mitigated" even if
microcode hasn't been updated.  That's wrong because userspace may still
be vulnerable to SRSO attacks due to IBPB not flushing branch type
predictions.

Report the safe-ret + !microcode case as vulnerable.

Also report the microcode-only case as vulnerable as it leaves the
kernel open to attacks.

Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/65556eeb1bf7cb9bd7db8662ef115dd73191db84.1692919072.git.jpoimboe@kernel.org
---
 Documentation/admin-guide/hw-vuln/srso.rst | 22 ++++++++++----
 arch/x86/kernel/cpu/bugs.c                 | 34 ++++++++++++---------
 2 files changed, 37 insertions(+), 19 deletions(-)

diff --git a/Documentation/admin-guide/hw-vuln/srso.rst b/Documentation/admin-guide/hw-vuln/srso.rst
index b6cfb51..4516719 100644
--- a/Documentation/admin-guide/hw-vuln/srso.rst
+++ b/Documentation/admin-guide/hw-vuln/srso.rst
@@ -46,12 +46,22 @@ The possible values in this file are:
 
    The processor is not vulnerable
 
- * 'Vulnerable: no microcode':
+* 'Vulnerable':
+
+   The processor is vulnerable and no mitigations have been applied.
+
+ * 'Vulnerable: No microcode':
 
    The processor is vulnerable, no microcode extending IBPB
    functionality to address the vulnerability has been applied.
 
- * 'Mitigation: microcode':
+ * 'Vulnerable: Safe RET, no microcode':
+
+   The "Safe Ret" mitigation (see below) has been applied to protect the
+   kernel, but the IBPB-extending microcode has not been applied.  User
+   space tasks may still be vulnerable.
+
+ * 'Vulnerable: Microcode, no safe RET':
 
    Extended IBPB functionality microcode patch has been applied. It does
    not address User->Kernel and Guest->Host transitions protection but it
@@ -72,11 +82,11 @@ The possible values in this file are:
 
    (spec_rstack_overflow=microcode)
 
- * 'Mitigation: safe RET':
+ * 'Mitigation: Safe RET':
 
-   Software-only mitigation. It complements the extended IBPB microcode
-   patch functionality by addressing User->Kernel and Guest->Host
-   transitions protection.
+   Combined microcode/software mitigation. It complements the
+   extended IBPB microcode patch functionality by addressing
+   User->Kernel and Guest->Host transitions protection.
 
    Selected by default or by spec_rstack_overflow=safe-ret
 
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 6c47f37..d883d1c 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2353,6 +2353,8 @@ early_param("l1tf", l1tf_cmdline);
 
 enum srso_mitigation {
 	SRSO_MITIGATION_NONE,
+	SRSO_MITIGATION_UCODE_NEEDED,
+	SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED,
 	SRSO_MITIGATION_MICROCODE,
 	SRSO_MITIGATION_SAFE_RET,
 	SRSO_MITIGATION_IBPB,
@@ -2368,11 +2370,13 @@ enum srso_mitigation_cmd {
 };
 
 static const char * const srso_strings[] = {
-	[SRSO_MITIGATION_NONE]           = "Vulnerable",
-	[SRSO_MITIGATION_MICROCODE]      = "Mitigation: microcode",
-	[SRSO_MITIGATION_SAFE_RET]	 = "Mitigation: safe RET",
-	[SRSO_MITIGATION_IBPB]		 = "Mitigation: IBPB",
-	[SRSO_MITIGATION_IBPB_ON_VMEXIT] = "Mitigation: IBPB on VMEXIT only"
+	[SRSO_MITIGATION_NONE]			= "Vulnerable",
+	[SRSO_MITIGATION_UCODE_NEEDED]		= "Vulnerable: No microcode",
+	[SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED]	= "Vulnerable: Safe RET, no microcode",
+	[SRSO_MITIGATION_MICROCODE]		= "Vulnerable: Microcode, no safe RET",
+	[SRSO_MITIGATION_SAFE_RET]		= "Mitigation: Safe RET",
+	[SRSO_MITIGATION_IBPB]			= "Mitigation: IBPB",
+	[SRSO_MITIGATION_IBPB_ON_VMEXIT]	= "Mitigation: IBPB on VMEXIT only"
 };
 
 static enum srso_mitigation srso_mitigation __ro_after_init = SRSO_MITIGATION_NONE;
@@ -2409,10 +2413,7 @@ static void __init srso_select_mitigation(void)
 	if (!boot_cpu_has_bug(X86_BUG_SRSO) || cpu_mitigations_off())
 		goto pred_cmd;
 
-	if (!has_microcode) {
-		pr_warn("IBPB-extending microcode not applied!\n");
-		pr_warn(SRSO_NOTICE);
-	} else {
+	if (has_microcode) {
 		/*
 		 * Zen1/2 with SMT off aren't vulnerable after the right
 		 * IBPB microcode has been applied.
@@ -2428,6 +2429,12 @@ static void __init srso_select_mitigation(void)
 			srso_mitigation = SRSO_MITIGATION_IBPB;
 			goto out;
 		}
+	} else {
+		pr_warn("IBPB-extending microcode not applied!\n");
+		pr_warn(SRSO_NOTICE);
+
+		/* may be overwritten by SRSO_CMD_SAFE_RET below */
+		srso_mitigation = SRSO_MITIGATION_UCODE_NEEDED;
 	}
 
 	switch (srso_cmd) {
@@ -2457,7 +2464,10 @@ static void __init srso_select_mitigation(void)
 				setup_force_cpu_cap(X86_FEATURE_SRSO);
 				x86_return_thunk = srso_return_thunk;
 			}
-			srso_mitigation = SRSO_MITIGATION_SAFE_RET;
+			if (has_microcode)
+				srso_mitigation = SRSO_MITIGATION_SAFE_RET;
+			else
+				srso_mitigation = SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED;
 		} else {
 			pr_err("WARNING: kernel not compiled with CPU_SRSO.\n");
 		}
@@ -2701,9 +2711,7 @@ static ssize_t srso_show_state(char *buf)
 	if (boot_cpu_has(X86_FEATURE_SRSO_NO))
 		return sysfs_emit(buf, "Mitigation: SMT disabled\n");
 
-	return sysfs_emit(buf, "%s%s\n",
-			  srso_strings[srso_mitigation],
-			  boot_cpu_has(X86_FEATURE_IBPB_BRTYPE) ? "" : ", no microcode");
+	return sysfs_emit(buf, "%s\n", srso_strings[srso_mitigation]);
 }
 
 static ssize_t gds_show_state(char *buf)

^ permalink raw reply related	[flat|nested] 71+ messages in thread

* [tip: x86/bugs] x86/srso: Fix SBPB enablement for (possible) future fixed HW
  2023-08-25  7:01 ` [PATCH 07/23] x86/srso: Fix SBPB enablement for (possible) future fixed HW Josh Poimboeuf
@ 2023-08-25 10:19   ` tip-bot2 for Josh Poimboeuf
  0 siblings, 0 replies; 71+ messages in thread
From: tip-bot2 for Josh Poimboeuf @ 2023-08-25 10:19 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Josh Poimboeuf, Ingo Molnar, x86, linux-kernel

The following commit has been merged into the x86/bugs branch of tip:

Commit-ID:     71278e8405530b12feb55dc3ef71ed045056950c
Gitweb:        https://git.kernel.org/tip/71278e8405530b12feb55dc3ef71ed045056950c
Author:        Josh Poimboeuf <jpoimboe@kernel.org>
AuthorDate:    Fri, 25 Aug 2023 00:01:38 -07:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Fri, 25 Aug 2023 11:21:59 +02:00

x86/srso: Fix SBPB enablement for (possible) future fixed HW

Make the SBPB check more robust against the (possible) case where future
HW has SRSO fixed but doesn't have the SRSO_NO bit set.

Fixes: 1b5277c0ea0b ("x86/srso: Add SRSO_NO support")
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/d04e617b39eefdb0221857d53858579acdc3f580.1692919072.git.jpoimboe@kernel.org
---
 arch/x86/kernel/cpu/bugs.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 10499bc..2859a54 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2496,7 +2496,7 @@ static void __init srso_select_mitigation(void)
 	pr_info("%s%s\n", srso_strings[srso_mitigation], (has_microcode ? "" : ", no microcode"));
 
 pred_cmd:
-	if ((boot_cpu_has(X86_FEATURE_SRSO_NO) || srso_cmd == SRSO_CMD_OFF) &&
+	if ((!boot_cpu_has_bug(X86_BUG_SRSO) || srso_cmd == SRSO_CMD_OFF) &&
 	     boot_cpu_has(X86_FEATURE_SBPB))
 		x86_pred_cmd = PRED_CMD_SBPB;
 }

^ permalink raw reply related	[flat|nested] 71+ messages in thread

* [tip: x86/bugs] x86/srso: Print actual mitigation if requested mitigation isn't possible
  2023-08-25  7:01 ` [PATCH 08/23] x86/srso: Print actual mitigation if requested mitigation isn't possible Josh Poimboeuf
@ 2023-08-25 10:19   ` tip-bot2 for Josh Poimboeuf
  0 siblings, 0 replies; 71+ messages in thread
From: tip-bot2 for Josh Poimboeuf @ 2023-08-25 10:19 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Josh Poimboeuf, Ingo Molnar, Borislav Petkov (AMD), x86,
	linux-kernel

The following commit has been merged into the x86/bugs branch of tip:

Commit-ID:     60b65d71d58153e1d3a2a87052c7b3e252008b91
Gitweb:        https://git.kernel.org/tip/60b65d71d58153e1d3a2a87052c7b3e252008b91
Author:        Josh Poimboeuf <jpoimboe@kernel.org>
AuthorDate:    Fri, 25 Aug 2023 00:01:39 -07:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Fri, 25 Aug 2023 11:21:59 +02:00

x86/srso: Print actual mitigation if requested mitigation isn't possible

If the kernel wasn't compiled to support the requested option, print the
actual option that ends up getting used.

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/0f8b03be6b785efdc4a3d37feca0b25ef850e011.1692919072.git.jpoimboe@kernel.org
---
 arch/x86/kernel/cpu/bugs.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 2859a54..235c0e0 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2461,7 +2461,6 @@ static void __init srso_select_mitigation(void)
 			srso_mitigation = SRSO_MITIGATION_SAFE_RET;
 		} else {
 			pr_err("WARNING: kernel not compiled with CPU_SRSO.\n");
-			goto pred_cmd;
 		}
 		break;
 
@@ -2473,7 +2472,6 @@ static void __init srso_select_mitigation(void)
 			}
 		} else {
 			pr_err("WARNING: kernel not compiled with CPU_IBPB_ENTRY.\n");
-			goto pred_cmd;
 		}
 		break;
 
@@ -2485,7 +2483,6 @@ static void __init srso_select_mitigation(void)
 			}
 		} else {
 			pr_err("WARNING: kernel not compiled with CPU_SRSO.\n");
-			goto pred_cmd;
                 }
 		break;
 

^ permalink raw reply related	[flat|nested] 71+ messages in thread

* [tip: x86/bugs] x86/srso: Don't probe microcode in a guest
  2023-08-25  7:01 ` [PATCH 03/23] x86/srso: Don't probe microcode in a guest Josh Poimboeuf
  2023-08-25  7:52   ` Andrew Cooper
@ 2023-08-25 10:19   ` tip-bot2 for Josh Poimboeuf
  1 sibling, 0 replies; 71+ messages in thread
From: tip-bot2 for Josh Poimboeuf @ 2023-08-25 10:19 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Andrew Cooper, Josh Poimboeuf, Ingo Molnar, x86, linux-kernel

The following commit has been merged into the x86/bugs branch of tip:

Commit-ID:     3f267f825c98e3b3db0194645ac68ea154eac19d
Gitweb:        https://git.kernel.org/tip/3f267f825c98e3b3db0194645ac68ea154eac19d
Author:        Josh Poimboeuf <jpoimboe@kernel.org>
AuthorDate:    Fri, 25 Aug 2023 00:01:34 -07:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Fri, 25 Aug 2023 11:21:58 +02:00

x86/srso: Don't probe microcode in a guest

To support live migration, the hypervisor sets the "lowest common
denominator" of features.  Probing the microcode isn't allowed because
any detected features might go away after a migration.

As Andy Cooper states:

  "Linux must not probe microcode when virtualised.  What it may see
  instantaneously on boot (owing to MSR_PRED_CMD being fully passed
  through) is not accurate for the lifetime of the VM."

Rely on the hypervisor to set the needed IBPB_BRTYPE and SBPB bits.

Fixes: 1b5277c0ea0b ("x86/srso: Add SRSO_NO support")
Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Link: https://lore.kernel.org/r/3e293282e96b9fe1835c8bd22d1aaac07d9628e7.1692919072.git.jpoimboe@kernel.org
---
 arch/x86/kernel/cpu/amd.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index b08af92..28e77c5 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -767,7 +767,7 @@ static void early_init_amd(struct cpuinfo_x86 *c)
 	if (cpu_has(c, X86_FEATURE_TOPOEXT))
 		smp_num_siblings = ((cpuid_ebx(0x8000001e) >> 8) & 0xff) + 1;
 
-	if (!cpu_has(c, X86_FEATURE_IBPB_BRTYPE)) {
+	if (!cpu_has(c, X86_FEATURE_HYPERVISOR) && !cpu_has(c, X86_FEATURE_IBPB_BRTYPE)) {
 		if (c->x86 == 0x17 && boot_cpu_has(X86_FEATURE_AMD_IBPB))
 			setup_force_cpu_cap(X86_FEATURE_IBPB_BRTYPE);
 		else if (c->x86 >= 0x19 && !wrmsrl_safe(MSR_IA32_PRED_CMD, PRED_CMD_SBPB)) {

^ permalink raw reply related	[flat|nested] 71+ messages in thread

* [tip: x86/bugs] x86/srso: Fix SBPB enablement for spec_rstack_overflow=off
  2023-08-25  7:01 ` [PATCH 06/23] x86/srso: Fix SBPB enablement for spec_rstack_overflow=off Josh Poimboeuf
@ 2023-08-25 10:19   ` tip-bot2 for Josh Poimboeuf
  0 siblings, 0 replies; 71+ messages in thread
From: tip-bot2 for Josh Poimboeuf @ 2023-08-25 10:19 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Josh Poimboeuf, Ingo Molnar, x86, linux-kernel

The following commit has been merged into the x86/bugs branch of tip:

Commit-ID:     85e22808baacb80d17c5d3179be9aa4121638c1e
Gitweb:        https://git.kernel.org/tip/85e22808baacb80d17c5d3179be9aa4121638c1e
Author:        Josh Poimboeuf <jpoimboe@kernel.org>
AuthorDate:    Fri, 25 Aug 2023 00:01:37 -07:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Fri, 25 Aug 2023 11:21:58 +02:00

x86/srso: Fix SBPB enablement for spec_rstack_overflow=off

If the user has requested no SRSO mitigation, other mitigations can use
the lighter-weight SBPB instead of IBPB.

Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/d025b558e451325db3a2c76f3daafe26a0928789.1692919072.git.jpoimboe@kernel.org
---
 arch/x86/kernel/cpu/bugs.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index b0ae985..10499bc 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2433,7 +2433,7 @@ static void __init srso_select_mitigation(void)
 
 	switch (srso_cmd) {
 	case SRSO_CMD_OFF:
-		return;
+		goto pred_cmd;
 
 	case SRSO_CMD_MICROCODE:
 		if (has_microcode) {

^ permalink raw reply related	[flat|nested] 71+ messages in thread

* [tip: x86/bugs] x86/srso: Fix srso_show_state() side effect
  2023-08-25  7:01 ` [PATCH 01/23] x86/srso: Fix srso_show_state() side effect Josh Poimboeuf
@ 2023-08-25 10:19   ` tip-bot2 for Josh Poimboeuf
  0 siblings, 0 replies; 71+ messages in thread
From: tip-bot2 for Josh Poimboeuf @ 2023-08-25 10:19 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Josh Poimboeuf, Ingo Molnar, Nikolay Borisov, x86, linux-kernel

The following commit has been merged into the x86/bugs branch of tip:

Commit-ID:     c17312ac07853597500572e91a0469169f73058f
Gitweb:        https://git.kernel.org/tip/c17312ac07853597500572e91a0469169f73058f
Author:        Josh Poimboeuf <jpoimboe@kernel.org>
AuthorDate:    Fri, 25 Aug 2023 00:01:32 -07:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Fri, 25 Aug 2023 11:21:58 +02:00

x86/srso: Fix srso_show_state() side effect

Reading the 'spec_rstack_overflow' sysfs file can trigger an unnecessary
MSR write, and possibly even a (handled) exception if the microcode
hasn't been updated.

Avoid all that by just checking X86_FEATURE_IBPB_BRTYPE instead, which
gets set by srso_select_mitigation() if the updated microcode exists.

Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Nikolay Borisov <nik.borisov@suse.com>
Link: https://lore.kernel.org/r/40b2e6af3a94d2c6eb9a3afaa63f34ee910a17d0.1692919072.git.jpoimboe@kernel.org
---
 arch/x86/kernel/cpu/bugs.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index f081d26..bdd3e29 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2717,7 +2717,7 @@ static ssize_t srso_show_state(char *buf)
 
 	return sysfs_emit(buf, "%s%s\n",
 			  srso_strings[srso_mitigation],
-			  (cpu_has_ibpb_brtype_microcode() ? "" : ", no microcode"));
+			  boot_cpu_has(X86_FEATURE_IBPB_BRTYPE) ? "" : ", no microcode");
 }
 
 static ssize_t gds_show_state(char *buf)

^ permalink raw reply related	[flat|nested] 71+ messages in thread

* [tip: x86/bugs] x86/srso: Set CPUID feature bits independently of bug or mitigation status
  2023-08-25  7:01 ` [PATCH 02/23] x86/srso: Set CPUID feature bits independently of bug or mitigation status Josh Poimboeuf
@ 2023-08-25 10:19   ` tip-bot2 for Josh Poimboeuf
  0 siblings, 0 replies; 71+ messages in thread
From: tip-bot2 for Josh Poimboeuf @ 2023-08-25 10:19 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Josh Poimboeuf, Ingo Molnar, Nikolay Borisov,
	Borislav Petkov (AMD), x86, linux-kernel

The following commit has been merged into the x86/bugs branch of tip:

Commit-ID:     6a22cc6cfedcf7c4aa63230568a028a0fdd83e67
Gitweb:        https://git.kernel.org/tip/6a22cc6cfedcf7c4aa63230568a028a0fdd83e67
Author:        Josh Poimboeuf <jpoimboe@kernel.org>
AuthorDate:    Fri, 25 Aug 2023 00:01:33 -07:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Fri, 25 Aug 2023 11:21:58 +02:00

x86/srso: Set CPUID feature bits independently of bug or mitigation status

Booting with mitigations=off incorrectly prevents the
X86_FEATURE_{IBPB_BRTYPE,SBPB} CPUID bits from getting set.

Also, future CPUs without X86_BUG_SRSO might still have IBPB with branch
type prediction flushing, in which case SBPB should be used instead of
IBPB.  The current code doesn't allow for that.

Also, cpu_has_ibpb_brtype_microcode() has some surprising side effects
and the setting of these feature bits really doesn't belong in the
mitigation code anyway.  Move it to earlier.

Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Nikolay Borisov <nik.borisov@suse.com>
Reviewed-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/f90527afa07624ba5912f0f3fec626e40cde0f24.1692919072.git.jpoimboe@kernel.org
---
 arch/x86/include/asm/processor.h |  2 --
 arch/x86/kernel/cpu/amd.c        | 28 +++++++++-------------------
 arch/x86/kernel/cpu/bugs.c       | 13 +------------
 3 files changed, 10 insertions(+), 33 deletions(-)

diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index fd75024..9e26294 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -676,12 +676,10 @@ extern u16 get_llc_id(unsigned int cpu);
 #ifdef CONFIG_CPU_SUP_AMD
 extern u32 amd_get_nodes_per_socket(void);
 extern u32 amd_get_highest_perf(void);
-extern bool cpu_has_ibpb_brtype_microcode(void);
 extern void amd_clear_divider(void);
 #else
 static inline u32 amd_get_nodes_per_socket(void)	{ return 0; }
 static inline u32 amd_get_highest_perf(void)		{ return 0; }
-static inline bool cpu_has_ibpb_brtype_microcode(void)	{ return false; }
 static inline void amd_clear_divider(void)		{ }
 #endif
 
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index 7eca6a8..b08af92 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -766,6 +766,15 @@ static void early_init_amd(struct cpuinfo_x86 *c)
 
 	if (cpu_has(c, X86_FEATURE_TOPOEXT))
 		smp_num_siblings = ((cpuid_ebx(0x8000001e) >> 8) & 0xff) + 1;
+
+	if (!cpu_has(c, X86_FEATURE_IBPB_BRTYPE)) {
+		if (c->x86 == 0x17 && boot_cpu_has(X86_FEATURE_AMD_IBPB))
+			setup_force_cpu_cap(X86_FEATURE_IBPB_BRTYPE);
+		else if (c->x86 >= 0x19 && !wrmsrl_safe(MSR_IA32_PRED_CMD, PRED_CMD_SBPB)) {
+			setup_force_cpu_cap(X86_FEATURE_IBPB_BRTYPE);
+			setup_force_cpu_cap(X86_FEATURE_SBPB);
+		}
+	}
 }
 
 static void init_amd_k8(struct cpuinfo_x86 *c)
@@ -1301,25 +1310,6 @@ void amd_check_microcode(void)
 	on_each_cpu(zenbleed_check_cpu, NULL, 1);
 }
 
-bool cpu_has_ibpb_brtype_microcode(void)
-{
-	switch (boot_cpu_data.x86) {
-	/* Zen1/2 IBPB flushes branch type predictions too. */
-	case 0x17:
-		return boot_cpu_has(X86_FEATURE_AMD_IBPB);
-	case 0x19:
-		/* Poke the MSR bit on Zen3/4 to check its presence. */
-		if (!wrmsrl_safe(MSR_IA32_PRED_CMD, PRED_CMD_SBPB)) {
-			setup_force_cpu_cap(X86_FEATURE_SBPB);
-			return true;
-		} else {
-			return false;
-		}
-	default:
-		return false;
-	}
-}
-
 /*
  * Issue a DIV 0/1 insn to clear any division data from previous DIV
  * operations.
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index bdd3e29..b0ae985 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2404,27 +2404,16 @@ early_param("spec_rstack_overflow", srso_parse_cmdline);
 
 static void __init srso_select_mitigation(void)
 {
-	bool has_microcode;
+	bool has_microcode = boot_cpu_has(X86_FEATURE_IBPB_BRTYPE);
 
 	if (!boot_cpu_has_bug(X86_BUG_SRSO) || cpu_mitigations_off())
 		goto pred_cmd;
 
-	/*
-	 * The first check is for the kernel running as a guest in order
-	 * for guests to verify whether IBPB is a viable mitigation.
-	 */
-	has_microcode = boot_cpu_has(X86_FEATURE_IBPB_BRTYPE) || cpu_has_ibpb_brtype_microcode();
 	if (!has_microcode) {
 		pr_warn("IBPB-extending microcode not applied!\n");
 		pr_warn(SRSO_NOTICE);
 	} else {
 		/*
-		 * Enable the synthetic (even if in a real CPUID leaf)
-		 * flags for guests.
-		 */
-		setup_force_cpu_cap(X86_FEATURE_IBPB_BRTYPE);
-
-		/*
 		 * Zen1/2 with SMT off aren't vulnerable after the right
 		 * IBPB microcode has been applied.
 		 */

^ permalink raw reply related	[flat|nested] 71+ messages in thread

* [tip: x86/urgent] x86/alternatives: Remove faulty optimization
  2023-08-25  7:01 ` [PATCH 12/23] x86/alternatives: Remove faulty optimization Josh Poimboeuf
  2023-08-25  9:20   ` Ingo Molnar
  2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
@ 2023-08-25 10:27   ` tip-bot2 for Josh Poimboeuf
  2 siblings, 0 replies; 71+ messages in thread
From: tip-bot2 for Josh Poimboeuf @ 2023-08-25 10:27 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Josh Poimboeuf, Ingo Molnar, x86, linux-kernel

The following commit has been merged into the x86/urgent branch of tip:

Commit-ID:     71cb8d530bfa25abe4c1b9f67c4a24dc6c61e9b0
Gitweb:        https://git.kernel.org/tip/71cb8d530bfa25abe4c1b9f67c4a24dc6c61e9b0
Author:        Josh Poimboeuf <jpoimboe@kernel.org>
AuthorDate:    Fri, 25 Aug 2023 00:01:43 -07:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Fri, 25 Aug 2023 12:22:04 +02:00

x86/alternatives: Remove faulty optimization

The following commit:

  095b8303f383 ("x86/alternative: Make custom return thunk unconditional")

made '__x86_return_thunk' a placeholder value.  All code setting
X86_FEATURE_RETHUNK also changes the value of 'x86_return_thunk'.  So
the optimization at the beginning of apply_returns() is dead code.

Also, before the above-mentioned commit, the optimization actually had a
bug: it bypassed __static_call_fixup(), causing some raw returns to
remain unpatched in static call trampolines.  Thus the 'Fixes' tag.

Fixes: d2408e043e72 ("x86/alternative: Optimize returns patching")
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/ca76a2e94217d6fc8e007d2ca79fee219f3168f8.1692919072.git.jpoimboe@kernel.org
---
 arch/x86/kernel/alternative.c | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 099d58d..34be5fb 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -720,14 +720,6 @@ void __init_or_module noinline apply_returns(s32 *start, s32 *end)
 {
 	s32 *s;
 
-	/*
-	 * Do not patch out the default return thunks if those needed are the
-	 * ones generated by the compiler.
-	 */
-	if (cpu_feature_enabled(X86_FEATURE_RETHUNK) &&
-	    (x86_return_thunk == __x86_return_thunk))
-		return;
-
 	for (s = start; s < end; s++) {
 		void *dest = NULL, *addr = (void *)s + *s;
 		struct insn insn;

^ permalink raw reply related	[flat|nested] 71+ messages in thread

* Re: [PATCH v2 00/23] SRSO fixes/cleanups
  2023-08-25  7:01 [PATCH v2 00/23] SRSO fixes/cleanups Josh Poimboeuf
                   ` (22 preceding siblings ...)
  2023-08-25  7:01 ` [PATCH 23/23] x86/calldepth: Rename __x86_return_skl() to call_depth_return_thunk() Josh Poimboeuf
@ 2023-08-25 10:38 ` Ingo Molnar
  2023-08-26 15:57   ` Josh Poimboeuf
  2023-10-05  1:29 ` Sean Christopherson
  24 siblings, 1 reply; 71+ messages in thread
From: Ingo Molnar @ 2023-08-25 10:38 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: x86, linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner


* Josh Poimboeuf <jpoimboe@kernel.org> wrote:

> v2:
> - reorder everything: fixes/functionality before cleanups
> - split up KVM patch, add Sean's changes
> - add patch to support live migration
> - remove "default:" case for enums throughout bugs.c
> - various minor tweaks based on v1 discussions with Boris
> - add Reviewed-by's
> 
> Josh Poimboeuf (23):
>   x86/srso: Fix srso_show_state() side effect
>   x86/srso: Set CPUID feature bits independently of bug or mitigation
>     status
>   x86/srso: Don't probe microcode in a guest
>   KVM: x86: Add IBPB_BRTYPE support
>   KVM: x86: Add SBPB support
>   x86/srso: Fix SBPB enablement for spec_rstack_overflow=off
>   x86/srso: Fix SBPB enablement for (possible) future fixed HW
>   x86/srso: Print actual mitigation if requested mitigation isn't
>     possible
>   x86/srso: Print mitigation for retbleed IBPB case
>   x86/srso: Fix vulnerability reporting for missing microcode
>   x86/srso: Fix unret validation dependencies
>   x86/alternatives: Remove faulty optimization
>   x86/srso: Improve i-cache locality for alias mitigation
>   x86/srso: Unexport untraining functions
>   x86/srso: Remove 'pred_cmd' label
>   x86/bugs: Remove default case for fully switched enums
>   x86/srso: Move retbleed IBPB check into existing 'has_microcode' code
>     block
>   x86/srso: Remove redundant X86_FEATURE_ENTRY_IBPB check
>   x86/srso: Disentangle rethunk-dependent options
>   x86/rethunk: Use SYM_CODE_START[_LOCAL]_NOALIGN macros
>   x86/retpoline: Remove .text..__x86.return_thunk section
>   x86/nospec: Refactor UNTRAIN_RET[_*]
>   x86/calldepth: Rename __x86_return_skl() to call_depth_return_thunk()
> 
>  Documentation/admin-guide/hw-vuln/srso.rst |  22 ++-
>  arch/x86/include/asm/nospec-branch.h       |  69 ++++-----
>  arch/x86/include/asm/processor.h           |   2 -
>  arch/x86/kernel/alternative.c              |   8 -
>  arch/x86/kernel/cpu/amd.c                  |  28 ++--
>  arch/x86/kernel/cpu/bugs.c                 | 104 ++++++-------
>  arch/x86/kernel/vmlinux.lds.S              |  10 +-
>  arch/x86/kvm/cpuid.c                       |   5 +-
>  arch/x86/kvm/cpuid.h                       |   3 +-
>  arch/x86/kvm/x86.c                         |  29 +++-
>  arch/x86/lib/retpoline.S                   | 171 +++++++++++----------
>  include/linux/objtool.h                    |   3 +-
>  scripts/Makefile.vmlinux_o                 |   3 +-
>  13 files changed, 230 insertions(+), 227 deletions(-)

Thank you, this all looks very nice. I've applied your fixes to
tip:x86/bugs, with the exception of the two KVM enablement patches.

I've also cherry-picked the apply_returns() fix separately to x86/urgent,
AFAICS that's the only super-urgent fix we want to push to the final v6.5
release before the weekend, right? The other fixes look like
reporting bugs, Kconfig oddities and inefficiencies at worst. Backporters
may still pick the rest from x86/bugs too - but we are too close to the
release right now.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 71+ messages in thread

* Re: [PATCH 04/23] KVM: x86: Add IBPB_BRTYPE support
  2023-08-25  7:01 ` [PATCH 04/23] KVM: x86: Add IBPB_BRTYPE support Josh Poimboeuf
@ 2023-08-25 18:15   ` Sean Christopherson
  2023-08-26 15:49     ` Josh Poimboeuf
  0 siblings, 1 reply; 71+ messages in thread
From: Sean Christopherson @ 2023-08-25 18:15 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: x86, linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, David.Kaplan, Andrew Cooper, Nikolay Borisov,
	gregkh, Thomas Gleixner

On Fri, Aug 25, 2023, Josh Poimboeuf wrote:
> Add support for the IBPB_BRTYPE CPUID flag, which indicates that IBPB
> includes branch type prediction flushing.

Please add:

  Note, like SRSO_NO, advertise support for IBPB_BRTYPE even if it's not
  enumerated in the raw CPUID, i.e. bypass the cpuid_count() in
  __kvm_cpu_cap_mask().  Some CPUs that gained support via a uCode patch
  don't report IBPB_BRTYPE via CPUID (the kernel forces the flag).

  Opportunistically use kvm_cpu_cap_check_and_set() for SRSO_NO instead
  of manually querying host support (cpu_feature_enabled() and
  boot_cpu_has() yield the same end result in this case).

Feel free to take this through tip if this is urgent enough to go into 6.6,
otherwise I'll grab it for 6.7.

Acked-by: Sean Christopherson <seanjc@google.com>

> Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
> ---
>  arch/x86/kvm/cpuid.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index d3432687c9e6..c65f3ff1c79d 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -729,8 +729,8 @@ void kvm_set_cpu_caps(void)
>  		F(NULL_SEL_CLR_BASE) | F(AUTOIBRS) | 0 /* PrefetchCtlMsr */
>  	);
>  
> -	if (cpu_feature_enabled(X86_FEATURE_SRSO_NO))
> -		kvm_cpu_cap_set(X86_FEATURE_SRSO_NO);
> +	kvm_cpu_cap_check_and_set(X86_FEATURE_IBPB_BRTYPE);
> +	kvm_cpu_cap_check_and_set(X86_FEATURE_SRSO_NO);
>  
>  	kvm_cpu_cap_init_kvm_defined(CPUID_8000_0022_EAX,
>  		F(PERFMON_V2)
> -- 
> 2.41.0
> 

^ permalink raw reply	[flat|nested] 71+ messages in thread

* Re: [PATCH 05/23] KVM: x86: Add SBPB support
  2023-08-25  7:01 ` [PATCH 05/23] KVM: x86: Add SBPB support Josh Poimboeuf
@ 2023-08-25 18:20   ` Sean Christopherson
  0 siblings, 0 replies; 71+ messages in thread
From: Sean Christopherson @ 2023-08-25 18:20 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: x86, linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, David.Kaplan, Andrew Cooper, Nikolay Borisov,
	gregkh, Thomas Gleixner

On Fri, Aug 25, 2023, Josh Poimboeuf wrote:
> Add support for the AMD Selective Branch Predictor Barrier (SBPB) by
> advertising the CPUID bit and handling PRED_CMD writes accordingly.

Same as the other patch, please call out that not doing the "standard" F(SBPB)
is intentional, e.g.

  Note, like SRSO_NO and IBPB_BRTYPE before it, advertise support for SBPB
  even if it's not enumerated in the raw CPUID.  Some CPUs that gained
  support via a uCode patch don't report SBPB via CPUID (the kernel forces
  the flag).

And again, feel free to take this through tip if this should go in 6.6.  Turns out
our Milan systems have the SBPB fun, so I was able to actually test this, including
the emulated WRMSR handling (KVM allows forcing emulation via a magic prefix).  I
have a KVM-Unit-Test patch that I'll post next week.

Thanks Josh!

Acked-by: Sean Christopherson <seanjc@google.com>

> Co-developed-by: Sean Christopherson <seanjc@google.com>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>

^ permalink raw reply	[flat|nested] 71+ messages in thread

* Re: [PATCH 22/23] x86/nospec: Refactor UNTRAIN_RET[_*]
  2023-08-25  7:01 ` [PATCH 22/23] x86/nospec: Refactor UNTRAIN_RET[_*] Josh Poimboeuf
  2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
@ 2023-08-25 18:22   ` Nikolay Borisov
  2023-08-26 15:42     ` Josh Poimboeuf
  1 sibling, 1 reply; 71+ messages in thread
From: Nikolay Borisov @ 2023-08-25 18:22 UTC (permalink / raw)
  To: Josh Poimboeuf, x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	gregkh, Thomas Gleixner



On 25.08.23 г. 10:01 ч., Josh Poimboeuf wrote:
> Factor out the UNTRAIN_RET[_*] common bits into a helper macro.
> 
> Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
> ---
>   arch/x86/include/asm/nospec-branch.h | 31 +++++++++-------------------
>   1 file changed, 10 insertions(+), 21 deletions(-)
> 
> diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
> index 51e3f1a287d2..dcc78477a38d 100644
> --- a/arch/x86/include/asm/nospec-branch.h
> +++ b/arch/x86/include/asm/nospec-branch.h
> @@ -288,35 +288,24 @@
>    * As such, this must be placed after every *SWITCH_TO_KERNEL_CR3 at a point
>    * where we have a stack but before any RET instruction.
>    */
> -.macro UNTRAIN_RET
> +.macro __UNTRAIN_RET ibpb_feature, call_depth_insns
>   #if defined(CONFIG_RETHUNK) || defined(CONFIG_CPU_IBPB_ENTRY)
>   	VALIDATE_UNRET_END
>   	ALTERNATIVE_3 "",						\
>   		      CALL_UNTRAIN_RET, X86_FEATURE_UNRET,		\
> -		      "call entry_ibpb", X86_FEATURE_ENTRY_IBPB,	\
> -		     __stringify(RESET_CALL_DEPTH), X86_FEATURE_CALL_DEPTH
> +		      "call entry_ibpb", \ibpb_feature,			\
> +		     __stringify(\call_depth_insns), X86_FEATURE_CALL_DEPTH

so this becomes __stringify(__stringify(RESET_CALL_DEPTH)) etc.

Meaning in the high-level macros you want to pass just 
RESET_CALL_DEPTH/RESET_CALL_DEPTH_FROM_CALL?

>   #endif
>   .endm
>   
> -.macro UNTRAIN_RET_VM
> -#if defined(CONFIG_RETHUNK) || defined(CONFIG_CPU_IBPB_ENTRY)
> -	VALIDATE_UNRET_END
> -	ALTERNATIVE_3 "",						\
> -		      CALL_UNTRAIN_RET, X86_FEATURE_UNRET,		\
> -		      "call entry_ibpb", X86_FEATURE_IBPB_ON_VMEXIT,	\
> -		      __stringify(RESET_CALL_DEPTH), X86_FEATURE_CALL_DEPTH
> -#endif
> -.endm
> +#define UNTRAIN_RET \
> +	__UNTRAIN_RET X86_FEATURE_ENTRY_IBPB, __stringify(RESET_CALL_DEPTH)
>   
> -.macro UNTRAIN_RET_FROM_CALL
> -#if defined(CONFIG_RETHUNK) || defined(CONFIG_CPU_IBPB_ENTRY)
> -	VALIDATE_UNRET_END
> -	ALTERNATIVE_3 "",						\
> -		      CALL_UNTRAIN_RET, X86_FEATURE_UNRET,		\
> -		      "call entry_ibpb", X86_FEATURE_ENTRY_IBPB,	\
> -		      __stringify(RESET_CALL_DEPTH_FROM_CALL), X86_FEATURE_CALL_DEPTH
> -#endif
> -.endm
> +#define UNTRAIN_RET_VM \
> +	__UNTRAIN_RET X86_FEATURE_IBPB_ON_VMEXIT, __stringify(RESET_CALL_DEPTH)
> +
> +#define UNTRAIN_RET_FROM_CALL \
> +	__UNTRAIN_RET X86_FEATURE_ENTRY_IBPB, __stringify(RESET_CALL_DEPTH_FROM_CALL)
>   
>   
>   .macro CALL_DEPTH_ACCOUNT

^ permalink raw reply	[flat|nested] 71+ messages in thread

* Re: [PATCH 15/23] x86/srso: Remove 'pred_cmd' label
  2023-08-25  7:01 ` [PATCH 15/23] x86/srso: Remove 'pred_cmd' label Josh Poimboeuf
  2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
@ 2023-08-25 19:51   ` Nikolay Borisov
  2023-08-26 15:45     ` Josh Poimboeuf
  1 sibling, 1 reply; 71+ messages in thread
From: Nikolay Borisov @ 2023-08-25 19:51 UTC (permalink / raw)
  To: Josh Poimboeuf, x86
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	gregkh, Thomas Gleixner



On 25.08.23 at 10:01, Josh Poimboeuf wrote:
> SBPB is only enabled in two distinct cases:
> 
> 1) when SRSO has been disabled with srso=off
> 
> 2) when SRSO has been fixed (in future HW)
> 
> Simplify the control flow by getting rid of the 'pred_cmd' label and
> moving the SBPB enablement check to the two corresponding code sites.
> This makes it more clear when exactly SBPB gets enabled.
> 
> Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>


I don't think it was ever explained why SBPB should be used when SRSO is
off or the HW is not affected. There's nothing in AMD's whitepaper and
nothing in the original patch introducing SRSO_NO. This patch deals with
the "when", but what about the "why"? Can you put this in the changelog
(in case I'm not the only one missing this detail)?
> ---
>   arch/x86/kernel/cpu/bugs.c | 21 +++++++++++++--------
>   1 file changed, 13 insertions(+), 8 deletions(-)
> 
> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> index d883d1c38f7f..3c7f634b6148 100644
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -2410,13 +2410,21 @@ static void __init srso_select_mitigation(void)
>   {
>   	bool has_microcode = boot_cpu_has(X86_FEATURE_IBPB_BRTYPE);
>   
> -	if (!boot_cpu_has_bug(X86_BUG_SRSO) || cpu_mitigations_off())
> -		goto pred_cmd;
> +	if (cpu_mitigations_off())
> +		return;
> +
> +	if (!boot_cpu_has_bug(X86_BUG_SRSO)) {
> +		if (boot_cpu_has(X86_FEATURE_SBPB))
> +			x86_pred_cmd = PRED_CMD_SBPB;
> +		return;
> +	}
>   
>   	if (has_microcode) {
>   		/*
>   		 * Zen1/2 with SMT off aren't vulnerable after the right
>   		 * IBPB microcode has been applied.
> +		 *
> +		 * Zen1/2 don't have SBPB, no need to try to enable it here.
>   		 */
>   		if (boot_cpu_data.x86 < 0x19 && !cpu_smt_possible()) {
>   			setup_force_cpu_cap(X86_FEATURE_SRSO_NO);
> @@ -2439,7 +2447,9 @@ static void __init srso_select_mitigation(void)
>   
>   	switch (srso_cmd) {
>   	case SRSO_CMD_OFF:
> -		goto pred_cmd;
> +		if (boot_cpu_has(X86_FEATURE_SBPB))
> +			x86_pred_cmd = PRED_CMD_SBPB;
> +		return;
>   
>   	case SRSO_CMD_MICROCODE:
>   		if (has_microcode) {
> @@ -2501,11 +2511,6 @@ static void __init srso_select_mitigation(void)
>   
>   out:
>   	pr_info("%s%s\n", srso_strings[srso_mitigation], has_microcode ? "" : ", no microcode");
> -
> -pred_cmd:
> -	if ((!boot_cpu_has_bug(X86_BUG_SRSO) || srso_cmd == SRSO_CMD_OFF) &&
> -	     boot_cpu_has(X86_FEATURE_SBPB))
> -		x86_pred_cmd = PRED_CMD_SBPB;
>   }
>   
>   #undef pr_fmt


* Re: [PATCH 22/23] x86/nospec: Refactor UNTRAIN_RET[_*]
  2023-08-25 18:22   ` [PATCH 22/23] " Nikolay Borisov
@ 2023-08-26 15:42     ` Josh Poimboeuf
  0 siblings, 0 replies; 71+ messages in thread
From: Josh Poimboeuf @ 2023-08-26 15:42 UTC (permalink / raw)
  To: Nikolay Borisov
  Cc: x86, linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	gregkh, Thomas Gleixner

On Fri, Aug 25, 2023 at 09:22:10PM +0300, Nikolay Borisov wrote:
> 
> 
> On 25.08.23 at 10:01, Josh Poimboeuf wrote:
> > Factor out the UNTRAIN_RET[_*] common bits into a helper macro.
> > 
> > Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
> > ---
> >   arch/x86/include/asm/nospec-branch.h | 31 +++++++++-------------------
> >   1 file changed, 10 insertions(+), 21 deletions(-)
> > 
> > diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
> > index 51e3f1a287d2..dcc78477a38d 100644
> > --- a/arch/x86/include/asm/nospec-branch.h
> > +++ b/arch/x86/include/asm/nospec-branch.h
> > @@ -288,35 +288,24 @@
> >    * As such, this must be placed after every *SWITCH_TO_KERNEL_CR3 at a point
> >    * where we have a stack but before any RET instruction.
> >    */
> > -.macro UNTRAIN_RET
> > +.macro __UNTRAIN_RET ibpb_feature, call_depth_insns
> >   #if defined(CONFIG_RETHUNK) || defined(CONFIG_CPU_IBPB_ENTRY)
> >   	VALIDATE_UNRET_END
> >   	ALTERNATIVE_3 "",						\
> >   		      CALL_UNTRAIN_RET, X86_FEATURE_UNRET,		\
> > -		      "call entry_ibpb", X86_FEATURE_ENTRY_IBPB,	\
> > -		     __stringify(RESET_CALL_DEPTH), X86_FEATURE_CALL_DEPTH
> > +		      "call entry_ibpb", \ibpb_feature,			\
> > +		     __stringify(\call_depth_insns), X86_FEATURE_CALL_DEPTH
> 
> so this becomes __stringify(__stringify(RESET_CALL_DEPTH)) etc.

Apparently the gas macro un-stringifies the argument when using it, so
it needs to be stringified again.  ¯\_(ツ)_/¯

-- 
Josh


* Re: [PATCH 15/23] x86/srso: Remove 'pred_cmd' label
  2023-08-25 19:51   ` [PATCH 15/23] " Nikolay Borisov
@ 2023-08-26 15:45     ` Josh Poimboeuf
  0 siblings, 0 replies; 71+ messages in thread
From: Josh Poimboeuf @ 2023-08-26 15:45 UTC (permalink / raw)
  To: Nikolay Borisov
  Cc: x86, linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	gregkh, Thomas Gleixner

On Fri, Aug 25, 2023 at 10:51:04PM +0300, Nikolay Borisov wrote:
> 
> 
On 25.08.23 at 10:01, Josh Poimboeuf wrote:
> > SBPB is only enabled in two distinct cases:
> > 
> > 1) when SRSO has been disabled with srso=off
> > 
> > 2) when SRSO has been fixed (in future HW)
> > 
> > Simplify the control flow by getting rid of the 'pred_cmd' label and
> > moving the SBPB enablement check to the two corresponding code sites.
> > This makes it more clear when exactly SBPB gets enabled.
> > 
> > Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
> 
> 
> I don't think it was ever explained why SBPB should be used when SRSO is off
> or the HW is not affected. There's nothing in AMD's whitepaper and nothing in
> the original patch introducing SRSO_NO. This patch deals with the "when",
> but what about the "why"? Can you put this in the changelog (in case I'm
> not the only one missing this detail)?

This patch was merged, but the "why" is that on Zen3/4, the new
microcode adds branch type flushing to IBPB, making IBPB slower.  SBPB
is the "old" IBPB, without branch type flushing.  So if you don't need
the branch type flushing (i.e., to mitigate SRSO) then you can just use
the old IBPB (aka SBPB).

-- 
Josh


* Re: [PATCH 04/23] KVM: x86: Add IBPB_BRTYPE support
  2023-08-25 18:15   ` Sean Christopherson
@ 2023-08-26 15:49     ` Josh Poimboeuf
  0 siblings, 0 replies; 71+ messages in thread
From: Josh Poimboeuf @ 2023-08-26 15:49 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: x86, linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, David.Kaplan, Andrew Cooper, Nikolay Borisov,
	gregkh, Thomas Gleixner

On Fri, Aug 25, 2023 at 11:15:49AM -0700, Sean Christopherson wrote:
> On Fri, Aug 25, 2023, Josh Poimboeuf wrote:
> > Add support for the IBPB_BRTYPE CPUID flag, which indicates that IBPB
> > includes branch type prediction flushing.
> 
> Please add:
> 
>   Note, like SRSO_NO, advertise support for IBPB_BRTYPE even if it's not
>   enumerated in the raw CPUID, i.e. bypass the cpuid_count() in
>   __kvm_cpu_cap_mask().  Some CPUs that gained support via a uCode patch
>   don't report IBPB_BRTYPE via CPUID (the kernel forces the flag).
> 
>   Opportunistically use kvm_cpu_cap_check_and_set() for SRSS_NO instead

"SRSO_NO"

>   of manually querying host support (cpu_feature_enabled() and
>   boot_cpu_has() yield the same end result in this case).

Sounds good.

> Feel free to take this through tip if this is urgent enough to go into 6.6,
> otherwise I'll grab it for 6.7.

Ingo grabbed all the patches except for the two KVM ones, so I think
he's expecting you to take them.

> Acked-by: Sean Christopherson <seanjc@google.com>

Thanks!

-- 
Josh


* Re: [PATCH v2 00/23] SRSO fixes/cleanups
  2023-08-25 10:38 ` [PATCH v2 00/23] SRSO fixes/cleanups Ingo Molnar
@ 2023-08-26 15:57   ` Josh Poimboeuf
  2023-08-26 17:00     ` Ingo Molnar
  0 siblings, 1 reply; 71+ messages in thread
From: Josh Poimboeuf @ 2023-08-26 15:57 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: x86, linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner

On Fri, Aug 25, 2023 at 12:38:53PM +0200, Ingo Molnar wrote:
> Thank you, this all looks very nice. I've applied your fixes to
> tip:x86/bugs, with the exception of the two KVM enablement patches.
> 
> I've also cherry-picked the apply_returns() fix separately to x86/urgent,
> AFAICS that's the only super-urgent fix we want to push to the final v6.5
> release before the weekend, right? The other fixes look like they address
> reporting bugs, Kconfig oddities and inefficiencies at worst. Backporters
> may still pick the rest from x86/bugs too - but we are too close to the
> release right now.

As far as I can tell, the apply_returns() fix isn't necessarily urgent,
since after commit 095b8303f383 it went from being an actual bug to just
dead code: the optimization will never take effect now that none of the
rethunk cases use __x86_return_thunk.

On the other hand, if I'm too late sending this, it should be harmless
to merge it into the final v6.5 release.

For the rest of the patches, I think the merge window is fine.

Thanks!

-- 
Josh


* Re: [PATCH v2 00/23] SRSO fixes/cleanups
  2023-08-26 15:57   ` Josh Poimboeuf
@ 2023-08-26 17:00     ` Ingo Molnar
  0 siblings, 0 replies; 71+ messages in thread
From: Ingo Molnar @ 2023-08-26 17:00 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: x86, linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, Sean Christopherson, David.Kaplan, Andrew Cooper,
	Nikolay Borisov, gregkh, Thomas Gleixner


* Josh Poimboeuf <jpoimboe@kernel.org> wrote:

> On Fri, Aug 25, 2023 at 12:38:53PM +0200, Ingo Molnar wrote:
> > Thank you, this all looks very nice. I've applied your fixes to
> > tip:x86/bugs, with the exception of the two KVM enablement patches.
> > 
> > I've also cherry-picked the apply_returns() fix separately to x86/urgent,
> > AFAICS that's the only super-urgent fix we want to push to the final v6.5
> > release before the weekend, right? The other fixes look like they address
> > reporting bugs, Kconfig oddities and inefficiencies at worst. Backporters
> > may still pick the rest from x86/bugs too - but we are too close to the
> > release right now.
> 
> As far as I can tell, the apply_returns() fix isn't necessarily urgent,
> since after commit 095b8303f383 it went from being an actual bug to just
> dead code: the optimization will never take effect now that none of the
> rethunk cases use __x86_return_thunk.
>
> On the other hand, if I'm too late sending this, it should be harmless to 
> merge it into the final v6.5 release.

Not too late at all - I've removed it from x86/urgent.

Thanks,

	Ingo


* Re: [tip: x86/bugs] x86/srso: Fix vulnerability reporting for missing microcode
  2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
@ 2023-09-01  9:40     ` Borislav Petkov
  2023-09-02 10:46       ` Ingo Molnar
  0 siblings, 1 reply; 71+ messages in thread
From: Borislav Petkov @ 2023-09-01  9:40 UTC (permalink / raw)
  To: Ingo Molnar, Josh Poimboeuf
  Cc: linux-tip-commits, Josh Poimboeuf, Ingo Molnar, x86, linux-kernel

On Fri, Aug 25, 2023 at 10:19:32AM -0000, tip-bot2 for Josh Poimboeuf wrote:
> The following commit has been merged into the x86/bugs branch of tip:
> 
> Commit-ID:     b3be1397be0340b2c30b2dcd7339dbfaa5563e2b
> Gitweb:        https://git.kernel.org/tip/b3be1397be0340b2c30b2dcd7339dbfaa5563e2b
> Author:        Josh Poimboeuf <jpoimboe@kernel.org>
> AuthorDate:    Fri, 25 Aug 2023 00:01:41 -07:00
> Committer:     Ingo Molnar <mingo@kernel.org>
> CommitterDate: Fri, 25 Aug 2023 11:21:59 +02:00
> 
> x86/srso: Fix vulnerability reporting for missing microcode
> 
> The SRSO default safe-ret mitigation is reported as "mitigated" even if
> microcode hasn't been updated.  That's wrong because userspace may still
> be vulnerable to SRSO attacks due to IBPB not flushing branch type
> predictions.
> 
> Report the safe-ret + !microcode case as vulnerable.
> 
> Also report the microcode-only case as vulnerable as it leaves the
> kernel open to attacks.
> 
> Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
> Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
> Signed-off-by: Ingo Molnar <mingo@kernel.org>
> Link: https://lore.kernel.org/r/65556eeb1bf7cb9bd7db8662ef115dd73191db84.1692919072.git.jpoimboe@kernel.org
> ---
>  Documentation/admin-guide/hw-vuln/srso.rst | 22 ++++++++++----
>  arch/x86/kernel/cpu/bugs.c                 | 34 ++++++++++++---------
>  2 files changed, 37 insertions(+), 19 deletions(-)

This is still unfixed:

https://lore.kernel.org/r/20230825072542.GFZOhXdgXpUidW51lC@fat_crate.local

mingo, do you want fixes ontop or do you wanna rebase this branch?

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


* Re: [PATCH 16/23] x86/bugs: Remove default case for fully switched enums
  2023-08-25  7:01 ` [PATCH 16/23] x86/bugs: Remove default case for fully switched enums Josh Poimboeuf
  2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
@ 2023-09-02  9:02   ` Borislav Petkov
  2023-09-05  5:08     ` Josh Poimboeuf
  1 sibling, 1 reply; 71+ messages in thread
From: Borislav Petkov @ 2023-09-02  9:02 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: x86, linux-kernel, Peter Zijlstra, Babu Moger, Paolo Bonzini,
	Sean Christopherson, David.Kaplan, Andrew Cooper, Nikolay Borisov,
	gregkh, Thomas Gleixner

On Fri, Aug 25, 2023 at 12:01:47AM -0700, Josh Poimboeuf wrote:
> For enum switch statements which handle all possible cases, remove the
> default case so a compiler warning gets printed if one of the enums gets
> accidentally omitted from the switch statement.
> 
> Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
> ---
>  arch/x86/kernel/cpu/bugs.c | 17 +++++++----------
>  1 file changed, 7 insertions(+), 10 deletions(-)

You could just as well take care of the default: cases in
update_srbds_msr(), retbleed_select_mitigation(), unpriv_ebpf_notify(),
spectre_v2_parse_user_cmdline() and cpu_show_common() and get rid of
them all in this file and have the compiler warn for all of them.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


* Re: [PATCH 18/23] x86/srso: Remove redundant X86_FEATURE_ENTRY_IBPB check
  2023-08-25  7:01 ` [PATCH 18/23] x86/srso: Remove redundant X86_FEATURE_ENTRY_IBPB check Josh Poimboeuf
  2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
@ 2023-09-02  9:10   ` Borislav Petkov
  1 sibling, 0 replies; 71+ messages in thread
From: Borislav Petkov @ 2023-09-02  9:10 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: x86, linux-kernel, Peter Zijlstra, Babu Moger, Paolo Bonzini,
	Sean Christopherson, David.Kaplan, Andrew Cooper, Nikolay Borisov,
	gregkh, Thomas Gleixner

On Fri, Aug 25, 2023 at 12:01:49AM -0700, Josh Poimboeuf wrote:
> The X86_FEATURE_ENTRY_IBPB check is redundant here due to the above
> RETBLEED_MITIGATION_IBPB check.  RETBLEED_MITIGATION_IBPB already
> implies X86_FEATURE_ENTRY_IBPB.  So if we got here and 'has_microcode'
> is true, it means X86_FEATURE_ENTRY_IBPB is not set.
> 
> Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
> ---
>  arch/x86/kernel/cpu/bugs.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)

I still don't like this one:

https://lore.kernel.org/r/20230825070936.GEZOhTsPiTLhY1i9xH@fat_crate.local

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


* Re: [tip: x86/bugs] x86/srso: Fix vulnerability reporting for missing microcode
  2023-09-01  9:40     ` Borislav Petkov
@ 2023-09-02 10:46       ` Ingo Molnar
  2023-09-02 17:04         ` Borislav Petkov
  2023-09-05  4:57         ` Josh Poimboeuf
  0 siblings, 2 replies; 71+ messages in thread
From: Ingo Molnar @ 2023-09-02 10:46 UTC (permalink / raw)
  To: Borislav Petkov; +Cc: Josh Poimboeuf, linux-tip-commits, x86, linux-kernel


* Borislav Petkov <bp@alien8.de> wrote:

> On Fri, Aug 25, 2023 at 10:19:32AM -0000, tip-bot2 for Josh Poimboeuf wrote:
> > The following commit has been merged into the x86/bugs branch of tip:
> > 
> > Commit-ID:     b3be1397be0340b2c30b2dcd7339dbfaa5563e2b
> > Gitweb:        https://git.kernel.org/tip/b3be1397be0340b2c30b2dcd7339dbfaa5563e2b
> > Author:        Josh Poimboeuf <jpoimboe@kernel.org>
> > AuthorDate:    Fri, 25 Aug 2023 00:01:41 -07:00
> > Committer:     Ingo Molnar <mingo@kernel.org>
> > CommitterDate: Fri, 25 Aug 2023 11:21:59 +02:00
> > 
> > x86/srso: Fix vulnerability reporting for missing microcode
> > 
> > The SRSO default safe-ret mitigation is reported as "mitigated" even if
> > microcode hasn't been updated.  That's wrong because userspace may still
> > be vulnerable to SRSO attacks due to IBPB not flushing branch type
> > predictions.
> > 
> > Report the safe-ret + !microcode case as vulnerable.
> > 
> > Also report the microcode-only case as vulnerable as it leaves the
> > kernel open to attacks.
> > 
> > Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
> > Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
> > Signed-off-by: Ingo Molnar <mingo@kernel.org>
> > Link: https://lore.kernel.org/r/65556eeb1bf7cb9bd7db8662ef115dd73191db84.1692919072.git.jpoimboe@kernel.org
> > ---
> >  Documentation/admin-guide/hw-vuln/srso.rst | 22 ++++++++++----
> >  arch/x86/kernel/cpu/bugs.c                 | 34 ++++++++++++---------
> >  2 files changed, 37 insertions(+), 19 deletions(-)
> 
> This is still unfixed:
> 
> https://lore.kernel.org/r/20230825072542.GFZOhXdgXpUidW51lC@fat_crate.local
> 
> mingo, do you want fixes ontop or do you wanna rebase this branch?

Since these are fixes that are supposed to be fully correct,
I'd suggest we rebase it.

Josh, mind sending a v3 SRSO series, as a replacement for x86/bugs,
with Boris's review & testing feedback addressed?

[ Feel free to send it as a delta series against v2 in x86/bugs and I'll 
  backmerge it all. ]

Thanks,

	Ingo


* Re: [tip: x86/bugs] x86/srso: Fix vulnerability reporting for missing microcode
  2023-09-02 10:46       ` Ingo Molnar
@ 2023-09-02 17:04         ` Borislav Petkov
  2023-09-03 14:37           ` Borislav Petkov
  2023-09-05  4:57         ` Josh Poimboeuf
  1 sibling, 1 reply; 71+ messages in thread
From: Borislav Petkov @ 2023-09-02 17:04 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: Josh Poimboeuf, linux-tip-commits, x86, linux-kernel

On September 2, 2023 1:46:05 PM GMT+03:00, Ingo Molnar <mingo@kernel.org> wrote:
>Since these are fixes that are supposed to be fully correct,
>I'd suggest we rebase it.
>
>Josh, mind sending a v3 SRSO series, as a replacement for x86/bugs,
>with Boris's review & testing feedback addressed?
>
>[ Feel free to send it as a delta series against v2 in x86/bugs and I'll 
>  backmerge it all. ]

Ok, sounds good. Give me a while to go through the rest first. I'll let you guys know. Reviewing with a slow, ancient laptop is not the easiest thing in the world...

Thx.

-- 
Sent from a small device: formatting sucks and brevity is inevitable.


* Re: [tip: x86/bugs] x86/srso: Fix vulnerability reporting for missing microcode
  2023-09-02 17:04         ` Borislav Petkov
@ 2023-09-03 14:37           ` Borislav Petkov
  0 siblings, 0 replies; 71+ messages in thread
From: Borislav Petkov @ 2023-09-03 14:37 UTC (permalink / raw)
  To: Josh Poimboeuf; +Cc: Ingo Molnar, linux-tip-commits, x86, linux-kernel

On Sat, Sep 02, 2023 at 08:04:07PM +0300, Borislav Petkov wrote:
> Ok, sounds good. Give  me a while to go through the rest first. I'll
> let you guys know. Reviewing with a slow, ancient laptop is not the
> easiest thing in the world...

Ok, apart from the couple of small things I mentioned, this cleanup
looks good, thanks Josh!

Pls send a new version with the feedback addressed.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


* Re: [tip: x86/bugs] x86/srso: Fix vulnerability reporting for missing microcode
  2023-09-02 10:46       ` Ingo Molnar
  2023-09-02 17:04         ` Borislav Petkov
@ 2023-09-05  4:57         ` Josh Poimboeuf
  1 sibling, 0 replies; 71+ messages in thread
From: Josh Poimboeuf @ 2023-09-05  4:57 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: Borislav Petkov, linux-tip-commits, x86, linux-kernel

On Sat, Sep 02, 2023 at 12:46:05PM +0200, Ingo Molnar wrote:
> 
> * Borislav Petkov <bp@alien8.de> wrote:
> 
> > On Fri, Aug 25, 2023 at 10:19:32AM -0000, tip-bot2 for Josh Poimboeuf wrote:
> > > The following commit has been merged into the x86/bugs branch of tip:
> > > 
> > > Commit-ID:     b3be1397be0340b2c30b2dcd7339dbfaa5563e2b
> > > Gitweb:        https://git.kernel.org/tip/b3be1397be0340b2c30b2dcd7339dbfaa5563e2b
> > > Author:        Josh Poimboeuf <jpoimboe@kernel.org>
> > > AuthorDate:    Fri, 25 Aug 2023 00:01:41 -07:00
> > > Committer:     Ingo Molnar <mingo@kernel.org>
> > > CommitterDate: Fri, 25 Aug 2023 11:21:59 +02:00
> > > 
> > > x86/srso: Fix vulnerability reporting for missing microcode
> > > 
> > > The SRSO default safe-ret mitigation is reported as "mitigated" even if
> > > microcode hasn't been updated.  That's wrong because userspace may still
> > > be vulnerable to SRSO attacks due to IBPB not flushing branch type
> > > predictions.
> > > 
> > > Report the safe-ret + !microcode case as vulnerable.
> > > 
> > > Also report the microcode-only case as vulnerable as it leaves the
> > > kernel open to attacks.
> > > 
> > > Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
> > > Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
> > > Signed-off-by: Ingo Molnar <mingo@kernel.org>
> > > Link: https://lore.kernel.org/r/65556eeb1bf7cb9bd7db8662ef115dd73191db84.1692919072.git.jpoimboe@kernel.org
> > > ---
> > >  Documentation/admin-guide/hw-vuln/srso.rst | 22 ++++++++++----
> > >  arch/x86/kernel/cpu/bugs.c                 | 34 ++++++++++++---------
> > >  2 files changed, 37 insertions(+), 19 deletions(-)
> > 
> > This is still unfixed:
> > 
> > https://lore.kernel.org/r/20230825072542.GFZOhXdgXpUidW51lC@fat_crate.local
> > 
> > mingo, do you want fixes ontop or do you wanna rebase this branch?
> 
> Since these are fixes that are supposed to be fully correct,
> I'd suggest we rebase it.
> 
> Josh, mind sending a v3 SRSO series, as a replacement for x86/bugs,
> with Boris's review & testing feedback addressed?

Ok, I'll post a v3 (with Boris' comments integrated).

-- 
Josh


* Re: [PATCH 16/23] x86/bugs: Remove default case for fully switched enums
  2023-09-02  9:02   ` [PATCH 16/23] " Borislav Petkov
@ 2023-09-05  5:08     ` Josh Poimboeuf
  0 siblings, 0 replies; 71+ messages in thread
From: Josh Poimboeuf @ 2023-09-05  5:08 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: x86, linux-kernel, Peter Zijlstra, Babu Moger, Paolo Bonzini,
	Sean Christopherson, David.Kaplan, Andrew Cooper, Nikolay Borisov,
	gregkh, Thomas Gleixner

On Sat, Sep 02, 2023 at 11:02:16AM +0200, Borislav Petkov wrote:
> On Fri, Aug 25, 2023 at 12:01:47AM -0700, Josh Poimboeuf wrote:
> > For enum switch statements which handle all possible cases, remove the
> > default case so a compiler warning gets printed if one of the enums gets
> > accidentally omitted from the switch statement.
> > 
> > Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
> > ---
> >  arch/x86/kernel/cpu/bugs.c | 17 +++++++----------
> >  1 file changed, 7 insertions(+), 10 deletions(-)
> 
> You could just as well take care of the default: cases in
> update_srbds_msr(), retbleed_select_mitigation(), unpriv_ebpf_notify(),
> spectre_v2_parse_user_cmdline() and cpu_show_common() and get rid of
> them all in this file and have the compiler warn for all of them.

I tried that, but adding all the unused cases adds a LOT of noise.  For
example the switch statement in spectre_v2_parse_user_cmdline() has
eight unused enums.  'default' is much more compact and readable than a
big list of all the unused enums which aren't really relevant there.

And in other places it added confusion. e.g., "RETBLEED_MITIGATION_NONE
isn't possible here, why is it being checked?"

It's a balance between readability and robustness (which is itself
affected by readability).  So I just constrained the patch to switch
statements which already handle all possible cases (as described above).

-- 
Josh


* [tip: x86/bugs] x86/srso: Fix vulnerability reporting for missing microcode
  2023-09-05  5:04 [PATCH v3 08/20] x86/srso: Fix vulnerability reporting for missing microcode Josh Poimboeuf
@ 2023-09-05 10:09 ` tip-bot2 for Josh Poimboeuf
  2023-09-19  9:53 ` tip-bot2 for Josh Poimboeuf
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 71+ messages in thread
From: tip-bot2 for Josh Poimboeuf @ 2023-09-05 10:09 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Josh Poimboeuf, Ingo Molnar, Borislav Petkov (AMD), x86,
	linux-kernel

The following commit has been merged into the x86/bugs branch of tip:

Commit-ID:     534be1d0ecfa327cda06fd9e556b2f56062da3d7
Gitweb:        https://git.kernel.org/tip/534be1d0ecfa327cda06fd9e556b2f56062da3d7
Author:        Josh Poimboeuf <jpoimboe@kernel.org>
AuthorDate:    Mon, 04 Sep 2023 22:04:52 -07:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Tue, 05 Sep 2023 12:05:07 +02:00

x86/srso: Fix vulnerability reporting for missing microcode

The SRSO default safe-ret mitigation is reported as "mitigated" even if
microcode hasn't been updated.  That's wrong because userspace may still
be vulnerable to SRSO attacks due to IBPB not flushing branch type
predictions.

Report the safe-ret + !microcode case as vulnerable.

Also report the microcode-only case as vulnerable as it leaves the
kernel open to attacks.

Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/a8a14f97d1b0e03ec255c81637afdf4cf0ae9c99.1693889988.git.jpoimboe@kernel.org
---
 Documentation/admin-guide/hw-vuln/srso.rst | 24 +++++++++-----
 arch/x86/kernel/cpu/bugs.c                 | 36 ++++++++++++---------
 2 files changed, 39 insertions(+), 21 deletions(-)

diff --git a/Documentation/admin-guide/hw-vuln/srso.rst b/Documentation/admin-guide/hw-vuln/srso.rst
index b6cfb51..e715bfc 100644
--- a/Documentation/admin-guide/hw-vuln/srso.rst
+++ b/Documentation/admin-guide/hw-vuln/srso.rst
@@ -46,12 +46,22 @@ The possible values in this file are:
 
    The processor is not vulnerable
 
- * 'Vulnerable: no microcode':
+* 'Vulnerable':
+
+   The processor is vulnerable and no mitigations have been applied.
+
+ * 'Vulnerable: No microcode':
 
    The processor is vulnerable, no microcode extending IBPB
    functionality to address the vulnerability has been applied.
 
- * 'Mitigation: microcode':
+ * 'Vulnerable: Safe RET, no microcode':
+
+   The "Safe RET" mitigation (see below) has been applied to protect the
+   kernel, but the IBPB-extending microcode has not been applied.  User
+   space tasks may still be vulnerable.
+
+ * 'Vulnerable: Microcode, no safe RET':
 
    Extended IBPB functionality microcode patch has been applied. It does
    not address User->Kernel and Guest->Host transitions protection but it
@@ -72,11 +82,11 @@ The possible values in this file are:
 
    (spec_rstack_overflow=microcode)
 
- * 'Mitigation: safe RET':
+ * 'Mitigation: Safe RET':
 
-   Software-only mitigation. It complements the extended IBPB microcode
-   patch functionality by addressing User->Kernel and Guest->Host
-   transitions protection.
+   Combined microcode/software mitigation. It complements the
+   extended IBPB microcode patch functionality by addressing
+   User->Kernel and Guest->Host transitions protection.
 
    Selected by default or by spec_rstack_overflow=safe-ret
 
@@ -129,7 +139,7 @@ an indrect branch prediction barrier after having applied the required
 microcode patch for one's system. This mitigation comes also at
 a performance cost.
 
-Mitigation: safe RET
+Mitigation: Safe RET
 --------------------
 
 The mitigation works by ensuring all RET instructions speculate to
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 6c47f37..e45dd69 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2353,6 +2353,8 @@ early_param("l1tf", l1tf_cmdline);
 
 enum srso_mitigation {
 	SRSO_MITIGATION_NONE,
+	SRSO_MITIGATION_UCODE_NEEDED,
+	SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED,
 	SRSO_MITIGATION_MICROCODE,
 	SRSO_MITIGATION_SAFE_RET,
 	SRSO_MITIGATION_IBPB,
@@ -2368,11 +2370,13 @@ enum srso_mitigation_cmd {
 };
 
 static const char * const srso_strings[] = {
-	[SRSO_MITIGATION_NONE]           = "Vulnerable",
-	[SRSO_MITIGATION_MICROCODE]      = "Mitigation: microcode",
-	[SRSO_MITIGATION_SAFE_RET]	 = "Mitigation: safe RET",
-	[SRSO_MITIGATION_IBPB]		 = "Mitigation: IBPB",
-	[SRSO_MITIGATION_IBPB_ON_VMEXIT] = "Mitigation: IBPB on VMEXIT only"
+	[SRSO_MITIGATION_NONE]			= "Vulnerable",
+	[SRSO_MITIGATION_UCODE_NEEDED]		= "Vulnerable: No microcode",
+	[SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED]	= "Vulnerable: Safe RET, no microcode",
+	[SRSO_MITIGATION_MICROCODE]		= "Vulnerable: Microcode, no safe RET",
+	[SRSO_MITIGATION_SAFE_RET]		= "Mitigation: Safe RET",
+	[SRSO_MITIGATION_IBPB]			= "Mitigation: IBPB",
+	[SRSO_MITIGATION_IBPB_ON_VMEXIT]	= "Mitigation: IBPB on VMEXIT only"
 };
 
 static enum srso_mitigation srso_mitigation __ro_after_init = SRSO_MITIGATION_NONE;
@@ -2409,10 +2413,7 @@ static void __init srso_select_mitigation(void)
 	if (!boot_cpu_has_bug(X86_BUG_SRSO) || cpu_mitigations_off())
 		goto pred_cmd;
 
-	if (!has_microcode) {
-		pr_warn("IBPB-extending microcode not applied!\n");
-		pr_warn(SRSO_NOTICE);
-	} else {
+	if (has_microcode) {
 		/*
 		 * Zen1/2 with SMT off aren't vulnerable after the right
 		 * IBPB microcode has been applied.
@@ -2428,6 +2429,12 @@ static void __init srso_select_mitigation(void)
 			srso_mitigation = SRSO_MITIGATION_IBPB;
 			goto out;
 		}
+	} else {
+		pr_warn("IBPB-extending microcode not applied!\n");
+		pr_warn(SRSO_NOTICE);
+
+		/* may be overwritten by SRSO_CMD_SAFE_RET below */
+		srso_mitigation = SRSO_MITIGATION_UCODE_NEEDED;
 	}
 
 	switch (srso_cmd) {
@@ -2457,7 +2464,10 @@ static void __init srso_select_mitigation(void)
 				setup_force_cpu_cap(X86_FEATURE_SRSO);
 				x86_return_thunk = srso_return_thunk;
 			}
-			srso_mitigation = SRSO_MITIGATION_SAFE_RET;
+			if (has_microcode)
+				srso_mitigation = SRSO_MITIGATION_SAFE_RET;
+			else
+				srso_mitigation = SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED;
 		} else {
 			pr_err("WARNING: kernel not compiled with CPU_SRSO.\n");
 		}
@@ -2490,7 +2500,7 @@ static void __init srso_select_mitigation(void)
 	}
 
 out:
-	pr_info("%s%s\n", srso_strings[srso_mitigation], has_microcode ? "" : ", no microcode");
+	pr_info("%s\n", srso_strings[srso_mitigation]);
 
 pred_cmd:
 	if ((!boot_cpu_has_bug(X86_BUG_SRSO) || srso_cmd == SRSO_CMD_OFF) &&
@@ -2701,9 +2711,7 @@ static ssize_t srso_show_state(char *buf)
 	if (boot_cpu_has(X86_FEATURE_SRSO_NO))
 		return sysfs_emit(buf, "Mitigation: SMT disabled\n");
 
-	return sysfs_emit(buf, "%s%s\n",
-			  srso_strings[srso_mitigation],
-			  boot_cpu_has(X86_FEATURE_IBPB_BRTYPE) ? "" : ", no microcode");
+	return sysfs_emit(buf, "%s\n", srso_strings[srso_mitigation]);
 }
 
 static ssize_t gds_show_state(char *buf)

^ permalink raw reply related	[flat|nested] 71+ messages in thread

* [tip: x86/bugs] x86/srso: Fix vulnerability reporting for missing microcode
  2023-09-05  5:04 [PATCH v3 08/20] x86/srso: Fix vulnerability reporting for missing microcode Josh Poimboeuf
  2023-09-05 10:09 ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
@ 2023-09-19  9:53 ` tip-bot2 for Josh Poimboeuf
  2023-09-23 12:20 ` tip-bot2 for Josh Poimboeuf
  2023-10-20 11:37 ` tip-bot2 for Josh Poimboeuf
  3 siblings, 0 replies; 71+ messages in thread
From: tip-bot2 for Josh Poimboeuf @ 2023-09-19  9:53 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Josh Poimboeuf, Ingo Molnar, Borislav Petkov (AMD), x86,
	linux-kernel

The following commit has been merged into the x86/bugs branch of tip:

Commit-ID:     3f0659662ac8e0b76e715c904ccbf2ca9bf64d74
Gitweb:        https://git.kernel.org/tip/3f0659662ac8e0b76e715c904ccbf2ca9bf64d74
Author:        Josh Poimboeuf <jpoimboe@kernel.org>
AuthorDate:    Mon, 04 Sep 2023 22:04:52 -07:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Tue, 19 Sep 2023 11:42:47 +02:00

x86/srso: Fix vulnerability reporting for missing microcode

The SRSO default safe-ret mitigation is reported as "mitigated" even if
microcode hasn't been updated.  That's wrong because userspace may still
be vulnerable to SRSO attacks due to IBPB not flushing branch type
predictions.

Report the safe-ret + !microcode case as vulnerable.

Also report the microcode-only case as vulnerable as it leaves the
kernel open to attacks.

Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/a8a14f97d1b0e03ec255c81637afdf4cf0ae9c99.1693889988.git.jpoimboe@kernel.org
---
 Documentation/admin-guide/hw-vuln/srso.rst | 24 +++++++++-----
 arch/x86/kernel/cpu/bugs.c                 | 36 ++++++++++++---------
 2 files changed, 39 insertions(+), 21 deletions(-)

diff --git a/Documentation/admin-guide/hw-vuln/srso.rst b/Documentation/admin-guide/hw-vuln/srso.rst
index b6cfb51..e715bfc 100644
--- a/Documentation/admin-guide/hw-vuln/srso.rst
+++ b/Documentation/admin-guide/hw-vuln/srso.rst
@@ -46,12 +46,22 @@ The possible values in this file are:
 
    The processor is not vulnerable
 
- * 'Vulnerable: no microcode':
+* 'Vulnerable':
+
+   The processor is vulnerable and no mitigations have been applied.
+
+ * 'Vulnerable: No microcode':
 
    The processor is vulnerable, no microcode extending IBPB
    functionality to address the vulnerability has been applied.
 
- * 'Mitigation: microcode':
+ * 'Vulnerable: Safe RET, no microcode':
+
+   The "Safe RET" mitigation (see below) has been applied to protect the
+   kernel, but the IBPB-extending microcode has not been applied.  User
+   space tasks may still be vulnerable.
+
+ * 'Vulnerable: Microcode, no safe RET':
 
    Extended IBPB functionality microcode patch has been applied. It does
    not address User->Kernel and Guest->Host transitions protection but it
@@ -72,11 +82,11 @@ The possible values in this file are:
 
    (spec_rstack_overflow=microcode)
 
- * 'Mitigation: safe RET':
+ * 'Mitigation: Safe RET':
 
-   Software-only mitigation. It complements the extended IBPB microcode
-   patch functionality by addressing User->Kernel and Guest->Host
-   transitions protection.
+   Combined microcode/software mitigation. It complements the
+   extended IBPB microcode patch functionality by addressing
+   User->Kernel and Guest->Host transitions protection.
 
    Selected by default or by spec_rstack_overflow=safe-ret
 
@@ -129,7 +139,7 @@ an indrect branch prediction barrier after having applied the required
 microcode patch for one's system. This mitigation comes also at
 a performance cost.
 
-Mitigation: safe RET
+Mitigation: Safe RET
 --------------------
 
 The mitigation works by ensuring all RET instructions speculate to
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 6c47f37..e45dd69 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2353,6 +2353,8 @@ early_param("l1tf", l1tf_cmdline);
 
 enum srso_mitigation {
 	SRSO_MITIGATION_NONE,
+	SRSO_MITIGATION_UCODE_NEEDED,
+	SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED,
 	SRSO_MITIGATION_MICROCODE,
 	SRSO_MITIGATION_SAFE_RET,
 	SRSO_MITIGATION_IBPB,
@@ -2368,11 +2370,13 @@ enum srso_mitigation_cmd {
 };
 
 static const char * const srso_strings[] = {
-	[SRSO_MITIGATION_NONE]           = "Vulnerable",
-	[SRSO_MITIGATION_MICROCODE]      = "Mitigation: microcode",
-	[SRSO_MITIGATION_SAFE_RET]	 = "Mitigation: safe RET",
-	[SRSO_MITIGATION_IBPB]		 = "Mitigation: IBPB",
-	[SRSO_MITIGATION_IBPB_ON_VMEXIT] = "Mitigation: IBPB on VMEXIT only"
+	[SRSO_MITIGATION_NONE]			= "Vulnerable",
+	[SRSO_MITIGATION_UCODE_NEEDED]		= "Vulnerable: No microcode",
+	[SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED]	= "Vulnerable: Safe RET, no microcode",
+	[SRSO_MITIGATION_MICROCODE]		= "Vulnerable: Microcode, no safe RET",
+	[SRSO_MITIGATION_SAFE_RET]		= "Mitigation: Safe RET",
+	[SRSO_MITIGATION_IBPB]			= "Mitigation: IBPB",
+	[SRSO_MITIGATION_IBPB_ON_VMEXIT]	= "Mitigation: IBPB on VMEXIT only"
 };
 
 static enum srso_mitigation srso_mitigation __ro_after_init = SRSO_MITIGATION_NONE;
@@ -2409,10 +2413,7 @@ static void __init srso_select_mitigation(void)
 	if (!boot_cpu_has_bug(X86_BUG_SRSO) || cpu_mitigations_off())
 		goto pred_cmd;
 
-	if (!has_microcode) {
-		pr_warn("IBPB-extending microcode not applied!\n");
-		pr_warn(SRSO_NOTICE);
-	} else {
+	if (has_microcode) {
 		/*
 		 * Zen1/2 with SMT off aren't vulnerable after the right
 		 * IBPB microcode has been applied.
@@ -2428,6 +2429,12 @@ static void __init srso_select_mitigation(void)
 			srso_mitigation = SRSO_MITIGATION_IBPB;
 			goto out;
 		}
+	} else {
+		pr_warn("IBPB-extending microcode not applied!\n");
+		pr_warn(SRSO_NOTICE);
+
+		/* may be overwritten by SRSO_CMD_SAFE_RET below */
+		srso_mitigation = SRSO_MITIGATION_UCODE_NEEDED;
 	}
 
 	switch (srso_cmd) {
@@ -2457,7 +2464,10 @@ static void __init srso_select_mitigation(void)
 				setup_force_cpu_cap(X86_FEATURE_SRSO);
 				x86_return_thunk = srso_return_thunk;
 			}
-			srso_mitigation = SRSO_MITIGATION_SAFE_RET;
+			if (has_microcode)
+				srso_mitigation = SRSO_MITIGATION_SAFE_RET;
+			else
+				srso_mitigation = SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED;
 		} else {
 			pr_err("WARNING: kernel not compiled with CPU_SRSO.\n");
 		}
@@ -2490,7 +2500,7 @@ static void __init srso_select_mitigation(void)
 	}
 
 out:
-	pr_info("%s%s\n", srso_strings[srso_mitigation], has_microcode ? "" : ", no microcode");
+	pr_info("%s\n", srso_strings[srso_mitigation]);
 
 pred_cmd:
 	if ((!boot_cpu_has_bug(X86_BUG_SRSO) || srso_cmd == SRSO_CMD_OFF) &&
@@ -2701,9 +2711,7 @@ static ssize_t srso_show_state(char *buf)
 	if (boot_cpu_has(X86_FEATURE_SRSO_NO))
 		return sysfs_emit(buf, "Mitigation: SMT disabled\n");
 
-	return sysfs_emit(buf, "%s%s\n",
-			  srso_strings[srso_mitigation],
-			  boot_cpu_has(X86_FEATURE_IBPB_BRTYPE) ? "" : ", no microcode");
+	return sysfs_emit(buf, "%s\n", srso_strings[srso_mitigation]);
 }
 
 static ssize_t gds_show_state(char *buf)

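The srso.rst hunk above enumerates the strings user space can read from /sys/devices/system/cpu/vulnerabilities/spec_rstack_overflow. A minimal sketch of how a script might classify them; the sample value is hard-coded so the snippet runs anywhere, whereas on a real system one would read the sysfs file instead:

```shell
# Sample value; on a real system:
#   status=$(cat /sys/devices/system/cpu/vulnerabilities/spec_rstack_overflow)
status="Vulnerable: Safe RET, no microcode"

# All "Vulnerable*" prefixes (including the new safe-RET-without-microcode
# string from this patch) count as vulnerable; only "Mitigation:*" does not.
class=$(case "$status" in
	Mitigation:*) echo mitigated ;;
	Vulnerable*)  echo vulnerable ;;
esac)
echo "$class"
```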
* [tip: x86/bugs] x86/srso: Fix vulnerability reporting for missing microcode
  2023-09-05  5:04 [PATCH v3 08/20] x86/srso: Fix vulnerability reporting for missing microcode Josh Poimboeuf
  2023-09-05 10:09 ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
  2023-09-19  9:53 ` tip-bot2 for Josh Poimboeuf
@ 2023-09-23 12:20 ` tip-bot2 for Josh Poimboeuf
  2023-10-20 11:37 ` tip-bot2 for Josh Poimboeuf
  3 siblings, 0 replies; 71+ messages in thread
From: tip-bot2 for Josh Poimboeuf @ 2023-09-23 12:20 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Josh Poimboeuf, Ingo Molnar, Borislav Petkov (AMD), x86,
	linux-kernel

The following commit has been merged into the x86/bugs branch of tip:

Commit-ID:     8caca8ceaae016329eb055f39bb0c95246bcc5b1
Gitweb:        https://git.kernel.org/tip/8caca8ceaae016329eb055f39bb0c95246bcc5b1
Author:        Josh Poimboeuf <jpoimboe@kernel.org>
AuthorDate:    Mon, 04 Sep 2023 22:04:52 -07:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Sat, 23 Sep 2023 14:13:02 +02:00

x86/srso: Fix vulnerability reporting for missing microcode

The SRSO default safe-ret mitigation is reported as "mitigated" even if
microcode hasn't been updated.  That's wrong because userspace may still
be vulnerable to SRSO attacks due to IBPB not flushing branch type
predictions.

Report the safe-ret + !microcode case as vulnerable.

Also report the microcode-only case as vulnerable as it leaves the
kernel open to attacks.

Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/a8a14f97d1b0e03ec255c81637afdf4cf0ae9c99.1693889988.git.jpoimboe@kernel.org
---
 Documentation/admin-guide/hw-vuln/srso.rst | 24 +++++++++-----
 arch/x86/kernel/cpu/bugs.c                 | 36 ++++++++++++---------
 2 files changed, 39 insertions(+), 21 deletions(-)

diff --git a/Documentation/admin-guide/hw-vuln/srso.rst b/Documentation/admin-guide/hw-vuln/srso.rst
index b6cfb51..e715bfc 100644
--- a/Documentation/admin-guide/hw-vuln/srso.rst
+++ b/Documentation/admin-guide/hw-vuln/srso.rst
@@ -46,12 +46,22 @@ The possible values in this file are:
 
    The processor is not vulnerable
 
- * 'Vulnerable: no microcode':
+* 'Vulnerable':
+
+   The processor is vulnerable and no mitigations have been applied.
+
+ * 'Vulnerable: No microcode':
 
    The processor is vulnerable, no microcode extending IBPB
    functionality to address the vulnerability has been applied.
 
- * 'Mitigation: microcode':
+ * 'Vulnerable: Safe RET, no microcode':
+
+   The "Safe RET" mitigation (see below) has been applied to protect the
+   kernel, but the IBPB-extending microcode has not been applied.  User
+   space tasks may still be vulnerable.
+
+ * 'Vulnerable: Microcode, no safe RET':
 
    Extended IBPB functionality microcode patch has been applied. It does
    not address User->Kernel and Guest->Host transitions protection but it
@@ -72,11 +82,11 @@ The possible values in this file are:
 
    (spec_rstack_overflow=microcode)
 
- * 'Mitigation: safe RET':
+ * 'Mitigation: Safe RET':
 
-   Software-only mitigation. It complements the extended IBPB microcode
-   patch functionality by addressing User->Kernel and Guest->Host
-   transitions protection.
+   Combined microcode/software mitigation. It complements the
+   extended IBPB microcode patch functionality by addressing
+   User->Kernel and Guest->Host transitions protection.
 
    Selected by default or by spec_rstack_overflow=safe-ret
 
@@ -129,7 +139,7 @@ an indrect branch prediction barrier after having applied the required
 microcode patch for one's system. This mitigation comes also at
 a performance cost.
 
-Mitigation: safe RET
+Mitigation: Safe RET
 --------------------
 
 The mitigation works by ensuring all RET instructions speculate to
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 6c47f37..e45dd69 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2353,6 +2353,8 @@ early_param("l1tf", l1tf_cmdline);
 
 enum srso_mitigation {
 	SRSO_MITIGATION_NONE,
+	SRSO_MITIGATION_UCODE_NEEDED,
+	SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED,
 	SRSO_MITIGATION_MICROCODE,
 	SRSO_MITIGATION_SAFE_RET,
 	SRSO_MITIGATION_IBPB,
@@ -2368,11 +2370,13 @@ enum srso_mitigation_cmd {
 };
 
 static const char * const srso_strings[] = {
-	[SRSO_MITIGATION_NONE]           = "Vulnerable",
-	[SRSO_MITIGATION_MICROCODE]      = "Mitigation: microcode",
-	[SRSO_MITIGATION_SAFE_RET]	 = "Mitigation: safe RET",
-	[SRSO_MITIGATION_IBPB]		 = "Mitigation: IBPB",
-	[SRSO_MITIGATION_IBPB_ON_VMEXIT] = "Mitigation: IBPB on VMEXIT only"
+	[SRSO_MITIGATION_NONE]			= "Vulnerable",
+	[SRSO_MITIGATION_UCODE_NEEDED]		= "Vulnerable: No microcode",
+	[SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED]	= "Vulnerable: Safe RET, no microcode",
+	[SRSO_MITIGATION_MICROCODE]		= "Vulnerable: Microcode, no safe RET",
+	[SRSO_MITIGATION_SAFE_RET]		= "Mitigation: Safe RET",
+	[SRSO_MITIGATION_IBPB]			= "Mitigation: IBPB",
+	[SRSO_MITIGATION_IBPB_ON_VMEXIT]	= "Mitigation: IBPB on VMEXIT only"
 };
 
 static enum srso_mitigation srso_mitigation __ro_after_init = SRSO_MITIGATION_NONE;
@@ -2409,10 +2413,7 @@ static void __init srso_select_mitigation(void)
 	if (!boot_cpu_has_bug(X86_BUG_SRSO) || cpu_mitigations_off())
 		goto pred_cmd;
 
-	if (!has_microcode) {
-		pr_warn("IBPB-extending microcode not applied!\n");
-		pr_warn(SRSO_NOTICE);
-	} else {
+	if (has_microcode) {
 		/*
 		 * Zen1/2 with SMT off aren't vulnerable after the right
 		 * IBPB microcode has been applied.
@@ -2428,6 +2429,12 @@ static void __init srso_select_mitigation(void)
 			srso_mitigation = SRSO_MITIGATION_IBPB;
 			goto out;
 		}
+	} else {
+		pr_warn("IBPB-extending microcode not applied!\n");
+		pr_warn(SRSO_NOTICE);
+
+		/* may be overwritten by SRSO_CMD_SAFE_RET below */
+		srso_mitigation = SRSO_MITIGATION_UCODE_NEEDED;
 	}
 
 	switch (srso_cmd) {
@@ -2457,7 +2464,10 @@ static void __init srso_select_mitigation(void)
 				setup_force_cpu_cap(X86_FEATURE_SRSO);
 				x86_return_thunk = srso_return_thunk;
 			}
-			srso_mitigation = SRSO_MITIGATION_SAFE_RET;
+			if (has_microcode)
+				srso_mitigation = SRSO_MITIGATION_SAFE_RET;
+			else
+				srso_mitigation = SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED;
 		} else {
 			pr_err("WARNING: kernel not compiled with CPU_SRSO.\n");
 		}
@@ -2490,7 +2500,7 @@ static void __init srso_select_mitigation(void)
 	}
 
 out:
-	pr_info("%s%s\n", srso_strings[srso_mitigation], has_microcode ? "" : ", no microcode");
+	pr_info("%s\n", srso_strings[srso_mitigation]);
 
 pred_cmd:
 	if ((!boot_cpu_has_bug(X86_BUG_SRSO) || srso_cmd == SRSO_CMD_OFF) &&
@@ -2701,9 +2711,7 @@ static ssize_t srso_show_state(char *buf)
 	if (boot_cpu_has(X86_FEATURE_SRSO_NO))
 		return sysfs_emit(buf, "Mitigation: SMT disabled\n");
 
-	return sysfs_emit(buf, "%s%s\n",
-			  srso_strings[srso_mitigation],
-			  boot_cpu_has(X86_FEATURE_IBPB_BRTYPE) ? "" : ", no microcode");
+	return sysfs_emit(buf, "%s\n", srso_strings[srso_mitigation]);
 }
 
 static ssize_t gds_show_state(char *buf)

* Re: [PATCH v2 00/23] SRSO fixes/cleanups
  2023-08-25  7:01 [PATCH v2 00/23] SRSO fixes/cleanups Josh Poimboeuf
                   ` (23 preceding siblings ...)
  2023-08-25 10:38 ` [PATCH v2 00/23] SRSO fixes/cleanups Ingo Molnar
@ 2023-10-05  1:29 ` Sean Christopherson
  24 siblings, 0 replies; 71+ messages in thread
From: Sean Christopherson @ 2023-10-05  1:29 UTC (permalink / raw)
  To: Sean Christopherson, x86, Josh Poimboeuf
  Cc: linux-kernel, Borislav Petkov, Peter Zijlstra, Babu Moger,
	Paolo Bonzini, David.Kaplan, Andrew Cooper, Nikolay Borisov,
	gregkh, Thomas Gleixner

On Fri, 25 Aug 2023 00:01:31 -0700, Josh Poimboeuf wrote:
> v2:
> - reorder everything: fixes/functionality before cleanups
> - split up KVM patch, add Sean's changes
> - add patch to support live migration
> - remove "default:" case for enums throughout bugs.c
> - various minor tweaks based on v1 discussions with Boris
> - add Reviewed-by's
> 
> [...]

Applied the KVM patches to kvm-x86 misc, thanks!  (I still haven't posted the
KVM-Unit-Test patches, *sigh*)

[4/23] KVM: x86: Add IBPB_BRTYPE support
       https://github.com/kvm-x86/linux/commit/6f0f23ef76be
[5/23] KVM: x86: Add SBPB support
       https://github.com/kvm-x86/linux/commit/e47d86083c66

--
https://github.com/kvm-x86/linux/tree/next

* [tip: x86/bugs] x86/srso: Fix vulnerability reporting for missing microcode
  2023-09-05  5:04 [PATCH v3 08/20] x86/srso: Fix vulnerability reporting for missing microcode Josh Poimboeuf
                   ` (2 preceding siblings ...)
  2023-09-23 12:20 ` tip-bot2 for Josh Poimboeuf
@ 2023-10-20 11:37 ` tip-bot2 for Josh Poimboeuf
  3 siblings, 0 replies; 71+ messages in thread
From: tip-bot2 for Josh Poimboeuf @ 2023-10-20 11:37 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Josh Poimboeuf, Ingo Molnar, Borislav Petkov (AMD), x86,
	linux-kernel

The following commit has been merged into the x86/bugs branch of tip:

Commit-ID:     dc6306ad5b0dda040baf1fde3cfd458e6abfc4da
Gitweb:        https://git.kernel.org/tip/dc6306ad5b0dda040baf1fde3cfd458e6abfc4da
Author:        Josh Poimboeuf <jpoimboe@kernel.org>
AuthorDate:    Mon, 04 Sep 2023 22:04:52 -07:00
Committer:     Borislav Petkov (AMD) <bp@alien8.de>
CommitterDate: Fri, 20 Oct 2023 11:46:09 +02:00

x86/srso: Fix vulnerability reporting for missing microcode

The SRSO default safe-ret mitigation is reported as "mitigated" even if
microcode hasn't been updated.  That's wrong because userspace may still
be vulnerable to SRSO attacks due to IBPB not flushing branch type
predictions.

Report the safe-ret + !microcode case as vulnerable.

Also report the microcode-only case as vulnerable as it leaves the
kernel open to attacks.

Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Acked-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/a8a14f97d1b0e03ec255c81637afdf4cf0ae9c99.1693889988.git.jpoimboe@kernel.org
---
 Documentation/admin-guide/hw-vuln/srso.rst | 24 +++++++++-----
 arch/x86/kernel/cpu/bugs.c                 | 36 ++++++++++++---------
 2 files changed, 39 insertions(+), 21 deletions(-)

diff --git a/Documentation/admin-guide/hw-vuln/srso.rst b/Documentation/admin-guide/hw-vuln/srso.rst
index b6cfb51..e715bfc 100644
--- a/Documentation/admin-guide/hw-vuln/srso.rst
+++ b/Documentation/admin-guide/hw-vuln/srso.rst
@@ -46,12 +46,22 @@ The possible values in this file are:
 
    The processor is not vulnerable
 
- * 'Vulnerable: no microcode':
+* 'Vulnerable':
+
+   The processor is vulnerable and no mitigations have been applied.
+
+ * 'Vulnerable: No microcode':
 
    The processor is vulnerable, no microcode extending IBPB
    functionality to address the vulnerability has been applied.
 
- * 'Mitigation: microcode':
+ * 'Vulnerable: Safe RET, no microcode':
+
+   The "Safe RET" mitigation (see below) has been applied to protect the
+   kernel, but the IBPB-extending microcode has not been applied.  User
+   space tasks may still be vulnerable.
+
+ * 'Vulnerable: Microcode, no safe RET':
 
    Extended IBPB functionality microcode patch has been applied. It does
    not address User->Kernel and Guest->Host transitions protection but it
@@ -72,11 +82,11 @@ The possible values in this file are:
 
    (spec_rstack_overflow=microcode)
 
- * 'Mitigation: safe RET':
+ * 'Mitigation: Safe RET':
 
-   Software-only mitigation. It complements the extended IBPB microcode
-   patch functionality by addressing User->Kernel and Guest->Host
-   transitions protection.
+   Combined microcode/software mitigation. It complements the
+   extended IBPB microcode patch functionality by addressing
+   User->Kernel and Guest->Host transitions protection.
 
    Selected by default or by spec_rstack_overflow=safe-ret
 
@@ -129,7 +139,7 @@ an indrect branch prediction barrier after having applied the required
 microcode patch for one's system. This mitigation comes also at
 a performance cost.
 
-Mitigation: safe RET
+Mitigation: Safe RET
 --------------------
 
 The mitigation works by ensuring all RET instructions speculate to
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 6c47f37..e45dd69 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2353,6 +2353,8 @@ early_param("l1tf", l1tf_cmdline);
 
 enum srso_mitigation {
 	SRSO_MITIGATION_NONE,
+	SRSO_MITIGATION_UCODE_NEEDED,
+	SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED,
 	SRSO_MITIGATION_MICROCODE,
 	SRSO_MITIGATION_SAFE_RET,
 	SRSO_MITIGATION_IBPB,
@@ -2368,11 +2370,13 @@ enum srso_mitigation_cmd {
 };
 
 static const char * const srso_strings[] = {
-	[SRSO_MITIGATION_NONE]           = "Vulnerable",
-	[SRSO_MITIGATION_MICROCODE]      = "Mitigation: microcode",
-	[SRSO_MITIGATION_SAFE_RET]	 = "Mitigation: safe RET",
-	[SRSO_MITIGATION_IBPB]		 = "Mitigation: IBPB",
-	[SRSO_MITIGATION_IBPB_ON_VMEXIT] = "Mitigation: IBPB on VMEXIT only"
+	[SRSO_MITIGATION_NONE]			= "Vulnerable",
+	[SRSO_MITIGATION_UCODE_NEEDED]		= "Vulnerable: No microcode",
+	[SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED]	= "Vulnerable: Safe RET, no microcode",
+	[SRSO_MITIGATION_MICROCODE]		= "Vulnerable: Microcode, no safe RET",
+	[SRSO_MITIGATION_SAFE_RET]		= "Mitigation: Safe RET",
+	[SRSO_MITIGATION_IBPB]			= "Mitigation: IBPB",
+	[SRSO_MITIGATION_IBPB_ON_VMEXIT]	= "Mitigation: IBPB on VMEXIT only"
 };
 
 static enum srso_mitigation srso_mitigation __ro_after_init = SRSO_MITIGATION_NONE;
@@ -2409,10 +2413,7 @@ static void __init srso_select_mitigation(void)
 	if (!boot_cpu_has_bug(X86_BUG_SRSO) || cpu_mitigations_off())
 		goto pred_cmd;
 
-	if (!has_microcode) {
-		pr_warn("IBPB-extending microcode not applied!\n");
-		pr_warn(SRSO_NOTICE);
-	} else {
+	if (has_microcode) {
 		/*
 		 * Zen1/2 with SMT off aren't vulnerable after the right
 		 * IBPB microcode has been applied.
@@ -2428,6 +2429,12 @@ static void __init srso_select_mitigation(void)
 			srso_mitigation = SRSO_MITIGATION_IBPB;
 			goto out;
 		}
+	} else {
+		pr_warn("IBPB-extending microcode not applied!\n");
+		pr_warn(SRSO_NOTICE);
+
+		/* may be overwritten by SRSO_CMD_SAFE_RET below */
+		srso_mitigation = SRSO_MITIGATION_UCODE_NEEDED;
 	}
 
 	switch (srso_cmd) {
@@ -2457,7 +2464,10 @@ static void __init srso_select_mitigation(void)
 				setup_force_cpu_cap(X86_FEATURE_SRSO);
 				x86_return_thunk = srso_return_thunk;
 			}
-			srso_mitigation = SRSO_MITIGATION_SAFE_RET;
+			if (has_microcode)
+				srso_mitigation = SRSO_MITIGATION_SAFE_RET;
+			else
+				srso_mitigation = SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED;
 		} else {
 			pr_err("WARNING: kernel not compiled with CPU_SRSO.\n");
 		}
@@ -2490,7 +2500,7 @@ static void __init srso_select_mitigation(void)
 	}
 
 out:
-	pr_info("%s%s\n", srso_strings[srso_mitigation], has_microcode ? "" : ", no microcode");
+	pr_info("%s\n", srso_strings[srso_mitigation]);
 
 pred_cmd:
 	if ((!boot_cpu_has_bug(X86_BUG_SRSO) || srso_cmd == SRSO_CMD_OFF) &&
@@ -2701,9 +2711,7 @@ static ssize_t srso_show_state(char *buf)
 	if (boot_cpu_has(X86_FEATURE_SRSO_NO))
 		return sysfs_emit(buf, "Mitigation: SMT disabled\n");
 
-	return sysfs_emit(buf, "%s%s\n",
-			  srso_strings[srso_mitigation],
-			  boot_cpu_has(X86_FEATURE_IBPB_BRTYPE) ? "" : ", no microcode");
+	return sysfs_emit(buf, "%s\n", srso_strings[srso_mitigation]);
 }
 
 static ssize_t gds_show_state(char *buf)

end of thread, other threads:[~2023-10-20 11:38 UTC | newest]

Thread overview: 71+ messages
-- links below jump to the message on this page --
2023-08-25  7:01 [PATCH v2 00/23] SRSO fixes/cleanups Josh Poimboeuf
2023-08-25  7:01 ` [PATCH 01/23] x86/srso: Fix srso_show_state() side effect Josh Poimboeuf
2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
2023-08-25  7:01 ` [PATCH 02/23] x86/srso: Set CPUID feature bits independently of bug or mitigation status Josh Poimboeuf
2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
2023-08-25  7:01 ` [PATCH 03/23] x86/srso: Don't probe microcode in a guest Josh Poimboeuf
2023-08-25  7:52   ` Andrew Cooper
2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
2023-08-25  7:01 ` [PATCH 04/23] KVM: x86: Add IBPB_BRTYPE support Josh Poimboeuf
2023-08-25 18:15   ` Sean Christopherson
2023-08-26 15:49     ` Josh Poimboeuf
2023-08-25  7:01 ` [PATCH 05/23] KVM: x86: Add SBPB support Josh Poimboeuf
2023-08-25 18:20   ` Sean Christopherson
2023-08-25  7:01 ` [PATCH 06/23] x86/srso: Fix SBPB enablement for spec_rstack_overflow=off Josh Poimboeuf
2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
2023-08-25  7:01 ` [PATCH 07/23] x86/srso: Fix SBPB enablement for (possible) future fixed HW Josh Poimboeuf
2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
2023-08-25  7:01 ` [PATCH 08/23] x86/srso: Print actual mitigation if requested mitigation isn't possible Josh Poimboeuf
2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
2023-08-25  7:01 ` [PATCH 09/23] x86/srso: Print mitigation for retbleed IBPB case Josh Poimboeuf
2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
2023-08-25  7:01 ` [PATCH 10/23] x86/srso: Fix vulnerability reporting for missing microcode Josh Poimboeuf
2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
2023-09-01  9:40     ` Borislav Petkov
2023-09-02 10:46       ` Ingo Molnar
2023-09-02 17:04         ` Borislav Petkov
2023-09-03 14:37           ` Borislav Petkov
2023-09-05  4:57         ` Josh Poimboeuf
2023-08-25  7:01 ` [PATCH 11/23] x86/srso: Fix unret validation dependencies Josh Poimboeuf
2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
2023-08-25  7:01 ` [PATCH 12/23] x86/alternatives: Remove faulty optimization Josh Poimboeuf
2023-08-25  9:20   ` Ingo Molnar
2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
2023-08-25 10:27   ` [tip: x86/urgent] " tip-bot2 for Josh Poimboeuf
2023-08-25  7:01 ` [PATCH 13/23] x86/srso: Improve i-cache locality for alias mitigation Josh Poimboeuf
2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
2023-08-25  7:01 ` [PATCH 14/23] x86/srso: Unexport untraining functions Josh Poimboeuf
2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
2023-08-25  7:01 ` [PATCH 15/23] x86/srso: Remove 'pred_cmd' label Josh Poimboeuf
2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
2023-08-25 19:51   ` [PATCH 15/23] " Nikolay Borisov
2023-08-26 15:45     ` Josh Poimboeuf
2023-08-25  7:01 ` [PATCH 16/23] x86/bugs: Remove default case for fully switched enums Josh Poimboeuf
2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
2023-09-02  9:02   ` [PATCH 16/23] " Borislav Petkov
2023-09-05  5:08     ` Josh Poimboeuf
2023-08-25  7:01 ` [PATCH 17/23] x86/srso: Move retbleed IBPB check into existing 'has_microcode' code block Josh Poimboeuf
2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
2023-08-25  7:01 ` [PATCH 18/23] x86/srso: Remove redundant X86_FEATURE_ENTRY_IBPB check Josh Poimboeuf
2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
2023-09-02  9:10   ` [PATCH 18/23] " Borislav Petkov
2023-08-25  7:01 ` [PATCH 19/23] x86/srso: Disentangle rethunk-dependent options Josh Poimboeuf
2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
2023-08-25  7:01 ` [PATCH 20/23] x86/rethunk: Use SYM_CODE_START[_LOCAL]_NOALIGN macros Josh Poimboeuf
2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
2023-08-25  7:01 ` [PATCH 21/23] x86/retpoline: Remove .text..__x86.return_thunk section Josh Poimboeuf
2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
2023-08-25  7:01 ` [PATCH 22/23] x86/nospec: Refactor UNTRAIN_RET[_*] Josh Poimboeuf
2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
2023-08-25 18:22   ` [PATCH 22/23] " Nikolay Borisov
2023-08-26 15:42     ` Josh Poimboeuf
2023-08-25  7:01 ` [PATCH 23/23] x86/calldepth: Rename __x86_return_skl() to call_depth_return_thunk() Josh Poimboeuf
2023-08-25 10:19   ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
2023-08-25 10:38 ` [PATCH v2 00/23] SRSO fixes/cleanups Ingo Molnar
2023-08-26 15:57   ` Josh Poimboeuf
2023-08-26 17:00     ` Ingo Molnar
2023-10-05  1:29 ` Sean Christopherson
  -- strict thread matches above, loose matches on Subject: below --
2023-09-05  5:04 [PATCH v3 08/20] x86/srso: Fix vulnerability reporting for missing microcode Josh Poimboeuf
2023-09-05 10:09 ` [tip: x86/bugs] " tip-bot2 for Josh Poimboeuf
2023-09-19  9:53 ` tip-bot2 for Josh Poimboeuf
2023-09-23 12:20 ` tip-bot2 for Josh Poimboeuf
2023-10-20 11:37 ` tip-bot2 for Josh Poimboeuf

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox