* [PATCH] x86/speculation: Mitigate eIBRS PBRSB predictions with WRMSR
@ 2022-10-05 22:02 Suraj Jitindar Singh
From: Suraj Jitindar Singh @ 2022-10-05 22:02 UTC (permalink / raw)
  To: kvm
  Cc: surajjs, sjitindarsingh, linux-kernel, x86, tglx, mingo, bp,
	dave.hansen, seanjc, pbonzini, peterz, jpoimboe, daniel.sneddon,
	pawan.kumar.gupta, benh, stable

tl;dr: The existing mitigation for eIBRS PBRSB predictions uses an INT3 to
ensure a CALL instruction retires before a following unbalanced RET. Replace
this with a WRMSR, a serialising instruction, which carries a lower
performance penalty.

== Background ==

eIBRS (enhanced indirect branch restricted speculation) prevents predictor
entries trained in one privilege domain from being used for prediction in a
higher privilege domain.
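
For reference, eIBRS is enabled by a single write of the IBRS bit in
MSR_IA32_SPEC_CTRL at boot; no per-transition rewrites are needed. A minimal
ring-0 sketch (MSR constants as in the kernel's msr-index.h; the helper
mirrors the kernel's wrmsrl()):

#include <stdint.h>

#define MSR_IA32_SPEC_CTRL	0x00000048
#define SPEC_CTRL_IBRS		(1ULL << 0)

/* Minimal WRMSR wrapper; must run at CPL0. */
static inline void wrmsrl(uint32_t msr, uint64_t val)
{
	asm volatile("wrmsr" : : "c" (msr),
		     "a" ((uint32_t)val), "d" ((uint32_t)(val >> 32)));
}

/* With eIBRS this single write is enough for the lifetime of the
 * system, unlike legacy IBRS which must be rewritten on each
 * transition to a more privileged predictor mode. */
static void enable_eibrs(void)
{
	wrmsrl(MSR_IA32_SPEC_CTRL, SPEC_CTRL_IBRS);
}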

== Problem ==

On processors with eIBRS protections there is a window on VM exit in which a
guest address may be used as the RSB prediction for an unbalanced RET if a
CALL instruction has not yet retired. This is termed PBRSB (Post-Barrier
Return Stack Buffer) prediction.

A mitigation for this was introduced in commit 2b1299322016
("x86/speculation: Add RSB VM Exit protections").

This mitigation [1] has a ~1% performance impact on VM exit relative to no
mitigation [2].
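
For context, the lite variant of that mitigation stuffs a single benign RSB
entry and then fences so that the stuffing CALL retires. Roughly, as a sketch
of the idea behind __FILL_ONE_RETURN (not the exact macro, which also carries
objtool annotations and 32-bit handling):

/* The CALL pushes one benign return target onto the RSB, the INT3
 * traps any speculation past the CALL, and the LFENCE stalls
 * execution until the CALL has retired. */
static inline void rsb_stuff_one(void)
{
	asm volatile("call 1f\n\t"
		     "int3\n"			/* speculation trap */
		     "1:\n\t"
		     "add $8, %%rsp\n\t"	/* discard pushed return address */
		     "lfence"
		     ::: "memory");
}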

== Solution ==

The WRMSR instruction acts as both a speculation barrier and a serialising
instruction. Use it on the VM exit path instead to ensure that a CALL
instruction (in this case the call to vmx_spec_ctrl_restore_host) has retired
before the prediction of a following unbalanced RET.
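
Put differently, the WRMSR cannot retire until every preceding instruction,
including the CALL into vmx_spec_ctrl_restore_host, has retired, so the
caller's later unbalanced RET can no longer pick up a stale prediction. As a
sketch, reusing the wrmsrl() helper and MSR constant from the background
sketch above (function name hypothetical):

/* Restoring the host SPEC_CTRL value doubles as the PBRSB barrier:
 * the serialising WRMSR orders the CALL into this function before
 * any later RET can be predicted. */
static void spec_ctrl_restore_host_sketch(uint64_t hostval)
{
	wrmsrl(MSR_IA32_SPEC_CTRL, hostval);	/* restore + serialise */
}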

This mitigation [3] has a negligible performance impact.

== Testing ==

Ran the kvm-unit-tests outl_to_kernel test, which counts the cycles for an
exit to kernel mode, 200 times per configuration.
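
outl_to_kernel times a port write that forces an exit; schematically the
measurement is just RDTSC around an OUT, as in this illustrative sketch (not
the actual kvm-unit-tests source; the port number is arbitrary):

#include <stdint.h>

static inline uint64_t rdtsc(void)
{
	uint32_t lo, hi;
	asm volatile("rdtsc" : "=a" (lo), "=d" (hi));
	return ((uint64_t)hi << 32) | lo;
}

/* Cycles for one exit-causing port write; average over many runs
 * (200 per configuration here) to smooth out noise. */
static uint64_t time_outl_exit(void)
{
	uint64_t t0 = rdtsc();
	asm volatile("outl %0, %w1" : : "a" (0u), "Nd" ((uint16_t)0xe0));
	return rdtsc() - t0;
}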

[1] With existing mitigation:
Average: 2026 cycles
[2] With no mitigation:
Average: 2008 cycles
[3] With proposed mitigation:
Average: 2008 cycles

Signed-off-by: Suraj Jitindar Singh <surajjs@amazon.com>
Cc: stable@vger.kernel.org
---
 arch/x86/include/asm/nospec-branch.h | 7 +++----
 arch/x86/kvm/vmx/vmenter.S           | 3 +--
 arch/x86/kvm/vmx/vmx.c               | 5 +++++
 3 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index c936ce9f0c47..e5723e024b47 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -159,10 +159,9 @@
   * A simpler FILL_RETURN_BUFFER macro. Don't make people use the CPP
   * monstrosity above, manually.
   */
-.macro FILL_RETURN_BUFFER reg:req nr:req ftr:req ftr2=ALT_NOT(X86_FEATURE_ALWAYS)
-	ALTERNATIVE_2 "jmp .Lskip_rsb_\@", \
-		__stringify(__FILL_RETURN_BUFFER(\reg,\nr)), \ftr, \
-		__stringify(__FILL_ONE_RETURN), \ftr2
+.macro FILL_RETURN_BUFFER reg:req nr:req ftr:req
+	ALTERNATIVE "jmp .Lskip_rsb_\@", \
+		__stringify(__FILL_RETURN_BUFFER(\reg,\nr)), \ftr
 
 .Lskip_rsb_\@:
 .endm
diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
index 6de96b943804..eb82797bd7bf 100644
--- a/arch/x86/kvm/vmx/vmenter.S
+++ b/arch/x86/kvm/vmx/vmenter.S
@@ -231,8 +231,7 @@ SYM_INNER_LABEL(vmx_vmexit, SYM_L_GLOBAL)
 	 * single call to retire, before the first unbalanced RET.
          */
 
-	FILL_RETURN_BUFFER %_ASM_CX, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_VMEXIT,\
-			   X86_FEATURE_RSB_VMEXIT_LITE
+	FILL_RETURN_BUFFER %_ASM_CX, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_VMEXIT
 
 
 	pop %_ASM_ARG2	/* @flags */
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index c9b49a09e6b5..fdcd8e10c2ab 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7049,8 +7049,13 @@ void noinstr vmx_spec_ctrl_restore_host(struct vcpu_vmx *vmx,
 	 * For legacy IBRS, the IBRS bit always needs to be written after
 	 * transitioning from a less privileged predictor mode, regardless of
 	 * whether the guest/host values differ.
+	 *
+	 * For eIBRS affected by Post Barrier RSB Predictions a serialising
+	 * instruction (wrmsr) must be executed to ensure a call instruction has
+	 * retired before the prediction of a following unbalanced ret.
 	 */
 	if (cpu_feature_enabled(X86_FEATURE_KERNEL_IBRS) ||
+	    cpu_feature_enabled(X86_FEATURE_RSB_VMEXIT_LITE) ||
 	    vmx->spec_ctrl != hostval)
 		native_wrmsrl(MSR_IA32_SPEC_CTRL, hostval);
 
-- 
2.17.1



Thread overview: 13+ messages
2022-10-05 22:02 [PATCH] x86/speculation: Mitigate eIBRS PBRSB predictions with WRMSR Suraj Jitindar Singh
2022-10-05 22:29 ` Jim Mattson
2022-10-06  8:25   ` David Laight
2022-10-06 20:27     ` pawan.kumar.gupta
2022-10-05 23:24 ` Jim Mattson
2022-10-05 23:45   ` Pawan Gupta
2022-10-05 23:46 ` Jim Mattson
2022-10-06  0:26   ` Daniel Sneddon
2022-10-06  1:28     ` Jim Mattson
2022-10-06  8:18   ` Peter Zijlstra
2022-10-06  2:42 ` Andrew Cooper
2022-10-07  1:44   ` pawan.kumar.gupta
2022-10-07  1:54 ` Pawan Gupta
