* [PATCH 0/6] IBPB cleanups and a fixup
@ 2025-02-19 22:08 Yosry Ahmed
2025-02-19 22:08 ` [PATCH 1/6] x86/bugs: Move the X86_FEATURE_USE_IBPB check into callers Yosry Ahmed
` (6 more replies)
0 siblings, 7 replies; 21+ messages in thread
From: Yosry Ahmed @ 2025-02-19 22:08 UTC
To: x86
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
H. Peter Anvin, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
Andy Lutomirski, Sean Christopherson, Paolo Bonzini, kvm,
linux-kernel
This series removes X86_FEATURE_USE_IBPB, and fixes a KVM nVMX bug in
the process. The motivation is mostly the confusing name of
X86_FEATURE_USE_IBPB, which sounds like it controls IBPBs in general,
but it only controls IBPBs for the spectre_v2_user mitigation. A side effect of
this confusion is the nVMX bug, where virtualizing IBRS correctly
depends on the spectre_v2_user mitigation.
The feature bit is mostly redundant, except in controlling the IBPB in
the vCPU load path. For that, a separate static branch is introduced,
similar to switch_mm_*_ibpb.
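For readers unfamiliar with the pattern, the guard this series introduces can be sketched in plain C. This is a userspace approximation only: a plain bool stands in for the kernel's self-patching static key, the IBPB is modeled as a counter bump, and the function names mirror the patches below.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Userspace approximation of the static-branch guard added for the
 * vCPU load path. In the kernel, DEFINE_STATIC_KEY_FALSE() creates a
 * key whose disabled side is patched to a NOP at runtime; here a
 * plain bool stands in for it.
 */
bool vcpu_load_ibpb;	/* stand-in for DEFINE_STATIC_KEY_FALSE(vcpu_load_ibpb) */
int ibpb_count;		/* how many barriers were issued (illustration only) */

void indirect_branch_prediction_barrier(void)
{
	ibpb_count++;	/* the real primitive writes MSR_IA32_PRED_CMD */
}

/* Mitigation selection: enable the key only when IBPB is in use. */
void spectre_v2_user_select_mitigation(bool cpu_has_ibpb)
{
	if (cpu_has_ibpb)
		vcpu_load_ibpb = true;	/* static_branch_enable(&vcpu_load_ibpb) */
}

/* vCPU load path: barrier only when switching to a different VMCB/VMCS. */
void vcpu_load(bool switched_vmcb)
{
	if (switched_vmcb && vcpu_load_ibpb)	/* static_branch_likely() */
		indirect_branch_prediction_barrier();
}
```

In the actual patches, the enable happens in spectre_v2_user_select_mitigation() and the check lands in svm_vcpu_load() and vmx_vcpu_load_vmcs().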
I wanted to do more, but decided to stay conservative. I was mainly
hoping to merge indirect_branch_prediction_barrier() with entry_ibpb()
to have a single IBPB primitive that always stuffs the RSB if the IBPB
doesn't, but this would add some overhead in paths that currently use
indirect_branch_prediction_barrier(), and I was not sure if that's
acceptable.
For the record, my measurements of the latency of
indirect_branch_prediction_barrier() and entry_ibpb() on Rome and Milan
(neither of which has X86_FEATURE_AMD_IBPB_RET) are as follows:
Rome:
400ns (indirect_branch_prediction_barrier) vs 500ns (entry_ibpb)
Milan:
220ns (indirect_branch_prediction_barrier) vs 280ns (entry_ibpb)
I also wanted to move controlling the IBPB on vCPU load from
being under spectre_v2_user to spectre_v2, because "user" in a lot of
mitigation contexts does not include VMs.
Just laying out these thoughts in case others have any comments.
Yosry Ahmed (6):
x86/bugs: Move the X86_FEATURE_USE_IBPB check into callers
x86/mm: Remove X86_FEATURE_USE_IBPB checks in cond_mitigation()
x86/bugs: Remove the X86_FEATURE_USE_IBPB check in ib_prctl_set()
x86/bugs: Use a static branch to guard IBPB on vCPU load
KVM: nVMX: Always use IBPB to properly virtualize IBRS
x86/bugs: Remove X86_FEATURE_USE_IBPB
arch/x86/include/asm/cpufeatures.h | 1 -
arch/x86/include/asm/nospec-branch.h | 4 +++-
arch/x86/kernel/cpu/bugs.c | 7 +++++--
arch/x86/kvm/svm/svm.c | 3 ++-
arch/x86/kvm/vmx/vmx.c | 3 ++-
arch/x86/mm/tlb.c | 3 +--
tools/arch/x86/include/asm/cpufeatures.h | 1 -
7 files changed, 13 insertions(+), 9 deletions(-)
--
2.48.1.601.g30ceb7b040-goog
* [PATCH 1/6] x86/bugs: Move the X86_FEATURE_USE_IBPB check into callers
2025-02-19 22:08 [PATCH 0/6] IBPB cleanups and a fixup Yosry Ahmed
@ 2025-02-19 22:08 ` Yosry Ahmed
2025-02-25 19:47 ` Sean Christopherson
2025-02-19 22:08 ` [PATCH 2/6] x86/mm: Remove X86_FEATURE_USE_IBPB checks in cond_mitigation() Yosry Ahmed
` (5 subsequent siblings)
6 siblings, 1 reply; 21+ messages in thread
From: Yosry Ahmed @ 2025-02-19 22:08 UTC
To: x86
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
H. Peter Anvin, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
Andy Lutomirski, Sean Christopherson, Paolo Bonzini, kvm,
linux-kernel
indirect_branch_prediction_barrier() only performs the MSR write if
X86_FEATURE_USE_IBPB is set, using alternative_msr_write(). In
preparation for removing X86_FEATURE_USE_IBPB, move the feature check
into the callers so that they can be addressed one-by-one, and use
X86_FEATURE_IBPB instead to guard the MSR write.
Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
---
arch/x86/include/asm/nospec-branch.h | 2 +-
arch/x86/kernel/cpu/bugs.c | 2 +-
arch/x86/kvm/svm/svm.c | 3 ++-
arch/x86/kvm/vmx/nested.c | 3 ++-
arch/x86/kvm/vmx/vmx.c | 3 ++-
arch/x86/mm/tlb.c | 7 ++++---
6 files changed, 12 insertions(+), 8 deletions(-)
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 7e8bf78c03d5d..7cbb76a2434b9 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -515,7 +515,7 @@ extern u64 x86_pred_cmd;
static inline void indirect_branch_prediction_barrier(void)
{
- alternative_msr_write(MSR_IA32_PRED_CMD, x86_pred_cmd, X86_FEATURE_USE_IBPB);
+ alternative_msr_write(MSR_IA32_PRED_CMD, x86_pred_cmd, X86_FEATURE_IBPB);
}
/* The Intel SPEC CTRL MSR base value cache */
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index a5d0998d76049..fc7ce7a2fc495 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2272,7 +2272,7 @@ static int ib_prctl_set(struct task_struct *task, unsigned long ctrl)
if (ctrl == PR_SPEC_FORCE_DISABLE)
task_set_spec_ib_force_disable(task);
task_update_spec_tif(task);
- if (task == current)
+ if (task == current && cpu_feature_enabled(X86_FEATURE_USE_IBPB))
indirect_branch_prediction_barrier();
break;
default:
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index a713c803a3a37..a4ba5b4e3d682 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1559,7 +1559,8 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
if (sd->current_vmcb != svm->vmcb) {
sd->current_vmcb = svm->vmcb;
- if (!cpu_feature_enabled(X86_FEATURE_IBPB_ON_VMEXIT))
+ if (!cpu_feature_enabled(X86_FEATURE_IBPB_ON_VMEXIT) &&
+ cpu_feature_enabled(X86_FEATURE_USE_IBPB))
indirect_branch_prediction_barrier();
}
if (kvm_vcpu_apicv_active(vcpu))
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index ca18c3eec76d8..504f328907ad4 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -5026,7 +5026,8 @@ void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason,
* doesn't isolate different VMCSs, i.e. in this case, doesn't provide
* separate modes for L2 vs L1.
*/
- if (guest_cpu_cap_has(vcpu, X86_FEATURE_SPEC_CTRL))
+ if (guest_cpu_cap_has(vcpu, X86_FEATURE_SPEC_CTRL) &&
+ cpu_feature_enabled(X86_FEATURE_USE_IBPB))
indirect_branch_prediction_barrier();
/* Update any VMCS fields that might have changed while L2 ran */
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 6c56d5235f0f3..729a8ee24037b 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1478,7 +1478,8 @@ void vmx_vcpu_load_vmcs(struct kvm_vcpu *vcpu, int cpu,
* may switch the active VMCS multiple times).
*/
if (!buddy || WARN_ON_ONCE(buddy->vmcs != prev))
- indirect_branch_prediction_barrier();
+ if (cpu_feature_enabled(X86_FEATURE_USE_IBPB))
+ indirect_branch_prediction_barrier();
}
if (!already_loaded) {
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index ffc25b3480415..be0c1a509869c 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -437,7 +437,8 @@ static void cond_mitigation(struct task_struct *next)
* both have the IBPB bit set.
*/
if (next_mm != prev_mm &&
- (next_mm | prev_mm) & LAST_USER_MM_IBPB)
+ (next_mm | prev_mm) & LAST_USER_MM_IBPB &&
+ cpu_feature_enabled(X86_FEATURE_USE_IBPB))
indirect_branch_prediction_barrier();
}
@@ -447,8 +448,8 @@ static void cond_mitigation(struct task_struct *next)
* different context than the user space task which ran
* last on this CPU.
*/
- if ((prev_mm & ~LAST_USER_MM_SPEC_MASK) !=
- (unsigned long)next->mm)
+ if ((prev_mm & ~LAST_USER_MM_SPEC_MASK) != (unsigned long)next->mm &&
+ cpu_feature_enabled(X86_FEATURE_USE_IBPB))
indirect_branch_prediction_barrier();
}
--
2.48.1.601.g30ceb7b040-goog
* [PATCH 2/6] x86/mm: Remove X86_FEATURE_USE_IBPB checks in cond_mitigation()
2025-02-19 22:08 [PATCH 0/6] IBPB cleanups and a fixup Yosry Ahmed
2025-02-19 22:08 ` [PATCH 1/6] x86/bugs: Move the X86_FEATURE_USE_IBPB check into callers Yosry Ahmed
@ 2025-02-19 22:08 ` Yosry Ahmed
2025-02-19 22:08 ` [PATCH 3/6] x86/bugs: Remove the X86_FEATURE_USE_IBPB check in ib_prctl_set() Yosry Ahmed
` (4 subsequent siblings)
6 siblings, 0 replies; 21+ messages in thread
From: Yosry Ahmed @ 2025-02-19 22:08 UTC
To: x86
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
H. Peter Anvin, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
Andy Lutomirski, Sean Christopherson, Paolo Bonzini, kvm,
linux-kernel
cond_mitigation() only reaches the IBPB when either switch_mm_cond_ibpb
or switch_mm_always_ibpb is set, and both static keys are only enabled
when X86_FEATURE_USE_IBPB is set. Remove the redundant checks.
Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
---
arch/x86/mm/tlb.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index be0c1a509869c..e860fc8edfae4 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -437,8 +437,7 @@ static void cond_mitigation(struct task_struct *next)
* both have the IBPB bit set.
*/
if (next_mm != prev_mm &&
- (next_mm | prev_mm) & LAST_USER_MM_IBPB &&
- cpu_feature_enabled(X86_FEATURE_USE_IBPB))
+ (next_mm | prev_mm) & LAST_USER_MM_IBPB)
indirect_branch_prediction_barrier();
}
@@ -448,8 +447,7 @@ static void cond_mitigation(struct task_struct *next)
* different context than the user space task which ran
* last on this CPU.
*/
- if ((prev_mm & ~LAST_USER_MM_SPEC_MASK) != (unsigned long)next->mm &&
- cpu_feature_enabled(X86_FEATURE_USE_IBPB))
+ if ((prev_mm & ~LAST_USER_MM_SPEC_MASK) != (unsigned long)next->mm)
indirect_branch_prediction_barrier();
}
--
2.48.1.601.g30ceb7b040-goog
* [PATCH 3/6] x86/bugs: Remove the X86_FEATURE_USE_IBPB check in ib_prctl_set()
2025-02-19 22:08 [PATCH 0/6] IBPB cleanups and a fixup Yosry Ahmed
2025-02-19 22:08 ` [PATCH 1/6] x86/bugs: Move the X86_FEATURE_USE_IBPB check into callers Yosry Ahmed
2025-02-19 22:08 ` [PATCH 2/6] x86/mm: Remove X86_FEATURE_USE_IBPB checks in cond_mitigation() Yosry Ahmed
@ 2025-02-19 22:08 ` Yosry Ahmed
2025-02-19 22:08 ` [PATCH 4/6] x86/bugs: Use a static branch to guard IBPB on vCPU load Yosry Ahmed
` (3 subsequent siblings)
6 siblings, 0 replies; 21+ messages in thread
From: Yosry Ahmed @ 2025-02-19 22:08 UTC
To: x86
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
H. Peter Anvin, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
Andy Lutomirski, Sean Christopherson, Paolo Bonzini, kvm,
linux-kernel
If X86_FEATURE_USE_IBPB is not set, then both spectre_v2_user_ibpb and
spectre_v2_user_stibp are set to SPECTRE_V2_USER_NONE in
spectre_v2_user_select_mitigation(). Since ib_prctl_set() already checks
for this before performing the IBPB, the X86_FEATURE_USE_IBPB check is
redundant. Remove it.
Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
---
arch/x86/kernel/cpu/bugs.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index fc7ce7a2fc495..a5d0998d76049 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2272,7 +2272,7 @@ static int ib_prctl_set(struct task_struct *task, unsigned long ctrl)
if (ctrl == PR_SPEC_FORCE_DISABLE)
task_set_spec_ib_force_disable(task);
task_update_spec_tif(task);
- if (task == current && cpu_feature_enabled(X86_FEATURE_USE_IBPB))
+ if (task == current)
indirect_branch_prediction_barrier();
break;
default:
--
2.48.1.601.g30ceb7b040-goog
* [PATCH 4/6] x86/bugs: Use a static branch to guard IBPB on vCPU load
2025-02-19 22:08 [PATCH 0/6] IBPB cleanups and a fixup Yosry Ahmed
` (2 preceding siblings ...)
2025-02-19 22:08 ` [PATCH 3/6] x86/bugs: Remove the X86_FEATURE_USE_IBPB check in ib_prctl_set() Yosry Ahmed
@ 2025-02-19 22:08 ` Yosry Ahmed
2025-02-25 19:49 ` Sean Christopherson
2025-02-19 22:08 ` [PATCH 5/6] KVM: nVMX: Always use IBPB to properly virtualize IBRS Yosry Ahmed
` (2 subsequent siblings)
6 siblings, 1 reply; 21+ messages in thread
From: Yosry Ahmed @ 2025-02-19 22:08 UTC
To: x86
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
H. Peter Anvin, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
Andy Lutomirski, Sean Christopherson, Paolo Bonzini, kvm,
linux-kernel
Instead of using X86_FEATURE_USE_IBPB to guard the IBPB execution in the
vCPU load path, introduce a static branch, similar to switch_mm_*_ibpb.
This makes it obvious in spectre_v2_user_select_mitigation() what
exactly is being toggled, instead of the unclear X86_FEATURE_USE_IBPB
(which will shortly be removed). It also provides more fine-grained
control, making it simpler to change/add paths that control the IBPB in
the vCPU load path without affecting other IBPBs.
Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
---
arch/x86/include/asm/nospec-branch.h | 2 ++
arch/x86/kernel/cpu/bugs.c | 5 +++++
arch/x86/kvm/svm/svm.c | 2 +-
arch/x86/kvm/vmx/vmx.c | 2 +-
4 files changed, 9 insertions(+), 2 deletions(-)
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 7cbb76a2434b9..a22836c5fb338 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -552,6 +552,8 @@ DECLARE_STATIC_KEY_FALSE(switch_to_cond_stibp);
DECLARE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
DECLARE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
+DECLARE_STATIC_KEY_FALSE(vcpu_load_ibpb);
+
DECLARE_STATIC_KEY_FALSE(mds_idle_clear);
DECLARE_STATIC_KEY_FALSE(switch_mm_cond_l1d_flush);
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index a5d0998d76049..685a6f97fea8f 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -113,6 +113,10 @@ DEFINE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
/* Control unconditional IBPB in switch_mm() */
DEFINE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
+/* Control IBPB on vCPU load */
+DEFINE_STATIC_KEY_FALSE(vcpu_load_ibpb);
+EXPORT_SYMBOL_GPL(vcpu_load_ibpb);
+
/* Control MDS CPU buffer clear before idling (halt, mwait) */
DEFINE_STATIC_KEY_FALSE(mds_idle_clear);
EXPORT_SYMBOL_GPL(mds_idle_clear);
@@ -1365,6 +1369,7 @@ spectre_v2_user_select_mitigation(void)
/* Initialize Indirect Branch Prediction Barrier */
if (boot_cpu_has(X86_FEATURE_IBPB)) {
setup_force_cpu_cap(X86_FEATURE_USE_IBPB);
+ static_branch_enable(&vcpu_load_ibpb);
spectre_v2_user_ibpb = mode;
switch (cmd) {
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index a4ba5b4e3d682..043d56d276ad6 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1560,7 +1560,7 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
sd->current_vmcb = svm->vmcb;
if (!cpu_feature_enabled(X86_FEATURE_IBPB_ON_VMEXIT) &&
- cpu_feature_enabled(X86_FEATURE_USE_IBPB))
+ static_branch_likely(&vcpu_load_ibpb))
indirect_branch_prediction_barrier();
}
if (kvm_vcpu_apicv_active(vcpu))
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 729a8ee24037b..7f950d0b50757 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1478,7 +1478,7 @@ void vmx_vcpu_load_vmcs(struct kvm_vcpu *vcpu, int cpu,
* may switch the active VMCS multiple times).
*/
if (!buddy || WARN_ON_ONCE(buddy->vmcs != prev))
- if (cpu_feature_enabled(X86_FEATURE_USE_IBPB))
+ if (static_branch_likely(&vcpu_load_ibpb))
indirect_branch_prediction_barrier();
}
--
2.48.1.601.g30ceb7b040-goog
* [PATCH 5/6] KVM: nVMX: Always use IBPB to properly virtualize IBRS
2025-02-19 22:08 [PATCH 0/6] IBPB cleanups and a fixup Yosry Ahmed
` (3 preceding siblings ...)
2025-02-19 22:08 ` [PATCH 4/6] x86/bugs: Use a static branch to guard IBPB on vCPU load Yosry Ahmed
@ 2025-02-19 22:08 ` Yosry Ahmed
2025-02-19 23:09 ` Jim Mattson
2025-02-25 19:50 ` Sean Christopherson
2025-02-19 22:08 ` [PATCH 6/6] x86/bugs: Remove X86_FEATURE_USE_IBPB Yosry Ahmed
2025-02-20 19:04 ` [PATCH 0/6] IBPB cleanups and a fixup Josh Poimboeuf
6 siblings, 2 replies; 21+ messages in thread
From: Yosry Ahmed @ 2025-02-19 22:08 UTC
To: x86
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
H. Peter Anvin, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
Andy Lutomirski, Sean Christopherson, Paolo Bonzini, kvm,
linux-kernel
On synthesized nested VM-exits in VMX, an IBPB is performed if IBRS is
advertised to the guest to properly provide separate prediction domains
for L1 and L2. However, this is currently conditional on
X86_FEATURE_USE_IBPB, which depends on the host spectre_v2_user
mitigation.
In short, if spectre_v2_user=off, IBRS is not virtualized correctly and
L1 becomes susceptible to attacks from L2. Fix this by performing the
IBPB regardless of X86_FEATURE_USE_IBPB.
Fixes: 2e7eab81425a ("KVM: VMX: Execute IBPB on emulated VM-exit when guest has IBRS")
Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
---
arch/x86/kvm/vmx/nested.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 504f328907ad4..ca18c3eec76d8 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -5026,8 +5026,7 @@ void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason,
* doesn't isolate different VMCSs, i.e. in this case, doesn't provide
* separate modes for L2 vs L1.
*/
- if (guest_cpu_cap_has(vcpu, X86_FEATURE_SPEC_CTRL) &&
- cpu_feature_enabled(X86_FEATURE_USE_IBPB))
+ if (guest_cpu_cap_has(vcpu, X86_FEATURE_SPEC_CTRL))
indirect_branch_prediction_barrier();
/* Update any VMCS fields that might have changed while L2 ran */
--
2.48.1.601.g30ceb7b040-goog
* [PATCH 6/6] x86/bugs: Remove X86_FEATURE_USE_IBPB
2025-02-19 22:08 [PATCH 0/6] IBPB cleanups and a fixup Yosry Ahmed
` (4 preceding siblings ...)
2025-02-19 22:08 ` [PATCH 5/6] KVM: nVMX: Always use IBPB to properly virtualize IBRS Yosry Ahmed
@ 2025-02-19 22:08 ` Yosry Ahmed
2025-02-20 19:04 ` [PATCH 0/6] IBPB cleanups and a fixup Josh Poimboeuf
6 siblings, 0 replies; 21+ messages in thread
From: Yosry Ahmed @ 2025-02-19 22:08 UTC
To: x86
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
H. Peter Anvin, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
Andy Lutomirski, Sean Christopherson, Paolo Bonzini, kvm,
linux-kernel
X86_FEATURE_USE_IBPB was introduced in commit 2961298efe1e
("x86/cpufeatures: Clean up Spectre v2 related CPUID flags") to have
separate flags for when the CPU supports IBPB (i.e. X86_FEATURE_IBPB)
and when an IBPB is actually used to mitigate Spectre v2.
Since then, the uses of IBPB have expanded. The name has become
confusing because the flag does not control all IBPB executions in the
kernel. Furthermore, because the name is generic and the flag is buried
inside indirect_branch_prediction_barrier(), it is easy to use it
without realizing that it is specific to Spectre v2.
X86_FEATURE_USE_IBPB is no longer needed because all the IBPB executions
it used to control are now controlled through other means (e.g.
switch_mm_*_ibpb static branches). Remove the unused feature bit.
Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
---
arch/x86/include/asm/cpufeatures.h | 1 -
arch/x86/kernel/cpu/bugs.c | 1 -
tools/arch/x86/include/asm/cpufeatures.h | 1 -
3 files changed, 3 deletions(-)
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 508c0dad116bc..5033b23db86d2 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -210,7 +210,6 @@
#define X86_FEATURE_MBA ( 7*32+18) /* "mba" Memory Bandwidth Allocation */
#define X86_FEATURE_RSB_CTXSW ( 7*32+19) /* Fill RSB on context switches */
#define X86_FEATURE_PERFMON_V2 ( 7*32+20) /* "perfmon_v2" AMD Performance Monitoring Version 2 */
-#define X86_FEATURE_USE_IBPB ( 7*32+21) /* Indirect Branch Prediction Barrier enabled */
#define X86_FEATURE_USE_IBRS_FW ( 7*32+22) /* Use IBRS during runtime firmware calls */
#define X86_FEATURE_SPEC_STORE_BYPASS_DISABLE ( 7*32+23) /* Disable Speculative Store Bypass. */
#define X86_FEATURE_LS_CFG_SSBD ( 7*32+24) /* AMD SSBD implementation via LS_CFG MSR */
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 685a6f97fea8f..d5642f787ebcb 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1368,7 +1368,6 @@ spectre_v2_user_select_mitigation(void)
/* Initialize Indirect Branch Prediction Barrier */
if (boot_cpu_has(X86_FEATURE_IBPB)) {
- setup_force_cpu_cap(X86_FEATURE_USE_IBPB);
static_branch_enable(&vcpu_load_ibpb);
spectre_v2_user_ibpb = mode;
diff --git a/tools/arch/x86/include/asm/cpufeatures.h b/tools/arch/x86/include/asm/cpufeatures.h
index 17b6590748c00..ec9911379c617 100644
--- a/tools/arch/x86/include/asm/cpufeatures.h
+++ b/tools/arch/x86/include/asm/cpufeatures.h
@@ -210,7 +210,6 @@
#define X86_FEATURE_MBA ( 7*32+18) /* "mba" Memory Bandwidth Allocation */
#define X86_FEATURE_RSB_CTXSW ( 7*32+19) /* Fill RSB on context switches */
#define X86_FEATURE_PERFMON_V2 ( 7*32+20) /* "perfmon_v2" AMD Performance Monitoring Version 2 */
-#define X86_FEATURE_USE_IBPB ( 7*32+21) /* Indirect Branch Prediction Barrier enabled */
#define X86_FEATURE_USE_IBRS_FW ( 7*32+22) /* Use IBRS during runtime firmware calls */
#define X86_FEATURE_SPEC_STORE_BYPASS_DISABLE ( 7*32+23) /* Disable Speculative Store Bypass. */
#define X86_FEATURE_LS_CFG_SSBD ( 7*32+24) /* AMD SSBD implementation via LS_CFG MSR */
--
2.48.1.601.g30ceb7b040-goog
* Re: [PATCH 5/6] KVM: nVMX: Always use IBPB to properly virtualize IBRS
2025-02-19 22:08 ` [PATCH 5/6] KVM: nVMX: Always use IBPB to properly virtualize IBRS Yosry Ahmed
@ 2025-02-19 23:09 ` Jim Mattson
2025-02-25 19:50 ` Sean Christopherson
1 sibling, 0 replies; 21+ messages in thread
From: Jim Mattson @ 2025-02-19 23:09 UTC
To: Yosry Ahmed
Cc: x86, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
H. Peter Anvin, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
Andy Lutomirski, Sean Christopherson, Paolo Bonzini, kvm,
linux-kernel
On Wed, Feb 19, 2025 at 2:11 PM Yosry Ahmed <yosry.ahmed@linux.dev> wrote:
>
> On synthesized nested VM-exits in VMX, an IBPB is performed if IBRS is
> advertised to the guest to properly provide separate prediction domains
> for L1 and L2. However, this is currently conditional on
> X86_FEATURE_USE_IBPB, which depends on the host spectre_v2_user
> mitigation.
>
> In short, if spectre_v2_user=no, IBRS is not virtualized correctly and
> L1 becomes suspectible to attacks from L2. Fix this by performing the
Nit: susceptible.
> IBPB regardless of X86_FEATURE_USE_IBPB.
>
> Fixes: 2e7eab81425a ("KVM: VMX: Execute IBPB on emulated VM-exit when guest has IBRS")
> Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
Argh! No doubt, I was burned once again by assuming that a function
name (indirect_branch_prediction_barrier) was actually descriptive.
Reviewed-by: Jim Mattson <jmattson@google.com>
* Re: [PATCH 0/6] IBPB cleanups and a fixup
2025-02-19 22:08 [PATCH 0/6] IBPB cleanups and a fixup Yosry Ahmed
` (5 preceding siblings ...)
2025-02-19 22:08 ` [PATCH 6/6] x86/bugs: Remove X86_FEATURE_USE_IBPB Yosry Ahmed
@ 2025-02-20 19:04 ` Josh Poimboeuf
2025-02-20 19:59 ` Yosry Ahmed
6 siblings, 1 reply; 21+ messages in thread
From: Josh Poimboeuf @ 2025-02-20 19:04 UTC
To: Yosry Ahmed
Cc: x86, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
H. Peter Anvin, Peter Zijlstra, Pawan Gupta, Andy Lutomirski,
Sean Christopherson, Paolo Bonzini, kvm, linux-kernel
On Wed, Feb 19, 2025 at 10:08:20PM +0000, Yosry Ahmed wrote:
> This series removes X86_FEATURE_USE_IBPB, and fixes a KVM nVMX bug in
> the process. The motivation is mostly the confusing name of
> X86_FEATURE_USE_IBPB, which sounds like it controls IBPBs in general,
> but it only controls IBPBs for spectre_v2_mitigation. A side effect of
> this confusion is the nVMX bug, where virtualizing IBRS correctly
> depends on the spectre_v2_user mitigation.
>
> The feature bit is mostly redundant, except in controlling the IBPB in
> the vCPU load path. For that, a separate static branch is introduced,
> similar to switch_mm_*_ibpb.
Thanks for doing this. A few months ago I was working on patches to fix
the same thing but I got preempted multiple times over.
> I wanted to do more, but decided to stay conservative. I was mainly
> hoping to merge indirect_branch_prediction_barrier() with entry_ibpb()
> to have a single IBPB primitive that always stuffs the RSB if the IBPB
> doesn't, but this would add some overhead in paths that currently use
> indirect_branch_prediction_barrier(), and I was not sure if that's
> acceptable.
We always rely on IBPB clearing RSB, so yes, I'd say that's definitely
needed. In fact I had a patch to do exactly that, with it ending up
like this:
static inline void indirect_branch_prediction_barrier(void)
{
asm volatile(ALTERNATIVE("", "call write_ibpb", X86_FEATURE_IBPB)
: ASM_CALL_CONSTRAINT
: : "rax", "rcx", "rdx", "memory");
}
I also renamed "entry_ibpb" -> "write_ibpb" since it's no longer just
for entry code.
--
Josh
* Re: [PATCH 0/6] IBPB cleanups and a fixup
2025-02-20 19:04 ` [PATCH 0/6] IBPB cleanups and a fixup Josh Poimboeuf
@ 2025-02-20 19:59 ` Yosry Ahmed
2025-02-20 20:47 ` Josh Poimboeuf
0 siblings, 1 reply; 21+ messages in thread
From: Yosry Ahmed @ 2025-02-20 19:59 UTC
To: Josh Poimboeuf
Cc: x86, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
H. Peter Anvin, Peter Zijlstra, Pawan Gupta, Andy Lutomirski,
Sean Christopherson, Paolo Bonzini, kvm, linux-kernel
On Thu, Feb 20, 2025 at 11:04:44AM -0800, Josh Poimboeuf wrote:
> On Wed, Feb 19, 2025 at 10:08:20PM +0000, Yosry Ahmed wrote:
> > This series removes X86_FEATURE_USE_IBPB, and fixes a KVM nVMX bug in
> > the process. The motivation is mostly the confusing name of
> > X86_FEATURE_USE_IBPB, which sounds like it controls IBPBs in general,
> > but it only controls IBPBs for spectre_v2_mitigation. A side effect of
> > this confusion is the nVMX bug, where virtualizing IBRS correctly
> > depends on the spectre_v2_user mitigation.
> >
> > The feature bit is mostly redundant, except in controlling the IBPB in
> > the vCPU load path. For that, a separate static branch is introduced,
> > similar to switch_mm_*_ibpb.
>
> Thanks for doing this. A few months ago I was working on patches to fix
> the same thing but I got preempted multiple times over.
>
> > I wanted to do more, but decided to stay conservative. I was mainly
> > hoping to merge indirect_branch_prediction_barrier() with entry_ibpb()
> > to have a single IBPB primitive that always stuffs the RSB if the IBPB
> > doesn't, but this would add some overhead in paths that currently use
> > indirect_branch_prediction_barrier(), and I was not sure if that's
> > acceptable.
>
> We always rely on IBPB clearing RSB, so yes, I'd say that's definitely
> needed. In fact I had a patch to do exactly that, with it ending up
> like this:
I was mainly concerned about the overhead this adds, but if it's a
requirement then yes we should do it.
>
> static inline void indirect_branch_prediction_barrier(void)
> {
> asm volatile(ALTERNATIVE("", "call write_ibpb", X86_FEATURE_IBPB)
> : ASM_CALL_CONSTRAINT
> : : "rax", "rcx", "rdx", "memory");
> }
>
> I also renamed "entry_ibpb" -> "write_ibpb" since it's no longer just
> for entry code.
Do you want me to add this in this series or do you want to do it on top
of it? If you have a patch lying around I can also include it as-is.
* Re: [PATCH 0/6] IBPB cleanups and a fixup
2025-02-20 19:59 ` Yosry Ahmed
@ 2025-02-20 20:47 ` Josh Poimboeuf
2025-02-20 21:50 ` Yosry Ahmed
0 siblings, 1 reply; 21+ messages in thread
From: Josh Poimboeuf @ 2025-02-20 20:47 UTC
To: Yosry Ahmed
Cc: x86, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
H. Peter Anvin, Peter Zijlstra, Pawan Gupta, Andy Lutomirski,
Sean Christopherson, Paolo Bonzini, kvm, linux-kernel
On Thu, Feb 20, 2025 at 07:59:54PM +0000, Yosry Ahmed wrote:
> > static inline void indirect_branch_prediction_barrier(void)
> > {
> > asm volatile(ALTERNATIVE("", "call write_ibpb", X86_FEATURE_IBPB)
> > : ASM_CALL_CONSTRAINT
> > : : "rax", "rcx", "rdx", "memory");
> > }
> >
> > I also renamed "entry_ibpb" -> "write_ibpb" since it's no longer just
> > for entry code.
>
> Do you want me to add this in this series or do you want to do it on top
> of it? If you have a patch lying around I can also include it as-is.
Your patches are already an improvement and can be taken as-is:
Acked-by: Josh Poimboeuf <jpoimboe@kernel.org>
I'll try to dust off my patches soon and rebase them on yours.
--
Josh
* Re: [PATCH 0/6] IBPB cleanups and a fixup
2025-02-20 20:47 ` Josh Poimboeuf
@ 2025-02-20 21:50 ` Yosry Ahmed
0 siblings, 0 replies; 21+ messages in thread
From: Yosry Ahmed @ 2025-02-20 21:50 UTC
To: Josh Poimboeuf
Cc: x86, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
H. Peter Anvin, Peter Zijlstra, Pawan Gupta, Andy Lutomirski,
Sean Christopherson, Paolo Bonzini, kvm, linux-kernel
On Thu, Feb 20, 2025 at 12:47:24PM -0800, Josh Poimboeuf wrote:
> On Thu, Feb 20, 2025 at 07:59:54PM +0000, Yosry Ahmed wrote:
> > > static inline void indirect_branch_prediction_barrier(void)
> > > {
> > > asm volatile(ALTERNATIVE("", "call write_ibpb", X86_FEATURE_IBPB)
> > > : ASM_CALL_CONSTRAINT
> > > : : "rax", "rcx", "rdx", "memory");
> > > }
> > >
> > > I also renamed "entry_ibpb" -> "write_ibpb" since it's no longer just
> > > for entry code.
> >
> > Do you want me to add this in this series or do you want to do it on top
> > of it? If you have a patch lying around I can also include it as-is.
>
> Your patches are already an improvement and can be taken as-is:
>
> Acked-by: Josh Poimboeuf <jpoimboe@kernel.org>
>
> I'll try to dust off my patches soon and rebase them on yours.
SGTM, thanks!
>
> --
> Josh
* Re: [PATCH 1/6] x86/bugs: Move the X86_FEATURE_USE_IBPB check into callers
2025-02-19 22:08 ` [PATCH 1/6] x86/bugs: Move the X86_FEATURE_USE_IBPB check into callers Yosry Ahmed
@ 2025-02-25 19:47 ` Sean Christopherson
2025-02-25 21:28 ` Yosry Ahmed
0 siblings, 1 reply; 21+ messages in thread
From: Sean Christopherson @ 2025-02-25 19:47 UTC
To: Yosry Ahmed
Cc: x86, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
H. Peter Anvin, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
Andy Lutomirski, Paolo Bonzini, kvm, linux-kernel
On Wed, Feb 19, 2025, Yosry Ahmed wrote:
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 6c56d5235f0f3..729a8ee24037b 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -1478,7 +1478,8 @@ void vmx_vcpu_load_vmcs(struct kvm_vcpu *vcpu, int cpu,
> * may switch the active VMCS multiple times).
> */
> if (!buddy || WARN_ON_ONCE(buddy->vmcs != prev))
> - indirect_branch_prediction_barrier();
> + if (cpu_feature_enabled(X86_FEATURE_USE_IBPB))
Combine this into a single if-statement, to make it readable and because as-is
the outer if would need curly braces.
And since this check will stay around in the form of a static_branch, I vote to
check it first so that the checks on "buddy" are elided if vcpu_load_ibpb is disabled.
That'll mean the WARN_ON_ONCE() won't fire if we have a bug and someone is running
with mitigations disabled, but I'm a-ok with that.
* Re: [PATCH 4/6] x86/bugs: Use a static branch to guard IBPB on vCPU load
2025-02-19 22:08 ` [PATCH 4/6] x86/bugs: Use a static branch to guard IBPB on vCPU load Yosry Ahmed
@ 2025-02-25 19:49 ` Sean Christopherson
2025-02-25 21:27 ` Yosry Ahmed
0 siblings, 1 reply; 21+ messages in thread
From: Sean Christopherson @ 2025-02-25 19:49 UTC (permalink / raw)
To: Yosry Ahmed
Cc: x86, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
H. Peter Anvin, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
Andy Lutomirski, Paolo Bonzini, kvm, linux-kernel
On Wed, Feb 19, 2025, Yosry Ahmed wrote:
> Instead of using X86_FEATURE_USE_IBPB to guard the IBPB execution in the
> vCPU load path, introduce a static branch, similar to switch_mm_*_ibpb.
>
> This makes it obvious in spectre_v2_user_select_mitigation() what
> exactly is being toggled, instead of the unclear X86_FEATURE_USE_IBPB
> (which will be shortly removed). It also provides more fine-grained
> control, making it simpler to change/add paths that control the IBPB in
> the vCPU load path without affecting other IBPBs.
>
> Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
> ---
> arch/x86/include/asm/nospec-branch.h | 2 ++
> arch/x86/kernel/cpu/bugs.c | 5 +++++
> arch/x86/kvm/svm/svm.c | 2 +-
> arch/x86/kvm/vmx/vmx.c | 2 +-
> 4 files changed, 9 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
> index 7cbb76a2434b9..a22836c5fb338 100644
> --- a/arch/x86/include/asm/nospec-branch.h
> +++ b/arch/x86/include/asm/nospec-branch.h
> @@ -552,6 +552,8 @@ DECLARE_STATIC_KEY_FALSE(switch_to_cond_stibp);
> DECLARE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
> DECLARE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
>
> +DECLARE_STATIC_KEY_FALSE(vcpu_load_ibpb);
How about ibpb_on_vcpu_load? To make it easy for readers to understand exactly
what the knob controls.
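[Editorial sketch] For reference, patch 4 follows the same pattern as the existing switch_mm_*_ibpb keys. Roughly, as described in the changelog (a sketch, not the exact diff; placement of the enable call is illustrative):

```c
/* arch/x86/include/asm/nospec-branch.h */
DECLARE_STATIC_KEY_FALSE(vcpu_load_ibpb);

/* arch/x86/kernel/cpu/bugs.c: defined here, and flipped on from
 * spectre_v2_user_select_mitigation() when an IBPB mode is selected */
DEFINE_STATIC_KEY_FALSE(vcpu_load_ibpb);
	static_branch_enable(&vcpu_load_ibpb);

/* arch/x86/kvm/svm/svm.c and arch/x86/kvm/vmx/vmx.c: the barrier on
 * vCPU load is then guarded by the static branch instead of the
 * X86_FEATURE_USE_IBPB feature bit */
if (static_branch_unlikely(&vcpu_load_ibpb))
	indirect_branch_prediction_barrier();
```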
* Re: [PATCH 5/6] KVM: nVMX: Always use IBPB to properly virtualize IBRS
2025-02-19 22:08 ` [PATCH 5/6] KVM: nVMX: Always use IBPB to properly virtualize IBRS Yosry Ahmed
2025-02-19 23:09 ` Jim Mattson
@ 2025-02-25 19:50 ` Sean Christopherson
1 sibling, 0 replies; 21+ messages in thread
From: Sean Christopherson @ 2025-02-25 19:50 UTC (permalink / raw)
To: Yosry Ahmed
Cc: x86, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
H. Peter Anvin, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
Andy Lutomirski, Paolo Bonzini, kvm, linux-kernel
On Wed, Feb 19, 2025, Yosry Ahmed wrote:
> On synthesized nested VM-exits in VMX, an IBPB is performed if IBRS is
> advertised to the guest to properly provide separate prediction domains
> for L1 and L2. However, this is currently conditional on
> X86_FEATURE_USE_IBPB, which depends on the host spectre_v2_user
> mitigation.
>
> In short, if spectre_v2_user=no, IBRS is not virtualized correctly and
> L1 becomes susceptible to attacks from L2. Fix this by performing the
> IBPB regardless of X86_FEATURE_USE_IBPB.
>
> Fixes: 2e7eab81425a ("KVM: VMX: Execute IBPB on emulated VM-exit when guest has IBRS")
> Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
> ---
Acked-by: Sean Christopherson <seanjc@google.com>
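[Editorial sketch] For readers following along, the fix boils down to dropping the feature-bit half of the guard on the emulated VM-exit path; schematically (the use of guest_cpuid_has() as the capability check here is illustrative, not the exact diff):

```c
/* arch/x86/kvm/vmx/nested.c, on emulated (synthesized) VM-exit */
-	if (cpu_feature_enabled(X86_FEATURE_USE_IBPB) &&
-	    guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL))
+	if (guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL))
		indirect_branch_prediction_barrier();
```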
* Re: [PATCH 4/6] x86/bugs: Use a static branch to guard IBPB on vCPU load
2025-02-25 19:49 ` Sean Christopherson
@ 2025-02-25 21:27 ` Yosry Ahmed
2025-02-25 22:40 ` Sean Christopherson
0 siblings, 1 reply; 21+ messages in thread
From: Yosry Ahmed @ 2025-02-25 21:27 UTC (permalink / raw)
To: Sean Christopherson
Cc: x86, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
H. Peter Anvin, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
Andy Lutomirski, Paolo Bonzini, kvm, linux-kernel
February 25, 2025 at 11:49 AM, "Sean Christopherson" <seanjc@google.com> wrote:
>
> On Wed, Feb 19, 2025, Yosry Ahmed wrote:
> >
> > Instead of using X86_FEATURE_USE_IBPB to guard the IBPB execution in the
> > vCPU load path, introduce a static branch, similar to switch_mm_*_ibpb.
> >
> > This makes it obvious in spectre_v2_user_select_mitigation() what
> > exactly is being toggled, instead of the unclear X86_FEATURE_USE_IBPB
> > (which will be shortly removed). It also provides more fine-grained
> > control, making it simpler to change/add paths that control the IBPB in
> > the vCPU load path without affecting other IBPBs.
> >
> > Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
> >
> > ---
> >
> > arch/x86/include/asm/nospec-branch.h | 2 ++
> > arch/x86/kernel/cpu/bugs.c | 5 +++++
> > arch/x86/kvm/svm/svm.c | 2 +-
> > arch/x86/kvm/vmx/vmx.c | 2 +-
> > 4 files changed, 9 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
> > index 7cbb76a2434b9..a22836c5fb338 100644
> > --- a/arch/x86/include/asm/nospec-branch.h
> > +++ b/arch/x86/include/asm/nospec-branch.h
> > @@ -552,6 +552,8 @@ DECLARE_STATIC_KEY_FALSE(switch_to_cond_stibp);
> > DECLARE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
> >
> > DECLARE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
> >
> > +DECLARE_STATIC_KEY_FALSE(vcpu_load_ibpb);
> >
>
> How about ibpb_on_vcpu_load? To make it easy for readers to understand exactly
> what the knob controls.
I was trying to remain consistent with the existing static branches' names, but I am fine with ibpb_on_vcpu_load if others don't object.
* Re: [PATCH 1/6] x86/bugs: Move the X86_FEATURE_USE_IBPB check into callers
2025-02-25 19:47 ` Sean Christopherson
@ 2025-02-25 21:28 ` Yosry Ahmed
0 siblings, 0 replies; 21+ messages in thread
From: Yosry Ahmed @ 2025-02-25 21:28 UTC (permalink / raw)
To: Sean Christopherson
Cc: x86, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
H. Peter Anvin, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
Andy Lutomirski, Paolo Bonzini, kvm, linux-kernel
February 25, 2025 at 11:47 AM, "Sean Christopherson" <seanjc@google.com> wrote:
>
> On Wed, Feb 19, 2025, Yosry Ahmed wrote:
> >
> > diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> > index 6c56d5235f0f3..729a8ee24037b 100644
> > --- a/arch/x86/kvm/vmx/vmx.c
> > +++ b/arch/x86/kvm/vmx/vmx.c
> > @@ -1478,7 +1478,8 @@ void vmx_vcpu_load_vmcs(struct kvm_vcpu *vcpu, int cpu,
> > * may switch the active VMCS multiple times).
> > */
> > if (!buddy || WARN_ON_ONCE(buddy->vmcs != prev))
> > - indirect_branch_prediction_barrier();
> > + if (cpu_feature_enabled(X86_FEATURE_USE_IBPB))
>
> Combine this into a single if-statement, to make it readable and because as-is
> the outer if would need curly braces.
> And since this check will stay around in the form of a static_branch, I vote to
> check it first so that the checks on "buddy" are elided if vcpu_load_ibpb is disabled.
> That'll mean the WARN_ON_ONCE() won't fire if we have a bug and someone is running
> with mitigations disabled, but I'm a-ok with that.
SGTM, will do that in the next version. Thanks!
* Re: [PATCH 4/6] x86/bugs: Use a static branch to guard IBPB on vCPU load
2025-02-25 21:27 ` Yosry Ahmed
@ 2025-02-25 22:40 ` Sean Christopherson
2025-02-26 2:49 ` Yosry Ahmed
0 siblings, 1 reply; 21+ messages in thread
From: Sean Christopherson @ 2025-02-25 22:40 UTC (permalink / raw)
To: Yosry Ahmed
Cc: x86, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
H. Peter Anvin, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
Andy Lutomirski, Paolo Bonzini, kvm, linux-kernel
On Tue, Feb 25, 2025, Yosry Ahmed wrote:
> February 25, 2025 at 11:49 AM, "Sean Christopherson" <seanjc@google.com> wrote:
> >
> > On Wed, Feb 19, 2025, Yosry Ahmed wrote:
> > >
> > > Instead of using X86_FEATURE_USE_IBPB to guard the IBPB execution in the
> > > vCPU load path, introduce a static branch, similar to switch_mm_*_ibpb.
> > >
> > > This makes it obvious in spectre_v2_user_select_mitigation() what
> > > exactly is being toggled, instead of the unclear X86_FEATURE_USE_IBPB
> > > (which will be shortly removed). It also provides more fine-grained
> > > control, making it simpler to change/add paths that control the IBPB in
> > > the vCPU load path without affecting other IBPBs.
> > >
> > > Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
> > >
> > > ---
> > >
> > > arch/x86/include/asm/nospec-branch.h | 2 ++
> > > arch/x86/kernel/cpu/bugs.c | 5 +++++
> > > arch/x86/kvm/svm/svm.c | 2 +-
> > > arch/x86/kvm/vmx/vmx.c | 2 +-
> > > 4 files changed, 9 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
> > > index 7cbb76a2434b9..a22836c5fb338 100644
> > > --- a/arch/x86/include/asm/nospec-branch.h
> > > +++ b/arch/x86/include/asm/nospec-branch.h
> > > @@ -552,6 +552,8 @@ DECLARE_STATIC_KEY_FALSE(switch_to_cond_stibp);
> > > DECLARE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
> > >
> > > DECLARE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
> > >
> > > +DECLARE_STATIC_KEY_FALSE(vcpu_load_ibpb);
> > >
> >
> > How about ibpb_on_vcpu_load? To make it easy for readers to understand exactly
> > what the knob controls.
>
> I was trying to remain consistent with the existing static branches' names,
> but I am fine with ibpb_on_vcpu_load if others don't object.
I assumed as much :-) I'm ok with vcpu_load_ibpb if that's what others prefer.
* Re: [PATCH 4/6] x86/bugs: Use a static branch to guard IBPB on vCPU load
2025-02-25 22:40 ` Sean Christopherson
@ 2025-02-26 2:49 ` Yosry Ahmed
2025-02-27 0:46 ` Sean Christopherson
0 siblings, 1 reply; 21+ messages in thread
From: Yosry Ahmed @ 2025-02-26 2:49 UTC (permalink / raw)
To: Sean Christopherson
Cc: x86, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
H. Peter Anvin, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
Andy Lutomirski, Paolo Bonzini, kvm, linux-kernel
On Tue, Feb 25, 2025 at 02:40:24PM -0800, Sean Christopherson wrote:
> On Tue, Feb 25, 2025, Yosry Ahmed wrote:
> > February 25, 2025 at 11:49 AM, "Sean Christopherson" <seanjc@google.com> wrote:
> > >
> > > On Wed, Feb 19, 2025, Yosry Ahmed wrote:
> > > >
> > > > Instead of using X86_FEATURE_USE_IBPB to guard the IBPB execution in the
> > > > vCPU load path, introduce a static branch, similar to switch_mm_*_ibpb.
> > > >
> > > > This makes it obvious in spectre_v2_user_select_mitigation() what
> > > > exactly is being toggled, instead of the unclear X86_FEATURE_USE_IBPB
> > > > (which will be shortly removed). It also provides more fine-grained
> > > > control, making it simpler to change/add paths that control the IBPB in
> > > > the vCPU load path without affecting other IBPBs.
> > > >
> > > > Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
> > > >
> > > > ---
> > > >
> > > > arch/x86/include/asm/nospec-branch.h | 2 ++
> > > > arch/x86/kernel/cpu/bugs.c | 5 +++++
> > > > arch/x86/kvm/svm/svm.c | 2 +-
> > > > arch/x86/kvm/vmx/vmx.c | 2 +-
> > > > 4 files changed, 9 insertions(+), 2 deletions(-)
> > > >
> > > > diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
> > > > index 7cbb76a2434b9..a22836c5fb338 100644
> > > > --- a/arch/x86/include/asm/nospec-branch.h
> > > > +++ b/arch/x86/include/asm/nospec-branch.h
> > > > @@ -552,6 +552,8 @@ DECLARE_STATIC_KEY_FALSE(switch_to_cond_stibp);
> > > > DECLARE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
> > > >
> > > > DECLARE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
> > > >
> > > > +DECLARE_STATIC_KEY_FALSE(vcpu_load_ibpb);
> > > >
> > >
> > > How about ibpb_on_vcpu_load? To make it easy for readers to understand exactly
> > > what the knob controls.
> >
> > I was trying to remain consistent with the existing static branches' names,
> > but I am fine with ibpb_on_vcpu_load if others don't object.
>
> I assumed as much :-) I'm ok with vcpu_load_ibpb if that's what others prefer.
To be honest looking at this again I think I prefer consistency, so if
you don't mind and others don't chime in I'd rather keep it as-is.
Alternatively I can rename all the static branches (e.g.
ibpb_always_on_switch_mm) :P
* Re: [PATCH 4/6] x86/bugs: Use a static branch to guard IBPB on vCPU load
2025-02-26 2:49 ` Yosry Ahmed
@ 2025-02-27 0:46 ` Sean Christopherson
2025-02-27 0:54 ` Yosry Ahmed
0 siblings, 1 reply; 21+ messages in thread
From: Sean Christopherson @ 2025-02-27 0:46 UTC (permalink / raw)
To: Yosry Ahmed
Cc: x86, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
H. Peter Anvin, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
Andy Lutomirski, Paolo Bonzini, kvm, linux-kernel
On Wed, Feb 26, 2025, Yosry Ahmed wrote:
> On Tue, Feb 25, 2025 at 02:40:24PM -0800, Sean Christopherson wrote:
> > On Tue, Feb 25, 2025, Yosry Ahmed wrote:
> > > February 25, 2025 at 11:49 AM, "Sean Christopherson" <seanjc@google.com> wrote:
> > > >
> > > > On Wed, Feb 19, 2025, Yosry Ahmed wrote:
> > > > >
> > > > > Instead of using X86_FEATURE_USE_IBPB to guard the IBPB execution in the
> > > > > vCPU load path, introduce a static branch, similar to switch_mm_*_ibpb.
> > > > >
> > > > > This makes it obvious in spectre_v2_user_select_mitigation() what
> > > > > exactly is being toggled, instead of the unclear X86_FEATURE_USE_IBPB
> > > > > (which will be shortly removed). It also provides more fine-grained
> > > > > control, making it simpler to change/add paths that control the IBPB in
> > > > > the vCPU load path without affecting other IBPBs.
> > > > >
> > > > > Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
> > > > >
> > > > > ---
> > > > >
> > > > > arch/x86/include/asm/nospec-branch.h | 2 ++
> > > > > arch/x86/kernel/cpu/bugs.c | 5 +++++
> > > > > arch/x86/kvm/svm/svm.c | 2 +-
> > > > > arch/x86/kvm/vmx/vmx.c | 2 +-
> > > > > 4 files changed, 9 insertions(+), 2 deletions(-)
> > > > >
> > > > > diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
> > > > > index 7cbb76a2434b9..a22836c5fb338 100644
> > > > > --- a/arch/x86/include/asm/nospec-branch.h
> > > > > +++ b/arch/x86/include/asm/nospec-branch.h
> > > > > @@ -552,6 +552,8 @@ DECLARE_STATIC_KEY_FALSE(switch_to_cond_stibp);
> > > > > DECLARE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
> > > > >
> > > DECLARE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
> > > > >
> > > +DECLARE_STATIC_KEY_FALSE(vcpu_load_ibpb);
> > > > >
> > > >
> > > > How about ibpb_on_vcpu_load? To make it easy for readers to understand exactly
> > > > what the knob controls.
> > >
> > > I was trying to remain consistent with the existing static branches' names,
> > > but I am fine with ibpb_on_vcpu_load if others don't object.
> >
> > I assumed as much :-) I'm ok with vcpu_load_ibpb if that's what others prefer.
>
> To be honest looking at this again I think I prefer consistency, so if
> you don't mind and others don't chime in I'd rather keep it as-is.
Works for me.
Actually, looking at the names again, wouldn't "switch_vcpu_ibpb" be better?
KVM doesn't do IBPB on every vCPU load or even every VMCS load, only when a
different vCPU is being loaded.
> Alternatively I can rename all the static branches (e.g.
> ibpb_always_on_switch_mm) :P
LOL, also works for me.
* Re: [PATCH 4/6] x86/bugs: Use a static branch to guard IBPB on vCPU load
2025-02-27 0:46 ` Sean Christopherson
@ 2025-02-27 0:54 ` Yosry Ahmed
0 siblings, 0 replies; 21+ messages in thread
From: Yosry Ahmed @ 2025-02-27 0:54 UTC (permalink / raw)
To: Sean Christopherson
Cc: x86, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
H. Peter Anvin, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
Andy Lutomirski, Paolo Bonzini, kvm, linux-kernel
On Wed, Feb 26, 2025 at 04:46:06PM -0800, Sean Christopherson wrote:
> On Wed, Feb 26, 2025, Yosry Ahmed wrote:
> > On Tue, Feb 25, 2025 at 02:40:24PM -0800, Sean Christopherson wrote:
> > > On Tue, Feb 25, 2025, Yosry Ahmed wrote:
> > > > February 25, 2025 at 11:49 AM, "Sean Christopherson" <seanjc@google.com> wrote:
> > > > >
> > > > > On Wed, Feb 19, 2025, Yosry Ahmed wrote:
> > > > > >
> > > > > > Instead of using X86_FEATURE_USE_IBPB to guard the IBPB execution in the
> > > > > > vCPU load path, introduce a static branch, similar to switch_mm_*_ibpb.
> > > > > >
> > > > > > This makes it obvious in spectre_v2_user_select_mitigation() what
> > > > > > exactly is being toggled, instead of the unclear X86_FEATURE_USE_IBPB
> > > > > > (which will be shortly removed). It also provides more fine-grained
> > > > > > control, making it simpler to change/add paths that control the IBPB in
> > > > > > the vCPU load path without affecting other IBPBs.
> > > > > >
> > > > > > Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
> > > > > >
> > > > > > ---
> > > > > >
> > > > > > arch/x86/include/asm/nospec-branch.h | 2 ++
> > > > > > arch/x86/kernel/cpu/bugs.c | 5 +++++
> > > > > > arch/x86/kvm/svm/svm.c | 2 +-
> > > > > > arch/x86/kvm/vmx/vmx.c | 2 +-
> > > > > > 4 files changed, 9 insertions(+), 2 deletions(-)
> > > > > >
> > > > > > diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
> > > > > > index 7cbb76a2434b9..a22836c5fb338 100644
> > > > > > --- a/arch/x86/include/asm/nospec-branch.h
> > > > > > +++ b/arch/x86/include/asm/nospec-branch.h
> > > > > > @@ -552,6 +552,8 @@ DECLARE_STATIC_KEY_FALSE(switch_to_cond_stibp);
> > > > > > DECLARE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
> > > > > >
> > > > > > DECLARE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
> > > > > >
> > > > > > +DECLARE_STATIC_KEY_FALSE(vcpu_load_ibpb);
> > > > > >
> > > > >
> > > > > How about ibpb_on_vcpu_load? To make it easy for readers to understand exactly
> > > > > what the knob controls.
> > > >
> > > > I was trying to remain consistent with the existing static branches' names,
> > > > but I am fine with ibpb_on_vcpu_load if others don't object.
> > >
> > > I assumed as much :-) I'm ok with vcpu_load_ibpb if that's what others prefer.
> >
> > To be honest looking at this again I think I prefer consistency, so if
> > you don't mind and others don't chime in I'd rather keep it as-is.
>
> Works for me.
>
> Actually, looking at the names again, wouldn't "switch_vcpu_ibpb" be better?
> KVM doesn't do IBPB on every vCPU load or even every VMCS load, only when a
> different vCPU is being loaded.
Sold :)
>
> > Alternatively I can rename all the static branches (e.g.
> > ibpb_always_on_switch_mm) :P
>
> LOL, also works for me.
end of thread, other threads:[~2025-02-27 0:54 UTC | newest]
Thread overview: 21+ messages
2025-02-19 22:08 [PATCH 0/6] IBPB cleanups and a fixup Yosry Ahmed
2025-02-19 22:08 ` [PATCH 1/6] x86/bugs: Move the X86_FEATURE_USE_IBPB check into callers Yosry Ahmed
2025-02-25 19:47 ` Sean Christopherson
2025-02-25 21:28 ` Yosry Ahmed
2025-02-19 22:08 ` [PATCH 2/6] x86/mm: Remove X86_FEATURE_USE_IBPB checks in cond_mitigation() Yosry Ahmed
2025-02-19 22:08 ` [PATCH 3/6] x86/bugs: Remove the X86_FEATURE_USE_IBPB check in ib_prctl_set() Yosry Ahmed
2025-02-19 22:08 ` [PATCH 4/6] x86/bugs: Use a static branch to guard IBPB on vCPU load Yosry Ahmed
2025-02-25 19:49 ` Sean Christopherson
2025-02-25 21:27 ` Yosry Ahmed
2025-02-25 22:40 ` Sean Christopherson
2025-02-26 2:49 ` Yosry Ahmed
2025-02-27 0:46 ` Sean Christopherson
2025-02-27 0:54 ` Yosry Ahmed
2025-02-19 22:08 ` [PATCH 5/6] KVM: nVMX: Always use IBPB to properly virtualize IBRS Yosry Ahmed
2025-02-19 23:09 ` Jim Mattson
2025-02-25 19:50 ` Sean Christopherson
2025-02-19 22:08 ` [PATCH 6/6] x86/bugs: Remove X86_FEATURE_USE_IBPB Yosry Ahmed
2025-02-20 19:04 ` [PATCH 0/6] IBPB cleanups and a fixup Josh Poimboeuf
2025-02-20 19:59 ` Yosry Ahmed
2025-02-20 20:47 ` Josh Poimboeuf
2025-02-20 21:50 ` Yosry Ahmed