From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
To: x86@kernel.org, David Kaplan <david.kaplan@amd.com>,
Nikolay Borisov <nik.borisov@suse.com>,
"H. Peter Anvin" <hpa@zytor.com>,
Josh Poimboeuf <jpoimboe@kernel.org>,
Sean Christopherson <seanjc@google.com>,
Paolo Bonzini <pbonzini@redhat.com>,
Borislav Petkov <bp@alien8.de>,
Dave Hansen <dave.hansen@linux.intel.com>
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
Asit Mallick <asit.k.mallick@intel.com>,
Tao Zhang <tao1.zhang@intel.com>
Subject: [PATCH v4 09/11] x86/vmscape: Deploy BHB clearing mitigation
Date: Wed, 19 Nov 2025 22:19:51 -0800
Message-ID: <20251119-vmscape-bhb-v4-9-1adad4e69ddc@linux.intel.com>
In-Reply-To: <20251119-vmscape-bhb-v4-0-1adad4e69ddc@linux.intel.com>

The IBPB mitigation for VMSCAPE is overkill on CPUs that are only affected
by the BHI variant of VMSCAPE. On such CPUs, eIBRS already provides
indirect branch isolation between the guest and host userspace. However,
branch history from the guest may still influence indirect branches in
host userspace.

To mitigate the BHI aspect, use clear_bhb_loop().
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
---
Documentation/admin-guide/hw-vuln/vmscape.rst | 4 ++++
arch/x86/include/asm/nospec-branch.h | 2 ++
arch/x86/kernel/cpu/bugs.c | 30 ++++++++++++++++++++-------
3 files changed, 29 insertions(+), 7 deletions(-)
diff --git a/Documentation/admin-guide/hw-vuln/vmscape.rst b/Documentation/admin-guide/hw-vuln/vmscape.rst
index d9b9a2b6c114c05a7325e5f3c9d42129339b870b..dc63a0bac03d43d1e295de0791dd6497d101f986 100644
--- a/Documentation/admin-guide/hw-vuln/vmscape.rst
+++ b/Documentation/admin-guide/hw-vuln/vmscape.rst
@@ -86,6 +86,10 @@ The possible values in this file are:
run a potentially malicious guest and issues an IBPB before the first
exit to userspace after VM-exit.
+ * 'Mitigation: Clear BHB before exit to userspace':
+
+ As above, but the BHB is cleared instead of issuing an IBPB.
+
* 'Mitigation: IBPB on VMEXIT':
IBPB is issued on every VM-exit. This occurs when other mitigations like
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 15a2fa8f2f48a066e102263513eff9537ac1d25f..1e8c26c37dbed4256b35101fb41c0e1eb6ef9272 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -388,6 +388,8 @@ extern void write_ibpb(void);
#ifdef CONFIG_X86_64
extern void clear_bhb_loop(void);
+#else
+static inline void clear_bhb_loop(void) {}
#endif
extern void (*x86_return_thunk)(void);
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index cbb3341b9a19f835738eda7226323d88b7e41e52..d12c07ccf59479ecf590935607394492c988b2ff 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -109,9 +109,8 @@ DEFINE_PER_CPU(u64, x86_spec_ctrl_current);
EXPORT_PER_CPU_SYMBOL_GPL(x86_spec_ctrl_current);
/*
- * Set when the CPU has run a potentially malicious guest. An IBPB will
- * be needed to before running userspace. That IBPB will flush the branch
- * predictor content.
+ * Set when the CPU has run a potentially malicious guest. Indicates that a
+ * branch predictor flush is needed before running userspace.
*/
DEFINE_PER_CPU(bool, x86_predictor_flush_exit_to_user);
EXPORT_PER_CPU_SYMBOL_GPL(x86_predictor_flush_exit_to_user);
@@ -3200,13 +3199,15 @@ enum vmscape_mitigations {
VMSCAPE_MITIGATION_AUTO,
VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER,
VMSCAPE_MITIGATION_IBPB_ON_VMEXIT,
+ VMSCAPE_MITIGATION_BHB_CLEAR_EXIT_TO_USER,
};
static const char * const vmscape_strings[] = {
- [VMSCAPE_MITIGATION_NONE] = "Vulnerable",
+ [VMSCAPE_MITIGATION_NONE] = "Vulnerable",
/* [VMSCAPE_MITIGATION_AUTO] */
- [VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER] = "Mitigation: IBPB before exit to userspace",
- [VMSCAPE_MITIGATION_IBPB_ON_VMEXIT] = "Mitigation: IBPB on VMEXIT",
+ [VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER] = "Mitigation: IBPB before exit to userspace",
+ [VMSCAPE_MITIGATION_IBPB_ON_VMEXIT] = "Mitigation: IBPB on VMEXIT",
+ [VMSCAPE_MITIGATION_BHB_CLEAR_EXIT_TO_USER] = "Mitigation: Clear BHB before exit to userspace",
};
static enum vmscape_mitigations vmscape_mitigation __ro_after_init =
@@ -3253,8 +3254,19 @@ static void __init vmscape_select_mitigation(void)
vmscape_mitigation = VMSCAPE_MITIGATION_NONE;
break;
+ case VMSCAPE_MITIGATION_BHB_CLEAR_EXIT_TO_USER:
+ if (!boot_cpu_has(X86_FEATURE_BHI_CTRL))
+ vmscape_mitigation = VMSCAPE_MITIGATION_NONE;
+ break;
case VMSCAPE_MITIGATION_AUTO:
- if (boot_cpu_has(X86_FEATURE_IBPB))
+ /*
+ * CPUs with BHI_CTRL (ADL and newer) can avoid the IBPB and use the BHB
+ * clearing sequence instead. These CPUs are only vulnerable to the BHI
+ * variant of VMSCAPE and do not require an IBPB flush.
+ */
+ if (boot_cpu_has(X86_FEATURE_BHI_CTRL))
+ vmscape_mitigation = VMSCAPE_MITIGATION_BHB_CLEAR_EXIT_TO_USER;
+ else if (boot_cpu_has(X86_FEATURE_IBPB))
vmscape_mitigation = VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER;
else
vmscape_mitigation = VMSCAPE_MITIGATION_NONE;
@@ -3278,6 +3290,9 @@ static void __init vmscape_apply_mitigation(void)
{
if (vmscape_mitigation == VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER)
static_call_update(vmscape_predictor_flush, write_ibpb);
+ else if (vmscape_mitigation == VMSCAPE_MITIGATION_BHB_CLEAR_EXIT_TO_USER &&
+ IS_ENABLED(CONFIG_X86_64))
+ static_call_update(vmscape_predictor_flush, clear_bhb_loop);
}
#undef pr_fmt
@@ -3369,6 +3384,7 @@ void cpu_bugs_smt_update(void)
break;
case VMSCAPE_MITIGATION_IBPB_ON_VMEXIT:
case VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER:
+ case VMSCAPE_MITIGATION_BHB_CLEAR_EXIT_TO_USER:
/*
* Hypervisors can be attacked across-threads, warn for SMT when
* STIBP is not already enabled system-wide.
--
2.34.1