From: sashiko-bot@kernel.org
To: "Pawan Gupta" <pawan.kumar.gupta@linux.intel.com>
Cc: bpf@vger.kernel.org
Subject: Re: [PATCH v11 09/12] x86/vmscape: Use static_call() for predictor flush
Date: Sat, 25 Apr 2026 01:40:30 +0000
Message-ID: <20260425014031.A058EC19425@smtp.kernel.org>
In-Reply-To: <20260422-vmscape-bhb-v11-9-b18e0cf32af4@linux.intel.com>
Sashiko AI review found 3 potential issue(s):
- [High] Possible performance regressions on hot exit paths where alternative-patched, zero-overhead checks were replaced with memory reads.
- [Low] Missing `<linux/static_call.h>` header inclusion in `arch/x86/include/asm/entry-common.h`.
- [Low] Out-of-sync tools ABI header.
--
commit e665a60c1a2370dfaa4dba80cd6f14bfe1c7e97b
Author: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
x86/vmscape: Use static_call() for predictor flush
This commit replaces the flag-based mitigation selector for VMSCAPE with a
static_call to the mitigation function. This avoids adding multiple checks
for different mitigation options at exit-to-userspace.
> diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
> --- a/arch/x86/include/asm/cpufeatures.h
> +++ b/arch/x86/include/asm/cpufeatures.h
[ ... ]
> @@ -503,7 +503,7 @@
> #define X86_FEATURE_TSA_SQ_NO (21*32+11) /* AMD CPU not vulnerable to TSA-SQ */
> #define X86_FEATURE_TSA_L1_NO (21*32+12) /* AMD CPU not vulnerable to TSA-L1 */
> #define X86_FEATURE_CLEAR_CPU_BUF_VM (21*32+13) /* Clear CPU buffers using VERW before VMRUN */
> -#define X86_FEATURE_IBPB_EXIT_TO_USER (21*32+14) /* Use IBPB on exit-to-userspace, see VMSCAPE bug */
> +/* Free */
> #define X86_FEATURE_ABMC (21*32+15) /* Assignable Bandwidth Monitoring Counters */
This isn't a bug, but freeing the feature flag here without updating
tools/arch/x86/include/asm/cpufeatures.h might trigger ABI header sync
warnings during tools builds. Is a follow-up planned for the tools header?
> diff --git a/arch/x86/include/asm/entry-common.h b/arch/x86/include/asm/entry-common.h
> --- a/arch/x86/include/asm/entry-common.h
> +++ b/arch/x86/include/asm/entry-common.h
> @@ -4,6 +4,7 @@
>
> #include <linux/randomize_kstack.h>
> #include <linux/user-return-notifier.h>
> +#include <linux/static_call_types.h>
Does this code need to include <linux/static_call.h> directly?
<linux/static_call_types.h> only provides the type declarations; the use
of static_call_cond() below relies on the macro definitions from
<linux/static_call.h>. Relying on those arriving via transitive includes
risks implicit-declaration or macro-expansion errors if the include chain
changes.
[ ... ]
> @@ -94,10 +95,8 @@ static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs,
> */
> choose_random_kstack_offset(rdtsc());
>
> - /* Avoid unnecessary reads of 'x86_predictor_flush_exit_to_user' */
> - if (cpu_feature_enabled(X86_FEATURE_IBPB_EXIT_TO_USER) &&
> - this_cpu_read(x86_predictor_flush_exit_to_user)) {
> - write_ibpb();
> + if (unlikely(this_cpu_read(x86_predictor_flush_exit_to_user))) {
> + static_call_cond(vmscape_predictor_flush)();
> this_cpu_write(x86_predictor_flush_exit_to_user, false);
> }
> }
Does removing the alternative-patched CPU feature check introduce a
performance regression here?
By dropping the zero-overhead check, an unconditional per-cpu memory read
is now performed on every exit to userspace, reversing the optimization
that the removed comment explicitly documented. Could a static branch be
used instead, to keep the zero-overhead path when the mitigation is
disabled?
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
[ ... ]
> @@ -11463,7 +11463,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
> * set for the CPU that actually ran the guest, and not the CPU that it
> * may migrate to.
> */
> - if (cpu_feature_enabled(X86_FEATURE_IBPB_EXIT_TO_USER))
> + if (static_call_query(vmscape_predictor_flush))
> this_cpu_write(x86_predictor_flush_exit_to_user, true);
>
> /*
Will replacing the CPU feature check with static_call_query() add overhead
on the KVM VM-exit fast path?
Looking at <linux/static_call.h>, static_call_query() reads the call's
current target, i.e. a global function pointer load. Would a static branch
be better here, avoiding the memory read when the mitigation is disabled?
--
Sashiko AI review · https://sashiko.dev/#/patchset/20260422-vmscape-bhb-v11-0-b18e0cf32af4@linux.intel.com?part=9
Thread overview: 16+ messages
2026-04-23 6:14 [PATCH v11 00/12] VMSCAPE optimization for BHI variant Pawan Gupta
2026-04-23 6:14 ` [PATCH v11 01/12] x86/bhi: x86/vmscape: Move LFENCE out of clear_bhb_loop() Pawan Gupta
2026-04-23 6:15 ` [PATCH v11 02/12] x86/bhi: Make clear_bhb_loop() effective on newer CPUs Pawan Gupta
2026-04-23 6:15 ` [PATCH v11 03/12] x86/bhi: Rename clear_bhb_loop() to clear_bhb_loop_nofence() Pawan Gupta
2026-04-23 6:15 ` [PATCH v11 04/12] x86/vmscape: Rename x86_ibpb_exit_to_user to x86_predictor_flush_exit_to_user Pawan Gupta
2026-04-23 6:16 ` [PATCH v11 05/12] x86/vmscape: Move mitigation selection to a switch() Pawan Gupta
2026-04-23 6:16 ` [PATCH v11 06/12] x86/vmscape: Use write_ibpb() instead of indirect_branch_prediction_barrier() Pawan Gupta
2026-04-23 6:16 ` [PATCH v11 07/12] static_call: Define EXPORT_STATIC_CALL_FOR_MODULES() Pawan Gupta
2026-04-23 6:16 ` [PATCH v11 08/12] kvm: Define EXPORT_STATIC_CALL_FOR_KVM() Pawan Gupta
2026-04-23 13:12 ` Sean Christopherson
2026-04-23 6:17 ` [PATCH v11 09/12] x86/vmscape: Use static_call() for predictor flush Pawan Gupta
2026-04-25 1:40 ` sashiko-bot [this message]
2026-04-23 6:17 ` [PATCH v11 10/12] x86/vmscape: Deploy BHB clearing mitigation Pawan Gupta
2026-04-23 6:17 ` [PATCH v11 11/12] x86/vmscape: Resolve conflict between attack-vectors and vmscape=force Pawan Gupta
2026-04-23 6:17 ` [PATCH v11 12/12] x86/vmscape: Add cmdline vmscape=on to override attack vector controls Pawan Gupta
2026-04-23 17:00 ` [PATCH v11 00/12] VMSCAPE optimization for BHI variant Dave Hansen