From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
To: Sean Christopherson <seanjc@google.com>
Cc: x86@kernel.org, Jon Kohler <jon@nutanix.com>,
	Nikolay Borisov <nik.borisov@suse.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Josh Poimboeuf <jpoimboe@kernel.org>,
	David Kaplan <david.kaplan@amd.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Alexei Starovoitov <ast@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	Andrii Nakryiko <andrii@kernel.org>,
	KP Singh <kpsingh@kernel.org>, Jiri Olsa <jolsa@kernel.org>,
	"David S. Miller" <davem@davemloft.net>,
	David Laight <david.laight.linux@gmail.com>,
	Andy Lutomirski <luto@kernel.org>,
	Thomas Gleixner <tglx@kernel.org>, Ingo Molnar <mingo@redhat.com>,
	David Ahern <dsahern@kernel.org>,
	Martin KaFai Lau <martin.lau@linux.dev>,
	Eduard Zingerman <eddyz87@gmail.com>, Song Liu <song@kernel.org>,
	Yonghong Song <yonghong.song@linux.dev>,
	John Fastabend <john.fastabend@gmail.com>,
	Stanislav Fomichev <sdf@fomichev.me>, Hao Luo <haoluo@google.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Jonathan Corbet <corbet@lwn.net>,
	linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
	Asit Mallick <asit.k.mallick@intel.com>,
	Tao Zhang <tao1.zhang@intel.com>,
	bpf@vger.kernel.org, netdev@vger.kernel.org,
	linux-doc@vger.kernel.org
Subject: Re: [PATCH v9 07/10] x86/vmscape: Use static_call() for predictor flush
Date: Fri, 3 Apr 2026 09:44:32 -0700
Message-ID: <20260403164432.ltnr5oupddscwaqu@desk>
In-Reply-To: <ac_UJx99kDJY8j3t@google.com>

On Fri, Apr 03, 2026 at 07:52:23AM -0700, Sean Christopherson wrote:
> On Thu, Apr 02, 2026, Pawan Gupta wrote:
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -11463,7 +11463,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
> >  	 * set for the CPU that actually ran the guest, and not the CPU that it
> >  	 * may migrate to.
> >  	 */
> > -	if (cpu_feature_enabled(X86_FEATURE_IBPB_EXIT_TO_USER))
> > +	if (vmscape_mitigation_enabled())
> 
> This is pretty lame.  It turns a statically patched MOV

Yes, it is. This was done ...

>   11548		if (cpu_feature_enabled(X86_FEATURE_IBPB_EXIT_TO_USER))
>   11549			this_cpu_write(x86_ibpb_exit_to_user, true);
>      0x000000000003c57a <+858>:	movb   $0x1,%gs:0x0(%rip)        # 0x3c582 <vcpu_enter_guest+866>
> 
> into a function call and two sets of conditional branches.  And with mitigations
> enabled, that function call may trigger the wonderful unret insanity
> 
>   11548		if (vmscape_mitigation_enabled())
>      0x000000000003c575 <+853>:	call   0x3c57a <vcpu_enter_guest+858>
>      0x000000000003c57a <+858>:	test   %al,%al
>      0x000000000003c57c <+860>:	je     0x3c586 <vcpu_enter_guest+870>
> 
>   11549			this_cpu_write(x86_predictor_flush_exit_to_user, true);
>      0x000000000003c57e <+862>:	movb   $0x1,%gs:0x0(%rip)        # 0x3c586 <vcpu_enter_guest+870>
> 
> 
>   3166	{
>      0xffffffff81285320 <+0>:	endbr64
>      0xffffffff81285324 <+4>:	call   0xffffffff812aa5a0 <__fentry__>
> 
>   3167		return !!static_call_query(vmscape_predictor_flush);
>      0xffffffff81285329 <+9>:	mov    0x13a4f30(%rip),%rax        # 0xffffffff8262a260 <__SCK__vmscape_predictor_flush>
>      0xffffffff81285330 <+16>:	test   %rax,%rax
>      0xffffffff81285333 <+19>:	setne  %al
> 
>   3168	}
>      0xffffffff81285336 <+22>:	jmp    0xffffffff81db1e30 <__x86_return_thunk>
> 
> While this isn't KVM's super hot inner run loop, it's still very much a hot path.
> Even more annoying, KVM will eat the function call on kernels with CPU_MITIGATIONS=n.
> 
> I'd like to at least do something like the below to make the common case of
> multiple guest entry/exits more or less free, and to avoid the CALL+(UN)RET
> overhead, but trying to include linux/static_call.h in processor.h (or any other
> core x86 header) creates a cyclical dependency :-/
> 
> diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
> index 20ab4dd588c6..0dc0680a80f8 100644
> --- a/arch/x86/include/asm/processor.h
> +++ b/arch/x86/include/asm/processor.h
> @@ -36,6 +36,7 @@ struct vm86;
>  #include <linux/err.h>
>  #include <linux/irqflags.h>
>  #include <linux/mem_encrypt.h>
> +#include <linux/static_call.h>
>  
>  /*
>   * We handle most unaligned accesses in hardware.  On the other hand
> @@ -753,7 +754,11 @@ enum mds_mitigations {
>  };
>  
>  extern bool gds_ucode_mitigated(void);
> -extern bool vmscape_mitigation_enabled(void);
> +
> +static inline bool vmscape_mitigation_enabled(void)
> +{
> +       return !!static_call_query(vmscape_predictor_flush);
> +}
>  
>  /*
>   * Make previous memory operations globally visible before
> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> index 366ebe1e1fb9..02bf626f0773 100644
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -148,6 +148,7 @@ DEFINE_STATIC_KEY_FALSE(switch_mm_cond_l1d_flush);
>   * sequence. This defaults to no mitigation.
>   */
>  DEFINE_STATIC_CALL_NULL(vmscape_predictor_flush, write_ibpb);
> +EXPORT_STATIC_CALL_GPL(vmscape_predictor_flush);

... to avoid exporting the static call's key, so that modules (other
than KVM) cannot retarget vmscape_predictor_flush with
static_call_update().
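
For context, the distinction I was trying to preserve looks roughly
like this (a sketch based on the existing helpers in
include/linux/static_call.h):

  /*
   * Exports both the key and the trampoline: any module can retarget
   * the call with static_call_update().
   */
  EXPORT_STATIC_CALL_GPL(vmscape_predictor_flush);

  /*
   * Exports only the trampoline: modules can still invoke the call,
   * but static_call_update() and static_call_query() reference the
   * key symbol and would fail to link.  That also rules out Sean's
   * inline static_call_query() wrapper above.
   */
  EXPORT_STATIC_CALL_TRAMP_GPL(vmscape_predictor_flush);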

Peter suggested changes that would allow adding EXPORT_STATIC_CALL_FOR_KVM():

  https://lore.kernel.org/all/20260319214409.GL3738786@noisy.programming.kicks-ass.net/

EXPORT_STATIC_CALL_FOR_KVM() seems like the cleaner approach to me.
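
To illustrate what I have in mind (the name and mechanics below are my
assumption, e.g. building on EXPORT_SYMBOL_GPL_FOR_MODULES(); the real
shape is whatever Peter's approach lands on):

  /* Hypothetical sketch: export the key and trampoline to KVM only. */
  #define EXPORT_STATIC_CALL_FOR_KVM(name)				\
	EXPORT_SYMBOL_GPL_FOR_MODULES(STATIC_CALL_KEY(name), "kvm");	\
	EXPORT_SYMBOL_GPL_FOR_MODULES(STATIC_CALL_TRAMP(name), "kvm")

  /* bugs.c would then do: */
  DEFINE_STATIC_CALL_NULL(vmscape_predictor_flush, write_ibpb);
  EXPORT_STATIC_CALL_FOR_KVM(vmscape_predictor_flush);

With the key visible to kvm.ko, Sean's inline wrapper works as-is,
while other modules still cannot do static_call_update().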

Boris, I know you didn't like exporting the static call. But, as Sean
said, this is a hot path, and avoiding the unnecessary CALL would
benefit all CPUs (affected or unaffected). Moreover,
EXPORT_STATIC_CALL_FOR_KVM() somewhat addresses your concern about
exposing the static call to the whole world. Would you be okay with it?
