public inbox for netdev@vger.kernel.org
From: Peter Zijlstra <peterz@infradead.org>
To: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Cc: x86@kernel.org, Nikolay Borisov <nik.borisov@suse.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Josh Poimboeuf <jpoimboe@kernel.org>,
	David Kaplan <david.kaplan@amd.com>,
	Sean Christopherson <seanjc@google.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Alexei Starovoitov <ast@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	Andrii Nakryiko <andrii@kernel.org>,
	KP Singh <kpsingh@kernel.org>, Jiri Olsa <jolsa@kernel.org>,
	"David S. Miller" <davem@davemloft.net>,
	David Laight <david.laight.linux@gmail.com>,
	Andy Lutomirski <luto@kernel.org>,
	Thomas Gleixner <tglx@kernel.org>, Ingo Molnar <mingo@redhat.com>,
	David Ahern <dsahern@kernel.org>,
	Martin KaFai Lau <martin.lau@linux.dev>,
	Eduard Zingerman <eddyz87@gmail.com>, Song Liu <song@kernel.org>,
	Yonghong Song <yonghong.song@linux.dev>,
	John Fastabend <john.fastabend@gmail.com>,
	Stanislav Fomichev <sdf@fomichev.me>, Hao Luo <haoluo@google.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Jonathan Corbet <corbet@lwn.net>,
	linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
	Asit Mallick <asit.k.mallick@intel.com>,
	Tao Zhang <tao1.zhang@intel.com>,
	bpf@vger.kernel.org, netdev@vger.kernel.org,
	linux-doc@vger.kernel.org
Subject: Re: [PATCH v7 07/10] x86/vmscape: Use static_call() for predictor flush
Date: Thu, 19 Mar 2026 22:44:09 +0100	[thread overview]
Message-ID: <20260319214409.GL3738786@noisy.programming.kicks-ass.net> (raw)
In-Reply-To: <20260319213421.br6na4dulrjm6eke@desk>

On Thu, Mar 19, 2026 at 02:34:21PM -0700, Pawan Gupta wrote:
> On Thu, Mar 19, 2026 at 09:58:02PM +0100, Peter Zijlstra wrote:
> > On Thu, Mar 19, 2026 at 08:41:54AM -0700, Pawan Gupta wrote:
> > > diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> > > index 68e2df3e3bf5..b75eda114503 100644
> > > --- a/arch/x86/kernel/cpu/bugs.c
> > > +++ b/arch/x86/kernel/cpu/bugs.c
> > > @@ -144,6 +144,17 @@ EXPORT_SYMBOL_GPL(cpu_buf_idle_clear);
> > >   */
> > >  DEFINE_STATIC_KEY_FALSE(switch_mm_cond_l1d_flush);
> > >  
> > > +/*
> > > + * Controls CPU Fill buffer clear before VMenter. This is a subset of
> > > + * X86_FEATURE_CLEAR_CPU_BUF, and should only be enabled when KVM-only
> > > + * mitigation is required.
> > > + */
> > > +DEFINE_STATIC_KEY_FALSE(cpu_buf_vm_clear);
> > > +EXPORT_SYMBOL_GPL(cpu_buf_vm_clear);
> > > +
> > > +DEFINE_STATIC_CALL_NULL(vmscape_predictor_flush, write_ibpb);
> > > +EXPORT_STATIC_CALL_GPL(vmscape_predictor_flush);
> > 
> > Does that want to be:
> > 
> > EXPORT_STATIC_CALL_TRAMP_GPL(vmscape_predictor_flush);
> > 
> > The distinction being that if you only export the trampoline, modules
> > can do the static_call() thing, but cannot do static_call_update().
> 
> Right, modules shouldn't be updating this static_call().
> 
> One caveat of not exporting the static key is that KVM uses the key to
> determine whether the mitigation is deployed or not:
> 
>   vcpu_enter_guest()
>   {
>       ...
> 
>      /*
>       * Mark this CPU as needing a branch predictor flush before running
>       * userspace. Must be done before enabling preemption to ensure it gets
>       * set for the CPU that actually ran the guest, and not the CPU that it
>       * may migrate to.
>       */
>      if (static_call_query(vmscape_predictor_flush))
>              this_cpu_write(x86_predictor_flush_exit_to_user, true);
> 
> With _TRAMP, KVM complains:
> 
>  ERROR: modpost: "__SCK__vmscape_predictor_flush" [arch/x86/kvm/kvm.ko] undefined!

Ah, tricky. Yeah, this would need to be solved differently. Perhaps wrap
this in a helper and export that?

Or use the below little thing and change it to
EXPORT_STATIC_CALL_FOR_MODULES(foo, "kvm"); or whatnot.

> Probably one option is to somehow make sure that the key can be set to
> __ro_after_init? I don't see a use case for modifying the static_call() after
> boot.

So we have __ro_after_init for static_branch, but we've not done it for
static_call yet. It shouldn't be terribly difficult, it just hasn't been
done. Not sure this is the moment to do so.
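
For reference, the static_branch side already has the _RO variants in
include/linux/jump_label.h; a static_call counterpart would presumably
mirror that:

```c
/*
 * Existing static_branch precedent: the _RO variants place the key in
 * .data..ro_after_init, so it cannot be flipped after boot. A
 * hypothetical static_call analogue would do the same for the
 * static_call_key.
 */
DEFINE_STATIC_KEY_FALSE_RO(cpu_buf_vm_clear);
```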


---
diff --git a/include/linux/static_call.h b/include/linux/static_call.h
index 78a77a4ae0ea..b610afd1ed55 100644
--- a/include/linux/static_call.h
+++ b/include/linux/static_call.h
@@ -216,6 +216,9 @@ extern long __static_call_return0(void);
 #define EXPORT_STATIC_CALL_GPL(name)					\
 	EXPORT_SYMBOL_GPL(STATIC_CALL_KEY(name));			\
 	EXPORT_SYMBOL_GPL(STATIC_CALL_TRAMP(name))
+#define EXPORT_STATIC_CALL_FOR_MODULES(name, mods)			\
+	EXPORT_SYMBOL_FOR_MODULES(STATIC_CALL_KEY(name), mods);		\
+	EXPORT_SYMBOL_FOR_MODULES(STATIC_CALL_TRAMP(name), mods)
 
 /* Leave the key unexported, so modules can't change static call targets: */
 #define EXPORT_STATIC_CALL_TRAMP(name)					\
@@ -276,6 +279,9 @@ extern long __static_call_return0(void);
 #define EXPORT_STATIC_CALL_GPL(name)					\
 	EXPORT_SYMBOL_GPL(STATIC_CALL_KEY(name));			\
 	EXPORT_SYMBOL_GPL(STATIC_CALL_TRAMP(name))
+#define EXPORT_STATIC_CALL_FOR_MODULES(name, mods)			\
+	EXPORT_SYMBOL_FOR_MODULES(STATIC_CALL_KEY(name), mods);		\
+	EXPORT_SYMBOL_FOR_MODULES(STATIC_CALL_TRAMP(name), mods)
 
 /* Leave the key unexported, so modules can't change static call targets: */
 #define EXPORT_STATIC_CALL_TRAMP(name)					\
@@ -346,6 +352,8 @@ static inline int static_call_text_reserved(void *start, void *end)
 
 #define EXPORT_STATIC_CALL(name)	EXPORT_SYMBOL(STATIC_CALL_KEY(name))
 #define EXPORT_STATIC_CALL_GPL(name)	EXPORT_SYMBOL_GPL(STATIC_CALL_KEY(name))
+#define EXPORT_STATIC_CALL_FOR_MODULES(name, mods)			\
+	EXPORT_SYMBOL_FOR_MODULES(STATIC_CALL_KEY(name), mods)
 
 #endif /* CONFIG_HAVE_STATIC_CALL */
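
With the diff above, the bugs.c side could then read something like the
following sketch (module name per the suggestion above; note this still
exports the key to kvm itself, so kvm could in principle also do
static_call_update(), it just hides both symbols from every other module):

```c
DEFINE_STATIC_CALL_NULL(vmscape_predictor_flush, write_ibpb);
EXPORT_STATIC_CALL_FOR_MODULES(vmscape_predictor_flush, "kvm");
```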
 


2026-03-19 15:40 [PATCH v7 00/10] VMSCAPE optimization for BHI variant Pawan Gupta
2026-03-19 15:40 ` [PATCH v7 01/10] x86/bhi: x86/vmscape: Move LFENCE out of clear_bhb_loop() Pawan Gupta
2026-03-19 15:40 ` [PATCH v7 02/10] x86/bhi: Make clear_bhb_loop() effective on newer CPUs Pawan Gupta
2026-03-19 15:40 ` [PATCH v7 03/10] x86/bhi: Rename clear_bhb_loop() to clear_bhb_loop_nofence() Pawan Gupta
2026-03-23 14:44   ` Nikolay Borisov
2026-03-23 17:07     ` Pawan Gupta
2026-03-19 15:41 ` [PATCH v7 04/10] x86/vmscape: Rename x86_ibpb_exit_to_user to x86_predictor_flush_exit_to_user Pawan Gupta
2026-03-19 15:41 ` [PATCH v7 05/10] x86/vmscape: Move mitigation selection to a switch() Pawan Gupta
2026-03-19 15:41 ` [PATCH v7 06/10] x86/vmscape: Use write_ibpb() instead of indirect_branch_prediction_barrier() Pawan Gupta
2026-03-19 15:41 ` [PATCH v7 07/10] x86/vmscape: Use static_call() for predictor flush Pawan Gupta
2026-03-19 16:56   ` bot+bpf-ci
2026-03-19 18:05     ` Pawan Gupta
2026-03-19 20:58   ` Peter Zijlstra
2026-03-19 21:34     ` Pawan Gupta
2026-03-19 21:44       ` Peter Zijlstra [this message]
2026-03-19 22:06         ` Pawan Gupta
2026-03-20  6:22         ` Pawan Gupta
2026-03-20  9:03           ` Peter Zijlstra
2026-03-20 11:31             ` Borislav Petkov
2026-03-20 18:23               ` Pawan Gupta
2026-03-24 20:00                 ` Borislav Petkov
2026-03-24 20:14                   ` Pawan Gupta
2026-03-19 15:42 ` [PATCH v7 08/10] x86/vmscape: Deploy BHB clearing mitigation Pawan Gupta
2026-03-19 15:42 ` [PATCH v7 09/10] x86/vmscape: Fix conflicting attack-vector controls with =force Pawan Gupta
2026-03-19 15:42 ` [PATCH v7 10/10] x86/vmscape: Add cmdline vmscape=on to override attack vector controls Pawan Gupta
2026-03-19 16:40   ` bot+bpf-ci
2026-03-19 17:57     ` Pawan Gupta
