From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 24 Mar 2026 11:18:13 -0700
From: Pawan Gupta
To: x86@kernel.org, Jon Kohler, Nikolay Borisov, "H. Peter Anvin", Josh Poimboeuf, David Kaplan, Sean Christopherson, Borislav Petkov, Dave Hansen, Peter Zijlstra, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, KP Singh, Jiri Olsa, "David S. Miller", David Laight, Andy Lutomirski, Thomas Gleixner, Ingo Molnar, David Ahern, Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song, John Fastabend, Stanislav Fomichev, Hao Luo, Paolo Bonzini, Jonathan Corbet
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Asit Mallick, Tao Zhang, bpf@vger.kernel.org, netdev@vger.kernel.org, linux-doc@vger.kernel.org
Subject: [PATCH v8 07/10] x86/vmscape: Use static_call() for predictor flush
Message-ID: <20260324-vmscape-bhb-v8-7-68bb524b3ab9@linux.intel.com>
References: <20260324-vmscape-bhb-v8-0-68bb524b3ab9@linux.intel.com>
In-Reply-To: <20260324-vmscape-bhb-v8-0-68bb524b3ab9@linux.intel.com>
X-Mailer: b4 0.15-dev

Adding more exit-to-userspace mitigation options for VMSCAPE would
normally require a growing series of checks to decide which mitigation
to apply. In this case the mitigation is performed by calling a
function that is selected once at boot, so additional feature flags and
repeated checks can be avoided by invoking the mitigating function
through a static_call().

Replace the flag-based mitigation selector with a static_call(). This
also frees up the X86_FEATURE_IBPB_EXIT_TO_USER feature bit.
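The flag-plus-check pattern being replaced, and the static_call() shape
that replaces it, can be sketched with a plain function pointer in user
space. This is a hypothetical analogue for illustration only: the names
below are stand-ins, not the kernel symbols, and the real static_call()
patches the call site itself rather than loading a pointer.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Analogue of DEFINE_STATIC_CALL_NULL(vmscape_predictor_flush, write_ibpb):
 * the target defaults to NULL, i.e. no mitigation selected. */
typedef void (*flush_fn)(void);
static flush_fn vmscape_predictor_flush_ptr = NULL;

static void write_ibpb_stub(void)	/* stands in for write_ibpb() */
{
	puts("predictor flush");
}

/* Analogue of static_call_cond(): a NULL target is silently skipped, so
 * the exit path needs no separate feature-flag check. */
static void flush_predictors_cond(void)
{
	if (vmscape_predictor_flush_ptr)
		vmscape_predictor_flush_ptr();
}

/* Analogue of static_call_query() != NULL, as vmscape_mitigation_enabled()
 * does in this patch. */
static bool mitigation_enabled(void)
{
	return vmscape_predictor_flush_ptr != NULL;
}

/* Analogue of static_call_update() in vmscape_apply_mitigation(): select
 * the mitigating function once at boot. */
static void apply_mitigation(void)
{
	vmscape_predictor_flush_ptr = write_ibpb_stub;
}
```

After apply_mitigation() runs once, every later exit-to-user call goes
straight through flush_predictors_cond() with no flag test. The real
static_call() goes further: on x86 it rewrites the call instruction in
place, so the unselected (NULL) case costs essentially nothing.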
Suggested-by: Dave Hansen
Signed-off-by: Pawan Gupta
---
 arch/x86/Kconfig                     |  1 +
 arch/x86/include/asm/cpufeatures.h   |  2 +-
 arch/x86/include/asm/entry-common.h  |  7 +++----
 arch/x86/include/asm/nospec-branch.h |  3 +++
 arch/x86/include/asm/processor.h     |  1 +
 arch/x86/kernel/cpu/bugs.c           | 14 +++++++++++++-
 arch/x86/kvm/x86.c                   |  2 +-
 7 files changed, 23 insertions(+), 7 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index e2df1b147184..5b8def9ddb98 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -2720,6 +2720,7 @@ config MITIGATION_TSA
 config MITIGATION_VMSCAPE
 	bool "Mitigate VMSCAPE"
 	depends on KVM
+	depends on HAVE_STATIC_CALL
 	default y
 	help
 	  Enable mitigation for VMSCAPE attacks. VMSCAPE is a hardware security
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index dbe104df339b..b4d529dd6d30 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -503,7 +503,7 @@
 #define X86_FEATURE_TSA_SQ_NO		(21*32+11) /* AMD CPU not vulnerable to TSA-SQ */
 #define X86_FEATURE_TSA_L1_NO		(21*32+12) /* AMD CPU not vulnerable to TSA-L1 */
 #define X86_FEATURE_CLEAR_CPU_BUF_VM	(21*32+13) /* Clear CPU buffers using VERW before VMRUN */
-#define X86_FEATURE_IBPB_EXIT_TO_USER	(21*32+14) /* Use IBPB on exit-to-userspace, see VMSCAPE bug */
+/* Free */
 #define X86_FEATURE_ABMC		(21*32+15) /* Assignable Bandwidth Monitoring Counters */
 #define X86_FEATURE_MSR_IMM		(21*32+16) /* MSR immediate form instructions */
 #define X86_FEATURE_SGX_EUPDATESVN	(21*32+17) /* Support for ENCLS[EUPDATESVN] instruction */
diff --git a/arch/x86/include/asm/entry-common.h b/arch/x86/include/asm/entry-common.h
index 78b143673ca7..783e7cb50cae 100644
--- a/arch/x86/include/asm/entry-common.h
+++ b/arch/x86/include/asm/entry-common.h
@@ -4,6 +4,7 @@
 #include 
 #include 
+#include 

 #include 
 #include 
@@ -94,10 +95,8 @@ static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs,
 	 */
 	choose_random_kstack_offset(rdtsc());

-	/* Avoid unnecessary reads of 'x86_predictor_flush_exit_to_user' */
-	if (cpu_feature_enabled(X86_FEATURE_IBPB_EXIT_TO_USER) &&
-	    this_cpu_read(x86_predictor_flush_exit_to_user)) {
-		write_ibpb();
+	if (unlikely(this_cpu_read(x86_predictor_flush_exit_to_user))) {
+		static_call_cond(vmscape_predictor_flush)();
 		this_cpu_write(x86_predictor_flush_exit_to_user, false);
 	}
 }
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 0a55b1c64741..e45e49f1e0c9 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -542,6 +542,9 @@ static inline void indirect_branch_prediction_barrier(void)
 		:: "rax", "rcx", "rdx", "memory");
 }

+#include 
+DECLARE_STATIC_CALL(vmscape_predictor_flush, write_ibpb);
+
 /* The Intel SPEC CTRL MSR base value cache */
 extern u64 x86_spec_ctrl_base;
 DECLARE_PER_CPU(u64, x86_spec_ctrl_current);
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index a24c7805acdb..20ab4dd588c6 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -753,6 +753,7 @@ enum mds_mitigations {
 };

 extern bool gds_ucode_mitigated(void);
+extern bool vmscape_mitigation_enabled(void);

 /*
  * Make previous memory operations globally visible before
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 68e2df3e3bf5..a7dee7ec6ea3 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -144,6 +144,12 @@ EXPORT_SYMBOL_GPL(cpu_buf_idle_clear);
  */
 DEFINE_STATIC_KEY_FALSE(switch_mm_cond_l1d_flush);

+/*
+ * Controls how vmscape is mitigated e.g. via IBPB or BHB-clear
+ * sequence. This defaults to no mitigation.
+ */
+DEFINE_STATIC_CALL_NULL(vmscape_predictor_flush, write_ibpb);
+
 #undef pr_fmt
 #define pr_fmt(fmt)	"mitigations: " fmt

@@ -3129,8 +3135,14 @@ static void __init vmscape_update_mitigation(void)
 static void __init vmscape_apply_mitigation(void)
 {
 	if (vmscape_mitigation == VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER)
-		setup_force_cpu_cap(X86_FEATURE_IBPB_EXIT_TO_USER);
+		static_call_update(vmscape_predictor_flush, write_ibpb);
+}
+
+bool vmscape_mitigation_enabled(void)
+{
+	return !!static_call_query(vmscape_predictor_flush);
 }
+EXPORT_SYMBOL_FOR_KVM(vmscape_mitigation_enabled);

 #undef pr_fmt
 #define pr_fmt(fmt) fmt
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 45d7cfedc507..e204482e64f3 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -11463,7 +11463,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 	 * set for the CPU that actually ran the guest, and not the CPU that it
 	 * may migrate to.
 	 */
-	if (cpu_feature_enabled(X86_FEATURE_IBPB_EXIT_TO_USER))
+	if (vmscape_mitigation_enabled())
 		this_cpu_write(x86_predictor_flush_exit_to_user, true);

 	/*
-- 
2.34.1