From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
To: x86@kernel.org, David Kaplan <david.kaplan@amd.com>,
Nikolay Borisov <nik.borisov@suse.com>,
"H. Peter Anvin" <hpa@zytor.com>,
Josh Poimboeuf <jpoimboe@kernel.org>,
Sean Christopherson <seanjc@google.com>,
Paolo Bonzini <pbonzini@redhat.com>,
Borislav Petkov <bp@alien8.de>,
Dave Hansen <dave.hansen@linux.intel.com>
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
Asit Mallick <asit.k.mallick@intel.com>,
Tao Zhang <tao1.zhang@intel.com>
Subject: Re: [PATCH v4 02/11] x86/bhi: Move the BHB sequence to a macro for reuse
Date: Mon, 24 Nov 2025 16:21:30 -0800 [thread overview]
Message-ID: <20251125002130.2dfsa7buv4aps5js@desk> (raw)
In-Reply-To: <20251119-vmscape-bhb-v4-2-1adad4e69ddc@linux.intel.com>
On Wed, Nov 19, 2025 at 10:18:04PM -0800, Pawan Gupta wrote:
> In preparation to make clear_bhb_loop() work for CPUs with larger BHB, move
> the sequence to a macro. This will allow setting the depth of BHB-clearing
> easily via arguments.
>
> No functional change intended.
>
> Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> ---
> arch/x86/entry/entry_64.S | 37 +++++++++++++++++++++++--------------
> 1 file changed, 23 insertions(+), 14 deletions(-)
>
> diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
> index 886f86790b4467347031bc27d3d761d5cc286da1..a62dbc89c5e75b955ebf6d84f20d157d4bce0253 100644
> --- a/arch/x86/entry/entry_64.S
> +++ b/arch/x86/entry/entry_64.S
> @@ -1499,11 +1499,6 @@ SYM_CODE_END(rewind_stack_and_make_dead)
> * from the branch history tracker in the Branch Predictor, therefore removing
> * user influence on subsequent BTB lookups.
> *
> - * It should be used on parts prior to Alder Lake. Newer parts should use the
> - * BHI_DIS_S hardware control instead. If a pre-Alder Lake part is being
> - * virtualized on newer hardware the VMM should protect against BHI attacks by
> - * setting BHI_DIS_S for the guests.
> - *
> * CALLs/RETs are necessary to prevent Loop Stream Detector(LSD) from engaging
> * and not clearing the branch history. The call tree looks like:
> *
> @@ -1532,10 +1527,7 @@ SYM_CODE_END(rewind_stack_and_make_dead)
> * Note, callers should use a speculation barrier like LFENCE immediately after
> * a call to this function to ensure BHB is cleared before indirect branches.
> */
> -SYM_FUNC_START(clear_bhb_loop)
> - ANNOTATE_NOENDBR
> - push %rbp
> - mov %rsp, %rbp
> +.macro CLEAR_BHB_LOOP_SEQ
> movl $5, %ecx
> ANNOTATE_INTRA_FUNCTION_CALL
> call 1f
> @@ -1545,15 +1537,16 @@ SYM_FUNC_START(clear_bhb_loop)
> * Shift instructions so that the RET is in the upper half of the
> * cacheline and don't take the slowpath to its_return_thunk.
> */
> - .skip 32 - (.Lret1 - 1f), 0xcc
> + .skip 32 - (.Lret1_\@ - 1f), 0xcc
> ANNOTATE_INTRA_FUNCTION_CALL
> 1: call 2f
> -.Lret1: RET
> +.Lret1_\@:
> + RET
> .align 64, 0xcc
> /*
> - * As above shift instructions for RET at .Lret2 as well.
> + * As above shift instructions for RET at .Lret2_\@ as well.
> *
> - * This should be ideally be: .skip 32 - (.Lret2 - 2f), 0xcc
> + * This should ideally be: .skip 32 - (.Lret2_\@ - 2f), 0xcc
> * but some Clang versions (e.g. 18) don't like this.
> */
> .skip 32 - 18, 0xcc
> @@ -1564,8 +1557,24 @@ SYM_FUNC_START(clear_bhb_loop)
> jnz 3b
> sub $1, %ecx
> jnz 1b
> -.Lret2: RET
> +.Lret2_\@:
> + RET
> 5:
> +.endm
> +
> +/*
> + * This should be used on parts prior to Alder Lake. Newer parts should use the
> + * BHI_DIS_S hardware control instead. If a pre-Alder Lake part is being
> + * virtualized on newer hardware the VMM should protect against BHI attacks by
> + * setting BHI_DIS_S for the guests.
> + */
> +SYM_FUNC_START(clear_bhb_loop)
> + ANNOTATE_NOENDBR
> + push %rbp
> + mov %rsp, %rbp
> +
> + CLEAR_BHB_LOOP_SEQ
> +
> pop %rbp
> RET
> SYM_FUNC_END(clear_bhb_loop)
Dropping this and the next patch; they are not needed with globals for the
BHB loop count.
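
For context on the macro conversion above: GNU as expands `\@` to a counter
of macro invocations, which is what makes labels like `.Lret1_\@` safe to
reuse when the macro is expanded more than once. A minimal, hypothetical
sketch (not the actual patch) of the same mechanism, including a
configurable loop depth as the cover letter suggests:

```asm
/*
 * Hypothetical illustration of \@ and macro arguments in GNU as.
 * Each expansion of BHB_LOOP gets a unique .Ldone_\@ label, and the
 * loop count can be overridden per call site.
 */
.macro BHB_LOOP depth=5
	movl	$\depth, %ecx
1:	dec	%ecx
	jnz	1b
.Ldone_\@:
.endm

	BHB_LOOP		/* default depth of 5 */
	BHB_LOOP depth=8	/* labels stay unique via \@ */
```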
Thread overview: 63+ messages
2025-11-20 6:17 [PATCH v4 00/11] VMSCAPE optimization for BHI variant Pawan Gupta
2025-11-20 6:17 ` [PATCH v4 01/11] x86/bhi: x86/vmscape: Move LFENCE out of clear_bhb_loop() Pawan Gupta
2025-11-20 16:15 ` Nikolay Borisov
2025-11-20 16:56 ` Pawan Gupta
2025-11-20 16:58 ` Nikolay Borisov
2025-11-20 6:18 ` [PATCH v4 02/11] x86/bhi: Move the BHB sequence to a macro for reuse Pawan Gupta
2025-11-20 16:28 ` Nikolay Borisov
2025-11-20 16:57 ` Pawan Gupta
2025-11-25 0:21 ` Pawan Gupta [this message]
2025-11-20 6:18 ` [PATCH v4 03/11] x86/bhi: Make the depth of BHB-clearing configurable Pawan Gupta
2025-11-20 17:02 ` Nikolay Borisov
2025-11-20 6:18 ` [PATCH v4 04/11] x86/bhi: Make clear_bhb_loop() effective on newer CPUs Pawan Gupta
2025-11-21 12:33 ` Nikolay Borisov
2025-11-21 16:40 ` Dave Hansen
2025-11-21 16:45 ` Nikolay Borisov
2025-11-21 16:50 ` Dave Hansen
2025-11-21 18:16 ` Pawan Gupta
2025-11-21 18:42 ` Dave Hansen
2025-11-21 21:26 ` Pawan Gupta
2025-11-21 21:36 ` Dave Hansen
2025-11-24 19:21 ` Pawan Gupta
2025-11-22 11:05 ` david laight
2025-11-24 19:31 ` Pawan Gupta
2025-11-25 11:34 ` david laight
2025-12-04 1:40 ` Pawan Gupta
2025-12-04 9:15 ` david laight
2025-12-04 21:56 ` Pawan Gupta
2025-12-05 9:21 ` david laight
2025-11-26 19:23 ` Pawan Gupta
2026-03-06 21:00 ` Jim Mattson
2026-03-06 22:32 ` Pawan Gupta
2026-03-06 22:57 ` Jim Mattson
2026-03-06 23:29 ` Pawan Gupta
2026-03-07 0:35 ` Jim Mattson
2026-03-07 1:00 ` Pawan Gupta
2026-03-07 1:10 ` Jim Mattson
2026-03-07 2:41 ` Pawan Gupta
2026-03-07 5:05 ` Jim Mattson
2026-03-09 22:29 ` Pawan Gupta
2026-03-09 23:05 ` Jim Mattson
2026-03-10 0:00 ` Pawan Gupta
2026-03-10 0:08 ` Jim Mattson
2026-03-10 0:52 ` Pawan Gupta
2025-11-20 6:18 ` [PATCH v4 05/11] x86/vmscape: Rename x86_ibpb_exit_to_user to x86_predictor_flush_exit_to_user Pawan Gupta
2025-11-20 6:19 ` [PATCH v4 06/11] x86/vmscape: Move mitigation selection to a switch() Pawan Gupta
2025-11-21 14:27 ` Nikolay Borisov
2025-11-24 23:09 ` Pawan Gupta
2025-11-25 10:19 ` Nikolay Borisov
2025-11-25 17:45 ` Pawan Gupta
2025-11-20 6:19 ` [PATCH v4 07/11] x86/vmscape: Use write_ibpb() instead of indirect_branch_prediction_barrier() Pawan Gupta
2025-11-21 12:59 ` Nikolay Borisov
2025-11-20 6:19 ` [PATCH v4 08/11] x86/vmscape: Use static_call() for predictor flush Pawan Gupta
2025-11-20 6:19 ` [PATCH v4 09/11] x86/vmscape: Deploy BHB clearing mitigation Pawan Gupta
2025-11-21 14:18 ` Nikolay Borisov
2025-11-21 18:29 ` Pawan Gupta
2025-11-21 14:23 ` Nikolay Borisov
2025-11-21 18:41 ` Pawan Gupta
2025-11-21 18:53 ` Nikolay Borisov
2025-11-21 21:29 ` Pawan Gupta
2025-11-20 6:20 ` [PATCH v4 10/11] x86/vmscape: Override conflicting attack-vector controls with =force Pawan Gupta
2025-11-21 18:04 ` Nikolay Borisov
2025-11-20 6:20 ` [PATCH v4 11/11] x86/vmscape: Add cmdline vmscape=on to override attack vector controls Pawan Gupta
2025-11-25 11:41 ` Nikolay Borisov