From: Sean Christopherson <seanjc@google.com>
To: Uros Bizjak <ubizjak@gmail.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>,
Thomas Gleixner <tglx@linutronix.de>,
Borislav Petkov <bp@alien8.de>,
Peter Zijlstra <peterz@infradead.org>,
Josh Poimboeuf <jpoimboe@kernel.org>,
kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
Pawan Gupta <pawan.kumar.gupta@linux.intel.com>,
Brendan Jackman <jackmanb@google.com>
Subject: Re: [PATCH v5 1/9] KVM: VMX: Use on-stack copy of @flags in __vmx_vcpu_run()
Date: Tue, 18 Nov 2025 16:29:16 -0800
Message-ID: <aR0PXEyP_OKuiQOO@google.com>
In-Reply-To: <6908a285-b7b7-457a-baaf-fd01c55fe571@gmail.com>
On Fri, Nov 14, 2025, Uros Bizjak wrote:
> On 11/14/25 00:37, Sean Christopherson wrote:
> > diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
> > index 574159a84ee9..93cf2ca7919a 100644
> > --- a/arch/x86/kvm/vmx/vmenter.S
> > +++ b/arch/x86/kvm/vmx/vmenter.S
> > @@ -92,7 +92,7 @@ SYM_FUNC_START(__vmx_vcpu_run)
> > /* Save @vmx for SPEC_CTRL handling */
> > push %_ASM_ARG1
> > - /* Save @flags for SPEC_CTRL handling */
> > + /* Save @flags (used for VMLAUNCH vs. VMRESUME and mitigations). */
> > push %_ASM_ARG3
> > /*
> > @@ -101,9 +101,6 @@ SYM_FUNC_START(__vmx_vcpu_run)
> > */
> > push %_ASM_ARG2
> > - /* Copy @flags to EBX, _ASM_ARG3 is volatile. */
> > - mov %_ASM_ARG3L, %ebx
> > -
> > lea (%_ASM_SP), %_ASM_ARG2
> > call vmx_update_host_rsp
> > @@ -147,9 +144,6 @@ SYM_FUNC_START(__vmx_vcpu_run)
> > /* Load @regs to RAX. */
> > mov (%_ASM_SP), %_ASM_AX
> > - /* Check if vmlaunch or vmresume is needed */
> > - bt $VMX_RUN_VMRESUME_SHIFT, %ebx
> > -
> > /* Load guest registers. Don't clobber flags. */
> > mov VCPU_RCX(%_ASM_AX), %_ASM_CX
> > mov VCPU_RDX(%_ASM_AX), %_ASM_DX
> > @@ -173,8 +167,9 @@ SYM_FUNC_START(__vmx_vcpu_run)
> > /* Clobbers EFLAGS.ZF */
> > CLEAR_CPU_BUFFERS
> > - /* Check EFLAGS.CF from the VMX_RUN_VMRESUME bit test above. */
> > - jnc .Lvmlaunch
> > + /* Check @flags to see if vmlaunch or vmresume is needed. */
> > + testl $VMX_RUN_VMRESUME, WORD_SIZE(%_ASM_SP)
> > + jz .Lvmlaunch
>
>
> You could use TESTB instead of TESTL in the above code to save 3 bytes
> of code and some memory bandwidth.
>
> The assembler will report unwanted truncation if VMX_RUN_VMRESUME ever
> becomes larger than 255.
Unfortunately, gcc's truncation warning isn't escalated to an error with -Werror,
e.g. with KVM_WERROR=y. And AFAICT clang's assembler doesn't warn at all and
happily generates garbage. E.g. with VMX_RUN_VMRESUME relocated to bit 10, clang
generates this without a warning:
33c: f6 44 24 08 00 testb $0x0,0x8(%rsp)
341: 74 08 je 34b <__vmx_vcpu_run+0x9b>
343: 0f 01 c3 vmresume
versus the expected:
33c: f7 44 24 08 00 04 00 testl $0x400,0x8(%rsp)
343: 00
344: 74 08 je 34e <__vmx_vcpu_run+0x9e>
346: 0f 01 c3 vmresume
So for now at least, I'll stick with testl.
Thread overview: 20+ messages
2025-11-13 23:37 [PATCH v5 0/9] x86/bugs: KVM: L1TF and MMIO Stale Data cleanups Sean Christopherson
2025-11-13 23:37 ` [PATCH v5 1/9] KVM: VMX: Use on-stack copy of @flags in __vmx_vcpu_run() Sean Christopherson
2025-11-14 12:36 ` Brendan Jackman
2025-11-14 15:06 ` Uros Bizjak
2025-11-19 0:29 ` Sean Christopherson [this message]
2025-11-14 16:40 ` Borislav Petkov
2025-11-13 23:37 ` [PATCH v5 2/9] x86/bugs: Use VM_CLEAR_CPU_BUFFERS in VMX as well Sean Christopherson
2025-11-14 12:40 ` Brendan Jackman
2025-11-13 23:37 ` [PATCH v5 3/9] x86/bugs: Decouple ALTERNATIVE usage from VERW macro definition Sean Christopherson
2025-11-17 10:11 ` Borislav Petkov
2025-11-17 15:33 ` Sean Christopherson
2025-11-18 10:32 ` Borislav Petkov
2025-11-13 23:37 ` [PATCH v5 4/9] x86/bugs: Use an x86 feature to track the MMIO Stale Data mitigation Sean Christopherson
2025-11-13 23:37 ` [PATCH v5 5/9] KVM: VMX: Handle MMIO Stale Data in VM-Enter assembly via ALTERNATIVES_2 Sean Christopherson
2025-11-14 12:55 ` Brendan Jackman
2025-11-13 23:37 ` [PATCH v5 6/9] x86/bugs: KVM: Move VM_CLEAR_CPU_BUFFERS into SVM as SVM_CLEAR_CPU_BUFFERS Sean Christopherson
2025-11-13 23:37 ` [PATCH v5 7/9] KVM: VMX: Bundle all L1 data cache flush mitigation code together Sean Christopherson
2025-11-13 23:37 ` [PATCH v5 8/9] KVM: VMX: Disable L1TF L1 data cache flush if CONFIG_CPU_MITIGATIONS=n Sean Christopherson
2025-11-13 23:37 ` [PATCH v5 9/9] KVM: x86: Unify L1TF flushing under per-CPU variable Sean Christopherson
2025-11-21 18:55 ` [PATCH v5 0/9] x86/bugs: KVM: L1TF and MMIO Stale Data cleanups Sean Christopherson