Kernel KVM virtualization development
From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
To: Sean Christopherson <seanjc@google.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Borislav Petkov <bp@alien8.de>,
	Peter Zijlstra <peterz@infradead.org>,
	Josh Poimboeuf <jpoimboe@kernel.org>,
	kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
	Brendan Jackman <jackmanb@google.com>
Subject: Re: [PATCH v4 4/8] KVM: VMX: Handle MMIO Stale Data in VM-Enter assembly via ALTERNATIVES_2
Date: Fri, 31 Oct 2025 20:41:32 -0700
Message-ID: <20251101034132.2qi5b2ysld6fi2cq@desk>
In-Reply-To: <20251031235524.cuwrx4qys46xnpjr@desk>

On Fri, Oct 31, 2025 at 04:55:37PM -0700, Pawan Gupta wrote:
> On Thu, Oct 30, 2025 at 05:30:36PM -0700, Sean Christopherson wrote:
> ...
> > diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
> > index 1f99a98a16a2..61a809790a58 100644
> > --- a/arch/x86/kvm/vmx/vmenter.S
> > +++ b/arch/x86/kvm/vmx/vmenter.S
> > @@ -71,6 +71,7 @@
> >   * @regs:	unsigned long * (to guest registers)
> >   * @flags:	VMX_RUN_VMRESUME:	use VMRESUME instead of VMLAUNCH
> >   *		VMX_RUN_SAVE_SPEC_CTRL: save guest SPEC_CTRL into vmx->spec_ctrl
> > + *		VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO: vCPU can access host MMIO
> >   *
> >   * Returns:
> >   *	0 on VM-Exit, 1 on VM-Fail
> > @@ -137,6 +138,12 @@ SYM_FUNC_START(__vmx_vcpu_run)
> >  	/* Load @regs to RAX. */
> >  	mov (%_ASM_SP), %_ASM_AX
> >  
> > +	/* Stash "clear for MMIO" in EFLAGS.ZF (used below). */
> > +	ALTERNATIVE_2 "",								\
> > +		      __stringify(test $VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO, %ebx), 	\
> > +		      X86_FEATURE_CLEAR_CPU_BUF_MMIO,					\
> > +		      "", X86_FEATURE_CLEAR_CPU_BUF_VM
> > +
> >  	/* Check if vmlaunch or vmresume is needed */
> >  	bt   $VMX_RUN_VMRESUME_SHIFT, %ebx
> >  
> > @@ -161,7 +168,12 @@ SYM_FUNC_START(__vmx_vcpu_run)
> >  	mov VCPU_RAX(%_ASM_AX), %_ASM_AX
> >  
> >  	/* Clobbers EFLAGS.ZF */
> > -	VM_CLEAR_CPU_BUFFERS
> > +	ALTERNATIVE_2 "",							\
> > +		      __stringify(jz .Lskip_clear_cpu_buffers;			\
> > +				  CLEAR_CPU_BUFFERS_SEQ;			\
> > +				  .Lskip_clear_cpu_buffers:),			\
> > +		      X86_FEATURE_CLEAR_CPU_BUF_MMIO,				\
> > +		      __CLEAR_CPU_BUFFERS, X86_FEATURE_CLEAR_CPU_BUF_VM
> 
> Another way to write this could be:
> 
> 	ALTERNATIVE_2 "jmp .Lskip_clear_cpu_buffers",					\
> 		      "jz  .Lskip_clear_cpu_buffers", X86_FEATURE_CLEAR_CPU_BUF_MMIO,	\
> 		      "",			      X86_FEATURE_CLEAR_CPU_BUF_VM
> 
> 	CLEAR_CPU_BUFFERS_SEQ
> .Lskip_clear_cpu_buffers:
> 
> With this, jmp;verw would show up in the disassembly on unaffected CPUs; I
> don't know how big a problem that is. OTOH, I find this easier to understand.

As far as execution is concerned, it basically boils down to 9 NOPs:

54:	48 8b 00             	mov    (%rax),%rax
				---
57:	90                   	nop
58:	90                   	nop
59:	90                   	nop
5a:	90                   	nop
5b:	90                   	nop
5c:	90                   	nop
5d:	90                   	nop
5e:	90                   	nop
5f:	90                   	nop
				---
60:	73 08                	jae

versus 1 near jump:

54:	48 8b 00             	mov    (%rax),%rax
				---
57:	eb 0b                	jmp    ffffffff81fa1064
59:	90                   	nop
5a:	90                   	nop
5b:	90                   	nop
5c:	90                   	nop
5d:	0f 00 2d dc ef 05 ff 	verw   -0xfa1024(%rip)
				---
64:	73 08                	jae

I can't tell which one is better.
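For readers less familiar with the alternatives mechanism, the selection that ALTERNATIVE_2 performs at boot-patch time can be sketched in plain C. This is only an illustrative model, not the kernel's actual patching code; the feature-flag names and sequence strings are stand-ins for X86_FEATURE_CLEAR_CPU_BUF_MMIO, X86_FEATURE_CLEAR_CPU_BUF_VM, and the code sequences discussed above:

```c
#include <assert.h>
#include <string.h>

/* Illustrative stand-ins for the two X86_FEATURE_* flags. */
enum {
	FEAT_CLEAR_BUF_MMIO = 1 << 0,	/* clear buffers only for MMIO-capable vCPUs */
	FEAT_CLEAR_BUF_VM   = 1 << 1,	/* clear buffers on every VM-Enter */
};

/*
 * Model of ALTERNATIVE_2 "orig", "alt1", feat1, "alt2", feat2:
 * entries are applied in order, so when both features are set the
 * later one (feat2) ends up in the patched code.  Variants shorter
 * than the longest one are padded with NOPs, which is why the
 * original code shows up as 9 NOPs on unaffected CPUs.
 */
static const char *pick_alternative(unsigned int feats)
{
	const char *seq = "nop*9";			/* original: no mitigation */

	if (feats & FEAT_CLEAR_BUF_MMIO)
		seq = "jz skip; verw; skip:";		/* conditional clear */
	if (feats & FEAT_CLEAR_BUF_VM)
		seq = "verw";				/* unconditional clear */
	return seq;
}
```

So on unaffected CPUs neither flag is set and the whole slot degrades to NOPs, matching the first disassembly above.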

Thread overview: 57+ messages
2025-10-31  0:30 [PATCH v4 0/8] x86/bugs: KVM: L1TF and MMIO Stale Data cleanups Sean Christopherson
2025-10-31  0:30 ` [PATCH v4 1/8] x86/bugs: Use VM_CLEAR_CPU_BUFFERS in VMX as well Sean Christopherson
2025-10-31 11:30   ` Brendan Jackman
2025-11-01  1:46     ` Pawan Gupta
2025-11-03 18:18   ` Pawan Gupta
2025-11-07 19:05     ` Borislav Petkov
2025-11-11 22:03       ` Sean Christopherson
2025-11-12 10:23         ` Borislav Petkov
2025-11-12 18:19           ` Pawan Gupta
2025-11-12 18:17       ` Pawan Gupta
2025-11-07 18:59   ` Borislav Petkov
2025-11-12 18:02     ` Pawan Gupta
2025-10-31  0:30 ` [PATCH v4 2/8] x86/bugs: Decouple ALTERNATIVE usage from VERW macro definition Sean Christopherson
2025-10-31 11:37   ` Brendan Jackman
2025-10-31 17:43     ` Sean Christopherson
2025-11-01  4:13   ` Pawan Gupta
2025-11-03 17:00     ` Sean Christopherson
2025-11-03 17:40       ` Pawan Gupta
2025-11-12 12:15       ` Borislav Petkov
2025-10-31  0:30 ` [PATCH v4 3/8] x86/bugs: Use an X86_FEATURE_xxx flag for the MMIO Stale Data mitigation Sean Christopherson
2025-10-31 11:44   ` Brendan Jackman
2025-10-31 21:47     ` Sean Christopherson
2025-11-03 10:49       ` Brendan Jackman
2025-10-31 22:28   ` Pawan Gupta
2025-10-31 22:37     ` Sean Christopherson
2025-10-31 22:50       ` Pawan Gupta
2025-11-12 14:46   ` Borislav Petkov
2025-11-12 18:24     ` Pawan Gupta
2025-10-31  0:30 ` [PATCH v4 4/8] KVM: VMX: Handle MMIO Stale Data in VM-Enter assembly via ALTERNATIVES_2 Sean Christopherson
2025-10-31 12:32   ` Brendan Jackman
2025-10-31 21:44     ` Sean Christopherson
2025-11-03 10:51       ` Brendan Jackman
2025-10-31 23:55   ` Pawan Gupta
2025-11-01  3:41     ` Pawan Gupta [this message]
2025-11-03  9:17     ` Peter Zijlstra
2025-11-03 17:37       ` Pawan Gupta
2025-11-03 17:46   ` Pawan Gupta
2025-11-12 16:41   ` Borislav Petkov
2025-11-12 17:15     ` Sean Christopherson
2025-11-12 18:38       ` Borislav Petkov
2025-11-12 20:30         ` Sean Christopherson
2025-11-12 23:01           ` Pawan Gupta
2025-11-13 14:20           ` Borislav Petkov
2025-11-13 22:01             ` Sean Christopherson
2025-10-31  0:30 ` [PATCH v4 5/8] x86/bugs: KVM: Move VM_CLEAR_CPU_BUFFERS into SVM as SVM_CLEAR_CPU_BUFFERS Sean Christopherson
2025-10-31 12:34   ` Brendan Jackman
2025-11-13 15:03   ` Borislav Petkov
2025-11-13 15:37     ` Sean Christopherson
2025-11-13 16:19       ` Borislav Petkov
2025-10-31  0:30 ` [PATCH v4 6/8] KVM: VMX: Bundle all L1 data cache flush mitigation code together Sean Christopherson
2025-11-03 18:26   ` Pawan Gupta
2025-10-31  0:30 ` [PATCH v4 7/8] KVM: VMX: Disable L1TF L1 data cache flush if CONFIG_CPU_MITIGATIONS=n Sean Christopherson
2025-10-31 12:37   ` Brendan Jackman
2025-10-31  0:30 ` [PATCH v4 8/8] KVM: x86: Unify L1TF flushing under per-CPU variable Sean Christopherson
2025-10-31 11:22 ` [PATCH v4 0/8] x86/bugs: KVM: L1TF and MMIO Stale Data cleanups Brendan Jackman
2025-10-31 17:36   ` Sean Christopherson
2025-11-04 10:58     ` Brendan Jackman
