public inbox for linux-kernel@vger.kernel.org
From: Chao Gao <chao.gao@intel.com>
To: Sean Christopherson <seanjc@google.com>
Cc: <kvm@vger.kernel.org>, <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH] KVM: VMX: Flush shadow VMCS on emergency reboot
Date: Thu, 9 Oct 2025 13:36:44 +0800	[thread overview]
Message-ID: <aOdJ7JZWsfanX0JV@intel.com> (raw)
In-Reply-To: <aObtM-7S0UfIRreU@google.com>

On Wed, Oct 08, 2025 at 04:01:07PM -0700, Sean Christopherson wrote:
>Trimmed Cc: to lists, as this is basically off-topic, but I thought you might
>be amused :-)
>
>On Thu, Apr 10, 2025, Sean Christopherson wrote:
>> On Mon, Mar 24, 2025, Chao Gao wrote:
>> > Ensure the shadow VMCS cache is evicted during an emergency reboot to
>> > prevent potential memory corruption if the cache is evicted after reboot.
>> 
>> I don't suppose Intel would want to go on record and state what CPUs would actually
>> be affected by this bug.  My understanding is that Intel has never shipped a CPU
>> that caches shadow VMCS state.

Yes. Shadow VMCSes are never cached. But that is an implementation detail:
per the SDM, software is required to VMCLEAR any shadow VMCS that was made
active, in order to be forward compatible.
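To illustrate the rule, here is a userspace model (not real VMX code; all
names are made up for illustration): a VMCS that has been made active via
VMPTRLD must be VMCLEARed before its backing memory can be safely reused,
even if the CPU never actually cached it.

```c
#include <stdbool.h>

/*
 * Userspace model of the SDM rule: track whether a (shadow) VMCS was
 * made active and whether it has since been VMCLEARed.  These names
 * are illustrative, not KVM's.
 */
struct model_vmcs {
	bool active;	/* made active via VMPTRLD */
	bool cleared;	/* VMCLEAR executed since activation */
};

static void model_vmptrld(struct model_vmcs *v)
{
	v->active = true;
	v->cleared = false;
}

static void model_vmclear(struct model_vmcs *v)
{
	v->active = false;
	v->cleared = true;
}

/* Memory may be reused only if the VMCS was never activated, or was cleared. */
static bool model_safe_to_free(const struct model_vmcs *v)
{
	return !v->active || v->cleared;
}
```

Note the model makes no reference to a "launched" state: activation alone
(VMPTRLD) is what creates the VMCLEAR obligation, which is exactly why the
emergency-reboot path can't rely on the launched check.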

>> 
>> On a very related topic, doesn't SPR+ now flush the VMCS caches on VMXOFF?  If
>> that's going to be the architectural behavior going forward, will that behavior
>> be enumerated to software?  Regardless of whether there's software enumeration,
>> I would like to have the emergency disable path depend on that behavior.  In part
>> to gain confidence that SEAM VMCSes won't screw over kdump, but also in light of
>> this bug.

Yes. The current implementation is that CPUs with SEAM support flush _all_
VMCS caches on VMXOFF. But the architectural behavior is trending toward
having CPUs that enumerate IA32_VMX_PROCBASED_CTRLS3[5] as 1 flush _SEAM_
VMCS caches on VMXOFF.
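A rough sketch of what the software-side check could look like (hypothetical
helper and macro names; in a kernel the control value would come from reading
the VMX capability MSR, which is elided here):

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical sketch: test bit 5 of the tertiary processor-based
 * VM-execution controls (IA32_VMX_PROCBASED_CTRLS3) to see whether the
 * CPU enumerates "flush SEAM VMCS caches on VMXOFF".  The macro name is
 * made up for illustration; the bit position is as stated above.
 */
#define VMX_PROCBASED_CTRLS3_FLUSH_SEAM_ON_VMXOFF	(1ULL << 5)

static bool flushes_seam_vmcs_on_vmxoff(uint64_t procbased_ctrls3)
{
	return procbased_ctrls3 & VMX_PROCBASED_CTRLS3_FLUSH_SEAM_ON_VMXOFF;
}
```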

>
>Apparently I completely purged it from my memory, but while poking through an
>internal branch related to moving VMXON out of KVM, I came across this:
>
>--
>Author:     Sean Christopherson <seanjc@google.com>
>AuthorDate: Wed Jan 17 16:19:28 2024 -0800
>Commit:     Sean Christopherson <seanjc@google.com>
>CommitDate: Fri Jan 26 13:16:31 2024 -0800
>
>    KVM: VMX: VMCLEAR loaded shadow VMCSes on kexec()
>    
>    Add a helper to VMCLEAR _all_ loaded VMCSes in a loaded_vmcs pair, and use
>    it when doing VMCLEAR before kexec() after a crash to fix a (likely benign)
>    bug where KVM neglects to VMCLEAR loaded shadow VMCSes.  The bug is likely
>    benign as existing Intel CPUs don't insert shadow VMCSes into the VMCS
>    cache, i.e. shadow VMCSes can't be evicted since they're never cached, and
>    thus won't clobber memory in the new kernel.
>
>--
>
>At least my reaction was more or less the same both times?
>
>> If all past CPUs never cache shadow VMCS state, and all future CPUs flush the
>> caches on VMXOFF, then this is a glorified NOP, and thus probably shouldn't be
>> tagged for stable.
>> 
>> > This issue was identified through code inspection, as __loaded_vmcs_clear()
>> > flushes both the normal VMCS and the shadow VMCS.
>> > 
>> > Avoid checking the "launched" state during an emergency reboot, unlike the
>> > behavior in __loaded_vmcs_clear(). This is important because reboot NMIs
>> > can interfere with operations like copy_shadow_to_vmcs12(), where shadow
>> > VMCSes are loaded directly using VMPTRLD. In such cases, if NMIs occur
>> > right after the VMCS load, the shadow VMCSes will be active but the
>> > "launched" state may not be set.
>> > 
>> > Signed-off-by: Chao Gao <chao.gao@intel.com>
>> > ---
>> >  arch/x86/kvm/vmx/vmx.c | 5 ++++-
>> >  1 file changed, 4 insertions(+), 1 deletion(-)
>> > 
>> > diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
>> > index b70ed72c1783..dccd1c9939b8 100644
>> > --- a/arch/x86/kvm/vmx/vmx.c
>> > +++ b/arch/x86/kvm/vmx/vmx.c
>> > @@ -769,8 +769,11 @@ void vmx_emergency_disable_virtualization_cpu(void)
>> >  		return;
>> >  
>> >  	list_for_each_entry(v, &per_cpu(loaded_vmcss_on_cpu, cpu),
>> > -			    loaded_vmcss_on_cpu_link)
>> > +			    loaded_vmcss_on_cpu_link) {
>> >  		vmcs_clear(v->vmcs);
>> > +		if (v->shadow_vmcs)
>> > +			vmcs_clear(v->shadow_vmcs);
>> > +	}
>> >  
>> >  	kvm_cpu_vmxoff();
>> >  }
>> > -- 
>> > 2.46.1
>> > 


Thread overview: 15+ messages
2025-03-24 14:08 [PATCH] KVM: VMX: Flush shadow VMCS on emergency reboot Chao Gao
2025-03-31 23:17 ` Huang, Kai
2025-04-10 21:55 ` Sean Christopherson
2025-04-11  8:46   ` Chao Gao
2025-04-11 16:57     ` Sean Christopherson
2025-04-14  6:24       ` Xiaoyao Li
2025-04-14 12:15       ` Huang, Kai
2025-04-14 13:18       ` Chao Gao
2025-04-15  1:03         ` Sean Christopherson
2025-04-15  1:55           ` Chao Gao
2025-10-08 23:01   ` Sean Christopherson
2025-10-09  5:36     ` Chao Gao [this message]
2025-10-10  1:16     ` dan.j.williams
2025-10-10 21:22       ` VMXON for TDX (was: Re: [PATCH] KVM: VMX: Flush shadow VMCS on emergency reboot) Sean Christopherson
2025-05-02 21:51 ` [PATCH] KVM: VMX: Flush shadow VMCS on emergency reboot Sean Christopherson
