public inbox for kvm@vger.kernel.org
From: Sean Christopherson <seanjc@google.com>
To: Fei Li <lifei.shirley@bytedance.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	 dave.hansen@linux.intel.com, liran.alon@oracle.com,
	hpa@zytor.com,  wanpeng.li@hotmail.com, kvm@vger.kernel.org,
	x86@kernel.org,  linux-kernel@vger.kernel.org,
	stable@vger.kernel.org
Subject: Re: [External] Re: [PATCH] KVM: x86: Latch INITs only in specific CPU states in KVM_SET_VCPU_EVENTS
Date: Mon, 8 Sep 2025 09:14:37 -0700	[thread overview]
Message-ID: <aL8A7WKHfAsAkPlh@google.com> (raw)
In-Reply-To: <c2979c40-0cf9-4238-9fb5-5cef6dd9f411@bytedance.com>

On Mon, Sep 08, 2025, Fei Li wrote:
> 
> On 9/5/25 10:59 PM, Fei Li wrote:
> > 
> > On 8/29/25 12:44 AM, Paolo Bonzini wrote:
> > > On Thu, Aug 28, 2025 at 5:13 PM Fei Li <lifei.shirley@bytedance.com>
> > > wrote:
> > > > Actually this is a bug triggered by one monitor tool in our production
> > > > environment. This monitor executes 'info registers -a' hmp at a fixed
> > > > frequency, even during VM startup process, which makes some AP stay in
> > > > KVM_MP_STATE_UNINITIALIZED forever. But this race only occurs with
> > > > extremely low probability, about 1~2 VM hangs per week.
> > > > 
> > > > Considering other emulators, like cloud-hypervisor and
> > > > firecracker maybe
> > > > also have similar potential race issues, I think KVM had better do some
> > > > handling. But anyway, I will check Qemu code to avoid such race. Thanks
> > > > for both of your comments. 🙂
> > > If you can check whether other emulators invoke KVM_SET_VCPU_EVENTS in
> > > similar cases, that of course would help understanding the situation
> > > better.
> > > 
> > > In QEMU, it is possible to delay KVM_GET_VCPU_EVENTS until after all
> > > vCPUs have halted.
> > > 
> > > Paolo
> > > 

Replacing the original message with a decently formatted version.  Please try to
format your emails for plain text; I assume something in your mail system inserted
a pile of line wraps and made the entire thing all but unreadable.

> > Three actors race with each other:
> >     [1] `info registers -a` hmp command, issued every 2ms
> >     [2] the AP(vcpu1) thread
> >     [3] BSP(vcpu0) sending INIT/SIPI
> > 
> > [1] for each cpu: cpu_synchronize_state
> >     if !qemu_thread_is_self()
> >         1. insert to cpu->work_list, and handle asynchronously
> >         2. then kick the AP(vcpu1) by sending SIG_IPI/SIGUSR1 signal
> > 
> > [2] KVM: KVM_RUN and then schedule() in kvm_vcpu_block() loop
> >          KVM: checks signal_pending, breaks loop and returns -EINTR
> >     Qemu: break kvm_cpu_exec loop, run
> >        1. qemu_wait_io_event()
> >           => process_queued_cpu_work => cpu->work_list.func()
> >              i.e. do_kvm_cpu_synchronize_state() callback
> >           => kvm_arch_get_registers
> >              => kvm_get_mp_state
> >                 /* KVM: get_mpstate also calls kvm_apic_accept_events() to handle INIT and SIPI */
> >                 => cpu->vcpu_dirty = true;
> >           // end of qemu_wait_io_event
> > 
> > [3] SeaBIOS: BSP enters non-root mode and runs reset_vector() in SeaBIOS.
> >     send INIT and then SIPI by writing APIC_ICR during smp_scan
> >     KVM: BSP(vcpu0) exits, then 
> >     => handle_apic_write
> >        => kvm_lapic_reg_write
> >           => kvm_apic_send_ipi to all APs
> >              => for each AP: __apic_accept_irq, e.g. for AP(vcpu1)
> >                 => case APIC_DM_INIT:
> >                    apic->pending_events = (1UL << KVM_APIC_INIT) (not kick the AP yet)
> >                 => case APIC_DM_STARTUP:
> >                    set_bit(KVM_APIC_SIPI, &apic->pending_events) (not kick the AP yet)
> > 
> > [2] 2. kvm_cpu_exec()
> >        => if (cpu->vcpu_dirty):
> >           => kvm_arch_put_registers
> >              => kvm_put_vcpu_events
> >                 KVM: kvm_vcpu_ioctl_x86_set_vcpu_events
> >                 => clear_bit(KVM_APIC_INIT, &vcpu->arch.apic->pending_events);
> >                    i.e. pending_events changes from 11b to 10b
> >                 // end of kvm_vcpu_ioctl_x86_set_vcpu_events

Qemu is clearly "putting" stale data here.

> >     Qemu: => after put_registers, cpu->vcpu_dirty = false;
> >           => kvm_vcpu_ioctl(cpu, KVM_RUN, 0)
> >              KVM: KVM_RUN
> >              => schedule() in kvm_vcpu_block() until Qemu's next SIG_IPI/SIGUSR1 signal
> >              /* But AP(vcpu1)'s mp_state will never change from KVM_MP_STATE_UNINITIALIZED
> >                 to KVM_MP_STATE_INIT_RECEIVED, even then to KVM_MP_STATE_RUNNABLE without
> >                 handling INIT inside kvm_apic_accept_events(), considering BSP will never
> >                 send INIT/SIPI again during smp_scan. Then AP(vcpu1) will never enter
> >                 non-root mode */
> > 
> > [3] SeaBIOS: BSP waits for CountCPUs == expected_cpus_count and loops forever
> >     i.e. the AP(vcpu1) stays at: EIP=0000fff0 && CS =f000 ffff0000
> >     and BSP(vcpu0) appears 100% utilized as it is in a while loop.

> By the way, this doesn't seem to be a Qemu bug, since calling "info
> registers -a" is allowed regardless of the vcpu state (including when the VM
> is in the bootloader). Thus the INIT should not be latched in this case.

No, this is a Qemu bug.  It is the VMM's responsibility to ensure it doesn't load
stale data into a vCPU.  There is simply no way for KVM to do the right thing,
because KVM can't know if userspace _wants_ to clobber events versus when userspace
is racing, as in this case.

E.g. the exact same race exists with NMIs.

  1. kvm_vcpu_ioctl_x86_get_vcpu_events() 
       vcpu->arch.nmi_queued   = 0
       vcpu->arch.nmi_pending  = 0
       kvm_vcpu_events.pending = 0

  2. kvm_inject_nmi()
       vcpu->arch.nmi_queued   = 1
       vcpu->arch.nmi_pending  = 0
       kvm_vcpu_events.pending = 0

  3. kvm_vcpu_ioctl_x86_set_vcpu_events()
       vcpu->arch.nmi_queued   = 0 // Moved to nmi_pending by process_nmi()
       vcpu->arch.nmi_pending  = 0 // Explicitly cleared after process_nmi() when KVM_VCPUEVENT_VALID_NMI_PENDING
       kvm_vcpu_events.pending = 0 // Stale data

But for NMI, Qemu avoids clobbering state thanks to a 15+ year old commit that
specifically avoids clobbering NMI *and SIPI* when not putting "reset" state:

  commit ea64305139357e89f58fc05ff5d48dc233d44d87
  Author:     Jan Kiszka <jan.kiszka@siemens.com>
  AuthorDate: Mon Mar 1 19:10:31 2010 +0100
  Commit:     Marcelo Tosatti <mtosatti@redhat.com>
  CommitDate: Thu Mar 4 00:29:30 2010 -0300

    KVM: x86: Restrict writeback of VCPU state
    
    Do not write nmi_pending, sipi_vector, and mpstate unless we at least go
    through a reset. And TSC as well as KVM wallclocks should only be
    written on full sync, otherwise we risk to drop some time on state
    read-modify-write.

    if (level >= KVM_PUT_RESET_STATE) {  <=========================
        events.flags |= KVM_VCPUEVENT_VALID_NMI_PENDING;
        if (env->mp_state == KVM_MP_STATE_SIPI_RECEIVED) {
            events.flags |= KVM_VCPUEVENT_VALID_SIPI_VECTOR;
        }
    }

Presumably "SMIs" need the same treatment, e.g.

diff --git a/target/i386/kvm/kvm.c b/target/i386/kvm/kvm.c
index 6c749d4ee8..f5bc0f9327 100644
--- a/target/i386/kvm/kvm.c
+++ b/target/i386/kvm/kvm.c
@@ -5033,7 +5033,7 @@ static int kvm_put_vcpu_events(X86CPU *cpu, int level)
 
     events.sipi_vector = env->sipi_vector;
 
-    if (has_msr_smbase) {
+    if (has_msr_smbase && level >= KVM_PUT_RESET_STATE) {
         events.flags |= KVM_VCPUEVENT_VALID_SMM;
         events.smi.smm = !!(env->hflags & HF_SMM_MASK);
         events.smi.smm_inside_nmi = !!(env->hflags2 & HF2_SMM_INSIDE_NMI_MASK);


Thread overview: 9+ messages
2025-08-27 15:27 [PATCH] KVM: x86: Latch INITs only in specific CPU states in KVM_SET_VCPU_EVENTS Fei Li
2025-08-27 16:01 ` Sean Christopherson
2025-08-27 16:08   ` Paolo Bonzini
2025-08-28 15:13     ` [External] " Fei Li
2025-08-28 16:44       ` Paolo Bonzini
2025-09-05 14:59         ` Fei Li
2025-09-08 14:55           ` Fei Li
2025-09-08 16:14             ` Sean Christopherson [this message]
2025-09-09  4:15               ` Fei Li
