public inbox for kvm@vger.kernel.org
* [PATCH v2 1/1] KVM: VMX: configure SVI during runtime APICv activation
@ 2025-11-10  6:32 Dongli Zhang
  2025-11-10  7:08 ` Chao Gao
  2025-11-12 14:47 ` Sean Christopherson
  0 siblings, 2 replies; 9+ messages in thread
From: Dongli Zhang @ 2025-11-10  6:32 UTC (permalink / raw)
  To: kvm
  Cc: x86, linux-kernel, chao.gao, seanjc, pbonzini, tglx, mingo, bp,
	dave.hansen, hpa, joe.jin, alejandro.j.jimenez

APICv (apic->apicv_active) can be activated or deactivated at runtime, for
instance due to APICv inhibit reasons. Intel VMX employs different
mechanisms to virtualize the LAPIC depending on whether APICv is active.

When APICv is activated at runtime, GUEST_INTR_STATUS is used to configure
and report the current pending IRR and ISR states. Unless a specific vector
is explicitly included in EOI_EXIT_BITMAP, its EOI will not be trapped to
KVM. Intel VMX automatically clears the corresponding ISR bit based on the
GUEST_INTR_STATUS.SVI field.

When APICv is deactivated at runtime, the VM_ENTRY_INTR_INFO_FIELD is used
to specify the next interrupt vector to invoke upon VM-entry. The
VMX IDT_VECTORING_INFO_FIELD is used to report un-invoked vectors on
VM-exit. EOIs are always trapped to KVM, so the software can manually clear
pending ISR bits.

There are scenarios in which, with APICv activated at runtime, a
guest-issued EOI fails to clear the pending ISR bit.

Taking vector 236 as an example, here is one scenario.

1. Suppose APICv is inactive. Vector 236 is pending in the IRR.
2. To handle KVM_REQ_EVENT, KVM moves vector 236 from the IRR to the ISR,
and configures the VM_ENTRY_INTR_INFO_FIELD via vmx_inject_irq().
3. After VM-entry, vector 236 is invoked through the guest IDT. At this
point, the data in VM_ENTRY_INTR_INFO_FIELD is no longer valid. The guest
interrupt handler for vector 236 is invoked.
4. Suppose a VM exit occurs very early in the guest interrupt handler,
before the EOI is issued.
5. Nothing is reported through the IDT_VECTORING_INFO_FIELD because
vector 236 has already been invoked in the guest.
6. Now, suppose APICv is activated. Before the next VM-entry, KVM calls
kvm_vcpu_update_apicv() to activate APICv.
7. Unfortunately, GUEST_INTR_STATUS.SVI is not configured, although
vector 236 is still pending in the ISR.
8. After VM-entry, the guest finally issues the EOI for vector 236.
However, because SVI is not configured, vector 236 is not cleared.
9. ISR is stalled forever on vector 236.

Here is another scenario.

1. Suppose APICv is inactive. Vector 236 is pending in the IRR.
2. To handle KVM_REQ_EVENT, KVM moves vector 236 from the IRR to the ISR,
and configures the VM_ENTRY_INTR_INFO_FIELD via vmx_inject_irq().
3. A VM-exit occurs immediately after the next VM-entry. Vector 236 is not
invoked through the guest IDT; instead, it is saved to the
IDT_VECTORING_INFO_FIELD during the VM-exit.
4. KVM calls kvm_queue_interrupt() to re-queue the un-invoked vector 236
into vcpu->arch.interrupt. A KVM_REQ_EVENT is requested.
5. Now, suppose APICv is activated. Before the next VM-entry, KVM calls
kvm_vcpu_update_apicv() to activate APICv.
6. Although APICv is now active, KVM still uses the legacy
VM_ENTRY_INTR_INFO_FIELD to re-inject vector 236. GUEST_INTR_STATUS.SVI is
not configured.
7. After the next VM-entry, vector 236 is invoked through the guest IDT.
Finally, an EOI occurs. However, due to the lack of GUEST_INTR_STATUS.SVI
configuration, vector 236 is not cleared from the ISR.
8. ISR is stalled forever on vector 236.

Using QEMU as an example, vector 236 is stuck in the ISR forever:

(qemu) info lapic 1
dumping local APIC state for CPU 1

LVT0	 0x00010700 active-hi edge  masked                      ExtINT (vec 0)
LVT1	 0x00010400 active-hi edge  masked                      NMI
LVTPC	 0x00000400 active-hi edge                              NMI
LVTERR	 0x000000fe active-hi edge                              Fixed  (vec 254)
LVTTHMR	 0x00010000 active-hi edge  masked                      Fixed  (vec 0)
LVTT	 0x000400ec active-hi edge                 tsc-deadline Fixed  (vec 236)
Timer	 DCR=0x0 (divide by 2) initial_count = 0 current_count = 0
SPIV	 0x000001ff APIC enabled, focus=off, spurious vec 255
ICR	 0x000000fd physical edge de-assert no-shorthand
ICR2	 0x00000000 cpu 0 (X2APIC ID)
ESR	 0x00000000
ISR	 236
IRR	 37(level) 236

The issue does not apply to AMD SVM, which employs a different LAPIC
virtualization mechanism. In addition, APICV_INHIBIT_REASON_IRQWIN ensures
AMD SVM AVIC is not re-activated until the last injected interrupt has been
EOI'd.

Fix the bug by configuring Intel VMX GUEST_INTR_STATUS.SVI when APICv is
activated at runtime.

Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com>
---
Changed since v2:
  - Add support for guest mode (suggested by Chao Gao).
  - Add comments in the code (suggested by Chao Gao).
  - Remove WARN_ON_ONCE from vmx_hwapic_isr_update().
  - Edit commit message "AMD SVM APICv" to "AMD SVM AVIC"
    (suggested by Alejandro Jimenez).

 arch/x86/kvm/vmx/vmx.c | 9 ---------
 arch/x86/kvm/x86.c     | 7 +++++++
 2 files changed, 7 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index f87c216d976d..d263dbf0b917 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6878,15 +6878,6 @@ void vmx_hwapic_isr_update(struct kvm_vcpu *vcpu, int max_isr)
 	 * VM-Exit, otherwise L1 with run with a stale SVI.
 	 */
 	if (is_guest_mode(vcpu)) {
-		/*
-		 * KVM is supposed to forward intercepted L2 EOIs to L1 if VID
-		 * is enabled in vmcs12; as above, the EOIs affect L2's vAPIC.
-		 * Note, userspace can stuff state while L2 is active; assert
-		 * that VID is disabled if and only if the vCPU is in KVM_RUN
-		 * to avoid false positives if userspace is setting APIC state.
-		 */
-		WARN_ON_ONCE(vcpu->wants_to_run &&
-			     nested_cpu_has_vid(get_vmcs12(vcpu)));
 		to_vmx(vcpu)->nested.update_vmcs01_hwapic_isr = true;
 		return;
 	}
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index b4b5d2d09634..08b34431c187 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10878,9 +10878,16 @@ void __kvm_vcpu_update_apicv(struct kvm_vcpu *vcpu)
 	 * pending. At the same time, KVM_REQ_EVENT may not be set as APICv was
 	 * still active when the interrupt got accepted. Make sure
 	 * kvm_check_and_inject_events() is called to check for that.
+	 *
+	 * When APICv gets enabled, updating SVI is necessary; otherwise,
+	 * SVI won't reflect the highest bit in vISR and the next EOI from
+	 * the guest won't be virtualized correctly, as the CPU will clear
+	 * the SVI bit from vISR.
 	 */
 	if (!apic->apicv_active)
 		kvm_make_request(KVM_REQ_EVENT, vcpu);
+	else
+		kvm_apic_update_hwapic_isr(vcpu);
 
 out:
 	preempt_enable();
-- 
2.39.3


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* Re: [PATCH v2 1/1] KVM: VMX: configure SVI during runtime APICv activation
  2025-11-10  6:32 [PATCH v2 1/1] KVM: VMX: configure SVI during runtime APICv activation Dongli Zhang
@ 2025-11-10  7:08 ` Chao Gao
  2025-11-12 14:47 ` Sean Christopherson
  1 sibling, 0 replies; 9+ messages in thread
From: Chao Gao @ 2025-11-10  7:08 UTC (permalink / raw)
  To: Dongli Zhang
  Cc: kvm, x86, linux-kernel, seanjc, pbonzini, tglx, mingo, bp,
	dave.hansen, hpa, joe.jin, alejandro.j.jimenez

On Sun, Nov 09, 2025 at 10:32:12PM -0800, Dongli Zhang wrote:
>The APICv (apic->apicv_active) can be activated or deactivated at runtime,
>for instance, because of APICv inhibit reasons. Intel VMX employs different
>mechanisms to virtualize LAPIC based on whether APICv is active.
>
>When APICv is activated at runtime, GUEST_INTR_STATUS is used to configure
>and report the current pending IRR and ISR states. Unless a specific vector
>is explicitly included in EOI_EXIT_BITMAP, its EOI will not be trapped to
>KVM. Intel VMX automatically clears the corresponding ISR bit based on the
>GUEST_INTR_STATUS.SVI field.
>
>When APICv is deactivated at runtime, the VM_ENTRY_INTR_INFO_FIELD is used
>to specify the next interrupt vector to invoke upon VM-entry. The
>VMX IDT_VECTORING_INFO_FIELD is used to report un-invoked vectors on
>VM-exit. EOIs are always trapped to KVM, so the software can manually clear
>pending ISR bits.
>
>There are scenarios where, with APICv activated at runtime, a guest-issued
>EOI may not be able to clear the pending ISR bit.
>
>Taking vector 236 as an example, here is one scenario.
>
>1. Suppose APICv is inactive. Vector 236 is pending in the IRR.
>2. To handle KVM_REQ_EVENT, KVM moves vector 236 from the IRR to the ISR,
>and configures the VM_ENTRY_INTR_INFO_FIELD via vmx_inject_irq().
>3. After VM-entry, vector 236 is invoked through the guest IDT. At this
>point, the data in VM_ENTRY_INTR_INFO_FIELD is no longer valid. The guest
>interrupt handler for vector 236 is invoked.
>4. Suppose a VM exit occurs very early in the guest interrupt handler,
>before the EOI is issued.
>5. Nothing is reported through the IDT_VECTORING_INFO_FIELD because
>vector 236 has already been invoked in the guest.
>6. Now, suppose APICv is activated. Before the next VM-entry, KVM calls
>kvm_vcpu_update_apicv() to activate APICv.
>7. Unfortunately, GUEST_INTR_STATUS.SVI is not configured, although
>vector 236 is still pending in the ISR.
>8. After VM-entry, the guest finally issues the EOI for vector 236.
>However, because SVI is not configured, vector 236 is not cleared.
>9. ISR is stalled forever on vector 236.
>
>Here is another scenario.
>
>1. Suppose APICv is inactive. Vector 236 is pending in the IRR.
>2. To handle KVM_REQ_EVENT, KVM moves vector 236 from the IRR to the ISR,
>and configures the VM_ENTRY_INTR_INFO_FIELD via vmx_inject_irq().
>3. VM-exit occurs immediately after the next VM-entry. The vector 236 is
>not invoked through the guest IDT. Instead, it is saved to the
>IDT_VECTORING_INFO_FIELD during the VM-exit.
>4. KVM calls kvm_queue_interrupt() to re-queue the un-invoked vector 236
>into vcpu->arch.interrupt. A KVM_REQ_EVENT is requested.
>5. Now, suppose APICv is activated. Before the next VM-entry, KVM calls
>kvm_vcpu_update_apicv() to activate APICv.
>6. Although APICv is now active, KVM still uses the legacy
>VM_ENTRY_INTR_INFO_FIELD to re-inject vector 236. GUEST_INTR_STATUS.SVI is
>not configured.
>7. After the next VM-entry, vector 236 is invoked through the guest IDT.
>Finally, an EOI occurs. However, due to the lack of GUEST_INTR_STATUS.SVI
>configuration, vector 236 is not cleared from the ISR.
>8. ISR is stalled forever on vector 236.
>
>Using QEMU as an example, vector 236 is stuck in ISR forever.
>
>(qemu) info lapic 1
>dumping local APIC state for CPU 1
>
>LVT0	 0x00010700 active-hi edge  masked                      ExtINT (vec 0)
>LVT1	 0x00010400 active-hi edge  masked                      NMI
>LVTPC	 0x00000400 active-hi edge                              NMI
>LVTERR	 0x000000fe active-hi edge                              Fixed  (vec 254)
>LVTTHMR	 0x00010000 active-hi edge  masked                      Fixed  (vec 0)
>LVTT	 0x000400ec active-hi edge                 tsc-deadline Fixed  (vec 236)
>Timer	 DCR=0x0 (divide by 2) initial_count = 0 current_count = 0
>SPIV	 0x000001ff APIC enabled, focus=off, spurious vec 255
>ICR	 0x000000fd physical edge de-assert no-shorthand
>ICR2	 0x00000000 cpu 0 (X2APIC ID)
>ESR	 0x00000000
>ISR	 236
>IRR	 37(level) 236
>
>The issue does not apply to AMD SVM, which employs a different LAPIC
>virtualization mechanism. In addition, APICV_INHIBIT_REASON_IRQWIN ensures
>AMD SVM AVIC is not re-activated until the last injected interrupt has been
>EOI'd.
>
>Fix the bug by configuring Intel VMX GUEST_INTR_STATUS.SVI if APICv is
>activated at runtime.
>
>Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com>

Reviewed-by: Chao Gao <chao.gao@intel.com>


* Re: [PATCH v2 1/1] KVM: VMX: configure SVI during runtime APICv activation
  2025-11-10  6:32 [PATCH v2 1/1] KVM: VMX: configure SVI during runtime APICv activation Dongli Zhang
  2025-11-10  7:08 ` Chao Gao
@ 2025-11-12 14:47 ` Sean Christopherson
  2025-11-13  3:06   ` Dongli Zhang
  1 sibling, 1 reply; 9+ messages in thread
From: Sean Christopherson @ 2025-11-12 14:47 UTC (permalink / raw)
  To: Dongli Zhang
  Cc: kvm, x86, linux-kernel, chao.gao, pbonzini, tglx, mingo, bp,
	dave.hansen, hpa, joe.jin, alejandro.j.jimenez

On Sun, Nov 09, 2025, Dongli Zhang wrote:
> ---
> Changed since v2:
>   - Add support for guest mode (suggested by Chao Gao).
>   - Add comments in the code (suggested by Chao Gao).
>   - Remove WARN_ON_ONCE from vmx_hwapic_isr_update().
>   - Edit commit message "AMD SVM APICv" to "AMD SVM AVIC"
>     (suggested by Alejandro Jimenez).
> 
>  arch/x86/kvm/vmx/vmx.c | 9 ---------
>  arch/x86/kvm/x86.c     | 7 +++++++
>  2 files changed, 7 insertions(+), 9 deletions(-)
> 
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index f87c216d976d..d263dbf0b917 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -6878,15 +6878,6 @@ void vmx_hwapic_isr_update(struct kvm_vcpu *vcpu, int max_isr)
>  	 * VM-Exit, otherwise L1 with run with a stale SVI.
>  	 */
>  	if (is_guest_mode(vcpu)) {
> -		/*
> -		 * KVM is supposed to forward intercepted L2 EOIs to L1 if VID
> -		 * is enabled in vmcs12; as above, the EOIs affect L2's vAPIC.
> -		 * Note, userspace can stuff state while L2 is active; assert
> -		 * that VID is disabled if and only if the vCPU is in KVM_RUN
> -		 * to avoid false positives if userspace is setting APIC state.
> -		 */
> -		WARN_ON_ONCE(vcpu->wants_to_run &&
> -			     nested_cpu_has_vid(get_vmcs12(vcpu)));
>  		to_vmx(vcpu)->nested.update_vmcs01_hwapic_isr = true;
>  		return;
>  	}
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index b4b5d2d09634..08b34431c187 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -10878,9 +10878,16 @@ void __kvm_vcpu_update_apicv(struct kvm_vcpu *vcpu)
>  	 * pending. At the same time, KVM_REQ_EVENT may not be set as APICv was
>  	 * still active when the interrupt got accepted. Make sure
>  	 * kvm_check_and_inject_events() is called to check for that.
> +	 *
> +	 * When APICv gets enabled, updating SVI is necessary; otherwise,
> +	 * SVI won't reflect the highest bit in vISR and the next EOI from
> +	 * the guest won't be virtualized correctly, as the CPU will clear
> +	 * the SVI bit from vISR.
>  	 */
>  	if (!apic->apicv_active)
>  		kvm_make_request(KVM_REQ_EVENT, vcpu);
> +	else
> +		kvm_apic_update_hwapic_isr(vcpu);

Rather than trigger the update from x86.c, what if we let VMX make the call?
Then we don't need to drop the WARN, and in the unlikely scenario L2 is active,
we'll save a pointless scan of the vISR (VMX will defer the update until L1 is
active).

We could even have kvm_apic_update_hwapic_isr() WARN if L2 is active.  E.g. with
an opportunistic typo fix in vmx_hwapic_isr_update()'s comment (completely untested):

diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index 0ae7f913d782..786ccfc24252 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -774,7 +774,8 @@ void kvm_apic_update_hwapic_isr(struct kvm_vcpu *vcpu)
 {
        struct kvm_lapic *apic = vcpu->arch.apic;
 
-       if (WARN_ON_ONCE(!lapic_in_kernel(vcpu)) || !apic->apicv_active)
+       if (WARN_ON_ONCE(!lapic_in_kernel(vcpu)) || !apic->apicv_active ||
+                        is_guest_mode(vcpu))
                return;
 
        kvm_x86_call(hwapic_isr_update)(vcpu, apic_find_highest_isr(apic));
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 91b6f2f3edc2..653b8b713547 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -4430,6 +4430,14 @@ void vmx_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu)
                                                 kvm_vcpu_apicv_active(vcpu));
 
        vmx_update_msr_bitmap_x2apic(vcpu);
+
+       /*
+        * Refresh SVI if APICv is enabled, as any changes KVM made to vISR
+        * while APICv was disabled need to be reflected in SVI, e.g. so that
+        * the next accelerated EOI will clear the correct vector in vISR.
+        */
+       if (kvm_vcpu_apicv_active(vcpu))
+               kvm_apic_update_hwapic_isr(vcpu);
 }
 
 static u32 vmx_exec_control(struct vcpu_vmx *vmx)
@@ -6880,7 +6888,7 @@ void vmx_hwapic_isr_update(struct kvm_vcpu *vcpu, int max_isr)
 
        /*
         * If L2 is active, defer the SVI update until vmcs01 is loaded, as SVI
-        * is only relevant for if and only if Virtual Interrupt Delivery is
+        * is only relevant for L2 if and only if Virtual Interrupt Delivery is
         * enabled in vmcs12, and if VID is enabled then L2 EOIs affect L2's
         * vAPIC, not L1's vAPIC.  KVM must update vmcs01 on the next nested
         * VM-Exit, otherwise L1 with run with a stale SVI.


* Re: [PATCH v2 1/1] KVM: VMX: configure SVI during runtime APICv activation
  2025-11-12 14:47 ` Sean Christopherson
@ 2025-11-13  3:06   ` Dongli Zhang
  2025-11-13 21:13     ` Sean Christopherson
  0 siblings, 1 reply; 9+ messages in thread
From: Dongli Zhang @ 2025-11-13  3:06 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: kvm, x86, linux-kernel, chao.gao, pbonzini, tglx, mingo, bp,
	dave.hansen, hpa, joe.jin, alejandro.j.jimenez

Hi Sean,

On 11/12/25 6:47 AM, Sean Christopherson wrote:
> On Sun, Nov 09, 2025, Dongli Zhang wrote:
>> ---
>> Changed since v2:
>>   - Add support for guest mode (suggested by Chao Gao).
>>   - Add comments in the code (suggested by Chao Gao).
>>   - Remove WARN_ON_ONCE from vmx_hwapic_isr_update().
>>   - Edit commit message "AMD SVM APICv" to "AMD SVM AVIC"
>>     (suggested by Alejandro Jimenez).
>>
>>  arch/x86/kvm/vmx/vmx.c | 9 ---------
>>  arch/x86/kvm/x86.c     | 7 +++++++
>>  2 files changed, 7 insertions(+), 9 deletions(-)
>>
>> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
>> index f87c216d976d..d263dbf0b917 100644
>> --- a/arch/x86/kvm/vmx/vmx.c
>> +++ b/arch/x86/kvm/vmx/vmx.c
>> @@ -6878,15 +6878,6 @@ void vmx_hwapic_isr_update(struct kvm_vcpu *vcpu, int max_isr)
>>  	 * VM-Exit, otherwise L1 with run with a stale SVI.
>>  	 */
>>  	if (is_guest_mode(vcpu)) {
>> -		/*
>> -		 * KVM is supposed to forward intercepted L2 EOIs to L1 if VID
>> -		 * is enabled in vmcs12; as above, the EOIs affect L2's vAPIC.
>> -		 * Note, userspace can stuff state while L2 is active; assert
>> -		 * that VID is disabled if and only if the vCPU is in KVM_RUN
>> -		 * to avoid false positives if userspace is setting APIC state.
>> -		 */
>> -		WARN_ON_ONCE(vcpu->wants_to_run &&
>> -			     nested_cpu_has_vid(get_vmcs12(vcpu)));
>>  		to_vmx(vcpu)->nested.update_vmcs01_hwapic_isr = true;
>>  		return;
>>  	}
>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>> index b4b5d2d09634..08b34431c187 100644
>> --- a/arch/x86/kvm/x86.c
>> +++ b/arch/x86/kvm/x86.c
>> @@ -10878,9 +10878,16 @@ void __kvm_vcpu_update_apicv(struct kvm_vcpu *vcpu)
>>  	 * pending. At the same time, KVM_REQ_EVENT may not be set as APICv was
>>  	 * still active when the interrupt got accepted. Make sure
>>  	 * kvm_check_and_inject_events() is called to check for that.
>> +	 *
>> +	 * When APICv gets enabled, updating SVI is necessary; otherwise,
>> +	 * SVI won't reflect the highest bit in vISR and the next EOI from
>> +	 * the guest won't be virtualized correctly, as the CPU will clear
>> +	 * the SVI bit from vISR.
>>  	 */
>>  	if (!apic->apicv_active)
>>  		kvm_make_request(KVM_REQ_EVENT, vcpu);
>> +	else
>> +		kvm_apic_update_hwapic_isr(vcpu);
> 
> Rather than trigger the update from x86.c, what if we let VMX make the call?
> Then we don't need to drop the WARN, and in the unlikely scenario L2 is active,
> we'll save a pointless scan of the vISR (VMX will defer the update until L1 is
> active).
> 
> We could even have kvm_apic_update_hwapic_isr() WARN if L2 is active.  E.g. with
> an opportunistic typo fix in vmx_hwapic_isr_update()'s comment (completely untested):
> 
> diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
> index 0ae7f913d782..786ccfc24252 100644
> --- a/arch/x86/kvm/lapic.c
> +++ b/arch/x86/kvm/lapic.c
> @@ -774,7 +774,8 @@ void kvm_apic_update_hwapic_isr(struct kvm_vcpu *vcpu)
>  {
>         struct kvm_lapic *apic = vcpu->arch.apic;
>  
> -       if (WARN_ON_ONCE(!lapic_in_kernel(vcpu)) || !apic->apicv_active)
> +       if (WARN_ON_ONCE(!lapic_in_kernel(vcpu)) || !apic->apicv_active ||
> +                        is_guest_mode(vcpu))
>                 return;
>  
>         kvm_x86_call(hwapic_isr_update)(vcpu, apic_find_highest_isr(apic));
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 91b6f2f3edc2..653b8b713547 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -4430,6 +4430,14 @@ void vmx_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu)
>                                                  kvm_vcpu_apicv_active(vcpu));
>  
>         vmx_update_msr_bitmap_x2apic(vcpu);
> +
> +       /*
> +        * Refresh SVI if APICv is enabled, as any changes KVM made to vISR
> +        * while APICv was disabled need to be reflected in SVI, e.g. so that
> +        * the next accelerated EOI will clear the correct vector in vISR.
> +        */
> +       if (kvm_vcpu_apicv_active(vcpu))
> +               kvm_apic_update_hwapic_isr(vcpu);
>  }
>  
>  static u32 vmx_exec_control(struct vcpu_vmx *vmx)
> @@ -6880,7 +6888,7 @@ void vmx_hwapic_isr_update(struct kvm_vcpu *vcpu, int max_isr)
>  
>         /*
>          * If L2 is active, defer the SVI update until vmcs01 is loaded, as SVI
> -        * is only relevant for if and only if Virtual Interrupt Delivery is
> +        * is only relevant for L2 if and only if Virtual Interrupt Delivery is
>          * enabled in vmcs12, and if VID is enabled then L2 EOIs affect L2's
>          * vAPIC, not L1's vAPIC.  KVM must update vmcs01 on the next nested
>          * VM-Exit, otherwise L1 with run with a stale SVI.


As a quick reply, the idea is to call kvm_apic_update_hwapic_isr() in
vmx_refresh_apicv_exec_ctrl(), instead of __kvm_vcpu_update_apicv().

I think the following case doesn't work:

1. APICv is activated when vCPU is in L2.

kvm_vcpu_update_apicv()
-> __kvm_vcpu_update_apicv()
   -> vmx_refresh_apicv_exec_ctrl()

vmx_refresh_apicv_exec_ctrl() returns after setting:
vmx->nested.update_vmcs01_apicv_status = true.


2. On exit from L2 to L1, __nested_vmx_vmexit() requests for KVM_REQ_APICV_UPDATE.

__nested_vmx_vmexit()
-> leave_guest_mode(vcpu)
-> kvm_make_request(KVM_REQ_APICV_UPDATE, vcpu)


3. vCPU processes KVM_REQ_APICV_UPDATE again.

This time, __kvm_vcpu_update_apicv() returns without calling
refresh_apicv_exec_ctrl(), because (apic->apicv_active == activate).

vmx_refresh_apicv_exec_ctrl() doesn't get any chance to be called.


In order to call kvm_apic_update_hwapic_isr() in vmx_refresh_apicv_exec_ctrl(),
we may need to resolve the issue mentioned by Chao, for instance, with something
like:

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index bcea087b642f..1725c6a94f99 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -19,6 +19,7 @@
 #include "trace.h"
 #include "vmx.h"
 #include "smm.h"
+#include "x86_ops.h"

 static bool __read_mostly enable_shadow_vmcs = 1;
 module_param_named(enable_shadow_vmcs, enable_shadow_vmcs, bool, S_IRUGO);
@@ -5216,7 +5217,7 @@ void __nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason,

        if (vmx->nested.update_vmcs01_apicv_status) {
                vmx->nested.update_vmcs01_apicv_status = false;
-               kvm_make_request(KVM_REQ_APICV_UPDATE, vcpu);
+               vmx_refresh_apicv_exec_ctrl(vcpu);
        }

        if (vmx->nested.update_vmcs01_hwapic_isr) {

Still validating if it works well.

Thank you very much!

Dongli Zhang


* Re: [PATCH v2 1/1] KVM: VMX: configure SVI during runtime APICv activation
  2025-11-13  3:06   ` Dongli Zhang
@ 2025-11-13 21:13     ` Sean Christopherson
  2025-11-18  3:36       ` Dongli Zhang
  0 siblings, 1 reply; 9+ messages in thread
From: Sean Christopherson @ 2025-11-13 21:13 UTC (permalink / raw)
  To: Dongli Zhang
  Cc: kvm, x86, linux-kernel, chao.gao, pbonzini, tglx, mingo, bp,
	dave.hansen, hpa, joe.jin, alejandro.j.jimenez

On Wed, Nov 12, 2025, Dongli Zhang wrote:
> Hi Sean,
> 
> On 11/12/25 6:47 AM, Sean Christopherson wrote:
> > On Sun, Nov 09, 2025, Dongli Zhang wrote:
> > diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> > index 91b6f2f3edc2..653b8b713547 100644
> > --- a/arch/x86/kvm/vmx/vmx.c
> > +++ b/arch/x86/kvm/vmx/vmx.c
> > @@ -4430,6 +4430,14 @@ void vmx_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu)
> >                                                  kvm_vcpu_apicv_active(vcpu));
> >  
> >         vmx_update_msr_bitmap_x2apic(vcpu);
> > +
> > +       /*
> > +        * Refresh SVI if APICv is enabled, as any changes KVM made to vISR
> > +        * while APICv was disabled need to be reflected in SVI, e.g. so that
> > +        * the next accelerated EOI will clear the correct vector in vISR.
> > +        */
> > +       if (kvm_vcpu_apicv_active(vcpu))
> > +               kvm_apic_update_hwapic_isr(vcpu);
> >  }
> >  
> >  static u32 vmx_exec_control(struct vcpu_vmx *vmx)
> > @@ -6880,7 +6888,7 @@ void vmx_hwapic_isr_update(struct kvm_vcpu *vcpu, int max_isr)
> >  
> >         /*
> >          * If L2 is active, defer the SVI update until vmcs01 is loaded, as SVI
> > -        * is only relevant for if and only if Virtual Interrupt Delivery is
> > +        * is only relevant for L2 if and only if Virtual Interrupt Delivery is
> >          * enabled in vmcs12, and if VID is enabled then L2 EOIs affect L2's
> >          * vAPIC, not L1's vAPIC.  KVM must update vmcs01 on the next nested
> >          * VM-Exit, otherwise L1 with run with a stale SVI.
> 
> 
> As a quick reply, the idea is to call kvm_apic_update_hwapic_isr() in
> vmx_refresh_apicv_exec_ctrl(), instead of __kvm_vcpu_update_apicv().
> 
> I think the below case doesn't work:
> 
> 1. APICv is activated when vCPU is in L2.
> 
> kvm_vcpu_update_apicv()
> -> __kvm_vcpu_update_apicv()
>    -> vmx_refresh_apicv_exec_ctrl()
> 
> vmx_refresh_apicv_exec_ctrl() returns after setting:
> vmx->nested.update_vmcs01_apicv_status = true.
> 
> 
> 2. On exit from L2 to L1, __nested_vmx_vmexit() requests for KVM_REQ_APICV_UPDATE.
> 
> __nested_vmx_vmexit()
> -> leave_guest_mode(vcpu)
> -> kvm_make_request(KVM_REQ_APICV_UPDATE, vcpu)
> 
> 
> 3. vCPU processes KVM_REQ_APICV_UPDATE again.
> 
> This time, __kvm_vcpu_update_apicv() returns without calling
> refresh_apicv_exec_ctrl(), because (apic->apicv_active == activate).
> 
> vmx_refresh_apicv_exec_ctrl() doesn't get any chance to be called.

Oof, that's nasty.

> In order to call kvm_apic_update_hwapic_isr() in vmx_refresh_apicv_exec_ctrl(),
> we may need to resolve the issue mentioned by Chao, for instance, with something
> like:
> 
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index bcea087b642f..1725c6a94f99 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -19,6 +19,7 @@
>  #include "trace.h"
>  #include "vmx.h"
>  #include "smm.h"
> +#include "x86_ops.h"
> 
>  static bool __read_mostly enable_shadow_vmcs = 1;
>  module_param_named(enable_shadow_vmcs, enable_shadow_vmcs, bool, S_IRUGO);
> @@ -5216,7 +5217,7 @@ void __nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason,
> 
>         if (vmx->nested.update_vmcs01_apicv_status) {
>                 vmx->nested.update_vmcs01_apicv_status = false;
> -               kvm_make_request(KVM_REQ_APICV_UPDATE, vcpu);
> +               vmx_refresh_apicv_exec_ctrl(vcpu);
>         }

Hmm, what if we go the opposite direction and bundle the vISR update into
KVM_REQ_APICV_UPDATE?  Then we can drop nested.update_vmcs01_hwapic_isr, and
hopefully avoid similar ordering issues in the future.

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 564f5af5ae86..7bf44a8111e5 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -5168,11 +5168,6 @@ void __nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason,
                kvm_make_request(KVM_REQ_APICV_UPDATE, vcpu);
        }
 
-       if (vmx->nested.update_vmcs01_hwapic_isr) {
-               vmx->nested.update_vmcs01_hwapic_isr = false;
-               kvm_apic_update_hwapic_isr(vcpu);
-       }
-
        if ((vm_exit_reason != -1) &&
            (enable_shadow_vmcs || nested_vmx_is_evmptr12_valid(vmx)))
                vmx->nested.need_vmcs12_to_shadow_sync = true;
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 6f374c815ce2..64edf47bed02 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6907,7 +6907,7 @@ void vmx_hwapic_isr_update(struct kvm_vcpu *vcpu, int max_isr)
                 */
                WARN_ON_ONCE(vcpu->wants_to_run &&
                             nested_cpu_has_vid(get_vmcs12(vcpu)));
-               to_vmx(vcpu)->nested.update_vmcs01_hwapic_isr = true;
+               to_vmx(vcpu)->nested.update_vmcs01_apicv_status = true;
                return;
        }
 
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index bc3ed3145d7e..17bd43d6faaf 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -135,7 +135,6 @@ struct nested_vmx {
        bool reload_vmcs01_apic_access_page;
        bool update_vmcs01_cpu_dirty_logging;
        bool update_vmcs01_apicv_status;
-       bool update_vmcs01_hwapic_isr;
 
        /*
         * Enlightened VMCS has been enabled. It does not mean that L1 has to
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 9c2e28028c2b..445bf22ee519 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -11218,8 +11218,10 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
                if (kvm_check_request(KVM_REQ_HV_STIMER, vcpu))
                        kvm_hv_process_stimers(vcpu);
 #endif
-               if (kvm_check_request(KVM_REQ_APICV_UPDATE, vcpu))
+               if (kvm_check_request(KVM_REQ_APICV_UPDATE, vcpu)) {
                        kvm_vcpu_update_apicv(vcpu);
+                       kvm_apic_update_hwapic_isr(vcpu);
+               }
                if (kvm_check_request(KVM_REQ_APF_READY, vcpu))
                        kvm_check_async_pf_completion(vcpu);


* Re: [PATCH v2 1/1] KVM: VMX: configure SVI during runtime APICv activation
  2025-11-13 21:13     ` Sean Christopherson
@ 2025-11-18  3:36       ` Dongli Zhang
  2025-12-05  2:15         ` Sean Christopherson
  0 siblings, 1 reply; 9+ messages in thread
From: Dongli Zhang @ 2025-11-18  3:36 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: kvm, x86, linux-kernel, chao.gao, pbonzini, tglx, mingo, bp,
	dave.hansen, hpa, joe.jin, alejandro.j.jimenez

Hi Sean,

[snip]

> 
> Hmm, what if we go the opposite direction and bundle the vISR update into
> KVM_REQ_APICV_UPDATE?  Then we can drop nested.update_vmcs01_hwapic_isr, and
> hopefully avoid similar ordering issues in the future.
> 
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index 564f5af5ae86..7bf44a8111e5 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -5168,11 +5168,6 @@ void __nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason,
>                 kvm_make_request(KVM_REQ_APICV_UPDATE, vcpu);
>         }
>  
> -       if (vmx->nested.update_vmcs01_hwapic_isr) {
> -               vmx->nested.update_vmcs01_hwapic_isr = false;
> -               kvm_apic_update_hwapic_isr(vcpu);
> -       }
> -
>         if ((vm_exit_reason != -1) &&
>             (enable_shadow_vmcs || nested_vmx_is_evmptr12_valid(vmx)))
>                 vmx->nested.need_vmcs12_to_shadow_sync = true;
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 6f374c815ce2..64edf47bed02 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -6907,7 +6907,7 @@ void vmx_hwapic_isr_update(struct kvm_vcpu *vcpu, int max_isr)
>                  */
>                 WARN_ON_ONCE(vcpu->wants_to_run &&
>                              nested_cpu_has_vid(get_vmcs12(vcpu)));
> -               to_vmx(vcpu)->nested.update_vmcs01_hwapic_isr = true;
> +               to_vmx(vcpu)->nested.update_vmcs01_apicv_status = true;
>                 return;
>         }
>  
> diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
> index bc3ed3145d7e..17bd43d6faaf 100644
> --- a/arch/x86/kvm/vmx/vmx.h
> +++ b/arch/x86/kvm/vmx/vmx.h
> @@ -135,7 +135,6 @@ struct nested_vmx {
>         bool reload_vmcs01_apic_access_page;
>         bool update_vmcs01_cpu_dirty_logging;
>         bool update_vmcs01_apicv_status;
> -       bool update_vmcs01_hwapic_isr;
>  
>         /*
>          * Enlightened VMCS has been enabled. It does not mean that L1 has to
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 9c2e28028c2b..445bf22ee519 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -11218,8 +11218,10 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
>                 if (kvm_check_request(KVM_REQ_HV_STIMER, vcpu))
>                         kvm_hv_process_stimers(vcpu);
>  #endif
> -               if (kvm_check_request(KVM_REQ_APICV_UPDATE, vcpu))
> +               if (kvm_check_request(KVM_REQ_APICV_UPDATE, vcpu)) {
>                         kvm_vcpu_update_apicv(vcpu);
> +                       kvm_apic_update_hwapic_isr(vcpu);
> +               }
>                 if (kvm_check_request(KVM_REQ_APF_READY, vcpu))
>                         kvm_check_async_pf_completion(vcpu);

Thank you very much for the suggestion.

There are still a few issues to fix.

1. We still need to remove WARN_ON_ONCE() from vmx_hwapic_isr_update().

[ 1125.176217] WARNING: CPU: 8 PID: 8034 at arch/x86/kvm/vmx/vmx.c:6896 vmx_hwapic_isr_update+0x1c7/0x250 [kvm_intel]
... ...
[ 1125.339364] Call Trace:
[ 1125.342341]  <TASK>
[ 1125.344793]  vcpu_run+0x2edf/0x3aa0 [kvm]
[ 1125.349629]  ? __pfx_load_fixmap_gdt+0x10/0x10
[ 1125.354771]  ? __pfx_vcpu_run+0x10/0x10 [kvm]
[ 1125.359841]  ? fpregs_mark_activate+0x99/0x150
[ 1125.364909]  ? fpu_swap_kvm_fpstate+0x1a1/0x360
[ 1125.370129]  kvm_arch_vcpu_ioctl_run+0x7b3/0x1560 [kvm]
[ 1125.376123]  ? __pfx_eventfd_write+0x10/0x10
[ 1125.380989]  kvm_vcpu_ioctl+0x525/0x1090 [kvm]
[ 1125.386133]  ? __pfx_kvm_vcpu_ioctl+0x10/0x10 [kvm]
[ 1125.391801]  ? vfs_write+0x21e/0xcc0
[ 1125.395928]  ? __pfx_do_vfs_ioctl+0x10/0x10
[ 1125.400746]  ? __pfx_vfs_write+0x10/0x10
[ 1125.405260]  ? __pfx_ioctl_has_perm.constprop.0.isra.0+0x10/0x10
[ 1125.412141]  ? fdget_pos+0x396/0x4c0
[ 1125.416225]  ? fput+0x25/0x80
[ 1125.419628]  __x64_sys_ioctl+0x133/0x1c0
[ 1125.424102]  do_syscall_64+0x53/0xfa0
[ 1125.433954]  entry_SYSCALL_64_after_hwframe+0x76/0x7e


2. As you mentioned in a prior email, while this is not a functional issue,
apic_find_highest_isr() is still invoked unconditionally, since
kvm_apic_update_hwapic_isr() is always called during KVM_REQ_APICV_UPDATE
handling.


3. The issue that Chao pointed out is still present.

(1) Suppose APICv is activated during L2.

kvm_vcpu_update_apicv()
-> __kvm_vcpu_update_apicv()
   -> apic->apicv_active = true
   -> vmx_refresh_apicv_exec_ctrl()
      -> vmx->nested.update_vmcs01_apicv_status = true
      -> return

Then L2 exits to L1:

__nested_vmx_vmexit()
-> kvm_make_request(KVM_REQ_APICV_UPDATE)

vcpu_enter_guest: KVM_REQ_APICV_UPDATE
-> kvm_vcpu_update_apicv()
   -> __kvm_vcpu_update_apicv()
      -> return because of
         if (apic->apicv_active == activate)

refresh_apicv_exec_ctrl() is skipped.


4. It looks more complicated if we set "update_vmcs01_apicv_status = true" in
both vmx_hwapic_isr_update() and vmx_refresh_apicv_exec_ctrl().


Therefore, how about we continue to handle 'update_vmcs01_apicv_status' and
'update_vmcs01_hwapic_isr' as independent operations?

1. Take the approach reviewed by Chao, and ...

2. Fix the vmx_refresh_apicv_exec_ctrl() issue with an additional patch:

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index bcea087b642f..7d98c11a8920 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -19,6 +19,7 @@
 #include "trace.h"
 #include "vmx.h"
 #include "smm.h"
+#include "x86_ops.h"

 static bool __read_mostly enable_shadow_vmcs = 1;
 module_param_named(enable_shadow_vmcs, enable_shadow_vmcs, bool, S_IRUGO);
@@ -5214,9 +5215,9 @@ void __nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason,
                kvm_make_request(KVM_REQ_APIC_PAGE_RELOAD, vcpu);
        }

-       if (vmx->nested.update_vmcs01_apicv_status) {
-               vmx->nested.update_vmcs01_apicv_status = false;
-               kvm_make_request(KVM_REQ_APICV_UPDATE, vcpu);
+       if (vmx->nested.update_vmcs01_apicv_exec_ctrl) {
+               vmx->nested.update_vmcs01_apicv_exec_ctrl = false;
+               vmx_refresh_apicv_exec_ctrl(vcpu);
        }

        if (vmx->nested.update_vmcs01_hwapic_isr) {
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index c3b9eb72b6f3..83705a6d5a8a 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -4415,7 +4415,7 @@ void vmx_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu)
        struct vcpu_vmx *vmx = to_vmx(vcpu);

        if (is_guest_mode(vcpu)) {
-               vmx->nested.update_vmcs01_apicv_status = true;
+               vmx->nested.update_vmcs01_apicv_exec_ctrl = true;
                return;
        }

diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index ea93121029f9..f6bee0e132a8 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -134,7 +134,7 @@ struct nested_vmx {
        bool change_vmcs01_virtual_apic_mode;
        bool reload_vmcs01_apic_access_page;
        bool update_vmcs01_cpu_dirty_logging;
-       bool update_vmcs01_apicv_status;
+       bool update_vmcs01_apicv_exec_ctrl;
        bool update_vmcs01_hwapic_isr;

        /*



By the way, while reviewing the source code, I noticed that certain read accesses
to 'apicv_inhibit_reasons' are not protected by 'apicv_update_lock'.

Thank you very much!

Dongli Zhang


* Re: [PATCH v2 1/1] KVM: VMX: configure SVI during runtime APICv activation
  2025-11-18  3:36       ` Dongli Zhang
@ 2025-12-05  2:15         ` Sean Christopherson
  2025-12-05 18:12           ` Dongli Zhang
  0 siblings, 1 reply; 9+ messages in thread
From: Sean Christopherson @ 2025-12-05  2:15 UTC (permalink / raw)
  To: Dongli Zhang
  Cc: kvm, x86, linux-kernel, chao.gao, pbonzini, tglx, mingo, bp,
	dave.hansen, hpa, joe.jin, alejandro.j.jimenez

On Mon, Nov 17, 2025, Dongli Zhang wrote:
> > Hmm, what if we go the opposite direction and bundle the vISR update into
> > KVM_REQ_APICV_UPDATE?  Then we can drop nested.update_vmcs01_hwapic_isr, and
> > hopefully avoid similar ordering issues in the future.

...

> Thank you very much for the suggestion.

If only it was a good suggestion :-)

> There are still a few issues to fix.
> 
> 1. We still need to remove WARN_ON_ONCE() from vmx_hwapic_isr_update().

..

> 2. As you mentioned in a prior email, while this is not a functional issue,
> apic_find_highest_isr() is still invoked unconditionally, since
> kvm_apic_update_hwapic_isr() is always called during KVM_REQ_APICV_UPDATE
> handling.
> 
> 
> 3. The issue that Chao pointed out is still present.
> 
> (1) Suppose APICv is activated during L2.
> 
> kvm_vcpu_update_apicv()
> -> __kvm_vcpu_update_apicv()
>    -> apic->apicv_active = true
>    -> vmx_refresh_apicv_exec_ctrl()
>       -> vmx->nested.update_vmcs01_apicv_status = true
>       -> return
> 
> Then L2 exits to L1:
> 
> __nested_vmx_vmexit()
> -> kvm_make_request(KVM_REQ_APICV_UPDATE)
> 
> vcpu_enter_guest: KVM_REQ_APICV_UPDATE
> -> kvm_vcpu_update_apicv()
>    -> __kvm_vcpu_update_apicv()
>       -> return because of
>          if (apic->apicv_active == activate)
> 
> refresh_apicv_exec_ctrl() is skipped.
> 
> 4. It looks more complicated if we set "update_vmcs01_apicv_status = true" in
> both vmx_hwapic_isr_update() and vmx_refresh_apicv_exec_ctrl().
> 
> 
> Therefore, how about we continue to handle 'update_vmcs01_apicv_status' and
> 'update_vmcs01_hwapic_isr' as independent operations?
> 
> 1. Take the approach reviewed by Chao, and ...

Ya.  I spent a lot of time today wrapping my head around what all is going on,
and my lightbulb moment came when reading this from the changelog:

  The issue is not applicable to AMD SVM which employs a different LAPIC
  virtualization mechanism. 

That's not entirely true.  It's definitely true for SVI, but not for the bug that
Chao pointed out.  SVM is "immune" from these bugs because KVM simply updates
vmcb01 directly.  And looking at everything, there's zero reason we can't do the
same for VMX.  Yeah, KVM needs to do a couple VMPTRLDs to swap between vmcs01 and
vmcs02, but those aren't _that_ expensive, and these are slow paths.

And with a guard(), it's pretty trivial to run a section of code with vmcs01
active.

static void vmx_load_vmcs01(struct kvm_vcpu *vcpu)
{
	struct vcpu_vmx *vmx = to_vmx(vcpu);

	if (!is_guest_mode(vcpu)) {
		WARN_ON_ONCE(vmx->loaded_vmcs != &vmx->vmcs01);
		return;
	}

	WARN_ON_ONCE(vmx->loaded_vmcs != &vmx->nested.vmcs02);
	vmx_switch_loaded_vmcs(vcpu, &vmx->vmcs01);
}

static void vmx_put_vmcs01(struct kvm_vcpu *vcpu)
{
	if (!is_guest_mode(vcpu))
		return;

	vmx_switch_loaded_vmcs(vcpu, &to_vmx(vcpu)->nested.vmcs02);
}
DEFINE_GUARD(vmx_vmcs01, struct kvm_vcpu *,
	     vmx_load_vmcs01(_T), vmx_put_vmcs01(_T))

I've got changes to convert everything to guard(vmx_vmcs01); except for
vmx_set_virtual_apic_mode(), they're all quite trivial.  I also have a selftest
that hits both this bug and the one Chao pointed out, so I'm reasonably confident
the changes do indeed work.

But they're most definitely NOT stable material.  So my plan is to grab this
and the below for 6.19, and then do the cleanup for 6.20 or later.

Oh, almost forgot.  We can also sink the hwapic_isr_update() call into
kvm_apic_update_apicv() and drop kvm_apic_update_hwapic_isr() entirely, which is
another argument for your approach.  That's actually a really good fit, because
that's where KVM parses the vISR when APICv is being _disabled_.

I'll post a v3 with everything tomorrow (hopefully) after running the changes
through more normal test flow.

> 2. Fix the vmx_refresh_apicv_exec_ctrl() issue with an additional patch:
> 
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index bcea087b642f..7d98c11a8920 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -19,6 +19,7 @@
>  #include "trace.h"
>  #include "vmx.h"
>  #include "smm.h"
> +#include "x86_ops.h"
> 
>  static bool __read_mostly enable_shadow_vmcs = 1;
>  module_param_named(enable_shadow_vmcs, enable_shadow_vmcs, bool, S_IRUGO);
> @@ -5214,9 +5215,9 @@ void __nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason,
>                 kvm_make_request(KVM_REQ_APIC_PAGE_RELOAD, vcpu);
>         }
> 
> -       if (vmx->nested.update_vmcs01_apicv_status) {
> -               vmx->nested.update_vmcs01_apicv_status = false;
> -               kvm_make_request(KVM_REQ_APICV_UPDATE, vcpu);
> +       if (vmx->nested.update_vmcs01_apicv_exec_ctrl) {
> +               vmx->nested.update_vmcs01_apicv_exec_ctrl = false;
> +               vmx_refresh_apicv_exec_ctrl(vcpu);

+1 to the fix, but I'll omit the update_vmcs01_apicv_exec_ctrl rename because I'm
99% certain we can get rid of it entirely.

Oh, and can you give your SoB for this?  I'll write the changelog, just need the
SoB for the commit.  Thanks!

>         }
> 
>         if (vmx->nested.update_vmcs01_hwapic_isr) {
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index c3b9eb72b6f3..83705a6d5a8a 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -4415,7 +4415,7 @@ void vmx_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu)
>         struct vcpu_vmx *vmx = to_vmx(vcpu);
> 
>         if (is_guest_mode(vcpu)) {
> -               vmx->nested.update_vmcs01_apicv_status = true;
> +               vmx->nested.update_vmcs01_apicv_exec_ctrl = true;
>                 return;
>         }
> 
> diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
> index ea93121029f9..f6bee0e132a8 100644
> --- a/arch/x86/kvm/vmx/vmx.h
> +++ b/arch/x86/kvm/vmx/vmx.h
> @@ -134,7 +134,7 @@ struct nested_vmx {
>         bool change_vmcs01_virtual_apic_mode;
>         bool reload_vmcs01_apic_access_page;
>         bool update_vmcs01_cpu_dirty_logging;
> -       bool update_vmcs01_apicv_status;
> +       bool update_vmcs01_apicv_exec_ctrl;
>         bool update_vmcs01_hwapic_isr;
> 
>         /*
> 
> 
> 
> By the way, while reviewing the source code, I noticed that certain read accesses
> to 'apicv_inhibit_reasons' are not protected by 'apicv_update_lock'.

Those are fine (hopefully; we spent a stupid amount of time sorting out the ordering).
In all cases, a false positive/negative will be remedied before KVM really truly
consumes the result.


* Re: [PATCH v2 1/1] KVM: VMX: configure SVI during runtime APICv activation
  2025-12-05  2:15         ` Sean Christopherson
@ 2025-12-05 18:12           ` Dongli Zhang
  2025-12-05 18:27             ` Sean Christopherson
  0 siblings, 1 reply; 9+ messages in thread
From: Dongli Zhang @ 2025-12-05 18:12 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: kvm, x86, linux-kernel, chao.gao, pbonzini, tglx, mingo, bp,
	dave.hansen, hpa, joe.jin, alejandro.j.jimenez

Hi Sean,

On 12/4/25 6:15 PM, Sean Christopherson wrote:
> On Mon, Nov 17, 2025, Dongli Zhang wrote:

[snip]

>>
>> 1. Take the approach reviewed by Chao, and ...
> 
> Ya.  I spent a lot of time today wrapping my head around what all is going on,
> and my lightbulb moment came when reading this from the changelog:
> 
>   The issue is not applicable to AMD SVM which employs a different LAPIC
>   virtualization mechanism. 
> 
> That's not entirely true.  It's definitely true for SVI, but not for the bug that
> Chao pointed out.  SVM is "immune" from these bugs because KVM simply updates
> vmcb01 directly.  And looking at everything, there's zero reason we can't do the
> same for VMX.  Yeah, KVM needs to do a couple VMPTRLDs to swap between vmcs01 and
> vmcs02, but those aren't _that_ expensive, and these are slow paths.
> 
> And with a guard(), it's pretty trivial to run a section of code with vmcs01
> active.
> 
> static void vmx_load_vmcs01(struct kvm_vcpu *vcpu)
> {
> 	struct vcpu_vmx *vmx = to_vmx(vcpu);
> 
> 	if (!is_guest_mode(vcpu)) {
> 		WARN_ON_ONCE(vmx->loaded_vmcs != &vmx->vmcs01);
> 		return;
> 	}
> 
> 	WARN_ON_ONCE(vmx->loaded_vmcs != &vmx->nested.vmcs02);
> 	vmx_switch_loaded_vmcs(vcpu, &vmx->vmcs01);
> }
> 
> static void vmx_put_vmcs01(struct kvm_vcpu *vcpu)
> {
> 	if (!is_guest_mode(vcpu))
> 		return;
> 
> 	vmx_switch_loaded_vmcs(vcpu, &to_vmx(vcpu)->nested.vmcs02);
> }
> DEFINE_GUARD(vmx_vmcs01, struct kvm_vcpu *,
> 	     vmx_load_vmcs01(_T), vmx_put_vmcs01(_T))
> 
> I've got changes to convert everything to guard(vmx_vmcs01); except for
> vmx_set_virtual_apic_mode(), they're all quite trivial.  I also have a selftest
> that hits both this bug and the one Chao pointed out, so I'm reasonably confident
> the changes do indeed work.
> 
> But they're most definitely NOT stable material.  So my plan is to grab this
> and the below for 6.19, and then do the cleanup for 6.20 or later.
> 
> Oh, almost forgot.  We can also sink the hwapic_isr_update() call into
> kvm_apic_update_apicv() and drop kvm_apic_update_hwapic_isr() entirely, which is
> another argument for your approach.  That's actually a really good fit, because
> that's where KVM parses the vISR when APICv is being _disabled_.
> 
> I'll post a v3 with everything tomorrow (hopefully) after running the changes
> through more normal test flow.

Looking forward to seeing how it looks. My only concern is whether it is simple
enough to backport to older kernel versions, e.g. v5.15.196.

> 
>> 2. Fix the vmx_refresh_apicv_exec_ctrl() issue with an additional patch:
>>
>> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
>> index bcea087b642f..7d98c11a8920 100644
>> --- a/arch/x86/kvm/vmx/nested.c
>> +++ b/arch/x86/kvm/vmx/nested.c
>> @@ -19,6 +19,7 @@
>>  #include "trace.h"
>>  #include "vmx.h"
>>  #include "smm.h"
>> +#include "x86_ops.h"
>>
>>  static bool __read_mostly enable_shadow_vmcs = 1;
>>  module_param_named(enable_shadow_vmcs, enable_shadow_vmcs, bool, S_IRUGO);
>> @@ -5214,9 +5215,9 @@ void __nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason,
>>                 kvm_make_request(KVM_REQ_APIC_PAGE_RELOAD, vcpu);
>>         }
>>
>> -       if (vmx->nested.update_vmcs01_apicv_status) {
>> -               vmx->nested.update_vmcs01_apicv_status = false;
>> -               kvm_make_request(KVM_REQ_APICV_UPDATE, vcpu);
>> +       if (vmx->nested.update_vmcs01_apicv_exec_ctrl) {
>> +               vmx->nested.update_vmcs01_apicv_exec_ctrl = false;
>> +               vmx_refresh_apicv_exec_ctrl(vcpu);
> 
> +1 to the fix, but I'll omit the update_vmcs01_apicv_exec_ctrl rename because I'm
> 99% certain we can get rid of it entirely.
> 
> Oh, and can you give your SoB for this?  I'll write the changelog, just need the
> SoB for the commit.  Thanks!
> 

Sure.

Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com>

Thank you very much!

Dongli Zhang


* Re: [PATCH v2 1/1] KVM: VMX: configure SVI during runtime APICv activation
  2025-12-05 18:12           ` Dongli Zhang
@ 2025-12-05 18:27             ` Sean Christopherson
  0 siblings, 0 replies; 9+ messages in thread
From: Sean Christopherson @ 2025-12-05 18:27 UTC (permalink / raw)
  To: Dongli Zhang
  Cc: kvm, x86, linux-kernel, chao.gao, pbonzini, tglx, mingo, bp,
	dave.hansen, hpa, joe.jin, alejandro.j.jimenez

On Fri, Dec 05, 2025, Dongli Zhang wrote:
> > But they're most definitely NOT stable material.  So my plan is to grab this
> > and the below for 6.19, and then do the cleanup for 6.20 or later.
> > 
> > Oh, almost forgot.  We can also sink the hwapic_isr_update() call into
> > kvm_apic_update_apicv() and drop kvm_apic_update_hwapic_isr() entirely, which is
> > another argument for your approach.  That's actually a really good fit, because
> > that's where KVM parses the vISR when APICv is being _disabled_.
> > 
> > I'll post a v3 with everything tomorrow (hopefully) after running the changes
> > through more normal test flow.
> 
> Looking forward to seeing how it looks. My only concern is whether it is simple
> enough to backport to older kernel versions, e.g. v5.15.196.

Oh, the fixes for stable/LTS are literally your patches.  The other stuff is going
on top; I've no intention of it being backported to 6.18, let alone 5.15 :-) 


end of thread, other threads:[~2025-12-05 18:28 UTC | newest]

Thread overview: 9+ messages
2025-11-10  6:32 [PATCH v2 1/1] KVM: VMX: configure SVI during runtime APICv activation Dongli Zhang
2025-11-10  7:08 ` Chao Gao
2025-11-12 14:47 ` Sean Christopherson
2025-11-13  3:06   ` Dongli Zhang
2025-11-13 21:13     ` Sean Christopherson
2025-11-18  3:36       ` Dongli Zhang
2025-12-05  2:15         ` Sean Christopherson
2025-12-05 18:12           ` Dongli Zhang
2025-12-05 18:27             ` Sean Christopherson
