* [PATCH][v2] x86/kvm: Prefer native qspinlock for dedicated vCPUs irrespective of PV_UNHALT
@ 2025-07-22 11:00 lirongqing
2025-08-15 2:58 ` Guo, Wangyang
2025-08-15 19:28 ` Sean Christopherson
0 siblings, 2 replies; 3+ messages in thread
From: lirongqing @ 2025-07-22 11:00 UTC (permalink / raw)
To: seanjc, pbonzini, vkuznets, tglx, mingo, bp, dave.hansen, x86,
hpa, kvm, linux-kernel
Cc: Li RongQing
From: Li RongQing <lirongqing@baidu.com>
The commit b2798ba0b876 ("KVM: X86: Choose qspinlock when dedicated
physical CPUs are available") states that when PV_DEDICATED=1
(vCPU has dedicated pCPU), qspinlock should be preferred regardless of
PV_UNHALT. However, the current implementation doesn't reflect this: when
PV_UNHALT=0, we still use virt_spin_lock() even with dedicated pCPUs.
This is suboptimal because:
1. Native qspinlocks should outperform virt_spin_lock() for dedicated
vCPUs irrespective of HALT exiting
2. virt_spin_lock() should only be preferred when vCPUs may be preempted
(non-dedicated case)
So reorder the PV spinlock checks to:
1. First handle the dedicated pCPU case (disable virt_spin_lock_key)
2. Then check the single-CPU and nopvspin configurations
3. Only then check PV_UNHALT support
This ensures we always use the native qspinlock for dedicated vCPUs,
delivering significant performance gains at high contention levels.
Signed-off-by: Li RongQing <lirongqing@baidu.com>
---
Changes from v1: rewrote the changelog
arch/x86/kernel/kvm.c | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 921c1c7..9cda79f 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -1073,16 +1073,6 @@ static void kvm_wait(u8 *ptr, u8 val)
void __init kvm_spinlock_init(void)
{
/*
- * In case host doesn't support KVM_FEATURE_PV_UNHALT there is still an
- * advantage of keeping virt_spin_lock_key enabled: virt_spin_lock() is
- * preferred over native qspinlock when vCPU is preempted.
- */
- if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) {
- pr_info("PV spinlocks disabled, no host support\n");
- return;
- }
-
- /*
* Disable PV spinlocks and use native qspinlock when dedicated pCPUs
* are available.
*/
@@ -1101,6 +1091,16 @@ void __init kvm_spinlock_init(void)
goto out;
}
+ /*
+ * In case host doesn't support KVM_FEATURE_PV_UNHALT there is still an
+ * advantage of keeping virt_spin_lock_key enabled: virt_spin_lock() is
+ * preferred over native qspinlock when vCPU is preempted.
+ */
+ if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) {
+ pr_info("PV spinlocks disabled, no host support\n");
+ return;
+ }
+
pr_info("PV spinlocks enabled\n");
__pv_init_lock_hash();
--
2.9.4
* Re: [PATCH][v2] x86/kvm: Prefer native qspinlock for dedicated vCPUs irrespective of PV_UNHALT
2025-07-22 11:00 [PATCH][v2] x86/kvm: Prefer native qspinlock for dedicated vCPUs irrespective of PV_UNHALT lirongqing
@ 2025-08-15 2:58 ` Guo, Wangyang
2025-08-15 19:28 ` Sean Christopherson
1 sibling, 0 replies; 3+ messages in thread
From: Guo, Wangyang @ 2025-08-15 2:58 UTC (permalink / raw)
To: lirongqing, seanjc, pbonzini, vkuznets, tglx, mingo, bp,
dave.hansen, x86, hpa, kvm, linux-kernel
On 7/22/2025 7:00 PM, lirongqing wrote:
> From: Li RongQing <lirongqing@baidu.com>
>
> The commit b2798ba0b876 ("KVM: X86: Choose qspinlock when dedicated
> physical CPUs are available") states that when PV_DEDICATED=1
> (vCPU has dedicated pCPU), qspinlock should be preferred regardless of
> PV_UNHALT. However, the current implementation doesn't reflect this: when
> PV_UNHALT=0, we still use virt_spin_lock() even with dedicated pCPUs.
>
> This is suboptimal because:
> 1. Native qspinlocks should outperform virt_spin_lock() for dedicated
> vCPUs irrespective of HALT exiting
> 2. virt_spin_lock() should only be preferred when vCPUs may be preempted
> (non-dedicated case)
>
> So reorder the PV spinlock checks to:
> 1. First handle the dedicated pCPU case (disable virt_spin_lock_key)
> 2. Then check the single-CPU and nopvspin configurations
> 3. Only then check PV_UNHALT support
>
> This ensures we always use the native qspinlock for dedicated vCPUs,
> delivering significant performance gains at high contention levels.
>
> Signed-off-by: Li RongQing <lirongqing@baidu.com>
>
> Changes from v1: rewrote the changelog
>
> arch/x86/kernel/kvm.c | 20 ++++++++++----------
> 1 file changed, 10 insertions(+), 10 deletions(-)
>
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 921c1c7..9cda79f 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -1073,16 +1073,6 @@ static void kvm_wait(u8 *ptr, u8 val)
> void __init kvm_spinlock_init(void)
> {
> /*
> - * In case host doesn't support KVM_FEATURE_PV_UNHALT there is still an
> - * advantage of keeping virt_spin_lock_key enabled: virt_spin_lock() is
> - * preferred over native qspinlock when vCPU is preempted.
> - */
> - if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) {
> - pr_info("PV spinlocks disabled, no host support\n");
> - return;
> - }
> -
> - /*
> * Disable PV spinlocks and use native qspinlock when dedicated pCPUs
> * are available.
> */
> @@ -1101,6 +1091,16 @@ void __init kvm_spinlock_init(void)
> goto out;
> }
>
> + /*
> + * In case host doesn't support KVM_FEATURE_PV_UNHALT there is still an
> + * advantage of keeping virt_spin_lock_key enabled: virt_spin_lock() is
> + * preferred over native qspinlock when vCPU is preempted.
> + */
> + if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) {
> + pr_info("PV spinlocks disabled, no host support\n");
> + return;
> + }
> +
> pr_info("PV spinlocks enabled\n");
>
> __pv_init_lock_hash();
For a non-overcommitted VM, we may add the `-overcommit cpu-pm=on` option
to qemu-kvm so the guest handles idle by itself, reducing latency. The
current kernel falls back to virt_spin_lock() even when kvm-hint-dedicated
is provided. With this patch applied, the guest uses the MCS queued
spinlock instead, for better performance.
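As an illustration of the setup described above (an assumed minimal invocation; machine, disk, and device options are omitted and would need to be filled in):

```shell
# Illustrative qemu-kvm invocation for a dedicated-vCPU guest.
# -overcommit cpu-pm=on lets the guest handle idle (HLT/MWAIT) itself;
# kvm-hint-dedicated=on sets KVM_HINTS_REALTIME so, with this patch,
# the guest selects the native qspinlock.
qemu-system-x86_64 \
    -enable-kvm \
    -smp 4 \
    -cpu host,kvm-hint-dedicated=on \
    -overcommit cpu-pm=on
```

For the hint to be honest, each vCPU should also be pinned to its own pCPU (e.g. via taskset or libvirt vcpupin).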
Tested-by: Wangyang Guo <wangyang.guo@intel.com>
* Re: [PATCH][v2] x86/kvm: Prefer native qspinlock for dedicated vCPUs irrespective of PV_UNHALT
2025-07-22 11:00 [PATCH][v2] x86/kvm: Prefer native qspinlock for dedicated vCPUs irrespective of PV_UNHALT lirongqing
2025-08-15 2:58 ` Guo, Wangyang
@ 2025-08-15 19:28 ` Sean Christopherson
1 sibling, 0 replies; 3+ messages in thread
From: Sean Christopherson @ 2025-08-15 19:28 UTC (permalink / raw)
To: lirongqing
Cc: pbonzini, vkuznets, tglx, mingo, bp, dave.hansen, x86, hpa, kvm,
linux-kernel
On Tue, Jul 22, 2025, lirongqing wrote:
> From: Li RongQing <lirongqing@baidu.com>
>
> The commit b2798ba0b876 ("KVM: X86: Choose qspinlock when dedicated
> physical CPUs are available") states that when PV_DEDICATED=1
> (vCPU has dedicated pCPU), qspinlock should be preferred regardless of
> PV_UNHALT. However, the current implementation doesn't reflect this: when
> PV_UNHALT=0, we still use virt_spin_lock() even with dedicated pCPUs.
>
> This is suboptimal because:
> 1. Native qspinlocks should outperform virt_spin_lock() for dedicated
> vCPUs irrespective of HALT exiting
> 2. virt_spin_lock() should only be preferred when vCPUs may be preempted
> (non-dedicated case)
>
> So reorder the PV spinlock checks to:
> 1. First handle the dedicated pCPU case (disable virt_spin_lock_key)
> 2. Then check the single-CPU and nopvspin configurations
> 3. Only then check PV_UNHALT support
>
> This ensures we always use the native qspinlock for dedicated vCPUs,
> delivering significant performance gains at high contention levels.
>
> Signed-off-by: Li RongQing <lirongqing@baidu.com>
> ---
Reviewed-by: Sean Christopherson <seanjc@google.com>