* [PATCH] x86/kvm: Reorder PV spinlock checks for dedicated CPU case
@ 2025-07-18  9:49 lirongqing
  2025-07-18 14:12 ` Sean Christopherson
From: lirongqing @ 2025-07-18  9:49 UTC (permalink / raw)
  To: pbonzini, vkuznets, tglx, mingo, bp, dave.hansen, x86, hpa, kvm,
	linux-kernel
  Cc: Li RongQing

From: Li RongQing <lirongqing@baidu.com>

When a vCPU has a dedicated physical CPU, the hypervisor typically
disables HLT exiting as well, which makes the KVM_FEATURE_PV_UNHALT
feature unavailable; virt_spin_lock_key is expected to be disabled in
this configuration, but:

The problematic check order left virt_spin_lock_key enabled:
- First check PV_UNHALT
- Then check dedicated CPUs

So change the order:
- First check dedicated CPUs
- Then check PV_UNHALT

This ensures virt_spin_lock_key is disabled when dedicated physical
CPUs are available and HLT exiting is disabled, which gives a
noticeable performance boost at high contention levels.

Signed-off-by: Li RongQing <lirongqing@baidu.com>
---
 arch/x86/kernel/kvm.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 921c1c7..9cda79f 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -1073,16 +1073,6 @@ static void kvm_wait(u8 *ptr, u8 val)
 void __init kvm_spinlock_init(void)
 {
 	/*
-	 * In case host doesn't support KVM_FEATURE_PV_UNHALT there is still an
-	 * advantage of keeping virt_spin_lock_key enabled: virt_spin_lock() is
-	 * preferred over native qspinlock when vCPU is preempted.
-	 */
-	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) {
-		pr_info("PV spinlocks disabled, no host support\n");
-		return;
-	}
-
-	/*
 	 * Disable PV spinlocks and use native qspinlock when dedicated pCPUs
 	 * are available.
 	 */
@@ -1101,6 +1091,16 @@ void __init kvm_spinlock_init(void)
 		goto out;
 	}
 
+	/*
+	 * In case host doesn't support KVM_FEATURE_PV_UNHALT there is still an
+	 * advantage of keeping virt_spin_lock_key enabled: virt_spin_lock() is
+	 * preferred over native qspinlock when vCPU is preempted.
+	 */
+	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) {
+		pr_info("PV spinlocks disabled, no host support\n");
+		return;
+	}
+
 	pr_info("PV spinlocks enabled\n");
 
 	__pv_init_lock_hash();
-- 
2.9.4
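
As background for the changelog above: when virt_spin_lock_key is enabled,
the guest skips the native qspinlock slow path in favor of an unfair
test-and-set loop. A simplified sketch of the x86 virt_spin_lock() fallback,
paraphrased from arch/x86/include/asm/qspinlock.h (details may vary across
kernel versions):

	static inline bool virt_spin_lock(struct qspinlock *lock)
	{
		if (!static_branch_likely(&virt_spin_lock_key))
			return false;	/* take the native qspinlock path */

		/*
		 * Unfair test-and-set: spin until the lock word clears,
		 * then try to grab it.  This sidesteps lock-holder
		 * preemption on overcommitted hosts, but burns cycles
		 * pointlessly when the vCPU really does own a pCPU --
		 * hence this patch.
		 */
		do {
			while (atomic_read(&lock->val) != 0)
				cpu_relax();
		} while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0);

		return true;
	}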



* Re: [PATCH] x86/kvm: Reorder PV spinlock checks for dedicated CPU case
  2025-07-18  9:49 [PATCH] x86/kvm: Reorder PV spinlock checks for dedicated CPU case lirongqing
@ 2025-07-18 14:12 ` Sean Christopherson
  2025-07-21  2:26   ` Reply: " Li,Rongqing
From: Sean Christopherson @ 2025-07-18 14:12 UTC (permalink / raw)
  To: lirongqing
  Cc: pbonzini, vkuznets, tglx, mingo, bp, dave.hansen, x86, hpa, kvm,
	linux-kernel

On Fri, Jul 18, 2025, lirongqing wrote:
> From: Li RongQing <lirongqing@baidu.com>
> 
> When a vCPU has a dedicated physical CPU, the hypervisor typically
> disables HLT exiting as well,

But certainly not always.  E.g. the hypervisor may disable MWAIT exiting but not
HLT exiting, so that the hypervisor can take action if a guest kernel refuses to
use MWAIT for whatever reason.

I assume native qspinlocks outperform virt_spin_lock() irrespective of HLT exiting
when the vCPU has a dedicated pCPU?  If so, it's probably worth calling that out
in the changelog, e.g. to assuage any fears/concerns about this being undesirable
for setups with KVM_HINTS_REALTIME *and* KVM_FEATURE_PV_UNHALT.

> which makes the KVM_FEATURE_PV_UNHALT feature unavailable;
> virt_spin_lock_key is expected to be disabled in this configuration, but:
> 
> The problematic check order left virt_spin_lock_key enabled:
> - First check PV_UNHALT
> - Then check dedicated CPUs
> 
> So change the order:
> - First check dedicated CPUs
> - Then check PV_UNHALT
> 
> This ensures virt_spin_lock_key is disabled when dedicated physical
> CPUs are available and HLT exiting is disabled, which gives a
> noticeable performance boost at high contention levels.
> 
> Signed-off-by: Li RongQing <lirongqing@baidu.com>
> ---
>  arch/x86/kernel/kvm.c | 20 ++++++++++----------
>  1 file changed, 10 insertions(+), 10 deletions(-)
> 
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 921c1c7..9cda79f 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -1073,16 +1073,6 @@ static void kvm_wait(u8 *ptr, u8 val)
>  void __init kvm_spinlock_init(void)
>  {
>  	/*
> -	 * In case host doesn't support KVM_FEATURE_PV_UNHALT there is still an
> -	 * advantage of keeping virt_spin_lock_key enabled: virt_spin_lock() is
> -	 * preferred over native qspinlock when vCPU is preempted.
> -	 */
> -	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) {
> -		pr_info("PV spinlocks disabled, no host support\n");
> -		return;
> -	}
> -
> -	/*
>  	 * Disable PV spinlocks and use native qspinlock when dedicated pCPUs
>  	 * are available.
>  	 */
> @@ -1101,6 +1091,16 @@ void __init kvm_spinlock_init(void)
>  		goto out;
>  	}
>  
> +	/*
> +	 * In case host doesn't support KVM_FEATURE_PV_UNHALT there is still an
> +	 * advantage of keeping virt_spin_lock_key enabled: virt_spin_lock() is
> +	 * preferred over native qspinlock when vCPU is preempted.
> +	 */
> +	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) {
> +		pr_info("PV spinlocks disabled, no host support\n");
> +		return;
> +	}
> +
>  	pr_info("PV spinlocks enabled\n");
>  
>  	__pv_init_lock_hash();
> -- 
> 2.9.4
> 


* Reply: Re: [PATCH] x86/kvm: Reorder PV spinlock checks for dedicated CPU case
  2025-07-18 14:12 ` Sean Christopherson
@ 2025-07-21  2:26   ` Li,Rongqing
From: Li,Rongqing @ 2025-07-21  2:26 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: pbonzini@redhat.com, vkuznets@redhat.com, tglx@linutronix.de,
	mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com,
	x86@kernel.org, hpa@zytor.com, kvm@vger.kernel.org,
	linux-kernel@vger.kernel.org


> 
> On Fri, Jul 18, 2025, lirongqing wrote:
> > From: Li RongQing <lirongqing@baidu.com>
> >
> > When a vCPU has a dedicated physical CPU, the hypervisor typically
> > disables HLT exiting as well,
> 
> But certainly not always.  E.g. the hypervisor may disable MWAIT exiting but
> not HLT exiting, so that the hypervisor can take action if a guest kernel refuses
> to use MWAIT for whatever reason.
> 
> I assume native qspinlocks outperform virt_spin_lock() irrespective of HLT
> exiting when the vCPU has a dedicated pCPU? 

"I think this is true. As the comment (KVM: X86: Choose qspinlock when dedicated physical CPUs are available) says:
'PV_DEDICATED = 1, PV_UNHALT = anything: default is qspinlock'. 
However, the current code doesn't reflect this. When PV_UNHALT=0, it still uses virt_spin_lock(). My patch is fixing this inconsistency.

commit b2798ba0b8769b42f00899b44a538b5fcecb480d
Author: Wanpeng Li <wanpengli@tencent.com>
Date:   Tue Feb 13 09:05:41 2018 +0800

    KVM: X86: Choose qspinlock when dedicated physical CPUs are available

    Waiman Long mentioned that:
    > Generally speaking, unfair lock performs well for VMs with a small
    > number of vCPUs. Native qspinlock may perform better than pvqspinlock
    > if there is vCPU pinning and there is no vCPU over-commitment.

    This patch uses the KVM_HINTS_DEDICATED performance hint, which is
    provided by the hypervisor admin, to choose the qspinlock algorithm
    when a dedicated physical CPU is available.

    PV_DEDICATED = 1, PV_UNHALT = anything: default is qspinlock
    PV_DEDICATED = 0, PV_UNHALT = 1: default is Hybrid PV queued/unfair lock
    PV_DEDICATED = 0, PV_UNHALT = 0: default is tas
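
That table reads as a simple decision function. An illustrative sketch (not
kernel code; the enum and helper names here are made up for clarity):

	enum guest_lock { NATIVE_QSPINLOCK, PV_QSPINLOCK, TAS_FALLBACK };

	/* Hypothetical helper encoding the policy table quoted above. */
	static enum guest_lock pick_guest_lock(bool pv_dedicated, bool pv_unhalt)
	{
		if (pv_dedicated)
			return NATIVE_QSPINLOCK;	/* PV_DEDICATED = 1, any PV_UNHALT */
		if (pv_unhalt)
			return PV_QSPINLOCK;		/* hybrid PV queued/unfair lock */
		return TAS_FALLBACK;			/* plain test-and-set */
	}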



> If so, it's probably worth calling
> that out in the changelog, e.g. to assuage any fears/concerns about this being
> undesirable for setups with KVM_HINTS_REALTIME *and*
> KVM_FEATURE_PV_UNHALT.
> 
OK, I will rewrite the changelog.

If you still have concerns, I think we can change the code as below:

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 921c1c7..6275d78 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -1078,8 +1078,13 @@ void __init kvm_spinlock_init(void)
         * preferred over native qspinlock when vCPU is preempted.
         */
        if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) {
-               pr_info("PV spinlocks disabled, no host support\n");
-               return;
+               if (kvm_para_has_hint(KVM_HINTS_REALTIME)) {
+                       pr_info("PV spinlocks disabled with KVM_HINTS_REALTIME hints\n");
+                       goto out;
+               } else {
+                       pr_info("PV spinlocks disabled, no host support\n");
+                       return;
+               }
        }

        /*
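
The distinction between the two exits is the point of this variant: a bare
return leaves virt_spin_lock_key in its default enabled state, while goto out
jumps to the tail of kvm_spinlock_init(), which (assuming the out: label is
unchanged from current mainline) does:

	out:
		static_branch_disable(&virt_spin_lock_key);
	}

so a REALTIME-hinted guest falls back to native qspinlock even when the host
offers no PV_UNHALT support.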

Thanks

-Li


> > which makes the KVM_FEATURE_PV_UNHALT feature unavailable;
> > virt_spin_lock_key is expected to be disabled in this configuration, but:
> >
> > The problematic check order left virt_spin_lock_key enabled:
> > - First check PV_UNHALT
> > - Then check dedicated CPUs
> >
> > So change the order:
> > - First check dedicated CPUs
> > - Then check PV_UNHALT
> >
> > This ensures virt_spin_lock_key is disabled when dedicated physical
> > CPUs are available and HLT exiting is disabled, which gives a
> > noticeable performance boost at high contention levels.
> >
> > Signed-off-by: Li RongQing <lirongqing@baidu.com>
> > ---
> >  arch/x86/kernel/kvm.c | 20 ++++++++++----------
> >  1 file changed, 10 insertions(+), 10 deletions(-)
> >
> > diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c index
> > 921c1c7..9cda79f 100644
> > --- a/arch/x86/kernel/kvm.c
> > +++ b/arch/x86/kernel/kvm.c
> > @@ -1073,16 +1073,6 @@ static void kvm_wait(u8 *ptr, u8 val)  void
> > __init kvm_spinlock_init(void)  {
> >  	/*
> > -	 * In case host doesn't support KVM_FEATURE_PV_UNHALT there is still an
> > -	 * advantage of keeping virt_spin_lock_key enabled: virt_spin_lock() is
> > -	 * preferred over native qspinlock when vCPU is preempted.
> > -	 */
> > -	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) {
> > -		pr_info("PV spinlocks disabled, no host support\n");
> > -		return;
> > -	}
> > -
> > -	/*
> >  	 * Disable PV spinlocks and use native qspinlock when dedicated pCPUs
> >  	 * are available.
> >  	 */
> > @@ -1101,6 +1091,16 @@ void __init kvm_spinlock_init(void)
> >  		goto out;
> >  	}
> >
> > +	/*
> > +	 * In case host doesn't support KVM_FEATURE_PV_UNHALT there is still an
> > +	 * advantage of keeping virt_spin_lock_key enabled: virt_spin_lock() is
> > +	 * preferred over native qspinlock when vCPU is preempted.
> > +	 */
> > +	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) {
> > +		pr_info("PV spinlocks disabled, no host support\n");
> > +		return;
> > +	}
> > +
> >  	pr_info("PV spinlocks enabled\n");
> >
> >  	__pv_init_lock_hash();
> > --
> > 2.9.4
> >

