kvm.vger.kernel.org archive mirror
* [PATCH v6 0/3] KVM: Dynamic Halt-Polling
@ 2015-09-02  7:29 Wanpeng Li
  2015-09-02  9:49 ` Paolo Bonzini
  2015-09-02 18:11 ` David Matlack
  0 siblings, 2 replies; 3+ messages in thread
From: Wanpeng Li @ 2015-09-02  7:29 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: David Matlack, kvm, linux-kernel, Wanpeng Li

v5 -> v6:
 * fix wait_ns and poll_ns 

v4 -> v5:
 * set base case 10us and max poll time 500us
 * handle short/long halt, idea from David, many thanks David ;-)

v3 -> v4:
 * bring back grow vcpu->halt_poll_ns when interrupt arrives and shrinks
   when idle VCPU is detected 

v2 -> v3:
 * grow/shrink vcpu->halt_poll_ns by *halt_poll_ns_grow or /halt_poll_ns_shrink
 * drop the macros and hard coding the numbers in the param definitions
 * update the comments "5-7 us"
 * remove halt_poll_ns_max and use halt_poll_ns as the max halt_poll_ns time,
   vcpu->halt_poll_ns start at zero
 * drop the wrappers 
 * move the grow/shrink logic before "out:" w/ "if (waited)"

v1 -> v2:
 * change kvm_vcpu_block to read halt_poll_ns from the vcpu instead of 
   the module parameter
 * use the shrink/grow matrix which is suggested by David
 * set halt_poll_ns_max to 2ms

There is a downside to always-poll: polling still happens for idle
vCPUs, which wastes CPU time. This patchset adds the ability to adjust
halt_poll_ns dynamically, growing halt_poll_ns when a short halt is
detected and shrinking it when a long halt is detected.

There are two new kernel parameters for tuning halt_poll_ns:
halt_poll_ns_grow and halt_poll_ns_shrink.

                        no-poll      always-poll    dynamic-poll
-----------------------------------------------------------------------
Idle (nohz) vCPU %c0     0.15%        0.3%            0.2%  
Idle (250HZ) vCPU %c0    1.1%         4.6%~14%        1.2%
TCP_RR latency           34us         27us            26.7us

"Idle (X) vCPU %c0" is the percent of time the physical cpu spent in
c0 over 60 seconds (each vCPU is pinned to a pCPU). (nohz) means the
guest was tickless. (250HZ) means the guest was ticking at 250HZ.

The big win is with ticking operating systems. Running the Linux guest
with nohz=off (and HZ=250), dynamic-poll saves 3.4%~12.8% of a CPU and
gets close to no-poll overhead levels. The savings should be even
higher for higher-frequency ticks.

Wanpeng Li (3):
  KVM: make halt_poll_ns per-vCPU
  KVM: dynamic halt_poll_ns adjustment
  KVM: trace kvm_halt_poll_ns grow/shrink

 include/linux/kvm_host.h   |  1 +
 include/trace/events/kvm.h | 30 ++++++++++++++++++++
 virt/kvm/kvm_main.c        | 70 ++++++++++++++++++++++++++++++++++++++++++----
 3 files changed, 96 insertions(+), 5 deletions(-)

-- 
1.9.1


* Re: [PATCH v6 0/3] KVM: Dynamic Halt-Polling
  2015-09-02  7:29 [PATCH v6 0/3] KVM: Dynamic Halt-Polling Wanpeng Li
@ 2015-09-02  9:49 ` Paolo Bonzini
  2015-09-02 18:11 ` David Matlack
  1 sibling, 0 replies; 3+ messages in thread
From: Paolo Bonzini @ 2015-09-02  9:49 UTC (permalink / raw)
  To: Wanpeng Li; +Cc: David Matlack, kvm, linux-kernel



On 02/09/2015 09:29, Wanpeng Li wrote:
> v5 -> v6:
>  * fix wait_ns and poll_ns 
> 
> v4 -> v5:
>  * set base case 10us and max poll time 500us
>  * handle short/long halt, idea from David, many thanks David ;-)
> 
> v3 -> v4:
>  * bring back grow vcpu->halt_poll_ns when interrupt arrives and shrinks
>    when idle VCPU is detected 
> 
> v2 -> v3:
>  * grow/shrink vcpu->halt_poll_ns by *halt_poll_ns_grow or /halt_poll_ns_shrink
>  * drop the macros and hard coding the numbers in the param definitions
>  * update the comments "5-7 us"
>  * remove halt_poll_ns_max and use halt_poll_ns as the max halt_poll_ns time,
>    vcpu->halt_poll_ns start at zero
>  * drop the wrappers 
>  * move the grow/shrink logic before "out:" w/ "if (waited)"
> 
> v1 -> v2:
>  * change kvm_vcpu_block to read halt_poll_ns from the vcpu instead of 
>    the module parameter
>  * use the shrink/grow matrix which is suggested by David
>  * set halt_poll_ns_max to 2ms
> 
> There is a downside to always-poll: polling still happens for idle
> vCPUs, which wastes CPU time. This patchset adds the ability to adjust
> halt_poll_ns dynamically, growing halt_poll_ns when a short halt is
> detected and shrinking it when a long halt is detected.
> 
> There are two new kernel parameters for tuning halt_poll_ns:
> halt_poll_ns_grow and halt_poll_ns_shrink.
> 
>                         no-poll      always-poll    dynamic-poll
> -----------------------------------------------------------------------
> Idle (nohz) vCPU %c0     0.15%        0.3%            0.2%  
> Idle (250HZ) vCPU %c0    1.1%         4.6%~14%        1.2%
> TCP_RR latency           34us         27us            26.7us
> 
> "Idle (X) vCPU %c0" is the percent of time the physical cpu spent in
> c0 over 60 seconds (each vCPU is pinned to a pCPU). (nohz) means the
> guest was tickless. (250HZ) means the guest was ticking at 250HZ.
> 
> The big win is with ticking operating systems. Running the Linux guest
> with nohz=off (and HZ=250), dynamic-poll saves 3.4%~12.8% of a CPU and
> gets close to no-poll overhead levels. The savings should be even
> higher for higher-frequency ticks.
> 	
> 
> Wanpeng Li (3):
>   KVM: make halt_poll_ns per-vCPU
>   KVM: dynamic halt_poll_ns adjustment
>   KVM: trace kvm_halt_poll_ns grow/shrink
> 
>  include/linux/kvm_host.h   |  1 +
>  include/trace/events/kvm.h | 30 ++++++++++++++++++++
>  virt/kvm/kvm_main.c        | 70 ++++++++++++++++++++++++++++++++++++++++++----
>  3 files changed, 96 insertions(+), 5 deletions(-)
> 

Looks good, I'll apply it when I'm back.

Paolo


* Re: [PATCH v6 0/3] KVM: Dynamic Halt-Polling
  2015-09-02  7:29 [PATCH v6 0/3] KVM: Dynamic Halt-Polling Wanpeng Li
  2015-09-02  9:49 ` Paolo Bonzini
@ 2015-09-02 18:11 ` David Matlack
  1 sibling, 0 replies; 3+ messages in thread
From: David Matlack @ 2015-09-02 18:11 UTC (permalink / raw)
  To: Wanpeng Li; +Cc: Paolo Bonzini, kvm list, linux-kernel@vger.kernel.org

On Wed, Sep 2, 2015 at 12:29 AM, Wanpeng Li <wanpeng.li@hotmail.com> wrote:
> v5 -> v6:
>  * fix wait_ns and poll_ns

Thanks for bearing with me through all the reviews. I think it's on the
verge of being done :). There are just a few small things left to fix.

>
> v4 -> v5:
>  * set base case 10us and max poll time 500us
>  * handle short/long halt, idea from David, many thanks David ;-)
>
> v3 -> v4:
>  * bring back grow vcpu->halt_poll_ns when interrupt arrives and shrinks
>    when idle VCPU is detected
>
> v2 -> v3:
>  * grow/shrink vcpu->halt_poll_ns by *halt_poll_ns_grow or /halt_poll_ns_shrink
>  * drop the macros and hard coding the numbers in the param definitions
>  * update the comments "5-7 us"
>  * remove halt_poll_ns_max and use halt_poll_ns as the max halt_poll_ns time,
>    vcpu->halt_poll_ns start at zero
>  * drop the wrappers
>  * move the grow/shrink logic before "out:" w/ "if (waited)"
>
> v1 -> v2:
>  * change kvm_vcpu_block to read halt_poll_ns from the vcpu instead of
>    the module parameter
>  * use the shrink/grow matrix which is suggested by David
>  * set halt_poll_ns_max to 2ms
>
> There is a downside to always-poll: polling still happens for idle
> vCPUs, which wastes CPU time. This patchset adds the ability to adjust
> halt_poll_ns dynamically, growing halt_poll_ns when a short halt is
> detected and shrinking it when a long halt is detected.
>
> There are two new kernel parameters for tuning halt_poll_ns:
> halt_poll_ns_grow and halt_poll_ns_shrink.
>
>                         no-poll      always-poll    dynamic-poll
> -----------------------------------------------------------------------
> Idle (nohz) vCPU %c0     0.15%        0.3%            0.2%
> Idle (250HZ) vCPU %c0    1.1%         4.6%~14%        1.2%
> TCP_RR latency           34us         27us            26.7us
>
> "Idle (X) vCPU %c0" is the percent of time the physical cpu spent in
> c0 over 60 seconds (each vCPU is pinned to a pCPU). (nohz) means the
> guest was tickless. (250HZ) means the guest was ticking at 250HZ.
>
> The big win is with ticking operating systems. Running the Linux guest
> with nohz=off (and HZ=250), dynamic-poll saves 3.4%~12.8% of a CPU and
> gets close to no-poll overhead levels. The savings should be even
> higher for higher-frequency ticks.
>
>
> Wanpeng Li (3):
>   KVM: make halt_poll_ns per-vCPU
>   KVM: dynamic halt_poll_ns adjustment
>   KVM: trace kvm_halt_poll_ns grow/shrink
>
>  include/linux/kvm_host.h   |  1 +
>  include/trace/events/kvm.h | 30 ++++++++++++++++++++
>  virt/kvm/kvm_main.c        | 70 ++++++++++++++++++++++++++++++++++++++++++----
>  3 files changed, 96 insertions(+), 5 deletions(-)
>
> --
> 1.9.1
>

