* [PATCH] increase ple_gap default to 64
From: Rik van Riel @ 2011-01-03 15:19 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm, avi, mtosatti, edwin.zhai
On some CPUs, a ple_gap of 41 is simply insufficient to ever trigger
PLE exits, even with the minimalistic PLE test from kvm-unit-tests.
http://git.kernel.org/?p=virt/kvm/kvm-unit-tests.git;a=commitdiff;h=eda71b28fa122203e316483b35f37aaacd42f545
For example, the Xeon X5670 CPU needs a ple_gap of at least 48 in
order to get pause loop exits:
# modprobe kvm_intel ple_gap=47
# taskset 1 /usr/local/bin/qemu-system-x86_64 -device testdev,chardev=log -chardev stdio,id=log -kernel x86/vmexit.flat -append ple-round-robin -smp 2
VNC server running on `::1:5900'
enabling apic
enabling apic
ple-round-robin 58298446
# rmmod kvm_intel
# modprobe kvm_intel ple_gap=48
# taskset 1 /usr/local/bin/qemu-system-x86_64 -device testdev,chardev=log -chardev stdio,id=log -kernel x86/vmexit.flat -append ple-round-robin -smp 2
VNC server running on `::1:5900'
enabling apic
enabling apic
ple-round-robin 36616
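The manual bisection above can be scripted. A minimal sketch, assuming the same kvm-unit-tests setup and that a working PLE configuration drops the ple-round-robin cycle count by orders of magnitude (58298446 vs 36616 above); `run_test` and the 1000000-cycle threshold are hypothetical stand-ins for illustration, not part of the patch:

```shell
#!/bin/sh
# Decide whether a measured ple-round-robin cycle count indicates that
# PLE exits are firing.  The 1000000-cycle threshold is an assumption
# based on the numbers in this thread (58298446 without exits, 36616 with).
ple_working() {
    [ "$1" -lt 1000000 ]
}

# Walk candidate ple_gap values in ascending order and print the first
# one for which PLE exits fire.  run_test is a hypothetical stand-in for
# the rmmod/modprobe/qemu sequence shown above, e.g.:
#   run_test() {
#       rmmod kvm_intel 2>/dev/null
#       modprobe kvm_intel ple_gap="$1"
#       taskset 1 qemu-system-x86_64 -device testdev,chardev=log \
#           -chardev stdio,id=log -kernel x86/vmexit.flat \
#           -append ple-round-robin -smp 2 | awk '/ple-round-robin/ {print $2}'
#   }
find_min_gap() {
    for gap in "$@"; do
        if ple_working "$(run_test "$gap")"; then
            echo "$gap"
            return 0
        fi
    done
    return 1
}
```

On the X5670 numbers above, `find_min_gap 41 47 48 64` would report 48.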
Increase the ple_gap to 64 to be on the safe side.
Is this enough for a CPU with HT that has a busy sibling thread, or
should it be even larger? On the X5670, loading up the sibling thread
with an infinite loop does not seem to increase the required ple_gap.
Signed-off-by: Rik van Riel <riel@redhat.com>
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 81fcbe9..0e38b8e 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -90,14 +90,14 @@ module_param(vmm_exclusive, bool, S_IRUGO);
* These 2 parameters are used to config the controls for Pause-Loop Exiting:
* ple_gap: upper bound on the amount of time between two successive
* executions of PAUSE in a loop. Also indicate if ple enabled.
- * According to test, this time is usually small than 41 cycles.
+ * According to test, this time is usually smaller than 64 cycles.
* ple_window: upper bound on the amount of time a guest is allowed to execute
* in a PAUSE loop. Tests indicate that most spinlocks are held for
* less than 2^12 cycles
* Time is measured based on a counter that runs at the same rate as the TSC,
* refer SDM volume 3b section 21.6.13 & 22.1.3.
*/
-#define KVM_VMX_DEFAULT_PLE_GAP 41
+#define KVM_VMX_DEFAULT_PLE_GAP 64
#define KVM_VMX_DEFAULT_PLE_WINDOW 4096
static int ple_gap = KVM_VMX_DEFAULT_PLE_GAP;
module_param(ple_gap, int, S_IRUGO);
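Since both parameters are declared with S_IRUGO, the values actually in effect can be read back through sysfs after loading the module. A small sketch (paths assume the kvm_intel module as patched; the modprobe steps need root):

```shell
# Reload kvm_intel with an explicit ple_gap and confirm the live values.
# module_param(..., S_IRUGO) exposes each parameter read-only under
# /sys/module/kvm_intel/parameters/.
modprobe -r kvm_intel
modprobe kvm_intel ple_gap=64
cat /sys/module/kvm_intel/parameters/ple_gap     # expected: 64
cat /sys/module/kvm_intel/parameters/ple_window  # expected: 4096 (the default)
```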
* Re: [PATCH] increase ple_gap default to 64
From: Zhai, Edwin @ 2011-01-04 3:21 UTC (permalink / raw)
To: Rik van Riel
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, avi@redhat.com,
mtosatti@redhat.com
Riel,
Thanks for your patch. I have changed the ple_gap to 128 on xen side,
but forget the patch for KVM:(
A little bit big is no harm, but more perf data is better.
Rik van Riel wrote:
> On some CPUs, a ple_gap of 41 is simply insufficient to ever trigger
> PLE exits, even with the minimalistic PLE test from kvm-unit-tests.
>
> http://git.kernel.org/?p=virt/kvm/kvm-unit-tests.git;a=commitdiff;h=eda71b28fa122203e316483b35f37aaacd42f545
>
> For example, the Xeon X5670 CPU needs a ple_gap of at least 48 in
> order to get pause loop exits:
>
> # modprobe kvm_intel ple_gap=47
> # taskset 1 /usr/local/bin/qemu-system-x86_64 -device testdev,chardev=log -chardev stdio,id=log -kernel x86/vmexit.flat -append ple-round-robin -smp 2
> VNC server running on `::1:5900'
> enabling apic
> enabling apic
> ple-round-robin 58298446
> # rmmod kvm_intel
> # modprobe kvm_intel ple_gap=48
> # taskset 1 /usr/local/bin/qemu-system-x86_64 -device testdev,chardev=log -chardev stdio,id=log -kernel x86/vmexit.flat -append ple-round-robin -smp 2
> VNC server running on `::1:5900'
> enabling apic
> enabling apic
> ple-round-robin 36616
>
> Increase the ple_gap to 64 to be on the safe side.
>
> Is this enough for a CPU with HT that has a busy sibling thread, or
> should it be even larger? On the X5670, loading up the sibling thread
> with an infinite loop does not seem to increase the required ple_gap.
>
> Signed-off-by: Rik van Riel <riel@redhat.com>
>
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 81fcbe9..0e38b8e 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -90,14 +90,14 @@ module_param(vmm_exclusive, bool, S_IRUGO);
> * These 2 parameters are used to config the controls for Pause-Loop Exiting:
> * ple_gap: upper bound on the amount of time between two successive
> * executions of PAUSE in a loop. Also indicate if ple enabled.
> - * According to test, this time is usually small than 41 cycles.
> + * According to test, this time is usually smaller than 64 cycles.
> * ple_window: upper bound on the amount of time a guest is allowed to execute
> * in a PAUSE loop. Tests indicate that most spinlocks are held for
> * less than 2^12 cycles
> * Time is measured based on a counter that runs at the same rate as the TSC,
> * refer SDM volume 3b section 21.6.13 & 22.1.3.
> */
> -#define KVM_VMX_DEFAULT_PLE_GAP 41
> +#define KVM_VMX_DEFAULT_PLE_GAP 64
> #define KVM_VMX_DEFAULT_PLE_WINDOW 4096
> static int ple_gap = KVM_VMX_DEFAULT_PLE_GAP;
> module_param(ple_gap, int, S_IRUGO);
>
>
* Re: [PATCH] increase ple_gap default to 64
From: Rik van Riel @ 2011-01-04 14:18 UTC (permalink / raw)
To: Zhai, Edwin
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, avi@redhat.com,
mtosatti@redhat.com
On 01/03/2011 10:21 PM, Zhai, Edwin wrote:
> Riel,
> Thanks for your patch. I have changed the ple_gap to 128 on the Xen side,
> but forgot the patch for KVM :(
>
> A slightly larger value does no harm, but more perf data would be better.
So should I resend the patch with the ple_gap default
changed to 128, or are you willing to ack the current
patch?
--
All rights reversed
* Re: [PATCH] increase ple_gap default to 64
From: Avi Kivity @ 2011-01-04 14:29 UTC (permalink / raw)
To: Rik van Riel
Cc: Zhai, Edwin, linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
mtosatti@redhat.com
On 01/04/2011 04:18 PM, Rik van Riel wrote:
> On 01/03/2011 10:21 PM, Zhai, Edwin wrote:
>> Riel,
>> Thanks for your patch. I have changed the ple_gap to 128 on the Xen side,
>> but forgot the patch for KVM :(
>>
>> A slightly larger value does no harm, but more perf data would be better.
>
> So should I resend the patch with the ple_gap default
> changed to 128, or are you willing to ack the current
> patch?
>
I think 128 is safer, given that 41 was too low. We have to take into
account newer CPUs and slower spin loops. If the spin loop does a cache
ping-pong (which would be a bad, bad implementation), even 128
might be too low.
--
error compiling committee.c: too many arguments to function
* Re: [PATCH] increase ple_gap default to 64
From: Zhai, Edwin @ 2011-01-04 14:35 UTC (permalink / raw)
To: Avi Kivity
Cc: Rik van Riel, linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
mtosatti@redhat.com
Avi Kivity wrote:
> On 01/04/2011 04:18 PM, Rik van Riel wrote:
>
>>
>> So should I resend the patch with the ple_gap default
>> changed to 128, or are you willing to ack the current
>> patch?
>>
>>
>
> I think 128 is safer, given that 41 was too low. We have to take into
> account newer CPUs and slower spin loops. If the spin loop does a cache
> ping-pong (which would be a bad, bad implementation), even 128
> might be too low.
>
Agree with Avi. Let us use 128 at this point.
Thanks,
edwin
* [PATCH -v2] vmx: increase ple_gap default to 128
From: Rik van Riel @ 2011-01-04 14:51 UTC (permalink / raw)
To: Zhai, Edwin
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, avi@redhat.com,
mtosatti@redhat.com
On some CPUs, a ple_gap of 41 is simply insufficient to ever trigger
PLE exits, even with the minimalistic PLE test from kvm-unit-tests.
http://git.kernel.org/?p=virt/kvm/kvm-unit-tests.git;a=commitdiff;h=eda71b28fa122203e316483b35f37aaacd42f545
For example, the Xeon X5670 CPU needs a ple_gap of at least 48 in
order to get pause loop exits:
# modprobe kvm_intel ple_gap=47
# taskset 1 /usr/local/bin/qemu-system-x86_64 -device testdev,chardev=log -chardev stdio,id=log -kernel x86/vmexit.flat -append ple-round-robin -smp 2
VNC server running on `::1:5900'
enabling apic
enabling apic
ple-round-robin 58298446
# rmmod kvm_intel
# modprobe kvm_intel ple_gap=48
# taskset 1 /usr/local/bin/qemu-system-x86_64 -device testdev,chardev=log -chardev stdio,id=log -kernel x86/vmexit.flat -append ple-round-robin -smp 2
VNC server running on `::1:5900'
enabling apic
enabling apic
ple-round-robin 36616
Increase the ple_gap to 128 to be on the safe side. Is this enough
for a CPU with HT that has a busy sibling thread, or should it be
even larger? On the X5670, loading up the sibling thread with an
infinite loop does not seem to increase the required ple_gap.
Signed-off-by: Rik van Riel <riel@redhat.com>
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 81fcbe9..c61fcbf 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -90,14 +90,14 @@ module_param(vmm_exclusive, bool, S_IRUGO);
* These 2 parameters are used to config the controls for Pause-Loop Exiting:
* ple_gap: upper bound on the amount of time between two successive
* executions of PAUSE in a loop. Also indicate if ple enabled.
- * According to test, this time is usually small than 41 cycles.
+ * According to test, this time is usually smaller than 128 cycles.
* ple_window: upper bound on the amount of time a guest is allowed to execute
* in a PAUSE loop. Tests indicate that most spinlocks are held for
* less than 2^12 cycles
* Time is measured based on a counter that runs at the same rate as the TSC,
* refer SDM volume 3b section 21.6.13 & 22.1.3.
*/
-#define KVM_VMX_DEFAULT_PLE_GAP 41
+#define KVM_VMX_DEFAULT_PLE_GAP 128
#define KVM_VMX_DEFAULT_PLE_WINDOW 4096
static int ple_gap = KVM_VMX_DEFAULT_PLE_GAP;
module_param(ple_gap, int, S_IRUGO);
* Re: [PATCH -v2] vmx: increase ple_gap default to 128
From: Zhai, Edwin @ 2011-01-14 3:08 UTC (permalink / raw)
To: Rik van Riel
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, avi@redhat.com,
mtosatti@redhat.com
Acked.
Thanks,
edwin
Rik van Riel wrote:
> On some CPUs, a ple_gap of 41 is simply insufficient to ever trigger
> PLE exits, even with the minimalistic PLE test from kvm-unit-tests.
>
> http://git.kernel.org/?p=virt/kvm/kvm-unit-tests.git;a=commitdiff;h=eda71b28fa122203e316483b35f37aaacd42f545
>
> For example, the Xeon X5670 CPU needs a ple_gap of at least 48 in
> order to get pause loop exits:
>
> # modprobe kvm_intel ple_gap=47
> # taskset 1 /usr/local/bin/qemu-system-x86_64 -device testdev,chardev=log -chardev stdio,id=log -kernel x86/vmexit.flat -append ple-round-robin -smp 2
> VNC server running on `::1:5900'
> enabling apic
> enabling apic
> ple-round-robin 58298446
> # rmmod kvm_intel
> # modprobe kvm_intel ple_gap=48
> # taskset 1 /usr/local/bin/qemu-system-x86_64 -device testdev,chardev=log -chardev stdio,id=log -kernel x86/vmexit.flat -append ple-round-robin -smp 2
> VNC server running on `::1:5900'
> enabling apic
> enabling apic
> ple-round-robin 36616
>
> Increase the ple_gap to 128 to be on the safe side. Is this enough
> for a CPU with HT that has a busy sibling thread, or should it be
> even larger? On the X5670, loading up the sibling thread with an
> infinite loop does not seem to increase the required ple_gap.
>
> Signed-off-by: Rik van Riel <riel@redhat.com>
>
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 81fcbe9..c61fcbf 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -90,14 +90,14 @@ module_param(vmm_exclusive, bool, S_IRUGO);
> * These 2 parameters are used to config the controls for Pause-Loop Exiting:
> * ple_gap: upper bound on the amount of time between two successive
> * executions of PAUSE in a loop. Also indicate if ple enabled.
> - * According to test, this time is usually small than 41 cycles.
> + * According to test, this time is usually smaller than 128 cycles.
> * ple_window: upper bound on the amount of time a guest is allowed to execute
> * in a PAUSE loop. Tests indicate that most spinlocks are held for
> * less than 2^12 cycles
> * Time is measured based on a counter that runs at the same rate as the TSC,
> * refer SDM volume 3b section 21.6.13 & 22.1.3.
> */
> -#define KVM_VMX_DEFAULT_PLE_GAP 41
> +#define KVM_VMX_DEFAULT_PLE_GAP 128
> #define KVM_VMX_DEFAULT_PLE_WINDOW 4096
> static int ple_gap = KVM_VMX_DEFAULT_PLE_GAP;
> module_param(ple_gap, int, S_IRUGO);
>
>
* Re: [PATCH -v2] vmx: increase ple_gap default to 128
From: Avi Kivity @ 2011-01-16 15:42 UTC (permalink / raw)
To: Rik van Riel
Cc: Zhai, Edwin, linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
mtosatti@redhat.com
On 01/04/2011 04:51 PM, Rik van Riel wrote:
> On some CPUs, a ple_gap of 41 is simply insufficient to ever trigger
> PLE exits, even with the minimalistic PLE test from kvm-unit-tests.
>
> http://git.kernel.org/?p=virt/kvm/kvm-unit-tests.git;a=commitdiff;h=eda71b28fa122203e316483b35f37aaacd42f545
>
> For example, the Xeon X5670 CPU needs a ple_gap of at least 48 in
> order to get pause loop exits:
>
> # modprobe kvm_intel ple_gap=47
> # taskset 1 /usr/local/bin/qemu-system-x86_64 -device testdev,chardev=log -chardev stdio,id=log -kernel x86/vmexit.flat -append ple-round-robin -smp 2
> VNC server running on `::1:5900'
> enabling apic
> enabling apic
> ple-round-robin 58298446
> # rmmod kvm_intel
> # modprobe kvm_intel ple_gap=48
> # taskset 1 /usr/local/bin/qemu-system-x86_64 -device testdev,chardev=log -chardev stdio,id=log -kernel x86/vmexit.flat -append ple-round-robin -smp 2
> VNC server running on `::1:5900'
> enabling apic
> enabling apic
> ple-round-robin 36616
>
> Increase the ple_gap to 128 to be on the safe side. Is this enough
> for a CPU with HT that has a busy sibling thread, or should it be
> even larger? On the X5670, loading up the sibling thread with an
> infinite loop does not seem to increase the required ple_gap.
>
Applied, thanks.
--
error compiling committee.c: too many arguments to function