* High vm-exit latencies during kvm boot-up/shutdown
@ 2007-10-22 23:09 Jan Kiszka
0 siblings, 1 reply; 40+ messages in thread
From: Jan Kiszka @ 2007-10-22 23:09 UTC (permalink / raw)
To: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
Hi,
I'm seeing fairly high vm-exit latencies (300-400 us) during and only
during qemu/kvm startup and shutdown on a Core2 T5500 in 32-bit mode.
It's most probably while the VM runs inside bios code. During the rest
of the time, while some Linux guest is running, the exit latencies are
within microseconds, thus perfectly fine for the real-time scenarios I
have in mind.
Does anyone have an idea what goes on there? As those hundreds of
microseconds precisely frame the assembly block in vmx_vcpu_run, I wonder
what guest code could cause such long delays. A quick glance at the bochs
bios code did not enlighten me yet.
Thanks,
Jan
-------------------------------------------------------------------------
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems? Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now >> http://get.splunk.com/
_______________________________________________
kvm-devel mailing list
kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org
https://lists.sourceforge.net/lists/listinfo/kvm-devel
* Re: High vm-exit latencies during kvm boot-up/shutdown
@ 2007-10-23 1:05 ` Dong, Eddie
From: Dong, Eddie @ 2007-10-23 1:05 UTC (permalink / raw)
To: Jan Kiszka, kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
kvm-devel-bounces-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org wrote:
> Hi,
>
> I'm seeing fairly high vm-exit latencies (300-400 us) during and only
> during qemu/kvm startup and shutdown on a Core2 T5500 in 32-bit mode.
> It's most probably while the VM runs inside bios code. During the rest
> of the time, while some Linux guest is running, the exit latencies are
> within microseconds, thus perfectly fine for the real-time scenarios
> I have in mind.
How is this time spent? All in Qemu?
Usually a kernel-only VM exit costs less than 1 us.
Eddie
* Re: High vm-exit latencies during kvm boot-up/shutdown
@ 2007-10-23 6:38 ` Jan Kiszka
From: Jan Kiszka @ 2007-10-23 6:38 UTC (permalink / raw)
To: Dong, Eddie; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
Dong, Eddie wrote:
> kvm-devel-bounces-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org wrote:
>> Hi,
>>
>> I'm seeing fairly high vm-exit latencies (300-400 us) during and only
>> during qemu/kvm startup and shutdown on a Core2 T5500 in 32-bit mode.
>> It's most probably while the VM runs inside bios code. During the rest
>> of the time, while some Linux guest is running, the exit latencies are
>> within microseconds, thus perfectly fine for the real-time scenarios
>> I have in mind.
>
> How is this time spent? All in Qemu?
Most probably. I have a function tracer installed, and it does not
report any kernel function call between the begin of the asm block and
its end.
> Usually a kernel-only VM exit costs less than 1 us.
That's what I'm seeing for the rest as well.
I have read that certain guest states do not allow preemption by
external interrupts (here it is the host timer IRQ), but both
GUEST_INTERRUPTIBILITY_INFO and GUEST_ACTIVITY_STATE are 0 on entry.
Is there, for example, a way for the guest to trigger a non-preemptible
SMM entry without the kernel noticing it?
Jan
* Re: High vm-exit latencies during kvm boot-up/shutdown
@ 2007-10-23 7:46 ` Dong, Eddie
From: Dong, Eddie @ 2007-10-23 7:46 UTC (permalink / raw)
To: jan.kiszka-S0/GAf8tV78; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
>-----Original Message-----
>From: jan.kiszka-S0/GAf8tV78@public.gmane.org [mailto:jan.kiszka-S0/GAf8tV78@public.gmane.org]
>Sent: 2007-10-23 14:38
>To: Dong, Eddie
>Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org
>Subject: Re: [kvm-devel] High vm-exit latencies during kvm
>boot-up/shutdown
>
>Dong, Eddie wrote:
>> kvm-devel-bounces-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org wrote:
>>> Hi,
>>>
>>> I'm seeing fairly high vm-exit latencies (300-400 us) during and only
>>> during qemu/kvm startup and shutdown on a Core2 T5500 in 32-bit mode.
>>> It's most probably while the VM runs inside bios code. During the rest
>>> of the time, while some Linux guest is running, the exit latencies are
>>> within microseconds, thus perfectly fine for the real-time scenarios
>>> I have in mind.
>>
>> How is this time spent? All in Qemu?
>
>Most probably. I have a function tracer installed, and it does not
>report any kernel function call between the begin of the asm block and
>its end.
If the time is spent in Qemu, then it could well take that long, since
Qemu may read or write the VM's virtual disk.
>
>> Usually a kernel-only VM exit costs less than 1 us.
>
>That's what I'm seeing for the rest as well.
>
>I have read that certain guest states do not allow preemptions by
>external interrupts (here it is the timer IRQ), but both
>GUEST_INTERRUPTIBILITY_INFO and GUEST_ACTIVITY_STATE are 0 on entry,
>e.g. Is there a way for the guest to trigger a non-preemptible SMM
>entry, and that without the kernel noticing it?
>
I am not aware of this possibility.
Eddie
* Re: High vm-exit latencies during kvm boot-up/shutdown
@ 2007-10-23 8:16 ` Avi Kivity
From: Avi Kivity @ 2007-10-23 8:16 UTC (permalink / raw)
To: Jan Kiszka; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
Jan Kiszka wrote:
> Dong, Eddie wrote:
>
>> kvm-devel-bounces-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org wrote:
>>
>>> Hi,
>>>
>>> I'm seeing fairly high vm-exit latencies (300-400 us) during and only
>>> during qemu/kvm startup and shutdown on a Core2 T5500 in 32-bit mode.
>>> It's most probably while the VM runs inside bios code. During the rest
>>> of the time, while some Linux guest is running, the exit latencies are
>>> within microseconds, thus perfectly fine for the real-time scenarios
>>> I have in mind.
>>>
>> How is this time spent? All in Qemu?
>>
>
> Most probably. I have a function tracer installed, and it does not
> report any kernel function call between the begin of the asm block and
> its end.
>
>
Which asm block?
>> Usually a kernel-only VM exit costs less than 1 us.
>>
>
> That's what I'm seeing for the rest as well.
>
> I have read that certain guest states do not allow preemptions by
> external interrupts (here it is the timer IRQ), but both
> GUEST_INTERRUPTIBILITY_INFO and GUEST_ACTIVITY_STATE are 0 on entry,
> e.g. Is there a way for the guest to trigger a non-preemptible SMM
> entry, and that without the kernel noticing it?
>
>
Which timer interrupt? The host timer interrupt or the guest timer
interrupt?
Try to be much more specific in your descriptions, including what you
mean by vm exit and exactly how you measured it.
--
Do not meddle in the internals of kernels, for they are subtle and quick to panic.
* Re: High vm-exit latencies during kvm boot-up/shutdown
@ 2007-10-23 9:08 ` Jan Kiszka
From: Jan Kiszka @ 2007-10-23 9:08 UTC (permalink / raw)
To: Dong, Eddie; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
Dong, Eddie wrote:
>
>
>> -----Original Message-----
>> From: jan.kiszka-S0/GAf8tV78@public.gmane.org [mailto:jan.kiszka-S0/GAf8tV78@public.gmane.org]
>> Sent: 2007-10-23 14:38
>> To: Dong, Eddie
>> Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org
>> Subject: Re: [kvm-devel] High vm-exit latencies during kvm
>> boot-up/shutdown
>>
>> Dong, Eddie wrote:
>>> kvm-devel-bounces-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org wrote:
>>>> Hi,
>>>>
>>>> I'm seeing fairly high vm-exit latencies (300-400 us) during and only
>>>> during qemu/kvm startup and shutdown on a Core2 T5500 in 32-bit mode.
>>>> It's most probably while the VM runs inside bios code. During the rest
>>>> of the time, while some Linux guest is running, the exit latencies are
>>>> within microseconds, thus perfectly fine for the real-time scenarios
>>>> I have in mind.
>>> How is this time spent? All in Qemu?
>> Most probably. I have a function tracer installed, and it does not
>> report any kernel function call between the begin of the asm block and
>> its end.
>
> If the time is spent in Qemu, then it could well take that long, since
> Qemu may read or write the VM's virtual disk.
Clarification: I can't precisely tell what code is executed in VM mode,
as I don't have qemu or that guest instrumented. I just see the kernel
entering VM mode and leaving it again more than 300 us later. So I
wonder why this is allowed while some external IRQ is pending.
>
>>> Usually a kernel-only VM exit costs less than 1 us.
>> That's what I'm seeing for the rest as well.
>>
>> I have read that certain guest states do not allow preemptions by
>> external interrupts (here it is the timer IRQ), but both
>> GUEST_INTERRUPTIBILITY_INFO and GUEST_ACTIVITY_STATE are 0 on entry,
>> e.g. Is there a way for the guest to trigger a non-preemptible SMM
>> entry, and that without the kernel noticing it?
>>
> I am not aware of this possibility.
> Eddie
Jan
* Re: High vm-exit latencies during kvm boot-up/shutdown
@ 2007-10-23 9:46 ` Avi Kivity
From: Avi Kivity @ 2007-10-23 9:46 UTC (permalink / raw)
To: Jan Kiszka; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
Jan Kiszka wrote:
> Clarification: I can't precisely tell what code is executed in VM mode,
> as I don't have qemu or that guest instrumented. I just see the kernel
> entering VM mode and leaving it again more than 300 us later. So I
> wonder why this is allowed while some external IRQ is pending.
>
>
How do you know an external interrupt is pending?
kvm programs the hardware to exit when an external interrupt arrives.
--
Do not meddle in the internals of kernels, for they are subtle and quick to panic.
* Re: High vm-exit latencies during kvm boot-up/shutdown
@ 2007-10-23 13:30 ` Jan Kiszka
From: Jan Kiszka @ 2007-10-23 13:30 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
Avi,
[somehow your mails do not get through to my private account, so I'm
switching]
Avi Kivity wrote:
> Jan Kiszka wrote:
>> Clarification: I can't precisely tell what code is executed in VM mode,
>> as I don't have qemu or that guest instrumented. I just see the kernel
>> entering VM mode and leaving it again more than 300 us later. So I
>> wonder why this is allowed while some external IRQ is pending.
>>
>>
>
> How do you know an external interrupt is pending?
It's the host timer IRQ, programmed to fire in certain intervals (100 us
here). Test case is some latency measurement tool like tglx's cyclictest
or similar programs we use in Xenomai.
>
> kvm programs the hardware to exit when an external interrupt arrives.
>
Here is a latency trace I just managed to capture over 2.6.23.1-rt1 with
the latest kvm from git hacked into it (kvm generally seems to work fine
this way):
...
qemu-sys-7543 0...1 13897us : vmcs_write16+0xb/0x20 (vmx_save_host_state+0x1a7/0x1c0)
qemu-sys-7543 0...1 13897us : vmcs_writel+0xb/0x30 (vmcs_write16+0x1e/0x20)
qemu-sys-7543 0...1 13898us : segment_base+0xc/0x70 (vmx_save_host_state+0xa0/0x1c0)
qemu-sys-7543 0...1 13898us : vmcs_writel+0xb/0x30 (vmx_save_host_state+0xb0/0x1c0)
qemu-sys-7543 0...1 13898us : segment_base+0xc/0x70 (vmx_save_host_state+0xbf/0x1c0)
qemu-sys-7543 0...1 13898us : vmcs_writel+0xb/0x30 (vmx_save_host_state+0xcf/0x1c0)
qemu-sys-7543 0...1 13898us : load_msrs+0xb/0x40 (vmx_save_host_state+0xe7/0x1c0)
qemu-sys-7543 0...1 13898us : kvm_load_guest_fpu+0x8/0x40 (kvm_vcpu_ioctl_run+0xbf/0x570)
qemu-sys-7543 0D..1 13899us : vmx_vcpu_run+0xc/0x110 (kvm_vcpu_ioctl_run+0x120/0x570)
qemu-sys-7543 0D..1 13899us!: vmcs_writel+0xb/0x30 (vmx_vcpu_run+0x22/0x110)
qemu-sys-7543 0D..1 14344us : vmcs_read32+0xb/0x20 (vmx_vcpu_run+0xc7/0x110)
qemu-sys-7543 0D..1 14345us : vmcs_readl+0x8/0x10 (vmcs_read32+0x16/0x20)
qemu-sys-7543 0D..1 14345us : vmcs_read32+0xb/0x20 (vmx_vcpu_run+0xf4/0x110)
qemu-sys-7543 0D..1 14345us+: vmcs_readl+0x8/0x10 (vmcs_read32+0x16/0x20)
qemu-sys-7543 0D..1 14349us : irq_enter+0xb/0x30 (do_IRQ+0x45/0xc0)
qemu-sys-7543 0D.h1 14350us : do_IRQ+0x73/0xc0 (f8caae24 0 0)
qemu-sys-7543 0D.h1 14351us : handle_level_irq+0xe/0x120 (do_IRQ+0x7d/0xc0)
qemu-sys-7543 0D.h1 14351us : __spin_lock+0xc/0x30 (handle_level_irq+0x24/0x120)
qemu-sys-7543 0D.h2 14352us : mask_and_ack_8259A+0x14/0x120 (handle_level_irq+0x37/0x120)
qemu-sys-7543 0D.h2 14352us+: __spin_lock_irqsave+0x11/0x60 (mask_and_ack_8259A+0x2a/0x120)
qemu-sys-7543 0D.h3 14357us : __spin_unlock_irqrestore+0xc/0x60 (mask_and_ack_8259A+0x7a/0x120)
qemu-sys-7543 0D.h2 14358us : redirect_hardirq+0x8/0x70 (handle_level_irq+0x72/0x120)
qemu-sys-7543 0D.h2 14358us : __spin_unlock+0xb/0x40 (handle_level_irq+0x8e/0x120)
qemu-sys-7543 0D.h1 14358us : handle_IRQ_event+0xe/0x110 (handle_level_irq+0x9a/0x120)
qemu-sys-7543 0D.h1 14359us : timer_interrupt+0xb/0x60 (handle_IRQ_event+0x67/0x110)
qemu-sys-7543 0D.h1 14359us : hrtimer_interrupt+0xe/0x1f0 (timer_interrupt+0x20/0x60)
...
One can see 345 us latency between vm-enter and vm-exit in vmx_vcpu_run -
and this while cyclictest runs at a period of 100 us!
I got the same results over Adeos/I-pipe & Xenomai with the function
tracer there, also pointing to the period while the CPU is in VM mode.
Does anyone have ideas? Greg, I put you on CC because you once said you
saw "decent latencies" with your patches. Are there still magic bits
missing in official kvm?
Jan
--
Siemens AG, Corporate Technology, CT SE 2
Corporate Competence Center Embedded Linux
* Re: High vm-exit latencies during kvm boot-up/shutdown
@ 2007-10-23 14:19 ` Avi Kivity
From: Avi Kivity @ 2007-10-23 14:19 UTC (permalink / raw)
To: Jan Kiszka; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
Jan Kiszka wrote:
> Avi,
>
> [somehow your mails do not get through to my private account, so I'm
> switching]
>
> Avi Kivity wrote:
>
>> Jan Kiszka wrote:
>>
>>> Clarification: I can't precisely tell what code is executed in VM mode,
>>> as I don't have qemu or that guest instrumented. I just see the kernel
>>> entering VM mode and leaving it again more than 300 us later. So I
>>> wonder why this is allowed while some external IRQ is pending.
>>>
>>>
>>>
>> How do you know an external interrupt is pending?
>>
>
> It's the host timer IRQ, programmed to fire in certain intervals (100 us
> here). Test case is some latency measurement tool like tglx's cyclictest
> or similar programs we use in Xenomai.
>
>
>> kvm programs the hardware to exit when an external interrupt arrives.
>>
>>
>
> Here is a latency trace I just managed to capture over 2.6.23.1-rt1 with
> latest kvm from git hacked into (kvm generally seems to work fine this way):
>
> ...
> qemu-sys-7543 0...1 13897us : vmcs_write16+0xb/0x20 (vmx_save_host_state+0x1a7/0x1c0)
> qemu-sys-7543 0...1 13897us : vmcs_writel+0xb/0x30 (vmcs_write16+0x1e/0x20)
> qemu-sys-7543 0...1 13898us : segment_base+0xc/0x70 (vmx_save_host_state+0xa0/0x1c0)
> qemu-sys-7543 0...1 13898us : vmcs_writel+0xb/0x30 (vmx_save_host_state+0xb0/0x1c0)
> qemu-sys-7543 0...1 13898us : segment_base+0xc/0x70 (vmx_save_host_state+0xbf/0x1c0)
> qemu-sys-7543 0...1 13898us : vmcs_writel+0xb/0x30 (vmx_save_host_state+0xcf/0x1c0)
> qemu-sys-7543 0...1 13898us : load_msrs+0xb/0x40 (vmx_save_host_state+0xe7/0x1c0)
> qemu-sys-7543 0...1 13898us : kvm_load_guest_fpu+0x8/0x40 (kvm_vcpu_ioctl_run+0xbf/0x570)
> qemu-sys-7543 0D..1 13899us : vmx_vcpu_run+0xc/0x110 (kvm_vcpu_ioctl_run+0x120/0x570)
> qemu-sys-7543 0D..1 13899us!: vmcs_writel+0xb/0x30 (vmx_vcpu_run+0x22/0x110)
> qemu-sys-7543 0D..1 14344us : vmcs_read32+0xb/0x20 (vmx_vcpu_run+0xc7/0x110)
> qemu-sys-7543 0D..1 14345us : vmcs_readl+0x8/0x10 (vmcs_read32+0x16/0x20)
> qemu-sys-7543 0D..1 14345us : vmcs_read32+0xb/0x20 (vmx_vcpu_run+0xf4/0x110)
> qemu-sys-7543 0D..1 14345us+: vmcs_readl+0x8/0x10 (vmcs_read32+0x16/0x20)
> qemu-sys-7543 0D..1 14349us : irq_enter+0xb/0x30 (do_IRQ+0x45/0xc0)
> qemu-sys-7543 0D.h1 14350us : do_IRQ+0x73/0xc0 (f8caae24 0 0)
> qemu-sys-7543 0D.h1 14351us : handle_level_irq+0xe/0x120 (do_IRQ+0x7d/0xc0)
> qemu-sys-7543 0D.h1 14351us : __spin_lock+0xc/0x30 (handle_level_irq+0x24/0x120)
> qemu-sys-7543 0D.h2 14352us : mask_and_ack_8259A+0x14/0x120 (handle_level_irq+0x37/0x120)
> qemu-sys-7543 0D.h2 14352us+: __spin_lock_irqsave+0x11/0x60 (mask_and_ack_8259A+0x2a/0x120)
> qemu-sys-7543 0D.h3 14357us : __spin_unlock_irqrestore+0xc/0x60 (mask_and_ack_8259A+0x7a/0x120)
> qemu-sys-7543 0D.h2 14358us : redirect_hardirq+0x8/0x70 (handle_level_irq+0x72/0x120)
> qemu-sys-7543 0D.h2 14358us : __spin_unlock+0xb/0x40 (handle_level_irq+0x8e/0x120)
> qemu-sys-7543 0D.h1 14358us : handle_IRQ_event+0xe/0x110 (handle_level_irq+0x9a/0x120)
> qemu-sys-7543 0D.h1 14359us : timer_interrupt+0xb/0x60 (handle_IRQ_event+0x67/0x110)
> qemu-sys-7543 0D.h1 14359us : hrtimer_interrupt+0xe/0x1f0 (timer_interrupt+0x20/0x60)
> ...
>
> One can see 345 us latency between vm-enter and vm-exit in vmx_vcpu_run -
> and this while cyclictest runs at a period of 100 us!
>
> I got the same results over Adeos/I-pipe & Xenomai with the function
> tracer there, also pointing to the period while the CPU is in VM mode.
>
> Anyone any ideas? Greg, I put you on CC as you said you once saw "decent
> latencies" with your patches. Are there still magic bits missing in
> official kvm?
>
No bits missing as far as I know. It should just work.
Can you explain some more about the latency tracer? How does it work?
Seeing vmx_vcpu_run() in there confuses me, as it always runs with
interrupts disabled (it does dispatch NMIs, so we could be seeing an NMI).
Please post a disassembly of your vmx_vcpu_run so we can interpret the
offsets.
--
Do not meddle in the internals of kernels, for they are subtle and quick to panic.
* Re: High vm-exit latencies during kvm boot-up/shutdown
@ 2007-10-23 14:41 ` Jan Kiszka
From: Jan Kiszka @ 2007-10-23 14:41 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
Avi Kivity wrote:
> Jan Kiszka wrote:
>> Avi,
>>
>> [somehow your mails do not get through to my private account, so I'm
>> switching]
>>
>> Avi Kivity wrote:
>>
>>> Jan Kiszka wrote:
>>>
>>>> Clarification: I can't precisely tell what code is executed in VM mode,
>>>> as I don't have qemu or that guest instrumented. I just see the kernel
>>>> entering VM mode and leaving it again more than 300 us later. So I
>>>> wonder why this is allowed while some external IRQ is pending.
>>>>
>>>>
>>>>
>>> How do you know an external interrupt is pending?
>>>
>> It's the host timer IRQ, programmed to fire in certain intervals (100 us
>> here). Test case is some latency measurement tool like tglx's cyclictest
>> or similar programs we use in Xenomai.
>>
>>
>>> kvm programs the hardware to exit when an external interrupt arrives.
>>>
>>>
>> Here is a latency trace I just managed to capture over 2.6.23.1-rt1 with
>> latest kvm from git hacked into (kvm generally seems to work fine this way):
>>
>> ...
>> qemu-sys-7543 0...1 13897us : vmcs_write16+0xb/0x20 (vmx_save_host_state+0x1a7/0x1c0)
>> qemu-sys-7543 0...1 13897us : vmcs_writel+0xb/0x30 (vmcs_write16+0x1e/0x20)
>> qemu-sys-7543 0...1 13898us : segment_base+0xc/0x70 (vmx_save_host_state+0xa0/0x1c0)
>> qemu-sys-7543 0...1 13898us : vmcs_writel+0xb/0x30 (vmx_save_host_state+0xb0/0x1c0)
>> qemu-sys-7543 0...1 13898us : segment_base+0xc/0x70 (vmx_save_host_state+0xbf/0x1c0)
>> qemu-sys-7543 0...1 13898us : vmcs_writel+0xb/0x30 (vmx_save_host_state+0xcf/0x1c0)
>> qemu-sys-7543 0...1 13898us : load_msrs+0xb/0x40 (vmx_save_host_state+0xe7/0x1c0)
>> qemu-sys-7543 0...1 13898us : kvm_load_guest_fpu+0x8/0x40 (kvm_vcpu_ioctl_run+0xbf/0x570)
>> qemu-sys-7543 0D..1 13899us : vmx_vcpu_run+0xc/0x110 (kvm_vcpu_ioctl_run+0x120/0x570)
>> qemu-sys-7543 0D..1 13899us!: vmcs_writel+0xb/0x30 (vmx_vcpu_run+0x22/0x110)
>> qemu-sys-7543 0D..1 14344us : vmcs_read32+0xb/0x20 (vmx_vcpu_run+0xc7/0x110)
>> qemu-sys-7543 0D..1 14345us : vmcs_readl+0x8/0x10 (vmcs_read32+0x16/0x20)
>> qemu-sys-7543 0D..1 14345us : vmcs_read32+0xb/0x20 (vmx_vcpu_run+0xf4/0x110)
>> qemu-sys-7543 0D..1 14345us+: vmcs_readl+0x8/0x10 (vmcs_read32+0x16/0x20)
>> qemu-sys-7543 0D..1 14349us : irq_enter+0xb/0x30 (do_IRQ+0x45/0xc0)
>> qemu-sys-7543 0D.h1 14350us : do_IRQ+0x73/0xc0 (f8caae24 0 0)
>> qemu-sys-7543 0D.h1 14351us : handle_level_irq+0xe/0x120 (do_IRQ+0x7d/0xc0)
>> qemu-sys-7543 0D.h1 14351us : __spin_lock+0xc/0x30 (handle_level_irq+0x24/0x120)
>> qemu-sys-7543 0D.h2 14352us : mask_and_ack_8259A+0x14/0x120 (handle_level_irq+0x37/0x120)
>> qemu-sys-7543 0D.h2 14352us+: __spin_lock_irqsave+0x11/0x60 (mask_and_ack_8259A+0x2a/0x120)
>> qemu-sys-7543 0D.h3 14357us : __spin_unlock_irqrestore+0xc/0x60 (mask_and_ack_8259A+0x7a/0x120)
>> qemu-sys-7543 0D.h2 14358us : redirect_hardirq+0x8/0x70 (handle_level_irq+0x72/0x120)
>> qemu-sys-7543 0D.h2 14358us : __spin_unlock+0xb/0x40 (handle_level_irq+0x8e/0x120)
>> qemu-sys-7543 0D.h1 14358us : handle_IRQ_event+0xe/0x110 (handle_level_irq+0x9a/0x120)
>> qemu-sys-7543 0D.h1 14359us : timer_interrupt+0xb/0x60 (handle_IRQ_event+0x67/0x110)
>> qemu-sys-7543 0D.h1 14359us : hrtimer_interrupt+0xe/0x1f0 (timer_interrupt+0x20/0x60)
>> ...
>>
>> One can see 345 us latency between vm-enter and vm-exit in vmx_vcpu_run -
>> and this while cyclictest runs at a period of 100 us!
>>
>> I got the same results over Adeos/I-pipe & Xenomai with the function
>> tracer there, also pointing to the period while the CPU is in VM mode.
>>
>> Anyone any ideas? Greg, I put you on CC as you said you once saw "decent
>> latencies" with your patches. Are there still magic bits missing in
>> official kvm?
>>
>
> No bits missing as far as I know. It should just work.
>
> Can you explain some more about the latency tracer? How does it work?
Ah, sorry: The latency tracers in both -rt and I-pipe use gcc's -pg to
put a call to a function called mcount at the beginning of each compiled
function. mcount is provided by the tracers and stores the caller
address, its parent, the current time, and more in a log. An API is
provided to start and stop the trace, e.g. after someone (kernel or user
space) detected large wakeup latencies.
> Seeing vmx_vcpu_run() in there confuses me, as it always runs with
> interrupts disabled (it does dispatch NMIs, so we could be seeing an NMI).
The point is that cyclictest does not find large latencies unless kvm
happens to be starting up or shutting down at that moment. And if you are
thinking about NMIs triggered by the kvm code on vm-exit: I also
instrumented that code path, and it is not taken in case of the long delay.
>
> Please post a disassembly of your vmx_vcpu_run so we can interpret the
> offsets.
Here it comes:
00002df0 <vmx_vcpu_run>:
2df0: 55 push %ebp
2df1: 89 e5 mov %esp,%ebp
2df3: 53 push %ebx
2df4: 83 ec 08 sub $0x8,%esp
2df7: e8 fc ff ff ff call 2df8 <vmx_vcpu_run+0x8>
2dfc: 8b 5d 08 mov 0x8(%ebp),%ebx
2dff: 0f 20 c0 mov %cr0,%eax
2e02: 89 44 24 04 mov %eax,0x4(%esp)
2e06: c7 04 24 00 6c 00 00 movl $0x6c00,(%esp)
2e0d: e8 be d8 ff ff call 6d0 <vmcs_writel>
2e12: 8b 83 80 0d 00 00 mov 0xd80(%ebx),%eax
2e18: ba 14 6c 00 00 mov $0x6c14,%edx
2e1d: 89 d9 mov %ebx,%ecx
2e1f: 60 pusha
2e20: 51 push %ecx
2e21: 0f 79 d4 vmwrite %esp,%edx
2e24: 83 f8 00 cmp $0x0,%eax
2e27: 8b 81 78 01 00 00 mov 0x178(%ecx),%eax
2e2d: 0f 22 d0 mov %eax,%cr2
2e30: 8b 81 50 01 00 00 mov 0x150(%ecx),%eax
2e36: 8b 99 5c 01 00 00 mov 0x15c(%ecx),%ebx
2e3c: 8b 91 58 01 00 00 mov 0x158(%ecx),%edx
2e42: 8b b1 68 01 00 00 mov 0x168(%ecx),%esi
2e48: 8b b9 6c 01 00 00 mov 0x16c(%ecx),%edi
2e4e: 8b a9 64 01 00 00 mov 0x164(%ecx),%ebp
2e54: 8b 89 54 01 00 00 mov 0x154(%ecx),%ecx
2e5a: 75 05 jne 2e61 <vmx_vcpu_run+0x71>
2e5c: 0f 01 c2 vmlaunch
2e5f: eb 03 jmp 2e64 <vmx_vcpu_run+0x74>
2e61: 0f 01 c3 vmresume
2e64: 87 0c 24 xchg %ecx,(%esp)
2e67: 89 81 50 01 00 00 mov %eax,0x150(%ecx)
2e6d: 89 99 5c 01 00 00 mov %ebx,0x15c(%ecx)
2e73: ff 34 24 pushl (%esp)
2e76: 8f 81 54 01 00 00 popl 0x154(%ecx)
2e7c: 89 91 58 01 00 00 mov %edx,0x158(%ecx)
2e82: 89 b1 68 01 00 00 mov %esi,0x168(%ecx)
2e88: 89 b9 6c 01 00 00 mov %edi,0x16c(%ecx)
2e8e: 89 a9 64 01 00 00 mov %ebp,0x164(%ecx)
2e94: 0f 20 d0 mov %cr2,%eax
2e97: 89 81 78 01 00 00 mov %eax,0x178(%ecx)
2e9d: 8b 0c 24 mov (%esp),%ecx
2ea0: 59 pop %ecx
2ea1: 61 popa
2ea2: 0f 96 c0 setbe %al
2ea5: 88 83 84 0d 00 00 mov %al,0xd84(%ebx)
2eab: c7 04 24 24 48 00 00 movl $0x4824,(%esp)
2eb2: e8 49 d2 ff ff call 100 <vmcs_read32>
2eb7: a8 03 test $0x3,%al
2eb9: 0f 94 c0 sete %al
2ebc: 0f b6 c0 movzbl %al,%eax
2ebf: 89 83 28 01 00 00 mov %eax,0x128(%ebx)
2ec5: b8 7b 00 00 00 mov $0x7b,%eax
2eca: 8e d8 mov %eax,%ds
2ecc: 8e c0 mov %eax,%es
2ece: c7 83 80 0d 00 00 01 movl $0x1,0xd80(%ebx)
2ed5: 00 00 00
2ed8: c7 04 24 04 44 00 00 movl $0x4404,(%esp)
2edf: e8 1c d2 ff ff call 100 <vmcs_read32>
2ee4: 25 00 07 00 00 and $0x700,%eax
2ee9: 3d 00 02 00 00 cmp $0x200,%eax
2eee: 75 02 jne 2ef2 <vmx_vcpu_run+0x102>
2ef0: cd 02 int $0x2
2ef2: 83 c4 08 add $0x8,%esp
2ef5: 5b pop %ebx
2ef6: 5d pop %ebp
2ef7: c3 ret
2ef8: 90 nop
2ef9: 8d b4 26 00 00 00 00 lea 0x0(%esi),%esi
Note that the first, unresolved call here goes to mcount().
Jan
--
Siemens AG, Corporate Technology, CT SE 2
Corporate Competence Center Embedded Linux
-------------------------------------------------------------------------
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems? Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now >> http://get.splunk.com/
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: High vm-exit latencies during kvm boot-up/shutdown
[not found] ` <471E02F7.6080408-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-10-23 14:41 ` Jan Kiszka
@ 2007-10-23 14:43 ` Gregory Haskins
[not found] ` <1193150619.8343.21.camel-5CR4LY5GPkvLDviKLk5550HKjMygAv58XqFh9Ls21Oc@public.gmane.org>
1 sibling, 1 reply; 40+ messages in thread
From: Gregory Haskins @ 2007-10-23 14:43 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, Jan Kiszka
[-- Attachment #1.1: Type: text/plain, Size: 4710 bytes --]
On Tue, 2007-10-23 at 16:19 +0200, Avi Kivity wrote:
> Jan Kiszka wrote:
> > Avi,
> >
> > [somehow your mails do not get through to my private account, so I'm
> > switching]
> >
> > Avi Kivity wrote:
> >
> >> Jan Kiszka wrote:
> >>
> >>> Clarification: I can't precisely tell what code is executed in VM mode,
> >>> as I don't have qemu or that guest instrumented. I just see the kernel
> >>> entering VM mode and leaving it again more than 300 us later. So I
> >>> wonder why this is allowed while some external IRQ is pending.
> >>>
> >>>
> >>>
> >> How do you know an external interrupt is pending?
> >>
> >
> > It's the host timer IRQ, programmed to fire in certain intervals (100 us
> > here). Test case is some latency measurement tool like tglx's cyclictest
> > or similar programs we use in Xenomai.
> >
> >
> >> kvm programs the hardware to exit when an external interrupt arrives.
> >>
> >>
> >
> > Here is a latency trace I just managed to capture over 2.6.23.1-rt1 with
> > latest kvm from git hacked into (kvm generally seems to work fine this way):
> >
> > ...
> > qemu-sys-7543 0...1 13897us : vmcs_write16+0xb/0x20 (vmx_save_host_state+0x1a7/0x1c0)
> > qemu-sys-7543 0...1 13897us : vmcs_writel+0xb/0x30 (vmcs_write16+0x1e/0x20)
> > qemu-sys-7543 0...1 13898us : segment_base+0xc/0x70 (vmx_save_host_state+0xa0/0x1c0)
> > qemu-sys-7543 0...1 13898us : vmcs_writel+0xb/0x30 (vmx_save_host_state+0xb0/0x1c0)
> > qemu-sys-7543 0...1 13898us : segment_base+0xc/0x70 (vmx_save_host_state+0xbf/0x1c0)
> > qemu-sys-7543 0...1 13898us : vmcs_writel+0xb/0x30 (vmx_save_host_state+0xcf/0x1c0)
> > qemu-sys-7543 0...1 13898us : load_msrs+0xb/0x40 (vmx_save_host_state+0xe7/0x1c0)
> > qemu-sys-7543 0...1 13898us : kvm_load_guest_fpu+0x8/0x40 (kvm_vcpu_ioctl_run+0xbf/0x570)
> > qemu-sys-7543 0D..1 13899us : vmx_vcpu_run+0xc/0x110 (kvm_vcpu_ioctl_run+0x120/0x570)
> > qemu-sys-7543 0D..1 13899us!: vmcs_writel+0xb/0x30 (vmx_vcpu_run+0x22/0x110)
> > qemu-sys-7543 0D..1 14344us : vmcs_read32+0xb/0x20 (vmx_vcpu_run+0xc7/0x110)
> > qemu-sys-7543 0D..1 14345us : vmcs_readl+0x8/0x10 (vmcs_read32+0x16/0x20)
> > qemu-sys-7543 0D..1 14345us : vmcs_read32+0xb/0x20 (vmx_vcpu_run+0xf4/0x110)
> > qemu-sys-7543 0D..1 14345us+: vmcs_readl+0x8/0x10 (vmcs_read32+0x16/0x20)
> > qemu-sys-7543 0D..1 14349us : irq_enter+0xb/0x30 (do_IRQ+0x45/0xc0)
> > qemu-sys-7543 0D.h1 14350us : do_IRQ+0x73/0xc0 (f8caae24 0 0)
> > qemu-sys-7543 0D.h1 14351us : handle_level_irq+0xe/0x120 (do_IRQ+0x7d/0xc0)
> > qemu-sys-7543 0D.h1 14351us : __spin_lock+0xc/0x30 (handle_level_irq+0x24/0x120)
> > qemu-sys-7543 0D.h2 14352us : mask_and_ack_8259A+0x14/0x120 (handle_level_irq+0x37/0x120)
> > qemu-sys-7543 0D.h2 14352us+: __spin_lock_irqsave+0x11/0x60 (mask_and_ack_8259A+0x2a/0x120)
> > qemu-sys-7543 0D.h3 14357us : __spin_unlock_irqrestore+0xc/0x60 (mask_and_ack_8259A+0x7a/0x120)
> > qemu-sys-7543 0D.h2 14358us : redirect_hardirq+0x8/0x70 (handle_level_irq+0x72/0x120)
> > qemu-sys-7543 0D.h2 14358us : __spin_unlock+0xb/0x40 (handle_level_irq+0x8e/0x120)
> > qemu-sys-7543 0D.h1 14358us : handle_IRQ_event+0xe/0x110 (handle_level_irq+0x9a/0x120)
> > qemu-sys-7543 0D.h1 14359us : timer_interrupt+0xb/0x60 (handle_IRQ_event+0x67/0x110)
> > qemu-sys-7543 0D.h1 14359us : hrtimer_interrupt+0xe/0x1f0 (timer_interrupt+0x20/0x60)
> > ...
> >
> > One can see 345 us latency between vm-enter and vm-exit in vmx_vcpu_run -
> > and this while cyclictest runs at a period of 100 us!
> >
> > I got the same results over Adeos/I-pipe & Xenomai with the function
> > tracer there, also pointing to the period while the CPU is in VM mode.
> >
> > Anyone any ideas? Greg, I put you on CC as you said you once saw "decent
> > latencies" with your patches. Are there still magic bits missing in
> > official kvm?
> >
>
> No bits missing as far as I know. It should just work.
That could very well be the case these days. I know back when I was
looking at it, KVM would not run on VMX + -rt without modification or it
would crash/hang (this was around the time I was working on that
smp_function_call stuff). And without careful modification it would run
very poorly, with high (300us+) latencies revealed in cyclictest.
However, I was able to craft the vmx_vcpu_run path so that a VM could
run side-by-side with cyclictest with sub 40us latencies. In fact,
normally it was sub 30us, but on an occasional run I would get a spike
to ~37us.
Unfortunately I am deep into other non-KVM related -rt issues at the
moment, so I can't work on it any further for a bit.
Regards,
-Greg
[-- Attachment #1.2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 189 bytes --]
[-- Attachment #3: Type: text/plain, Size: 186 bytes --]
_______________________________________________
kvm-devel mailing list
kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org
https://lists.sourceforge.net/lists/listinfo/kvm-devel
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: High vm-exit latencies during kvm boot-up/shutdown
[not found] ` <471E0818.6060405-kv7WeFo6aLtBDgjK7y7TUQ@public.gmane.org>
@ 2007-10-23 14:44 ` Jan Kiszka
2007-10-23 15:26 ` Avi Kivity
1 sibling, 0 replies; 40+ messages in thread
From: Jan Kiszka @ 2007-10-23 14:44 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
Jan Kiszka wrote:
> Avi Kivity wrote:
>> Seeing vmx_vcpu_run() in there confuses me, as it always runs with
>> interrupts disabled (it does dispatch NMIs, so we could be seeing an NMI).
>
> The point is that the cyclictest does not find large latencies when kvm
> is not happening to start or stop right now. And if you are thinking
> about NMIs triggered by the kvm code on vm-exit: I also instrumented
> that code path, and it is not taken in case of the long delay.
Actually, that instrumentation was for an older version. In this one
here, the vm-entry/exit block is clearly framed by traced vmcs_xxx calls.
Jan
--
Siemens AG, Corporate Technology, CT SE 2
Corporate Competence Center Embedded Linux
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: High vm-exit latencies during kvm boot-up/shutdown
[not found] ` <1193150619.8343.21.camel-5CR4LY5GPkvLDviKLk5550HKjMygAv58XqFh9Ls21Oc@public.gmane.org>
@ 2007-10-23 14:50 ` Jan Kiszka
0 siblings, 0 replies; 40+ messages in thread
From: Jan Kiszka @ 2007-10-23 14:50 UTC (permalink / raw)
To: Gregory Haskins; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, Avi Kivity
Gregory Haskins wrote:
> On Tue, 2007-10-23 at 16:19 +0200, Avi Kivity wrote:
>> Jan Kiszka wrote:
>>> Avi,
>>>
>>> [somehow your mails do not get through to my private account, so I'm
>>> switching]
>>>
>>> Avi Kivity wrote:
>>>
>>>> Jan Kiszka wrote:
>>>>
>>>>> Clarification: I can't precisely tell what code is executed in VM mode,
>>>>> as I don't have qemu or that guest instrumented. I just see the kernel
>>>>> entering VM mode and leaving it again more than 300 us later. So I
>>>>> wonder why this is allowed while some external IRQ is pending.
>>>>>
>>>>>
>>>>>
>>>> How do you know an external interrupt is pending?
>>>>
>>> It's the host timer IRQ, programmed to fire in certain intervals (100 us
>>> here). Test case is some latency measurement tool like tglx's cyclictest
>>> or similar programs we use in Xenomai.
>>>
>>>
>>>> kvm programs the hardware to exit when an external interrupt arrives.
>>>>
>>>>
>>> Here is a latency trace I just managed to capture over 2.6.23.1-rt1 with
>>> latest kvm from git hacked into (kvm generally seems to work fine this way):
>>>
>>> ...
>>> qemu-sys-7543 0...1 13897us : vmcs_write16+0xb/0x20 (vmx_save_host_state+0x1a7/0x1c0)
>>> qemu-sys-7543 0...1 13897us : vmcs_writel+0xb/0x30 (vmcs_write16+0x1e/0x20)
>>> qemu-sys-7543 0...1 13898us : segment_base+0xc/0x70 (vmx_save_host_state+0xa0/0x1c0)
>>> qemu-sys-7543 0...1 13898us : vmcs_writel+0xb/0x30 (vmx_save_host_state+0xb0/0x1c0)
>>> qemu-sys-7543 0...1 13898us : segment_base+0xc/0x70 (vmx_save_host_state+0xbf/0x1c0)
>>> qemu-sys-7543 0...1 13898us : vmcs_writel+0xb/0x30 (vmx_save_host_state+0xcf/0x1c0)
>>> qemu-sys-7543 0...1 13898us : load_msrs+0xb/0x40 (vmx_save_host_state+0xe7/0x1c0)
>>> qemu-sys-7543 0...1 13898us : kvm_load_guest_fpu+0x8/0x40 (kvm_vcpu_ioctl_run+0xbf/0x570)
>>> qemu-sys-7543 0D..1 13899us : vmx_vcpu_run+0xc/0x110 (kvm_vcpu_ioctl_run+0x120/0x570)
>>> qemu-sys-7543 0D..1 13899us!: vmcs_writel+0xb/0x30 (vmx_vcpu_run+0x22/0x110)
>>> qemu-sys-7543 0D..1 14344us : vmcs_read32+0xb/0x20 (vmx_vcpu_run+0xc7/0x110)
>>> qemu-sys-7543 0D..1 14345us : vmcs_readl+0x8/0x10 (vmcs_read32+0x16/0x20)
>>> qemu-sys-7543 0D..1 14345us : vmcs_read32+0xb/0x20 (vmx_vcpu_run+0xf4/0x110)
>>> qemu-sys-7543 0D..1 14345us+: vmcs_readl+0x8/0x10 (vmcs_read32+0x16/0x20)
>>> qemu-sys-7543 0D..1 14349us : irq_enter+0xb/0x30 (do_IRQ+0x45/0xc0)
>>> qemu-sys-7543 0D.h1 14350us : do_IRQ+0x73/0xc0 (f8caae24 0 0)
>>> qemu-sys-7543 0D.h1 14351us : handle_level_irq+0xe/0x120 (do_IRQ+0x7d/0xc0)
>>> qemu-sys-7543 0D.h1 14351us : __spin_lock+0xc/0x30 (handle_level_irq+0x24/0x120)
>>> qemu-sys-7543 0D.h2 14352us : mask_and_ack_8259A+0x14/0x120 (handle_level_irq+0x37/0x120)
>>> qemu-sys-7543 0D.h2 14352us+: __spin_lock_irqsave+0x11/0x60 (mask_and_ack_8259A+0x2a/0x120)
>>> qemu-sys-7543 0D.h3 14357us : __spin_unlock_irqrestore+0xc/0x60 (mask_and_ack_8259A+0x7a/0x120)
>>> qemu-sys-7543 0D.h2 14358us : redirect_hardirq+0x8/0x70 (handle_level_irq+0x72/0x120)
>>> qemu-sys-7543 0D.h2 14358us : __spin_unlock+0xb/0x40 (handle_level_irq+0x8e/0x120)
>>> qemu-sys-7543 0D.h1 14358us : handle_IRQ_event+0xe/0x110 (handle_level_irq+0x9a/0x120)
>>> qemu-sys-7543 0D.h1 14359us : timer_interrupt+0xb/0x60 (handle_IRQ_event+0x67/0x110)
>>> qemu-sys-7543 0D.h1 14359us : hrtimer_interrupt+0xe/0x1f0 (timer_interrupt+0x20/0x60)
>>> ...
>>>
>>> One can see 345 us latency between vm-enter and vm-exit in vmx_vcpu_run -
>>> and this while cyclictest runs at a period of 100 us!
>>>
>>> I got the same results over Adeos/I-pipe & Xenomai with the function
>>> tracer there, also pointing to the period while the CPU is in VM mode.
>>>
>>> Anyone any ideas? Greg, I put you on CC as you said you once saw "decent
>>> latencies" with your patches. Are there still magic bits missing in
>>> official kvm?
>>>
>> No bits missing as far as I know. It should just work.
>
> That could very well be the case these days. I know back when I was
> looking at it, KVM would not run on VMX + -rt without modification or it
> would crash/hang (this was around the time I was working on that
> smp_function_call stuff). And without careful modification it would run
> very poorly, with high (300us+) latencies revealed in cyclictest.
>
> However, I was able to craft the vmx_vcpu_run path so that a VM could
> run side-by-side with cyclictest with sub 40us latencies. In fact,
> normally it was sub 30us, but on an occasional run I would get a spike
> to ~37us.
>
> Unfortunately I am deep into other non-KVM related -rt issues at the
> moment, so I can't work on it any further for a bit.
Do you have some patch fragments left over? At least /me would be
interested to study and maybe forward port them. Or can you briefly
explain the issue above and/or the general problem behind this delay?
Thanks,
Jan
--
Siemens AG, Corporate Technology, CT SE 2
Corporate Competence Center Embedded Linux
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: High vm-exit latencies during kvm boot-up/shutdown
[not found] ` <471E0818.6060405-kv7WeFo6aLtBDgjK7y7TUQ@public.gmane.org>
2007-10-23 14:44 ` Jan Kiszka
@ 2007-10-23 15:26 ` Avi Kivity
[not found] ` <471E1290.2000208-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
1 sibling, 1 reply; 40+ messages in thread
From: Avi Kivity @ 2007-10-23 15:26 UTC (permalink / raw)
To: Jan Kiszka; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
Jan Kiszka wrote:
> Avi Kivity wrote:
>
>> Jan Kiszka wrote:
>>
>>> Avi,
>>>
>>> [somehow your mails do not get through to my private account, so I'm
>>> switching]
>>>
>>> Avi Kivity wrote:
>>>
>>>
>>>> Jan Kiszka wrote:
>>>>
>>>>
>>>>> Clarification: I can't precisely tell what code is executed in VM mode,
>>>>> as I don't have qemu or that guest instrumented. I just see the kernel
>>>>> entering VM mode and leaving it again more than 300 us later. So I
>>>>> wonder why this is allowed while some external IRQ is pending.
>>>>>
>>>>>
>>>>>
>>>>>
>>>> How do you know an external interrupt is pending?
>>>>
>>>>
>>> It's the host timer IRQ, programmed to fire in certain intervals (100 us
>>> here). Test case is some latency measurement tool like tglx's cyclictest
>>> or similar programs we use in Xenomai.
>>>
>>>
>>>
>>>> kvm programs the hardware to exit when an external interrupt arrives.
>>>>
>>>>
>>>>
>>> Here is a latency trace I just managed to capture over 2.6.23.1-rt1 with
>>> latest kvm from git hacked into (kvm generally seems to work fine this way):
>>>
>>> ...
>>> qemu-sys-7543 0...1 13897us : vmcs_write16+0xb/0x20 (vmx_save_host_state+0x1a7/0x1c0)
>>> qemu-sys-7543 0...1 13897us : vmcs_writel+0xb/0x30 (vmcs_write16+0x1e/0x20)
>>> qemu-sys-7543 0...1 13898us : segment_base+0xc/0x70 (vmx_save_host_state+0xa0/0x1c0)
>>> qemu-sys-7543 0...1 13898us : vmcs_writel+0xb/0x30 (vmx_save_host_state+0xb0/0x1c0)
>>> qemu-sys-7543 0...1 13898us : segment_base+0xc/0x70 (vmx_save_host_state+0xbf/0x1c0)
>>> qemu-sys-7543 0...1 13898us : vmcs_writel+0xb/0x30 (vmx_save_host_state+0xcf/0x1c0)
>>> qemu-sys-7543 0...1 13898us : load_msrs+0xb/0x40 (vmx_save_host_state+0xe7/0x1c0)
>>> qemu-sys-7543 0...1 13898us : kvm_load_guest_fpu+0x8/0x40 (kvm_vcpu_ioctl_run+0xbf/0x570)
>>> qemu-sys-7543 0D..1 13899us : vmx_vcpu_run+0xc/0x110 (kvm_vcpu_ioctl_run+0x120/0x570)
>>> qemu-sys-7543 0D..1 13899us!: vmcs_writel+0xb/0x30 (vmx_vcpu_run+0x22/0x110)
>>> qemu-sys-7543 0D..1 14344us : vmcs_read32+0xb/0x20 (vmx_vcpu_run+0xc7/0x110)
>>> qemu-sys-7543 0D..1 14345us : vmcs_readl+0x8/0x10 (vmcs_read32+0x16/0x20)
>>> qemu-sys-7543 0D..1 14345us : vmcs_read32+0xb/0x20 (vmx_vcpu_run+0xf4/0x110)
>>> qemu-sys-7543 0D..1 14345us+: vmcs_readl+0x8/0x10 (vmcs_read32+0x16/0x20)
>>> qemu-sys-7543 0D..1 14349us : irq_enter+0xb/0x30 (do_IRQ+0x45/0xc0)
>>> qemu-sys-7543 0D.h1 14350us : do_IRQ+0x73/0xc0 (f8caae24 0 0)
>>> qemu-sys-7543 0D.h1 14351us : handle_level_irq+0xe/0x120 (do_IRQ+0x7d/0xc0)
>>> qemu-sys-7543 0D.h1 14351us : __spin_lock+0xc/0x30 (handle_level_irq+0x24/0x120)
>>> qemu-sys-7543 0D.h2 14352us : mask_and_ack_8259A+0x14/0x120 (handle_level_irq+0x37/0x120)
>>> qemu-sys-7543 0D.h2 14352us+: __spin_lock_irqsave+0x11/0x60 (mask_and_ack_8259A+0x2a/0x120)
>>> qemu-sys-7543 0D.h3 14357us : __spin_unlock_irqrestore+0xc/0x60 (mask_and_ack_8259A+0x7a/0x120)
>>> qemu-sys-7543 0D.h2 14358us : redirect_hardirq+0x8/0x70 (handle_level_irq+0x72/0x120)
>>> qemu-sys-7543 0D.h2 14358us : __spin_unlock+0xb/0x40 (handle_level_irq+0x8e/0x120)
>>> qemu-sys-7543 0D.h1 14358us : handle_IRQ_event+0xe/0x110 (handle_level_irq+0x9a/0x120)
>>> qemu-sys-7543 0D.h1 14359us : timer_interrupt+0xb/0x60 (handle_IRQ_event+0x67/0x110)
>>> qemu-sys-7543 0D.h1 14359us : hrtimer_interrupt+0xe/0x1f0 (timer_interrupt+0x20/0x60)
>>> ...
>>>
>>> One can see 345 us latency between vm-enter and vm-exit in vmx_vcpu_run -
>>> and this while cyclictest runs at a period of 100 us!
>>>
>>> I got the same results over Adeos/I-pipe & Xenomai with the function
>>> tracer there, also pointing to the period while the CPU is in VM mode.
>>>
>>> Anyone any ideas? Greg, I put you on CC as you said you once saw "decent
>>> latencies" with your patches. Are there still magic bits missing in
>>> official kvm?
>>>
>>>
>> No bits missing as far as I know. It should just work.
>>
>> Can you explain some more about the latency tracer? How does it work?
>>
>
> Ah, sorry: The latency tracers in both -rt and I-pipe use gcc's -pg to
> put a call to a function called mcount at the beginning of each compiled
> function. mcount is provided by the tracers and stores the caller
> address, its parent, the current time, and more in a log. An API is
> provided to start and stop the trace, e.g. after someone (kernel or user
> space) detected large wakeup latencies.
>
>
Ok. So it is not interrupt driven, and that's how you get traces in
functions that run with interrupts disabled.
>> Seeing vmx_vcpu_run() in there confuses me, as it always runs with
>> interrupts disabled (it does dispatch NMIs, so we could be seeing an NMI).
>>
>
> The point is that the cyclictest does not find large latencies when kvm
> is not happening to start or stop right now. And if you are thinking
> about NMIs triggered by the kvm code on vm-exit: I also instrumented
> that code path, and it is not taken in case of the long delay.
>
>
Right. With your explanation it all makes sense, and indeed it looks
like the guest is not exiting.
>> Please post a disassembly of your vmx_vcpu_run so we can interpret the
>> offsets.
>>
>
> Here it comes:
>
> 00002df0 <vmx_vcpu_run>:
> 2df0: 55 push %ebp
> 2df1: 89 e5 mov %esp,%ebp
> 2df3: 53 push %ebx
> 2df4: 83 ec 08 sub $0x8,%esp
> 2df7: e8 fc ff ff ff call 2df8 <vmx_vcpu_run+0x8>
> 2dfc: 8b 5d 08 mov 0x8(%ebp),%ebx
> 2dff: 0f 20 c0 mov %cr0,%eax
> 2e02: 89 44 24 04 mov %eax,0x4(%esp)
> 2e06: c7 04 24 00 6c 00 00 movl $0x6c00,(%esp)
> 2e0d: e8 be d8 ff ff call 6d0 <vmcs_writel>
>
first trace
> 2e12: 8b 83 80 0d 00 00 mov 0xd80(%ebx),%eax
> 2e18: ba 14 6c 00 00 mov $0x6c14,%edx
> 2e1d: 89 d9 mov %ebx,%ecx
> 2e1f: 60 pusha
> 2e20: 51 push %ecx
> 2e21: 0f 79 d4 vmwrite %esp,%edx
> 2e24: 83 f8 00 cmp $0x0,%eax
> 2e27: 8b 81 78 01 00 00 mov 0x178(%ecx),%eax
> 2e2d: 0f 22 d0 mov %eax,%cr2
> 2e30: 8b 81 50 01 00 00 mov 0x150(%ecx),%eax
> 2e36: 8b 99 5c 01 00 00 mov 0x15c(%ecx),%ebx
> 2e3c: 8b 91 58 01 00 00 mov 0x158(%ecx),%edx
> 2e42: 8b b1 68 01 00 00 mov 0x168(%ecx),%esi
> 2e48: 8b b9 6c 01 00 00 mov 0x16c(%ecx),%edi
> 2e4e: 8b a9 64 01 00 00 mov 0x164(%ecx),%ebp
> 2e54: 8b 89 54 01 00 00 mov 0x154(%ecx),%ecx
> 2e5a: 75 05 jne 2e61 <vmx_vcpu_run+0x71>
> 2e5c: 0f 01 c2 vmlaunch
> 2e5f: eb 03 jmp 2e64 <vmx_vcpu_run+0x74>
> 2e61: 0f 01 c3 vmresume
> 2e64: 87 0c 24 xchg %ecx,(%esp)
> 2e67: 89 81 50 01 00 00 mov %eax,0x150(%ecx)
> 2e6d: 89 99 5c 01 00 00 mov %ebx,0x15c(%ecx)
> 2e73: ff 34 24 pushl (%esp)
> 2e76: 8f 81 54 01 00 00 popl 0x154(%ecx)
> 2e7c: 89 91 58 01 00 00 mov %edx,0x158(%ecx)
> 2e82: 89 b1 68 01 00 00 mov %esi,0x168(%ecx)
> 2e88: 89 b9 6c 01 00 00 mov %edi,0x16c(%ecx)
> 2e8e: 89 a9 64 01 00 00 mov %ebp,0x164(%ecx)
> 2e94: 0f 20 d0 mov %cr2,%eax
> 2e97: 89 81 78 01 00 00 mov %eax,0x178(%ecx)
> 2e9d: 8b 0c 24 mov (%esp),%ecx
> 2ea0: 59 pop %ecx
> 2ea1: 61 popa
> 2ea2: 0f 96 c0 setbe %al
> 2ea5: 88 83 84 0d 00 00 mov %al,0xd84(%ebx)
> 2eab: c7 04 24 24 48 00 00 movl $0x4824,(%esp)
> 2eb2: e8 49 d2 ff ff call 100 <vmcs_read32>
>
second trace
> 2eb7: a8 03 test $0x3,%al
> 2eb9: 0f 94 c0 sete %al
> 2ebc: 0f b6 c0 movzbl %al,%eax
> 2ebf: 89 83 28 01 00 00 mov %eax,0x128(%ebx)
> 2ec5: b8 7b 00 00 00 mov $0x7b,%eax
> 2eca: 8e d8 mov %eax,%ds
> 2ecc: 8e c0 mov %eax,%es
> 2ece: c7 83 80 0d 00 00 01 movl $0x1,0xd80(%ebx)
> 2ed5: 00 00 00
> 2ed8: c7 04 24 04 44 00 00 movl $0x4404,(%esp)
> 2edf: e8 1c d2 ff ff call 100 <vmcs_read32>
> 2ee4: 25 00 07 00 00 and $0x700,%eax
> 2ee9: 3d 00 02 00 00 cmp $0x200,%eax
> 2eee: 75 02 jne 2ef2 <vmx_vcpu_run+0x102>
> 2ef0: cd 02 int $0x2
> 2ef2: 83 c4 08 add $0x8,%esp
> 2ef5: 5b pop %ebx
> 2ef6: 5d pop %ebp
> 2ef7: c3 ret
> 2ef8: 90 nop
> 2ef9: 8d b4 26 00 00 00 00 lea 0x0(%esi),%esi
>
> Note that the first, unresolved call here goes to mcount().
>
>
(the -r option to objdump is handy)
Exiting on a pending interrupt is controlled by the vmcs word
PIN_BASED_EXEC_CONTROL, bit PIN_BASED_EXT_INTR_MASK. Can you check (via
vmcs_read32()) that the bit is indeed set?
[if not, a guest can just enter a busy loop and kill a processor]
--
Do not meddle in the internals of kernels, for they are subtle and quick to panic.
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: High vm-exit latencies during kvm boot-up/shutdown
[not found] ` <471E1290.2000208-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2007-10-23 15:59 ` Jan Kiszka
[not found] ` <471E1A77.90808-kv7WeFo6aLtBDgjK7y7TUQ@public.gmane.org>
0 siblings, 1 reply; 40+ messages in thread
From: Jan Kiszka @ 2007-10-23 15:59 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
Avi Kivity wrote:
> Jan Kiszka wrote:
>> Avi Kivity wrote:
>>> Please post a disassembly of your vmx_vcpu_run so we can interpret the
>>> offsets.
>>>
>> Here it comes:
>>
>> 00002df0 <vmx_vcpu_run>:
>> 2df0: 55 push %ebp
>> 2df1: 89 e5 mov %esp,%ebp
>> 2df3: 53 push %ebx
>> 2df4: 83 ec 08 sub $0x8,%esp
>> 2df7: e8 fc ff ff ff call 2df8 <vmx_vcpu_run+0x8>
>> 2dfc: 8b 5d 08 mov 0x8(%ebp),%ebx
>> 2dff: 0f 20 c0 mov %cr0,%eax
>> 2e02: 89 44 24 04 mov %eax,0x4(%esp)
>> 2e06: c7 04 24 00 6c 00 00 movl $0x6c00,(%esp)
>> 2e0d: e8 be d8 ff ff call 6d0 <vmcs_writel>
>>
>
> first trace
>
>
>> 2e12: 8b 83 80 0d 00 00 mov 0xd80(%ebx),%eax
>> 2e18: ba 14 6c 00 00 mov $0x6c14,%edx
>> 2e1d: 89 d9 mov %ebx,%ecx
>> 2e1f: 60 pusha
>> 2e20: 51 push %ecx
>> 2e21: 0f 79 d4 vmwrite %esp,%edx
>> 2e24: 83 f8 00 cmp $0x0,%eax
>> 2e27: 8b 81 78 01 00 00 mov 0x178(%ecx),%eax
>> 2e2d: 0f 22 d0 mov %eax,%cr2
>> 2e30: 8b 81 50 01 00 00 mov 0x150(%ecx),%eax
>> 2e36: 8b 99 5c 01 00 00 mov 0x15c(%ecx),%ebx
>> 2e3c: 8b 91 58 01 00 00 mov 0x158(%ecx),%edx
>> 2e42: 8b b1 68 01 00 00 mov 0x168(%ecx),%esi
>> 2e48: 8b b9 6c 01 00 00 mov 0x16c(%ecx),%edi
>> 2e4e: 8b a9 64 01 00 00 mov 0x164(%ecx),%ebp
>> 2e54: 8b 89 54 01 00 00 mov 0x154(%ecx),%ecx
>> 2e5a: 75 05 jne 2e61 <vmx_vcpu_run+0x71>
>> 2e5c: 0f 01 c2 vmlaunch
>> 2e5f: eb 03 jmp 2e64 <vmx_vcpu_run+0x74>
>> 2e61: 0f 01 c3 vmresume
>> 2e64: 87 0c 24 xchg %ecx,(%esp)
>> 2e67: 89 81 50 01 00 00 mov %eax,0x150(%ecx)
>> 2e6d: 89 99 5c 01 00 00 mov %ebx,0x15c(%ecx)
>> 2e73: ff 34 24 pushl (%esp)
>> 2e76: 8f 81 54 01 00 00 popl 0x154(%ecx)
>> 2e7c: 89 91 58 01 00 00 mov %edx,0x158(%ecx)
>> 2e82: 89 b1 68 01 00 00 mov %esi,0x168(%ecx)
>> 2e88: 89 b9 6c 01 00 00 mov %edi,0x16c(%ecx)
>> 2e8e: 89 a9 64 01 00 00 mov %ebp,0x164(%ecx)
>> 2e94: 0f 20 d0 mov %cr2,%eax
>> 2e97: 89 81 78 01 00 00 mov %eax,0x178(%ecx)
>> 2e9d: 8b 0c 24 mov (%esp),%ecx
>> 2ea0: 59 pop %ecx
>> 2ea1: 61 popa
>> 2ea2: 0f 96 c0 setbe %al
>> 2ea5: 88 83 84 0d 00 00 mov %al,0xd84(%ebx)
>> 2eab: c7 04 24 24 48 00 00 movl $0x4824,(%esp)
>> 2eb2: e8 49 d2 ff ff call 100 <vmcs_read32>
>>
>
> second trace
>
>> 2eb7: a8 03 test $0x3,%al
>> 2eb9: 0f 94 c0 sete %al
>> 2ebc: 0f b6 c0 movzbl %al,%eax
>> 2ebf: 89 83 28 01 00 00 mov %eax,0x128(%ebx)
>> 2ec5: b8 7b 00 00 00 mov $0x7b,%eax
>> 2eca: 8e d8 mov %eax,%ds
>> 2ecc: 8e c0 mov %eax,%es
>> 2ece: c7 83 80 0d 00 00 01 movl $0x1,0xd80(%ebx)
>> 2ed5: 00 00 00
>> 2ed8: c7 04 24 04 44 00 00 movl $0x4404,(%esp)
>> 2edf: e8 1c d2 ff ff call 100 <vmcs_read32>
>> 2ee4: 25 00 07 00 00 and $0x700,%eax
>> 2ee9: 3d 00 02 00 00 cmp $0x200,%eax
>> 2eee: 75 02 jne 2ef2 <vmx_vcpu_run+0x102>
>> 2ef0: cd 02 int $0x2
>> 2ef2: 83 c4 08 add $0x8,%esp
>> 2ef5: 5b pop %ebx
>> 2ef6: 5d pop %ebp
>> 2ef7: c3 ret
>> 2ef8: 90 nop
>> 2ef9: 8d b4 26 00 00 00 00 lea 0x0(%esi),%esi
>>
>> Note that the first, unresolved call here goes to mcount().
>>
>>
>
> (the -r option to objdump is handy)
Great, one never stops learning. :)
>
> Exiting on a pending interrupt is controlled by the vmcs word
> PIN_BASED_EXEC_CONTROL, bit PIN_BASED_EXT_INTR_MASK. Can you check (via
> vmcs_read32()) that the bit is indeed set?
>
> [if not, a guest can just enter a busy loop and kill a processor]
>
I traced it right before and after the asm block, and in all cases
(including those with low latency exits) it's just 0x1f, which should be
fine.
Earlier, I also checked GUEST_INTERRUPTIBILITY_INFO and
GUEST_ACTIVITY_STATE, but found neither some suspicious state nor a
difference compared to fast exits.
Jan
--
Siemens AG, Corporate Technology, CT SE 2
Corporate Competence Center Embedded Linux
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: High vm-exit latencies during kvm boot-up/shutdown
[not found] ` <471E1A77.90808-kv7WeFo6aLtBDgjK7y7TUQ@public.gmane.org>
@ 2007-10-23 16:14 ` Avi Kivity
[not found] ` <471E1DCD.5040301-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
0 siblings, 1 reply; 40+ messages in thread
From: Avi Kivity @ 2007-10-23 16:14 UTC (permalink / raw)
To: Jan Kiszka; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
Jan Kiszka wrote:
>
>> Exiting on a pending interrupt is controlled by the vmcs word
>> PIN_BASED_EXEC_CONTROL, bit PIN_BASED_EXT_INTR_MASK. Can you check (via
>> vmcs_read32()) that the bit is indeed set?
>>
>> [if not, a guest can just enter a busy loop and kill a processor]
>>
>>
>
> I traced it right before and after the asm block, and in all cases
> (including those with low latency exits) it's just 0x1f, which should be
> fine.
>
>
Yes.
> Earlier, I also checked GUEST_INTERRUPTIBILITY_INFO and
> GUEST_ACTIVITY_STATE, but found neither some suspicious state nor a
> difference compared to fast exits.
>
GUEST_* things are unrelated; these talk about whether the guest is
ready to receive an interrupt, but we don't really care, do we?
I don't have any idea why this is happening. This is completely
unrelated to -rt issues -- this is between kvm and the hardware.
Is this on a uniprocessor system? If not, can you try nosmp?
--
Do not meddle in the internals of kernels, for they are subtle and quick to panic.
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: High vm-exit latencies during kvm boot-up/shutdown
[not found] ` <471E1DCD.5040301-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2007-10-23 16:31 ` Jan Kiszka
[not found] ` <471E21D7.3000309-kv7WeFo6aLtBDgjK7y7TUQ@public.gmane.org>
0 siblings, 1 reply; 40+ messages in thread
From: Jan Kiszka @ 2007-10-23 16:31 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
Avi Kivity wrote:
> Jan Kiszka wrote:
>>> Exiting on a pending interrupt is controlled by the vmcs word
>>> PIN_BASED_EXEC_CONTROL, bit PIN_BASED_EXT_INTR_MASK. Can you check (via
>>> vmcs_read32()) that the bit is indeed set?
>>>
>>> [if not, a guest can just enter a busy loop and kill a processor]
>>>
>>>
>> I traced it right before and after the asm block, and in all cases
>> (including those with low latency exits) it's just 0x1f, which should be
>> fine.
>>
>>
>
> Yes.
>
>> Earlier, I also checked GUEST_INTERRUPTIBILITY_INFO and
>> GUEST_ACTIVITY_STATE, but found neither some suspicious state nor a
>> difference compared to fast exits.
>>
>
> GUEST_* things are unrelated; these talk about whether the guest is
> ready to receive an interrupt, but we don't really care, do we?
Nope, of course (looks like my VMX manual interpreter still needs
improvements...).
>
> I don't have any idea why this is happening. This is completely
> unrelated to -rt issues -- this is between kvm and the hardware.
>
> Is this on a uniprocessor system? If not, can you try nosmp?
It's both: -rt tests were performed with nosmp (-rt locks up under SMP
here), and the Xenomai tests, including the last instrumentation, ran in
SMP mode. So I tend to exclude SMP effects.
Do you have some suggestion how to analyse what the guest is executing
while those IRQs are delayed? As I said, I suspect some interaction of
the Bochs BIOS is triggering the effect, because it only shows up (with
a Linux guest) before the BIOS prints the first messages.
Jan
--
Siemens AG, Corporate Technology, CT SE 2
Corporate Competence Center Embedded Linux
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: High vm-exit latencies during kvm boot-up/shutdown
[not found] ` <471E21D7.3000309-kv7WeFo6aLtBDgjK7y7TUQ@public.gmane.org>
@ 2007-10-23 16:38 ` Avi Kivity
[not found] ` <471E238E.6040005-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
0 siblings, 1 reply; 40+ messages in thread
From: Avi Kivity @ 2007-10-23 16:38 UTC (permalink / raw)
To: Jan Kiszka; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
Jan Kiszka wrote:
> It's both: -rt tests were performed with nosmp (-rt locks up under SMP
> here), and the Xenomai tests, including the last instrumentation, ran in
> SMP mode. So I tend to exclude SMP effects.
>
> Do you have some suggestion how to analyse what the guest is executing
> while those IRQs are delayed? As I said, I suspect some interaction of
> the Bochs BIOS is triggering the effect, because it only shows up (with
> a Linux guest) before the BIOS prints the first messages.
>
The guest is probably spinning. Can you try modifying
user/test/bootstrap.S, around the start label, to look like
start: jmp start
? This can then be executed with
user/kvmctl user/test/bootstrap user/test/access.flat
This will give us something consistent to test. The output of kvm_stat
should also be interesting (should show exits and irq_exits advancing at
10K/sec).
(once Linux starts executing, it takes a lot of pagefault exits for the
mmu, and then a lot of hlt exits once it idles).
--
Do not meddle in the internals of kernels, for they are subtle and quick to panic.
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: High vm-exit latencies during kvm boot-up/shutdown
[not found] ` <471E238E.6040005-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2007-10-23 16:56 ` Jan Kiszka
[not found] ` <471E27B0.1090001-kv7WeFo6aLtBDgjK7y7TUQ@public.gmane.org>
0 siblings, 1 reply; 40+ messages in thread
From: Jan Kiszka @ 2007-10-23 16:56 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
Avi Kivity wrote:
> Jan Kiszka wrote:
>> It's both: -rt tests were performed with nosmp (-rt locks up under SMP
>> here), and the Xenomai tests, including the last instrumentation, ran in
>> SMP mode. So I tend to exclude SMP effects.
>>
>> Do you have some suggestion how to analyse what the guest is executing
>> while those IRQs are delayed? As I said, I suspect some interaction of
>> the Bochs BIOS is triggering the effect, because it only shows up (with
>> a Linux guest) before the BIOS prints the first messages.
>>
>
> The guest is probably spinning. Can you try modifying
> user/test/bootstrap.S, around the start label, to look like
>
> start: jmp start
>
> ? This can then be executed with
>
> user/kvmctl user/test/bootstrap user/test/access.flat
Access flat is not being built here (kvm-48), or I'm pressing the wrong
button. Forcing it with "make test/access.flat" gives build errors.
What's the trick?
>
> This will give us something consistent to test. The output of kvm_stat
> should also be interesting (should show exits and irq_exits advancing at
> 10K/sec).
You probably mean with the artificial test, not with qemu/kvm.
>
>
> (once Linux starts executing, it takes a lot of pagefault exits for the
> mmu, and then a lot of hlt exits once it idles).
>
>
Jan
--
Siemens AG, Corporate Technology, CT SE 2
Corporate Competence Center Embedded Linux
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: High vm-exit latencies during kvm boot-up/shutdown
[not found] ` <471E27B0.1090001-kv7WeFo6aLtBDgjK7y7TUQ@public.gmane.org>
@ 2007-10-23 17:05 ` Avi Kivity
[not found] ` <471E29C5.1060807-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
0 siblings, 1 reply; 40+ messages in thread
From: Avi Kivity @ 2007-10-23 17:05 UTC (permalink / raw)
To: Jan Kiszka; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
Jan Kiszka wrote:
> Avi Kivity wrote:
>
>> Jan Kiszka wrote:
>>
>>> It's both: -rt tests were performed with nosmp (-rt locks up under SMP
>>> here), and the Xenomai tests, including the last instrumentation, ran in
>>> SMP mode. So I tend to exclude SMP effects.
>>>
>>> Do you have some suggestion how to analyse what the guest is executing
>>> while those IRQs are delayed? As I said, I suspect some interaction of
>>> the Bochs BIOS is triggering the effect, because it only shows up (with
>>> a Linux guest) before the BIOS prints the first messages.
>>>
>>>
>> The guest is probably spinning. Can you try modifying
>> user/test/bootstrap.S, around the start label, to look like
>>
>> start: jmp start
>>
>> ? This can then be executed with
>>
>> user/kvmctl user/test/bootstrap user/test/access.flat
>>
>
> Access flat is not being built here (kvm-48), or I'm pressing the wrong
> button. Forcing it with "make test/access.flat" gives build errors.
> What's the trick?
>
>
Sorry, it's x86_64 specific. Just pick any other small file, since it
won't be executed anyway, and there are no format requirements.
>> This will give us something consistent to test. The output of kvm_stat
>> should also be interesting (should show exits and irq_exits advancing at
>> 10K/sec).
>>
>
> You probably mean with the artificial test, not with qemu/kvm.
>
>
Yes.
--
Do not meddle in the internals of kernels, for they are subtle and quick to panic.
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: High vm-exit latencies during kvm boot-up/shutdown
[not found] ` <471E29C5.1060807-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2007-10-24 8:37 ` Jan Kiszka
[not found] ` <471F0464.4000704-kv7WeFo6aLtBDgjK7y7TUQ@public.gmane.org>
0 siblings, 1 reply; 40+ messages in thread
From: Jan Kiszka @ 2007-10-24 8:37 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
Avi Kivity wrote:
> Jan Kiszka wrote:
>> Avi Kivity wrote:
>>
>>> Jan Kiszka wrote:
>>>
>>>> It's both: -rt tests were performed with nosmp (-rt locks up under SMP
>>>> here), and the Xenomai tests, including the last instrumentation, ran in
>>>> SMP mode. So I tend to exclude SMP effects.
>>>>
>>>> Do you have some suggestion how to analyse what the guest is executing
>>>> while those IRQs are delayed? As I said, I suspect some interaction of
>>>> the Bochs BIOS is triggering the effect, because it only shows up (with
>>>> a Linux guest) before the BIOS prints the first messages.
>>>>
>>>>
>>> The guest is probably spinning. Can you try modifying
>>> user/test/bootstrap.S, around the start label, to look like
>>>
>>> start: jmp start
>>>
>>> ? This can then be executed with
>>>
>>> user/kvmctl user/test/bootstrap user/test/access.flat
>>>
>> Access flat is not being built here (kvm-48), or I'm pressing the wrong
>> button. Forcing it with "make test/access.flat" gives build errors.
>> What's the trick?
>>
>>
>
> Sorry, it's x86_64 specific. Just pick any other small file, since it
> won't be executed anyway, and there are no format requirements.
>
>>> This will give us something consistent to test. The output of kvm_stat
>>> should also be interesting (should show exits and irq_exits advancing at
>>> 10K/sec).
>>>
>> You probably mean with the artificial test, not with qemu/kvm.
>>
>>
>
> Yes.
>
I ran
user/kvmctl user/test/bootstrap user/test/smp.flat
with the busy loop hacked into bootstrap, but I got no latency spots
this time. And what should I look for in the output of kvm_stat?
Jan
--
Siemens AG, Corporate Technology, CT SE 2
Corporate Competence Center Embedded Linux
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: High vm-exit latencies during kvm boot-up/shutdown
[not found] ` <471F0464.4000704-kv7WeFo6aLtBDgjK7y7TUQ@public.gmane.org>
@ 2007-10-24 8:52 ` Avi Kivity
[not found] ` <471F07C0.8040104-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
0 siblings, 1 reply; 40+ messages in thread
From: Avi Kivity @ 2007-10-24 8:52 UTC (permalink / raw)
To: Jan Kiszka; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
Jan Kiszka wrote:
> I ran
>
> user/kvmctl user/test/bootstrap user/test/smp.flat
>
> with the busy loop hacked into bootstrap, but I got no latency spots
> this time. And what should I look for in the output of kvm_stat?
>
>
The first numeric column is the total number of exits; the second is the
rate of change (per second). For a spinning workload, both irq_exits
and exits rate should equal the timer frequency.
--
error compiling committee.c: too many arguments to function
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: High vm-exit latencies during kvm boot-up/shutdown
[not found] ` <471F07C0.8040104-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2007-10-24 9:16 ` Jan Kiszka
[not found] ` <471F0D57.3020209-kv7WeFo6aLtBDgjK7y7TUQ@public.gmane.org>
0 siblings, 1 reply; 40+ messages in thread
From: Jan Kiszka @ 2007-10-24 9:16 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
Avi Kivity wrote:
> Jan Kiszka wrote:
>> I ran
>>
>> user/kvmctl user/test/bootstrap user/test/smp.flat
>>
>> with the busy loop hacked into bootstrap, but I got no latency spots
>> this time. And what should I look for in the output of kvm_stat?
>>
>>
>
> The first numeric column is the total number of exits; the second is the
> rate of change (per second). For a spinning workload, both irq_exits
> and exits rate should equal the timer frequency.
It matches roughly, but this isn't surprising given the lack of latency
peaks in this scenario.
Jan
--
Siemens AG, Corporate Technology, CT SE 2
Corporate Competence Center Embedded Linux
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: High vm-exit latencies during kvm boot-up/shutdown
[not found] ` <471F0D57.3020209-kv7WeFo6aLtBDgjK7y7TUQ@public.gmane.org>
@ 2007-10-24 9:33 ` Avi Kivity
[not found] ` <471F116D.9080903-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
0 siblings, 1 reply; 40+ messages in thread
From: Avi Kivity @ 2007-10-24 9:33 UTC (permalink / raw)
To: Jan Kiszka; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
Jan Kiszka wrote:
> Avi Kivity wrote:
>
>> Jan Kiszka wrote:
>>
>>> I ran
>>>
>>> user/kvmctl user/test/bootstrap user/test/smp.flat
>>>
>>> with the busy loop hacked into bootstrap, but I got no latency spots
>>> this time. And what should I look for in the output of kvm_stat?
>>>
>>>
>>>
>> The first numeric column is the total number of exits; the second is the
>> rate of change (per second). For a spinning workload, both irq_exits
>> and exits rate should equal the timer frequency.
>>
>
> It matches roughly, but this isn't surprising given the lack of latency
> peaks in this scenario.
>
>
Okay, that's expected. But I really can't think what causes the latency
spike you see running the bios code. 300 microseconds is enough for 3000
cache misses. No instruction can cause so many except rep movs and
friends, and these are supposed to be preemptible.
You might try isolating it by adding code to output markers to some I/O
port and scattering it throughout the bios. A binary search should find
the problem spot quickly.
--
error compiling committee.c: too many arguments to function
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: High vm-exit latencies during kvm boot-up/shutdown
[not found] ` <471F116D.9080903-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2007-10-24 16:22 ` Jan Kiszka
[not found] ` <471F7143.8040406-kv7WeFo6aLtBDgjK7y7TUQ@public.gmane.org>
0 siblings, 1 reply; 40+ messages in thread
From: Jan Kiszka @ 2007-10-24 16:22 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
Avi Kivity wrote:
> Jan Kiszka wrote:
>> Avi Kivity wrote:
>>
>>> Jan Kiszka wrote:
>>>
>>>> I ran
>>>>
>>>> user/kvmctl user/test/bootstrap user/test/smp.flat
>>>>
>>>> with the busy loop hacked into bootstrap, but I got no latency spots
>>>> this time. And what should I look for in the output of kvm_stat?
>>>>
>>>>
>>>>
>>> The first numeric column is the total number of exits; the second is the
>>> rate of change (per second). For a spinning workload, both irq_exits
>>> and exits rate should equal the timer frequency.
>>>
>> It matches roughly, but this isn't surprising given the lack of latency
>> peaks in this scenario.
>>
>>
>
> Okay, that's expected. But I really can't think what causes the latency
> spike you see running the bios code. 300 microseconds is enough for 3000
> cache misses. No instruction can cause so many except rep movs and
> friends, and these are supposed to be preemptible.
>
> You might try isolating it by adding code to output markers to some I/O
> port and scattering it throughout the bios. A binary search should find
> the problem spot quickly.
>
Got it! It's wbinvd from smm_init in rombios32.c! Anyone any comments on
this?
Jan (who's leaving now...)
--
Siemens AG, Corporate Technology, CT SE 2
Corporate Competence Center Embedded Linux
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: High vm-exit latencies during kvm boot-up/shutdown
[not found] ` <471F7143.8040406-kv7WeFo6aLtBDgjK7y7TUQ@public.gmane.org>
@ 2007-10-24 16:52 ` Avi Kivity
[not found] ` <471F7865.7070506-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
0 siblings, 1 reply; 40+ messages in thread
From: Avi Kivity @ 2007-10-24 16:52 UTC (permalink / raw)
To: Jan Kiszka; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
Jan Kiszka wrote:
> Avi Kivity wrote:
>
>> Jan Kiszka wrote:
>>
>>> Avi Kivity wrote:
>>>
>>>
>>>> Jan Kiszka wrote:
>>>>
>>>>
>>>>> I ran
>>>>>
>>>>> user/kvmctl user/test/bootstrap user/test/smp.flat
>>>>>
>>>>> with the busy loop hacked into bootstrap, but I got no latency spots
>>>>> this time. And what should I look for in the output of kvm_stat?
>>>>>
>>>>>
>>>>>
>>>>>
>>>> The first numeric column is the total number of exits; the second is the
>>>> rate of change (per second). For a spinning workload, both irq_exits
>>>> and exits rate should equal the timer frequency.
>>>>
>>>>
>>> It matches roughly, but this isn't surprising given the lack of latency
>>> peaks in this scenario.
>>>
>>>
>>>
>> Okay, that's expected. But I really can't think what causes the latency
>> spike you see running the bios code. 300 microseconds is enough for 3000
>> cache misses. No instruction can cause so many except rep movs and
>> friends, and these are supposed to be preemptible.
>>
>> You might try isolating it by adding code to output markers to some I/O
>> port and scattering it throughout the bios. A binary search should find
>> the problem spot quickly.
>>
>>
>
> Got it! It's wbinvd from smm_init in rombios32.c! Anyone any comments on
> this?
>
Ha! A real life 300usec instruction!
Unfortunately, it cannot be trapped on Intel (it can be trapped on
AMD). Looks like a minor hole in VT, as a guest can generate very high
latencies this way.
For our bios, we can remove it, but there's no way to ensure an
untrusted guest doesn't use it.
--
error compiling committee.c: too many arguments to function
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: High vm-exit latencies during kvm boot-up/shutdown
[not found] ` <471F7865.7070506-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2007-10-25 17:50 ` Jan Kiszka
[not found] ` <4720D76D.7070300-kv7WeFo6aLtBDgjK7y7TUQ@public.gmane.org>
0 siblings, 1 reply; 40+ messages in thread
From: Jan Kiszka @ 2007-10-25 17:50 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
Avi Kivity wrote:
> Jan Kiszka wrote:
>> Got it! It's wbinvd from smm_init in rombios32.c! Anyone any comments on
>> this?
>>
>
> Ha! A real life 300usec instruction!
>
> Unfortunately, it cannot be trapped on Intel (it can be trapped on
> AMD). Looks like a minor hole in VT, as a guest can generate very high
> latencies this way.
I unfortunately do not have an AMD box at hand to test (I obviously
ordered the wrong processor then...), but does this mean it is trapped by
kvm-amd already? Or would there be more work required on that side?
>
> For our bios, we can remove it, but there's no way to ensure an
> untrusted guest doesn't use it.
Yep. Any ideas if and at which stages (bootup, normal operation) Windows
issues wbinvd noise?
Jan
--
Siemens AG, Corporate Technology, CT SE 2
Corporate Competence Center Embedded Linux
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: High vm-exit latencies during kvm boot-up/shutdown
[not found] ` <4720D76D.7070300-kv7WeFo6aLtBDgjK7y7TUQ@public.gmane.org>
@ 2007-10-25 18:02 ` Avi Kivity
[not found] ` <4720DA42.6090300-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
0 siblings, 1 reply; 40+ messages in thread
From: Avi Kivity @ 2007-10-25 18:02 UTC (permalink / raw)
To: Jan Kiszka; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
Jan Kiszka wrote:
> Avi Kivity wrote:
>
>> Jan Kiszka wrote:
>>
>>> Got it! It's wbinvd from smm_init in rombios32.c! Anyone any comments on
>>> this?
>>>
>>>
>> Ha! A real life 300usec instruction!
>>
>> Unfortunately, it cannot be trapped on Intel (it can be trapped on
>> AMD). Looks like a minor hole in VT, as a guest can generate very high
>> latencies this way.
>>
>
> I unfortunately do not have an AMD box at hand to test (I obviously
> order the wrong processor then...), but does this mean it is trapped by
> kvm-amd already? Or would there be more work required on that side?
>
>
There's a two-liner required to make it work. I'll add it soon.
>> For our bios, we can remove it, but there's no way to ensure an
>> untrusted guest doesn't use it.
>>
>
> Yep. Any ideas if and at which stages (bootup, normal operation) Windows
> is issues wbinvd noise?
>
No idea. I'll check it with an AMD machine.
I'd be surprised if it did that during normal operation. It's also
conceivable that it doesn't issue it at all.
--
error compiling committee.c: too many arguments to function
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: High vm-exit latencies during kvm boot-up/shutdown
[not found] ` <4720DA42.6090300-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2007-10-26 1:41 ` Dong, Eddie
[not found] ` <10EA09EFD8728347A513008B6B0DA77A02482827-wq7ZOvIWXbNpB2pF5aRoyrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
0 siblings, 1 reply; 40+ messages in thread
From: Dong, Eddie @ 2007-10-26 1:41 UTC (permalink / raw)
To: Avi Kivity, Jan Kiszka; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
>
> There's a two-liner required to make it work. I'll add it soon.
>
But you still need to issue WBINVD on all pCPUs, which just moves the
non-responsive time from one place to another, no?
Eddie
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: High vm-exit latencies during kvm boot-up/shutdown
[not found] ` <10EA09EFD8728347A513008B6B0DA77A02482827-wq7ZOvIWXbNpB2pF5aRoyrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
@ 2007-10-26 8:20 ` Avi Kivity
[not found] ` <4721A350.7030409-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
0 siblings, 1 reply; 40+ messages in thread
From: Avi Kivity @ 2007-10-26 8:20 UTC (permalink / raw)
To: Dong, Eddie; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, Jan Kiszka
Dong, Eddie wrote:
>> There's a two-liner required to make it work. I'll add it soon.
>>
>>
> But you still need to issue WBINVD on all pCPUs, which just moves the
> non-responsive time from one place to another, no?
>
You don't actually need to emulate wbinvd, you can just ignore it.
The only reason I can think of to use wbinvd is if you're taking a cpu
down (for a deep sleep state, or if you're shutting it off). A guest
need not do that.
Any other reason? dma? all dma today is cache-coherent, no?
--
error compiling committee.c: too many arguments to function
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: High vm-exit latencies during kvm boot-up/shutdown
[not found] ` <4721A350.7030409-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2007-10-26 8:32 ` Dong, Eddie
[not found] ` <10EA09EFD8728347A513008B6B0DA77A02482B96-wq7ZOvIWXbNpB2pF5aRoyrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
0 siblings, 1 reply; 40+ messages in thread
From: Dong, Eddie @ 2007-10-26 8:32 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, Jan Kiszka
Avi Kivity wrote:
> Dong, Eddie wrote:
>>> There's a two-liner required to make it work. I'll add it soon.
>>>
>>>
>> But you still need to issue WBINVD on all pCPUs, which just moves the
>> non-responsive time from one place to another, no?
>>
>
> You don't actually need to emulate wbinvd, you can just ignore it.
>
> The only reason I can think of to use wbinvd is if you're taking a cpu
> down (for a deep sleep state, or if you're shutting it off). A guest
> need not do that.
>
> Any other reason? dma? all dma today is cache-coherent, no?
>
For legacy PCI devices, yes, it is cache-coherent, but for PCIe devices
it is no longer a must. A PCIe device may not generate snoop cycles
and thus requires the OS to flush the cache.
For example, a guest with a directly assigned device, say audio, can copy
data to the DMA buffer and then issue wbinvd to flush the cache out and
ask the HW DMA to output.
Thx,Eddie
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: High vm-exit latencies during kvm boot-up/shutdown
[not found] ` <10EA09EFD8728347A513008B6B0DA77A02482B96-wq7ZOvIWXbNpB2pF5aRoyrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
@ 2007-10-26 9:03 ` Jan Kiszka
[not found] ` <4721AD49.7010405-kv7WeFo6aLtBDgjK7y7TUQ@public.gmane.org>
2007-10-26 9:18 ` Avi Kivity
1 sibling, 1 reply; 40+ messages in thread
From: Jan Kiszka @ 2007-10-26 9:03 UTC (permalink / raw)
To: Dong, Eddie; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, Avi Kivity
Dong, Eddie wrote:
> Avi Kivity wrote:
>> Dong, Eddie wrote:
>>>> There's a two-liner required to make it work. I'll add it soon.
>>>>
>>>>
>>> But you still need to issue WBINVD on all pCPUs, which just moves the
>>> non-responsive time from one place to another, no?
>>>
>> You don't actually need to emulate wbinvd, you can just ignore it.
>>
>> The only reason I can think of to use wbinvd is if you're taking a cpu
>> down (for a deep sleep state, or if you're shutting it off). A guest
>> need not do that.
>>
>> Any other reason? dma? all dma today is cache-coherent, no?
>>
>> For legacy PCI device, yes it is cache-coherent, but for PCIe devices,
> it is no longer a must. A PCIe device may not generate snoopy cycle
> and thus require OS to flush cache.
>
> For example, a guest with direct device, say audio, can copy
> data to dma buffer and then wbinvd to flush cache out and ask HW
> DMA to output.
So if you want the higher performance of PCIe you need
performance-killing wbinvd (not to speak of latency)? That sounds a bit
contradictory to me. So this is also true for native PCIe usage?
What really frightens me about wbinvd is that its latency "nicely"
scales with the cache sizes. And I think my observed latency is far from
being the worst case. In a different experiment, I once measured wbinvd
latencies of a few milliseconds... :(
Jan
--
Siemens AG, Corporate Technology, CT SE 2
Corporate Competence Center Embedded Linux
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: High vm-exit latencies during kvm boot-up/shutdown
[not found] ` <4721AD49.7010405-kv7WeFo6aLtBDgjK7y7TUQ@public.gmane.org>
@ 2007-10-26 9:18 ` Dong, Eddie
[not found] ` <10EA09EFD8728347A513008B6B0DA77A02482BE4-wq7ZOvIWXbNpB2pF5aRoyrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
2007-10-26 9:25 ` Avi Kivity
1 sibling, 1 reply; 40+ messages in thread
From: Dong, Eddie @ 2007-10-26 9:18 UTC (permalink / raw)
To: Jan Kiszka; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, Avi Kivity
Jan Kiszka wrote:
>
> So if you want the higher performance of PCIe you need
> performance-killing wbinvd (not to speak of latency)? That sounds a
> bit contradictory to me. So this is also true for native PCIe usage?
>
Mmm, I won't say so. When you want RT performance, you won't use an
unknown OS such as Windows. If you use your RT Linux, the guest OS
itself should not use wbinvd.
For the Bochs BIOS case, like Avi mentioned, we can simply remove it.
> What really frightens me about wbinvd is that its latency "nicely"
> scales with the cache sizes. And I think my observed latency
> is far from
> being the worst case. In a different experiment, I once measured
> wbinvd latencies of a few milliseconds... :(
>
I don't know the details, but it could be. RT usage, though, can easily
avoid this instruction.
thx,eddie
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: High vm-exit latencies during kvm boot-up/shutdown
[not found] ` <10EA09EFD8728347A513008B6B0DA77A02482B96-wq7ZOvIWXbNpB2pF5aRoyrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
2007-10-26 9:03 ` Jan Kiszka
@ 2007-10-26 9:18 ` Avi Kivity
[not found] ` <4721B0FD.9020006-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
1 sibling, 1 reply; 40+ messages in thread
From: Avi Kivity @ 2007-10-26 9:18 UTC (permalink / raw)
To: Dong, Eddie; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, Jan Kiszka
Dong, Eddie wrote:
> Avi Kivity wrote:
>
>> Dong, Eddie wrote:
>>
>>>> There's a two-liner required to make it work. I'll add it soon.
>>>>
>>>>
>>>>
>>> But you still need to issue WBINVD on all pCPUs, which just moves the
>>> non-responsive time from one place to another, no?
>>>
>>>
>> You don't actually need to emulate wbinvd, you can just ignore it.
>>
>> The only reason I can think of to use wbinvd is if you're taking a cpu
>> down (for a deep sleep state, or if you're shutting it off). A guest
>> need not do that.
>>
>> Any other reason? dma? all dma today is cache-coherent, no?
>>
>>
>> For legacy PCI device, yes it is cache-coherent, but for PCIe devices,
> it is no longer a must. A PCIe device may not generate snoopy cycle
> and thus require OS to flush cache.
>
> For example, a guest with direct device, say audio, can copy
> data to dma buffer and then wbinvd to flush cache out and ask HW
> DMA to output.
>
Okay. In that case the host can emulate wbinvd by using the clflush
instruction, which is much faster (although overall execution time may
be higher), maintaining real-time response times.
--
error compiling committee.c: too many arguments to function
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: High vm-exit latencies during kvm boot-up/shutdown
[not found] ` <4721AD49.7010405-kv7WeFo6aLtBDgjK7y7TUQ@public.gmane.org>
2007-10-26 9:18 ` Dong, Eddie
@ 2007-10-26 9:25 ` Avi Kivity
1 sibling, 0 replies; 40+ messages in thread
From: Avi Kivity @ 2007-10-26 9:25 UTC (permalink / raw)
To: Jan Kiszka; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
Jan Kiszka wrote:
> Dong, Eddie wrote:
>
>> Avi Kivity wrote:
>>
>>> Dong, Eddie wrote:
>>>
>>>>> There's a two-liner required to make it work. I'll add it soon.
>>>>>
>>>>>
>>>>>
>>>> But you still need to issue WBINVD on all pCPUs, which just moves the
>>>> non-responsive time from one place to another, no?
>>>>
>>>>
>>> You don't actually need to emulate wbinvd, you can just ignore it.
>>>
>>> The only reason I can think of to use wbinvd is if you're taking a cpu
>>> down (for a deep sleep state, or if you're shutting it off). A guest
>>> need not do that.
>>>
>>> Any other reason? dma? all dma today is cache-coherent, no?
>>>
>>>
>> For legacy PCI device, yes it is cache-coherent, but for PCIe devices,
>> it is no longer a must. A PCIe device may not generate snoopy cycle
>> and thus require OS to flush cache.
>>
>> For example, a guest with direct device, say audio, can copy
>> data to dma buffer and then wbinvd to flush cache out and ask HW
>> DMA to output.
>>
>
> So if you want the higher performance of PCIe you need
> performance-killing wbinvd (not to speak of latency)? That sounds a bit
> contradictory to me. So this is also true for native PCIe usage?
>
>
Yes, good point. wbinvd is not only expensive in that it takes a long
time to execute, it blows your caches so that anything that executes
afterwards takes a huge hit.
--
error compiling committee.c: too many arguments to function
* Re: High vm-exit latencies during kvm boot-up/shutdown
[not found] ` <4721B0FD.9020006-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2007-10-26 9:36 ` Dong, Eddie
[not found] ` <10EA09EFD8728347A513008B6B0DA77A02482BFE-wq7ZOvIWXbNpB2pF5aRoyrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
0 siblings, 1 reply; 40+ messages in thread
From: Dong, Eddie @ 2007-10-26 9:36 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, Jan Kiszka
Avi Kivity wrote:
> Dong, Eddie wrote:
>> Avi Kivity wrote:
>>
>>> Dong, Eddie wrote:
>>>
>>>>> There's a two-liner required to make it work. I'll add it soon.
>>>>>
>>>>>
>>>>>
>>>> But you still need to issue WBINVD on all pCPUs, which just moves
>>>> the non-responsive time from one place to another, no?
>>>>
>>>>
>>> You don't actually need to emulate wbinvd, you can just ignore it.
>>>
>>> The only reason I can think of to use wbinvd is if you're taking a
>>> cpu down (for a deep sleep state, or if you're shutting it off). A
>>> guest need not do that.
>>>
>>> Any other reason? dma? all dma today is cache-coherent, no?
>>>
>>>
>> For legacy PCI devices, yes, it is cache-coherent, but for PCIe
>> devices it is no longer a must. A PCIe device may not generate
>> snoop cycles and thus requires the OS to flush the cache.
>>
>> For example, a guest with a directly assigned device, say audio, can
>> copy data to the DMA buffer and then use wbinvd to flush the cache
>> before asking the HW DMA to output.
>>
>
> Okay. In that case the host can emulate wbinvd by using the clflush
> instruction, which is much faster (although overall execution time may
> be higher), maintaining real-time response times.
Faster? Maybe.
The issue is that clflush takes a virtual address as its parameter, so
KVM needs to map the gpa first and then do the flush.
With this additional overhead, I am not sure which one is faster. But
yes, this is the direction we may move toward to reduce denial of
service (flushing the host's or another VM's cache slows down the whole
system).
Eddie
* Re: High vm-exit latencies during kvm boot-up/shutdown
[not found] ` <10EA09EFD8728347A513008B6B0DA77A02482BE4-wq7ZOvIWXbNpB2pF5aRoyrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
@ 2007-10-26 9:37 ` Dor Laor
2007-10-26 12:21 ` Avi Kivity
1 sibling, 0 replies; 40+ messages in thread
From: Dor Laor @ 2007-10-26 9:37 UTC (permalink / raw)
To: Dong, Eddie
Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, Jan Kiszka,
Avi Kivity
Dong, Eddie wrote:
>
> Jan Kiszka wrote:
> >
> > So if you want the higher performance of PCIe you need
> > performance-killing wbindv (not to speak of latency)? That sounds a
> > bit contradictory to me. So this is also true for native PCIe usage?
> >
>
> > Mmm, I wouldn't say so. When you want RT performance, you won't use
> > an unknown OS such as Windows. If you use your RT Linux, the guest
> > OS itself should not use wbinvd.
>
The idea is to have an RT Linux host that runs the RT jobs but also to
use it to run management/other code in Windows guests.
This way you can better utilize an otherwise idle RT Linux host while
keeping latencies short for the RT jobs.
Dor.
>
> For the Bochs BIOS case, like Avi mentioned, we can simply remove it.
>
>
> > What really frightens me about wbinvd is that its latency "nicely"
> > scales with the cache sizes. And I think my observed latency
> > is far from
> > being the worst case. In a different experiment, I once measured
> > wbinvd latencies of a few milliseconds... :(
> >
> I don't know the details, but it could be. An RT setup, though, can
> easily avoid this instruction.
>
> thx,eddie
>
* Re: High vm-exit latencies during kvm boot-up/shutdown
[not found] ` <10EA09EFD8728347A513008B6B0DA77A02482BFE-wq7ZOvIWXbNpB2pF5aRoyrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
@ 2007-10-26 9:58 ` Avi Kivity
0 siblings, 0 replies; 40+ messages in thread
From: Avi Kivity @ 2007-10-26 9:58 UTC (permalink / raw)
To: Dong, Eddie; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, Jan Kiszka
Dong, Eddie wrote:
>>>
>> Okay. In that case the host can emulate wbinvd by using the clflush
>> instruction, which is much faster (although overall execution time may
>> be higher), maintaining real-time response times.
>>
>
> Faster? Maybe.
> The issue is that clflush takes a virtual address as its parameter,
> so KVM needs to map the gpa first and then do the flush.
>
> With this additional overhead, I am not sure which one is faster. But
> yes, this is the direction we may move toward to reduce denial of
> service (flushing the host's or another VM's cache slows down the
> whole system).
>
The issue is not the total time to execute (wbinvd is probably faster),
but the time during which interrupts are blocked. wbinvd can block
interrupts for milliseconds, and if your industrial control machine
needs service every 100 microseconds, something breaks.
[Background: this is for running Linux with realtime extensions as host,
with realtime processes on the host doing the control tasks and a guest
doing the GUI and perhaps communications.]
--
error compiling committee.c: too many arguments to function
* Re: High vm-exit latencies during kvm boot-up/shutdown
[not found] ` <10EA09EFD8728347A513008B6B0DA77A02482BE4-wq7ZOvIWXbNpB2pF5aRoyrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
2007-10-26 9:37 ` Dor Laor
@ 2007-10-26 12:21 ` Avi Kivity
[not found] ` <4721DBE4.6040801-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
1 sibling, 1 reply; 40+ messages in thread
From: Avi Kivity @ 2007-10-26 12:21 UTC (permalink / raw)
To: Dong, Eddie; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, Jan Kiszka
Dong, Eddie wrote:
> Jan Kiszka wrote:
>
>> So if you want the higher performance of PCIe, you need the
>> performance-killing wbinvd (not to speak of its latency)? That
>> sounds a bit contradictory to me. Is this also true for native PCIe
>> usage?
>>
>>
>
>> Mmm, I wouldn't say so. When you want RT performance, you won't use
>> an unknown OS such as Windows. If you use your RT Linux, the guest
>> OS itself should not use wbinvd.
>
>
Well, virtualization lets you use an unknown OS without fear of
compromising host security, so a real-time capable hypervisor (which kvm
is, or very nearly) should be able to let you use an unknown OS without
fear of compromising real-time response times. Right now the only
unresolvable issue I see is wbinvd.
--
error compiling committee.c: too many arguments to function
* Re: High vm-exit latencies during kvm boot-up/shutdown
[not found] ` <4721DBE4.6040801-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2007-10-29 8:32 ` Dong, Eddie
0 siblings, 0 replies; 40+ messages in thread
From: Dong, Eddie @ 2007-10-29 8:32 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, Jan Kiszka
Avi Kivity wrote:
> Dong, Eddie wrote:
>> Jan Kiszka wrote:
>>
>>> So if you want the higher performance of PCIe, you need the
>>> performance-killing wbinvd (not to speak of its latency)? That
>>> sounds a bit contradictory to me. Is this also true for native PCIe
>>> usage?
>>>
>>>
>>
>> Mmm, I wouldn't say so. When you want RT performance, you won't use
>> an unknown OS such as Windows. If you use your RT Linux, the guest
>> OS itself should not use wbinvd.
>>
>>
>
> Well, virtualization lets you use an unknown OS without fear of
> compromising host security, so a real-time capable hypervisor (which
> kvm is, or very nearly) should be able to let you use an unknown
> OS without
> fear of compromising real-time response times. Right now the only
> unresolvable issue I see is wbinvd.
>
Got it, I misunderstood the problem!
Eddie
end of thread, other threads:[~2007-10-29 8:32 UTC | newest]
Thread overview: 40+ messages (download: mbox.gz, follow: Atom feed)
-- links below jump to the message on this page --
2007-10-22 23:09 High vm-exit latencies during kvm boot-up/shutdown Jan Kiszka
[not found] ` <471D2D8C.1080202-S0/GAf8tV78@public.gmane.org>
2007-10-23 1:05 ` Dong, Eddie
[not found] ` <10EA09EFD8728347A513008B6B0DA77A02441127-wq7ZOvIWXbNpB2pF5aRoyrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
2007-10-23 6:38 ` Jan Kiszka
[not found] ` <471D96DC.7070809-S0/GAf8tV78@public.gmane.org>
2007-10-23 7:46 ` Dong, Eddie
[not found] ` <10EA09EFD8728347A513008B6B0DA77A024414D9-wq7ZOvIWXbNpB2pF5aRoyrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
2007-10-23 9:08 ` Jan Kiszka
[not found] ` <471DBA1A.2080108-S0/GAf8tV78@public.gmane.org>
2007-10-23 9:46 ` Avi Kivity
[not found] ` <471DC311.2050003-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-10-23 13:30 ` Jan Kiszka
[not found] ` <471DF76B.7040001-kv7WeFo6aLtBDgjK7y7TUQ@public.gmane.org>
2007-10-23 14:19 ` Avi Kivity
[not found] ` <471E02F7.6080408-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-10-23 14:41 ` Jan Kiszka
[not found] ` <471E0818.6060405-kv7WeFo6aLtBDgjK7y7TUQ@public.gmane.org>
2007-10-23 14:44 ` Jan Kiszka
2007-10-23 15:26 ` Avi Kivity
[not found] ` <471E1290.2000208-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-10-23 15:59 ` Jan Kiszka
[not found] ` <471E1A77.90808-kv7WeFo6aLtBDgjK7y7TUQ@public.gmane.org>
2007-10-23 16:14 ` Avi Kivity
[not found] ` <471E1DCD.5040301-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-10-23 16:31 ` Jan Kiszka
[not found] ` <471E21D7.3000309-kv7WeFo6aLtBDgjK7y7TUQ@public.gmane.org>
2007-10-23 16:38 ` Avi Kivity
[not found] ` <471E238E.6040005-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-10-23 16:56 ` Jan Kiszka
[not found] ` <471E27B0.1090001-kv7WeFo6aLtBDgjK7y7TUQ@public.gmane.org>
2007-10-23 17:05 ` Avi Kivity
[not found] ` <471E29C5.1060807-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-10-24 8:37 ` Jan Kiszka
[not found] ` <471F0464.4000704-kv7WeFo6aLtBDgjK7y7TUQ@public.gmane.org>
2007-10-24 8:52 ` Avi Kivity
[not found] ` <471F07C0.8040104-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-10-24 9:16 ` Jan Kiszka
[not found] ` <471F0D57.3020209-kv7WeFo6aLtBDgjK7y7TUQ@public.gmane.org>
2007-10-24 9:33 ` Avi Kivity
[not found] ` <471F116D.9080903-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-10-24 16:22 ` Jan Kiszka
[not found] ` <471F7143.8040406-kv7WeFo6aLtBDgjK7y7TUQ@public.gmane.org>
2007-10-24 16:52 ` Avi Kivity
[not found] ` <471F7865.7070506-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-10-25 17:50 ` Jan Kiszka
[not found] ` <4720D76D.7070300-kv7WeFo6aLtBDgjK7y7TUQ@public.gmane.org>
2007-10-25 18:02 ` Avi Kivity
[not found] ` <4720DA42.6090300-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-10-26 1:41 ` Dong, Eddie
[not found] ` <10EA09EFD8728347A513008B6B0DA77A02482827-wq7ZOvIWXbNpB2pF5aRoyrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
2007-10-26 8:20 ` Avi Kivity
[not found] ` <4721A350.7030409-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-10-26 8:32 ` Dong, Eddie
[not found] ` <10EA09EFD8728347A513008B6B0DA77A02482B96-wq7ZOvIWXbNpB2pF5aRoyrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
2007-10-26 9:03 ` Jan Kiszka
[not found] ` <4721AD49.7010405-kv7WeFo6aLtBDgjK7y7TUQ@public.gmane.org>
2007-10-26 9:18 ` Dong, Eddie
[not found] ` <10EA09EFD8728347A513008B6B0DA77A02482BE4-wq7ZOvIWXbNpB2pF5aRoyrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
2007-10-26 9:37 ` Dor Laor
2007-10-26 12:21 ` Avi Kivity
[not found] ` <4721DBE4.6040801-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-10-29 8:32 ` Dong, Eddie
2007-10-26 9:25 ` Avi Kivity
2007-10-26 9:18 ` Avi Kivity
[not found] ` <4721B0FD.9020006-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-10-26 9:36 ` Dong, Eddie
[not found] ` <10EA09EFD8728347A513008B6B0DA77A02482BFE-wq7ZOvIWXbNpB2pF5aRoyrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
2007-10-26 9:58 ` Avi Kivity
2007-10-23 14:43 ` Gregory Haskins
[not found] ` <1193150619.8343.21.camel-5CR4LY5GPkvLDviKLk5550HKjMygAv58XqFh9Ls21Oc@public.gmane.org>
2007-10-23 14:50 ` Jan Kiszka
2007-10-23 8:16 ` Avi Kivity