From: Avi Kivity
Subject: Re: High vm-exit latencies during kvm boot-up/shutdown
Date: Tue, 23 Oct 2007 16:19:35 +0200
Message-ID: <471E02F7.6080408@qumranet.com>
References: <471D2D8C.1080202@web.de> <10EA09EFD8728347A513008B6B0DA77A02441127@pdsmsx411.ccr.corp.intel.com> <471D96DC.7070809@web.de> <10EA09EFD8728347A513008B6B0DA77A024414D9@pdsmsx411.ccr.corp.intel.com> <471DBA1A.2080108@web.de> <471DC311.2050003@qumranet.com> <471DF76B.7040001@siemens.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org
To: Jan Kiszka
In-Reply-To: <471DF76B.7040001-kv7WeFo6aLtBDgjK7y7TUQ@public.gmane.org>
List-Id: kvm.vger.kernel.org

Jan Kiszka wrote:
> Avi,
>
> [somehow your mails do not get through to my private account, so I'm
> switching]
>
> Avi Kivity wrote:
>
>> Jan Kiszka wrote:
>>
>>> Clarification: I can't precisely tell what code is executed in VM mode,
>>> as I don't have qemu or that guest instrumented. I just see the kernel
>>> entering VM mode and leaving it again more than 300 us later. So I
>>> wonder why this is allowed while some external IRQ is pending.
>>>
>> How do you know an external interrupt is pending?
>>
>
> It's the host timer IRQ, programmed to fire in certain intervals (100 us
> here). Test case is some latency measurement tool like tglx's cyclictest
> or similar programs we use in Xenomai.
>
>> kvm programs the hardware to exit when an external interrupt arrives.
>>
>
> Here is a latency trace I just managed to capture over 2.6.23.1-rt1 with
> latest kvm from git hacked into (kvm generally seems to work fine this way):
>
> ...
> qemu-sys-7543 0...1 13897us : vmcs_write16+0xb/0x20 (vmx_save_host_state+0x1a7/0x1c0)
> qemu-sys-7543 0...1 13897us : vmcs_writel+0xb/0x30 (vmcs_write16+0x1e/0x20)
> qemu-sys-7543 0...1 13898us : segment_base+0xc/0x70 (vmx_save_host_state+0xa0/0x1c0)
> qemu-sys-7543 0...1 13898us : vmcs_writel+0xb/0x30 (vmx_save_host_state+0xb0/0x1c0)
> qemu-sys-7543 0...1 13898us : segment_base+0xc/0x70 (vmx_save_host_state+0xbf/0x1c0)
> qemu-sys-7543 0...1 13898us : vmcs_writel+0xb/0x30 (vmx_save_host_state+0xcf/0x1c0)
> qemu-sys-7543 0...1 13898us : load_msrs+0xb/0x40 (vmx_save_host_state+0xe7/0x1c0)
> qemu-sys-7543 0...1 13898us : kvm_load_guest_fpu+0x8/0x40 (kvm_vcpu_ioctl_run+0xbf/0x570)
> qemu-sys-7543 0D..1 13899us : vmx_vcpu_run+0xc/0x110 (kvm_vcpu_ioctl_run+0x120/0x570)
> qemu-sys-7543 0D..1 13899us!: vmcs_writel+0xb/0x30 (vmx_vcpu_run+0x22/0x110)
> qemu-sys-7543 0D..1 14344us : vmcs_read32+0xb/0x20 (vmx_vcpu_run+0xc7/0x110)
> qemu-sys-7543 0D..1 14345us : vmcs_readl+0x8/0x10 (vmcs_read32+0x16/0x20)
> qemu-sys-7543 0D..1 14345us : vmcs_read32+0xb/0x20 (vmx_vcpu_run+0xf4/0x110)
> qemu-sys-7543 0D..1 14345us+: vmcs_readl+0x8/0x10 (vmcs_read32+0x16/0x20)
> qemu-sys-7543 0D..1 14349us : irq_enter+0xb/0x30 (do_IRQ+0x45/0xc0)
> qemu-sys-7543 0D.h1 14350us : do_IRQ+0x73/0xc0 (f8caae24 0 0)
> qemu-sys-7543 0D.h1 14351us : handle_level_irq+0xe/0x120 (do_IRQ+0x7d/0xc0)
> qemu-sys-7543 0D.h1 14351us : __spin_lock+0xc/0x30 (handle_level_irq+0x24/0x120)
> qemu-sys-7543 0D.h2 14352us : mask_and_ack_8259A+0x14/0x120 (handle_level_irq+0x37/0x120)
> qemu-sys-7543 0D.h2 14352us+: __spin_lock_irqsave+0x11/0x60 (mask_and_ack_8259A+0x2a/0x120)
> qemu-sys-7543 0D.h3 14357us : __spin_unlock_irqrestore+0xc/0x60 (mask_and_ack_8259A+0x7a/0x120)
> qemu-sys-7543 0D.h2 14358us : redirect_hardirq+0x8/0x70 (handle_level_irq+0x72/0x120)
> qemu-sys-7543 0D.h2 14358us : __spin_unlock+0xb/0x40 (handle_level_irq+0x8e/0x120)
> qemu-sys-7543 0D.h1 14358us : handle_IRQ_event+0xe/0x110
> (handle_level_irq+0x9a/0x120)
> qemu-sys-7543 0D.h1 14359us : timer_interrupt+0xb/0x60 (handle_IRQ_event+0x67/0x110)
> qemu-sys-7543 0D.h1 14359us : hrtimer_interrupt+0xe/0x1f0 (timer_interrupt+0x20/0x60)
> ...
>
> One can see 345 us latency between vm-enter and vm-exit in vmx_vcpu_run -
> and this while cyclictest runs at a period of 100 us!
>
> I got the same results over Adeos/I-pipe & Xenomai with the function
> tracer there, also pointing to the period while the CPU is in VM mode.
>
> Anyone any ideas? Greg, I put you on CC as you said you once saw "decent
> latencies" with your patches. Are there still magic bits missing in
> official kvm?
>

No bits missing as far as I know. It should just work.

Can you explain some more about the latency tracer? How does it work?

Seeing vmx_vcpu_run() in there confuses me, as it always runs with
interrupts disabled (it does dispatch NMIs, so we could be seeing an NMI).

Please post a disassembly of your vmx_vcpu_run so we can interpret the
offsets.

-- 
Do not meddle in the internals of kernels, for they are subtle and quick
to panic.