From: Stefan Hajnoczi
Date: Thu, 18 Aug 2016 14:22:28 +0100
Subject: Re: [Qemu-devel] errno 13, fopen qemu trace file.
To: Nir Levy
Cc: "qemu-devel@nongnu.org"

On Thu, Aug 18, 2016 at 1:58 PM, Nir Levy wrote:
> I have made progress in tracing QEMU:
> I added the thread and a "done" tag for each kvm_ioctl, kvm_vm_ioctl,
> and kvm_vcpu_ioctl, for the purpose of investigating pure hypervisor
> activity and delays on the host. The KVM ioctl type is printed only
> for convenience.
>
> For example:
>
> kvm_ioctl 3106435.230545 pid=11347 thread=11347 type=0xae03 arg=0x25
>
> kvm_ioctl_done 3106435.230546 pid=11347 thread=11347 type=0xae03 arg=0x25 diff=1 (KVM_CHECK_EXTENSION)
>
> kvm_vcpu_ioctl 3106435.253930 pid=11347 thread=11354 cpu_index=0x2 type=0x4008ae9c arg=0x56417e6cb4f0
>
> kvm_vcpu_ioctl_done 3106435.253931 pid=11347 thread=11354 cpu_index=0x2 type=0x4008ae9c arg=0x56417e6cb4f0 diff=1 (KVM_X86_SETUP_MCE)
>
> kvm_vm_ioctl 3106435.268896 pid=11347 thread=11347 type=0x4020ae46 arg=0x7ffed97cf9d0
>
> kvm_vm_ioctl_done 3106435.269082 pid=11347 thread=11347 type=0x4020ae46 arg=0x7ffed97cf9d0 diff=186 (KVM_SET_USER_MEMORY_REGION)
>
> I have noticed that KVM_RUN can take even seconds, but that is
> probably low-priority tasks (I/O workers, probably).

Please read Linux Documentation/virtual/kvm/api.txt to learn about the
ioctl calls. KVM_RUN is *the* ioctl that executes guest code. Unless a
vcpu is halted, we should be inside KVM_RUN, so spending time inside
this ioctl is normal.

> But these 186 microseconds on the main QEMU thread are suspicious and
> might delay applications running in the VM.

By "186 microseconds" you are referring to the
KVM_SET_USER_MEMORY_REGION call in the trace above. Is this ioctl
called in the critical path? I doubt it: the KVM_X86_SETUP_MCE ioctl
in your trace happens at initialization time, from
kvm_arch_init_vcpu(), and is therefore not in the critical path while
the guest is running. Why worry about latencies that do not affect
running guests?

Stefan
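
P.S. In case it helps to see where the ioctls in your trace sit in a
VMM's lifecycle, below is a minimal sketch based on
Documentation/virtual/kvm/api.txt. This is illustrative code, not
QEMU's actual implementation: it assumes x86_64 Linux with access to
/dev/kvm, runs a one-instruction guest, and elides most error
handling. Note that KVM_SET_USER_MEMORY_REGION happens once at setup,
while the vcpu thread then spends nearly all of its time inside
KVM_RUN.

/* kvmrun.c - minimal KVM lifecycle sketch (illustrative, not QEMU code).
 * Build: gcc -o kvmrun kvmrun.c    Run: ./kvmrun (needs /dev/kvm access)
 */
#include <fcntl.h>
#include <linux/kvm.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

int main(void)
{
    int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
    int vm = ioctl(kvm, KVM_CREATE_VM, 0);

    /* One page of guest memory holding a single hlt (0xf4) instruction. */
    unsigned char *mem = mmap(NULL, 0x1000, PROT_READ | PROT_WRITE,
                              MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    mem[0] = 0xf4;

    /* The setup-time ioctl that shows up as diff=186 in the trace. */
    struct kvm_userspace_memory_region region = {
        .slot = 0,
        .guest_phys_addr = 0x1000,
        .memory_size = 0x1000,
        .userspace_addr = (unsigned long)mem,
    };
    ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region);

    int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);
    int run_size = ioctl(kvm, KVM_GET_VCPU_MMAP_SIZE, 0);
    struct kvm_run *run = mmap(NULL, run_size, PROT_READ | PROT_WRITE,
                               MAP_SHARED, vcpu, 0);

    /* Start in real mode at guest physical 0x1000 (cs.base 0 + rip). */
    struct kvm_sregs sregs;
    ioctl(vcpu, KVM_GET_SREGS, &sregs);
    sregs.cs.base = 0;
    sregs.cs.selector = 0;
    ioctl(vcpu, KVM_SET_SREGS, &sregs);
    struct kvm_regs regs = { .rip = 0x1000, .rflags = 0x2 };
    ioctl(vcpu, KVM_SET_REGS, &regs);

    /* The vcpu thread lives in this loop: guest code executes inside
     * KVM_RUN and only returns to userspace on an exit (here: hlt). */
    for (;;) {
        ioctl(vcpu, KVM_RUN, 0);
        if (run->exit_reason == KVM_EXIT_HLT) {
            printf("guest executed hlt\n");
            break;
        }
    }
    return 0;
}

QEMU's vcpu threads follow the same KVM_RUN loop pattern; see
kvm_cpu_exec() in kvm-all.c.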