From: charls chap
Date: Wed, 27 Jul 2016 12:30:23 +0300
Subject: Re: [Qemu-devel] From virtio_kick until VM-exit?
To: qemu-devel@nongnu.org

Hello All,

I am new to QEMU and I am trying to understand the I/O path of a
synchronous I/O request. It turns out that I do not have a clear
picture yet, especially for the VM-exit and VM-entry parts. Some
generic questions first, and some more questions inline :)

1) If I am correct: when we run QEMU in emulation mode, without KVM,
we run on the TCG runtime. There are no per-vcpu threads, just
qemu_tcg_cpu_thread_fn() calling tcg_exec_all(), and there are no
interactions with the kvm kernel module. On the other hand, when we
have hardware virtualization, there are no interactions with any part
of the TCG implementation. Are tb_gen_code() in translate-all.c and
tb_find_slow()/tb_find_fast() part of TCG, or are they still executed
in the KVM case? So if the guest runs

    for (;;)
        c++;

does the vcpu thread execute this code through cpu-exec?

2) What is this pipe? I mean, between which endpoints does it run,
and when is it used?

    int event_notifier_test_and_clear(EventNotifier *e)
    {
        int value;
        ssize_t len;
        char buffer[512];

        /* Drain the notify pipe.  For eventfd, only 8 bytes will be read.  */
        value = 0;
        do {
            len = read(e->rfd, buffer, sizeof(buffer));
            value |= (len > 0);
        } while ((len == -1 && errno == EINTR) || len == sizeof(buffer));

        return value;
    }

3) I have tried to trace the iothread. It seems that the following
functions are executed once:

    iothread_class_init
    iothread_register_types

but I have no idea when

    static void *iothread_run(void *opaque)

runs. When is the iothread actually created? Is this decision made in
static int vmx_handle_exit(struct kvm_vcpu *vcpu) (kvm/vmx.c)? And
what does it mean that "The ioeventfd file descriptor will be
signalled (it becomes readable)"?

> During the time in kvm.ko the guest vcpu is not executing because no
> host CPU is in guest mode for that vcpu context. There is no spinning
> or waiting as you mentioned above. The host CPU is simply busy doing
> other things and the guest vcpu is not running during that time.

If the vcpu is not sleeping, does that mean the vcpu did not execute
the kick in the guest kernel?

> After the ioeventfd has been signalled, kvm.ko does a vmenter and
> resumes guest code execution. The guest finds itself back after the
> instruction that wrote to VIRTIO_PCI_QUEUE_NOTIFY.
>
> During this time there has been no QEMU userspace activity because
> ioeventfd signalling happens in the kernel in the kvm.ko module. So
> QEMU is still inside ioctl(KVM_RUN).

The iothread is in control now, and this is the thread that will
follow the common kernel path for I/O submission and completion. I
mean that the iothread will be sleeping in a host kernel I/O wait
queue after submitting the I/O. In the meantime, KVM does a VM entry
to where? Since the interrupt has not been delivered yet, the return
point cannot be the guest interrupt handler...

> Now it's up to the host kernel to schedule the thread that is
> monitoring the ioeventfd file descriptor. The ioeventfd has become
> readable so hopefully the scheduler will soon dispatch the QEMU event
> loop thread that is waiting in epoll(2)/ppoll(2).
> Once the QEMU
> thread wakes up it will execute the virtio-blk device emulation code
> that processes the virtqueue. The guest vcpu may be executing during
> this time.
>
> > 4: And then there is a virtual interrupt injection and VM ENTRY to guest
> > kernel, so vcpu is unblocked and it executes the complete_bottom_halve?
>
> No, the interrupt injection is independent of the vmenter. As
> mentioned above, the vcpu may run while virtio-blk device emulation
> happens (when ioeventfd is used, which is the default setting).
>
> The vcpu will receive an interrupt and jump to the virtio_pci
> interrupt handler function, which calls a virtio_blk.ko function to
> process completed requests from the virtqueue.

From which thread, and in which function, does the VM exit go to
which point in kvm.ko? From which point of kvm.ko does the VM entry
return to which point/function in QEMU? And the virtual interrupt
injection: from which point of the host kernel to which
point/function in QEMU?

> I'm not going further since my answers have changed the
> assumptions/model that you were considering. Maybe it's all clear to
> you now. Otherwise please email the QEMU mailing list at
> qemu-devel@nongnu.org and CC me instead of emailing me directly. That
> way others can participate (e.g. if I'm busy and unable to reply
> quickly).
>
> Stefan

Thanks in advance for your time and patience