qemu-devel.nongnu.org archive mirror
* Re: [Qemu-devel] From virtio_kick until VM-exit?
@ 2016-07-27  9:30 charls chap
  0 siblings, 0 replies; 8+ messages in thread
From: charls chap @ 2016-07-27  9:30 UTC (permalink / raw)
  To: qemu-devel

Hello All,

I am new to QEMU, and I am trying to understand the I/O path of a synchronous
I/O request.
It turns out that I don't have a clear picture, especially of the VM-exit and
VM-entry parts.


Some generic questions first, and some other questions inline :)



1) If I am correct:
When we run QEMU in emulation mode, WITHOUT kvm, we run on the TCG runtime.
So there are no per-vcpu threads, just:

qemu_tcg_cpu_thread_fn
tcg_exec_all();

and no interactions with the kvm module. On the other hand, when we have
hardware virtualization, there are no
interactions with any part of the TCG implementation.

What about tb_gen_code() in translate-all.c, and tb_find_slow()/tb_find_fast()
in cpu-exec.c -- are these not part of TCG, and are they still
executed in the KVM case?
So if we have

for (;;)
    c++;

does the vcpu thread execute this code using cpu-exec?

2)
What is this pipe -- between which two endpoints, and when is it used?
int event_notifier_test_and_clear(EventNotifier *e)
{
    int value;
    ssize_t len;
    char buffer[512];

    /* Drain the notify pipe.  For eventfd, only 8 bytes will be read.  */
    value = 0;
    do {
        len = read(e->rfd, buffer, sizeof(buffer));
        value |= (len > 0);
    } while ((len == -1 && errno == EINTR) || len == sizeof(buffer));

    return value;
}

3)
I've tried to trace the iothread.
It seems that the following functions are executed once:
iothread_class_init
iothread_register_types

But I have no idea when static void *iothread_run(void *opaque) runs.
Actually, when is the iothread created?






This decision is made in static int vmx_handle_exit(struct kvm_vcpu *vcpu)
(arch/x86/kvm/vmx.c)?


What does it mean that "the ioeventfd file descriptor will be
signalled (it becomes readable)"?



> During the time in kvm.ko the guest vcpu is not executing because no
> host CPU is in guest mode for that vcpu context.  There is no spinning
> or waiting as you mentioned above.  The host CPU is simply busy doing
> other things and the guest vcpu is not running during that time.
>

If the vcpu is not sleeping, then that means the vcpu didn't execute the kick
in the guest kernel.




> After the ioeventfd has been signalled, kvm.ko does a vmenter and
> resumes guest code execution.  The guest finds itself back after the
> instruction that wrote to VIRTIO_PCI_QUEUE_NOTIFY.
>
> During this time there has been no QEMU userspace activity because
> ioeventfd signalling happens in the kernel in the kvm.ko module.  So
> QEMU is still inside ioctl(KVM_RUN).
>
>
So the iothread is in control, and this is the thread that will follow the
common kernel path for the I/O submission and completion. I mean that the
iothread will be waiting in a host kernel I/O wait queue
after submitting the I/O.

In the meantime, kvm does a VM entry to where?
Since the interrupt has not completed, the return point can't be the
guest interrupt handler...



> Now it's up to the host kernel to schedule the thread that is
> monitoring the ioeventfd file descriptor.  The ioeventfd has become
> readable so hopefully the scheduler will soon dispatch the QEMU event
> loop thread that is waiting in epoll(2)/ppoll(2).  Once the QEMU
> thread wakes up it will execute the virtio-blk device emulation code
> that processes the virtqueue.  The guest vcpu may be executing during
> this time.
>


> > 4: And then there is a virtual interrupt injection and VM ENTRY to guest
> > kernel,
> > so vcpu is unblocked and it executes the complete_bottom_halve?
>
> No, the interrupt injection is independent of the vmenter.  As
> mentioned above, the vcpu may run while virtio-blk device emulation
> happens (when ioeventfd is used, which is the default setting).
>
> The vcpu will receive an interrupt and jump to the virtio_pci
> interrupt handler function, which calls virtio_blk.ko function to
> process completed requests from the virtqueue.
>
>

From which thread, and in which function, does the VM exit land at which point
in kvm.ko?
And from which point in kvm.ko does the VM entry return to which
point/function in QEMU?

Virtual interrupt injection: from which point in the host kernel to which
point/function in QEMU?




> I'm not going further since my answers have changed the
> assumptions/model that you were considering.  Maybe it's all clear to
> you now.  Otherwise please email the QEMU mailing list at
> qemu-devel@nongnu.org and CC me instead of emailing me directly.  That
> way others can participate (e.g. if I'm busy and unable to reply
> quickly).
>
> Stefan
>


Thanks in advance for your time and patience



Thread overview: 8+ messages
     [not found] <CAA6eV_Susgqqnoi4Gy8Ohg3RROOLq6Uuv9OZbGqBh_p_yq5kAA@mail.gmail.com>
     [not found] ` <CAJSP0QW-HLiuA6MCPEK3uKEnSty65JpTz-_b9sOmjTL8DZ4byw@mail.gmail.com>
2016-07-27  9:19   ` [Qemu-devel] From virtio_kick until VM-exit? charls chap
2016-07-27  9:51     ` Stefan Hajnoczi
2016-07-27 12:42       ` Stefan Hajnoczi
2016-07-27 12:52     ` Stefan Hajnoczi
2016-07-27 13:20       ` Charls D. Chap
2016-07-27 13:46         ` Stefan Hajnoczi
     [not found]           ` <CAA6eV_Rk38Q4-YATU7cziCcFJNjX2T2FjKYwDsCDm0dZgyrakQ@mail.gmail.com>
2016-07-30  8:35             ` Stefan Hajnoczi
2016-07-27  9:30 charls chap
