qemu-devel.nongnu.org archive mirror
* Re: [Qemu-devel] From virtio_kick until VM-exit?
       [not found] ` <CAJSP0QW-HLiuA6MCPEK3uKEnSty65JpTz-_b9sOmjTL8DZ4byw@mail.gmail.com>
@ 2016-07-27  9:19   ` charls chap
  2016-07-27  9:51     ` Stefan Hajnoczi
  2016-07-27 12:52     ` Stefan Hajnoczi
  0 siblings, 2 replies; 8+ messages in thread
From: charls chap @ 2016-07-27  9:19 UTC (permalink / raw)
  To: qemu-devel; +Cc: Stefan Hajnoczi

Hello All,

I am new to QEMU and I am trying to understand the I/O path of a synchronous
I/O. It turns out that I do not have a clear picture, especially of the
VM-exit and VM-entry parts.


Some generic questions first, and some other questions inline :)



1) If I am correct:
When we run QEMU in emulation mode, WITHOUT kvm, we run on the TCG runtime.
Are there no vcpu threads, just this?

qemu_tcg_cpu_thread_fn
tcg_exec_all();

There are no interactions with the kvm module. On the other hand, when we
have hardware virtualization, there are no interactions with any part of
the TCG implementation.

tb_gen_code in translate-all.c, and tb_find_slow and tb_find_fast, are not
part of TCG proper; are they still executed in the KVM case?
So if we have

for (;;)
    c++;

does the vcpu thread execute this code using cpu-exec.c?

2)
What is this pipe? I mean, a pipe between whom, and when is it used?
int event_notifier_test_and_clear(EventNotifier *e)
{
    int value;
    ssize_t len;
    char buffer[512];

    /* Drain the notify pipe.  For eventfd, only 8 bytes will be read.  */
    value = 0;
    do {
        len = read(e->rfd, buffer, sizeof(buffer));
        value |= (len > 0);
    } while ((len == -1 && errno == EINTR) || len == sizeof(buffer));

    return value;
}

3)
I've tried to trace the IOThread.
It seems that the following functions are executed only once:
iothread_class_init
iothread_register_types

But I have no idea when static void *iothread_run(void *opaque) runs.
When is the IOThread actually created?






On Wed, Jul 27, 2016 at 10:13 AM, Stefan Hajnoczi <stefanha@gmail.com>
wrote:

> On Tue, Jul 26, 2016 at 9:08 PM, charls chap <chapcharls@gmail.com> wrote:
> > Let's say that we run a VM over QEMU-KVM. One vpcu, driver virtio for
> block.
> > An app.c at first does some trivial stuff (non-privileged instructions)
> and
> > then performs a synchronous I/O(O_DIRECT, O_SYNC).
> >
> >
> >
> > my understanding for the path is:
> > VCPU executes guest code in TCG -- for the trivial stuff, Then it comes
> the
> > write(), so,
> > vcpu switches to guest kernel, it goes down the kernel path until the
> > kick(PIO)
> > then vpcu blocks(0: vcpu thread is blocked in guest kernel? if yes,
> spinning
> > or
> > waiting on a condition?) and then (1: What are the invocations until it
> > reaches
> > "the other side")
> >
> > Then kvm, handles the exit, and switches to userspace, iothread takes
> action
> > (3: how is this happening?)
>
> You mentioned TCG above and now you mentioned KVM.  Either TCG
> (just-in-time compiler) or KVM (hardware virtualization extensions)
> can be used but not both at the same time.  TCG is used to translate
> instructions from the guest architecture to the host architecture,
> e.g. ARM guest on x86 host.  KVM is used to efficiently execute
> same-on-same, e.g. x86 guest on x86 host.  I'll assume you are using
> just KVM in your examples.
>
> The guest virtio_pci.ko driver contains an instruction that writes to
> the VIRTIO_PCI_QUEUE_NOTIFY hardware register.  This will cause a
> "vmexit" (a trap from guest mode back to host mode) and the kvm.ko
> host kernel module will inspect this trapped instruction and decide
> that it's an ioeventfd write.  The ioeventfd file descriptor will be
> signalled (it becomes readable).
>

Is this decision made in static int vmx_handle_exit(struct kvm_vcpu *vcpu)
(kvm/vmx.c)?
<http://lxr.free-electrons.com/ident?v=2.6.33;i=vmx_handle_exit>
<http://lxr.free-electrons.com/ident?v=2.6.33;i=kvm_vcpu>


What does it mean that "the ioeventfd file descriptor will be
signalled (it becomes readable)"?



> During the time in kvm.ko the guest vcpu is not executing because no
> host CPU is in guest mode for that vcpu context.  There is no spinning
> or waiting as you mentioned above.  The host CPU is simply busy doing
> other things and the guest vcpu is not running during that time.
>

If the vcpu is not sleeping, does that mean the vcpu did not execute the
kick in the guest kernel?




> After the ioeventfd has been signalled, kvm.ko does a vmenter and
> resumes guest code execution.  The guest finds itself back after the
> instruction that wrote to VIRTIO_PCI_QUEUE_NOTIFY.
>
> During this time there has been no QEMU userspace activity because
> ioeventfd signalling happens in the kernel in the kvm.ko module.  So
> QEMU is still inside ioctl(KVM_RUN).
>
>
The iothread is in control, and this is the thread that will follow the
common kernel path for I/O submission and completion. I mean that the
iothread will be waiting in a host-kernel I/O wait queue after submitting
the I/O.

In the meantime, kvm does a VM entry to where?
Since the interrupt has not been delivered yet, the return point couldn't
be the guest interrupt handler...



> Now it's up to the host kernel to schedule the thread that is
> monitoring the ioeventfd file descriptor.  The ioeventfd has become
> readable so hopefully the scheduler will soon dispatch the QEMU event
> loop thread that is waiting in epoll(2)/ppoll(2).  Once the QEMU
> thread wakes up it will execute the virtio-blk device emulation code
> that processes the virtqueue.  The guest vcpu may be executing during
> this time.
>


> > 4: And then there is a virtual interrupt injection and VM ENTRY to guest
> > kernel,
> > so vcpu is unblocked and it executes the complete_bottom_halve?
>
> No, the interrupt injection is independent of the vmenter.  As
> mentioned above, the vcpu may run while virtio-blk device emulation
> happens (when ioeventfd is used, which is the default setting).
>
> The vcpu will receive an interrupt and jump to the virtio_pci
> interrupt handler function, which calls virtio_blk.ko function to
> process completed requests from the virtqueue.
>
>

From which thread, in which function, does the VM-exit go, and to which
point in kvm.ko?
And from which point in kvm.ko does the VM-entry go, and to which
point/function in QEMU?

And the virtual interrupt injection: from which point in the host kernel to
which point/function in QEMU?




> I'm not going further since my answers have changed the
> assumptions/model that you were considering.  Maybe it's all clear to
> you now.  Otherwise please email the QEMU mailing list at
> qemu-devel@nongnu.org and CC me instead of emailing me directly.  That
> way others can participate (e.g. if I'm busy and unable to reply
> quickly).
>
> Stefan
>


Thanks in advance for your time and patience

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [Qemu-devel] From virtio_kick until VM-exit?
  2016-07-27  9:19   ` [Qemu-devel] From virtio_kick until VM-exit? charls chap
@ 2016-07-27  9:51     ` Stefan Hajnoczi
  2016-07-27 12:42       ` Stefan Hajnoczi
  2016-07-27 12:52     ` Stefan Hajnoczi
  1 sibling, 1 reply; 8+ messages in thread
From: Stefan Hajnoczi @ 2016-07-27  9:51 UTC (permalink / raw)
  To: charls chap; +Cc: qemu-devel

On Wed, Jul 27, 2016 at 10:19 AM, charls chap <chapcharls@gmail.com> wrote:
> I am new with qemu, I am trying to understand the I/O path of a synchronous
> I/O.
> It turns out, that I've not a clear picture. Definitely for VM-exit and
> VM-entry parts.

Please email the QEMU mailing list at
qemu-devel@nongnu.org and CC me instead of emailing me directly.  That
way others can participate (e.g. if I'm busy and unable to reply
quickly).

Stefan

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [Qemu-devel] From virtio_kick until VM-exit?
  2016-07-27  9:51     ` Stefan Hajnoczi
@ 2016-07-27 12:42       ` Stefan Hajnoczi
  0 siblings, 0 replies; 8+ messages in thread
From: Stefan Hajnoczi @ 2016-07-27 12:42 UTC (permalink / raw)
  To: charls chap; +Cc: qemu-devel

[-- Attachment #1: Type: text/plain, Size: 596 bytes --]

On Wed, Jul 27, 2016 at 10:51:41AM +0100, Stefan Hajnoczi wrote:
> On Wed, Jul 27, 2016 at 10:19 AM, charls chap <chapcharls@gmail.com> wrote:
> > I am new with qemu, I am trying to understand the I/O path of a synchronous
> > I/O.
> > It turns out, that I've not a clear picture. Definitely for VM-exit and
> > VM-entry parts.
> 
> Please email the QEMU mailing list at
> qemu-devel@nongnu.org and CC me instead of emailing me directly.  That
> way others can participate (e.g. if I'm busy and unable to reply
> quickly).

Sorry, I didn't see the CC in my mail reader :).

Stefan

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 473 bytes --]

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [Qemu-devel] From virtio_kick until VM-exit?
  2016-07-27  9:19   ` [Qemu-devel] From virtio_kick until VM-exit? charls chap
  2016-07-27  9:51     ` Stefan Hajnoczi
@ 2016-07-27 12:52     ` Stefan Hajnoczi
  2016-07-27 13:20       ` Charls D. Chap
  1 sibling, 1 reply; 8+ messages in thread
From: Stefan Hajnoczi @ 2016-07-27 12:52 UTC (permalink / raw)
  To: charls chap; +Cc: qemu-devel

[-- Attachment #1: Type: text/plain, Size: 2924 bytes --]

On Wed, Jul 27, 2016 at 12:19:52PM +0300, charls chap wrote:
> Hello All,
> 
> I am new with qemu, I am trying to understand the I/O path of a synchronous
> I/O.

What exactly do you mean by "synchronous I/O"?

Most modern devices have asynchronous interfaces (i.e. a ring or list of
requests that complete with an interrupt after the vcpu submits them and
continues execution).

> 1) if i am correct:
> When we run QEMU in emulation mode, WITHOUT kvm. Then we run on TCG runtime
> No vcpus threads?
> 
> qemu_tcg_cpu_thread_fn
> tcg_exec_all();
> 
> No interactions with kvm module. On the other hand, when we have
> virtualization, there are no
> interactions with any part of the tcg implementation.

Yes, it's either TCG or KVM.

> The tb_gen_code in translate-all, and find_slot and find_fast,  its not
> part of the tcg, and there still
> "executed, in the KVM case?
> So if we have
> for (;;)
> c++;
> 
> vcpu thread executes code, using cpu-exec?

In the KVM case the vcpu thread does ioctl(KVM_RUN) to execute guest
code.
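
Roughly, and leaving out all the setup QEMU actually does, the vcpu loop
looks like this sketch against the raw KVM API (this is not QEMU's actual
code, just an illustration of where the guest runs and where vmexits land):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/kvm.h>

int main(void)
{
    /* Sketch only: error handling, guest memory and register setup omitted. */
    int kvm  = open("/dev/kvm", O_RDWR);
    int vm   = ioctl(kvm, KVM_CREATE_VM, 0);
    int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);
    struct kvm_run *run = mmap(NULL, ioctl(kvm, KVM_GET_VCPU_MMAP_SIZE, 0),
                               PROT_READ | PROT_WRITE, MAP_SHARED, vcpu, 0);

    for (;;) {
        ioctl(vcpu, KVM_RUN, 0);      /* guest code runs natively in here */
        switch (run->exit_reason) {   /* back in userspace: this was a vmexit */
        case KVM_EXIT_IO:             /* pio access userspace has to emulate  */
        case KVM_EXIT_MMIO:           /* mmio access userspace has to emulate */
            /* device emulation goes here, then loop back into the guest */
            break;
        }
    }
}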

> 2)
> What is this pipe, i mean between who?
> when is used?
> int event_notifier_test_and_clear(EventNotifier *e)
> {
>     int value;
>     ssize_t len;
>     char buffer[512];
> 
>     /* Drain the notify pipe.  For eventfd, only 8 bytes will be read.  */
>     value = 0;
>     do {
>         len = read(e->rfd, buffer, sizeof(buffer));
>         value |= (len > 0);
>     } while ((len == -1 && errno == EINTR) || len == sizeof(buffer));
> 
>     return value;
> }

Read eventfd(2) to understand this primitive.  The "pipe" part is a
fallback for systems that don't support eventfd(2).  eventfd is used for
signalling between threads.
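
As a standalone sketch of the primitive itself (this is not QEMU code):

#include <stdint.h>
#include <sys/eventfd.h>
#include <unistd.h>

static void signal_event(int efd)
{
    uint64_t one = 1;
    write(efd, &one, sizeof(one));    /* adds 1 to the counter, wakes readers */
}

static void wait_event(int efd)
{
    uint64_t count;
    read(efd, &count, sizeof(count)); /* returns and resets the 8-byte counter */
}

/* Setup: int efd = eventfd(0, 0);  One thread calls signal_event(), the
 * other blocks in wait_event() or polls efd from its event loop. */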

The kvm.ko module can signal an ioeventfd when a particular memory or
I/O address is written.  This means that the thread monitoring the
ioeventfd will run when the guest has written to the memory or I/O
address.

This ioeventfd mechanism is an alternative to the "heavyweight exit"
code path (return from ioctl(KVM_RUN) and dispatch the memory or I/O
access in QEMU vcpu thread context before calling ioctl(KVM_RUN) again).
The advantage of ioeventfd is that device emulation can happen in a
separate thread while the vcpu continues executing guest code.
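
The registration side, very roughly (a sketch of the raw KVM_IOEVENTFD
ioctl; QEMU wraps this in its memory API rather than calling it like this,
and the address/length values below are only illustrative):

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Tell kvm.ko: when the guest writes to this I/O address, signal this
 * eventfd instead of taking the heavyweight exit back to userspace. */
static int add_ioeventfd(int vm_fd, int efd, uint64_t notify_addr)
{
    struct kvm_ioeventfd args = {
        .addr  = notify_addr,            /* e.g. the queue notify register */
        .len   = 2,
        .fd    = efd,
        .flags = KVM_IOEVENTFD_FLAG_PIO, /* port I/O rather than MMIO */
    };
    return ioctl(vm_fd, KVM_IOEVENTFD, &args);
}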

> 
> 3)
> I've tried to trace iothread,
> It seems that the following functions executed once:
> iothread_class_init
> iothread_register_types
> 
> But i have no idea, when static void *iothread_run(void *opaque)
> Acutally when iothread is created?

An IOThread is only created if you put -object iothread,id=iothread0 on
the command-line.  Then you can associate a virtio-blk or virtio-scsi
device with a particular IOThread: -device
virtio-blk-pci,iothread=iothread0,drive=drive0.
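
For example (illustrative only; the image path and ids are made up):

qemu-system-x86_64 -enable-kvm -m 1024 \
    -object iothread,id=iothread0 \
    -drive file=disk.img,if=none,id=drive0,format=raw \
    -device virtio-blk-pci,drive=drive0,iothread=iothread0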

When no IOThread is given on the command-line, the ioeventfd processing
happens in the QEMU main loop thread.

Stefan

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 473 bytes --]

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [Qemu-devel] From virtio_kick until VM-exit?
  2016-07-27 12:52     ` Stefan Hajnoczi
@ 2016-07-27 13:20       ` Charls D. Chap
  2016-07-27 13:46         ` Stefan Hajnoczi
  0 siblings, 1 reply; 8+ messages in thread
From: Charls D. Chap @ 2016-07-27 13:20 UTC (permalink / raw)
  To: qemu-devel; +Cc: Stefan Hajnoczi

Hello List (again),
Thank you Stefan for your quick responses! You are great.


On Wed, Jul 27, 2016 at 3:52 PM, Stefan Hajnoczi <stefanha@gmail.com> wrote:
>
> On Wed, Jul 27, 2016 at 12:19:52PM +0300, charls chap wrote:
> > Hello All,
> >
> > I am new with qemu, I am trying to understand the I/O path of a synchronous
> > I/O.
>
> What exactly do you mean by "synchronous I/O"?

An I/O that is issued to a device opened with the O_SYNC and O_DIRECT flags,
so that we are sure it goes all the way down, until the data is actually
written to the physical device.
That's why I can't understand how the vcpu can continue execution
without waiting on a condition or sleeping.
If the vcpu is not sleeping, does that mean the vcpu did not execute the
kick in the guest kernel?


For the return path
--------------------------
>After the ioeventfd has been signalled, kvm.ko does a vmenter and
>resumes guest code execution.  The guest finds itself back after the
>instruction that wrote to VIRTIO_PCI_QUEUE_NOTIFY.

>During this time there has been no QEMU userspace activity because
>ioeventfd signalling happens in the kernel in the kvm.ko module.  So
>QEMU is still inside ioctl(KVM_RUN).

The iothread is in control, and this is the thread that will follow the
common kernel path for I/O submission and completion. I mean that the
iothread will be waiting in a host-kernel I/O wait queue after submitting
the I/O.

In the meantime, kvm does a VM entry to where?
Since the interrupt has not been delivered yet, the return point couldn't
be the guest interrupt handler...

In short, I still can't find the following:
From which thread, in which function, does the VM-exit go, and to which
point in kvm.ko?
And from which point in kvm.ko does the VM-entry go, and to which
point/function in QEMU?

And the virtual interrupt injection: from which point in the host kernel to
which point/function in QEMU?


>
>
> Most modern devices have asynchronous interfaces (i.e. a ring or list of
> requests that complete with an interrupt after the vcpu submits them and
> continues execution).
>
> > 1) if i am correct:
> > When we run QEMU in emulation mode, WITHOUT kvm. Then we run on TCG runtime
> > No vcpus threads?
> >
> > qemu_tcg_cpu_thread_fn
> > tcg_exec_all();
> >
> > No interactions with kvm module. On the other hand, when we have
> > virtualization, there are no
> > interactions with any part of the tcg implementation.
>
> Yes, it's either TCG or KVM.
>
> > The tb_gen_code in translate-all, and find_slot and find_fast,  its not
> > part of the tcg, and there still
> > "executed, in the KVM case?
> > So if we have
> > for (;;)
> > c++;
> >
> > vcpu thread executes code, using cpu-exec?
>
> In the KVM case the vcpu thread does ioctl(KVM_RUN) to execute guest
> code.
ioctl(KVM_RUN) means that we have a switch from QEMU into the host kernel.
So how can we say that guest code is executed natively?


>
> > 2)
> > What is this pipe, i mean between who?
> > when is used?
> > int event_notifier_test_and_clear(EventNotifier *e)
> > {
> >     int value;
> >     ssize_t len;
> >     char buffer[512];
> >
> >     /* Drain the notify pipe.  For eventfd, only 8 bytes will be read.  */
> >     value = 0;
> >     do {
> >         len = read(e->rfd, buffer, sizeof(buffer));
> >         value |= (len > 0);
> >     } while ((len == -1 && errno == EINTR) || len == sizeof(buffer));
> >
> >     return value;
> > }
>
> Read eventfd(2) to understand this primitive.  The "pipe" part is a
> fallback for systems that don't support eventfd(2).  eventfd is used for
> signalling between threads.
>
A pipe between whom?


> The kvm.ko module can signal an ioeventfd when a particular memory or
> I/O address is written.  This means that the thread monitoring the
> ioeventfd will run when the guest has written to the memory or I/O
> address.
>
> This ioeventfd mechanism is an alternative to the "heavyweight exit"
> code path (return from ioctl(KVM_RUN) and dispatch the memory or I/O
> access in QEMU vcpu thread context before calling ioctl(KVM_RUN) again).
> The advantage of ioeventfd is that device emulation can happen in a
> separate thread while the vcpu continues executing guest code.
>
> >
> > 3)
> > I've tried to trace iothread,
> > It seems that the following functions executed once:
> > iothread_class_init
> > iothread_register_types
> >
> > But i have no idea, when static void *iothread_run(void *opaque)
> > Acutally when iothread is created?
>
> An IOThread is only created if you put -object iothread,id=iothread0 on
> the command-line.  Then you can associate a virtio-blk or virtio-scsi
> device with a particular IOThread: -device
> virtio-blk-pci,iothread=iothread0,drive=drive0.
>
> When no IOThread is given on the command-line, the ioeventfd processing
> happens in the QEMU main loop thread.
>



> Stefan



Thanks,
Charls

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [Qemu-devel] From virtio_kick until VM-exit?
  2016-07-27 13:20       ` Charls D. Chap
@ 2016-07-27 13:46         ` Stefan Hajnoczi
       [not found]           ` <CAA6eV_Rk38Q4-YATU7cziCcFJNjX2T2FjKYwDsCDm0dZgyrakQ@mail.gmail.com>
  0 siblings, 1 reply; 8+ messages in thread
From: Stefan Hajnoczi @ 2016-07-27 13:46 UTC (permalink / raw)
  To: Charls D. Chap; +Cc: qemu-devel

On Wed, Jul 27, 2016 at 2:20 PM, Charls D. Chap <chapcharls@gmail.com> wrote:
> On Wed, Jul 27, 2016 at 3:52 PM, Stefan Hajnoczi <stefanha@gmail.com> wrote:
>>
>> On Wed, Jul 27, 2016 at 12:19:52PM +0300, charls chap wrote:
>> > Hello All,
>> >
>> > I am new with qemu, I am trying to understand the I/O path of a synchronous
>> > I/O.
>>
>> What exactly do you mean by "synchronous I/O"?
>
> An I/O , that is associated with a device opened with O_SYNC, O_DIRECT flags,
> so that we are sure that it's going all the path down, until the
> actual write of the data to the physical device.
> That's why, i can't understand, how vcpu can continue execution,
> without waiting on a condition or sleeping.
> If vcpu is not sleeping, then it means, that vcpu didn't execute the
> kick in the guest kernel?

Review how blocking write(2) is implemented on physical hardware:

The thread inside the write(2) system call blocks.  This means the
thread scheduler knows this thread cannot make progress and instead it
runs another thread in the meantime.  Eventually the I/O request is
completed and the write(2) thread becomes runnable again.  At this
point write(2) returns.

Virtualization changes nothing about this scenario.  The virtio-blk
virtqueue notification is equivalent to writing to a hardware register
on a physical storage controller (SATA, SCSI HBA, etc).  In both cases
the CPU continues executing instructions while the I/O request is
being processed.
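
For reference, the "synchronous I/O" you describe boils down to something
like this sketch (O_DIRECT needs suitably aligned buffers and lengths, and
/dev/vda is just an example path):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* Sketch: error checking omitted. */
    int fd = open("/dev/vda", O_WRONLY | O_DIRECT | O_SYNC);

    void *buf;
    posix_memalign(&buf, 4096, 4096);  /* O_DIRECT requires aligned buffers */
    memset(buf, 0xab, 4096);

    /* The calling thread blocks here until the data has reached the device;
     * the scheduler runs other threads (and other vcpus) in the meantime. */
    ssize_t ret = write(fd, buf, 4096);

    close(fd);
    return ret == 4096 ? 0 : 1;
}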

> iothread is in control and this is the thread that will follow the
> common kernel path for the I/O submit and completion. I mean, that
> iothread, will be waiting in Host kernel, I/O wait queue,
> after the submission of I/O.

No.  Please read block/raw-posix.c to understand how I/O requests are
submitted/completed in QEMU.  Either a thread pool is used so worker
threads do the blocking preadv(2)/pwritev(2)/fdatasync(2) or Linux AIO
with eventfd is used.  In both cases IOThread or the QEMU main loop
thread are event loops that wait in ppoll(2)/epoll(2) until the
eventfd is signalled.
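
The aio=native submission side looks roughly like this sketch using libaio
(this is not QEMU's actual code; fd, buf, len and offset are placeholders):

#include <libaio.h>

/* Submit one write and have its completion signal an eventfd, which the
 * event loop polls alongside its other file descriptors.  Later, when the
 * eventfd becomes readable, io_getevents() reaps the completed request. */
static int submit_one(io_context_t ctx, int fd, void *buf, size_t len,
                      long long offset, int efd)
{
    struct iocb cb, *cbs[1] = { &cb };

    io_prep_pwrite(&cb, fd, buf, len, offset);
    io_set_eventfd(&cb, efd);      /* completion signals efd */
    return io_submit(ctx, 1, cbs); /* returns without blocking */
}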

> In the meantime, kvm does a VM_ENTRY to where?

Read the ioeventfd code in the Linux kernel: virt/kvm/eventfd.c.

> Since, the interrupt is not completed, the return point couldn't be the
> guest interrupt handler...

...

Sorry, I don't have time to reply to all your questions and it would
require me to look up the code too.  The level of detail you are
asking for is at the code level.  In other words, you're asking people
to read the code for you.

Please read the code and think about it before sending more questions.

Stefan

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [Qemu-devel] From virtio_kick until VM-exit?
       [not found]           ` <CAA6eV_Rk38Q4-YATU7cziCcFJNjX2T2FjKYwDsCDm0dZgyrakQ@mail.gmail.com>
@ 2016-07-30  8:35             ` Stefan Hajnoczi
  0 siblings, 0 replies; 8+ messages in thread
From: Stefan Hajnoczi @ 2016-07-30  8:35 UTC (permalink / raw)
  To: Charls D. Chap; +Cc: qemu-devel

On Fri, Jul 29, 2016 at 9:00 PM, Charls D. Chap <chapcharls@gmail.com> wrote:

Please use Reply-All when responding to a mailing list thread.  This
keeps the mailing list (qemu-devel@nongnu.org) in the CC list so your
replies are sent to the mailing list too.  This way the discussion
stays public on the mailing list and others can participate.

> I've read again and again the code. Please let me ask you one last question.
>
> the question is, where is the VMEXIT and VMENTRY code.
>
>
> If i have this, i can answer the following that bother me:
> 1)
> What is a VMEXIT, what mechanism? is it an interrupt? Same for VMENTRY
> Where does the VMENTRY return? In an interrupt handler?
>
> 2)
> Which are the parameters of vmetry and vmexit
> does each vcpu has a specific VPID or does it change in every RESUME
>
> 3)
> What is a hypercall? A pair of VMEXIT-VMENTRY? or is it oneway (does it ever
> return?) Or there are many types of hypercalls
>
> 4)
> When we do vmexit, Is it another context? I am mean, does the stack changes?
> What is the guest stack? Each vcpu has one stack?

Please look at the Intel Software Developer's Manuals and read about
VMX instructions:

https://www-ssl.intel.com/content/www/us/en/processors/architectures-software-developer-manuals.html

Once you have an overview you'll understand the semantics and be able
to grep for the relevant code in the kvm kernel module.

Stefan

^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2016-07-30  8:36 UTC | newest]

Thread overview: 8+ messages
     [not found] <CAA6eV_Susgqqnoi4Gy8Ohg3RROOLq6Uuv9OZbGqBh_p_yq5kAA@mail.gmail.com>
     [not found] ` <CAJSP0QW-HLiuA6MCPEK3uKEnSty65JpTz-_b9sOmjTL8DZ4byw@mail.gmail.com>
2016-07-27  9:19   ` [Qemu-devel] From virtio_kick until VM-exit? charls chap
2016-07-27  9:51     ` Stefan Hajnoczi
2016-07-27 12:42       ` Stefan Hajnoczi
2016-07-27 12:52     ` Stefan Hajnoczi
2016-07-27 13:20       ` Charls D. Chap
2016-07-27 13:46         ` Stefan Hajnoczi
     [not found]           ` <CAA6eV_Rk38Q4-YATU7cziCcFJNjX2T2FjKYwDsCDm0dZgyrakQ@mail.gmail.com>
2016-07-30  8:35             ` Stefan Hajnoczi
2016-07-27  9:30 charls chap
