qemu-devel.nongnu.org archive mirror
* [Qemu-devel] Queries on dataplane mechanism
From: Gaurav Sharma @ 2016-06-23 15:26 UTC
  To: qemu-devel

Hi,
I am trying to explore how the data plane mechanism works in QEMU. I
understand the behavior of the QEMU big lock (BQL). Can someone clarify
the following w.r.t. data plane:

1. Is it correct that currently only virtio-blk-pci and virtio-scsi-pci
support data plane?

2. From QEMU 2.1.0 onwards, data plane is enabled by default. I specify the
following command-line options to enable it:

-enable-kvm -drive if=none,id=drive1,file=file_name \
-object iothread,id=iothread2 \
-device virtio-blk-pci,id=drv0,drive=drive1,iothread=iothread2

Is the above syntax correct?

3. What is the best scenario for testing data plane? Currently, I have a
test setup with two different devices [dev1 and dev2]. If I issue a write
to dev1 that I have made blocking by inserting a sleep statement, will I
still be able to process a write on dev2? My understanding is that since
data plane uses a separate event loop, I should be able to process the
write on dev2. Is this correct?

I am using QEMU 2.2.0 on CentOS 7.1 with KVM version 1.5.3, and Debian
(kernel 3.2.8) as the guest OS.

Regards.

* Re: [Qemu-devel] Queries on dataplane mechanism
From: Stefan Hajnoczi @ 2016-06-24 10:15 UTC
  To: Gaurav Sharma; +Cc: qemu-devel

On Thu, Jun 23, 2016 at 08:56:34PM +0530, Gaurav Sharma wrote:
> Hi,
> I am trying to explore how the data plane mechanism works in QEMU. I
> understand the behavior of the QEMU big lock (BQL). Can someone clarify
> the following w.r.t. data plane:
> 
> 1. Is it correct that currently only virtio-blk-pci and virtio-scsi-pci
> support data plane?

Yes.

> 2. From QEMU 2.1.0 onwards, data plane is enabled by default.

No "enabled by default" would mean that existing QEMU command-lines
enable dataplane.  This is not the case.  You have to explicitly define
an iothread object and then associate a virtio-blk/virtio-scsi device
with it.
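
A minimal sketch of what that looks like on the command line, with two
disks each pinned to its own IOThread (the file names and IDs here are
illustrative):

  qemu-system-x86_64 -enable-kvm -m 1024 \
    -object iothread,id=iothread1 \
    -object iothread,id=iothread2 \
    -drive if=none,id=drive1,file=disk1.img,format=raw \
    -device virtio-blk-pci,drive=drive1,iothread=iothread1 \
    -drive if=none,id=drive2,file=disk2.img,format=raw \
    -device virtio-blk-pci,drive=drive2,iothread=iothread2

Each virtio-blk-pci device then gets its requests handled in its own
event loop instead of in the main loop under the big lock.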

> I specify the
> following command-line options to enable it:
> 
> -enable-kvm -drive if=none,id=drive1,file=file_name \
> -object iothread,id=iothread2 \
> -device virtio-blk-pci,id=drv0,drive=drive1,iothread=iothread2
> 
> Is the above syntax correct?

Yes.

> 3. What is the best scenario for testing data plane? Currently, I have a
> test setup with two different devices [dev1 and dev2]. If I issue a write
> to dev1 that I have made blocking by inserting a sleep statement, will I
> still be able to process a write on dev2? My understanding is that since
> data plane uses a separate event loop, I should be able to process the
> write on dev2. Is this correct?

Dataplane improves scalability for high-IOPS workloads when there are
multiple disks.

You do not need to modify any code in order to benchmark dataplane.  Run
fio inside a 4-vCPU SMP guest with 4 disks (you can use the host Linux
kernel's null_blk driver) and you should find that QEMU without
dataplane achieves lower IOPS.  The difference should become clear around
4 or 8 vcpus/disks.
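
A rough sketch of such a setup (paths, sizes, and fio parameters are
illustrative, not a tuned benchmark):

  # host: create 4 RAM-backed block devices
  modprobe null_blk nr_devices=4

  # host: give each null_blk device its own IOThread (repeat the
  # -drive/-device pair for nullb1 through nullb3)
  qemu-system-x86_64 -enable-kvm -smp 4 -m 1024 \
    -object iothread,id=iothread0 \
    -drive if=none,id=drive0,file=/dev/nullb0,format=raw,cache=none,aio=native \
    -device virtio-blk-pci,drive=drive0,iothread=iothread0 \
    ...

  # guest: run one fio job per virtio disk
  fio --name=randread --ioengine=libaio --direct=1 --rw=randread --bs=4k \
      --iodepth=32 --runtime=60 --time_based --filename=/dev/vdb

For the baseline, drop the -object iothread/iothread= options and rerun
the same jobs; the IOPS gap should grow with the number of vcpus/disks.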

Stefan

* Re: [Qemu-devel] Queries on dataplane mechanism
From: Gaurav Sharma @ 2016-06-28 9:06 UTC
  To: Stefan Hajnoczi; +Cc: qemu-devel

Hi Stefan,
I am working on moving PCI devices to the data plane architecture.
Do you know of any reasons why this has not been tried before?

Regards,

* Re: [Qemu-devel] Queries on dataplane mechanism
From: Stefan Hajnoczi @ 2016-07-15 14:17 UTC
  To: Gaurav Sharma; +Cc: qemu-devel

On Tue, Jun 28, 2016 at 02:36:41PM +0530, Gaurav Sharma wrote:
> I am working on moving PCI devices to the data plane architecture.
> Do you know of any reasons why this has not been tried before?

The dataplane approach is implemented on a per-device basis because it
involves registering an ioeventfd to trigger on a hardware register
access.

It's only worth doing for performance-critical devices (and the
performance-critical hardware registers in those devices).
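
To make that concrete, here is a stripped-down C sketch of what
registering an ioeventfd amounts to at the raw KVM level (QEMU wraps
this in memory_region_add_eventfd(); vm_fd and notify_addr are
assumptions for illustration, and a legacy virtio-pci notify register
actually lives in a PIO BAR, which would need KVM_IOEVENTFD_FLAG_PIO):

  #include <linux/kvm.h>
  #include <stdint.h>
  #include <sys/eventfd.h>
  #include <sys/ioctl.h>
  #include <unistd.h>

  /* Ask KVM to signal efd whenever the guest writes the given
   * address, instead of taking a heavyweight exit into the QEMU
   * main loop under the big lock. */
  static int register_notify_eventfd(int vm_fd, uint64_t notify_addr)
  {
      int efd = eventfd(0, EFD_CLOEXEC);
      if (efd < 0) {
          return -1;
      }

      struct kvm_ioeventfd args = {
          .addr  = notify_addr, /* guest address of the notify register */
          .len   = 2,           /* virtio queue notify is a 16-bit write */
          .fd    = efd,
          .flags = 0,           /* no datamatch: any write fires */
      };
      if (ioctl(vm_fd, KVM_IOEVENTFD, &args) < 0) {
          close(efd);
          return -1;
      }

      /* A dedicated IOThread event loop can now read() efd and
       * process the virtqueue without acquiring the global mutex. */
      return efd;
  }

Each performance-critical register needs an explicit hookup like this,
which is why dataplane is wired up per device rather than generically.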

Stefan
