public inbox for kvm@vger.kernel.org
From: Marcelo Tosatti <mtosatti@redhat.com>
To: Avi Kivity <avi@qumranet.com>, Anthony Liguori <aliguori@us.ibm.com>
Cc: kvm-devel <kvm-devel@lists.sourceforge.net>
Subject: [patch 00/13] RFC: split the global mutex
Date: Thu, 17 Apr 2008 17:10:21 -0300	[thread overview]
Message-ID: <20080417201021.515148882@localhost.localdomain> (raw)

This patchset introduces QEMUDevice, making the ioport/iomem->device relationship explicit.

At the moment it only contains a lock, but it could be extended.

With it, the following becomes possible:
    - vCPUs can read/write via ioports/iomem while the iothread is working on
      some unrelated device, or is just copying data from the kernel.
    - vCPUs can read/write via ioports/iomem to different devices simultaneously.

This patchset is only a proof of concept, so only the serial device and raw
images are supported.

I tried two benchmarks, iperf and tiobench. With tiobench the reported latency is
significantly lower (20%+), but throughput with IDE is only slightly higher.

I expect larger improvements with a higher-performing IO scheme (SCSI is still
buggy; I'm looking at it).

The iperf numbers are pretty good. UP guest performance increases slightly, but the
SMP improvement is quite significant.

Note that workloads with multiple busy devices (such as databases and web servers)
should be the real winners.

What is the feeling on this? It's not _that_ intrusive and can easily be NOP'ed out
for QEMU.

iperf -c 4 -i 60

---- e1000

UP guest:
global lock
[SUM]  0.0-10.0 sec    156 MBytes    131 Mbits/sec
[SUM]  0.0-10.0 sec    151 MBytes    126 Mbits/sec
[SUM]  0.0-10.0 sec    151 MBytes    126 Mbits/sec
[SUM]  0.0-10.0 sec    151 MBytes    127 Mbits/sec
per-device lock
[SUM]  0.0-10.0 sec    164 MBytes    137 Mbits/sec
[SUM]  0.0-10.0 sec    161 MBytes    135 Mbits/sec
[SUM]  0.0-10.0 sec    158 MBytes    133 Mbits/sec
[SUM]  0.0-10.0 sec    171 MBytes    143 Mbits/sec

SMP guest (4-way):
global lock
[SUM]  0.0-13.0 sec    402 MBytes    259 Mbits/sec
[SUM]  0.0-10.1 sec    469 MBytes    391 Mbits/sec
[SUM]  0.0-10.1 sec    477 MBytes    397 Mbits/sec
[SUM]  0.0-10.0 sec    469 MBytes    393 Mbits/sec
per-device lock
[SUM]  0.0-13.0 sec    471 MBytes    304 Mbits/sec
[SUM]  0.0-10.2 sec    532 MBytes    439 Mbits/sec
[SUM]  0.0-10.1 sec    510 MBytes    423 Mbits/sec
[SUM]  0.0-10.1 sec    529 MBytes    441 Mbits/sec

---- virtio-net

UP guest:
global lock
[SUM]  0.0-13.0 sec    192 MBytes    124 Mbits/sec
[SUM]  0.0-10.0 sec    213 MBytes    178 Mbits/sec
[SUM]  0.0-10.0 sec    213 MBytes    178 Mbits/sec
[SUM]  0.0-10.0 sec    213 MBytes    178 Mbits/sec
per-device lock
[SUM]  0.0-13.0 sec    193 MBytes    125 Mbits/sec
[SUM]  0.0-10.0 sec    210 MBytes    176 Mbits/sec
[SUM]  0.0-10.0 sec    218 MBytes    183 Mbits/sec
[SUM]  0.0-10.0 sec    216 MBytes    181 Mbits/sec

SMP guest:
global lock
[SUM]  0.0-13.0 sec    446 MBytes    288 Mbits/sec
[SUM]  0.0-10.0 sec    521 MBytes    437 Mbits/sec
[SUM]  0.0-10.0 sec    525 MBytes    440 Mbits/sec
[SUM]  0.0-10.0 sec    533 MBytes    446 Mbits/sec
per-device lock
[SUM]  0.0-13.0 sec    512 MBytes    331 Mbits/sec
[SUM]  0.0-10.0 sec    617 MBytes    517 Mbits/sec
[SUM]  0.0-10.1 sec    631 MBytes    527 Mbits/sec
[SUM]  0.0-10.0 sec    626 MBytes    524 Mbits/sec



Thread overview: 17+ messages
2008-04-17 20:10 Marcelo Tosatti [this message]
2008-04-17 20:10 ` [patch 01/13] QEMU: get rid of global cpu_single_env Marcelo Tosatti
2008-04-17 20:10 ` [patch 02/13] QEMU: introduce QEMUDevice Marcelo Tosatti
2008-04-17 20:10 ` [patch 03/13] QEMU: make esp.c build conditional to SPARC target Marcelo Tosatti
2008-04-17 20:10 ` [patch 04/13] QEMU: plug QEMUDevice pt1 / ioport awareness Marcelo Tosatti
2008-04-17 20:10 ` [patch 05/13] QEMU: add a mutex to protect IRQ chip data structures Marcelo Tosatti
2008-04-17 20:10 ` [patch 06/13] QEMU: plug QEMUDevice pt2 / iomem awareness Marcelo Tosatti
2008-04-17 20:10 ` [patch 07/13] QEMU: grab device lock for ioport/iomem processing Marcelo Tosatti
2008-04-17 20:10 ` [patch 08/13] QEMU: character device locking Marcelo Tosatti
2008-04-17 20:10 ` [patch 09/13] QEMU: network " Marcelo Tosatti
2008-04-17 20:10 ` [patch 10/13] QEMU: get rid of aiocb cache Marcelo Tosatti
2008-04-17 20:10 ` [patch 11/13] QEMU: block device locking Marcelo Tosatti
2008-04-17 20:10 ` [patch 12/13] QEMU: scsi-disk reentrancy fix Marcelo Tosatti
2008-04-17 20:10 ` [patch 13/13] QEMU/KVM: get rid of global lock Marcelo Tosatti
2008-04-20 11:16 ` [patch 00/13] RFC: split the global mutex Avi Kivity
2008-04-21  0:00   ` Marcelo Tosatti
2008-04-21  6:10     ` Avi Kivity
