From: Asias He <asias@redhat.com>
To: Khoa Huynh <khoa@us.ibm.com>
Cc: Kevin Wolf <kwolf@redhat.com>,
Anthony Liguori <aliguori@us.ibm.com>,
"Michael S. Tsirkin" <mst@redhat.com>,
qemu-devel@nongnu.org, Stefan Hajnoczi <stefanha@redhat.com>,
Paolo Bonzini <pbonzini@redhat.com>
Subject: Re: [Qemu-devel] [PATCH 0/7] virtio: virtio-blk data plane
Date: Wed, 21 Nov 2012 13:22:42 +0800
Message-ID: <50AC6522.1000505@redhat.com>
In-Reply-To: <OF39E32C0B.05B41E16-ON85257ABC.005260E0-86257ABC.0052AF40@us.ibm.com>

On 11/20/2012 11:03 PM, Khoa Huynh wrote:
> Asias He <asias@redhat.com> wrote on 11/20/2012 03:02:07 AM:
>
>> From: Asias He <asias@redhat.com>
>> To: Stefan Hajnoczi <stefanha@redhat.com>,
>> Cc: qemu-devel@nongnu.org, Anthony Liguori/Austin/IBM@IBMUS, Paolo
>> Bonzini <pbonzini@redhat.com>, Kevin Wolf <kwolf@redhat.com>,
>> "Michael S. Tsirkin" <mst@redhat.com>, Khoa Huynh/Austin/IBM@IBMUS
>> Date: 11/20/2012 03:01 AM
>> Subject: Re: [PATCH 0/7] virtio: virtio-blk data plane
>>
>> Hello Stefan,
>>
>> On 11/15/2012 11:18 PM, Stefan Hajnoczi wrote:
>> > This series adds the -device virtio-blk-pci,x-data-plane=on property that
>> > enables a high performance I/O codepath. A dedicated thread is used to process
>> > virtio-blk requests outside the global mutex and without going through the QEMU
>> > block layer.
>> >
>> > Khoa Huynh <khoa@us.ibm.com> reported an increase from 140,000 IOPS to 600,000
>> > IOPS for a single VM using virtio-blk-data-plane in July:
>> >
>> > http://comments.gmane.org/gmane.comp.emulators.kvm.devel/94580
>> >
>> > The virtio-blk-data-plane approach was originally presented at Linux Plumbers
>> > Conference 2010. The following slides contain a brief overview:
>> >
>> > http://linuxplumbersconf.org/2010/ocw/system/presentations/651/original/Optimizing_the_QEMU_Storage_Stack.pdf
>> >
>> > The basic approach is:
>> > 1. Each virtio-blk device has a thread dedicated to handling ioeventfd
>> > signalling when the guest kicks the virtqueue.
>> > 2. Requests are processed without going through the QEMU block layer using
>> > Linux AIO directly.
>> > 3. Completion interrupts are injected via irqfd from the dedicated thread.
>> >
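
For anyone unfamiliar with the pattern being described, the dedicated thread
boils down to an eventfd/Linux AIO loop. The following is only a rough sketch
of that idea, not the actual hw/dataplane code: the ioeventfd/irqfd descriptors
are assumed to have been registered with KVM elsewhere, the vring walk is
replaced by a single hard-coded read, and error handling is omitted.

#include <stdint.h>
#include <unistd.h>
#include <libaio.h>

struct dataplane {
    int ioeventfd;     /* signalled when the guest kicks the virtqueue    */
    int irqfd;         /* writing 1 here injects the completion interrupt */
    int img_fd;        /* O_DIRECT file descriptor of the raw image       */
    io_context_t ctx;  /* Linux AIO context obtained from io_setup()      */
};

static void *data_plane_thread(void *opaque)
{
    struct dataplane *s = opaque;
    static char buf[4096] __attribute__((aligned(4096)));
    struct io_event events[64];

    for (;;) {
        uint64_t kicks, one = 1;
        struct iocb iocb, *iocbs[1] = { &iocb };

        /* 1. Wait for the guest kick (ioeventfd). */
        if (read(s->ioeventfd, &kicks, sizeof(kicks)) != sizeof(kicks)) {
            continue;
        }

        /* 2. Submit the request(s) directly with Linux AIO; a real
         *    implementation pops them off the vring first. */
        io_prep_pread(&iocb, s->img_fd, buf, sizeof(buf), 0);
        io_submit(s->ctx, 1, iocbs);

        /* 3. Reap completions and inject the interrupt via irqfd. */
        if (io_getevents(s->ctx, 1, 64, events, NULL) > 0) {
            write(s->irqfd, &one, sizeof(one));
        }
    }
    return NULL;
}

The point is simply that steps 1-3 never take the QEMU global mutex or enter
the QEMU block layer.
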
>> > To try it out:
>> >
>> > qemu -drive if=none,id=drive0,cache=none,aio=native,format=raw,file=...
>> >      -device virtio-blk-pci,drive=drive0,scsi=off,x-data-plane=on
>>
>>
>> Is this the latest dataplane bits:
>> (git://github.com/stefanha/qemu.git virtio-blk-data-plane)
>>
>> commit 7872075c24fa01c925d4f41faa9d04ce69bf5328
>> Author: Stefan Hajnoczi <stefanha@redhat.com>
>> Date: Wed Nov 14 15:45:38 2012 +0100
>>
>> virtio-blk: add x-data-plane=on|off performance feature
>>
>>
>> With this commit on a ramdisk based box, I am seeing about 10K IOPS with
>> x-data-plane on and 90K IOPS with x-data-plane off.
>>
>> Any ideas?
>
> Hi Asias, I'll try this latest patchset next week and will report results
> as soon as they are available. I am currently on vacation this week due
> to the Thanksgiving holiday in the US... (Previously, I was able to get
> up to 1.33 million IOPS for a single KVM guest using Stefan's previous
> code.)

Hi Khoa, thanks! Yes, I saw that number in your slides.
>
> Thanks,
> -Khoa
>
>>
>> Command line I used:
>>
>> IMG=/dev/ram0
>> x86_64-softmmu/qemu-system-x86_64 \
>> -drive file=/root/img/sid.img,if=ide \
>> -drive file=${IMG},if=none,cache=none,aio=native,id=disk1 \
>> -device virtio-blk-pci,x-data-plane=off,drive=disk1,scsi=off \
>> -kernel $KERNEL -append "root=/dev/sdb1 console=tty0" \
>> -L /tmp/qemu-dataplane/share/qemu/ -nographic -vnc :0 -enable-kvm \
>> -m 2048 -smp 4 -cpu qemu64,+x2apic -M pc
>>
>>
>> >
>> > Limitations:
>> > * Only format=raw is supported
>> > * Live migration is not supported
>> > * Block jobs, hot unplug, and other operations fail with -EBUSY
>> > * I/O throttling limits are ignored
>> > * Only Linux hosts are supported due to Linux AIO usage
>> >
>> > The code has reached a stage where I feel it is ready to merge. Users have
>> > been playing with it for some time and want the significant performance boost.
>> >
>> > We are refactoring QEMU to get rid of the global mutex. I believe that
>> > virtio-blk-data-plane can eventually become the default mode of operation.
>> >
>> > Instead of waiting for global mutex removal efforts to finish, I want to use
>> > virtio-blk-data-plane as an example device for AioContext and threaded hw
>> > dispatch refactoring. This means:
>> >
>> > 1. When the block layer can bind to an AioContext and execute I/O outside the
>> > global mutex, virtio-blk-data-plane can use this (and gain image format
>> > support).
>> >
>> > 2. When hw dispatch no longer needs the global mutex we can use hw/virtio.c
>> > again and perhaps run a pool of iothreads instead of dedicated data plane
>> > threads.
>> >
>> > But in the meantime, I have cleaned up the virtio-blk-data-plane code so that
>> > it can be merged as an experimental feature.
>> >
>> > Changes from the RFC v9:
>> > * Add x-data-plane=on|off option and coexist with regular virtio-blk code
>> > * Create thread from BH so it inherits iothread cpusets
>> > * Drain requests on vm_stop() so stopped guest does not access image file
>> > * Add migration blocker
>> > * Add bdrv_in_use() to prevent block jobs and other operations that can interfere
>> > * Drop IOQueue request merging for simplicity
>> > * Drop ioctl interrupt injection and always use irqfd for simplicity
>> > * Major cleanup to split up source files
>> > * Rebase from qemu-kvm.git onto qemu.git
>> > * Address Michael Tsirkin's review comments
>> >
>> > Stefan Hajnoczi (7):
>> > raw-posix: add raw_get_aio_fd() for virtio-blk-data-plane
>> > configure: add CONFIG_VIRTIO_BLK_DATA_PLANE
>> > dataplane: add virtqueue vring code
>> > dataplane: add event loop
>> > dataplane: add Linux AIO request queue
>> > dataplane: add virtio-blk data plane code
>> > virtio-blk: add x-data-plane=on|off performance feature
>> >
>> > block.h | 9 +
>> > block/raw-posix.c | 34 ++++
>> > configure | 21 +++
>> > hw/Makefile.objs | 2 +-
>> > hw/dataplane/Makefile.objs | 3 +
>> > hw/dataplane/event-poll.c | 109 ++++++++++++
>> > hw/dataplane/event-poll.h | 40 +++++
>> > hw/dataplane/ioq.c | 118 +++++++++++++
>> > hw/dataplane/ioq.h | 57 +++++++
>> > hw/dataplane/virtio-blk.c | 414 +++++++++++++++++++++++++++++++++++++++++++++++++
>> > hw/dataplane/virtio-blk.h | 41 +++++
>> > hw/dataplane/vring.c | 321 +++++++++++++++++++++++++++++++++++
>> > hw/dataplane/vring.h | 54 ++++++
>> > hw/virtio-blk.c | 59 ++++++-
>> > hw/virtio-blk.h | 1 +
>> > hw/virtio-pci.c | 3 +
>> > trace-events | 9 +
>> > 17 files changed, 1293 insertions(+), 2 deletions(-)
>> > create mode 100644 hw/dataplane/Makefile.objs
>> > create mode 100644 hw/dataplane/event-poll.c
>> > create mode 100644 hw/dataplane/event-poll.h
>> > create mode 100644 hw/dataplane/ioq.c
>> > create mode 100644 hw/dataplane/ioq.h
>> > create mode 100644 hw/dataplane/virtio-blk.c
>> > create mode 100644 hw/dataplane/virtio-blk.h
>> > create mode 100644 hw/dataplane/vring.c
>> > create mode 100644 hw/dataplane/vring.h
>> >
>>
>>
>> --
>> Asias
>>
>
--
Asias