From: Asias He <asias@redhat.com>
To: Stefan Hajnoczi <stefanha@gmail.com>
Cc: Kevin Wolf <kwolf@redhat.com>,
Anthony Liguori <aliguori@us.ibm.com>,
"Michael S. Tsirkin" <mst@redhat.com>,
qemu-devel <qemu-devel@nongnu.org>, Khoa Huynh <khoa@us.ibm.com>,
Stefan Hajnoczi <stefanha@redhat.com>,
Paolo Bonzini <pbonzini@redhat.com>
Subject: Re: [Qemu-devel] [PATCH 0/7] virtio: virtio-blk data plane
Date: Wed, 21 Nov 2012 15:00:25 +0800 [thread overview]
Message-ID: <50AC7C09.1080808@redhat.com>
In-Reply-To: <CAJSP0QVweXFC3HrrO8z6Tt9OH5WtEXPfZi=ou-Yax3KMskA25A@mail.gmail.com>
On 11/21/2012 02:44 PM, Stefan Hajnoczi wrote:
> On Wed, Nov 21, 2012 at 7:42 AM, Asias He <asias@redhat.com> wrote:
>> On 11/21/2012 01:39 PM, Asias He wrote:
>>> On 11/20/2012 08:25 PM, Stefan Hajnoczi wrote:
>>>> On Tue, Nov 20, 2012 at 1:21 PM, Stefan Hajnoczi <stefanha@gmail.com> wrote:
>>>>> On Tue, Nov 20, 2012 at 10:02 AM, Asias He <asias@redhat.com> wrote:
>>>>>> Hello Stefan,
>>>>>>
>>>>>> On 11/15/2012 11:18 PM, Stefan Hajnoczi wrote:
>>>>>>> This series adds the -device virtio-blk-pci,x-data-plane=on property that
>>>>>>> enables a high performance I/O codepath. A dedicated thread is used to process
>>>>>>> virtio-blk requests outside the global mutex and without going through the QEMU
>>>>>>> block layer.
>>>>>>>
>>>>>>> Khoa Huynh <khoa@us.ibm.com> reported an increase from 140,000 IOPS to 600,000
>>>>>>> IOPS for a single VM using virtio-blk-data-plane in July:
>>>>>>>
>>>>>>> http://comments.gmane.org/gmane.comp.emulators.kvm.devel/94580
>>>>>>>
>>>>>>> The virtio-blk-data-plane approach was originally presented at Linux Plumbers
>>>>>>> Conference 2010. The following slides contain a brief overview:
>>>>>>>
>>>>>>> http://linuxplumbersconf.org/2010/ocw/system/presentations/651/original/Optimizing_the_QEMU_Storage_Stack.pdf
>>>>>>>
>>>>>>> The basic approach is:
>>>>>>> 1. Each virtio-blk device has a thread dedicated to handling ioeventfd
>>>>>>> signalling when the guest kicks the virtqueue.
>>>>>>> 2. Requests are processed without going through the QEMU block layer using
>>>>>>> Linux AIO directly.
>>>>>>> 3. Completion interrupts are injected via irqfd from the dedicated thread.
>>>>>>>
>>>>>>> To try it out:
>>>>>>>
>>>>>>> qemu -drive if=none,id=drive0,cache=none,aio=native,format=raw,file=...
>>>>>>> -device virtio-blk-pci,drive=drive0,scsi=off,x-data-plane=on
>>>>>>
>>>>>>
>>>>>> Is this the latest dataplane bits:
>>>>>> (git://github.com/stefanha/qemu.git virtio-blk-data-plane)
>>>>>>
>>>>>> commit 7872075c24fa01c925d4f41faa9d04ce69bf5328
>>>>>> Author: Stefan Hajnoczi <stefanha@redhat.com>
>>>>>> Date: Wed Nov 14 15:45:38 2012 +0100
>>>>>>
>>>>>> virtio-blk: add x-data-plane=on|off performance feature
>>>>>>
>>>>>>
>>>>>> With this commit on a ramdisk based box, I am seeing about 10K IOPS with
>>>>>> x-data-plane on and 90K IOPS with x-data-plane off.
>>>>>>
>>>>>> Any ideas?
>>>>>>
>>>>>> Command line I used:
>>>>>>
>>>>>> IMG=/dev/ram0
>>>>>> x86_64-softmmu/qemu-system-x86_64 \
>>>>>> -drive file=/root/img/sid.img,if=ide \
>>>>>> -drive file=${IMG},if=none,cache=none,aio=native,id=disk1 -device
>>>>>> virtio-blk-pci,x-data-plane=off,drive=disk1,scsi=off \
>>>>>> -kernel $KERNEL -append "root=/dev/sdb1 console=tty0" \
>>>>>> -L /tmp/qemu-dataplane/share/qemu/ -nographic -vnc :0 -enable-kvm -m
>>>>>> 2048 -smp 4 -cpu qemu64,+x2apic -M pc
>>>>>
>>>>> Was just about to send out the latest patch series which addresses
>>>>> review comments, so I have tested the latest code
>>>>> (61b70fef489ce51ecd18d69afb9622c110b9315c).
>>>>
>>>> Rebased onto qemu.git/master before sending out. The commit ID is now:
>>>> cf6ed6406543ecc43895012a9ac9665e3753d5e8
>>>>
>>>> https://github.com/stefanha/qemu/commits/virtio-blk-data-plane
>>>>
>>>> Stefan
>>>
>>> Ok, thanks. /me trying
>>
>> Hi Stefan,
>>
>> If I enable merging in the guest, the IOPS for seq read/write goes up
>> to ~400K/300K. If I disable merging in the guest, the IOPS drops to
>> ~17K/24K for seq read/write (similar to the result I posted yesterday,
>> with merging disabled). Could you please also share the numbers for
>> random read and write in your setup?
>
> Thanks for running the test. Please send your rand read/write fio
> jobfile so I can run the exact same test.
>
> BTW I was running the default F18 (host) and RHEL 6.3 (guest) I/O
> schedulers in my test yesterday.
Sure, this is the script I used to run the test in the guest.
# cat run.sh
#!/bin/sh
cat > all.fio <<EOF
[global]
exec_prerun="echo 3 > /proc/sys/vm/drop_caches"
group_reporting
ioscheduler=noop
thread
bs=4k
size=512MB
direct=1
filename=/dev/vda
numjobs=16
ioengine=libaio
iodepth=64
loops=3
ramp_time=0
[seq-read]
stonewall
rw=read
[seq-write]
stonewall
rw=write
[rnd-read]
stonewall
rw=randread
[rnd-write]
stonewall
rw=randwrite
EOF
echo noop > /sys/block/vda/queue/scheduler
# Pick one: nomerges=2 disables all request merging, nomerges=0 enables it.
# As written, the second echo overrides the first, so merging stays enabled;
# comment out one line to test the other configuration.
echo 2 > /sys/block/vda/queue/nomerges
echo 0 > /sys/block/vda/queue/nomerges
fio all.fio --output=f.log
echo
echo -------------------------------------
grep iops f.log
grep clat f.log | grep avg
grep cpu f.log
grep ios f.log | grep in_queue
# The virtio-blk IRQ number (41 on this box) varies by guest; check
# /proc/interrupts and adjust the two greps below accordingly.
grep requests /proc/interrupts | grep virtio | grep 41\:
grep ^intr /proc/stat | awk '{print " 41: interrupt in total: "$44}'
fio all.fio --showcmd
--
Asias
Thread overview: 38+ messages
2012-11-15 15:18 [Qemu-devel] [PATCH 0/7] virtio: virtio-blk data plane Stefan Hajnoczi
2012-11-15 15:19 ` [Qemu-devel] [PATCH 1/7] raw-posix: add raw_get_aio_fd() for virtio-blk-data-plane Stefan Hajnoczi
2012-11-15 20:03 ` Anthony Liguori
2012-11-16 6:15 ` Stefan Hajnoczi
2012-11-16 8:22 ` Paolo Bonzini
2012-11-15 15:19 ` [Qemu-devel] [PATCH 2/7] configure: add CONFIG_VIRTIO_BLK_DATA_PLANE Stefan Hajnoczi
2012-11-15 15:19 ` [Qemu-devel] [PATCH 3/7] dataplane: add virtqueue vring code Stefan Hajnoczi
2012-11-15 15:37 ` Paolo Bonzini
2012-11-15 20:09 ` Anthony Liguori
2012-11-16 6:24 ` Stefan Hajnoczi
2012-11-16 7:48 ` Christian Borntraeger
2012-11-16 8:13 ` Stefan Hajnoczi
2012-11-17 16:15 ` Blue Swirl
2012-11-18 9:27 ` Stefan Hajnoczi
2012-11-15 15:19 ` [Qemu-devel] [PATCH 4/7] dataplane: add event loop Stefan Hajnoczi
2012-11-15 15:19 ` [Qemu-devel] [PATCH 5/7] dataplane: add Linux AIO request queue Stefan Hajnoczi
2012-11-15 15:19 ` [Qemu-devel] [PATCH 6/7] dataplane: add virtio-blk data plane code Stefan Hajnoczi
2012-11-15 15:19 ` [Qemu-devel] [PATCH 7/7] virtio-blk: add x-data-plane=on|off performance feature Stefan Hajnoczi
2012-11-15 18:48 ` Michael S. Tsirkin
2012-11-15 19:34 ` Khoa Huynh
2012-11-15 21:11 ` Anthony Liguori
2012-11-15 21:08 ` Anthony Liguori
2012-11-16 6:22 ` Stefan Hajnoczi
2012-11-19 10:38 ` Kevin Wolf
2012-11-19 10:51 ` Paolo Bonzini
2012-11-16 7:40 ` Paolo Bonzini
2012-11-20 9:02 ` [Qemu-devel] [PATCH 0/7] virtio: virtio-blk data plane Asias He
2012-11-20 12:21 ` Stefan Hajnoczi
2012-11-20 12:25 ` Stefan Hajnoczi
2012-11-21 5:39 ` Asias He
2012-11-21 6:42 ` Asias He
2012-11-21 6:44 ` Stefan Hajnoczi
2012-11-21 7:00 ` Asias He [this message]
2012-11-22 12:12 ` Stefan Hajnoczi
2012-11-21 5:22 ` Asias He
2012-11-22 12:16 ` Stefan Hajnoczi
2012-11-20 15:03 ` Khoa Huynh
2012-11-21 5:22 ` Asias He