From: Fam Zheng <famz@redhat.com>
To: Diana Madalina Craciun <diana.craciun@nxp.com>
Cc: "qemu-devel@nongnu.org" <qemu-devel@nongnu.org>
Subject: Re: [Qemu-devel] virtio-blk-dataplane performance numbers
Date: Tue, 19 Jan 2016 11:32:25 +0800
Message-ID: <20160119033225.GC13438@ad.usersys.redhat.com>
In-Reply-To: <HE1PR04MB132162A06A81200D1D1F13D6FFCD0@HE1PR04MB1321.eurprd04.prod.outlook.com>
On Fri, 01/15 12:35, Diana Madalina Craciun wrote:
> Hi,
>
> I made some measurements comparing guest vs bare metal (using
> virtio-blk-dataplane) and got some results I cannot fully explain.
>
> First some details about the setup:
>
> - I have an ARMv8 system with one SSD connected via SATA.
>
> I have run FIO using multiple block sizes and IO depths:
>
> for i in 1 2 4 8 16 32
> do
>     for j in 4 8 16 32 64 128 256 512
>     do
>         echo "Test ${i}_${j}"
>         fio -filename=/dev/sda1 -direct=1 -iodepth $i -rw=write \
>             -ioengine=libaio -bs=${j}k -size=8G -numjobs=4 -group_reporting \
>             -name=mytest_write_${i}_${j} > /dev/out_write_${i}_${j}
>     done
> done
>
>
> I ran the same script both on bare metal and in the guest.
>
> The QEMU command line (QEMU 2.4, kernel 4.1) is the following:
>
> qemu-system-aarch64 -enable-kvm -nographic -machine type=virt -cpu host \
>     -kernel /boot/Image \
>     -append "root=/dev/ram rw console=ttyAMA0,115200 ramdisk_size=1000000" \
>     -serial tcp::4444,server,telnet -initrd /boot/rootfs.ext2.gz -m 1024 \
>     -mem-path /var/lib/hugetlbfs/pagesize-1GB \
>     -object iothread,id=iothread0 \
>     -drive if=none,id=drive0,cache=none,format=raw,file=/dev/sda,aio=native \
>     -device virtio-blk-pci,drive=drive0,scsi=off,iothread=iothread0
>
> I have pinned the I/O thread to physical CPU 0 and the VCPU thread to
> physical CPU 1.
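
(For reference, a minimal sketch of how such pinning can be done from the host
with taskset, assuming the QEMU thread IDs have been looked up via the monitor
("info iothreads" / "info cpus"); the thread IDs below are placeholders:)

    # Placeholder thread IDs: 1234 = iothread0, 1235 = vCPU 0, as reported
    # by "info iothreads" / "info cpus" in the QEMU monitor.
    taskset -cp 0 1234    # pin iothread0 to physical CPU 0
    taskset -cp 1 1235    # pin the vCPU thread to physical CPU 1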
>
> When comparing bare metal vs guest I have noticed that in some
> situations I get better results in the guest. The table below contains
> the results for sequential read (rows: IO depth, columns: block size;
> the numbers are the guest vs bare metal degradation percentage). For
> random read and write the results do not show large variations, but for
> sequential read and write I see significant variations.
>
> iodepth    4K        8K        16K       32K       64K       128K      256K      512K
> 1          50.28     37.19     36.08     -0.4      4.09      5.18      3.22      1.71
> 2          46.22     22.63     24.41     -0.45     1.72      2.17      2.37      -4.64
> 4          -10.82    15.60     11.64     5.21      0.09      2.86      -3.52     6.71
> 8          -18.05    5.96      8.82      0.26      0.95      4.30      -13.53    17.9
> 16         12.78     11.76     6.29      3.42      7.00      18.14     -0.4      5.59
> 32         16.99     7.98      4.70      7.67      -9.78     3.66      9.48      -3.55
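
(Presumably the degradation is computed roughly as
(bare_metal - guest) / bare_metal * 100, so 50.28 would mean the guest reached
about half of the bare-metal throughput for that combination, and a negative
value would mean the guest was faster.)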
>
> The negative numbers may come from run-to-run variation of the benchmark;
> I would probably have to run it multiple times to measure that variation
> on the host as well.
>
> However, if somebody has an explanation of why I might get better results
> in the guest than on bare metal, or at least a hint about what direction
> to investigate, I would appreciate it.
It's probably due to request merges in virtio-blk-pci. You can collect
blktrace on the host side to check: the I/O sizes sent to the host device
would be larger because of the merges.
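
A minimal sketch of that check, assuming the SSD is /dev/sda on the host and
the blktrace/blkparse/btt tools are installed (device name, duration and file
names below are placeholders):

    # Trace the host device for 30 seconds while fio is running in the guest.
    blktrace -d /dev/sda -w 30 -o guest_run

    # Inspect the dispatched (D) requests; the "+ N" field is the request
    # size in sectors and should be larger than the guest's 4k/8k I/Os if
    # requests are being merged before they reach the host device.
    blkparse -i guest_run | grep ' D ' | head

    # Or summarize with btt, whose "Device Merge Information" section shows
    # the queued-to-dispatched ratio and the dispatched block sizes.
    blkparse -i guest_run -d guest_run.bin > /dev/null
    btt -i guest_run.bin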
Fam