From: Stefan Hajnoczi <stefanha@gmail.com>
To: Jonghwan Choi <jhbird.choi@samsung.com>
Cc: qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] I/O performance degradation with Virtio-Blk-Data-Plane
Date: Mon, 23 Sep 2013 15:27:58 +0200 [thread overview]
Message-ID: <20130923132758.GC5814@stefanha-thinkpad.redhat.com> (raw)
In-Reply-To: <002b01cea9d5$d4340430$7c9c0c90$%choi@samsung.com>
On Thu, Sep 05, 2013 at 10:18:28AM +0900, Jonghwan Choi wrote:
Thanks for posting these details.
Have you tried running with x-data-plane=off at vcpu = 8, and how does
that performance compare to x-data-plane=off at vcpu = 1?
> > 1. The fio results so it's clear which cases performed worse and by how
> > much.
> >
> When I set vcpu = 8, read performance decreased by about 25%.
> In my test, I got the best performance with vcpu = 1.
Performance with vcpu = 8 is 25% worse than performance with vcpu = 1?
Can you try pinning threads to host CPUs? See libvirt emulatorpin and
vcpupin attributes:
http://libvirt.org/formatdomain.html#elementsCPUTuning
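As a starting point, pinning looks something like this in the domain XML
(the host CPU numbers below are placeholders; pick CPUs that match your
host topology):

```xml
<!-- Hypothetical <cputune> for a 1-vcpu guest: the vcpu thread and the
     emulator/dataplane threads each get a dedicated host CPU. -->
<cputune>
  <vcpupin vcpu='0' cpuset='2'/>
  <emulatorpin cpuset='3'/>
</cputune>
```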
> > 2. The fio job files.
> >
> [testglobal]
> description=high_iops
> exec_prerun="echo 3 > /proc/sys/vm/drop_caches"
> group_reporting=1
> rw=read
> direct=1
> ioengine=sync
> bs=4m
> numjobs=1
> size=2048m
A couple of points to check:
1. This test case is synchronous and latency-sensitive; you are not
benchmarking parallel I/O, so x-data-plane=on is not expected to
perform any better than x-data-plane=off.
The point of x-data-plane=on is to let smp > 1 guests with parallel
I/O scale well. If the workload does not meet both of those
conditions then I don't expect you to see any gains over
x-data-plane=off.
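For reference, a sketch of the relevant QEMU options (the paths and IDs
are placeholders, and libvirt generates the equivalent command line for
you if you use the XML configuration):

```
# Hypothetical invocation: 8 vcpus, virtio-blk with dataplane enabled.
# x-data-plane is still experimental and currently requires a raw image
# with cache=none, plus scsi=off and config-wce=off on the device.
qemu-system-x86_64 -enable-kvm -smp 8 -m 4096 \
    -drive if=none,id=drive0,file=/path/to/disk.img,format=raw,cache=none,aio=native \
    -device virtio-blk-pci,drive=drive0,scsi=off,config-wce=off,x-data-plane=on
```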
If you want to try parallel I/Os, I suggest using:
ioengine=linuxaio
iodepth=16
2. size=2048m with bs=4m on an SSD drive seems quite small because the
test would complete quickly. What is the overall running time of
this test?
To collect stable results it's usually a good idea for the test to
run for at least a couple of minutes (say, 2 minutes minimum);
otherwise outliers can skew the results too much.
You may need to increase 'size' or use the 'runtime=2m' option
instead.
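Putting both suggestions together, a sketch of a revised job file (the
size and runtime values below are examples to tune for your device):

```
[testglobal]
description=high_iops
exec_prerun="echo 3 > /proc/sys/vm/drop_caches"
group_reporting=1
rw=read
direct=1
; submit parallel I/Os via Linux AIO, 16 requests in flight
ioengine=linuxaio
iodepth=16
bs=4m
numjobs=1
; make the file large enough that runtime dominates, then stop after
; 2 minutes regardless of how much has been read
size=10g
runtime=2m
time_based=1
```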
Stefan
Thread overview:
2013-09-02 8:24 [Qemu-devel] I/O performance degradation with Virtio-Blk-Data-Plane Jonghwan Choi
2013-09-04 8:58 ` Stefan Hajnoczi
2013-09-05 1:18 ` Jonghwan Choi
2013-09-23 13:27 ` Stefan Hajnoczi [this message]