From: Paolo Bonzini <pbonzini@redhat.com>
To: Karl Rister <kmr@us.ibm.com>, Fam Zheng <famz@redhat.com>
Cc: qemu-devel@nongnu.org, stefanha@redhat.com
Subject: Re: [Qemu-devel] dataplane performance on s390
Date: Tue, 10 Jun 2014 22:19:35 +0200
Message-ID: <53976857.2080508@redhat.com>
In-Reply-To: <539754EC.3070600@us.ibm.com>

On 06/10/2014 20:56, Karl Rister wrote:
> On 06/09/2014 08:40 PM, Fam Zheng wrote:
>> On Mon, 06/09 15:43, Karl Rister wrote:
>>> Hi All
>>>
>>> I was asked by our development team to do a performance sniff test
>>> of the latest dataplane code on s390 and compare it against
>>> qemu.git.  Here is a brief description of the configuration, the
>>> testing done, and then the results.
>>>
>>> Configuration:
>>>
>>> Host: 26 CPU LPAR, 64GB, 8 zFCP adapters
>>> Guest: 4 VCPU, 1GB, 128 virtio block devices
>>>
>>> Each virtio block device maps to a dm-multipath device in the host
>>> with 8 paths.  Multipath is configured with the service-time
>>> policy.  All block devices are configured to use the deadline IO
>>> scheduler.
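>>>
>>> (As a sketch of that host-side setup, assuming the stock
>>> multipath-tools configuration syntax and sysfs paths; the device
>>> glob below is illustrative, not the exact naming used:)
>>>
>>>   # /etc/multipath.conf (excerpt): service-time path selector
>>>   #   defaults {
>>>   #       path_selector "service-time 0"
>>>   #   }
>>>
>>>   # pin the deadline elevator on each SCSI path device
>>>   for dev in /sys/block/sd*/queue/scheduler; do
>>>       echo deadline > "$dev"
>>>   done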
>>>
>>> Test:
>>>
>>> FIO is used to run 4 scenarios: sequential read, sequential write,
>>> random read, and random write.  Sequential scenarios use a 128KB
>>> request size and random scenarios use an 8KB request size.  Each
>>> scenario is run with an increasing number of jobs, from 1 to 128
>>> (powers of 2).  Each job is bound to an individual file on an ext3
>>> file system on a virtio device and uses O_DIRECT, libaio, and
>>> iodepth=1.  Each test is run three times for 2 minutes each; the
>>> first iteration (a warmup) is thrown out and the next two
>>> iterations are averaged together.
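>>>
>>> (A sketch of one such run, using fio's standard command-line flags;
>>> the job name, mount point, and job count here are illustrative:)
>>>
>>>   fio --name=randread --directory=/mnt/vdev01 \
>>>       --rw=randread --bs=8k --direct=1 \
>>>       --ioengine=libaio --iodepth=1 \
>>>       --numjobs=4 --runtime=120 --time_based
>>>
>>> (With --directory set and no --filename, fio creates a separate
>>> file per job, matching the one-file-per-job binding above; the
>>> sequential scenarios would use --rw=read or --rw=write with
>>> --bs=128k.)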
>>>
>>> Results:
>>>
>>> Baseline: qemu.git 93f94f9018229f146ed6bbe9e5ff72d67e4bd7ab
>>>
>>> Dataplane: bdrv_set_aio_context 0ab50cde71aa27f39b8a3ea4766ff82671adb2a4
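>>>
>>> (For the dataplane runs each device is driven by an IOThread; a
>>> minimal sketch, assuming the iothread device property available
>>> around that series and virtio-blk-ccw on s390; the ids and the
>>> multipath device path are illustrative:)
>>>
>>>   qemu-system-s390x ... \
>>>       -object iothread,id=iot0 \
>>>       -drive if=none,id=disk0,file=/dev/mapper/mpatha,format=raw,cache=none,aio=native \
>>>       -device virtio-blk-ccw,drive=disk0,iothread=iot0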
>>
>> Hi Karl,
>>
>> Thanks for the results.
>>
>> The throughput differences look minimal; where is the bandwidth
>> saturated in these tests?  And why use iodepth=1, not more?
>
> Hi Fam
>
> Based on previously collected data, the configuration is hitting
> saturation at the following points:
>
> Sequential Read: 128 jobs
> Sequential Write: 32 jobs
> Random Read: 64 jobs
> Random Write: saturation not reached
>
> The iodepth=1 configuration is a somewhat arbitrary choice that is
> only limited by machine run time; I could certainly run higher loads,
> and at times I do.
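>
> (A sketch of how a deeper-queue sweep would look, reusing the fio
> flags from the runs above; the depth values are arbitrary:)
>
>   for depth in 1 4 16 64; do
>       fio --name=randread-qd${depth} --directory=/mnt/vdev01 \
>           --rw=randread --bs=8k --direct=1 --ioengine=libaio \
>           --iodepth=${depth} --numjobs=4 --runtime=120 --time_based
>   done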

What is the overall throughput in IOPS and MB/s, on both bare metal and virt?
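
(For pulling those numbers out, fio's JSON output can be summed across
jobs; a generic sketch, and the jq step is my assumption, not part of
the runs described above:)

  fio --output-format=json ... > result.json
  jq '[.jobs[].read.iops] | add' result.json   # aggregate read IOPS
  jq '[.jobs[].read.bw] | add' result.json     # aggregate read bandwidth (KB/s)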

Paolo


Thread overview: 5+ messages
2014-06-09 20:43 [Qemu-devel] dataplane performance on s390 Karl Rister
2014-06-10  1:40 ` Fam Zheng
2014-06-10 18:56   ` Karl Rister
2014-06-10 20:19     ` Paolo Bonzini [this message]
2014-06-19 10:39   ` Stefan Hajnoczi
