From: Bob Liu <bob.liu@oracle.com>
To: Marcus Granado <marcus.granado@citrix.com>
Cc: Arianna Avanzini <avanzini.arianna@gmail.com>,
axboe@fb.com, felipe.franciosi@citrix.com,
linux-kernel@vger.kernel.org,
Christoph Hellwig <hch@infradead.org>,
david.vrabel@citrix.com, xen-devel@lists.xenproject.org,
boris.ostrovsky@oracle.com,
Jonathan Davies <Jonathan.Davies@eu.citrix.com>,
Rafal Mielniczuk <Rafal.Mielniczuk@citrix.com>
Subject: Re: [Xen-devel] [PATCH RFC v2 0/5] Multi-queue support for xen-blkfront and xen-blkback
Date: Wed, 01 Jul 2015 08:04:04 +0800 [thread overview]
Message-ID: <55932E74.5060904@oracle.com> (raw)
In-Reply-To: <5592A5EF.2050005@citrix.com>
On 06/30/2015 10:21 PM, Marcus Granado wrote:
> On 13/05/15 11:29, Bob Liu wrote:
>>
>> On 04/28/2015 03:46 PM, Arianna Avanzini wrote:
>>> Hello Christoph,
>>>
>>> Il 28/04/2015 09:36, Christoph Hellwig ha scritto:
>>>> What happened to this patchset?
>>>>
>>>
>>> It was passed on to Bob Liu, who published a follow-up patchset here: https://lkml.org/lkml/2015/2/15/46
>>>
>>
>> Right, and then I was interrupted by another xen-block feature: the 'multi-page' ring.
>> I'll be back on this patchset soon. Thank you!
>>
>> -Bob
>>
>
> Hi,
>
> Our measurements for the multiqueue patch indicate a clear improvement in iops when more queues are used.
>
> The measurements were obtained under the following conditions:
>
> - using blkback as the dom0 backend, with the multiqueue patch applied to a dom0 kernel 4.0 running on 8 vcpus.
>
> - using a recent Ubuntu 15.04 kernel 3.19 with the multiqueue frontend patch applied, used as a guest on 4 vcpus.
>
> - using a Micron RealSSD P320h as the underlying local storage on a Dell PowerEdge R720 with 2 Xeon E5-2643 v2 cpus.
>
> - fio 2.2.7-22-g36870 as the generator of synthetic loads in the guest. We used direct I/O to skip caching in the guest and ran fio for 60s, reading a range of block sizes from 512 bytes to 4MiB. A queue depth of 32 per queue was used to saturate individual vcpus in the guest.
>
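A fio job file matching the workload described above might look like the following (a sketch for illustration; the target device name and ioengine are assumptions, and bs was varied per run in the actual sweep):

```ini
; Illustrative fio job approximating the sequential-read workload above.
; /dev/xvdb is a placeholder for the guest's Xen virtual block device.
[seqread]
rw=read          ; sequential reads
direct=1         ; direct I/O, bypassing the guest page cache
ioengine=libaio  ; assumed async engine; not stated in the original report
bs=4k            ; varied from 512 bytes to 4MiB across runs
iodepth=32       ; queue depth per job, as described above
numjobs=8        ; matches the fio_threads column in the results table
runtime=60
time_based
filename=/dev/xvdb
```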
> We were interested in observing storage iops for different values of block sizes. Our expectation was that iops would improve when increasing the number of queues, because both the guest and dom0 would be able to make use of more vcpus to handle these requests.
>
> These are the results (as aggregate iops for all the fio threads) that we got for the conditions above with sequential reads:
>
> fio_threads  io_depth  block_size  1-queue_iops  8-queue_iops
>      8          32        512          158K          264K
>      8          32         1K          157K          260K
>      8          32         2K          157K          258K
>      8          32         4K          148K          257K
>      8          32         8K          124K          207K
>      8          32        16K           84K          105K
>      8          32        32K           50K           54K
>      8          32        64K           24K           27K
>      8          32       128K           11K           13K
>
> 8-queue iops were better than single-queue iops for all block sizes. There were very good improvements as well for sequential writes with block size 4K (from 80K iops with a single queue to 230K iops with 8 queues), and no regressions were visible in any measurement performed.
>
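As a quick sanity check on the figures above, the aggregate iops can be converted into approximate bandwidth; this calculation is illustrative and not part of the original report:

```python
# Convert aggregate iops at a given block size into MiB/s, to show
# why the multiqueue gain narrows as block size grows: at large block
# sizes the device bandwidth, not request handling, becomes the limit.
def bandwidth_mib_s(iops: int, block_size_bytes: int) -> float:
    return iops * block_size_bytes / (1024 * 1024)

# 4K sequential reads from the table above:
one_queue  = bandwidth_mib_s(148_000, 4096)   # ~578 MiB/s
eight_queue = bandwidth_mib_s(257_000, 4096)  # ~1004 MiB/s
print(one_queue, eight_queue)
```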
Great! Thank you very much for the test.
I'm trying to rebase these patches to the latest kernel version (v4.1) and will send them out in the following days.
--
Regards,
-Bob
Thread overview: 32+ messages
2014-09-11 23:57 [PATCH RFC v2 0/5] Multi-queue support for xen-blkfront and xen-blkback Arianna Avanzini
2014-09-11 23:57 ` [PATCH RFC v2 1/5] xen, blkfront: port to the the multi-queue block layer API Arianna Avanzini
2014-09-13 19:29 ` Christoph Hellwig
2014-10-01 20:18 ` Konrad Rzeszutek Wilk
2014-09-11 23:57 ` [PATCH RFC v2 2/5] xen, blkfront: introduce support for multiple block rings Arianna Avanzini
2014-10-01 20:18 ` Konrad Rzeszutek Wilk
2014-09-11 23:57 ` [PATCH RFC v2 3/5] xen, blkfront: negotiate the number of block rings with the backend Arianna Avanzini
2014-09-12 10:46 ` David Vrabel
2014-10-01 20:18 ` Konrad Rzeszutek Wilk
2014-09-11 23:57 ` [PATCH RFC v2 4/5] xen, blkback: introduce support for multiple block rings Arianna Avanzini
2014-10-01 20:18 ` Konrad Rzeszutek Wilk
2014-09-11 23:57 ` [PATCH RFC v2 5/5] xen, blkback: negotiate of the number of block rings with the frontend Arianna Avanzini
2014-09-12 10:58 ` David Vrabel
2014-10-01 20:23 ` Konrad Rzeszutek Wilk
2014-10-01 20:27 ` [PATCH RFC v2 0/5] Multi-queue support for xen-blkfront and xen-blkback Konrad Rzeszutek Wilk
2015-04-28 7:36 ` Christoph Hellwig
2015-04-28 7:46 ` Arianna Avanzini
2015-05-13 10:29 ` Bob Liu
2015-06-30 14:21 ` [Xen-devel] " Marcus Granado
2015-07-01 0:04 ` Bob Liu [this message]
2015-07-01 3:02 ` Jens Axboe
2015-08-10 11:03 ` Rafal Mielniczuk
2015-08-10 11:14 ` Bob Liu
2015-08-10 15:52 ` Jens Axboe
2015-08-11 6:07 ` Bob Liu
2015-08-11 9:45 ` Rafal Mielniczuk
2015-08-11 17:32 ` Jens Axboe
2015-08-12 10:16 ` Bob Liu
2015-08-12 16:46 ` Rafal Mielniczuk
2015-08-14 8:29 ` Bob Liu
2015-08-14 12:30 ` Rafal Mielniczuk
2015-08-18 9:45 ` Rafal Mielniczuk