From: Anthony Liguori <aliguori@us.ibm.com>
To: Ryan Harper <ryanh@us.ibm.com>
Cc: qemu-devel@nongnu.org, kvm@vger.kernel.org
Subject: Re: [Qemu-devel] [5323] Implement an fd pool to get real AIO with posix-aio
Date: Fri, 26 Sep 2008 13:35:19 -0500
Message-ID: <48DD2B67.407@us.ibm.com>
In-Reply-To: <20080926175927.GZ31395@us.ibm.com>
Ryan Harper wrote:
> * Anthony Liguori <anthony@codemonkey.ws> [2008-09-26 11:03]:
>
>> Revision: 5323
>> http://svn.sv.gnu.org/viewvc/?view=rev&root=qemu&revision=5323
>> Author: aliguori
>> Date: 2008-09-26 15:59:29 +0000 (Fri, 26 Sep 2008)
>>
>> Log Message:
>> -----------
>> Implement an fd pool to get real AIO with posix-aio
>>
>> This patch implements a simple fd pool to allow many AIO requests with
>> posix-aio. The result is significantly improved performance (identical to that
>> reported for linux-aio) for both cache=on and cache=off.
>>
>> The fundamental problem with posix-aio is that it limits itself to one thread
>> per file descriptor. I don't know why this is, but this patch provides a simple
>> mechanism to work around this (duplicating the file descriptor).
>>
>> This isn't a great solution, but it seems like a reasonable intermediate step
>> between posix-aio and a custom thread-pool to replace it.
>>
>> Ryan Harper will be posting some performance analysis he did comparing posix-aio
>> with fd pooling against linux-aio. The size of the posix-aio thread pool and
>> the fd pool were largely determined by him based on this analysis.
>>
>
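For anyone curious, the fd-pool trick boils down to something like the sketch
below. This is purely illustrative -- the names, the pool size, and the
round-robin policy are simplifications, not the actual posix-aio-compat code --
but it shows the core idea: glibc's POSIX AIO serializes requests per
descriptor, so duplicating the descriptor lets more worker threads run in
parallel. The fd_pool[16] and fd_pool[64] rows in the tables below refer to
the size of this pool.

/* Illustrative sketch of an fd pool for POSIX AIO (not the QEMU code):
 * duplicate one descriptor N times and round-robin submissions across the
 * duplicates so glibc can service them with more than one worker thread. */
#include <aio.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

#define FD_POOL_SIZE 16          /* assumed size; 16 and 64 are tested below */

static int fd_pool[FD_POOL_SIZE];
static int next_fd;

static int fd_pool_init(const char *path)
{
    int fd = open(path, O_RDWR); /* real code would add O_DIRECT for cache=off */
    if (fd < 0)
        return -1;
    fd_pool[0] = fd;
    for (int i = 1; i < FD_POOL_SIZE; i++)
        fd_pool[i] = dup(fd);    /* same open file, distinct descriptor */
    return 0;
}

/* Submit one write; each call uses the next descriptor in the pool. */
static int fd_pool_pwrite(struct aiocb *cb, const void *buf,
                          size_t len, off_t offset)
{
    memset(cb, 0, sizeof(*cb));
    cb->aio_fildes = fd_pool[next_fd];
    next_fd = (next_fd + 1) % FD_POOL_SIZE;
    cb->aio_buf = (void *)buf;
    cb->aio_nbytes = len;
    cb->aio_offset = offset;
    return aio_write(cb);
}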
> I'll have some more data to post in a bit, but for now: bumping the fd
> pool up to 64 and ensuring we init aio to support a thread per fd, we
> mostly match linux-aio performance with a simpler implementation. For
> random writes, fd_pool lags a bit, but I've got other data showing that
> in most scenarios fd_pool matches linux-aio performance and does so with
> less CPU consumption.
>
> Results:
>
> 16k randwrite 1 thread, 74 iodepth | MB/s | avg sub lat (us) | avg comp lat (ms)
> -----------------------------------+------+------------------+------------------
> baremetal (O_DIRECT, aka cache=off)| 61.2 | 13.07 | 19.59
> kvm: cache=off posix-aio w/o patch | 4.7 | 3467.44 | 254.08
>
So with posix-aio, once we have many outstanding requests, each new
submission ends up blocking until an earlier request completes. I don't
fully understand why the average completion latency is so high, because in
theory there should be almost no delay between a submission returning and
its completion being reported. Maybe it's because we spend so much time
blocking during submission that the io-thread doesn't get a chance to run.
I bet if we dropped the qemu_mutex during submission, the completion
latency would drop to a very small number. Probably not worth actually
testing, though.
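Just to spell out what I mean, the change would be along these lines. This
is a speculative sketch, not something I've written or tested, and the lock
helpers here merely stand in for however the global mutex is actually taken
around this path:

/* Speculative sketch only: release the global mutex while the (possibly
 * blocking) submission runs, so the io-thread can keep reaping completions.
 * qemu_mutex_unlock_iothread()/qemu_mutex_lock_iothread() stand in for
 * whatever really guards the global state here. */
static int submit_dropping_global_lock(struct aiocb *cb)
{
    int ret;

    qemu_mutex_unlock_iothread();   /* let the io-thread make progress */
    ret = aio_write(cb);            /* may block for a long time       */
    qemu_mutex_lock_iothread();

    return ret;
}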
> kvm: cache=off linux-aio | 61.1 | 75.35 | 19.57
>
The fact that the submission latency is so high confirms what I've been
saying about linux-aio submission being far from optimal. 75 us per
submission is really quite high.
> kvm: cache=on posix-aio w/o patch |127.0 | 115.78 | 9.19
> kvm: cache=on posix-aio w/ patch |126.0 | 67.35 | 9.30
>
It looks like 127 MB/s is pretty close to the optimal cached write
throughput. When using caching, writes can complete almost immediately, so
it's not surprising that submission latency is so low (even though we block
during submission).
I am surprised that the latency w/ patch is still so high. I think that
suggests requests are queuing up. I bet increasing the aio_num field would
reduce this number.
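For reference, glibc exposes those knobs through aio_init() (a GNU
extension). Something like the following is what I have in mind; the values
are purely illustrative, not a recommendation:

/* Tune glibc's POSIX AIO pool before the first request is submitted.
 * The numbers below are illustrative only. */
#define _GNU_SOURCE
#include <aio.h>
#include <string.h>

static void tune_posix_aio(void)
{
    struct aioinit ai;

    memset(&ai, 0, sizeof(ai));
    ai.aio_threads = 64;   /* worker threads, e.g. one per pooled fd    */
    ai.aio_num     = 64;   /* expected number of simultaneous requests  */
    aio_init(&ai);
}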
> ------------ new results ----------+------+------------------+------------------
> kvm:cache=off posix-aio fd_pool[16]| 33.5 | 14.28 | 49.19
> kvm:cache=off posix-aio fd_pool[64]| 51.1 | 14.86 | 23.66
>
I assume you tried to bump from 64 to something higher and couldn't make
up the lost bandwidth?
> 16k write 1 thread, 74 iodepth | MB/s | avg sub lat (us) | avg comp lat (ms)
> -----------------------------------+------+------------------+------------------
> baremetal (O_DIRECT, aka cache=off)|128.1 | 10.90 | 9.45
> kvm: cache=off posix-aio w/o patch | 5.1 | 3152.00 | 231.06
> kvm: cache=off linux-aio |130.0 | 83.83 | 8.99
> kvm: cache=on posix-aio w/o patch |184.0 | 80.46 | 6.35
> kvm: cache=on posix-aio w/ patch |165.0 | 70.90 | 7.09
> ------------ new results ----------+------+------------------+------------------
> kvm:cache=off posix-aio fd_pool[16]| 78.2 | 58.24 | 15.43
> kvm:cache=off posix-aio fd_pool[64]|129.0 | 71.62 | 9.11
>
That's a nice result. We could probably improve the latency by tweaking
the queue sizes.
Very nice work! Thanks for doing the thorough analysis.
Regards,
Anthony Liguori