From: Anthony Liguori <aliguori@us.ibm.com>
To: Ryan Harper <ryanh@us.ibm.com>
Cc: qemu-devel@nongnu.org, kvm@vger.kernel.org
Subject: Re: [Qemu-devel] Re: [PATCH 0/3] Refactor AIO to allow multiple AIO implementations
Date: Tue, 23 Sep 2008 11:09:34 -0500 [thread overview]
Message-ID: <48D914BE.6040004@us.ibm.com> (raw)
In-Reply-To: <20080923144319.GM31395@us.ibm.com>
Ryan Harper wrote:
> * Anthony Liguori <aliguori@us.ibm.com> [2008-09-22 22:44]:
>
>> Can you run the same performance tests with the following patches (using
>> sync=on instead of cache=off)?
>>
>> You'll need my aio_init fix too. I suspect this will give equally good
>> performance to your patch set. That's not saying your patch set isn't
>> useful, but I would like to get performance to be better for the case
>> that we're going through the page cache.
>>
>
> I can run the test, but it is orthogonal to the patchset which is
> focused on using O_DIRECT and linux-aio.
>
Actually, I'm now much more interested in using the fd_pool patch with
cache=off. Using it with the sync=on patch is interesting, but I'm
curious how close fd_pool + cache=off gets to linux-aio + cache=off.
Supporting linux-aio is going to be a royal pain. I don't know how we
can do a runtime probe of whether we support resfd or not. A build-time
probe is going to be lame because we'll be relying on the glibc
headers. Plus, I'm really, really interested in avoiding the
association of cache=off == better performance.
In theory, the dup() + posix-aio approach should do okay compared to a
custom thread pool. It should have slightly higher latency, but
completion time should be pretty close. That would let us hold off on
supporting a thread pool until we're ready to do zero-copy IO (which is
the only argument for a thread pool versus posix-aio).
Regards,
Anthony Liguori
2008-09-22 23:17 [Qemu-devel] [PATCH 0/3] Refactor AIO to allow multiple AIO implementations Ryan Harper
2008-09-22 23:17 ` [Qemu-devel] [PATCH 1/3] Only call aio flush handler if set Ryan Harper
2008-09-23 2:38 ` [Qemu-devel] " Anthony Liguori
2008-09-23 14:26 ` Ryan Harper
2008-09-23 14:34 ` Anthony Liguori
2008-09-23 14:41 ` Ryan Harper
2008-09-23 14:50 ` Anthony Liguori
2008-09-22 23:17 ` [Qemu-devel] [PATCH 2/3] Move aio implementation out of raw block driver Ryan Harper
2008-09-23 1:16 ` [Qemu-devel] " Ryan Harper
2008-09-23 2:45 ` Anthony Liguori
2008-09-23 14:39 ` Ryan Harper
2008-09-23 14:40 ` Anthony Liguori
2008-09-23 14:53 ` Gerd Hoffmann
2008-09-23 16:06 ` Anthony Liguori
2008-09-23 18:04 ` Gerd Hoffmann
2008-09-23 18:28 ` Anthony Liguori
2008-09-24 22:31 ` Marcelo Tosatti
2008-09-22 23:17 ` [Qemu-devel] " Ryan Harper
2008-09-23 1:22 ` [Qemu-devel] [PATCH 3/3] Add linux aio implementation for raw block devices Ryan Harper
2008-09-23 3:32 ` [Qemu-devel] Re: [PATCH 0/3] Refactor AIO to allow multiple AIO implementations Anthony Liguori
2008-09-23 14:43 ` Ryan Harper
2008-09-23 14:47 ` Anthony Liguori
2008-09-23 16:09 ` Anthony Liguori [this message]
2008-09-23 10:27 ` [Qemu-devel] " Jamie Lokier
2008-10-02 22:41 ` [Qemu-devel] " john cooper
2008-10-03 13:33 ` Ryan Harper