From: Jamie Lokier <jamie@shareable.org>
To: Anton Altaparmakov <aia21@cam.ac.uk>
Cc: Matthew Wilcox <matthew@wil.cx>,
Benjamin LaHaise <bcrl@kvack.org>, Christoph Hellwig <hch@lst.de>,
akpm@osdl.org, linux-fsdevel@vger.kernel.org
Subject: Re: [PATCH] add support for vectored and async I/O to all simple filesystems
Date: Wed, 2 Nov 2005 23:36:59 +0000
Message-ID: <20051102233659.GB20756@mail.shareable.org>
In-Reply-To: <Pine.LNX.4.64.0511022048230.24959@hermes-1.csi.cam.ac.uk>
Anton Altaparmakov wrote:
> Yes, of course aio can block and in fact will block arbitrarily for
> arbitrary lengths of time. At least at present the implementations of
> ->aio_read and ->aio_write in the file systems will block left right and
> center.
>
> [examples...]
That's a shame. I was hoping it would offer properties similar to
non-blocking I/O on sockets: something that indicates when a resource
is unavailable, but lets the application carry on with other work
without needing parallel threads for that.
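To illustrate the property I mean, here is a minimal userspace sketch
(my own illustration, nothing from the patch): the fd is assumed to
already have O_NONBLOCK set, so when no data is ready read() fails
with EAGAIN instead of sleeping, and the caller simply carries on.

#include <errno.h>
#include <stddef.h>
#include <unistd.h>

/* Returns bytes read, 0 at EOF, -EAGAIN if nothing is ready yet,
 * or -errno for a real error.  Never sleeps. */
static long try_read(int fd, void *buf, size_t len)
{
        ssize_t n = read(fd, buf, len);

        if (n >= 0)
                return n;
        if (errno == EAGAIN || errno == EWOULDBLOCK)
                return -EAGAIN;   /* caller goes off and does other work */
        return -errno;
}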
> The only way you can _really_ have guaranteed async io is to queue the io
> to a kernel thread work queue and return immediately to the caller. The
> only thing you will then block on potentially is allocating memory for the
> "queue entry item" and on waiting for the lock to the "queue" so it is
> safe to write to it.
Since threads with well-defined blocking points can be mechanically
transformed into state machines, it is possible to guarantee async I/O
without needing extra threads. I had wondered whether the Linux AIO
implementation did something like that.
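A toy example of the transformation I mean, with entirely made-up
names, just to show the shape: a loop with one blocking point becomes
a struct holding the loop's locals plus a step function that the
completion handler re-invokes.

/* Blocking form:
 *
 *      for (i = 0; i < nblocks; i++) {
 *              submit_block(i);
 *              wait_for_block();       <-- the blocking point
 *      }
 *
 * Mechanically transformed: the locals live in a struct, and the loop
 * body is split at the blocking point into a step function which the
 * completion path calls again. */

struct copy_state {
        int     i;
        int     nblocks;
        void    (*done)(struct copy_state *st);   /* final callback */
};

static void copy_step(struct copy_state *st);

/* Called from the I/O completion path (could be interrupt context). */
static void copy_complete(struct copy_state *st)
{
        st->i++;
        copy_step(st);                  /* "resume" after the blocking point */
}

/* Stand-in for whatever really starts the asynchronous block I/O;
 * here it completes immediately so the sketch is self-contained. */
static void submit_block(int blocknr, struct copy_state *st)
{
        (void)blocknr;
        copy_complete(st);
}

static void copy_step(struct copy_state *st)
{
        if (st->i >= st->nblocks) {
                st->done(st);           /* whole loop finished */
                return;
        }
        submit_block(st->i, st);
}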
However, converting all filesystem code from blocking style to state
machines by hand would be far too complex, and it is unnecessary.
I've thought about possible AIO implementations many times, and I
always return to the same idea: AIOs are state machines, driven
synchronously and from interrupts, up to the point where a filesystem
hits a complex blocking point (which shouldn't happen for the common
read/write cases, but may happen for the uncommon ones); at that point
the AIO is converted into work handled by a worker thread spawned on
demand from a pool.
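Very roughly, and with invented names (none of this is a real kernel
interface), the shape I keep coming back to looks like this:

enum aio_step {
        AIO_DONE,               /* request complete */
        AIO_PENDING,            /* an interrupt will drive the next step */
        AIO_MUST_SLEEP,         /* hit a complex blocking point */
};

struct aio_req;

/* Hypothetical hooks, for illustration only. */
enum aio_step aio_run_state_machine(struct aio_req *req);
void aio_complete(struct aio_req *req);
void aio_punt_to_worker_pool(struct aio_req *req);

/* Both the submitter and the I/O completion interrupt funnel in here. */
static void aio_advance(struct aio_req *req)
{
        switch (aio_run_state_machine(req)) {
        case AIO_DONE:
                aio_complete(req);
                break;
        case AIO_PENDING:
                /* the next completion interrupt calls aio_advance() again */
                break;
        case AIO_MUST_SLEEP:
                /* rare path: hand the request to a worker thread from
                 * the pool, which is allowed to block on our behalf */
                aio_punt_to_worker_pool(req);
                break;
        }
}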
In other words, there are plenty of ways to guarantee properly async
I/O. I'm surprised Linux AIO isn't properly async - that seems to
defeat the whole point of AIO.
-- Jamie
Thread overview: 15+ messages
2005-11-01 2:36 [PATCH] add support for vectored and async I/O to all simple filesystems Christoph Hellwig
2005-11-01 10:28 ` Miklos Szeredi
2005-11-01 15:27 ` Christoph Hellwig
2005-11-01 17:19 ` Miklos Szeredi
2005-11-07 5:00 ` Christoph Hellwig
2005-11-01 19:20 ` Jamie Lokier
2005-11-01 20:57 ` Benjamin LaHaise
2005-11-02 11:06 ` Jamie Lokier
2005-11-02 16:21 ` Benjamin LaHaise
2005-11-02 16:29 ` Matthew Wilcox
2005-11-02 16:45 ` Benjamin LaHaise
2005-11-02 20:31 ` Jamie Lokier
2005-11-02 21:04 ` Anton Altaparmakov
2005-11-02 23:36 ` Jamie Lokier [this message]
2005-11-05 0:18 ` Christoph Hellwig