From: Jeff Layton <jlayton@poochiereds.net>
To: NeilBrown <neilb@suse.com>, Kevin Wolf <kwolf@redhat.com>,
Theodore Ts'o <tytso@mit.edu>
Cc: Rik van Riel <riel@redhat.com>,
Christoph Hellwig <hch@infradead.org>,
linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
lsf-pc@lists.linux-foundation.org,
Ric Wheeler <rwheeler@redhat.com>
Subject: Re: [Lsf-pc] [LSF/MM TOPIC] I/O error handling and fsync()
Date: Sun, 22 Jan 2017 18:31:57 -0500
Message-ID: <1485127917.5321.1.camel@poochiereds.net>
In-Reply-To: <87o9yyemud.fsf@notabene.neil.brown.name>
On Mon, 2017-01-23 at 09:44 +1100, NeilBrown wrote:
> On Mon, Jan 16 2017, Jeff Layton wrote:
>
> > On Sat, 2017-01-14 at 09:28 +1100, NeilBrown wrote:
> > > On Sat, Jan 14 2017, Kevin Wolf wrote:
> > >
> > > > Am 13.01.2017 um 15:21 hat Theodore Ts'o geschrieben:
> > > > > On Fri, Jan 13, 2017 at 12:09:59PM +0100, Kevin Wolf wrote:
> > > > > > Now even if at the moment there were no storage backend where a write
> > > > > > failure can be temporary (which I find hard to believe, but who knows),
> > > > > > a single new driver is enough to expose the problem. Are you confident
> > > > > > enough that no single driver will ever behave this way to make data
> > > > > > integrity depend on the assumption?
> > > > >
> > > > > This is really a philosophical question. It very much simplifies
> > > > > things if we can make the assumption that a driver that *does* behave
> > > > > this way is **broken**. If the I/O error is temporary, then the
> > > > > driver should simply not complete the write, and wait.
> > > >
> > > > If we are sure that (at least we make it so that) every error is
> > > > permanent, then yes, this simplifies things a bit because it saves you
> > > > the retries that we know wouldn't succeed anyway.
> > > >
> > > > In that case, what's possibly left is modifying fsync() so that it
> > > > consistently returns an error; or if not, we need to promise this
> > > > behaviour to userspace so that on the first fsync() failure it can give
> > > > up on the file without doing less for the user than it could do.
> > >
> > > I think we can (and implicitly do) make that promise: if you get EIO
> > > from fsync, then there is no credible recovery action you can try.
> > >
> > > >
> > > > > If it fails, it should only be because it has timed out on waiting and
> > > > > has assumed that the problem is permanent.
> > > >
> > > > If a manual action is required to restore the functionality, how can you
> > > > use a timeout for determining whether a problem is permanent or not?
> > >
> > > If manual action is required, and can reasonably be expected, then the
> > > device driver should block indefinitely.
> > > As an example, the IBM s390 systems have a "dasd" storage driver, which
> > > I think is a fiber-attached storage array. If the connection to the
> > > array stops working (and no more paths are available), it will (by
> > > default) block indefinitely. I presume it logs the problem and the
> > > sysadmin can find out and fix things - or if "things" are unfixable,
> > > they can change the configuration to report an error.
> > >
> > > Similarly the DM multipath module has an option "queue_if_no_path" (aka
> > > "no_path_retry") which means that if no working paths are found, the
> > > request should be queued and retried (no error reported).
> > >
> > > If manual action is an option, then the driver must be configured to wait for
> > > manual action.
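
(For anyone following along, that dm-multipath behaviour is purely a matter of
configuration. The option names below are real, but this fragment is only
illustrative -- check multipath.conf(5) and your distribution's defaults before
copying it:

defaults {
	# Queue I/O indefinitely instead of failing it back with an error
	# when no usable path remains; equivalent to the "queue_if_no_path"
	# feature that Neil mentions.
	no_path_retry	queue
}
)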
> > >
> > > >
> > > > This is exactly the kind of errors from which we want to recover in
> > > > qemu instead of killing the VMs. Assuming that errors are permanent when
> > > > they aren't, but just require some action before they can succeed, is
> > > > not a solution to the problem, but it's pretty much the description of
> > > > the problem that we had before we implemented the retry logic.
> > > >
> > > > So if you say that all errors are permanent, fine; but if some of them
> > > > are actually temporary, we're back to square one.
> > > >
> > > > > Otherwise, every single application is going to have to learn how to
> > > > > deal with temporary errors, and everything that implies (throwing up
> > > > > dialog boxes to the user, who may not be able to do anything
> > > >
> > > > Yes, that's obviously not a realistic option.
> > > >
> > > > > --- this is why in the dm-thin case, if you think it should be
> > > > > temporary, dm-thin should be calling out to a user space program that
> > > > > pages a system administrator; why do you think the process or the
> > > > > user who started the process can do anything about it?)
> > > >
> > > > In the case of qemu, we can't do anything about it in terms of making
> > > > the request work, but we can do something useful with the information:
> > > > We limit the damage done, by pausing the VM and preventing it from
> > > > seeing a broken hard disk from which it wouldn't recover without a
> > > > reboot. So in our case, both the system administrator and the process
> > > > want to be informed.
> > >
> > > In theory, using aio_fsync() should allow the process to determine if
> > > any writes are blocking indefinitely. I have a suspicion that
> > > aio_fsync() is not actually asynchronous, but that might be old
> > > information.
> > > Alternately a child process could call "fsync" and report when it completed.
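
To make that concrete, here's a rough sketch of the polling approach. The
helper name, the one-second polling loop and the timeout are arbitrary choices
of mine, and glibc implements POSIX AIO with helper threads, so how
"asynchronous" this really is depends on the implementation:

/* build with -lrt; minimal sketch only */
#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/*
 * Issue an asynchronous fsync and poll for completion.  Returns 0 on
 * success, -errno on failure, or -ETIMEDOUT if the sync has not finished
 * after timeout_secs (i.e. writeback looks stalled).
 */
static int fsync_with_timeout(int fd, int timeout_secs)
{
	/* static so the aiocb stays valid if we give up while the fsync is
	 * still in flight; this sketch allows one outstanding fsync only */
	static struct aiocb cb;
	int i, err;

	memset(&cb, 0, sizeof(cb));
	cb.aio_fildes = fd;

	if (aio_fsync(O_SYNC, &cb) < 0)
		return -errno;

	for (i = 0; i < timeout_secs; i++) {
		err = aio_error(&cb);
		if (err != EINPROGRESS)
			return err ? -err : (int)aio_return(&cb);
		sleep(1);
	}
	return -ETIMEDOUT;
}

On timeout the caller would then treat the file as "stalled" and decide what
to do (pause the guest, in the qemu case) rather than blocking forever.
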
> > >
> > > >
> > > > A timeout could serve as a trigger for qemu, but we could possibly do
> > > > better for things like the dm-thin case where we know immediately that
> > > > we'll have to wait for manual action.
> > >
> > > A consistent way for devices to be able to report "operator
> > > intervention required" would certainly be useful. I'm not sure how easy
> > > it would be for a particular application to determine if such a report
> > > was relevant for any of its IO though.
> > >
> > > It might not be too hard to add a flag to "sync_file_range()" to ask it to
> > > report the status of queues, e.g.:
> > > 0 - nothing queued, no data to sync
> > > 1 - writes are being queued, and progress appears to be normal
> > > 2 - queue appears to be stalled
> > > 3 - queue reports that admin intervention is required.
> > >
> > > The last one would require a fair bit of plumbing to get the information
> > > to the right place. The others are probably fairly easy if they can be
> > > defined properly.
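
Just to show how an application might consume something like that -- to be
clear, the flag and the status values below are entirely made up to match the
proposal above; nothing like them exists in any kernel today:

#define _GNU_SOURCE
#include <fcntl.h>

/* Hypothetical flag and status values -- NOT a real kernel interface,
 * just a mock-up of the proposed sync_file_range() extension. */
#define SYNC_FILE_RANGE_REPORT_STATUS	0x8

enum wb_status {
	WB_IDLE		= 0,	/* nothing queued, no data to sync */
	WB_PROGRESSING	= 1,	/* writes queued, progress looks normal */
	WB_STALLED	= 2,	/* queue appears to be stalled */
	WB_NEEDS_ADMIN	= 3,	/* operator intervention required */
};

static int writeback_status(int fd)
{
	/* would return one of the wb_status values, or -1 with errno set */
	return sync_file_range(fd, 0, 0, SYNC_FILE_RANGE_REPORT_STATUS);
}

Something like qemu could poll that before deciding whether to pause a guest.
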
> > > If you look in /sys/kernel/debug/bdi/*/stats you will see statistics for
> > > each bdi (backing device info); these roughly correspond to filesystems.
> > > You can easily map from a file descriptor to a bdi.
> > > The "BdiWriteBandwidth" will (presumably) drop if there is data to be
> > > written which cannot get out. Monitoring these stats might give an
> > > application a useful understanding about what is happening in a particular
> > > storage device.
> > > I don't suggest that qemu should access this file, because it is a
> > > 'debugfs' file and not part of the api. But the information is there
> > > and might be useful. If you can show that it is directly useful to an
> > > application in some way, that would be a useful step towards making the
> > > information more directly available in an api-stable way.
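
That is easy enough to experiment with from userspace. Here's a rough sketch
of the kind of probe I have in mind -- the fd-to-bdi mapping via st_dev only
works for simple block-device-backed filesystems, and since this is debugfs
neither the path nor the field name is stable ABI, so illustration only:

#include <stdio.h>
#include <sys/stat.h>
#include <sys/sysmacros.h>

/* Returns the bdi's current write bandwidth in kB/s, or -1 on error. */
static long bdi_write_bandwidth(int fd)
{
	struct stat st;
	char path[128], line[128];
	long kbps = -1;
	FILE *f;

	if (fstat(fd, &st) < 0)
		return -1;

	/* bdi directories are named <major>:<minor> of the backing device */
	snprintf(path, sizeof(path), "/sys/kernel/debug/bdi/%u:%u/stats",
		 major(st.st_dev), minor(st.st_dev));

	f = fopen(path, "r");
	if (!f)
		return -1;

	while (fgets(line, sizeof(line), f))
		if (sscanf(line, "BdiWriteBandwidth: %ld", &kbps) == 1)
			break;

	fclose(f);
	return kbps;
}

A value that collapses while the application knows it has writes queued would
be roughly the "stalled" signal Neil describes; turning that into a stable
interface is the real work, of course.
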
> > >
> >
> > I think my main takeaway from reading this discussion is that the
> > write/fsync model as a whole is really unsuitable for this (common) use
> > case. Given that we're already discussing using Linux-specific
> > interfaces (sync_file_range, for instance), maybe we should turn this
> > topic around:
> >
> > What would an ideal kernel<->userland interface for this use case look
> > like?
>
>
> "(common) use case" ??
> I understand the use case to be "application needs to know when output
> queue is congested so that it can pause gracefully instead of hanging at
> some arbitrary moment".
> Is that the same use case that you see?
> Is it really "common"?
>
> (It may still be important, even if it isn't common).
>
> I support the idea of stepping back and asking the big picture question!
>
> NeilBrown
Ahh, sorry if I wasn't clear.
I know Kevin posed this topic in the context of QEMU/KVM, and I figure
that running virt guests (themselves doing all sorts of workloads) is a
pretty common setup these days. That was what I meant by "use case"
here. Obviously there are many other workloads that could benefit from
(or be harmed by) changes in this area.
Still, I think that looking at QEMU/KVM as an "application" and
considering what we can do to help optimize that case could be useful
here (and might benefit other workloads as well).
--
Jeff Layton <jlayton@poochiereds.net>