From: Stefan Hajnoczi <stefanha@redhat.com>
To: Kevin Wolf <kwolf@redhat.com>
Cc: Stefan Hajnoczi <stefanha@gmail.com>,
qemu-devel@nongnu.org,
Raushaniya Maksudova <rmaksudova@virtuozzo.com>,
"Denis V. Lunev" <den@openvz.org>
Subject: Re: [Qemu-devel] [PATCH 4/5] disk_deadlines: add control of requests time expiration
Date: Mon, 14 Sep 2015 17:53:55 +0100 [thread overview]
Message-ID: <20150914165355.GE15536@stefanha-thinkpad.redhat.com> (raw)
In-Reply-To: <20150910113920.GA4460@noname.redhat.com>
On Thu, Sep 10, 2015 at 01:39:20PM +0200, Kevin Wolf wrote:
> Am 10.09.2015 um 12:27 hat Stefan Hajnoczi geschrieben:
> > On Tue, Sep 08, 2015 at 04:48:24PM +0200, Kevin Wolf wrote:
> > > Am 08.09.2015 um 16:23 hat Denis V. Lunev geschrieben:
> > > > On 09/08/2015 04:05 PM, Kevin Wolf wrote:
> > > > >Am 08.09.2015 um 13:27 hat Denis V. Lunev geschrieben:
> > > > >>interesting point. Yes, it flushes all requests and most likely
> > > > >>hangs inside waiting requests to complete. But fortunately
> > > > >>this happens after the switch to paused state thus
> > > > >>the guest becomes paused. That's why I have missed this
> > > > >>fact.
> > > > >>
> > > > >>This (could) be considered as a problem but I have no (good)
> > > > >>solution at the moment. Should think a bit on.
> > > > >Let me suggest a radically different design. Note that I don't say this
> > > > >is necessarily how things should be done, I'm just trying to introduce
> > > > >some new ideas and broaden the discussion, so that we have a larger set
> > > > >of ideas from which we can pick the right solution(s).
> > > > >
> > > > >The core of my idea would be a new filter block driver 'timeout' that
> > > > >can be added on top of each BDS that could potentially fail, like a
> > > > >raw-posix BDS pointing to a file on NFS. This way most pieces of the
> > > > >solution are nicely modularised and don't touch the block layer core.
> > > > >
> > > > >During normal operation the driver would just be passing through
> > > > >requests to the lower layer. When it detects a timeout, however, it
> > > > >completes the request it received with -ETIMEDOUT. It also completes any
> > > > >new request it receives with -ETIMEDOUT without passing the request on
> > > > >until the request that originally timed out returns. This is our safety
> > > > >measure against anyone seeing whether or how the timed out request
> > > > >modified data.
> > > > >
> > > > >We need to make sure that bdrv_drain() doesn't wait for this request.
> > > > >Possibly we need to introduce a .bdrv_drain callback that replaces the
> > > > >default handling, because bdrv_requests_pending() in the default
> > > > >handling considers bs->file, which would still have the timed out
> > > > >request. We don't want to see this; bdrv_drain_all() should complete
> > > > >even though that request is still pending internally (externally, we
> > > > >returned -ETIMEDOUT, so we can consider it completed). This way the
> > > > >monitor stays responsive and background jobs can go on if they don't use
> > > > >the failing block device.
> > > > >
> > > > >And then we essentially reuse the rerror/werror mechanism that we
> > > > >already have to stop the VM. The device models would be extended to
> > > > >always stop the VM on -ETIMEDOUT, regardless of the error policy. In
> > > > >this state, the VM would even be migratable if you make sure that the
> > > > >pending request can't modify the image on the destination host any more.
> > > > >
> > > > >Do you think this could work, or did I miss something important?
> > > > >
> > > > >Kevin
> > > > could I propose even more radical solution then?
> > > >
> > > > My original approach was based on the fact that
> > > > this could should be maintainable out-of-stream.
> > > > If the patch will be merged - this boundary condition
> > > > could be dropped.
> > > >
> > > > Why not to invent 'terror' field on BdrvOptions
> > > > and process things in core block layer without
> > > > a filter? RB Tree entry will just not created if
> > > > the policy will be set to 'ignore'.
> > >
> > > 'terror' might not be the most fortunate name... ;-)
> > >
> > > The reason why I would prefer a filter driver is so the code and the
> > > associated data structures are cleanly modularised and we can keep the
> > > actual block layer core small and clean. The same is true for some other
> > > functions that I would rather move out of the core into filter drivers
> > > than add new cases (e.g. I/O throttling, backup notifiers, etc.), but
> > > which are a bit harder to actually move because we already have old
> > > interfaces that we can't break (we'll probably do it anyway eventually,
> > > even if it needs a bit more compatibility code).
> > >
> > > However, it seems that you are mostly touching code that is maintained
> > > by Stefan, and Stefan used to be a bit more open to adding functionality
> > > to the core, so my opinion might not be the last word.
> >
> > I've been thinking more about the correctness of this feature:
> >
> > QEMU cannot cancel I/O because there is no Linux userspace API for doing
> > so. Linux AIO's io_cancel(2) syscall is a nop since file systems don't
> > implement a kiocb_cancel_fn. Sending a signal to a task blocked in
> > O_DIRECT preadv(2)/pwritev(2) doesn't work either because the task is in
> > uninterruptible sleep.
> >
> > The only way to make sure a request has finished is to wait for
> > completion. If we treat a request as failed/cancelled but it's actually
> > still pending at a layer of the storage stack:
> > 1. Read requests may modify guest memory.
> > 2. Write requests may modify disk sectors.
> >
> > Today the guest times out and tries to do IDE/ATA recovery, for example.
> > This causes QEMU to eventually call the synchronous bdrv_drain_all()
> > function and the guest hangs. Also, if the guest mounts the file system
> > read-only in response to the timeout, then game over.
> >
> > The disk-deadlines feature lets QEMU detect timeouts before the guest so
> > we can pause the guest. The part I have been thinking about is that the
> > only option is to wait until the request completes.
> >
> > We cannot abandon the timed out request because we'll face #1 or #2
> > above. This means it doesn't make sense to retry the request like
> > rerror=/werror=. rerror=/werror= can retry safely because the original
> > request has failed but that is not the case for timed out requests.
> >
> > This also means that live migration isn't safe, at least if a write
> > request is pending. If the guest migrates, the pending write request on
> > the source host could still complete after live migration handover,
> > corrupting the disk.
> >
> > Getting back to these patches: I think the implementation is correct in
> > that the only policy is to wait for timed out requests to complete and
> > then resume the guest.
> >
> > However, these patches need to violate the constraint that guest memory
> > isn't dirtied when the guest is paused. This is an important constraint
> > for the correctness of live migration, since we need to be able to track
> > all changes to guest memory.
> >
> > Just wanted to post this in case anyone disagrees.
>
> You're making a few good points here.
>
> I thought that migration with a pending write request could be safe with
> some additional knowledge because if you know that the write is hanging
> because the connection to the NFS server is down and you make sure that
> it remains disconnected, that would work. However, the hanging request
> is already in the kernel, so you could never bring the connection up
> again without rebooting the host, which is clearly not a realistic
> assumption.
>
> Never thought of the constraints of live migration either, so it seems
> reads requests are equally problematic.
>
> So it appears that the filter driver would have to add a migration
> blocker whenever it sees any request time out, and only clear it again
> when all pending requests have completed.
Adding new features as filters (like quorum) instead adding them to the
core block layer is a good thing.
Kevin: Can you post an example of the syntax so it's clear what you
mean?
next prev parent reply other threads:[~2015-09-14 16:54 UTC|newest]
Thread overview: 48+ messages / expand[flat|nested] mbox.gz Atom feed top
2015-09-08 8:00 [Qemu-devel] [PATCH RFC 0/5] disk deadlines Denis V. Lunev
2015-09-08 8:00 ` [Qemu-devel] [PATCH 1/5] add QEMU style defines for __sync_add_and_fetch Denis V. Lunev
2015-09-10 8:19 ` Stefan Hajnoczi
2015-09-08 8:00 ` [Qemu-devel] [PATCH 2/5] disk_deadlines: add request to resume Virtual Machine Denis V. Lunev
2015-09-10 8:51 ` Stefan Hajnoczi
2015-09-10 19:18 ` Denis V. Lunev
2015-09-14 16:46 ` Stefan Hajnoczi
2015-09-08 8:00 ` [Qemu-devel] [PATCH 3/5] disk_deadlines: add disk-deadlines option per drive Denis V. Lunev
2015-09-10 9:05 ` Stefan Hajnoczi
2015-09-08 8:00 ` [Qemu-devel] [PATCH 4/5] disk_deadlines: add control of requests time expiration Denis V. Lunev
2015-09-08 9:35 ` Fam Zheng
2015-09-08 9:42 ` Denis V. Lunev
2015-09-08 11:06 ` Kevin Wolf
2015-09-08 11:27 ` Denis V. Lunev
2015-09-08 13:05 ` Kevin Wolf
2015-09-08 14:23 ` Denis V. Lunev
2015-09-08 14:48 ` Kevin Wolf
2015-09-10 10:27 ` Stefan Hajnoczi
2015-09-10 11:39 ` Kevin Wolf
2015-09-14 16:53 ` Stefan Hajnoczi [this message]
2015-09-25 12:34 ` Dr. David Alan Gilbert
2015-09-28 12:42 ` Stefan Hajnoczi
2015-09-28 13:55 ` Dr. David Alan Gilbert
2015-09-08 8:00 ` [Qemu-devel] [PATCH 5/5] disk_deadlines: add info disk-deadlines option Denis V. Lunev
2015-09-08 16:20 ` Eric Blake
2015-09-08 16:26 ` Eric Blake
2015-09-10 18:53 ` Denis V. Lunev
2015-09-10 19:13 ` Denis V. Lunev
2015-09-08 8:58 ` [Qemu-devel] [PATCH RFC 0/5] disk deadlines Vasiliy Tolstov
2015-09-08 9:20 ` Fam Zheng
2015-09-08 10:11 ` Kevin Wolf
2015-09-08 10:13 ` Denis V. Lunev
2015-09-08 10:20 ` Fam Zheng
2015-09-08 10:46 ` Denis V. Lunev
2015-09-08 10:49 ` Kevin Wolf
2015-09-08 13:20 ` Fam Zheng
2015-09-08 9:33 ` Paolo Bonzini
2015-09-08 9:41 ` Denis V. Lunev
2015-09-08 9:43 ` Paolo Bonzini
2015-09-08 10:37 ` Andrey Korolyov
2015-09-08 10:50 ` Denis V. Lunev
2015-09-08 10:07 ` Kevin Wolf
2015-09-08 10:08 ` Denis V. Lunev
2015-09-08 10:22 ` Stefan Hajnoczi
2015-09-08 10:26 ` Paolo Bonzini
2015-09-08 10:36 ` Denis V. Lunev
2015-09-08 19:11 ` John Snow
2015-09-10 19:29 ` [Qemu-devel] Summary: " Denis V. Lunev
Reply instructions:
You may reply publicly to this message via plain-text email
using any one of the following methods:
* Save the following mbox file, import it into your mail client,
and reply-to-all from there: mbox
Avoid top-posting and favor interleaved quoting:
https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
* Reply using the --to, --cc, and --in-reply-to
switches of git-send-email(1):
git send-email \
--in-reply-to=20150914165355.GE15536@stefanha-thinkpad.redhat.com \
--to=stefanha@redhat.com \
--cc=den@openvz.org \
--cc=kwolf@redhat.com \
--cc=qemu-devel@nongnu.org \
--cc=rmaksudova@virtuozzo.com \
--cc=stefanha@gmail.com \
/path/to/YOUR_REPLY
https://kernel.org/pub/software/scm/git/docs/git-send-email.html
* If your mail client supports setting the In-Reply-To header
via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line
before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).