From: "Daniel P. Berrangé" <berrange@redhat.com>
To: "Dr. David Alan Gilbert" <dave@treblig.org>
Cc: "Jiří Denemark" <jdenemar@redhat.com>,
"Peter Xu" <peterx@redhat.com>,
"Juraj Marcin" <jmarcin@redhat.com>,
qemu-devel@nongnu.org, "Fabiano Rosas" <farosas@suse.de>
Subject: Re: [PATCH 4/4] migration: Introduce POSTCOPY_DEVICE state
Date: Wed, 1 Oct 2025 16:10:44 +0100
Message-ID: <aN1EdMGbLxE10NOp@redhat.com>
In-Reply-To: <aN0LFzHSyF7dMtZ1@gallifrey>
On Wed, Oct 01, 2025 at 11:05:59AM +0000, Dr. David Alan Gilbert wrote:
> * Jiří Denemark (jdenemar@redhat.com) wrote:
> > On Tue, Sep 30, 2025 at 16:04:54 -0400, Peter Xu wrote:
> > > On Tue, Sep 30, 2025 at 09:53:31AM +0200, Jiří Denemark wrote:
> > > > On Thu, Sep 25, 2025 at 14:22:06 -0400, Peter Xu wrote:
> > > > > On Thu, Sep 25, 2025 at 01:54:40PM +0200, Jiří Denemark wrote:
> > > > > > On Mon, Sep 15, 2025 at 13:59:15 +0200, Juraj Marcin wrote:
> > > > > So far, dest QEMU will try to resume the VM after getting the RUN
> > > > > command, which is what loadvm_postcopy_handle_run_bh() does, and it
> > > > > will (when autostart=1 is set): (1) first try to activate all block
> > > > > devices, and iff that succeeded, (2) do vm_start(), at the end of
> > > > > which the RESUME event is generated. So RESUME currently implies
> > > > > both disk activation success and that vm_start() worked.
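
To make that ordering concrete, here is a heavily simplified sketch of
the bottom half, with error paths and the non-autostart case trimmed;
migration_block_activate() is the activation helper in recent trees,
but treat this as an approximation rather than the real code:

    #include <stdbool.h>

    /* stand-ins so the sketch is self-contained; the real
     * declarations live elsewhere in QEMU */
    typedef struct Error Error;
    extern bool migration_block_activate(Error **errp);
    extern void vm_start(void);
    extern int autostart;

    static void loadvm_postcopy_handle_run_bh(void *opaque)
    {
        (void)opaque;
        /* (1) activate block devices, i.e. take the image locks;
         * if this fails we must not start the CPUs */
        if (!migration_block_activate(NULL)) {
            return;
        }
        if (autostart) {
            /* (2) only now start the CPUs; vm_start() is what
             * emits the RESUME event at the end */
            vm_start();
        }
    }
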
> > > > >
> > > > > > may still fail when locking disks fails (not sure if this is the only
> > > > > > way cont may fail). In this case we cannot cancel the migration on the
> > > > >
> > > > > Is there any known issue with locking disks where the dest would
> > > > > fail? This really sounds like we should have the admin take a look.
> > > >
> > > > Oh definitely, it would be some kind of a storage access issue on the
> > > > destination. But we'd like to give the admin an option to actually do
> > > > something other than just kill the VM :-) Either by automatically
> > > > canceling the migration or by allowing recovery once the storage
> > > > issues are solved.
> > >
> > > The problem is, if the storage locking stopped working properly, then
> > > how do we guarantee the shared storage itself is working properly?
> > >
> > > When I was replying previously, I was expecting the admin to take a
> > > look and fix the storage; I didn't expect the VM could still be
> > > recovered if there's no confidence that the block devices will work
> > > all fine. The locking errors, to me, may imply block corruption
> > > already, or should I not see it like that?
> >
> > If the storage itself is broken, there's clearly nothing we can do. But
> > the thing is we're accessing it from two distinct hosts. So while it may
> > work on the source, it can be broken on the destination. For example,
> > the connection between the destination host and the storage may be
> > broken. Not sure how often this can happen in real life, but we have a
> > bug report that (artificially) breaking storage access on the
> > destination results in a paused VM on the source which can only be
> > killed.
>
> I've got a vague memory that a tricky case is when some of your storage
> devices are broken on the destination, but not all.
> So you tell the block layer you want to take them on the destination;
> some take their lock, one fails; now what state are you in?
> I'm not sure if the block layer had a way of telling you what state
> you were in when I was last involved in that.
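To make that partial-failure point concrete, the activation step
amounts to roughly the following (a hedged sketch, loosely modelled on
bdrv_activate_all(); activate_one() is a made-up stand-in for the
per-device work, not a real API):

    /* stand-ins so the sketch is self-contained */
    typedef struct BlockDriverState BlockDriverState;
    extern int activate_one(BlockDriverState *bs);

    int activate_all(BlockDriverState **bs, int n)
    {
        for (int i = 0; i < n; i++) {
            /* each successful activation takes that image's lock */
            if (activate_one(bs[i]) < 0) {
                /* devices 0..i-1 now hold their locks, device i
                 * failed; the caller only sees -1 and can't tell
                 * which locks are held without extra bookkeeping */
                return -1;
            }
        }
        return 0;
    }
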
As long as the target QEMU's CPUs have NOT started running, no I/O
writes should have been sent to the storage, so the storage should
still be in a consistent state, and thus we can still try to fail back
to the source QEMU.
The "fun" problem here is that just because we /try/ to fail back
to the source QEMU, does not imply the source QEMU will now succeed
in re-acquiring the locks it just released a short time ago.
Consider the classic dead NFS server problem. The target may have
acquired one lock and failed on another because of a service
interruption on the NFS server. Well, the target can't necessarily
release the lock that it did successfully acquire. So if we fail back
to the source, it'll be unable to reacquire that lock as the target
still holds it.
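
To put the lock mechanics in concrete terms: in the common raw-file
case the image locks are fcntl() byte-range locks (OFD locks where the
kernel supports them), and with a dead NFS server the fcntl() call
itself can hang or fail. A minimal sketch of the reacquire attempt on
the source; note the real code locks designated byte ranges within the
image rather than byte 0:

    #define _GNU_SOURCE
    #include <fcntl.h>

    /* Non-blocking attempt to retake the write lock the target may
     * still hold.  While another process holds it this fails with
     * EAGAIN/EACCES; with a dead NFS server it may instead hang in
     * the kernel or fail with EIO. */
    static int reacquire_image_lock(int fd)
    {
        struct flock fl = {
            .l_type   = F_WRLCK,
            .l_whence = SEEK_SET,
            .l_start  = 0,   /* simplified: real code locks */
            .l_len    = 1,   /* designated byte ranges      */
        };
        return fcntl(fd, F_OFD_SETLK, &fl);
    }
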
This doesn't mean we shouldn't try to fail back, but there will always
be some failure scenarios we'll struggle to recover from. The
"migration paused" state is our last chance, as it leaves both QEMUs
present while the admin tries to fix the underlying problems.
With regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|