From: Julien Grall <julien.grall@linaro.org>
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: xen-devel@lists.xenproject.org,
Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH] migration, xen: Fix block image lock issue on live migration
Date: Wed, 29 Nov 2017 12:28:39 +0000
Message-ID: <bf0e06f5-13b4-cbef-4a00-f1f61a3c262e@linaro.org>
In-Reply-To: <20171127150042.GA2004@perard.uk.xensource.com>
+ Stefano
On 11/27/2017 03:00 PM, Anthony PERARD wrote:
> Hi Julien,
Hi Anthony,
>
> Can I get a release-ack for this patch?
>
> This fixes local live migration of HVM guests when the disk backend is
> qdisk. osstest doesn't report a regression because its kernel or
> glibc is just a bit too old.
When does that regression happen? I am considering releasing Xen 4.10
soon and would need more details to decide whether to include this patch.
Cheers,
>
> Thanks,
>
>
> On Wed, Nov 22, 2017 at 09:45:03AM +0100, Juan Quintela wrote:
>> From: Anthony PERARD <anthony.perard@citrix.com>
>>
>> When doing a live migration of a Xen guest with libxl, the images for
>> block devices are locked by the original QEMU process; this prevents
>> the QEMU at the destination from taking the lock, so the migration fails.
>>
>> From QEMU's point of view, once the RAM of a domain is migrated, there are
>> two QMP commands, "stop" then "xen-save-devices-state", at which point a
>> new QEMU is spawned at the destination.
>>
>> Release the locks in "xen-save-devices-state" so the destination can take
>> them, if it's a live migration.
>>
>> This patch adds the "live" parameter to "xen-save-devices-state", which
>> defaults to true so that older versions of libxenlight can work with newer
>> versions of QEMU.
>>
>> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
>> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
>> Reviewed-by: Juan Quintela <quintela@redhat.com>
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>> ---
>>  migration/savevm.c  | 23 ++++++++++++++++++++++-
>>  qapi/migration.json |  6 +++++-
>>  2 files changed, 27 insertions(+), 2 deletions(-)
>>
>> diff --git a/migration/savevm.c b/migration/savevm.c
>> index 192f2d82cd..b7908f62be 100644
>> --- a/migration/savevm.c
>> +++ b/migration/savevm.c
>> @@ -2242,13 +2242,20 @@ int save_snapshot(const char *name, Error **errp)
>>      return ret;
>>  }
>>
>> -void qmp_xen_save_devices_state(const char *filename, Error **errp)
>> +void qmp_xen_save_devices_state(const char *filename, bool has_live, bool live,
>> +                                Error **errp)
>>  {
>>      QEMUFile *f;
>>      QIOChannelFile *ioc;
>>      int saved_vm_running;
>>      int ret;
>>
>> +    if (!has_live) {
>> +        /* live defaults to true so an old Xen toolstack can have a
>> +         * successful live migration */
>> +        live = true;
>> +    }
>> +
>>      saved_vm_running = runstate_is_running();
>>      vm_stop(RUN_STATE_SAVE_VM);
>>      global_state_store_running();
>> @@ -2263,6 +2270,20 @@ void qmp_xen_save_devices_state(const char *filename, Error **errp)
>>      qemu_fclose(f);
>>      if (ret < 0) {
>>          error_setg(errp, QERR_IO_ERROR);
>> +    } else {
>> +        /* libxl calls the QMP command "stop" before calling
>> +         * "xen-save-devices-state", and in case of migration failure, libxl
>> +         * would call "cont".
>> +         * So call bdrv_inactivate_all() (release locks) here to let the other
>> +         * side of the migration take control of the images.
>> +         */
>> +        if (live && !saved_vm_running) {
>> +            ret = bdrv_inactivate_all();
>> +            if (ret) {
>> +                error_setg(errp, "%s: bdrv_inactivate_all() failed (%d)",
>> +                           __func__, ret);
>> +            }
>> +        }
>>      }
>>
>>   the_end:
>> diff --git a/qapi/migration.json b/qapi/migration.json
>> index bbc4671ded..03f57c9616 100644
>> --- a/qapi/migration.json
>> +++ b/qapi/migration.json
>> @@ -1075,6 +1075,9 @@
>>  # data. See xen-save-devices-state.txt for a description of the binary
>>  # format.
>>  #
>> +# @live: Optional argument to ask QEMU to treat this command as part of a live
>> +#        migration. Defaults to true. (since 2.11)
>> +#
>>  # Returns: Nothing on success
>>  #
>>  # Since: 1.1
>> @@ -1086,7 +1089,8 @@
>>  # <- { "return": {} }
>>  #
>>  ##
>> -{ 'command': 'xen-save-devices-state', 'data': {'filename': 'str'} }
>> +{ 'command': 'xen-save-devices-state',
>> +  'data': {'filename': 'str', '*live': 'bool' } }
>>
>>  ##
>>  # @xen-set-replication:
>> --
>> 2.13.6
>>
>
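
For illustration, the flow described in the commit message would look roughly
like this in QMP on the source side (the filename below is made up; "live" may
be omitted, in which case it now defaults to true):

  -> { "execute": "stop" }
  <- { "return": {} }
  -> { "execute": "xen-save-devices-state",
       "arguments": { "filename": "/tmp/xen-qemu-state", "live": true } }
  <- { "return": {} }

With the patch, the image locks are released only when "live" is set and the
guest was already stopped (saved_vm_running is false), matching this flow; if
the migration then fails, libxl issues "cont" to resume the guest.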
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel