* [Qemu-devel] [PATCH for-3.1 0/2] block: Fix crash on migration with explicit child nodes
  From: Kevin Wolf @ 2018-11-26 11:28 UTC
  To: qemu-block; Cc: kwolf, mreitz, qemu-devel

Kevin Wolf (2):
  block: Don't inactivate children before parents
  iotests: Test migration with -blockdev

 block.c                    |  22 ++++++++
 tests/qemu-iotests/234     | 105 +++++++++++++++++++++++++++++++++++++
 tests/qemu-iotests/234.out |  27 ++++++++++
 tests/qemu-iotests/group   |   1 +
 4 files changed, 155 insertions(+)
 create mode 100755 tests/qemu-iotests/234
 create mode 100644 tests/qemu-iotests/234.out

--
2.19.1

* [Qemu-devel] [PATCH for-3.1 1/2] block: Don't inactivate children before parents
  From: Kevin Wolf @ 2018-11-26 11:28 UTC
  To: qemu-block; Cc: kwolf, mreitz, qemu-devel

bdrv_child_cb_inactivate() asserts that parents are already inactive
when children get inactivated. This precondition is necessary because
parents could still issue requests in their inactivation code.

When block nodes are created individually with -blockdev, all of them
are monitor owned and will be returned by bdrv_next() in an undefined
order (in practice, in the order of their creation, which is usually
children before parents), which obviously fails the assertion.

This patch fixes the ordering by skipping nodes with still active
parents in bdrv_inactivate_recurse() because we know that they will be
covered by recursion when the last active parent becomes inactive.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 block.c | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/block.c b/block.c
index 5ba3435f8f..0569275e31 100644
--- a/block.c
+++ b/block.c
@@ -4612,6 +4612,22 @@ void bdrv_invalidate_cache_all(Error **errp)
     }
 }

+static bool bdrv_has_active_bds_parent(BlockDriverState *bs)
+{
+    BdrvChild *parent;
+
+    QLIST_FOREACH(parent, &bs->parents, next_parent) {
+        if (parent->role->parent_is_bds) {
+            BlockDriverState *parent_bs = parent->opaque;
+            if (!(parent_bs->open_flags & BDRV_O_INACTIVE)) {
+                return true;
+            }
+        }
+    }
+
+    return false;
+}
+
 static int bdrv_inactivate_recurse(BlockDriverState *bs,
                                    bool setting_flag)
 {
@@ -4622,6 +4638,12 @@ static int bdrv_inactivate_recurse(BlockDriverState *bs,
         return -ENOMEDIUM;
     }

+    /* Make sure that we don't inactivate a child before its parent.
+     * It will be covered by recursion from the yet active parent. */
+    if (bdrv_has_active_bds_parent(bs)) {
+        return 0;
+    }
+
     if (!setting_flag && bs->drv->bdrv_inactivate) {
         ret = bs->drv->bdrv_inactivate(bs);
         if (ret < 0) {

--
2.19.1

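For context: the caller that produces this ordering is
bdrv_inactivate_all(), which walks the roots and all monitor-owned
nodes with bdrv_next() in two passes. The following is a paraphrased
sketch of that caller, not code from this series; the names follow
block.c of this era, but the details (AioContext locking, error paths)
are simplified and may differ from the actual tree:

    int bdrv_inactivate_all(void)
    {
        BlockDriverState *bs;
        BdrvNextIterator it;
        int pass, ret = 0;

        /* Pass 0 runs the drivers' .bdrv_inactivate() callbacks, pass 1
         * sets BDRV_O_INACTIVE. bdrv_next() returns monitor-owned nodes
         * in creation order, i.e. usually children before parents,
         * which is what trips the assertion described above. */
        for (pass = 0; pass < 2; pass++) {
            for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
                ret = bdrv_inactivate_recurse(bs, pass == 1);
                if (ret < 0) {
                    return ret;
                }
            }
        }
        return ret;
    }
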
* Re: [Qemu-devel] [PATCH for-3.1 1/2] block: Don't inactivate children before parents
  From: Max Reitz @ 2018-11-26 12:05 UTC
  To: Kevin Wolf, qemu-block; Cc: qemu-devel

On 26.11.18 12:28, Kevin Wolf wrote:
> bdrv_child_cb_inactivate() asserts that parents are already inactive
> when children get inactivated. This precondition is necessary because
> parents could still issue requests in their inactivation code.
>
> When block nodes are created individually with -blockdev, all of them
> are monitor owned and will be returned by bdrv_next() in an undefined
> order (in practice, in the order of their creation, which is usually
> children before parents), which obviously fails the assertion.
>
> This patch fixes the ordering by skipping nodes with still active
> parents in bdrv_inactivate_recurse() because we know that they will be
> covered by recursion when the last active parent becomes inactive.
>
> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
> ---
>  block.c | 22 ++++++++++++++++++++++
>  1 file changed, 22 insertions(+)
>
> diff --git a/block.c b/block.c
> index 5ba3435f8f..0569275e31 100644
> --- a/block.c
> +++ b/block.c
> @@ -4612,6 +4612,22 @@ void bdrv_invalidate_cache_all(Error **errp)
>      }
>  }
>
> +static bool bdrv_has_active_bds_parent(BlockDriverState *bs)
> +{
> +    BdrvChild *parent;
> +
> +    QLIST_FOREACH(parent, &bs->parents, next_parent) {
> +        if (parent->role->parent_is_bds) {
> +            BlockDriverState *parent_bs = parent->opaque;
> +            if (!(parent_bs->open_flags & BDRV_O_INACTIVE)) {
> +                return true;
> +            }
> +        }
> +    }

Now I see why you say this might make sense as a permission.

> +
> +    return false;
> +}
> +
>  static int bdrv_inactivate_recurse(BlockDriverState *bs,
>                                     bool setting_flag)
>  {
> @@ -4622,6 +4638,12 @@ static int bdrv_inactivate_recurse(BlockDriverState *bs,
>          return -ENOMEDIUM;
>      }
>
> +    /* Make sure that we don't inactivate a child before its parent.
> +     * It will be covered by recursion from the yet active parent. */
> +    if (bdrv_has_active_bds_parent(bs)) {
> +        return 0;
> +    }
> +

Hm. Wouldn't it make more sense to always return early when there are
any BDS parents? Because if there are any BDS parents and none of them
are active (anymore), then this child will have been inactivated
already, and we can save ourselves the trouble of going through the
rest of the function again.

Do drivers support multiple calls to .bdrv_inactivate() at all?

Max

>      if (!setting_flag && bs->drv->bdrv_inactivate) {
>          ret = bs->drv->bdrv_inactivate(bs);
>          if (ret < 0) {

* Re: [Qemu-devel] [PATCH for-3.1 1/2] block: Don't inactivate children before parents
  From: Kevin Wolf @ 2018-11-26 12:33 UTC
  To: Max Reitz; Cc: qemu-block, qemu-devel

Am 26.11.2018 um 13:05 hat Max Reitz geschrieben:
> On 26.11.18 12:28, Kevin Wolf wrote:
> > bdrv_child_cb_inactivate() asserts that parents are already inactive
> > when children get inactivated. This precondition is necessary because
> > parents could still issue requests in their inactivation code.
> >
> > When block nodes are created individually with -blockdev, all of them
> > are monitor owned and will be returned by bdrv_next() in an undefined
> > order (in practice, in the order of their creation, which is usually
> > children before parents), which obviously fails the assertion.
> >
> > This patch fixes the ordering by skipping nodes with still active
> > parents in bdrv_inactivate_recurse() because we know that they will be
> > covered by recursion when the last active parent becomes inactive.
> >
> > Signed-off-by: Kevin Wolf <kwolf@redhat.com>
> > ---
> >  block.c | 22 ++++++++++++++++++++++
> >  1 file changed, 22 insertions(+)
> >
> > diff --git a/block.c b/block.c
> > index 5ba3435f8f..0569275e31 100644
> > --- a/block.c
> > +++ b/block.c
> > @@ -4612,6 +4612,22 @@ void bdrv_invalidate_cache_all(Error **errp)
> >      }
> >  }
> >
> > +static bool bdrv_has_active_bds_parent(BlockDriverState *bs)
> > +{
> > +    BdrvChild *parent;
> > +
> > +    QLIST_FOREACH(parent, &bs->parents, next_parent) {
> > +        if (parent->role->parent_is_bds) {
> > +            BlockDriverState *parent_bs = parent->opaque;
> > +            if (!(parent_bs->open_flags & BDRV_O_INACTIVE)) {
> > +                return true;
> > +            }
> > +        }
> > +    }
>
> Now I see why you say this might make sense as a permission.

You do? To be honest, now that I found this solution, I don't think a
permission makes much sense any more, because you would have the same
loop, and you would only be checking a different flag in the end.

> > +
> > +    return false;
> > +}
> > +
> >  static int bdrv_inactivate_recurse(BlockDriverState *bs,
> >                                     bool setting_flag)
> >  {
> > @@ -4622,6 +4638,12 @@ static int bdrv_inactivate_recurse(BlockDriverState *bs,
> >          return -ENOMEDIUM;
> >      }
> >
> > +    /* Make sure that we don't inactivate a child before its parent.
> > +     * It will be covered by recursion from the yet active parent. */
> > +    if (bdrv_has_active_bds_parent(bs)) {
> > +        return 0;
> > +    }
> > +
>
> Hm. Wouldn't it make more sense to always return early when there are
> any BDS parents? Because if there are any BDS parents and none of them
> are active (anymore), then this child will have been inactivated
> already, and we can save ourselves the trouble of going through the
> rest of the function again.

I don't quite follow... If we always return early no matter whether
there is an active parent, who will have inactivated the child?

After trying to write up why you're wrong, I think there are two cases
and both of us have a point:

1. bdrv_next() enumerates the child node first and then the last BDS
   parent. This is what this patch fixes.

   It will inactivate the child exactly once, at the time that the last
   parent has become inactive (and recursively calls this function for
   each of its children). If you remove that one inactivation, too,
   children won't be inactivated at all.

2. bdrv_next() enumerates the last BDS parent first and then the child.
   This is unlikely and might even be impossible today, but once we
   expose bdrv_reopen() in QMP and let the user reconfigure the edges,
   it probably becomes possible.

   In this case, even after my patch we inactivate drivers twice. Maybe
   we should just return early if BDRV_O_INACTIVE is already set. What
   makes me kind of unsure is that we already test for this flag, but
   only for part of the function.

   Any ideas how to test this? Can we create such a situation today?

> Do drivers support multiple calls to .bdrv_inactivate() at all?

They were probably not designed for that... Not sure if there was ever a
commit where we used to call them multiple times without failing the
assertion first.

Kevin

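A minimal sketch of the early return Kevin suggests for case 2, purely
illustrative and not part of the posted patch:

    static int bdrv_inactivate_recurse(BlockDriverState *bs,
                                       bool setting_flag)
    {
        ...
        /* Hypothetical guard for case 2: if this node was already
         * inactivated through another parent, return early instead of
         * running the inactivation code a second time. */
        if (bs->open_flags & BDRV_O_INACTIVE) {
            return 0;
        }
        ...
    }
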
* Re: [Qemu-devel] [PATCH for-3.1 1/2] block: Don't inactivate children before parents
  From: Max Reitz @ 2018-11-26 12:44 UTC
  To: Kevin Wolf; Cc: qemu-block, qemu-devel

On 26.11.18 13:33, Kevin Wolf wrote:
> Am 26.11.2018 um 13:05 hat Max Reitz geschrieben:
>> On 26.11.18 12:28, Kevin Wolf wrote:
>>> bdrv_child_cb_inactivate() asserts that parents are already inactive
>>> when children get inactivated. This precondition is necessary because
>>> parents could still issue requests in their inactivation code.
>>>
>>> When block nodes are created individually with -blockdev, all of them
>>> are monitor owned and will be returned by bdrv_next() in an undefined
>>> order (in practice, in the order of their creation, which is usually
>>> children before parents), which obviously fails the assertion.
>>>
>>> This patch fixes the ordering by skipping nodes with still active
>>> parents in bdrv_inactivate_recurse() because we know that they will be
>>> covered by recursion when the last active parent becomes inactive.
>>>
>>> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
>>> ---
>>>  block.c | 22 ++++++++++++++++++++++
>>>  1 file changed, 22 insertions(+)
>
>> Now I see why you say this might make sense as a permission.
>
> You do? To be honest, now that I found this solution, I don't think a
> permission makes much sense any more, because you would have the same
> loop, and you would only be checking a different flag in the end.
>
>> Hm. Wouldn't it make more sense to always return early when there are
>> any BDS parents? Because if there are any BDS parents and none of them
>> are active (anymore), then this child will have been inactivated
>> already, and we can save ourselves the trouble of going through the
>> rest of the function again.
>
> I don't quite follow... If we always return early no matter whether
> there is an active parent, who will have inactivated the child?
>
> After trying to write up why you're wrong, I think there are two cases
> and both of us have a point:
>
> 1. bdrv_next() enumerates the child node first and then the last BDS
>    parent. This is what this patch fixes.
>
>    It will inactivate the child exactly once, at the time that the last
>    parent has become inactive (and recursively calls this function for
>    each of its children). If you remove that one inactivation, too,
>    children won't be inactivated at all.
>
> 2. bdrv_next() enumerates the last BDS parent first and then the child.
>    This is unlikely and might even be impossible today, but once we
>    expose bdrv_reopen() in QMP and let the user reconfigure the edges,
>    it probably becomes possible.

blockdev-snapshot exists today.

>    In this case, even after my patch we inactivate drivers twice. Maybe
>    we should just return early if BDRV_O_INACTIVE is already set. What
>    makes me kind of unsure is that we already test for this flag, but
>    only for part of the function.
>
>    Any ideas how to test this? Can we create such a situation today?

As I wrote in my second mail just now, I think bdrv_inactivate_all()
needs to check this. See the attached diff to 234, but I don't know
whether we can really test this, as there is no failure when
.bdrv_inactivate() is called multiple times. (What I've done is add
debugging code to see that it is called multiple times.)

Max

>> Do drivers support multiple calls to .bdrv_inactivate() at all?
>
> They were probably not designed for that... Not sure if there was ever a
> commit where we used to call them multiple times without failing the
> assertion first.
>
> Kevin

[-- Attachment: diff --]

diff --git a/tests/qemu-iotests/234 b/tests/qemu-iotests/234
index 1f695d337a..b119b4cc4d 100755
--- a/tests/qemu-iotests/234
+++ b/tests/qemu-iotests/234
@@ -27,11 +27,13 @@ iotests.verify_image_format(supported_fmts=['raw', 'qcow2'])
 iotests.verify_platform(['linux'])

 with iotests.FilePath('img') as img_path, \
+     iotests.FilePath('backing') as backing_path, \
      iotests.FilePath('mig_fifo_a') as fifo_a, \
      iotests.FilePath('mig_fifo_b') as fifo_b, \
      iotests.VM(path_suffix='a') as vm_a, \
      iotests.VM(path_suffix='b') as vm_b:

+    iotests.qemu_img_pipe('create', '-f', iotests.imgfmt, backing_path, '64M')
     iotests.qemu_img_pipe('create', '-f', iotests.imgfmt, img_path, '64M')

     os.mkfifo(fifo_a)
@@ -40,14 +42,20 @@ with iotests.FilePath('img') as img_path, \
     iotests.log('Launching source VM...')
     (vm_a.add_blockdev('file,filename=%s,node-name=drive0-file' % (img_path))
          .add_blockdev('%s,file=drive0-file,node-name=drive0' % (iotests.imgfmt))
+         .add_blockdev('file,filename=%s,node-name=drive0-backing-file' % (backing_path))
+         .add_blockdev('%s,file=drive0-backing-file,node-name=drive0-backing' % (iotests.imgfmt))
          .launch())

     iotests.log('Launching destination VM...')
     (vm_b.add_blockdev('file,filename=%s,node-name=drive0-file' % (img_path))
          .add_blockdev('%s,file=drive0-file,node-name=drive0' % (iotests.imgfmt))
+         .add_blockdev('file,filename=%s,node-name=drive0-backing-file' % (backing_path))
+         .add_blockdev('%s,file=drive0-backing-file,node-name=drive0-backing' % (iotests.imgfmt))
          .add_incoming("exec: cat '%s'" % (fifo_a))
          .launch())

+    vm_a.qmp('blockdev-snapshot', node='drive0-backing', overlay='drive0')
+
     iotests.log('Enabling migration QMP events on A...')
     iotests.log(vm_a.qmp('migrate-set-capabilities', capabilities=[
         {

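For reference, the graph that the attached diff builds, with the
-blockdev creation order in parentheses. Since bdrv_next() returns
monitor-owned nodes in creation order, the overlay (parent) is
enumerated before its new backing node (child), i.e. exactly case 2
from Kevin's mail:

    drive0 (2)                         <- becomes parent via blockdev-snapshot
     |- file: drive0-file (1)
     |- backing: drive0-backing (4)
         |- file: drive0-backing-file (3)
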
* Re: [Qemu-devel] [PATCH for-3.1 1/2] block: Don't inactivate children before parents
  From: Max Reitz @ 2018-11-26 12:40 UTC
  To: Kevin Wolf, qemu-block; Cc: qemu-devel

On 26.11.18 13:05, Max Reitz wrote:
> On 26.11.18 12:28, Kevin Wolf wrote:
>> bdrv_child_cb_inactivate() asserts that parents are already inactive
>> when children get inactivated. This precondition is necessary because
>> parents could still issue requests in their inactivation code.
>>
>> When block nodes are created individually with -blockdev, all of them
>> are monitor owned and will be returned by bdrv_next() in an undefined
>> order (in practice, in the order of their creation, which is usually
>> children before parents), which obviously fails the assertion.
>>
>> This patch fixes the ordering by skipping nodes with still active
>> parents in bdrv_inactivate_recurse() because we know that they will be
>> covered by recursion when the last active parent becomes inactive.
>>
>> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
>> ---
>>  block.c | 22 ++++++++++++++++++++++
>>  1 file changed, 22 insertions(+)
>>
>> diff --git a/block.c b/block.c
>> index 5ba3435f8f..0569275e31 100644
>> --- a/block.c
>> +++ b/block.c
>> @@ -4612,6 +4612,22 @@ void bdrv_invalidate_cache_all(Error **errp)
>>      }
>>  }
>>
>> +static bool bdrv_has_active_bds_parent(BlockDriverState *bs)
>> +{
>> +    BdrvChild *parent;
>> +
>> +    QLIST_FOREACH(parent, &bs->parents, next_parent) {
>> +        if (parent->role->parent_is_bds) {
>> +            BlockDriverState *parent_bs = parent->opaque;
>> +            if (!(parent_bs->open_flags & BDRV_O_INACTIVE)) {
>> +                return true;
>> +            }
>> +        }
>> +    }
>
> Now I see why you say this might make sense as a permission.
>
>> +
>> +    return false;
>> +}
>> +
>>  static int bdrv_inactivate_recurse(BlockDriverState *bs,
>>                                     bool setting_flag)
>>  {
>> @@ -4622,6 +4638,12 @@ static int bdrv_inactivate_recurse(BlockDriverState *bs,
>>          return -ENOMEDIUM;
>>      }
>>
>> +    /* Make sure that we don't inactivate a child before its parent.
>> +     * It will be covered by recursion from the yet active parent. */
>> +    if (bdrv_has_active_bds_parent(bs)) {

Another thing I found while testing: I think this should only return
early if setting_flag is true. BDRV_O_INACTIVE will only be set on the
second pass. If you skip nodes with active parents unconditionally,
bs->drv->bdrv_inactivate() will not be called for any non-root BDS
(because bdrv_has_active_bds_parent() returns true for all non-root
BDSs during the first pass).

>> +        return 0;
>> +    }
>> +
>
> Hm. Wouldn't it make more sense to always return early when there are
> any BDS parents? Because if there are any BDS parents and none of them
> are active (anymore), then this child will have been inactivated
> already, and we can save ourselves the trouble of going through the
> rest of the function again.

Hm, well, no, at least not directly here. (Otherwise
bdrv_inactivate_recurse() will not really recurse when it calls
itself...)

But bdrv_inactivate_all() could check that before invoking this
function. Ah, but then it is still possible to run into the exact bug
you're fixing here, because a node might inactivate a child that has
another parent still (which is inactivated later).

But still, I think bdrv_inactivate_all() should skip non-root BDSs,
because calling bs->drv->bdrv_inactivate() and
parent->role->inactivate() multiple times cannot be right.

Max

> Do drivers support multiple calls to .bdrv_inactivate() at all?
>
> Max
>
>>      if (!setting_flag && bs->drv->bdrv_inactivate) {
>>          ret = bs->drv->bdrv_inactivate(bs);
>>          if (ret < 0) {

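Taken together, these observations point towards a guard along the
following lines (a sketch of where the discussion is heading, not
necessarily what a v2 of the patch will look like):

    static int bdrv_inactivate_recurse(BlockDriverState *bs,
                                       bool setting_flag)
    {
        ...
        /* Only skip in the flag-setting pass: during the first pass no
         * node has BDRV_O_INACTIVE set yet, so every non-root node
         * would appear to have an active parent and its
         * .bdrv_inactivate() callback would never run. */
        if (setting_flag && bdrv_has_active_bds_parent(bs)) {
            return 0;
        }
        ...
    }
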
* [Qemu-devel] [PATCH for-3.1 2/2] iotests: Test migration with -blockdev
  From: Kevin Wolf @ 2018-11-26 11:28 UTC
  To: qemu-block; Cc: kwolf, mreitz, qemu-devel

Check that block node activation and inactivation work with a block
graph that is built with individually created nodes.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 tests/qemu-iotests/234     | 105 +++++++++++++++++++++++++++++++++++++
 tests/qemu-iotests/234.out |  27 ++++++++++
 tests/qemu-iotests/group   |   1 +
 3 files changed, 133 insertions(+)
 create mode 100755 tests/qemu-iotests/234
 create mode 100644 tests/qemu-iotests/234.out

diff --git a/tests/qemu-iotests/234 b/tests/qemu-iotests/234
new file mode 100755
index 0000000000..1f695d337a
--- /dev/null
+++ b/tests/qemu-iotests/234
@@ -0,0 +1,105 @@
+#!/usr/bin/env python
+#
+# Copyright (C) 2018 Red Hat, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+#
+# Creator/Owner: Kevin Wolf <kwolf@redhat.com>
+#
+# Check that block node activation and inactivation work with a block
+# graph that is built with individually created nodes
+
+import iotests
+import os
+
+iotests.verify_image_format(supported_fmts=['raw', 'qcow2'])
+iotests.verify_platform(['linux'])
+
+with iotests.FilePath('img') as img_path, \
+     iotests.FilePath('mig_fifo_a') as fifo_a, \
+     iotests.FilePath('mig_fifo_b') as fifo_b, \
+     iotests.VM(path_suffix='a') as vm_a, \
+     iotests.VM(path_suffix='b') as vm_b:
+
+    iotests.qemu_img_pipe('create', '-f', iotests.imgfmt, img_path, '64M')
+
+    os.mkfifo(fifo_a)
+    os.mkfifo(fifo_b)
+
+    iotests.log('Launching source VM...')
+    (vm_a.add_blockdev('file,filename=%s,node-name=drive0-file' % (img_path))
+         .add_blockdev('%s,file=drive0-file,node-name=drive0' % (iotests.imgfmt))
+         .launch())
+
+    iotests.log('Launching destination VM...')
+    (vm_b.add_blockdev('file,filename=%s,node-name=drive0-file' % (img_path))
+         .add_blockdev('%s,file=drive0-file,node-name=drive0' % (iotests.imgfmt))
+         .add_incoming("exec: cat '%s'" % (fifo_a))
+         .launch())
+
+    iotests.log('Enabling migration QMP events on A...')
+    iotests.log(vm_a.qmp('migrate-set-capabilities', capabilities=[
+        {
+            'capability': 'events',
+            'state': True
+        }
+    ]))
+
+    iotests.log('Starting migration to B...')
+    iotests.log(vm_a.qmp('migrate', uri='exec:cat >%s' % (fifo_a)))
+    with iotests.Timeout(3, 'Migration does not complete'):
+        while True:
+            event = vm_a.event_wait('MIGRATION')
+            iotests.log(event, filters=[iotests.filter_qmp_event])
+            if event['data']['status'] == 'completed':
+                break
+
+    iotests.log(vm_a.qmp('query-migrate')['return']['status'])
+    iotests.log(vm_b.qmp('query-migrate')['return']['status'])
+
+    iotests.log(vm_a.qmp('query-status'))
+    iotests.log(vm_b.qmp('query-status'))
+
+    iotests.log('Add a second parent to drive0-file...')
+    iotests.log(vm_b.qmp('blockdev-add', driver='raw', file='drive0-file',
+                         node_name='drive0-raw'))
+
+    iotests.log('Restart A with -incoming and second parent...')
+    vm_a.shutdown()
+    (vm_a.add_blockdev('raw,file=drive0-file,node-name=drive0-raw')
+         .add_incoming("exec: cat '%s'" % (fifo_b))
+         .launch())
+
+    iotests.log('Enabling migration QMP events on B...')
+    iotests.log(vm_b.qmp('migrate-set-capabilities', capabilities=[
+        {
+            'capability': 'events',
+            'state': True
+        }
+    ]))
+
+    iotests.log('Starting migration back to A...')
+    iotests.log(vm_b.qmp('migrate', uri='exec:cat >%s' % (fifo_b)))
+    with iotests.Timeout(3, 'Migration does not complete'):
+        while True:
+            event = vm_b.event_wait('MIGRATION')
+            iotests.log(event, filters=[iotests.filter_qmp_event])
+            if event['data']['status'] == 'completed':
+                break
+
+    iotests.log(vm_a.qmp('query-migrate')['return']['status'])
+    iotests.log(vm_b.qmp('query-migrate')['return']['status'])
+
+    iotests.log(vm_a.qmp('query-status'))
+    iotests.log(vm_b.qmp('query-status'))
diff --git a/tests/qemu-iotests/234.out b/tests/qemu-iotests/234.out
new file mode 100644
index 0000000000..115537cd5c
--- /dev/null
+++ b/tests/qemu-iotests/234.out
@@ -0,0 +1,27 @@
+Launching source VM...
+Launching destination VM...
+Enabling migration QMP events on A...
+{"return": {}}
+Starting migration to B...
+{"return": {}}
+{"data": {"status": "setup"}, "event": "MIGRATION", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"status": "active"}, "event": "MIGRATION", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"status": "completed"}, "event": "MIGRATION", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+completed
+completed
+{"return": {"running": false, "singlestep": false, "status": "postmigrate"}}
+{"return": {"running": true, "singlestep": false, "status": "running"}}
+Add a second parent to drive0-file...
+{"return": {}}
+Restart A with -incoming and second parent...
+Enabling migration QMP events on B...
+{"return": {}}
+Starting migration back to A...
+{"return": {}}
+{"data": {"status": "setup"}, "event": "MIGRATION", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"status": "active"}, "event": "MIGRATION", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"status": "completed"}, "event": "MIGRATION", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
+completed
+completed
+{"return": {"running": true, "singlestep": false, "status": "running"}}
+{"return": {"running": false, "singlestep": false, "status": "postmigrate"}}
diff --git a/tests/qemu-iotests/group b/tests/qemu-iotests/group
index ddf1a5b549..8c56a0ad11 100644
--- a/tests/qemu-iotests/group
+++ b/tests/qemu-iotests/group
@@ -231,3 +231,4 @@
 231 auto quick
 232 auto quick
 233 auto quick
+234 auto quick migration

--
2.19.1

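For anyone reproducing this: an individual iotest is normally run from
the tests/qemu-iotests directory of a built tree, e.g. "./check -qcow2
234" or "./check -raw 234" (the exact invocation may vary between QEMU
releases).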