* [Qemu-devel] [PATCH v4 0/3] qmp: Add "blockdev-backup" @ 2014-12-04 2:29 Fam Zheng 2014-12-04 2:29 ` [Qemu-devel] [PATCH v4 1/3] qmp: Add command 'blockdev-backup' Fam Zheng ` (2 more replies) 0 siblings, 3 replies; 13+ messages in thread From: Fam Zheng @ 2014-12-04 2:29 UTC (permalink / raw) To: qemu-devel; +Cc: Kevin Wolf, Markus Armbruster, Stefan Hajnoczi, mreitz v4: Rebase to block-next. Support dataplane by acquiring AioContext. (Stefan) v3: Address Eric's comments on documentation. Squashed 3/4 into 2/4. v2: Address Markus' and Eric's comments: Fix qapi schema documentation. Fix versioning of transactions. Improve test case code by dropping an inelegant bool. The existing drive-backup command accepts a target file path, but that interface provides little flexibility in the properties of the target block device, compared to what is possible with "blockdev-add", "drive_add" or "-drive". This is also a building block to allow image fleecing (creating a point-in-time snapshot and exporting it with nbd-server-add). (For symmetry, blockdev-mirror will be added in a separate series.) Fam Fam Zheng (3): qmp: Add command 'blockdev-backup' block: Add blockdev-backup to transaction qemu-iotests: Test blockdev-backup in 055 block/backup.c | 28 ++++++ blockdev.c | 129 +++++++++++++++++++++++++++ qapi-schema.json | 7 ++ qapi/block-core.json | 54 ++++++++++++ qmp-commands.hx | 44 ++++++++++ tests/qemu-iotests/055 | 211 +++++++++++++++++++++++++++++++++++++-------- tests/qemu-iotests/055.out | 4 +- 7 files changed, 438 insertions(+), 39 deletions(-) -- 1.9.3 ^ permalink raw reply [flat|nested] 13+ messages in thread
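As an illustration of that flexibility, an end-to-end use of the new command might look roughly like the following; the device ids and the filename are invented for the example, and the blockdev-add argument layout shown (the 2.x-era "options" wrapper) is an assumption rather than something defined by this series:

-> { "execute": "blockdev-add",
     "arguments": { "options": { "driver": "qcow2", "id": "tgt-id",
                                 "file": { "driver": "file",
                                           "filename": "backup.qcow2" } } } }
<- { "return": {} }
-> { "execute": "blockdev-backup",
     "arguments": { "device": "src-id", "sync": "full", "target": "tgt-id" } }
<- { "return": {} }

Every property of the target can then be chosen via blockdev-add options, which is the flexibility the cover letter contrasts with drive-backup's plain file path.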
* [Qemu-devel] [PATCH v4 1/3] qmp: Add command 'blockdev-backup' 2014-12-04 2:29 [Qemu-devel] [PATCH v4 0/3] qmp: Add "blockdev-backup" Fam Zheng @ 2014-12-04 2:29 ` Fam Zheng 2014-12-04 13:43 ` Max Reitz 2014-12-04 2:29 ` [Qemu-devel] [PATCH v4 2/3] block: Add blockdev-backup to transaction Fam Zheng 2014-12-04 2:29 ` [Qemu-devel] [PATCH v4 3/3] qemu-iotests: Test blockdev-backup in 055 Fam Zheng 2 siblings, 1 reply; 13+ messages in thread From: Fam Zheng @ 2014-12-04 2:29 UTC (permalink / raw) To: qemu-devel; +Cc: Kevin Wolf, Markus Armbruster, Stefan Hajnoczi, mreitz Similar to drive-backup, but this command uses a device id as target instead of creating/opening an image file. Also add blocker on target bs, since the target is also a named device now. Add check and report error for bs == target which became possible but is an illegal case with introduction of blockdev-backup. Signed-off-by: Fam Zheng <famz@redhat.com> --- block/backup.c | 28 +++++++++++++++++++++++++++ blockdev.c | 48 ++++++++++++++++++++++++++++++++++++++++++++++ qapi/block-core.json | 54 ++++++++++++++++++++++++++++++++++++++++++++++++++++ qmp-commands.hx | 44 ++++++++++++++++++++++++++++++++++++++++++ 4 files changed, 174 insertions(+) diff --git a/block/backup.c b/block/backup.c index 792e655..b944dd4 100644 --- a/block/backup.c +++ b/block/backup.c @@ -360,6 +360,7 @@ static void coroutine_fn backup_run(void *opaque) hbitmap_free(job->bitmap); bdrv_iostatus_disable(target); + bdrv_op_unblock_all(target, job->common.blocker); data = g_malloc(sizeof(*data)); data->ret = ret; @@ -379,6 +380,11 @@ void backup_start(BlockDriverState *bs, BlockDriverState *target, assert(target); assert(cb); + if (bs == target) { + error_setg(errp, "Source and target cannot be the same"); + return; + } + if ((on_source_error == BLOCKDEV_ON_ERROR_STOP || on_source_error == BLOCKDEV_ON_ERROR_ENOSPC) && !bdrv_iostatus_is_enabled(bs)) { @@ -386,6 +392,26 @@ void backup_start(BlockDriverState *bs, BlockDriverState *target, return; } + if (!bdrv_is_inserted(bs)) { + error_setg(errp, "Devie is not inserted: %s", + bdrv_get_device_name(bs)); + return; + } + + if (!bdrv_is_inserted(target)) { + error_setg(errp, "Devie is not inserted: %s", + bdrv_get_device_name(target)); + return; + } + + if (bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_BACKUP_SOURCE, errp)) { + return; + } + + if (bdrv_op_is_blocked(target, BLOCK_OP_TYPE_BACKUP_TARGET, errp)) { + return; + } + len = bdrv_getlength(bs); if (len < 0) { error_setg_errno(errp, -len, "unable to get length for '%s'", @@ -399,6 +425,8 @@ void backup_start(BlockDriverState *bs, BlockDriverState *target, return; } + bdrv_op_block_all(target, job->common.blocker); + job->on_source_error = on_source_error; job->on_target_error = on_target_error; job->target = target; diff --git a/blockdev.c b/blockdev.c index 5651a8e..f44441a 100644 --- a/blockdev.c +++ b/blockdev.c @@ -2240,6 +2240,8 @@ void qmp_drive_backup(const char *device, const char *target, aio_context = bdrv_get_aio_context(bs); aio_context_acquire(aio_context); + /* Although backup_run has this check too, we need to use bs->drv below, so + * do an early check redundantly. 
*/ if (!bdrv_is_inserted(bs)) { error_set(errp, QERR_DEVICE_HAS_NO_MEDIUM, device); goto out; @@ -2256,6 +2258,7 @@ void qmp_drive_backup(const char *device, const char *target, } } + /* Early check to avoid creating target */ if (bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_BACKUP_SOURCE, errp)) { goto out; } @@ -2323,6 +2326,51 @@ BlockDeviceInfoList *qmp_query_named_block_nodes(Error **errp) return bdrv_named_nodes_list(); } +void qmp_blockdev_backup(const char *device, const char *target, + enum MirrorSyncMode sync, + bool has_speed, int64_t speed, + bool has_on_source_error, + BlockdevOnError on_source_error, + bool has_on_target_error, + BlockdevOnError on_target_error, + Error **errp) +{ + BlockDriverState *bs; + BlockDriverState *target_bs; + Error *local_err = NULL; + + if (!has_speed) { + speed = 0; + } + if (!has_on_source_error) { + on_source_error = BLOCKDEV_ON_ERROR_REPORT; + } + if (!has_on_target_error) { + on_target_error = BLOCKDEV_ON_ERROR_REPORT; + } + + bs = bdrv_find(device); + if (!bs) { + error_set(errp, QERR_DEVICE_NOT_FOUND, device); + return; + } + + target_bs = bdrv_find(target); + if (!target_bs) { + error_set(errp, QERR_DEVICE_NOT_FOUND, target); + return; + } + + bdrv_ref(target_bs); + bdrv_set_aio_context(target_bs, bdrv_get_aio_context(bs)); + backup_start(bs, target_bs, speed, sync, on_source_error, on_target_error, + block_job_cb, bs, &local_err); + if (local_err != NULL) { + bdrv_unref(target_bs); + error_propagate(errp, local_err); + } +} + #define DEFAULT_MIRROR_BUF_SIZE (10 << 20) void qmp_drive_mirror(const char *device, const char *target, diff --git a/qapi/block-core.json b/qapi/block-core.json index 6e8db15..d9f0598 100644 --- a/qapi/block-core.json +++ b/qapi/block-core.json @@ -703,6 +703,41 @@ '*on-target-error': 'BlockdevOnError' } } ## +# @BlockdevBackup +# +# @device: the name of the device which should be copied. +# +# @target: the name of the backup target device. +# +# @sync: what parts of the disk image should be copied to the destination +# (all the disk, only the sectors allocated in the topmost image, or +# only new I/O). +# +# @speed: #optional the maximum speed, in bytes per second. The default is 0, +# for unlimited. +# +# @on-source-error: #optional the action to take on an error on the source, +# default 'report'. 'stop' and 'enospc' can only be used +# if the block device supports io-status (see BlockInfo). +# +# @on-target-error: #optional the action to take on an error on the target, +# default 'report' (no limitations, since this applies to +# a different block device than @device). +# +# Note that @on-source-error and @on-target-error only affect background I/O. +# If an error occurs during a guest write request, the device's rerror/werror +# actions will be used. +# +# Since: 2.3 +## +{ 'type': 'BlockdevBackup', + 'data': { 'device': 'str', 'target': 'str', + 'sync': 'MirrorSyncMode', + '*speed': 'int', + '*on-source-error': 'BlockdevOnError', + '*on-target-error': 'BlockdevOnError' } } + +## # @blockdev-snapshot-sync # # Generates a synchronous snapshot of a block device. @@ -822,6 +857,25 @@ { 'command': 'drive-backup', 'data': 'DriveBackup' } ## +# @blockdev-backup +# +# Start a point-in-time copy of a block device to a new destination. The +# status of ongoing blockdev-backup operations can be checked with +# query-block-jobs where the BlockJobInfo.type field has the value 'backup'. +# The operation can be stopped before it has completed using the +# block-job-cancel command. 
+# +# For the arguments, see the documentation of BlockdevBackup. +# +# Returns: Nothing on success. +# If @device or @target is not a valid block device, DeviceNotFound. +# +# Since 2.3 +## +{ 'command': 'blockdev-backup', 'data': 'BlockdevBackup' } + + +## # @query-named-block-nodes # # Get the named block driver list diff --git a/qmp-commands.hx b/qmp-commands.hx index d4b0010..3b209b2 100644 --- a/qmp-commands.hx +++ b/qmp-commands.hx @@ -1094,6 +1094,50 @@ Example: "sync": "full", "target": "backup.img" } } <- { "return": {} } + +EQMP + + { + .name = "blockdev-backup", + .args_type = "sync:s,device:B,target:B,speed:i?," + "on-source-error:s?,on-target-error:s?", + .mhandler.cmd_new = qmp_marshal_input_blockdev_backup, + }, + +SQMP +blockdev-backup +------------ + +The device version of drive-backup: this command takes an existing named device +as backup target. + +Arguments: + +- "device": the name of the device which should be copied. + (json-string) +- "target": the target of the new image. If the file exists, or if it is a + device, the existing file/device will be used as the new + destination. If it does not exist, a new file will be created. + (json-string) +- "sync": what parts of the disk image should be copied to the destination; + possibilities include "full" for all the disk, "top" for only the + sectors allocated in the topmost image, or "none" to only replicate + new I/O (MirrorSyncMode). +- "speed": the maximum speed, in bytes per second (json-int, optional) +- "on-source-error": the action to take on an error on the source, default + 'report'. 'stop' and 'enospc' can only be used + if the block device supports io-status. + (BlockdevOnError, optional) +- "on-target-error": the action to take on an error on the target, default + 'report' (no limitations, since this applies to + a different block device than device). + (BlockdevOnError, optional) + +Example: +-> { "execute": "blockdev-backup", "arguments": { "device": "src-id", + "target": "tgt-id" } } +<- { "return": {} } + EQMP { -- 1.9.3 ^ permalink raw reply related [flat|nested] 13+ messages in thread
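As a usage note on the documentation above (not part of the patch), checking on and cancelling a running job would go roughly like this; the reply is abbreviated and its values are made up:

-> { "execute": "query-block-jobs" }
<- { "return": [ { "type": "backup", "device": "src-id", "len": 10737418240,
                   "offset": 134217728, "busy": true, "paused": false,
                   "speed": 0, "io-status": "ok" } ] }
-> { "execute": "block-job-cancel", "arguments": { "device": "src-id" } }
<- { "return": {} }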
* Re: [Qemu-devel] [PATCH v4 1/3] qmp: Add command 'blockdev-backup' 2014-12-04 2:29 ` [Qemu-devel] [PATCH v4 1/3] qmp: Add command 'blockdev-backup' Fam Zheng @ 2014-12-04 13:43 ` Max Reitz 2014-12-05 6:12 ` Fam Zheng 2014-12-19 8:47 ` Markus Armbruster 0 siblings, 2 replies; 13+ messages in thread From: Max Reitz @ 2014-12-04 13:43 UTC (permalink / raw) To: Fam Zheng, qemu-devel; +Cc: Kevin Wolf, Markus Armbruster, Stefan Hajnoczi On 2014-12-04 at 03:29, Fam Zheng wrote: > Similar to drive-backup, but this command uses a device id as target > instead of creating/opening an image file. > > Also add blocker on target bs, since the target is also a named device > now. > > Add check and report error for bs == target which became possible but is > an illegal case with introduction of blockdev-backup. > > Signed-off-by: Fam Zheng <famz@redhat.com> > --- > block/backup.c | 28 +++++++++++++++++++++++++++ > blockdev.c | 48 ++++++++++++++++++++++++++++++++++++++++++++++ > qapi/block-core.json | 54 ++++++++++++++++++++++++++++++++++++++++++++++++++++ > qmp-commands.hx | 44 ++++++++++++++++++++++++++++++++++++++++++ > 4 files changed, 174 insertions(+) > > diff --git a/block/backup.c b/block/backup.c > index 792e655..b944dd4 100644 > --- a/block/backup.c > +++ b/block/backup.c > @@ -360,6 +360,7 @@ static void coroutine_fn backup_run(void *opaque) > hbitmap_free(job->bitmap); > > bdrv_iostatus_disable(target); > + bdrv_op_unblock_all(target, job->common.blocker); > > data = g_malloc(sizeof(*data)); > data->ret = ret; > @@ -379,6 +380,11 @@ void backup_start(BlockDriverState *bs, BlockDriverState *target, > assert(target); > assert(cb); > > + if (bs == target) { > + error_setg(errp, "Source and target cannot be the same"); > + return; > + } > + > if ((on_source_error == BLOCKDEV_ON_ERROR_STOP || > on_source_error == BLOCKDEV_ON_ERROR_ENOSPC) && > !bdrv_iostatus_is_enabled(bs)) { > @@ -386,6 +392,26 @@ void backup_start(BlockDriverState *bs, BlockDriverState *target, > return; > } > > + if (!bdrv_is_inserted(bs)) { > + error_setg(errp, "Devie is not inserted: %s", *Device > + bdrv_get_device_name(bs)); > + return; > + } > + > + if (!bdrv_is_inserted(target)) { > + error_setg(errp, "Devie is not inserted: %s", Here, too. > + bdrv_get_device_name(target)); > + return; > + } > + > + if (bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_BACKUP_SOURCE, errp)) { > + return; > + } > + > + if (bdrv_op_is_blocked(target, BLOCK_OP_TYPE_BACKUP_TARGET, errp)) { > + return; > + } > + > len = bdrv_getlength(bs); > if (len < 0) { > error_setg_errno(errp, -len, "unable to get length for '%s'", > @@ -399,6 +425,8 @@ void backup_start(BlockDriverState *bs, BlockDriverState *target, > return; > } > > + bdrv_op_block_all(target, job->common.blocker); > + > job->on_source_error = on_source_error; > job->on_target_error = on_target_error; > job->target = target; > diff --git a/blockdev.c b/blockdev.c > index 5651a8e..f44441a 100644 > --- a/blockdev.c > +++ b/blockdev.c > @@ -2240,6 +2240,8 @@ void qmp_drive_backup(const char *device, const char *target, > aio_context = bdrv_get_aio_context(bs); > aio_context_acquire(aio_context); > > + /* Although backup_run has this check too, we need to use bs->drv below, so > + * do an early check redundantly. 
*/ > if (!bdrv_is_inserted(bs)) { > error_set(errp, QERR_DEVICE_HAS_NO_MEDIUM, device); > goto out; > @@ -2256,6 +2258,7 @@ void qmp_drive_backup(const char *device, const char *target, > } > } > > + /* Early check to avoid creating target */ > if (bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_BACKUP_SOURCE, errp)) { > goto out; > } > @@ -2323,6 +2326,51 @@ BlockDeviceInfoList *qmp_query_named_block_nodes(Error **errp) > return bdrv_named_nodes_list(); > } > > +void qmp_blockdev_backup(const char *device, const char *target, > + enum MirrorSyncMode sync, > + bool has_speed, int64_t speed, > + bool has_on_source_error, > + BlockdevOnError on_source_error, > + bool has_on_target_error, > + BlockdevOnError on_target_error, > + Error **errp) > +{ > + BlockDriverState *bs; > + BlockDriverState *target_bs; > + Error *local_err = NULL; > + > + if (!has_speed) { > + speed = 0; > + } > + if (!has_on_source_error) { > + on_source_error = BLOCKDEV_ON_ERROR_REPORT; > + } > + if (!has_on_target_error) { > + on_target_error = BLOCKDEV_ON_ERROR_REPORT; > + } > + > + bs = bdrv_find(device); Hmmmm, I once tried to rewrite some block jobs to use BlockBackend instead of BDS directly... Didn't work out so well. So, well, bdrv_find() is fine (although there is still the TODO above its definition which asks callers to use blk_by_name()...). > + if (!bs) { > + error_set(errp, QERR_DEVICE_NOT_FOUND, device); > + return; > + } > + > + target_bs = bdrv_find(target); > + if (!target_bs) { > + error_set(errp, QERR_DEVICE_NOT_FOUND, target); > + return; > + } > + > + bdrv_ref(target_bs); > + bdrv_set_aio_context(target_bs, bdrv_get_aio_context(bs)); In the cover letter you said you were acquiring the AIO context but you're not. Something like the aio_context_acquire() call in qmp_drive_backup() seems missing. > + backup_start(bs, target_bs, speed, sync, on_source_error, on_target_error, > + block_job_cb, bs, &local_err); > + if (local_err != NULL) { > + bdrv_unref(target_bs); Hm, as far as I can see, backup_complete() is always run, regardless of the operation status. backup_complete() in turn calls bdrv_unref(s->target), so this seems unnecessary to me. > + error_propagate(errp, local_err); > + } > +} > + > #define DEFAULT_MIRROR_BUF_SIZE (10 << 20) > > void qmp_drive_mirror(const char *device, const char *target, > diff --git a/qapi/block-core.json b/qapi/block-core.json > index 6e8db15..d9f0598 100644 > --- a/qapi/block-core.json > +++ b/qapi/block-core.json > @@ -703,6 +703,41 @@ > '*on-target-error': 'BlockdevOnError' } } > > ## > +# @BlockdevBackup > +# > +# @device: the name of the device which should be copied. > +# > +# @target: the name of the backup target device. > +# > +# @sync: what parts of the disk image should be copied to the destination > +# (all the disk, only the sectors allocated in the topmost image, or > +# only new I/O). > +# > +# @speed: #optional the maximum speed, in bytes per second. The default is 0, > +# for unlimited. > +# > +# @on-source-error: #optional the action to take on an error on the source, > +# default 'report'. 'stop' and 'enospc' can only be used > +# if the block device supports io-status (see BlockInfo). > +# > +# @on-target-error: #optional the action to take on an error on the target, > +# default 'report' (no limitations, since this applies to > +# a different block device than @device). > +# > +# Note that @on-source-error and @on-target-error only affect background I/O. > +# If an error occurs during a guest write request, the device's rerror/werror > +# actions will be used. 
> +# > +# Since: 2.3 > +## > +{ 'type': 'BlockdevBackup', > + 'data': { 'device': 'str', 'target': 'str', > + 'sync': 'MirrorSyncMode', > + '*speed': 'int', > + '*on-source-error': 'BlockdevOnError', > + '*on-target-error': 'BlockdevOnError' } } > + > +## > # @blockdev-snapshot-sync > # > # Generates a synchronous snapshot of a block device. > @@ -822,6 +857,25 @@ > { 'command': 'drive-backup', 'data': 'DriveBackup' } > > ## > +# @blockdev-backup > +# > +# Start a point-in-time copy of a block device to a new destination. The > +# status of ongoing blockdev-backup operations can be checked with > +# query-block-jobs where the BlockJobInfo.type field has the value 'backup'. > +# The operation can be stopped before it has completed using the > +# block-job-cancel command. > +# > +# For the arguments, see the documentation of BlockdevBackup. > +# > +# Returns: Nothing on success. > +# If @device or @target is not a valid block device, DeviceNotFound. > +# > +# Since 2.3 > +## > +{ 'command': 'blockdev-backup', 'data': 'BlockdevBackup' } > + > + > +## > # @query-named-block-nodes > # > # Get the named block driver list > diff --git a/qmp-commands.hx b/qmp-commands.hx > index d4b0010..3b209b2 100644 > --- a/qmp-commands.hx > +++ b/qmp-commands.hx > @@ -1094,6 +1094,50 @@ Example: > "sync": "full", > "target": "backup.img" } } > <- { "return": {} } > + > +EQMP > + > + { > + .name = "blockdev-backup", > + .args_type = "sync:s,device:B,target:B,speed:i?," > + "on-source-error:s?,on-target-error:s?", > + .mhandler.cmd_new = qmp_marshal_input_blockdev_backup, > + }, > + > +SQMP > +blockdev-backup > +------------ > + > +The device version of drive-backup: this command takes an existing named device > +as backup target. > + > +Arguments: > + > +- "device": the name of the device which should be copied. > + (json-string) > +- "target": the target of the new image. If the file exists, or if it is a > + device, the existing file/device will be used as the new > + destination. If it does not exist, a new file will be created. > + (json-string) > +- "sync": what parts of the disk image should be copied to the destination; > + possibilities include "full" for all the disk, "top" for only the > + sectors allocated in the topmost image, or "none" to only replicate > + new I/O (MirrorSyncMode). > +- "speed": the maximum speed, in bytes per second (json-int, optional) > +- "on-source-error": the action to take on an error on the source, default > + 'report'. 'stop' and 'enospc' can only be used > + if the block device supports io-status. > + (BlockdevOnError, optional) > +- "on-target-error": the action to take on an error on the target, default > + 'report' (no limitations, since this applies to > + a different block device than device). > + (BlockdevOnError, optional) > + > +Example: > +-> { "execute": "blockdev-backup", "arguments": { "device": "src-id", > + "target": "tgt-id" } } Isn't "sync" missing? Max > +<- { "return": {} } > + > EQMP > > { ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [Qemu-devel] [PATCH v4 1/3] qmp: Add command 'blockdev-backup' 2014-12-04 13:43 ` Max Reitz @ 2014-12-05 6:12 ` Fam Zheng 2014-12-05 9:10 ` Max Reitz 2014-12-19 8:47 ` Markus Armbruster 1 sibling, 1 reply; 13+ messages in thread From: Fam Zheng @ 2014-12-05 6:12 UTC (permalink / raw) To: Max Reitz; +Cc: Kevin Wolf, qemu-devel, Stefan Hajnoczi, Markus Armbruster On Thu, 12/04 14:43, Max Reitz wrote: > >+ if (!bs) { > >+ error_set(errp, QERR_DEVICE_NOT_FOUND, device); > >+ return; > >+ } > >+ > >+ target_bs = bdrv_find(target); > >+ if (!target_bs) { > >+ error_set(errp, QERR_DEVICE_NOT_FOUND, target); > >+ return; > >+ } > >+ > >+ bdrv_ref(target_bs); > >+ bdrv_set_aio_context(target_bs, bdrv_get_aio_context(bs)); > > In the cover letter you said you were acquiring the AIO context but you're > not. Something like the aio_context_acquire() call in qmp_drive_backup() > seems missing. Yes. Adding. > > >+ backup_start(bs, target_bs, speed, sync, on_source_error, on_target_error, > >+ block_job_cb, bs, &local_err); > >+ if (local_err != NULL) { > >+ bdrv_unref(target_bs); > > Hm, as far as I can see, backup_complete() is always run, regardless of the > operation status. backup_complete() in turn calls bdrv_unref(s->target), so > this seems unnecessary to me. In error case, backup_start does nothing. So we need to release for the bdrv_ref two lines above. > >+SQMP > >+blockdev-backup > >+------------ > >+ > >+The device version of drive-backup: this command takes an existing named device > >+as backup target. > >+ > >+Arguments: > >+ > >+- "device": the name of the device which should be copied. > >+ (json-string) > >+- "target": the target of the new image. If the file exists, or if it is a > >+ device, the existing file/device will be used as the new > >+ destination. If it does not exist, a new file will be created. > >+ (json-string) > >+- "sync": what parts of the disk image should be copied to the destination; > >+ possibilities include "full" for all the disk, "top" for only the > >+ sectors allocated in the topmost image, or "none" to only replicate > >+ new I/O (MirrorSyncMode). > >+- "speed": the maximum speed, in bytes per second (json-int, optional) > >+- "on-source-error": the action to take on an error on the source, default > >+ 'report'. 'stop' and 'enospc' can only be used > >+ if the block device supports io-status. > >+ (BlockdevOnError, optional) > >+- "on-target-error": the action to take on an error on the target, default > >+ 'report' (no limitations, since this applies to > >+ a different block device than device). > >+ (BlockdevOnError, optional) > >+ > >+Example: > >+-> { "execute": "blockdev-backup", "arguments": { "device": "src-id", > >+ "target": "tgt-id" } } > > Isn't "sync" missing? Yes. Thanks, Fam ^ permalink raw reply [flat|nested] 13+ messages in thread
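Presumably the example will then grow the missing argument in the next revision, ending up something like this (sync mode picked arbitrarily):

-> { "execute": "blockdev-backup", "arguments": { "device": "src-id",
                                                  "sync": "full",
                                                  "target": "tgt-id" } }
<- { "return": {} }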
* Re: [Qemu-devel] [PATCH v4 1/3] qmp: Add command 'blockdev-backup' 2014-12-05 6:12 ` Fam Zheng @ 2014-12-05 9:10 ` Max Reitz 0 siblings, 0 replies; 13+ messages in thread From: Max Reitz @ 2014-12-05 9:10 UTC (permalink / raw) To: Fam Zheng; +Cc: Kevin Wolf, qemu-devel, Stefan Hajnoczi, Markus Armbruster On 2014-12-05 at 07:12, Fam Zheng wrote: > On Thu, 12/04 14:43, Max Reitz wrote: >>> + if (!bs) { >>> + error_set(errp, QERR_DEVICE_NOT_FOUND, device); >>> + return; >>> + } >>> + >>> + target_bs = bdrv_find(target); >>> + if (!target_bs) { >>> + error_set(errp, QERR_DEVICE_NOT_FOUND, target); >>> + return; >>> + } >>> + >>> + bdrv_ref(target_bs); >>> + bdrv_set_aio_context(target_bs, bdrv_get_aio_context(bs)); >> In the cover letter you said you were acquiring the AIO context but you're >> not. Something like the aio_context_acquire() call in qmp_drive_backup() >> seems missing. > Yes. Adding. > >>> + backup_start(bs, target_bs, speed, sync, on_source_error, on_target_error, >>> + block_job_cb, bs, &local_err); >>> + if (local_err != NULL) { >>> + bdrv_unref(target_bs); >> Hm, as far as I can see, backup_complete() is always run, regardless of the >> operation status. backup_complete() in turn calls bdrv_unref(s->target), so >> this seems unnecessary to me. > In error case, backup_start does nothing. So we need to release for the > bdrv_ref two lines above. Ah, right, I forgot about early errors. local_err is not even set if the block job itself fails, because by the time the block job is started, backup_start() will have already returned. My fault. Max >>> +SQMP >>> +blockdev-backup >>> +------------ >>> + >>> +The device version of drive-backup: this command takes an existing named device >>> +as backup target. >>> + >>> +Arguments: >>> + >>> +- "device": the name of the device which should be copied. >>> + (json-string) >>> +- "target": the target of the new image. If the file exists, or if it is a >>> + device, the existing file/device will be used as the new >>> + destination. If it does not exist, a new file will be created. >>> + (json-string) >>> +- "sync": what parts of the disk image should be copied to the destination; >>> + possibilities include "full" for all the disk, "top" for only the >>> + sectors allocated in the topmost image, or "none" to only replicate >>> + new I/O (MirrorSyncMode). >>> +- "speed": the maximum speed, in bytes per second (json-int, optional) >>> +- "on-source-error": the action to take on an error on the source, default >>> + 'report'. 'stop' and 'enospc' can only be used >>> + if the block device supports io-status. >>> + (BlockdevOnError, optional) >>> +- "on-target-error": the action to take on an error on the target, default >>> + 'report' (no limitations, since this applies to >>> + a different block device than device). >>> + (BlockdevOnError, optional) >>> + >>> +Example: >>> +-> { "execute": "blockdev-backup", "arguments": { "device": "src-id", >>> + "target": "tgt-id" } } >> Isn't "sync" missing? > Yes. > > Thanks, > Fam ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [Qemu-devel] [PATCH v4 1/3] qmp: Add command 'blockdev-backup' 2014-12-04 13:43 ` Max Reitz 2014-12-05 6:12 ` Fam Zheng @ 2014-12-19 8:47 ` Markus Armbruster 1 sibling, 0 replies; 13+ messages in thread From: Markus Armbruster @ 2014-12-19 8:47 UTC (permalink / raw) To: Max Reitz; +Cc: Kevin Wolf, Fam Zheng, qemu-devel, Stefan Hajnoczi Max Reitz <mreitz@redhat.com> writes: > On 2014-12-04 at 03:29, Fam Zheng wrote: >> Similar to drive-backup, but this command uses a device id as target >> instead of creating/opening an image file. >> >> Also add blocker on target bs, since the target is also a named device >> now. >> >> Add check and report error for bs == target which became possible but is >> an illegal case with introduction of blockdev-backup. [...] >> diff --git a/blockdev.c b/blockdev.c >> index 5651a8e..f44441a 100644 >> --- a/blockdev.c >> +++ b/blockdev.c [...] >> + if (!bs) { >> + error_set(errp, QERR_DEVICE_NOT_FOUND, device); >> + return; >> + } >> + >> + target_bs = bdrv_find(target); >> + if (!target_bs) { >> + error_set(errp, QERR_DEVICE_NOT_FOUND, target); >> + return; >> + } >> + >> + bdrv_ref(target_bs); >> + bdrv_set_aio_context(target_bs, bdrv_get_aio_context(bs)); > > In the cover letter you said you were acquiring the AIO context but > you're not. Something like the aio_context_acquire() call in > qmp_drive_backup() seems missing. The fact that I missed this in my review demonstrates that I have to pay much more attention to AIO contexts. Thanks, Max! ^ permalink raw reply [flat|nested] 13+ messages in thread
* [Qemu-devel] [PATCH v4 2/3] block: Add blockdev-backup to transaction 2014-12-04 2:29 [Qemu-devel] [PATCH v4 0/3] qmp: Add "blockdev-backup" Fam Zheng 2014-12-04 2:29 ` [Qemu-devel] [PATCH v4 1/3] qmp: Add command 'blockdev-backup' Fam Zheng @ 2014-12-04 2:29 ` Fam Zheng 2014-12-04 13:59 ` Max Reitz 2014-12-04 2:29 ` [Qemu-devel] [PATCH v4 3/3] qemu-iotests: Test blockdev-backup in 055 Fam Zheng 2 siblings, 1 reply; 13+ messages in thread From: Fam Zheng @ 2014-12-04 2:29 UTC (permalink / raw) To: qemu-devel; +Cc: Kevin Wolf, Markus Armbruster, Stefan Hajnoczi, mreitz Also add version info for other transaction types. Signed-off-by: Fam Zheng <famz@redhat.com> --- blockdev.c | 81 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++ qapi-schema.json | 7 +++++ 2 files changed, 88 insertions(+) diff --git a/blockdev.c b/blockdev.c index f44441a..a98a4f8 100644 --- a/blockdev.c +++ b/blockdev.c @@ -1559,6 +1559,81 @@ static void drive_backup_clean(BlkTransactionState *common) } } +typedef struct BlockdevBackupState { + BlkTransactionState common; + BlockDriverState *bs; + BlockJob *job; + AioContext *aio_context; +} BlockdevBackupState; + +static void blockdev_backup_prepare(BlkTransactionState *common, Error **errp) +{ + BlockdevBackupState *state = DO_UPCAST(BlockdevBackupState, common, common); + BlockdevBackup *backup; + BlockDriverState *bs, *target; + Error *local_err = NULL; + + assert(common->action->kind == TRANSACTION_ACTION_KIND_BLOCKDEV_BACKUP); + backup = common->action->blockdev_backup; + + bs = bdrv_find(backup->device); + if (!bs) { + error_set(errp, QERR_DEVICE_NOT_FOUND, backup->device); + return; + } + + target = bdrv_find(backup->target); + if (!target) { + error_set(errp, QERR_DEVICE_NOT_FOUND, backup->target); + return; + } + + /* AioContext is released in .clean() */ + state->aio_context = bdrv_get_aio_context(bs); + if (state->aio_context != bdrv_get_aio_context(target)) { + state->aio_context = NULL; + error_setg(errp, "Backup between two IO threads are not implemented"); + return; + } + aio_context_acquire(state->aio_context); + + qmp_blockdev_backup(backup->device, backup->target, + backup->sync, + backup->has_speed, backup->speed, + backup->has_on_source_error, backup->on_source_error, + backup->has_on_target_error, backup->on_target_error, + &local_err); + if (local_err) { + error_propagate(errp, local_err); + state->bs = NULL; + state->job = NULL; + return; + } + + state->bs = bdrv_find(backup->device); + state->job = state->bs->job; +} + +static void blockdev_backup_abort(BlkTransactionState *common) +{ + BlockdevBackupState *state = DO_UPCAST(BlockdevBackupState, common, common); + BlockDriverState *bs = state->bs; + + /* Only cancel if it's the job we started */ + if (bs && bs->job && bs->job == state->job) { + block_job_cancel_sync(bs->job); + } +} + +static void blockdev_backup_clean(BlkTransactionState *common) +{ + BlockdevBackupState *state = DO_UPCAST(BlockdevBackupState, common, common); + + if (state->aio_context) { + aio_context_release(state->aio_context); + } +} + static void abort_prepare(BlkTransactionState *common, Error **errp) { error_setg(errp, "Transaction aborted using Abort action"); @@ -1582,6 +1657,12 @@ static const BdrvActionOps actions[] = { .abort = drive_backup_abort, .clean = drive_backup_clean, }, + [TRANSACTION_ACTION_KIND_BLOCKDEV_BACKUP] = { + .instance_size = sizeof(BlockdevBackupState), + .prepare = blockdev_backup_prepare, + .abort = blockdev_backup_abort, + .clean = blockdev_backup_clean, + }, 
[TRANSACTION_ACTION_KIND_ABORT] = { .instance_size = sizeof(BlkTransactionState), .prepare = abort_prepare, diff --git a/qapi-schema.json b/qapi-schema.json index 9ffdcf8..411d287 100644 --- a/qapi-schema.json +++ b/qapi-schema.json @@ -1254,11 +1254,18 @@ # # A discriminated record of operations that can be performed with # @transaction. +# +# Since 1.1 +# drive-backup since 1.6 +# abort since 1.6 +# blockdev-snapshot-internal-sync since 1.7 +# blockdev-backup since 2.3 ## { 'union': 'TransactionAction', 'data': { 'blockdev-snapshot-sync': 'BlockdevSnapshot', 'drive-backup': 'DriveBackup', + 'blockdev-backup': 'BlockdevBackup', 'abort': 'Abort', 'blockdev-snapshot-internal-sync': 'BlockdevSnapshotInternal' } } -- 1.9.3 ^ permalink raw reply related [flat|nested] 13+ messages in thread
* Re: [Qemu-devel] [PATCH v4 2/3] block: Add blockdev-backup to transaction 2014-12-04 2:29 ` [Qemu-devel] [PATCH v4 2/3] block: Add blockdev-backup to transaction Fam Zheng @ 2014-12-04 13:59 ` Max Reitz 2014-12-05 6:37 ` Fam Zheng 0 siblings, 1 reply; 13+ messages in thread From: Max Reitz @ 2014-12-04 13:59 UTC (permalink / raw) To: Fam Zheng, qemu-devel; +Cc: Kevin Wolf, Markus Armbruster, Stefan Hajnoczi On 2014-12-04 at 03:29, Fam Zheng wrote: > Also add version info for other transaction types. > > Signed-off-by: Fam Zheng <famz@redhat.com> > --- > blockdev.c | 81 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++ > qapi-schema.json | 7 +++++ > 2 files changed, 88 insertions(+) > > diff --git a/blockdev.c b/blockdev.c > index f44441a..a98a4f8 100644 > --- a/blockdev.c > +++ b/blockdev.c > @@ -1559,6 +1559,81 @@ static void drive_backup_clean(BlkTransactionState *common) > } > } > > +typedef struct BlockdevBackupState { > + BlkTransactionState common; > + BlockDriverState *bs; > + BlockJob *job; > + AioContext *aio_context; > +} BlockdevBackupState; > + > +static void blockdev_backup_prepare(BlkTransactionState *common, Error **errp) > +{ > + BlockdevBackupState *state = DO_UPCAST(BlockdevBackupState, common, common); > + BlockdevBackup *backup; > + BlockDriverState *bs, *target; > + Error *local_err = NULL; > + > + assert(common->action->kind == TRANSACTION_ACTION_KIND_BLOCKDEV_BACKUP); > + backup = common->action->blockdev_backup; > + > + bs = bdrv_find(backup->device); > + if (!bs) { > + error_set(errp, QERR_DEVICE_NOT_FOUND, backup->device); > + return; > + } > + > + target = bdrv_find(backup->target); > + if (!target) { > + error_set(errp, QERR_DEVICE_NOT_FOUND, backup->target); > + return; > + } > + > + /* AioContext is released in .clean() */ > + state->aio_context = bdrv_get_aio_context(bs); > + if (state->aio_context != bdrv_get_aio_context(target)) { > + state->aio_context = NULL; > + error_setg(errp, "Backup between two IO threads are not implemented"); Either *Backups ore s/are/is/. > + return; > + } > + aio_context_acquire(state->aio_context); > + > + qmp_blockdev_backup(backup->device, backup->target, > + backup->sync, > + backup->has_speed, backup->speed, > + backup->has_on_source_error, backup->on_source_error, > + backup->has_on_target_error, backup->on_target_error, > + &local_err); > + if (local_err) { > + error_propagate(errp, local_err); > + state->bs = NULL; > + state->job = NULL; No need for these assignments, state is 0-initialized. I wouldn't point that out if Stefan wouldn't just have sent a patch which removed such assignments in some other transaction preparation. 
> + return; > + } > + > + state->bs = bdrv_find(backup->device); > + state->job = state->bs->job; > +} > + > +static void blockdev_backup_abort(BlkTransactionState *common) > +{ > + BlockdevBackupState *state = DO_UPCAST(BlockdevBackupState, common, common); > + BlockDriverState *bs = state->bs; > + > + /* Only cancel if it's the job we started */ > + if (bs && bs->job && bs->job == state->job) { > + block_job_cancel_sync(bs->job); > + } > +} > + > +static void blockdev_backup_clean(BlkTransactionState *common) > +{ > + BlockdevBackupState *state = DO_UPCAST(BlockdevBackupState, common, common); > + > + if (state->aio_context) { > + aio_context_release(state->aio_context); > + } > +} > + > static void abort_prepare(BlkTransactionState *common, Error **errp) > { > error_setg(errp, "Transaction aborted using Abort action"); > @@ -1582,6 +1657,12 @@ static const BdrvActionOps actions[] = { > .abort = drive_backup_abort, > .clean = drive_backup_clean, > }, > + [TRANSACTION_ACTION_KIND_BLOCKDEV_BACKUP] = { > + .instance_size = sizeof(BlockdevBackupState), > + .prepare = blockdev_backup_prepare, > + .abort = blockdev_backup_abort, > + .clean = blockdev_backup_clean, > + }, > [TRANSACTION_ACTION_KIND_ABORT] = { > .instance_size = sizeof(BlkTransactionState), > .prepare = abort_prepare, > diff --git a/qapi-schema.json b/qapi-schema.json > index 9ffdcf8..411d287 100644 > --- a/qapi-schema.json > +++ b/qapi-schema.json > @@ -1254,11 +1254,18 @@ > # > # A discriminated record of operations that can be performed with > # @transaction. > +# > +# Since 1.1 > +# drive-backup since 1.6 > +# abort since 1.6 > +# blockdev-snapshot-internal-sync since 1.7 > +# blockdev-backup since 2.3 This seems a bit hard to read... Maybe an empty line after the "Since 1.1" would help, but I'm not sure... > ## > { 'union': 'TransactionAction', > 'data': { > 'blockdev-snapshot-sync': 'BlockdevSnapshot', > 'drive-backup': 'DriveBackup', > + 'blockdev-backup': 'BlockdevBackup', > 'abort': 'Abort', > 'blockdev-snapshot-internal-sync': 'BlockdevSnapshotInternal' > } } So, about this patch in general: I know drive-backup works nearly the same way. It starts block job in prepare(), which is aborted in abort(). But it seems a bit like cheating to me. For me, a transaction is something which you can start and if any of the operations cannot be executed (because its preparation failed), all are aborted (that is, not even started). The commit() part will really do the operation, and that will never fail because prepare() has made sure it will not. This isn't the case when starting block jobs in prepare(). If some other operation's prepare() fails, some data may have been copied already, so you can't really roll back the operation. It isn't so bad for drive-backup, because you're normally writing to a new image, so having overwritten that image by a bit isn't so bad. But with blockdev-backup, you're overwriting a block device inside qemu, so overwriting it partially may actually be bad (or even overwriting it completely; if a transaction fails, I'd expect the operations to have done nothing at all). I understand that drive-backup does the same thing already (although I don't deem the impact there a bit less), so it may just be that my notion of transactions is wrong. However, in this state, where is the advantage of making this an operation usable in a transaction? I can just start a number of blockdev-backup block jobs manually and cancel them if anything goes wrong. 
There's no difference, so I don't see the benefit of making it a transaction operation if a transaction does not actually mean "Do not do anything if we don't know for sure that all operations will complete successfully." Max ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [Qemu-devel] [PATCH v4 2/3] block: Add blockdev-backup to transaction 2014-12-04 13:59 ` Max Reitz @ 2014-12-05 6:37 ` Fam Zheng 2014-12-05 9:24 ` Max Reitz 0 siblings, 1 reply; 13+ messages in thread From: Fam Zheng @ 2014-12-05 6:37 UTC (permalink / raw) To: Max Reitz; +Cc: Kevin Wolf, qemu-devel, Stefan Hajnoczi, Markus Armbruster On Thu, 12/04 14:59, Max Reitz wrote: > On 2014-12-04 at 03:29, Fam Zheng wrote: > >Also add version info for other transaction types. > > > >Signed-off-by: Fam Zheng <famz@redhat.com> > >--- > > blockdev.c | 81 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++ > > qapi-schema.json | 7 +++++ > > 2 files changed, 88 insertions(+) > > > >diff --git a/blockdev.c b/blockdev.c > >index f44441a..a98a4f8 100644 > >--- a/blockdev.c > >+++ b/blockdev.c > >@@ -1559,6 +1559,81 @@ static void drive_backup_clean(BlkTransactionState *common) > > } > > } > >+typedef struct BlockdevBackupState { > >+ BlkTransactionState common; > >+ BlockDriverState *bs; > >+ BlockJob *job; > >+ AioContext *aio_context; > >+} BlockdevBackupState; > >+ > >+static void blockdev_backup_prepare(BlkTransactionState *common, Error **errp) > >+{ > >+ BlockdevBackupState *state = DO_UPCAST(BlockdevBackupState, common, common); > >+ BlockdevBackup *backup; > >+ BlockDriverState *bs, *target; > >+ Error *local_err = NULL; > >+ > >+ assert(common->action->kind == TRANSACTION_ACTION_KIND_BLOCKDEV_BACKUP); > >+ backup = common->action->blockdev_backup; > >+ > >+ bs = bdrv_find(backup->device); > >+ if (!bs) { > >+ error_set(errp, QERR_DEVICE_NOT_FOUND, backup->device); > >+ return; > >+ } > >+ > >+ target = bdrv_find(backup->target); > >+ if (!target) { > >+ error_set(errp, QERR_DEVICE_NOT_FOUND, backup->target); > >+ return; > >+ } > >+ > >+ /* AioContext is released in .clean() */ > >+ state->aio_context = bdrv_get_aio_context(bs); > >+ if (state->aio_context != bdrv_get_aio_context(target)) { > >+ state->aio_context = NULL; > >+ error_setg(errp, "Backup between two IO threads are not implemented"); > > Either *Backups ore s/are/is/. > > >+ return; > >+ } > >+ aio_context_acquire(state->aio_context); > >+ > >+ qmp_blockdev_backup(backup->device, backup->target, > >+ backup->sync, > >+ backup->has_speed, backup->speed, > >+ backup->has_on_source_error, backup->on_source_error, > >+ backup->has_on_target_error, backup->on_target_error, > >+ &local_err); > >+ if (local_err) { > >+ error_propagate(errp, local_err); > >+ state->bs = NULL; > >+ state->job = NULL; > > No need for these assignments, state is 0-initialized. I wouldn't point that > out if Stefan wouldn't just have sent a patch which removed such assignments > in some other transaction preparation. 
> > >+ return; > >+ } > >+ > >+ state->bs = bdrv_find(backup->device); > >+ state->job = state->bs->job; > >+} > >+ > >+static void blockdev_backup_abort(BlkTransactionState *common) > >+{ > >+ BlockdevBackupState *state = DO_UPCAST(BlockdevBackupState, common, common); > >+ BlockDriverState *bs = state->bs; > >+ > >+ /* Only cancel if it's the job we started */ > >+ if (bs && bs->job && bs->job == state->job) { > >+ block_job_cancel_sync(bs->job); > >+ } > >+} > >+ > >+static void blockdev_backup_clean(BlkTransactionState *common) > >+{ > >+ BlockdevBackupState *state = DO_UPCAST(BlockdevBackupState, common, common); > >+ > >+ if (state->aio_context) { > >+ aio_context_release(state->aio_context); > >+ } > >+} > >+ > > static void abort_prepare(BlkTransactionState *common, Error **errp) > > { > > error_setg(errp, "Transaction aborted using Abort action"); > >@@ -1582,6 +1657,12 @@ static const BdrvActionOps actions[] = { > > .abort = drive_backup_abort, > > .clean = drive_backup_clean, > > }, > >+ [TRANSACTION_ACTION_KIND_BLOCKDEV_BACKUP] = { > >+ .instance_size = sizeof(BlockdevBackupState), > >+ .prepare = blockdev_backup_prepare, > >+ .abort = blockdev_backup_abort, > >+ .clean = blockdev_backup_clean, > >+ }, > > [TRANSACTION_ACTION_KIND_ABORT] = { > > .instance_size = sizeof(BlkTransactionState), > > .prepare = abort_prepare, > >diff --git a/qapi-schema.json b/qapi-schema.json > >index 9ffdcf8..411d287 100644 > >--- a/qapi-schema.json > >+++ b/qapi-schema.json > >@@ -1254,11 +1254,18 @@ > > # > > # A discriminated record of operations that can be performed with > > # @transaction. > >+# > >+# Since 1.1 > >+# drive-backup since 1.6 > >+# abort since 1.6 > >+# blockdev-snapshot-internal-sync since 1.7 > >+# blockdev-backup since 2.3 > > This seems a bit hard to read... Maybe an empty line after the "Since 1.1" > would help, but I'm not sure... > > > ## > > { 'union': 'TransactionAction', > > 'data': { > > 'blockdev-snapshot-sync': 'BlockdevSnapshot', > > 'drive-backup': 'DriveBackup', > >+ 'blockdev-backup': 'BlockdevBackup', > > 'abort': 'Abort', > > 'blockdev-snapshot-internal-sync': 'BlockdevSnapshotInternal' > > } } > > So, about this patch in general: I know drive-backup works nearly the same > way. It starts block job in prepare(), which is aborted in abort(). But it > seems a bit like cheating to me. For me, a transaction is something which > you can start and if any of the operations cannot be executed (because its > preparation failed), all are aborted (that is, not even started). The > commit() part will really do the operation, and that will never fail because > prepare() has made sure it will not. > > This isn't the case when starting block jobs in prepare(). If some other > operation's prepare() fails, some data may have been copied already, so you > can't really roll back the operation. It isn't so bad for drive-backup, > because you're normally writing to a new image, so having overwritten that > image by a bit isn't so bad. But with blockdev-backup, you're overwriting a > block device inside qemu, so overwriting it partially may actually be bad > (or even overwriting it completely; if a transaction fails, I'd expect the > operations to have done nothing at all). > > I understand that drive-backup does the same thing already (although I don't > deem the impact there a bit less), so it may just be that my notion of > transactions is wrong. > > However, in this state, where is the advantage of making this an operation > usable in a transaction? 
I can just start a number of blockdev-backup block > jobs manually and cancel them if anything goes wrong. There's no difference, > so I don't see the benefit of making it a transaction operation if a > transaction does not actually mean "Do not do anything if we don't know for > sure that all operations will complete successfully." It's still different. When you start a number of drive-backup or blockdev-backup in a transaction, all the disks' data are from a single point of time. That is important for backup use cases. When you start jobs manually, the guest may already changed some data, so the disks are not consistent. Think of Linux guest dm soft raid use case. Fam ^ permalink raw reply [flat|nested] 13+ messages in thread
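To make the consistency argument concrete: with the transaction support added in this patch, a guest-side software-RAID pair could be backed up from a single point in time roughly as follows (device and target ids are invented for the example):

-> { "execute": "transaction",
     "arguments": { "actions": [
         { "type": "blockdev-backup",
           "data": { "device": "raid-disk0", "target": "backup-disk0",
                     "sync": "full" } },
         { "type": "blockdev-backup",
           "data": { "device": "raid-disk1", "target": "backup-disk1",
                     "sync": "full" } } ] } }
<- { "return": {} }

Both jobs are started within one command, so no guest writes can land between the two start points, which is exactly what a sequence of separate blockdev-backup invocations cannot guarantee.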
* Re: [Qemu-devel] [PATCH v4 2/3] block: Add blockdev-backup to transaction 2014-12-05 6:37 ` Fam Zheng @ 2014-12-05 9:24 ` Max Reitz 2014-12-05 14:47 ` Max Reitz 0 siblings, 1 reply; 13+ messages in thread From: Max Reitz @ 2014-12-05 9:24 UTC (permalink / raw) To: Fam Zheng; +Cc: Kevin Wolf, qemu-devel, Stefan Hajnoczi, Markus Armbruster On 2014-12-05 at 07:37, Fam Zheng wrote: > On Thu, 12/04 14:59, Max Reitz wrote: >> On 2014-12-04 at 03:29, Fam Zheng wrote: >>> Also add version info for other transaction types. >>> >>> Signed-off-by: Fam Zheng <famz@redhat.com> >>> --- >>> blockdev.c | 81 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++ >>> qapi-schema.json | 7 +++++ >>> 2 files changed, 88 insertions(+) >>> >>> diff --git a/blockdev.c b/blockdev.c >>> index f44441a..a98a4f8 100644 >>> --- a/blockdev.c >>> +++ b/blockdev.c >>> @@ -1559,6 +1559,81 @@ static void drive_backup_clean(BlkTransactionState *common) >>> } >>> } >>> +typedef struct BlockdevBackupState { >>> + BlkTransactionState common; >>> + BlockDriverState *bs; >>> + BlockJob *job; >>> + AioContext *aio_context; >>> +} BlockdevBackupState; >>> + >>> +static void blockdev_backup_prepare(BlkTransactionState *common, Error **errp) >>> +{ >>> + BlockdevBackupState *state = DO_UPCAST(BlockdevBackupState, common, common); >>> + BlockdevBackup *backup; >>> + BlockDriverState *bs, *target; >>> + Error *local_err = NULL; >>> + >>> + assert(common->action->kind == TRANSACTION_ACTION_KIND_BLOCKDEV_BACKUP); >>> + backup = common->action->blockdev_backup; >>> + >>> + bs = bdrv_find(backup->device); >>> + if (!bs) { >>> + error_set(errp, QERR_DEVICE_NOT_FOUND, backup->device); >>> + return; >>> + } >>> + >>> + target = bdrv_find(backup->target); >>> + if (!target) { >>> + error_set(errp, QERR_DEVICE_NOT_FOUND, backup->target); >>> + return; >>> + } >>> + >>> + /* AioContext is released in .clean() */ >>> + state->aio_context = bdrv_get_aio_context(bs); >>> + if (state->aio_context != bdrv_get_aio_context(target)) { >>> + state->aio_context = NULL; >>> + error_setg(errp, "Backup between two IO threads are not implemented"); >> Either *Backups ore s/are/is/. >> >>> + return; >>> + } >>> + aio_context_acquire(state->aio_context); >>> + >>> + qmp_blockdev_backup(backup->device, backup->target, >>> + backup->sync, >>> + backup->has_speed, backup->speed, >>> + backup->has_on_source_error, backup->on_source_error, >>> + backup->has_on_target_error, backup->on_target_error, >>> + &local_err); >>> + if (local_err) { >>> + error_propagate(errp, local_err); >>> + state->bs = NULL; >>> + state->job = NULL; >> No need for these assignments, state is 0-initialized. I wouldn't point that >> out if Stefan wouldn't just have sent a patch which removed such assignments >> in some other transaction preparation. 
>> >>> + return; >>> + } >>> + >>> + state->bs = bdrv_find(backup->device); >>> + state->job = state->bs->job; >>> +} >>> + >>> +static void blockdev_backup_abort(BlkTransactionState *common) >>> +{ >>> + BlockdevBackupState *state = DO_UPCAST(BlockdevBackupState, common, common); >>> + BlockDriverState *bs = state->bs; >>> + >>> + /* Only cancel if it's the job we started */ >>> + if (bs && bs->job && bs->job == state->job) { >>> + block_job_cancel_sync(bs->job); >>> + } >>> +} >>> + >>> +static void blockdev_backup_clean(BlkTransactionState *common) >>> +{ >>> + BlockdevBackupState *state = DO_UPCAST(BlockdevBackupState, common, common); >>> + >>> + if (state->aio_context) { >>> + aio_context_release(state->aio_context); >>> + } >>> +} >>> + >>> static void abort_prepare(BlkTransactionState *common, Error **errp) >>> { >>> error_setg(errp, "Transaction aborted using Abort action"); >>> @@ -1582,6 +1657,12 @@ static const BdrvActionOps actions[] = { >>> .abort = drive_backup_abort, >>> .clean = drive_backup_clean, >>> }, >>> + [TRANSACTION_ACTION_KIND_BLOCKDEV_BACKUP] = { >>> + .instance_size = sizeof(BlockdevBackupState), >>> + .prepare = blockdev_backup_prepare, >>> + .abort = blockdev_backup_abort, >>> + .clean = blockdev_backup_clean, >>> + }, >>> [TRANSACTION_ACTION_KIND_ABORT] = { >>> .instance_size = sizeof(BlkTransactionState), >>> .prepare = abort_prepare, >>> diff --git a/qapi-schema.json b/qapi-schema.json >>> index 9ffdcf8..411d287 100644 >>> --- a/qapi-schema.json >>> +++ b/qapi-schema.json >>> @@ -1254,11 +1254,18 @@ >>> # >>> # A discriminated record of operations that can be performed with >>> # @transaction. >>> +# >>> +# Since 1.1 >>> +# drive-backup since 1.6 >>> +# abort since 1.6 >>> +# blockdev-snapshot-internal-sync since 1.7 >>> +# blockdev-backup since 2.3 >> This seems a bit hard to read... Maybe an empty line after the "Since 1.1" >> would help, but I'm not sure... >> >>> ## >>> { 'union': 'TransactionAction', >>> 'data': { >>> 'blockdev-snapshot-sync': 'BlockdevSnapshot', >>> 'drive-backup': 'DriveBackup', >>> + 'blockdev-backup': 'BlockdevBackup', >>> 'abort': 'Abort', >>> 'blockdev-snapshot-internal-sync': 'BlockdevSnapshotInternal' >>> } } >> So, about this patch in general: I know drive-backup works nearly the same >> way. It starts block job in prepare(), which is aborted in abort(). But it >> seems a bit like cheating to me. For me, a transaction is something which >> you can start and if any of the operations cannot be executed (because its >> preparation failed), all are aborted (that is, not even started). The >> commit() part will really do the operation, and that will never fail because >> prepare() has made sure it will not. >> >> This isn't the case when starting block jobs in prepare(). If some other >> operation's prepare() fails, some data may have been copied already, so you >> can't really roll back the operation. It isn't so bad for drive-backup, >> because you're normally writing to a new image, so having overwritten that >> image by a bit isn't so bad. But with blockdev-backup, you're overwriting a >> block device inside qemu, so overwriting it partially may actually be bad >> (or even overwriting it completely; if a transaction fails, I'd expect the >> operations to have done nothing at all). >> >> I understand that drive-backup does the same thing already (although I don't >> deem the impact there a bit less), so it may just be that my notion of >> transactions is wrong. 
>> >> However, in this state, where is the advantage of making this an operation >> usable in a transaction? I can just start a number of blockdev-backup block >> jobs manually and cancel them if anything goes wrong. There's no difference, >> so I don't see the benefit of making it a transaction operation if a >> transaction does not actually mean "Do not do anything if we don't know for >> sure that all operations will complete successfully." > It's still different. When you start a number of drive-backup or > blockdev-backup in a transaction, all the disks' data are from a single point > of time. That is important for backup use cases. > > When you start jobs manually, the guest may already changed some data, so the > disks are not consistent. Think of Linux guest dm soft raid use case. Well, I don't know how bad a stop-continue wrapper around starting the block jobs would actually be (technically the VM is stopped during the transaction preparation as well), but fine. I still wouldn't call this a transaction but rather a group operation. Stefan defined transactions to be "atomic group operations". drive-backup is not and blockdev-backup will not be atomic. But we already made the mistake with drive-backup so we can not really fix it (if it is a mistake at all, of course, which I'm assuming it is but I may be very wrong). So, assuming it is a mistake (disregard all of the following if transactions are not intended to be completely atomic): To try to mitigate damage I'd vote for introducing GroupOperations in contrast to Transactions (transactions are atomic, group operations are not). Non-atomic group operations will not have a commit() and probably also no abort() or clean(). They will just be started and that's it. We could then for example just allow any block job to be started in a group operation without having to write specialized code for all of them. We would then have to call the fact that drive-backup is available for transactions just a quirky slip. Not good, but that's all we can do about it now. Alternatively, we can make blockdev-backup a transaction and then explicitly state for drive-backup and blockdev-backup that those transactions are not really atomic. We'd have to describe exactly what happens when they are aborted (which is that a potentially already started block job will be aborted, but it is undefined how much of the block job has been done). I think it's worse than introducing GroupOperation and making the drive-backup transaction a deprecated legacy slip, but of course it'd be much easier. So if we decide to go for GroupOperation, I wouldn't do it in this series (which means that this patch would have to wait for later). If we don't, this patch can stay and the documentation (what happens on abort) may follow later. Max ^ permalink raw reply [flat|nested] 13+ messages in thread
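For comparison, the stop/cont wrapper mentioned above would amount to roughly this sequence of individual commands (same invented ids as before); if either job fails to start, any job already started would have to be cleaned up by hand with block-job-cancel:

-> { "execute": "stop" }
<- { "return": {} }
-> { "execute": "blockdev-backup",
     "arguments": { "device": "raid-disk0", "target": "backup-disk0",
                    "sync": "full" } }
<- { "return": {} }
-> { "execute": "blockdev-backup",
     "arguments": { "device": "raid-disk1", "target": "backup-disk1",
                    "sync": "full" } }
<- { "return": {} }
-> { "execute": "cont" }
<- { "return": {} }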
* Re: [Qemu-devel] [PATCH v4 2/3] block: Add blockdev-backup to transaction 2014-12-05 9:24 ` Max Reitz @ 2014-12-05 14:47 ` Max Reitz 0 siblings, 0 replies; 13+ messages in thread From: Max Reitz @ 2014-12-05 14:47 UTC (permalink / raw) To: Fam Zheng; +Cc: Kevin Wolf, qemu-devel, Stefan Hajnoczi, Markus Armbruster On 2014-12-05 at 10:24, Max Reitz wrote: > On 2014-12-05 at 07:37, Fam Zheng wrote: >> On Thu, 12/04 14:59, Max Reitz wrote: >>> On 2014-12-04 at 03:29, Fam Zheng wrote: >>>> Also add version info for other transaction types. >>>> >>>> Signed-off-by: Fam Zheng <famz@redhat.com> >>>> --- >>>> blockdev.c | 81 >>>> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++ >>>> qapi-schema.json | 7 +++++ >>>> 2 files changed, 88 insertions(+) >>>> >>>> diff --git a/blockdev.c b/blockdev.c >>>> index f44441a..a98a4f8 100644 >>>> --- a/blockdev.c >>>> +++ b/blockdev.c >>>> @@ -1559,6 +1559,81 @@ static void >>>> drive_backup_clean(BlkTransactionState *common) >>>> } >>>> } >>>> +typedef struct BlockdevBackupState { >>>> + BlkTransactionState common; >>>> + BlockDriverState *bs; >>>> + BlockJob *job; >>>> + AioContext *aio_context; >>>> +} BlockdevBackupState; >>>> + >>>> +static void blockdev_backup_prepare(BlkTransactionState *common, >>>> Error **errp) >>>> +{ >>>> + BlockdevBackupState *state = DO_UPCAST(BlockdevBackupState, >>>> common, common); >>>> + BlockdevBackup *backup; >>>> + BlockDriverState *bs, *target; >>>> + Error *local_err = NULL; >>>> + >>>> + assert(common->action->kind == >>>> TRANSACTION_ACTION_KIND_BLOCKDEV_BACKUP); >>>> + backup = common->action->blockdev_backup; >>>> + >>>> + bs = bdrv_find(backup->device); >>>> + if (!bs) { >>>> + error_set(errp, QERR_DEVICE_NOT_FOUND, backup->device); >>>> + return; >>>> + } >>>> + >>>> + target = bdrv_find(backup->target); >>>> + if (!target) { >>>> + error_set(errp, QERR_DEVICE_NOT_FOUND, backup->target); >>>> + return; >>>> + } >>>> + >>>> + /* AioContext is released in .clean() */ >>>> + state->aio_context = bdrv_get_aio_context(bs); >>>> + if (state->aio_context != bdrv_get_aio_context(target)) { >>>> + state->aio_context = NULL; >>>> + error_setg(errp, "Backup between two IO threads are not >>>> implemented"); >>> Either *Backups ore s/are/is/. >>> >>>> + return; >>>> + } >>>> + aio_context_acquire(state->aio_context); >>>> + >>>> + qmp_blockdev_backup(backup->device, backup->target, >>>> + backup->sync, >>>> + backup->has_speed, backup->speed, >>>> + backup->has_on_source_error, >>>> backup->on_source_error, >>>> + backup->has_on_target_error, >>>> backup->on_target_error, >>>> + &local_err); >>>> + if (local_err) { >>>> + error_propagate(errp, local_err); >>>> + state->bs = NULL; >>>> + state->job = NULL; >>> No need for these assignments, state is 0-initialized. I wouldn't >>> point that >>> out if Stefan wouldn't just have sent a patch which removed such >>> assignments >>> in some other transaction preparation. 
>>> >>>> + return; >>>> + } >>>> + >>>> + state->bs = bdrv_find(backup->device); >>>> + state->job = state->bs->job; >>>> +} >>>> + >>>> +static void blockdev_backup_abort(BlkTransactionState *common) >>>> +{ >>>> + BlockdevBackupState *state = DO_UPCAST(BlockdevBackupState, >>>> common, common); >>>> + BlockDriverState *bs = state->bs; >>>> + >>>> + /* Only cancel if it's the job we started */ >>>> + if (bs && bs->job && bs->job == state->job) { >>>> + block_job_cancel_sync(bs->job); >>>> + } >>>> +} >>>> + >>>> +static void blockdev_backup_clean(BlkTransactionState *common) >>>> +{ >>>> + BlockdevBackupState *state = DO_UPCAST(BlockdevBackupState, >>>> common, common); >>>> + >>>> + if (state->aio_context) { >>>> + aio_context_release(state->aio_context); >>>> + } >>>> +} >>>> + >>>> static void abort_prepare(BlkTransactionState *common, Error **errp) >>>> { >>>> error_setg(errp, "Transaction aborted using Abort action"); >>>> @@ -1582,6 +1657,12 @@ static const BdrvActionOps actions[] = { >>>> .abort = drive_backup_abort, >>>> .clean = drive_backup_clean, >>>> }, >>>> + [TRANSACTION_ACTION_KIND_BLOCKDEV_BACKUP] = { >>>> + .instance_size = sizeof(BlockdevBackupState), >>>> + .prepare = blockdev_backup_prepare, >>>> + .abort = blockdev_backup_abort, >>>> + .clean = blockdev_backup_clean, >>>> + }, >>>> [TRANSACTION_ACTION_KIND_ABORT] = { >>>> .instance_size = sizeof(BlkTransactionState), >>>> .prepare = abort_prepare, >>>> diff --git a/qapi-schema.json b/qapi-schema.json >>>> index 9ffdcf8..411d287 100644 >>>> --- a/qapi-schema.json >>>> +++ b/qapi-schema.json >>>> @@ -1254,11 +1254,18 @@ >>>> # >>>> # A discriminated record of operations that can be performed with >>>> # @transaction. >>>> +# >>>> +# Since 1.1 >>>> +# drive-backup since 1.6 >>>> +# abort since 1.6 >>>> +# blockdev-snapshot-internal-sync since 1.7 >>>> +# blockdev-backup since 2.3 >>> This seems a bit hard to read... Maybe an empty line after the >>> "Since 1.1" >>> would help, but I'm not sure... >>> >>>> ## >>>> { 'union': 'TransactionAction', >>>> 'data': { >>>> 'blockdev-snapshot-sync': 'BlockdevSnapshot', >>>> 'drive-backup': 'DriveBackup', >>>> + 'blockdev-backup': 'BlockdevBackup', >>>> 'abort': 'Abort', >>>> 'blockdev-snapshot-internal-sync': 'BlockdevSnapshotInternal' >>>> } } >>> So, about this patch in general: I know drive-backup works nearly >>> the same >>> way. It starts block job in prepare(), which is aborted in abort(). >>> But it >>> seems a bit like cheating to me. For me, a transaction is something >>> which >>> you can start and if any of the operations cannot be executed >>> (because its >>> preparation failed), all are aborted (that is, not even started). The >>> commit() part will really do the operation, and that will never fail >>> because >>> prepare() has made sure it will not. >>> >>> This isn't the case when starting block jobs in prepare(). If some >>> other >>> operation's prepare() fails, some data may have been copied already, >>> so you >>> can't really roll back the operation. It isn't so bad for drive-backup, >>> because you're normally writing to a new image, so having >>> overwritten that >>> image by a bit isn't so bad. But with blockdev-backup, you're >>> overwriting a >>> block device inside qemu, so overwriting it partially may actually >>> be bad >>> (or even overwriting it completely; if a transaction fails, I'd >>> expect the >>> operations to have done nothing at all). 
>>> >>> I understand that drive-backup does the same thing already (although >>> I don't >>> deem the impact there a bit less), so it may just be that my notion of >>> transactions is wrong. >>> >>> However, in this state, where is the advantage of making this an >>> operation >>> usable in a transaction? I can just start a number of >>> blockdev-backup block >>> jobs manually and cancel them if anything goes wrong. There's no >>> difference, >>> so I don't see the benefit of making it a transaction operation if a >>> transaction does not actually mean "Do not do anything if we don't >>> know for >>> sure that all operations will complete successfully." >> It's still different. When you start a number of drive-backup or >> blockdev-backup in a transaction, all the disks' data are from a >> single point >> of time. That is important for backup use cases. >> >> When you start jobs manually, the guest may already changed some >> data, so the >> disks are not consistent. Think of Linux guest dm soft raid use case. > > Well, I don't know how bad a stop-continue wrapper around starting the > block jobs would actually be (technically the VM is stopped during the > transaction preparation as well), but fine. > > I still wouldn't call this a transaction but rather a group operation. > Stefan defined transactions to be "atomic group operations". > drive-backup is not and blockdev-backup will not be atomic. But we > already made the mistake with drive-backup so we can not really fix it > (if it is a mistake at all, of course, which I'm assuming it is but I > may be very wrong). > > So, assuming it is a mistake (disregard all of the following if > transactions are not intended to be completely atomic): > > To try to mitigate damage I'd vote for introducing GroupOperations in > contrast to Transactions (transactions are atomic, group operations > are not). Non-atomic group operations will not have a commit() and > probably also no abort() or clean(). They will just be started and > that's it. We could then for example just allow any block job to be > started in a group operation without having to write specialized code > for all of them. > > We would then have to call the fact that drive-backup is available for > transactions just a quirky slip. Not good, but that's all we can do > about it now. > > Alternatively, we can make blockdev-backup a transaction and then > explicitly state for drive-backup and blockdev-backup that those > transactions are not really atomic. We'd have to describe exactly what > happens when they are aborted (which is that a potentially already > started block job will be aborted, but it is undefined how much of the > block job has been done). I think it's worse than introducing > GroupOperation and making the drive-backup transaction a deprecated > legacy slip, but of course it'd be much easier. > > So if we decide to go for GroupOperation, I wouldn't do it in this > series (which means that this patch would have to wait for later). If > we don't, this patch can stay and the documentation (what happens on > abort) may follow later. I just discussed this issue with Kevin, and I was wrong. The faulty assumption was that the drive-backup and blockdev-backup operations signify the full block job; whereas they actually only start the block job. So the atomic operation in question is starting the block job. The question now is whether the block jobs can write data after they have been started and before they would be aborted, because aborting them would then be too late. 
qmp_drive_backup() creates the block job and indeed enters it; however, before anything is written, the job will always yield (and directly afterwards it checks once again whether it has been aborted). Therefore, no data will be written before qmp_transaction() returns, and aborting the block jobs will cancel them before anything has been written. Great!

The only change which will not be reverted is having created the target file. Not nice, but not critical either, and not much we can do about it anyway.

Of course, as patch 1 reuses a lot of code from qmp_drive_backup(), blockdev-backup will therefore be fine, too.

tl;dr: We don't need GroupOperations; making this a transaction is completely fine.

Because I only saw some minor issues (I'd really like to have the NULL assignments removed, though, because it may be confusing to see them in that error path but not in the one right before it):

Reviewed-by: Max Reitz <mreitz@redhat.com>

^ permalink raw reply [flat|nested] 13+ messages in thread
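In test-055 terms, the scenario described above (a job that is created in prepare() and then rolled back) corresponds to a case roughly like the following sketch, written in the style of the TestSingleTransaction class from patch 3/3 with 'drive0' as source and the attached 'drive1' as target; the method itself is illustrative, not part of the patch:

    def test_abort_after_started_job(self):
        # The Abort action makes the whole transaction fail. The
        # blockdev-backup job has already been created by its prepare()
        # callback at that point, but since it yields before its first
        # write, cancelling it in abort() happens before any data has
        # reached the target device 'drive1'.
        result = self.vm.qmp('transaction', actions=[
            { 'type': 'blockdev-backup',
              'data': { 'device': 'drive0',
                        'target': 'drive1',
                        'sync': 'full' } },
            { 'type': 'Abort',
              'data': {} },
        ])
        self.assert_qmp(result, 'error/class', 'GenericError')
        self.assert_no_active_block_jobs()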
* [Qemu-devel] [PATCH v4 3/3] qemu-iotests: Test blockdev-backup in 055 2014-12-04 2:29 [Qemu-devel] [PATCH v4 0/3] qmp: Add "blockdev-backup" Fam Zheng 2014-12-04 2:29 ` [Qemu-devel] [PATCH v4 1/3] qmp: Add command 'blockdev-backup' Fam Zheng 2014-12-04 2:29 ` [Qemu-devel] [PATCH v4 2/3] block: Add blockdev-backup to transaction Fam Zheng @ 2014-12-04 2:29 ` Fam Zheng 2014-12-04 14:21 ` Max Reitz 2 siblings, 1 reply; 13+ messages in thread From: Fam Zheng @ 2014-12-04 2:29 UTC (permalink / raw) To: qemu-devel; +Cc: Kevin Wolf, Markus Armbruster, Stefan Hajnoczi, mreitz This applies cases on drive-backup on blockdev-backup, except cases with target format and mode. Also add a case to check source == target. Signed-off-by: Fam Zheng <famz@redhat.com> --- tests/qemu-iotests/055 | 211 +++++++++++++++++++++++++++++++++++++-------- tests/qemu-iotests/055.out | 4 +- 2 files changed, 176 insertions(+), 39 deletions(-) diff --git a/tests/qemu-iotests/055 b/tests/qemu-iotests/055 index 0872444..e81d4d0 100755 --- a/tests/qemu-iotests/055 +++ b/tests/qemu-iotests/055 @@ -1,8 +1,8 @@ #!/usr/bin/env python # -# Tests for drive-backup +# Tests for drive-backup and blockdev-backup # -# Copyright (C) 2013 Red Hat, Inc. +# Copyright (C) 2013, 2014 Red Hat, Inc. # # Based on 041. # @@ -27,6 +27,7 @@ from iotests import qemu_img, qemu_io test_img = os.path.join(iotests.test_dir, 'test.img') target_img = os.path.join(iotests.test_dir, 'target.img') +blockdev_target_img = os.path.join(iotests.test_dir, 'blockdev-target.img') class TestSingleDrive(iotests.QMPTestCase): image_len = 64 * 1024 * 1024 # MB @@ -38,34 +39,41 @@ class TestSingleDrive(iotests.QMPTestCase): qemu_io('-f', iotests.imgfmt, '-c', 'write -P0xd5 1M 32k', test_img) qemu_io('-f', iotests.imgfmt, '-c', 'write -P0xdc 32M 124k', test_img) qemu_io('-f', iotests.imgfmt, '-c', 'write -P0xdc 67043328 64k', test_img) + qemu_img('create', '-f', iotests.imgfmt, blockdev_target_img, str(TestSingleDrive.image_len)) - self.vm = iotests.VM().add_drive(test_img) + self.vm = iotests.VM().add_drive(test_img).add_drive(blockdev_target_img) self.vm.launch() def tearDown(self): self.vm.shutdown() os.remove(test_img) + os.remove(blockdev_target_img) try: os.remove(target_img) except OSError: pass - def test_cancel(self): + def do_test_cancel(self, cmd, target): self.assert_no_active_block_jobs() - result = self.vm.qmp('drive-backup', device='drive0', - target=target_img, sync='full') + result = self.vm.qmp(cmd, device='drive0', target=target, sync='full') self.assert_qmp(result, 'return', {}) event = self.cancel_and_wait() self.assert_qmp(event, 'data/type', 'backup') - def test_pause(self): + def test_cancel_drive_backup(self): + self.do_test_cancel('drive-backup', target_img) + + def test_cancel_blockdev_backup(self): + self.do_test_cancel('blockdev-backup', 'drive1') + + def do_test_pause(self, cmd, target, image): self.assert_no_active_block_jobs() self.vm.pause_drive('drive0') - result = self.vm.qmp('drive-backup', device='drive0', - target=target_img, sync='full') + result = self.vm.qmp(cmd, device='drive0', + target=target, sync='full') self.assert_qmp(result, 'return', {}) result = self.vm.qmp('block-job-pause', device='drive0') @@ -86,14 +94,25 @@ class TestSingleDrive(iotests.QMPTestCase): self.wait_until_completed() self.vm.shutdown() - self.assertTrue(iotests.compare_images(test_img, target_img), + self.assertTrue(iotests.compare_images(test_img, image), 'target image does not match source after backup') + def test_pause_drive_backup(self): + 
self.do_test_pause('drive-backup', target_img, target_img) + + def test_pause_blockdev_backup(self): + self.do_test_pause('blockdev-backup', 'drive1', blockdev_target_img) + def test_medium_not_found(self): result = self.vm.qmp('drive-backup', device='ide1-cd0', target=target_img, sync='full') self.assert_qmp(result, 'error/class', 'GenericError') + def test_medium_not_found_blockdev_backup(self): + result = self.vm.qmp('blockdev-backup', device='ide1-cd0', + target='drive1', sync='full') + self.assert_qmp(result, 'error/class', 'GenericError') + def test_image_not_found(self): result = self.vm.qmp('drive-backup', device='drive0', target=target_img, sync='full', mode='existing') @@ -105,31 +124,53 @@ class TestSingleDrive(iotests.QMPTestCase): format='spaghetti-noodles') self.assert_qmp(result, 'error/class', 'GenericError') - def test_device_not_found(self): - result = self.vm.qmp('drive-backup', device='nonexistent', - target=target_img, sync='full') + def do_test_device_not_found(self, cmd, **args): + result = self.vm.qmp(cmd, **args) self.assert_qmp(result, 'error/class', 'DeviceNotFound') + def test_device_not_found(self): + self.do_test_device_not_found('drive-backup', device='nonexistent', + target=target_img, sync='full') + + self.do_test_device_not_found('blockdev-backup', device='nonexistent', + target='drive0', sync='full') + + self.do_test_device_not_found('blockdev-backup', device='drive0', + target='nonexistent', sync='full') + + self.do_test_device_not_found('blockdev-backup', device='nonexistent', + target='nonexistent', sync='full') + + def test_target_is_source(self): + result = self.vm.qmp('blockdev-backup', device='drive0', + target='drive0', sync='full') + self.assert_qmp(result, 'error/class', 'GenericError') + class TestSetSpeed(iotests.QMPTestCase): image_len = 80 * 1024 * 1024 # MB def setUp(self): qemu_img('create', '-f', iotests.imgfmt, test_img, str(TestSetSpeed.image_len)) qemu_io('-f', iotests.imgfmt, '-c', 'write -P1 0 512', test_img) - self.vm = iotests.VM().add_drive(test_img) + qemu_img('create', '-f', iotests.imgfmt, blockdev_target_img, str(TestSingleDrive.image_len)) + + self.vm = iotests.VM().add_drive(test_img).add_drive(blockdev_target_img) self.vm.launch() def tearDown(self): self.vm.shutdown() os.remove(test_img) - os.remove(target_img) + os.remove(blockdev_target_img) + try: + os.remove(target_img) + except OSError: + pass - def test_set_speed(self): + def do_test_set_speed(self, cmd, target): self.assert_no_active_block_jobs() self.vm.pause_drive('drive0') - result = self.vm.qmp('drive-backup', device='drive0', - target=target_img, sync='full') + result = self.vm.qmp(cmd, device='drive0', target=target, sync='full') self.assert_qmp(result, 'return', {}) # Default speed is 0 @@ -148,10 +189,10 @@ class TestSetSpeed(iotests.QMPTestCase): event = self.cancel_and_wait(resume=True) self.assert_qmp(event, 'data/type', 'backup') - # Check setting speed in drive-backup works + # Check setting speed option works self.vm.pause_drive('drive0') - result = self.vm.qmp('drive-backup', device='drive0', - target=target_img, sync='full', speed=4*1024*1024) + result = self.vm.qmp(cmd, device='drive0', + target=target, sync='full', speed=4*1024*1024) self.assert_qmp(result, 'return', {}) result = self.vm.qmp('query-block-jobs') @@ -161,18 +202,24 @@ class TestSetSpeed(iotests.QMPTestCase): event = self.cancel_and_wait(resume=True) self.assert_qmp(event, 'data/type', 'backup') - def test_set_speed_invalid(self): + def test_set_speed_drive_backup(self): + 
self.do_test_set_speed('drive-backup', target_img) + + def test_set_speed_blockdev_backup(self): + self.do_test_set_speed('blockdev-backup', 'drive1') + + def do_test_set_speed_invalid(self, cmd, target): self.assert_no_active_block_jobs() - result = self.vm.qmp('drive-backup', device='drive0', - target=target_img, sync='full', speed=-1) + result = self.vm.qmp(cmd, device='drive0', + target=target, sync='full', speed=-1) self.assert_qmp(result, 'error/class', 'GenericError') self.assert_no_active_block_jobs() self.vm.pause_drive('drive0') - result = self.vm.qmp('drive-backup', device='drive0', - target=target_img, sync='full') + result = self.vm.qmp(cmd, device='drive0', + target=target, sync='full') self.assert_qmp(result, 'return', {}) result = self.vm.qmp('block-job-set-speed', device='drive0', speed=-1) @@ -181,6 +228,12 @@ class TestSetSpeed(iotests.QMPTestCase): event = self.cancel_and_wait(resume=True) self.assert_qmp(event, 'data/type', 'backup') + def test_set_speed_invalid_drive_backup(self): + self.do_test_set_speed_invalid('drive-backup', target_img) + + def test_set_speed_invalid_blockdev_backup(self): + self.do_test_set_speed_invalid('blockdev-backup', 'drive1') + class TestSingleTransaction(iotests.QMPTestCase): image_len = 64 * 1024 * 1024 # MB @@ -190,41 +243,50 @@ class TestSingleTransaction(iotests.QMPTestCase): qemu_io('-f', iotests.imgfmt, '-c', 'write -P0xd5 1M 32k', test_img) qemu_io('-f', iotests.imgfmt, '-c', 'write -P0xdc 32M 124k', test_img) qemu_io('-f', iotests.imgfmt, '-c', 'write -P0xdc 67043328 64k', test_img) + qemu_img('create', '-f', iotests.imgfmt, blockdev_target_img, str(TestSingleDrive.image_len)) - self.vm = iotests.VM().add_drive(test_img) + self.vm = iotests.VM().add_drive(test_img).add_drive(blockdev_target_img) self.vm.launch() def tearDown(self): self.vm.shutdown() os.remove(test_img) + os.remove(blockdev_target_img) try: os.remove(target_img) except OSError: pass - def test_cancel(self): + def do_test_cancel(self, cmd, target): self.assert_no_active_block_jobs() result = self.vm.qmp('transaction', actions=[{ - 'type': 'drive-backup', + 'type': cmd, 'data': { 'device': 'drive0', - 'target': target_img, + 'target': target, 'sync': 'full' }, } ]) + self.assert_qmp(result, 'return', {}) event = self.cancel_and_wait() self.assert_qmp(event, 'data/type', 'backup') - def test_pause(self): + def test_cancel_drive_backup(self): + self.do_test_cancel('drive-backup', target_img) + + def test_cancel_blockdev_backup(self): + self.do_test_cancel('blockdev-backup', 'drive1') + + def do_test_pause(self, cmd, target, image): self.assert_no_active_block_jobs() self.vm.pause_drive('drive0') result = self.vm.qmp('transaction', actions=[{ - 'type': 'drive-backup', + 'type': cmd, 'data': { 'device': 'drive0', - 'target': target_img, + 'target': target, 'sync': 'full' }, } ]) @@ -248,19 +310,31 @@ class TestSingleTransaction(iotests.QMPTestCase): self.wait_until_completed() self.vm.shutdown() - self.assertTrue(iotests.compare_images(test_img, target_img), + self.assertTrue(iotests.compare_images(test_img, image), 'target image does not match source after backup') - def test_medium_not_found(self): + def test_pause_drive_backup(self): + self.do_test_pause('drive-backup', target_img, target_img) + + def test_pause_blockdev_backup(self): + self.do_test_pause('blockdev-backup', 'drive1', blockdev_target_img) + + def do_test_medium_not_found(self, cmd, target): result = self.vm.qmp('transaction', actions=[{ - 'type': 'drive-backup', + 'type': cmd, 'data': { 'device': 
'ide1-cd0', - 'target': target_img, + 'target': target, 'sync': 'full' }, } ]) self.assert_qmp(result, 'error/class', 'GenericError') + def test_medium_not_found_drive_backup(self): + self.do_test_medium_not_found('drive-backup', target_img) + + def test_medium_not_found_blockdev_backup(self): + self.do_test_medium_not_found('blockdev-backup', 'drive1') + def test_image_not_found(self): result = self.vm.qmp('transaction', actions=[{ 'type': 'drive-backup', @@ -283,6 +357,43 @@ class TestSingleTransaction(iotests.QMPTestCase): ]) self.assert_qmp(result, 'error/class', 'DeviceNotFound') + result = self.vm.qmp('transaction', actions=[{ + 'type': 'blockdev-backup', + 'data': { 'device': 'nonexistent', + 'target': 'drive1', + 'sync': 'full' }, + } + ]) + self.assert_qmp(result, 'error/class', 'DeviceNotFound') + + result = self.vm.qmp('transaction', actions=[{ + 'type': 'blockdev-backup', + 'data': { 'device': 'drive0', + 'target': 'nonexistent', + 'sync': 'full' }, + } + ]) + self.assert_qmp(result, 'error/class', 'DeviceNotFound') + + result = self.vm.qmp('transaction', actions=[{ + 'type': 'blockdev-backup', + 'data': { 'device': 'nonexistent', + 'target': 'nonexistent', + 'sync': 'full' }, + } + ]) + self.assert_qmp(result, 'error/class', 'DeviceNotFound') + + def test_target_is_source(self): + result = self.vm.qmp('transaction', actions=[{ + 'type': 'blockdev-backup', + 'data': { 'device': 'drive0', + 'target': 'drive0', + 'sync': 'full' }, + } + ]) + self.assert_qmp(result, 'error/class', 'GenericError') + def test_abort(self): result = self.vm.qmp('transaction', actions=[{ 'type': 'drive-backup', @@ -298,5 +409,31 @@ class TestSingleTransaction(iotests.QMPTestCase): self.assert_qmp(result, 'error/class', 'GenericError') self.assert_no_active_block_jobs() + result = self.vm.qmp('transaction', actions=[{ + 'type': 'blockdev-backup', + 'data': { 'device': 'nonexistent', + 'target': 'drive1', + 'sync': 'full' }, + }, { + 'type': 'Abort', + 'data': {}, + } + ]) + self.assert_qmp(result, 'error/class', 'GenericError') + self.assert_no_active_block_jobs() + + result = self.vm.qmp('transaction', actions=[{ + 'type': 'blockdev-backup', + 'data': { 'device': 'drive0', + 'target': 'nonexistent', + 'sync': 'full' }, + }, { + 'type': 'Abort', + 'data': {}, + } + ]) + self.assert_qmp(result, 'error/class', 'GenericError') + self.assert_no_active_block_jobs() + if __name__ == '__main__': iotests.main(supported_fmts=['raw', 'qcow2']) diff --git a/tests/qemu-iotests/055.out b/tests/qemu-iotests/055.out index 6323079..42314e9 100644 --- a/tests/qemu-iotests/055.out +++ b/tests/qemu-iotests/055.out @@ -1,5 +1,5 @@ -.............. +........................ ---------------------------------------------------------------------- -Ran 14 tests +Ran 24 tests OK -- 1.9.3 ^ permalink raw reply related [flat|nested] 13+ messages in thread
* Re: [Qemu-devel] [PATCH v4 3/3] qemu-iotests: Test blockdev-backup in 055 2014-12-04 2:29 ` [Qemu-devel] [PATCH v4 3/3] qemu-iotests: Test blockdev-backup in 055 Fam Zheng @ 2014-12-04 14:21 ` Max Reitz 0 siblings, 0 replies; 13+ messages in thread From: Max Reitz @ 2014-12-04 14:21 UTC (permalink / raw) To: Fam Zheng, qemu-devel; +Cc: Kevin Wolf, Markus Armbruster, Stefan Hajnoczi On 2014-12-04 at 03:29, Fam Zheng wrote: > This applies cases on drive-backup on blockdev-backup, except cases with > target format and mode. > > Also add a case to check source == target. > > Signed-off-by: Fam Zheng <famz@redhat.com> > --- > tests/qemu-iotests/055 | 211 +++++++++++++++++++++++++++++++++++++-------- > tests/qemu-iotests/055.out | 4 +- > 2 files changed, 176 insertions(+), 39 deletions(-) Reviewed-by: Max Reitz <mreitz@redhat.com> ^ permalink raw reply [flat|nested] 13+ messages in thread
Thread overview: 13+ messages
2014-12-04  2:29 [Qemu-devel] [PATCH v4 0/3] qmp: Add "blockdev-backup" Fam Zheng
2014-12-04  2:29 ` [Qemu-devel] [PATCH v4 1/3] qmp: Add command 'blockdev-backup' Fam Zheng
2014-12-04 13:43 ` Max Reitz
2014-12-05  6:12 ` Fam Zheng
2014-12-05  9:10 ` Max Reitz
2014-12-19  8:47 ` Markus Armbruster
2014-12-04  2:29 ` [Qemu-devel] [PATCH v4 2/3] block: Add blockdev-backup to transaction Fam Zheng
2014-12-04 13:59 ` Max Reitz
2014-12-05  6:37 ` Fam Zheng
2014-12-05  9:24 ` Max Reitz
2014-12-05 14:47 ` Max Reitz
2014-12-04  2:29 ` [Qemu-devel] [PATCH v4 3/3] qemu-iotests: Test blockdev-backup in 055 Fam Zheng
2014-12-04 14:21 ` Max Reitz