From: Sergio Lopez <slp@redhat.com>
To: Kevin Wolf <kwolf@redhat.com>
Cc: armbru@redhat.com, qemu-devel@nongnu.org, qemu-block@nongnu.org,
	Max Reitz <mreitz@redhat.com>
Subject: Re: [PATCH v3 0/8] blockdev: avoid acquiring AioContext lock twice at do_drive_backup and do_blockdev_backup
Date: Tue, 19 Nov 2019 12:35:32 +0100
Message-ID: <87blt8cd4r.fsf@redhat.com>
In-Reply-To: <20191119111840.GA5910@linux.fritz.box>

Kevin Wolf <kwolf@redhat.com> writes:

> Am 19.11.2019 um 11:54 hat Sergio Lopez geschrieben:
>> 
>> Max Reitz <mreitz@redhat.com> writes:
>> 
>> > On 13.11.19 14:24, Sergio Lopez wrote:
>> >> 
>> >> Sergio Lopez <slp@redhat.com> writes:
>> >> 
>> >>> no-reply@patchew.org writes:
>> >>>
>> >>>> Patchew URL: https://patchew.org/QEMU/20191112113012.71136-1-slp@redhat.com/
>> >>>>
>> >>>>
>> >>>>
>> >>>> Hi,
>> >>>>
>> >>>> This series failed the docker-quick@centos7 build test. Please find the testing commands and
>> >>>> their output below. If you have Docker installed, you can probably reproduce it
>> >>>> locally.
>> >>>>
>> >>>> === TEST SCRIPT BEGIN ===
>> >>>> #!/bin/bash
>> >>>> make docker-image-centos7 V=1 NETWORK=1
>> >>>> time make docker-test-quick@centos7 SHOW_ENV=1 J=14 NETWORK=1
>> >>>> === TEST SCRIPT END ===
>> >>>>
>> >>>>   TEST    iotest-qcow2: 268
>> >>>> Failures: 141
>> >>>
>> >>> Hm... 141 didn't fail on my test machine. I'm going to have a look.
>> >> 
>> >> So here's the output:
>> >> 
>> >> --- /root/qemu/tests/qemu-iotests/141.out	2019-11-12 04:43:27.651557587 -0500
>> >> +++ /root/qemu/build/tests/qemu-iotests/141.out.bad	2019-11-13 08:12:06.575967337 -0500
>> >> @@ -10,6 +10,8 @@
>> >>  Formatting 'TEST_DIR/o.IMGFMT', fmt=IMGFMT size=1048576 backing_file=TEST_DIR/t.IMGFMT backing_fmt=IMGFMT
>> >>  {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "job0"}}
>> >>  {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "job0"}}
>> >> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "paused", "id": "job0"}}
>> >> +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "job0"}}
>> >>  {"error": {"class": "GenericError", "desc": "Node 'drv0' is busy: node is used as backing hd of 'NODE_NAME'"}}
>> >>  {"return": {}}
>> >>  {"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "job0"}}
>> >> 
>> >> Those extra lines, the "paused" and "running", are a result of the job
>> >> now being created through a transaction, within a drained section.
>> >> 
>> >> We can update 141.out, but now I'm wondering: was it safe to create the
>> >> job at do_drive_backup() outside of a drained section, as
>> >> qmp_drive_backup used to do?
>> >
>> > I think it is.  Someone needs to drain the source node before attaching
>> > the job filter (which intercepts writes), and bdrv_backup_top_append()
>> > does precisely this.
>> >
>> > If the source node is in an I/O thread, you could argue that the drain
>> > starts later than when the user has invoked the backup command, and so
>> > some writes might slip through.  That’s correct.  But at the same time,
>> > it’s impossible to drain it the instant the command is received.  So
>> > some writes might always slip through (and the drain will not stop them
>> > either, it will just let them happen).
>> >
>> > Therefore, I think it’s fine the way it is.
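
A rough sketch of the pattern Max describes, with toy stand-ins for
bdrv_drained_begin()/bdrv_drained_end() and the backup-top insertion
(this is not QEMU's actual API, just the drain-before-intercept idea):

    /* toy_drain.c: writes that reach the node before
     * attach_backup_filter() runs are the ones that can slip through;
     * once the drained section starts, new requests are held back
     * until it ends. */
    #include <stdio.h>
    #include <stdbool.h>

    typedef struct Node {
        const char *name;
        bool quiesced;   /* models an active drained section */
        bool filtered;   /* models backup-top sitting on top of the node */
    } Node;

    static void drained_begin(Node *bs)
    {
        bs->quiesced = true;
        printf("%s: in-flight requests settled, new ones held back\n",
               bs->name);
    }

    static void drained_end(Node *bs)
    {
        bs->quiesced = false;
        printf("%s: requests resumed\n", bs->name);
    }

    static void attach_backup_filter(Node *source)
    {
        drained_begin(source);    /* nothing can sneak past while...    */
        source->filtered = true;  /* ...the intercepting filter goes in */
        drained_end(source);
    }

    int main(void)
    {
        Node drv0 = { .name = "drv0" };
        attach_backup_filter(&drv0);
        printf("%s: writes are now intercepted\n", drv0.name);
        return 0;
    }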
>> >
>> >> Do you think there may be any potential drawbacks as a result of always
>> >> doing it now inside a drained section?
>> >
>> > Well, one drawback is clearly visible.  The job goes to paused for no
>> > reason.
>> 
>> This is something that already happens when requesting the drive-backup
>> through a transaction:
>> 
>> {"execute":"transaction","arguments":{"actions":[{"type":"drive-backup","data":{"device":"drv0","target":"o.qcow2","sync":"full","format":"qcow2"}}]}}
>> 
>> I don't think it makes sense to have two different behaviors for the
>> same action. So we either accept the additional pause+resume iteration
>> for qmp_drive_backup, or we remove the drained section from the
>> transaction-based one.
>> 
>> What do you think?
>
> Draining all involved nodes is necessary for transactions, because you
> want a consistent backup across all involved disks. That is, you want it
> to be a snapshot taken at the same point in time for all of them - no
> requests may happen between starting the backup on the first disk and
> starting it on the second.
>
> For a single device operation, this requirement doesn't exist, because
> there is nothing else that must happen at the same point in time.
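
To make that difference concrete, a toy ordering sketch (invented names,
not the real transaction code): in the transaction case every node is
quiesced before any job starts, so no disk can advance past the common
snapshot point while the others are still being set up.

    /* toy_transaction.c */
    #include <stdio.h>
    #include <stdbool.h>

    typedef struct Node { const char *name; bool quiesced; } Node;

    static void drained_begin(Node *bs)
    {
        bs->quiesced = true;
        printf("%s: drained\n", bs->name);
    }

    static void drained_end(Node *bs)
    {
        bs->quiesced = false;
        printf("%s: resumed\n", bs->name);
    }

    static void start_backup_job(Node *bs)
    {
        printf("%s: backup job started\n", bs->name);
    }

    /* Transaction: quiesce everything first, then start each job, so
     * all backups see the exact same point in time. */
    static void transaction_backup(Node **nodes, int n)
    {
        for (int i = 0; i < n; i++) drained_begin(nodes[i]);
        for (int i = 0; i < n; i++) start_backup_job(nodes[i]);
        for (int i = 0; i < n; i++) drained_end(nodes[i]);
    }

    /* Single-device command: no cross-device consistency to protect,
     * so a drained section around job creation is not required here. */
    static void single_backup(Node *bs)
    {
        start_backup_job(bs);
    }

    int main(void)
    {
        Node a = { .name = "drv0" };
        Node b = { .name = "drv1" };
        Node *both[] = { &a, &b };
        transaction_backup(both, 2);
        single_backup(&a);
        return 0;
    }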

This poses a problem for the unification strategy you suggested for QMP
commands and transactions. I guess that, if we really want to preserve
the original behavior, we could extend DriveBackup with a flag indicating
whether the transaction should create a drained section or not.
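
Something along these lines, perhaps (the flag and the function body
below are made up for illustration; the real DriveBackup QAPI type has
no such field today, and the name only echoes drive_backup_prepare()
from the series):

    /* toy_flag.c: a hypothetical per-action flag decides whether the
     * prepare step opens a drained section around job creation. */
    #include <stdio.h>
    #include <stdbool.h>

    typedef struct DriveBackupAction {
        const char *device;
        bool drained;   /* hypothetical flag, not in the real schema */
    } DriveBackupAction;

    static void drive_backup_prepare(const DriveBackupAction *action)
    {
        if (action->drained) {
            printf("%s: begin drained section\n", action->device);
        }
        printf("%s: create backup job\n", action->device);
        if (action->drained) {
            printf("%s: end drained section\n", action->device);
        }
    }

    int main(void)
    {
        DriveBackupAction via_transaction = { .device = "drv0",
                                              .drained = true };
        DriveBackupAction via_plain_qmp   = { .device = "drv0",
                                              .drained = false };
        drive_backup_prepare(&via_transaction); /* keeps the drain     */
        drive_backup_prepare(&via_plain_qmp);   /* skips it, as before */
        return 0;
    }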

Does this sound reasonable to you?

Thanks,
Sergio.
