From mboxrd@z Thu Jan  1 00:00:00 1970
Received: from eggs.gnu.org ([2001:4830:134:3::10]:38927)
	by lists.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1a3hHj-0007Wd-Pu for qemu-devel@nongnu.org;
	Tue, 01 Dec 2015 04:31:48 -0500
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71)
	(envelope-from ) id 1a3hHh-0004wr-Gm for qemu-devel@nongnu.org;
	Tue, 01 Dec 2015 04:31:43 -0500
Date: Tue, 1 Dec 2015 10:31:31 +0100
From: Kevin Wolf
Message-ID: <20151201093131.GB6527@noname.str.redhat.com>
References: <1429314609-29776-1-git-send-email-jsnow@redhat.com>
	<1429314609-29776-21-git-send-email-jsnow@redhat.com>
	<20151127171422.GB4287@noname.redhat.com>
	<565C849E.1080307@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <565C849E.1080307@redhat.com>
Subject: Re: [Qemu-devel] [PATCH v6 20/21] iotests: add incremental backup
	failure recovery test
To: John Snow
Cc: famz@redhat.com, qemu-block@nongnu.org, qemu-devel@nongnu.org,
	armbru@redhat.com, vsementsov@parallels.com, stefanha@redhat.com,
	mreitz@redhat.com

On 30.11.2015 at 18:17, John Snow wrote:
> On 11/27/2015 12:14 PM, Kevin Wolf wrote:
> > On 18.04.2015 at 01:50, John Snow wrote:
> >> Test the failure case for incremental backups.
> >>
> >> Signed-off-by: John Snow
> >> Reviewed-by: Max Reitz
> >> ---
> >>  tests/qemu-iotests/124     | 57 ++++++++++++++++++++++++++++++++++++++++++++++
> >>  tests/qemu-iotests/124.out |  4 ++--
> >>  2 files changed, 59 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/tests/qemu-iotests/124 b/tests/qemu-iotests/124
> >> index 5c3b434..95f6de5 100644
> >> --- a/tests/qemu-iotests/124
> >> +++ b/tests/qemu-iotests/124
> >> @@ -240,6 +240,63 @@ class TestIncrementalBackup(iotests.QMPTestCase):
> >>          self.check_backups()
> >>
> >>
> >> +    def test_incremental_failure(self):
> >> +        '''Test: Verify backups made after a failure are correct.
> >> +
> >> +        Simulate a failure during an incremental backup block job,
> >> +        emulate additional writes, then create another incremental backup
> >> +        afterwards and verify that the backup created is correct.
> >> +        '''
> >> +
> >> +        # Create a blkdebug interface to this img as 'drive1',
> >> +        # but don't actually create a new image.
> >> +        drive1 = self.add_node('drive1', self.drives[0]['fmt'],
> >> +                               path=self.drives[0]['file'],
> >> +                               backup=self.drives[0]['backup'])
> >> +        result = self.vm.qmp('blockdev-add', options={
> >> +            'id': drive1['id'],
> >> +            'driver': drive1['fmt'],
> >> +            'file': {
> >> +                'driver': 'blkdebug',
> >> +                'image': {
> >> +                    'driver': 'file',
> >> +                    'filename': drive1['file']
> >> +                },
> >> +                'set-state': [{
> >> +                    'event': 'flush_to_disk',
> >> +                    'state': 1,
> >> +                    'new_state': 2
> >> +                }],
> >> +                'inject-error': [{
> >> +                    'event': 'read_aio',
> >> +                    'errno': 5,
> >> +                    'state': 2,
> >> +                    'immediately': False,
> >> +                    'once': True
> >> +                }],
> >> +            }
> >> +        })
> >> +        self.assert_qmp(result, 'return', {})
> >
> > John, how naughty of you!
> >
>
> And here I thought it was OK because nobody yelled!

The yell was just delayed.
> > It's interesting how many tests break now that I tried to add some
> > advisory qcow2 locking so that people don't constantly break their
> > images with 'qemu-img snapshot' while the VM is running.
> >
> > I think this one is a bug in the test case. I'm not completely sure how
> > to fix it, though. Can we move the blkdebug layer to the top level? I
> > think reusing the same qcow2 BDS (using a node name reference) would be
> > okay. We just need to avoid opening the qcow2 layer twice for the same
> > image.
> >
>
> I can either do that, or just fall back to fully allocating two images
> and modify the test accordingly. Whatever is the easiest (and works).
> Is this for 2.5?

No, that's definitely 2.6. But if possible, I want to merge the qcow2
locking relatively early in the cycle because I want to see how surprising
the new behaviour would be for users and whether it's too restrictive
before we enable it in a release.

Kevin
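
PS: A rough sketch of what "blkdebug on top, reusing the qcow2 node by
reference" could look like as a QMP command. This is untested and the node
name 'drive0-img' is invented for illustration; it assumes the qcow2 node
for drive0 was given that node-name when it was originally added, so the
image file is never opened a second time:

```json
{ "execute": "blockdev-add",
  "arguments": {
      "options": {
          "id": "drive1",
          "driver": "blkdebug",
          "image": "drive0-img",
          "set-state": [
              { "event": "flush_to_disk", "state": 1, "new_state": 2 }
          ],
          "inject-error": [
              { "event": "read_aio", "errno": 5, "state": 2,
                "immediately": false, "once": true }
          ]
      }
  }
}
```

The key difference from the patch above is that "image" is a node-name
string reference rather than a nested { "driver": "file", ... } dict, so
blkdebug sits on top of the existing qcow2 BDS instead of a second
instance of the raw file.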