Date: Wed, 24 Jun 2015 17:08:31 +0800
From: Fam Zheng
To: Stefan Hajnoczi
Cc: Kevin Wolf, qemu-block@nongnu.org, jsnow@redhat.com, Jeff Cody,
 qemu-devel@nongnu.org, qemu-stable@nongnu.org, pbonzini@redhat.com,
 wangxiaolong@ucloud.cn
Message-ID: <20150624090831.GC22582@ad.nay.redhat.com>
In-Reply-To: <20150611082911.GB22459@ad.nay.redhat.com>
References: <1433742974-20128-1-git-send-email-famz@redhat.com>
 <20150608130242.GE1961@stefanha-thinkpad.redhat.com>
 <20150611082911.GB22459@ad.nay.redhat.com>
Subject: Re: [Qemu-devel] [Qemu-stable] [PATCH v7 0/8] block: Mirror discarded sectors

On Thu, 06/11 16:29, Fam Zheng wrote:
> On Mon, 06/08 14:02, Stefan Hajnoczi wrote:
> > On Mon, Jun 08, 2015 at 01:56:06PM +0800, Fam Zheng wrote:
> > > v7: Fix the lost assignment of s->unmap.
> > >
> > > v6: Fix pnum in bdrv_get_block_status_above. [Paolo]
> > >
> > > v5: Rewrite patch 1.
> > >     Address Eric's comments on patch 3.
> > >     Add Eric's rev-by to patches 2 & 4.
> > >     Check BDRV_BLOCK_DATA in patch 3. [Paolo]
> > >
> > > This fixes the mirror assert failure reported by wangxiaolong:
> > >
> > > https://lists.gnu.org/archive/html/qemu-devel/2015-04/msg04458.html
> > >
> > > The direct cause is that the hbitmap code couldn't handle unset of bits
> > > *after* the iterator's current position.
> > > We could fix that, but the bdrv_reset_dirty() call
> > > is more questionable:
> > >
> > > Before, if the guest discarded some sectors during migration, it could see
> > > different data after moving to the dest side, depending on the block backends
> > > of the src and the dest. This is IMO worse than mirroring the actual reads as
> > > done in this series, because we don't know what the guest is doing.
> > >
> > > For example, if a guest first issues WRITE SAME to wipe out an area and then
> > > issues UNMAP to discard it, just to get rid of some sensitive data completely,
> > > we may miss both operations and leave stale data on the dest image.
> > >
> > >
> > > Fam Zheng (8):
> > >   block: Add bdrv_get_block_status_above
> > >   qmp: Add optional bool "unmap" to drive-mirror
> > >   mirror: Do zero write on target if sectors not allocated
> > >   block: Fix dirty bitmap in bdrv_co_discard
> > >   block: Remove bdrv_reset_dirty
> > >   qemu-iotests: Make block job methods common
> > >   qemu-iotests: Add test case for mirror with unmap
> > >   iotests: Use event_wait in wait_ready
> > >
> > >  block.c                       | 12 --------
> > >  block/io.c                    | 60 ++++++++++++++++++++++++++++++---------
> > >  block/mirror.c                | 28 +++++++++++++++---
> > >  blockdev.c                    |  5 ++++
> > >  hmp.c                         |  2 +-
> > >  include/block/block.h         |  4 +++
> > >  include/block/block_int.h     |  4 +--
> > >  qapi/block-core.json          |  8 +++++-
> > >  qmp-commands.hx               |  3 ++
> > >  tests/qemu-iotests/041        | 66 ++++++++++---------------------------
> > >  tests/qemu-iotests/132        | 59 ++++++++++++++++++++++++++++++++++++++
> > >  tests/qemu-iotests/132.out    |  5 ++++
> > >  tests/qemu-iotests/group      |  1 +
> > >  tests/qemu-iotests/iotests.py | 23 +++++++++++++++
> > >  14 files changed, 196 insertions(+), 84 deletions(-)
> > >  create mode 100644 tests/qemu-iotests/132
> > >  create mode 100644 tests/qemu-iotests/132.out
> > >
> > > --
> > > 2.4.2
> >
> > Thanks, applied to my block tree:
> > https://github.com/stefanha/qemu/commits/block
>
> Stefan,
>
> The only controversial patches are the qmp/drive-mirror ones (1-3), while
> patches 4-8 are still useful on their own: they fix the mentioned crash and
> improve iotests.
>
> Shall we merge the second half (of course none of them depend on 1-3) now
> that softfreeze is approaching?

Stefan, would you consider applying patches 4-8?

Fam
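[Archive editor's note: the iterator hazard the cover letter describes -- an iterator that does not notice bits being cleared *after* its current position -- can be sketched outside QEMU. The model below is purely illustrative Python, not QEMU's actual hbitmap code; the class and helper names are invented for the sketch. Like hbitmap's iterator, it caches the word it is currently scanning, so a bit cleared behind its back is still yielded.]

```python
# Illustrative model (NOT QEMU's hbitmap): an iterator over set bits that
# caches the 64-bit word it is scanning.  Clearing a bit *after* the
# cursor, inside the cached word, is invisible to the iterator, so it
# yields a bit that is no longer set in the bitmap.

class BitmapIter:
    """Iterate over set bits of a word array, caching the current word."""
    def __init__(self, words):
        self.words = words            # shared, mutable word array
        self.word_idx = 0
        self.cached = words[0]        # snapshot taken when scanning starts

    def next_bit(self):
        while True:
            if self.cached:
                # isolate and consume the lowest set bit of the cached word
                bit = (self.cached & -self.cached).bit_length() - 1
                self.cached &= self.cached - 1
                return self.word_idx * 64 + bit
            self.word_idx += 1
            if self.word_idx >= len(self.words):
                return -1
            self.cached = self.words[self.word_idx]  # refresh the cache

def set_bit(words, n):
    words[n // 64] |= 1 << (n % 64)

def clear_bit(words, n):
    words[n // 64] &= ~(1 << (n % 64))

words = [0, 0]
for n in (3, 10, 70):
    set_bit(words, n)

it = BitmapIter(words)
assert it.next_bit() == 3
clear_bit(words, 10)       # clear a bit *after* the cursor...
stale = it.next_bit()      # ...but the cached word still contains it
print(stale)               # prints 10: a discarded bit is still yielded
```

In the mirror job this stale yield is what tripped the assertion: a sector discarded (and its dirty bit reset) after the iterator's position could still come back from the iterator. Patches 4-5 sidestep the problem by not resetting dirty bits in bdrv_co_discard at all, rather than teaching the iterator about concurrent clears.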