Message-ID: <54AEF575.2040703@redhat.com>
Date: Thu, 08 Jan 2015 16:24:05 -0500
From: John Snow
To: Vladimir Sementsov-Ogievskiy, qemu-devel@nongnu.org
Cc: kwolf@redhat.com, den@openvz.org, stefanha@redhat.com
In-Reply-To: <1418307457-25996-8-git-send-email-vsementsov@parallels.com>
Subject: Re: [Qemu-devel] [PATCH 7/9] block-migration: remove not needed iothread lock

On 12/11/2014 09:17 AM, Vladimir Sementsov-Ogievskiy wrote:
> Instead of locking iothread, we can just swap these calls. So, if some
> write to our range occures before resetting the bitmap, then it will
> get into subsequent aio read, becouse it occures, in any case, after
> resetting the bitmap.
>

s/occures/occurs/g
s/becouse/because/g

(I hope you are not annoyed by the spelling corrections: they are made in
good faith, and personally I would hope people would correct any of my
spelling mistakes before they go into the git log!)

> Signed-off-by: Vladimir Sementsov-Ogievskiy
> ---
>  block-migration.c | 6 ++----
>  1 file changed, 2 insertions(+), 4 deletions(-)
>
> diff --git a/block-migration.c b/block-migration.c
> index d0c825f..908a66d 100644
> --- a/block-migration.c
> +++ b/block-migration.c
> @@ -315,13 +315,11 @@ static int mig_save_device_bulk(QEMUFile *f, BlkMigDevState *bmds)
>      block_mig_state.submitted++;
>      blk_mig_unlock();
>
> -    qemu_mutex_lock_iothread();
> +    bdrv_reset_dirty_bitmap(bs, bmds->dirty_bitmap, cur_sector, nr_sectors);
> +
>      blk->aiocb = bdrv_aio_readv(bs, cur_sector, &blk->qiov,
>                                  nr_sectors, blk_mig_read_cb, blk);
>
> -    bdrv_reset_dirty_bitmap(bs, bmds->dirty_bitmap, cur_sector, nr_sectors);
> -    qemu_mutex_unlock_iothread();
> -
>      bmds->cur_sector = cur_sector + nr_sectors;
>      return (bmds->cur_sector >= total_sectors);
>  }
>

OK, so the justification here is that by ordering it as "reset, read", any
write that occurs after the reset is simply included in the read, and we
won't have reset any bits that we didn't actually get a chance to read. OK.

But what about losing the ability to clear bits that are set needlessly?

e.g.:

Sector 1 is dirty. Sector 2 is clean.
We reset the bitmap. All sectors are clean.
A write occurs to sector 2, marking it dirty.
We read sectors 1 and 2.

Sector 2 is still marked dirty even though the read just picked up its new
data, so there is nothing left to send for it, yet it will be re-transmitted
later anyway. It's not a data integrity problem, but we're swapping one
inefficiency for another.

Do you have a justification for why this is better than the lock?
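
To make that scenario concrete, here is a throwaway, self-contained sketch
(not QEMU code: the bitmap, the "guest write", and the two transmission
passes are just stand-ins I made up) that walks through the same sequence
of events:

#include <stdbool.h>
#include <stdio.h>

#define NB_SECTORS 2

static bool dirty[NB_SECTORS];        /* stand-in for bmds->dirty_bitmap     */
static int  sent[NB_SECTORS];         /* how many times each sector is sent  */

static void reset_range(void)         /* plays bdrv_reset_dirty_bitmap()     */
{
    for (int i = 0; i < NB_SECTORS; i++) {
        dirty[i] = false;
    }
}

static void guest_write(int sector)   /* a write that slips in concurrently  */
{
    dirty[sector] = true;
}

static void bulk_read_and_send(void)  /* plays bdrv_aio_readv() + migration  */
{
    for (int i = 0; i < NB_SECTORS; i++) {
        sent[i]++;                    /* the read sees the latest data       */
    }
}

static void dirty_phase(void)         /* later pass over still-dirty bits    */
{
    for (int i = 0; i < NB_SECTORS; i++) {
        if (dirty[i]) {
            sent[i]++;
            dirty[i] = false;
        }
    }
}

int main(void)
{
    dirty[0] = true;                  /* sector 1 dirty, sector 2 clean      */

    reset_range();                    /* new ordering: reset first ...       */
    guest_write(1);                   /* ... a write to sector 2 slips in ...*/
    bulk_read_and_send();             /* ... then the bulk read picks it up  */
    dirty_phase();

    for (int i = 0; i < NB_SECTORS; i++) {
        printf("sector %d sent %d time(s)\n", i + 1, sent[i]);
    }
    return 0;
}

Run it and sector 1 is sent once, sector 2 twice: the bulk read already
carried sector 2's new data, but its dirty bit survives the reset-then-read
ordering and triggers a second, redundant send in the dirty phase. That is
the inefficiency I mean.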