From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 29 Jun 2016 09:12:41 +0800
From: Fam Zheng
Message-ID: <20160629011241.GC20978@ad.usersys.redhat.com>
References: <1467038869-11538-1-git-send-email-den@openvz.org>
 <1467038869-11538-2-git-send-email-den@openvz.org>
 <20160628012716.GB22237@ad.usersys.redhat.com>
 <57723F1A.8020105@openvz.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <57723F1A.8020105@openvz.org>
Subject: Re: [Qemu-devel] [PATCH v4 1/3] block: ignore flush requests when storage is clean
To: "Denis V. Lunev"
Cc: Kevin Wolf, Evgeny Yakovlev, qemu-block@nongnu.org, qemu-devel@nongnu.org, Max Reitz, Stefan Hajnoczi, John Snow

On Tue, 06/28 12:10, Denis V. Lunev wrote:
> On 06/28/2016 04:27 AM, Fam Zheng wrote:
> > On Mon, 06/27 17:47, Denis V. Lunev wrote:
> > > From: Evgeny Yakovlev
> > >
> > > Some guests (win2008 server for example) do a lot of unnecessary
> > > flushing when the underlying media has not changed. This adds
> > > additional overhead on the host when calling fsync/fdatasync.
> > >
> > > This change introduces a dirty flag in BlockDriverState which is set
> > > in bdrv_set_dirty and is checked in bdrv_co_flush. This allows us to
> > > avoid unnecessary flushing when the storage is clean.
> > >
> > > The problem with excessive flushing was found by a performance test
> > > which does parallel directory tree creation (from 2 processes).
> > > Results improved from 0.424 loops/sec to 0.432 loops/sec.
> > > Each loop creates 10^3 directories with 10 files in each.
> > >
> > > Signed-off-by: Evgeny Yakovlev
> > > Signed-off-by: Denis V. Lunev
> > > CC: Kevin Wolf
> > > CC: Max Reitz
> > > CC: Stefan Hajnoczi
> > > CC: Fam Zheng
> > > CC: John Snow
> > > ---
> > >  block.c                   |  1 +
> > >  block/dirty-bitmap.c      |  3 +++
> > >  block/io.c                | 19 +++++++++++++++++++
> > >  include/block/block_int.h |  1 +
> > >  4 files changed, 24 insertions(+)
> > >
> > > diff --git a/block.c b/block.c
> > > index 947df29..68ae3a0 100644
> > > --- a/block.c
> > > +++ b/block.c
> > > @@ -2581,6 +2581,7 @@ int bdrv_truncate(BlockDriverState *bs, int64_t offset)
> > >          ret = refresh_total_sectors(bs, offset >> BDRV_SECTOR_BITS);
> > >          bdrv_dirty_bitmap_truncate(bs);
> > >          bdrv_parent_cb_resize(bs);
> > > +        bs->dirty = true; /* file node sync is needed after truncate */
> > >      }
> > >      return ret;
> > >  }
> > > diff --git a/block/dirty-bitmap.c b/block/dirty-bitmap.c
> > > index 4902ca5..54e0413 100644
> > > --- a/block/dirty-bitmap.c
> > > +++ b/block/dirty-bitmap.c
> > > @@ -370,6 +370,9 @@ void bdrv_set_dirty(BlockDriverState *bs, int64_t cur_sector,
> > >          }
> > >          hbitmap_set(bitmap->bitmap, cur_sector, nr_sectors);
> > >      }
> > > +
> > > +    /* Set global block driver dirty flag even if bitmap is disabled */
> > > +    bs->dirty = true;
> > >  }
> > >
> > >  /**
> > > diff --git a/block/io.c b/block/io.c
> > > index b9e53e3..152f5a9 100644
> > > --- a/block/io.c
> > > +++ b/block/io.c
> > > @@ -2247,6 +2247,25 @@ int coroutine_fn bdrv_co_flush(BlockDriverState *bs)
> > >          goto flush_parent;
> > >      }
> > >
> > > +    /* Check if storage is actually dirty before flushing to disk */
> > > +    if (!bs->dirty) {
> > > +        /* Flush requests are appended to tracked request list in order so that
> > > +         * most recent request is at the head of the list.
> > > +         * Following code uses this ordering to wait for the most recent
> > > +         * flush request to complete to ensure that requests return in
> > > +         * order */
> > > +        BdrvTrackedRequest *prev_req;
> > > +        QLIST_FOREACH(prev_req, &bs->tracked_requests, list) {
> > > +            if (prev_req == &req || prev_req->type != BDRV_TRACKED_FLUSH) {
> > > +                continue;
> > > +            }
> > > +
> > > +            qemu_co_queue_wait(&prev_req->wait_queue);
> > > +            break;
> > > +        }
> > > +        goto flush_parent;
> >
> > Should we check bs->dirty again after qemu_co_queue_wait()? I think another
> > write request could sneak in while this coroutine yields.
>
> No, we do not care. Any write subsequent to the FLUSH is not guaranteed to
> be flushed. The only guarantee is that all write requests completed prior
> to this flush are really flushed.

I'm not worried about subsequent requests. A prior request can already be in
progress, or be waiting, when we check bs->dirty: the flag would still be
false at that point, but it will become true soon, because bdrv_set_dirty is
only called when a request is completing.

Fam

> > > +    }
> > > +    bs->dirty = false;
> > > +
> > >      BLKDBG_EVENT(bs->file, BLKDBG_FLUSH_TO_DISK);
> > >      if (bs->drv->bdrv_co_flush_to_disk) {
> > >          ret = bs->drv->bdrv_co_flush_to_disk(bs);
> > > diff --git a/include/block/block_int.h b/include/block/block_int.h
> > > index 0432ba5..59a7def 100644
> > > --- a/include/block/block_int.h
> > > +++ b/include/block/block_int.h
> > > @@ -435,6 +435,7 @@ struct BlockDriverState {
> > >      bool valid_key; /* if true, a valid encryption key has been set */
> > >      bool sg;        /* if true, the device is a /dev/sg* */
> > >      bool probed;    /* if true, format was probed rather than specified */
> > > +    bool dirty;     /* if true, media is dirty and should be flushed */
> >
> > How about renaming this to "need_flush"? The one "dirty" we had is set by
> > bdrv_set_dirty, and cleared by bdrv_reset_dirty_bitmap. I'd avoid the
> > confusion between the two concepts.
> >
> > Fam
>
> can be
>
> > >      int copy_on_read; /* if nonzero, copy read backing sectors into image.
> > >                           note this is a reference count */
> > > --
> > > 2.1.4
> > >