From mboxrd@z Thu Jan 1 00:00:00 1970
From: Benoît Canet
Date: Fri, 15 Mar 2013 15:49:38 +0100
Message-Id: <1363358986-8360-25-git-send-email-benoit@irqsave.net>
In-Reply-To: <1363358986-8360-1-git-send-email-benoit@irqsave.net>
References: <1363358986-8360-1-git-send-email-benoit@irqsave.net>
Subject: [Qemu-devel] [RFC V7 24/32] qcow2: Serialize write requests when deduplication is activated.
To: qemu-devel@nongnu.org
Cc: kwolf@redhat.com, Benoît Canet, stefanha@redhat.com

This fixes the race conditions on sub-cluster-sized writes while waiting
for a faster solution.

Signed-off-by: Benoit Canet
---
 block/qcow2.c | 17 ++++++++++++++++-
 block/qcow2.h |  1 +
 2 files changed, 17 insertions(+), 1 deletion(-)

diff --git a/block/qcow2.c b/block/qcow2.c
index 838241c..9c613e5 100644
--- a/block/qcow2.c
+++ b/block/qcow2.c
@@ -515,6 +515,7 @@ static int qcow2_open(BlockDriverState *bs, int flags)
 
     /* Initialise locks */
     qemu_co_mutex_init(&s->lock);
+    qemu_co_mutex_init(&s->dedup_lock);
 
     /* Repair image if dirty */
     if (!(flags & BDRV_O_CHECK) && !bs->read_only &&
@@ -805,9 +806,19 @@ static coroutine_fn int qcow2_co_writev(BlockDriverState *bs,
 
     s->cluster_cache_offset = -1; /* disable compressed cache */
 
+    atomic_dedup_is_running = qcow2_dedup_is_running(bs);
+
+    if (atomic_dedup_is_running) {
+        /* This mutex serializes write requests in the dedup case.
+         * The goal is to prevent the dedup code from handling concurrent
+         * requests to the same clusters and corrupting data;
+         * qcow2_dedup_read_missing_and_concatenate cannot cope with that.
+         */
+        qemu_co_mutex_lock(&s->dedup_lock);
+    }
+
     qemu_co_mutex_lock(&s->lock);
 
-    atomic_dedup_is_running = qcow2_dedup_is_running(bs);
     if (atomic_dedup_is_running) {
         QTAILQ_INIT(&ds.undedupables);
         ds.phash.reuse = false;
@@ -977,6 +988,10 @@ fail:
         g_free(l2meta);
     }
 
+    if (atomic_dedup_is_running) {
+        qemu_co_mutex_unlock(&s->dedup_lock);
+    }
+
     qemu_iovec_destroy(&hd_qiov);
     qemu_vfree(cluster_data);
     qemu_vfree(dedup_cluster_data);
diff --git a/block/qcow2.h b/block/qcow2.h
index b858db9..a430fe1 100644
--- a/block/qcow2.h
+++ b/block/qcow2.h
@@ -236,6 +236,7 @@ typedef struct BDRVQcowState {
     GTree *dedup_tree_by_hash;
 
     CoMutex lock;
+    CoMutex dedup_lock;
 
     uint32_t crypt_method; /* current crypt method, 0 if no key yet */
     uint32_t crypt_method_header;
-- 
1.7.10.4
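
For readers unfamiliar with the pattern: the patch takes s->dedup_lock around
the whole write path so that only one write request runs the dedup logic at a
time. Below is a minimal standalone sketch of that serialization pattern. It
uses POSIX threads because QEMU's CoMutex needs the coroutine framework, and
writer(), shared_clusters and NUM_WRITERS are illustrative names, not part of
the patch.

/* Standalone sketch: serialize writers with one mutex, mirroring what
 * the patch does with s->dedup_lock. */
#include <pthread.h>
#include <stdio.h>

#define NUM_WRITERS 4

static pthread_mutex_t dedup_lock = PTHREAD_MUTEX_INITIALIZER;
static int shared_clusters; /* stands in for shared qcow2 cluster state */

static void *writer(void *arg)
{
    pthread_mutex_lock(&dedup_lock);   /* like qemu_co_mutex_lock() */
    shared_clusters++;                 /* critical section: one writer at a time */
    pthread_mutex_unlock(&dedup_lock); /* like qemu_co_mutex_unlock() */
    return NULL;
}

int main(void)
{
    pthread_t t[NUM_WRITERS];

    for (int i = 0; i < NUM_WRITERS; i++) {
        pthread_create(&t[i], NULL, writer, NULL);
    }
    for (int i = 0; i < NUM_WRITERS; i++) {
        pthread_join(t[i], NULL);
    }
    printf("%d\n", shared_clusters);   /* always NUM_WRITERS: no lost update */
    return 0;
}

Build with "gcc -pthread". One difference from the real patch: a contended
CoMutex yields the coroutine instead of blocking the thread, so the rest of
QEMU's main loop keeps running while a write request waits for the lock.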