From: Max Reitz
Date: Mon, 30 Sep 2013 17:57:21 +0200
Message-Id: <1380556641-26256-1-git-send-email-mreitz@redhat.com>
Subject: [Qemu-devel] [PATCH] qcow2: Switch L1 table in a single sequence
To: qemu-devel@nongnu.org
Cc: Kevin Wolf, Stefan Hajnoczi, Max Reitz

Switching the L1 table in memory should be an atomic operation, as far
as possible. Calling qcow2_free_clusters on the old L1 table on disk is
not a good idea when the old L1 table is no longer valid and the
address of the new one hasn't yet been written into the corresponding
BDRVQcowState field. To be more specific, this can lead to segfaults
due to qcow2_check_metadata_overlap trying to access the L1 table
during the free operation.

Signed-off-by: Max Reitz
---
 block/qcow2-cluster.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/block/qcow2-cluster.c b/block/qcow2-cluster.c
index 72cb573..0fd26bb 100644
--- a/block/qcow2-cluster.c
+++ b/block/qcow2-cluster.c
@@ -35,6 +35,7 @@ int qcow2_grow_l1_table(BlockDriverState *bs, uint64_t min_size,
     BDRVQcowState *s = bs->opaque;
     int new_l1_size2, ret, i;
     uint64_t *new_l1_table;
+    int64_t old_l1_table_offset, old_l1_size;
     int64_t new_l1_table_offset, new_l1_size;
     uint8_t data[12];
 
@@ -106,11 +107,13 @@ int qcow2_grow_l1_table(BlockDriverState *bs, uint64_t min_size,
         goto fail;
     }
     g_free(s->l1_table);
-    qcow2_free_clusters(bs, s->l1_table_offset, s->l1_size * sizeof(uint64_t),
-                        QCOW2_DISCARD_OTHER);
+    old_l1_table_offset = s->l1_table_offset;
     s->l1_table_offset = new_l1_table_offset;
     s->l1_table = new_l1_table;
+    old_l1_size = s->l1_size;
     s->l1_size = new_l1_size;
+    qcow2_free_clusters(bs, old_l1_table_offset, old_l1_size * sizeof(uint64_t),
+                        QCOW2_DISCARD_OTHER);
     return 0;
  fail:
     g_free(new_l1_table);
-- 
1.8.3.1
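
For readers who want the ordering without the diff context, below is a
minimal, self-contained sketch of the same "switch the in-memory state
first, free the old data afterwards" pattern. The names (table_state,
grow_table, release_old_table) are illustrative stand-ins, not QEMU's;
release_old_table plays the role of qcow2_free_clusters, which in QEMU
may invoke qcow2_check_metadata_overlap and therefore must only run
once the state already describes the new table.

/* Sketch of the ordering introduced by the patch: publish the new
 * in-memory state, then release the old data.  Anything that runs
 * during the release only ever sees a consistent table.
 * All names here are illustrative, not QEMU's. */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct table_state {
    uint64_t *table;   /* in-memory copy of the table */
    size_t    size;    /* number of entries */
};

/* Stand-in for qcow2_free_clusters(): in QEMU this may trigger checks
 * that read the *current* table state, so it must run after the switch. */
static void release_old_table(uint64_t *old_table, size_t old_size)
{
    (void)old_size;
    free(old_table);
}

static int grow_table(struct table_state *s, size_t new_size)
{
    uint64_t *new_table;

    if (new_size < s->size) {
        return -1;
    }
    new_table = calloc(new_size, sizeof(uint64_t));
    if (!new_table) {
        return -1;
    }
    memcpy(new_table, s->table, s->size * sizeof(uint64_t));

    /* Remember the old values and switch the state in one sequence... */
    uint64_t *old_table = s->table;
    size_t    old_size  = s->size;
    s->table = new_table;
    s->size  = new_size;

    /* ...and only then release the old table. */
    release_old_table(old_table, old_size);
    return 0;
}

The point of the reordering is simply that release_old_table (and, in
QEMU, any overlap check it triggers) never observes a mixed old/new
state: by the time it runs, table and size both refer to the new table.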