From: Fam Zheng <famz@redhat.com>
Date: Tue, 6 Aug 2013 09:40:43 +0800
Message-Id: <1375753243-19530-11-git-send-email-famz@redhat.com>
In-Reply-To: <1375753243-19530-1-git-send-email-famz@redhat.com>
References: <1375753243-19530-1-git-send-email-famz@redhat.com>
Subject: [Qemu-devel] [PATCH v3 10/10] vmdk: rename num_gtes_per_gte to num_gtes_per_gt
To: qemu-devel@nongnu.org
Cc: kwolf@redhat.com, pmatouse@redhat.com, jcody@redhat.com, armbru@redhat.com, stefanha@redhat.com, famz@redhat.com, asias@redhat.com, areis@redhat.com

num_gtes_per_gte is a historical typo; rename it to a more sensible name.
It means "number of GrainTableEntries per GrainTable".

Signed-off-by: Fam Zheng <famz@redhat.com>
---
 block/vmdk.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/block/vmdk.c b/block/vmdk.c
index 21610e3..cb34b9f 100644
--- a/block/vmdk.c
+++ b/block/vmdk.c
@@ -71,7 +71,7 @@ typedef struct {
     uint64_t granularity;
     uint64_t desc_offset;
     uint64_t desc_size;
-    uint32_t num_gtes_per_gte;
+    uint32_t num_gtes_per_gt;
     uint64_t rgd_offset;
     uint64_t gd_offset;
     uint64_t grain_offset;
@@ -585,12 +585,12 @@ static int vmdk_open_vmdk4(BlockDriverState *bs,
         return -ENOTSUP;
     }
 
-    if (le32_to_cpu(header.num_gtes_per_gte) > 512) {
+    if (le32_to_cpu(header.num_gtes_per_gt) > 512) {
         error_report("L2 table size too big");
         return -EINVAL;
     }
-    l1_entry_sectors = le32_to_cpu(header.num_gtes_per_gte)
+    l1_entry_sectors = le32_to_cpu(header.num_gtes_per_gt)
                         * le64_to_cpu(header.granularity);
     if (l1_entry_sectors == 0) {
         return -EINVAL;
     }
@@ -613,7 +613,7 @@ static int vmdk_open_vmdk4(BlockDriverState *bs,
                           le64_to_cpu(header.gd_offset) << 9,
                           l1_backup_offset,
                           l1_size,
-                          le32_to_cpu(header.num_gtes_per_gte),
+                          le32_to_cpu(header.num_gtes_per_gt),
                           le64_to_cpu(header.granularity),
                           &extent);
     if (ret < 0) {
@@ -1409,12 +1409,12 @@ static int vmdk_create_extent(const char *filename, int64_t filesize,
     header.compressAlgorithm = compress ? VMDK4_COMPRESSION_DEFLATE : 0;
     header.capacity = filesize / 512;
     header.granularity = 128;
-    header.num_gtes_per_gte = 512;
+    header.num_gtes_per_gt = 512;
 
     grains = (filesize / 512 + header.granularity - 1) / header.granularity;
-    gt_size = ((header.num_gtes_per_gte * sizeof(uint32_t)) + 511) >> 9;
+    gt_size = ((header.num_gtes_per_gt * sizeof(uint32_t)) + 511) >> 9;
     gt_count =
-        (grains + header.num_gtes_per_gte - 1) / header.num_gtes_per_gte;
+        (grains + header.num_gtes_per_gt - 1) / header.num_gtes_per_gt;
     gd_size = (gt_count * sizeof(uint32_t) + 511) >> 9;
 
     header.desc_offset = 1;
@@ -1430,7 +1430,7 @@ static int vmdk_create_extent(const char *filename, int64_t filesize,
     header.flags = cpu_to_le32(header.flags);
     header.capacity = cpu_to_le64(header.capacity);
     header.granularity = cpu_to_le64(header.granularity);
-    header.num_gtes_per_gte = cpu_to_le32(header.num_gtes_per_gte);
+    header.num_gtes_per_gt = cpu_to_le32(header.num_gtes_per_gt);
     header.desc_offset = cpu_to_le64(header.desc_offset);
     header.desc_size = cpu_to_le64(header.desc_size);
     header.rgd_offset = cpu_to_le64(header.rgd_offset);
-- 
1.8.3.4
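
As a reading aid (not part of the patch itself), the rounding arithmetic touched above can be reproduced in a small standalone sketch. It assumes the defaults used by vmdk_create_extent(), a granularity of 128 sectors and 512 grain table entries per grain table, plus a hypothetical 1 GiB image size; with those values each grain table (L2 table) maps 512 * 128 * 512 bytes = 32 MiB of the image.

/* Standalone sketch, not part of the patch: reproduces the rounding
 * arithmetic from vmdk_create_extent() with assumed example values. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t filesize = 1024ULL * 1024 * 1024;   /* hypothetical 1 GiB image */
    uint64_t granularity = 128;                  /* grain size in 512-byte sectors */
    uint32_t num_gtes_per_gt = 512;              /* GrainTableEntries per GrainTable */

    /* Same formulas as the patched code: round sizes up to whole sectors. */
    uint64_t grains = (filesize / 512 + granularity - 1) / granularity;
    uint64_t gt_size = ((num_gtes_per_gt * sizeof(uint32_t)) + 511) >> 9;
    uint64_t gt_count = (grains + num_gtes_per_gt - 1) / num_gtes_per_gt;
    uint64_t gd_size = (gt_count * sizeof(uint32_t) + 511) >> 9;

    /* One grain table maps num_gtes_per_gt grains of granularity sectors each. */
    uint64_t bytes_per_gt = (uint64_t)num_gtes_per_gt * granularity * 512;

    printf("grains=%" PRIu64 ", gt_size=%" PRIu64 " sectors, gt_count=%" PRIu64
           ", gd_size=%" PRIu64 " sectors, each grain table maps %" PRIu64 " MiB\n",
           grains, gt_size, gt_count, gd_size, bytes_per_gt >> 20);
    return 0;
}

With these defaults the numbers work out to 16384 grains, 4 sectors per grain table, 32 grain tables and a 1-sector grain directory; doubling num_gtes_per_gt would double both the size of each grain table and the span it covers.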