From: "Denis V. Lunev" <den@openvz.org>
Cc: Kevin Wolf <kwolf@redhat.com>,
den@openvz.org, qemu-devel@nongnu.org,
Stefan Hajnoczi <stefanha@redhat.com>
Subject: [Qemu-devel] [PATCH 1/4] block/parallels: extend parallels format header with actual data values
Date: Tue, 22 Jul 2014 17:19:34 +0400 [thread overview]
Message-ID: <1406035177-221890-2-git-send-email-den@openvz.org> (raw)
In-Reply-To: <1406035177-221890-1-git-send-email-den@openvz.org>
Parallels image format has several additional fields inside:
- nb_sectors is actually 64 bits wide. The upper 32 bits are not used for
  images with the signature "WithoutFreeSpace" and must be explicitly
  zeroed according to Parallels. They will be used for images with the
  signature "WithouFreSpacExt"
- inuse is a magic value indicating that the image is currently opened
  for read/write or was not closed correctly; the magic value is
  0x746f6e59
- data_off is the location of the first data block. It can be zero; in
  that case the data area begins right after the header and the block
  allocation table
This patch adds these values to struct parallels_header and adds
proper handling of nb_sectors for currently supported WithoutFreeSpace
images.
WithouFreSpacExt will be covered in the next patch.
Signed-off-by: Denis V. Lunev <den@openvz.org>
CC: Kevin Wolf <kwolf@redhat.com>
CC: Stefan Hajnoczi <stefanha@redhat.com>
---
block/parallels.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/block/parallels.c b/block/parallels.c
index 1a5bd35..c44df87 100644
--- a/block/parallels.c
+++ b/block/parallels.c
@@ -41,8 +41,10 @@ struct parallels_header {
uint32_t cylinders;
uint32_t tracks;
uint32_t catalog_entries;
- uint32_t nb_sectors;
- char padding[24];
+ uint64_t nb_sectors;
+ uint32_t inuse;
+ uint32_t data_off;
+ char padding[12];
} QEMU_PACKED;
typedef struct BDRVParallelsState {
@@ -90,7 +92,7 @@ static int parallels_open(BlockDriverState *bs, QDict *options, int flags,
goto fail;
}
- bs->total_sectors = le32_to_cpu(ph.nb_sectors);
+ bs->total_sectors = (uint32_t)le64_to_cpu(ph.nb_sectors);
s->tracks = le32_to_cpu(ph.tracks);
if (s->tracks == 0) {
--
1.9.1
Thread overview: 14+ messages
2014-07-22 13:19 [Qemu-devel] [PATCH 0/4] block/parallels: 2TB+ parallels images support Denis V. Lunev
2014-07-22 13:19 ` Denis V. Lunev [this message]
2014-07-24 18:34 ` [Qemu-devel] [PATCH 1/4] block/parallels: extend parallels format header with actual data values Jeff Cody
2014-07-25 3:33 ` Denis V. Lunev
2014-07-22 13:19 ` [Qemu-devel] [PATCH 2/4] block/parallels: replace tabs with spaces in block/parallels.c Denis V. Lunev
2014-07-24 18:36 ` Jeff Cody
2014-07-22 13:19 ` [Qemu-devel] [PATCH 3/4] block/parallels: split check for parallels format in parallels_open Denis V. Lunev
2014-07-24 18:50 ` Jeff Cody
2014-07-25 3:36 ` Denis V. Lunev
2014-07-22 13:19 ` [Qemu-devel] [PATCH 4/4] block/parallels: 2TB+ parallels images support Denis V. Lunev
2014-07-24 19:25 ` Jeff Cody
2014-07-25 3:51 ` Denis V. Lunev
2014-07-25 13:08 ` Jeff Cody
2014-07-25 13:12 ` Denis V. Lunev