From: Jeff Cody <jcody@redhat.com>
To: "Denis V. Lunev" <den@openvz.org>
Cc: Kevin Wolf <kwolf@redhat.com>,
qemu-devel@nongnu.org, Stefan Hajnoczi <stefanha@redhat.com>
Subject: Re: [Qemu-devel] [PATCH v2 4/4] parallels: 2TB+ parallels images support
Date: Thu, 7 Aug 2014 10:39:21 -0400
Message-ID: <20140807143921.GC18633@localhost.localdomain>
In-Reply-To: <1406564635-261591-5-git-send-email-den@openvz.org>
On Mon, Jul 28, 2014 at 08:23:55PM +0400, Denis V. Lunev wrote:
> Recent updates of Parallels Server 5/6 introduced an addition to the
> Parallels image format. Images with the signature WithouFreSpacExt
> store catalog offsets not in sectors (multiples of 512 bytes) but in
> blocks (i.e. units of header->tracks * 512 bytes).
>
> In this case all 64 bits of header->nb_sectors are used for image size.
>
> This patch implements support for this format in qemu-img and also
> adds a sanity check rejecting incorrect images: images with a block
> size greater than INT_MAX/513 sectors are not supported. The largest
> Parallels cluster size seen in the field is 1 MB, so this limit will
> not hurt anyone.
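
For reference, the two offset encodings described above can be sketched
as below. This is my own illustration, not code from the patch; the
function name entry_to_byte_offset is made up, and 'tracks' stands for
the cluster size in sectors as read from the image header:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch: a catalog entry is a sector number under the
 * classic "WithoutFreeSpace" magic, but a block (cluster) number under
 * the new "WithouFreSpacExt" magic, so the byte offset is scaled by
 * the cluster size (tracks sectors) in the latter case. */
static int64_t entry_to_byte_offset(uint32_t entry, uint32_t tracks,
                                    int with_free_space_ext)
{
    uint32_t multiplier = with_free_space_ext ? tracks : 1;

    /* Widen before multiplying so large images don't overflow. */
    return (int64_t)entry * multiplier * 512;
}
```

With a 1 MB cluster (tracks = 2048), entry 1 means byte 512 in a
classic image but byte 1048576 in a WithouFreSpacExt image.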
>
> Signed-off-by: Denis V. Lunev <den@openvz.org>
> CC: Jeff Cody <jcody@redhat.com>
> CC: Kevin Wolf <kwolf@redhat.com>
> CC: Stefan Hajnoczi <stefanha@redhat.com>
> ---
> block/parallels.c | 25 ++++++++++++++++++++-----
> 1 file changed, 20 insertions(+), 5 deletions(-)
>
> diff --git a/block/parallels.c b/block/parallels.c
> index 466705e..4414a9d 100644
> --- a/block/parallels.c
> +++ b/block/parallels.c
> @@ -30,6 +30,7 @@
> /**************************************************************/
>
> #define HEADER_MAGIC "WithoutFreeSpace"
> +#define HEADER_MAGIC2 "WithouFreSpacExt"
> #define HEADER_VERSION 2
> #define HEADER_SIZE 64
>
> @@ -54,6 +55,8 @@ typedef struct BDRVParallelsState {
> unsigned int catalog_size;
>
> unsigned int tracks;
> +
> + unsigned int off_multiplier;
> } BDRVParallelsState;
>
> static int parallels_probe(const uint8_t *buf, int buf_size, const char *filename)
> @@ -63,7 +66,8 @@ static int parallels_probe(const uint8_t *buf, int buf_size, const char *filenam
> if (buf_size < HEADER_SIZE)
> return 0;
>
> - if (!memcmp(ph->magic, HEADER_MAGIC, 16) &&
> + if ((!memcmp(ph->magic, HEADER_MAGIC, 16) ||
> + !memcmp(ph->magic, HEADER_MAGIC2, 16)) &&
> (le32_to_cpu(ph->version) == HEADER_VERSION))
> return 100;
>
> @@ -85,21 +89,31 @@ static int parallels_open(BlockDriverState *bs, QDict *options, int flags,
> goto fail;
> }
>
> + bs->total_sectors = le64_to_cpu(ph.nb_sectors);
> +
> if (le32_to_cpu(ph.version) != HEADER_VERSION) {
> goto fail_format;
> }
> - if (memcmp(ph.magic, HEADER_MAGIC, 16)) {
> + if (!memcmp(ph.magic, HEADER_MAGIC, 16)) {
> + s->off_multiplier = 1;
> + bs->total_sectors = 0xffffffff & bs->total_sectors;
> + } else if (!memcmp(ph.magic, HEADER_MAGIC2, 16)) {
> + s->off_multiplier = le32_to_cpu(ph.tracks);
> + } else {
> goto fail_format;
> }
>
> - bs->total_sectors = 0xffffffff & le64_to_cpu(ph.nb_sectors);
> -
> s->tracks = le32_to_cpu(ph.tracks);
> if (s->tracks == 0) {
> error_setg(errp, "Invalid image: Zero sectors per track");
> ret = -EINVAL;
> goto fail;
> }
> + if (s->tracks > INT32_MAX/513) {
> + error_setg(errp, "Invalid image: Too big cluster");
> + ret = -EFBIG;
> + goto fail;
> + }
>
> s->catalog_size = le32_to_cpu(ph.catalog_entries);
> if (s->catalog_size > INT_MAX / 4) {
> @@ -139,7 +153,8 @@ static int64_t seek_to_sector(BlockDriverState *bs, int64_t sector_num)
> /* not allocated */
> if ((index > s->catalog_size) || (s->catalog_bitmap[index] == 0))
> return -1;
> - return (uint64_t)(s->catalog_bitmap[index] + offset) * 512;
> + return
> + ((uint64_t)s->catalog_bitmap[index] * s->off_multiplier + offset) * 512;
This still casts to uint64_t instead of int64_t; I'm not sure it really
matters in practice, since with the new limit the end result can no
longer exceed an int64_t.
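
As a rough sanity check of that claim (my own sketch, not part of the
patch; the helper name offset_fits_in_int64 is made up), the worst-case
byte offset is ((2^32 - 1) * m + (m - 1)) * 512, where m is the offset
multiplier and the second term is the largest possible in-cluster
offset:

```c
#include <assert.h>
#include <stdint.h>

/* Returns nonzero if the worst-case byte offset for a given offset
 * multiplier m still fits in a signed 64-bit value: the largest
 * catalog entry (UINT32_MAX) times m, plus the largest in-cluster
 * sector offset (m - 1), all scaled to bytes. */
static int offset_fits_in_int64(uint32_t multiplier)
{
    uint64_t worst = ((uint64_t)UINT32_MAX * multiplier
                      + (multiplier - 1)) * 512u;
    return worst <= (uint64_t)INT64_MAX;
}
```

With m = INT32_MAX/513 the worst case stays just under INT64_MAX, which
appears to be why the patch divides by 513 rather than 512: the extra
"+ offset" term needs one more multiple of the cluster size in headroom.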
> }
>
> static int parallels_read(BlockDriverState *bs, int64_t sector_num,
> --
> 1.9.1
>
>
Thread overview: 14+ messages
2014-07-28 16:23 [Qemu-devel] [PATCH v2 0/4] block/parallels: 2TB+ parallels images support Denis V. Lunev
2014-07-28 16:23 ` [Qemu-devel] [PATCH v2 1/4] parallels: extend parallels format header with actual data values Denis V. Lunev
2014-08-07 14:27 ` Jeff Cody
2014-07-28 16:23 ` [Qemu-devel] [PATCH v2 2/4] parallels: replace tabs with spaces in block/parallels.c Denis V. Lunev
2014-07-28 16:23 ` [Qemu-devel] [PATCH v2 3/4] parallels: split check for parallels format in parallels_open Denis V. Lunev
2014-08-07 14:32 ` Jeff Cody
2014-07-28 16:23 ` [Qemu-devel] [PATCH v2 4/4] parallels: 2TB+ parallels images support Denis V. Lunev
2014-08-07 14:39 ` Jeff Cody [this message]
2014-08-07 15:03 ` Denis V. Lunev
2014-08-07 15:14 ` Jeff Cody
2014-08-07 15:22 ` Denis V. Lunev
2014-08-07 15:34 ` Denis V. Lunev
2014-08-07 12:41 ` [Qemu-devel] [PATCH v2 0/4] block/parallels: " Denis V. Lunev
2014-08-12 12:57 ` Stefan Hajnoczi