qemu-devel.nongnu.org archive mirror
From: Fam Zheng <famz@redhat.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: kwolf@redhat.com, qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [PATCH 1/2] vmdk: support vmfsSparse files
Date: Mon, 12 Aug 2013 09:31:44 +0800	[thread overview]
Message-ID: <20130812013144.GA11870@localhost.localdomain> (raw)
In-Reply-To: <1376237638-6968-2-git-send-email-pbonzini@redhat.com>

On Sun, 08/11 18:13, Paolo Bonzini wrote:
> VMware ESX hosts use a variant of the VMDK3 format, identified by the
> vmfsSparse create type and the VMFSSPARSE extent type.
> 
> It has 16 KB grain tables (L2) and a variable-size grain directory (L1).
> In addition, the grain size is always 512, but that is not a problem
> because it is included in the header.
> 
> The format of the extents is documented in the VMDK spec.  The format
> of the descriptor file is not documented precisely, but it can be
> found at http://kb.vmware.com/kb/10026353 (Recreating a missing virtual
> machine disk (VMDK) descriptor file for delta disks).
> 
I don't have access to this link; could you include some documentation of
this descriptor format in a comment or in the commit message? IIRC, the
only difference is that the type is "VMFSSPARSE", right?
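For anyone following along, a delta-disk descriptor of this type looks roughly like the sketch below. All field values (CIDs, file names, sector count) are made up for illustration; the only parts the patch actually matches on are the createType string and the VMFSSPARSE extent type:

```
# Disk DescriptorFile
version=1
CID=fffffffe
parentCID=ffffffff
createType="vmfsSparse"
parentFileNameHint="base.vmdk"

# Extent description
RW 2097152 VMFSSPARSE "base-delta.vmdk"
```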

What version of ESX has this format?

> With these patches, vmfsSparse files only work if opened through the
> descriptor file.  Data files without descriptor files, as far as I
> could understand, are not supported by ESX.
> 
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  block/vmdk.c | 51 ++++++++++++++++++++++++++++++++++++++++++++++-----
>  1 file changed, 46 insertions(+), 5 deletions(-)
> 
> diff --git a/block/vmdk.c b/block/vmdk.c
> index b16d509..eaf484a 100644
> --- a/block/vmdk.c
> +++ b/block/vmdk.c
> @@ -505,6 +505,34 @@ static int vmdk_open_vmdk3(BlockDriverState *bs,
>      return ret;
>  }
>  
> +static int vmdk_open_vmfs_sparse(BlockDriverState *bs,
> +                                 BlockDriverState *file,
> +                                 int flags)
> +{
> +    int ret;
> +    uint32_t magic;
> +    VMDK3Header header;
> +    VmdkExtent *extent;
> +
> +    ret = bdrv_pread(file, sizeof(magic), &header, sizeof(header));
> +    if (ret < 0) {
> +        return ret;
> +    }
> +    extent = vmdk_add_extent(bs, file, false,
> +                          le64_to_cpu(header.disk_sectors),
> +                          le64_to_cpu(header.l1dir_offset) << 9,
> +                          0,
> +                          le64_to_cpu(header.l1dir_size) * 4,
> +                          4096,
> +                          le64_to_cpu(header.granularity)); /* always 512 */

This needs to be rebased; the vmdk_add_extent() signature was changed
in:

    commit 8aa1331c09a9b899f48d97f097bb49b7d458be1c
    Author: Fam Zheng <famz@redhat.com>
    Date:   Tue Aug 6 15:44:51 2013 +0800

        vmdk: check granularity field in opening

        Granularity is used to calculate the cluster size and allocate r/w
        buffer. Check the value from image before using it, so we don't abort()
        for unbounded memory allocation.

        Signed-off-by: Fam Zheng <famz@redhat.com>
        Signed-off-by: Kevin Wolf <kwolf@redhat.com>

Since the new function is a variant of vmdk_open_vmdk3(), would you
consider a tiny refactor to reduce the duplication? Also, l1dir_size and
granularity need to be checked, as in vmdk_open_vmdk4().
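To illustrate the kind of bounds check I mean, a minimal sketch follows. The function name and the L1_ENTRIES_MAX limit are made up for this example, not QEMU's actual constants; the point is just to reject a zero or non-power-of-two granularity and an absurd L1 size before allocating anything:

```c
#include <stdint.h>

/* Illustrative cap on the number of L1 entries we will allocate for.
 * The real limit should be derived from a sane maximum disk size. */
#define L1_ENTRIES_MAX (512 * 1024 * 8)

/* Return 0 if the header-derived geometry looks sane, -1 otherwise. */
static int check_vmfs_sparse_geometry(uint64_t l1dir_size,
                                      uint64_t granularity)
{
    /* granularity must be a non-zero power of two
     * (always 512 for vmfsSparse, but don't trust the image) */
    if (granularity == 0 || (granularity & (granularity - 1))) {
        return -1;
    }
    /* refuse absurd L1 sizes so we don't attempt an unbounded
     * memory allocation and abort() */
    if (l1dir_size == 0 || l1dir_size > L1_ENTRIES_MAX) {
        return -1;
    }
    return 0;
}
```

The open function would fail with -EINVAL (or similar) instead of passing unchecked values to vmdk_add_extent().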

> +    ret = vmdk_init_tables(bs, extent);
> +    if (ret) {
> +        /* free extent allocated by vmdk_add_extent */
> +        vmdk_free_last_extent(bs);
> +    }
> +    return ret;
> +}
> +
>  static int vmdk_open_desc_file(BlockDriverState *bs, int flags,
>                                 uint64_t desc_offset);
>  
> @@ -663,7 +691,7 @@ static int vmdk_parse_description(const char *desc, const char *opt_name,
>  /* Open an extent file and append to bs array */
>  static int vmdk_open_sparse(BlockDriverState *bs,
>                              BlockDriverState *file,
> -                            int flags)
> +                            int flags, bool vmfs_sparse)
>  {
>      uint32_t magic;
>  
> @@ -674,7 +702,11 @@ static int vmdk_open_sparse(BlockDriverState *bs,
>      magic = be32_to_cpu(magic);
>      switch (magic) {
>          case VMDK3_MAGIC:
> -            return vmdk_open_vmdk3(bs, file, flags);
> +            if (vmfs_sparse) {
> +                return vmdk_open_vmfs_sparse(bs, file, flags);
> +            } else {
> +                return vmdk_open_vmdk3(bs, file, flags);
> +            }
>              break;
>          case VMDK4_MAGIC:
>              return vmdk_open_vmdk4(bs, file, flags);
> @@ -718,7 +750,8 @@ static int vmdk_parse_extents(const char *desc, BlockDriverState *bs,
>          }
>  
>          if (sectors <= 0 ||
> -            (strcmp(type, "FLAT") && strcmp(type, "SPARSE")) ||
> +            (strcmp(type, "FLAT") && strcmp(type, "SPARSE") &&
> +             strcmp(type, "VMFSSPARSE")) ||
>              (strcmp(access, "RW"))) {
>              goto next_line;
>          }
> @@ -743,7 +776,14 @@ static int vmdk_parse_extents(const char *desc, BlockDriverState *bs,
>              extent->flat_start_offset = flat_offset << 9;
>          } else if (!strcmp(type, "SPARSE")) {
>              /* SPARSE extent */
> -            ret = vmdk_open_sparse(bs, extent_file, bs->open_flags);
> +            ret = vmdk_open_sparse(bs, extent_file, bs->open_flags, false);
> +            if (ret) {
> +                bdrv_delete(extent_file);
> +                return ret;
> +            }
> +        } else if (!strcmp(type, "VMFSSPARSE")) {
> +            /* VMFSSPARSE extent */
> +            ret = vmdk_open_sparse(bs, extent_file, bs->open_flags, true);
>              if (ret) {
>                  bdrv_delete(extent_file);
>                  return ret;
> @@ -789,6 +829,7 @@ static int vmdk_open_desc_file(BlockDriverState *bs, int flags,
>          goto exit;
>      }
>      if (strcmp(ct, "monolithicFlat") &&
> +        strcmp(ct, "vmfsSparse") &&
>          strcmp(ct, "twoGbMaxExtentSparse") &&
>          strcmp(ct, "twoGbMaxExtentFlat")) {
>          fprintf(stderr,
> @@ -808,7 +849,7 @@ static int vmdk_open(BlockDriverState *bs, QDict *options, int flags)
>      int ret;
>      BDRVVmdkState *s = bs->opaque;
>  
> -    if (vmdk_open_sparse(bs, bs->file, flags) == 0) {
> +    if (vmdk_open_sparse(bs, bs->file, flags, false) == 0) {
>          s->desc_offset = 0x200;
>      } else {
>          ret = vmdk_open_desc_file(bs, flags, 0);
> -- 
> 1.8.3.1
> 
> 

-- 
Fam

Thread overview: 9+ messages
2013-08-11 16:13 [Qemu-devel] [PATCH 0/2] vmdk: Support ESX files Paolo Bonzini
2013-08-11 16:13 ` [Qemu-devel] [PATCH 1/2] vmdk: support vmfsSparse files Paolo Bonzini
2013-08-12  1:31   ` Fam Zheng [this message]
2013-08-12  7:40     ` Paolo Bonzini
2013-08-12  9:10       ` Fam Zheng
2013-08-12  8:39   ` Andreas Färber
2013-08-12 11:28   ` Stefan Hajnoczi
2013-08-12 11:45     ` Stefan Hajnoczi
2013-08-11 16:13 ` [Qemu-devel] [PATCH 2/2] vmdk: support vmfs files Paolo Bonzini
