qemu-devel.nongnu.org archive mirror
* [Qemu-devel] vpc max table entries calculation error
@ 2016-11-29 21:07 Nick Owens
  2016-12-07 13:16 ` Kevin Wolf
  0 siblings, 1 reply; 2+ messages in thread
From: Nick Owens @ 2016-11-29 21:07 UTC (permalink / raw)
  To: qemu-devel

I'm writing to discuss an issue in QEMU's VPC image creation.

When creating a dynamic-type VPC image with qemu-img like so:

$ qemu-img create -f vpc vhd.vhd 100M
Formatting 'vhd.vhd', fmt=vpc size=104857600

and then inspecting the file (with a tool I wrote; it's also easy to see in
a hex dump):

$ vhd-inspect ./vhd.vhd
MaxTableEntries: 51
BlockSize: 2097152
PhysicalSize: 104865792
VirtualSize: 104865792
PhysicalSize/BlockSize = 50

We see that MaxTableEntries differs from PhysicalSize/BlockSize.

In the VHD specification ([1] or [2]), it says:

"Max Table Entries
This field holds the maximum entries present in the BAT. This should be
equal to the number of blocks in the disk (that is, the disk size divided
by the block size)."

However, in the QEMU function 'create_dynamic_disk' in block/vpc.c, we can
see that one additional block is added in the calculation of
num_bat_entries (the same value as MaxTableEntries):

num_bat_entries = (total_sectors + block_size / 512) / (block_size / 512);

So, I tried to fix this by removing the extra '+ block_size / 512'.
However, that seems to break some assumptions in 'vpc_open', namely this
code:

        computed_size = (uint64_t) s->max_table_entries * s->block_size;
        if (computed_size < bs->total_sectors * 512) {
            error_setg(errp, "Page table too small");
            ret = -EINVAL;
            goto fail;
        }

On the other hand, if I create the dynamic VPC image with '-o force_size',
the disk size computation ends up slightly different, apparently because
CHS geometry is not used, and the check passes.

So, I am not sure what the right fix is here, as VPC seems very messy, but
I do think this is a bug, because the incorrect MaxTableEntries causes
other tools to miscompute the real disk size. When these dynamic-type VHDs
with an incorrect MaxTableEntries are converted to fixed-type and uploaded
to Microsoft Azure, the hypervisor rejects the image.

Does anyone have an idea of the correct way to fix this?

[1] https://technet.microsoft.com/en-us/virtualization/bb676673.aspx
[2] https://docs.google.com/document/d/1RWssryIPuH_5isISxu9cGisyOfAV8s1_-e-YhhiF-jY/edit


* Re: [Qemu-devel] vpc max table entries calculation error
  2016-11-29 21:07 [Qemu-devel] vpc max table entries calculation error Nick Owens
@ 2016-12-07 13:16 ` Kevin Wolf
  0 siblings, 0 replies; 2+ messages in thread
From: Kevin Wolf @ 2016-12-07 13:16 UTC (permalink / raw)
  To: Nick Owens; +Cc: qemu-devel, qemu-block

Am 29.11.2016 um 22:07 hat Nick Owens geschrieben:
> I'm writing to discuss an issue in QEMU's VPC image creation.
> 
> When creating a dynamic-type VPC image with qemu-img like so:
> 
> $ qemu-img create -f vpc vhd.vhd 100M
> Formatting 'vhd.vhd', fmt=vpc size=104857600
> 
> and then inspecting the file (with a tool I wrote; it's also easy to see in
> a hex dump):
> 
> $ vhd-inspect ./vhd.vhd
> MaxTableEntries: 51
> BlockSize: 2097152
> PhysicalSize: 104865792
> VirtualSize: 104865792
> PhysicalSize/BlockSize = 50
> 
> We see that MaxTableEntries differs from PhysicalSize/BlockSize.

You did an integer (floor) division, which you can't really do here: with
only 50 blocks, the BAT couldn't describe the part of the image after the
last complete block. The real value is PhysicalSize/BlockSize = 50.00390625,
and rounding up is the only sane thing to do to give you an image with
this specific size.

> So, I am not sure what the right fix is here, as VPC seems very messy, but
> I do think this is a bug, because the incorrect MaxTableEntries causes
> other tools to miscompute the real disk size. When these dynamic-type VHDs
> with an incorrect MaxTableEntries are converted to fixed-type and uploaded
> to Microsoft Azure, the hypervisor rejects the image.
> 
> Does anyone have an idea of the correct way to fix this?

You must use a multiple of the block size for the image size if you want
an exact result without any rounding. You'll have to use force_size=on
because adjusting the size to CHS will most likely fail to keep the size
at a multiple of the block size.

Yes, calling VHD "very messy" seems to be quite accurate. Creating an
image that works the same way both on Azure and the original Virtual PC
is almost impossible.

Kevin

