* [Qemu-devel] [PATCH 1/2] block/vdi: Use {DIV_,}ROUND_UP
2014-10-21 8:51 [Qemu-devel] [PATCH 0/2] block/vdi: Fix bmap writing error Max Reitz
@ 2014-10-21 8:51 ` Max Reitz
2014-10-21 16:58 ` Eric Blake
2014-10-21 8:51 ` [Qemu-devel] [PATCH 2/2] block/vdi: Do not use bdrv_pwrite_sync() for bmap Max Reitz
2014-10-22 11:56 ` [Qemu-devel] [PATCH 0/2] block/vdi: Fix bmap writing error Kevin Wolf
2 siblings, 1 reply; 9+ messages in thread
From: Max Reitz @ 2014-10-21 8:51 UTC (permalink / raw)
To: qemu-devel
Cc: Kevin Wolf, Stefan Weil, Max Reitz, Stefan Hajnoczi,
Benoît Canet
There are macros for these operations, so make use of them.
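For reference, the two helpers boil down to something like this (an
illustrative sketch, not the exact definitions from the QEMU headers; the
bitwise ROUND_UP form assumes a power-of-two alignment, which holds for
SECTOR_SIZE):

    /* Round n up to the next multiple of the power-of-two d. */
    #define ROUND_UP(n, d)      (((n) + (d) - 1) & ~((d) - 1))

    /* Divide n by d, rounding the result up. */
    #define DIV_ROUND_UP(n, d)  (((n) + (d) - 1) / (d))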
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
block/vdi.c | 9 ++++-----
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/block/vdi.c b/block/vdi.c
index 9604721..19701ee 100644
--- a/block/vdi.c
+++ b/block/vdi.c
@@ -407,8 +407,7 @@ static int vdi_open(BlockDriverState *bs, QDict *options, int flags,
            We accept them but round the disk size to the next multiple of
            SECTOR_SIZE. */
         logout("odd disk size %" PRIu64 " B, round up\n", header.disk_size);
-        header.disk_size += SECTOR_SIZE - 1;
-        header.disk_size &= ~(SECTOR_SIZE - 1);
+        header.disk_size = ROUND_UP(header.disk_size, SECTOR_SIZE);
     }
 
     if (header.signature != VDI_SIGNATURE) {
@@ -475,7 +474,7 @@ static int vdi_open(BlockDriverState *bs, QDict *options, int flags,
     s->header = header;
 
     bmap_size = header.blocks_in_image * sizeof(uint32_t);
-    bmap_size = (bmap_size + SECTOR_SIZE - 1) / SECTOR_SIZE;
+    bmap_size = DIV_ROUND_UP(bmap_size, SECTOR_SIZE);
     s->bmap = qemu_try_blockalign(bs->file, bmap_size * SECTOR_SIZE);
     if (s->bmap == NULL) {
         ret = -ENOMEM;
@@ -736,10 +735,10 @@ static int vdi_create(const char *filename, QemuOpts *opts, Error **errp)
 
     /* We need enough blocks to store the given disk size,
        so always round up. */
-    blocks = (bytes + block_size - 1) / block_size;
+    blocks = DIV_ROUND_UP(bytes, block_size);
 
     bmap_size = blocks * sizeof(uint32_t);
-    bmap_size = ((bmap_size + SECTOR_SIZE - 1) & ~(SECTOR_SIZE -1));
+    bmap_size = ROUND_UP(bmap_size, SECTOR_SIZE);
 
     memset(&header, 0, sizeof(header));
     pstrcpy(header.text, sizeof(header.text), VDI_TEXT);
--
1.9.3
* Re: [Qemu-devel] [PATCH 1/2] block/vdi: Use {DIV_,}ROUND_UP
2014-10-21 8:51 ` [Qemu-devel] [PATCH 1/2] block/vdi: Use {DIV_,}ROUND_UP Max Reitz
@ 2014-10-21 16:58 ` Eric Blake
0 siblings, 0 replies; 9+ messages in thread
From: Eric Blake @ 2014-10-21 16:58 UTC (permalink / raw)
To: Max Reitz, qemu-devel
Cc: Kevin Wolf, Stefan Weil, Benoît Canet, Stefan Hajnoczi
On 10/21/2014 02:51 AM, Max Reitz wrote:
> There are macros for these operations, so make use of them.
>
> Signed-off-by: Max Reitz <mreitz@redhat.com>
> ---
> block/vdi.c | 9 ++++-----
> 1 file changed, 4 insertions(+), 5 deletions(-)
>
> @@ -475,7 +474,7 @@ static int vdi_open(BlockDriverState *bs, QDict *options, int flags,
> s->header = header;
>
> bmap_size = header.blocks_in_image * sizeof(uint32_t);
> - bmap_size = (bmap_size + SECTOR_SIZE - 1) / SECTOR_SIZE;
> + bmap_size = DIV_ROUND_UP(bmap_size, SECTOR_SIZE);
Is it worth consolidating these two assignments into one:
bmap_size = DIV_ROUND_UP(header.blocks_in_image * sizeof(uint32_t),
                         SECTOR_SIZE);
> @@ -736,10 +735,10 @@ static int vdi_create(const char *filename, QemuOpts *opts, Error **errp)
>
> /* We need enough blocks to store the given disk size,
> so always round up. */
> - blocks = (bytes + block_size - 1) / block_size;
> + blocks = DIV_ROUND_UP(bytes, block_size);
>
> bmap_size = blocks * sizeof(uint32_t);
> - bmap_size = ((bmap_size + SECTOR_SIZE - 1) & ~(SECTOR_SIZE -1));
> + bmap_size = ROUND_UP(bmap_size, SECTOR_SIZE);
and again?
Either way,
Reviewed-by: Eric Blake <eblake@redhat.com>
--
Eric Blake eblake redhat com +1-919-301-3266
Libvirt virtualization library http://libvirt.org
* [Qemu-devel] [PATCH 2/2] block/vdi: Do not use bdrv_pwrite_sync() for bmap
2014-10-21 8:51 [Qemu-devel] [PATCH 0/2] block/vdi: Fix bmap writing error Max Reitz
2014-10-21 8:51 ` [Qemu-devel] [PATCH 1/2] block/vdi: Use {DIV_,}ROUND_UP Max Reitz
@ 2014-10-21 8:51 ` Max Reitz
2014-10-21 17:00 ` Eric Blake
2014-10-22 11:56 ` [Qemu-devel] [PATCH 0/2] block/vdi: Fix bmap writing error Kevin Wolf
2 siblings, 1 reply; 9+ messages in thread
From: Max Reitz @ 2014-10-21 8:51 UTC (permalink / raw)
To: qemu-devel
Cc: Kevin Wolf, Stefan Weil, Max Reitz, Stefan Hajnoczi,
Benoît Canet
The bmap can be rather large (maximum blocks per image count:
0x3fffffff; the bmap has a size of block_count * sizeof(uint32_t) bytes,
which makes 0xfffffffc bytes) and exceed INT_MAX. Using block layer
functions which take a byte count as an int is therefore not a good
idea. Use bdrv_write()+bdrv_flush() instead of bdrv_pwrite_sync().
See: https://bugzilla.redhat.com/show_bug.cgi?id=1154940
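To spell out the worst case, here is a standalone sketch (not part of the
patch) of the arithmetic:

    #include <inttypes.h>
    #include <limits.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t max_blocks = 0x3fffffff;                    /* max blocks_in_image */
        uint64_t bmap_bytes = max_blocks * sizeof(uint32_t); /* 0xfffffffc */

        /* 4294967292 > 2147483647: the byte count does not fit into an int */
        printf("bmap: %" PRIu64 " bytes, INT_MAX: %d\n", bmap_bytes, INT_MAX);
        return 0;
    }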
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
block/vdi.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/block/vdi.c b/block/vdi.c
index 19701ee..322efcd 100644
--- a/block/vdi.c
+++ b/block/vdi.c
@@ -783,11 +783,18 @@ static int vdi_create(const char *filename, QemuOpts *opts, Error **errp)
                 bmap[i] = VDI_UNALLOCATED;
             }
         }
-        ret = bdrv_pwrite_sync(bs, offset, bmap, bmap_size);
+        assert(!(offset % BDRV_SECTOR_SIZE));
+        ret = bdrv_write(bs, offset / BDRV_SECTOR_SIZE, (uint8_t *)bmap,
+                         bmap_size / BDRV_SECTOR_SIZE);
         if (ret < 0) {
             error_setg(errp, "Error writing bmap to %s", filename);
             goto exit;
         }
+        ret = bdrv_flush(bs);
+        if (ret < 0) {
+            error_setg(errp, "Error flushing bmap to %s", filename);
+            goto exit;
+        }
         offset += bmap_size;
     }
--
1.9.3
* Re: [Qemu-devel] [PATCH 2/2] block/vdi: Do not use bdrv_pwrite_sync() for bmap
2014-10-21 8:51 ` [Qemu-devel] [PATCH 2/2] block/vdi: Do not use bdrv_pwrite_sync() for bmap Max Reitz
@ 2014-10-21 17:00 ` Eric Blake
0 siblings, 0 replies; 9+ messages in thread
From: Eric Blake @ 2014-10-21 17:00 UTC (permalink / raw)
To: Max Reitz, qemu-devel
Cc: Kevin Wolf, Stefan Weil, Benoît Canet, Stefan Hajnoczi
On 10/21/2014 02:51 AM, Max Reitz wrote:
> The bmap can be rather large (maximum blocks per image count:
> 0x3fffffff; the bmap has a size of block_count * sizeof(uint32_t) bytes,
> which makes 0xfffffffc bytes) and exceed INT_MAX. Using block layer
> functions which take a byte count as an int is therefore not a good
> idea. Use bdrv_write()+bdrv_flush() instead of bdrv_pwrite_sync().
>
> See: https://bugzilla.redhat.com/show_bug.cgi?id=1154940
>
> Signed-off-by: Max Reitz <mreitz@redhat.com>
> ---
> block/vdi.c | 9 ++++++++-
> 1 file changed, 8 insertions(+), 1 deletion(-)
Reviewed-by: Eric Blake <eblake@redhat.com>
--
Eric Blake eblake redhat com +1-919-301-3266
Libvirt virtualization library http://libvirt.org
* Re: [Qemu-devel] [PATCH 0/2] block/vdi: Fix bmap writing error
2014-10-21 8:51 [Qemu-devel] [PATCH 0/2] block/vdi: Fix bmap writing error Max Reitz
2014-10-21 8:51 ` [Qemu-devel] [PATCH 1/2] block/vdi: Use {DIV_,}ROUND_UP Max Reitz
2014-10-21 8:51 ` [Qemu-devel] [PATCH 2/2] block/vdi: Do not use bdrv_pwrite_sync() for bmap Max Reitz
@ 2014-10-22 11:56 ` Kevin Wolf
2014-10-23 7:07 ` Max Reitz
2014-10-23 7:07 ` Max Reitz
2 siblings, 2 replies; 9+ messages in thread
From: Kevin Wolf @ 2014-10-22 11:56 UTC (permalink / raw)
To: Max Reitz; +Cc: Stefan Weil, qemu-devel, Stefan Hajnoczi, Benoît Canet
On 21.10.2014 at 10:51, Max Reitz wrote:
> The bmap size in block/vdi.c may exceed INT_MAX. Using
> bdrv_pwrite_sync() (which takes an int byte count) is therefore not a
> good idea. The second patch of this series fixes this by replacing
> bdrv_pwrite_sync() by bdrv_write()+bdrv_flush() (we don't need the p in
> pwrite here).
>
> The first patch employs ROUND_UP() and DIV_ROUND_UP() in block/vdi.c, so
> you are reminded that bmap_size is aligned to BDRV_SECTOR_SIZE for the
> second patch.
>
> See https://bugzilla.redhat.com/show_bug.cgi?id=1154940 for a bug
> report.
>
> I will not include an iotest in this series because this would require
> qemu to allocate and then write about 2G of data; yes, test 1 in 084
> fails for me because qemu cannot allocate 4G for the bmap.
>
> In fact, I can only test this once I'm home where I have more RAM
> available (I made the mistake of activating swap space to test this only
> once).
Thanks, applied to the block branch.
Kevin
* Re: [Qemu-devel] [PATCH 0/2] block/vdi: Fix bmap writing error
2014-10-22 11:56 ` [Qemu-devel] [PATCH 0/2] block/vdi: Fix bmap writing error Kevin Wolf
@ 2014-10-23 7:07 ` Max Reitz
2014-10-23 7:07 ` Max Reitz
1 sibling, 0 replies; 9+ messages in thread
From: Max Reitz @ 2014-10-23 7:07 UTC (permalink / raw)
To: Kevin Wolf; +Cc: Stefan Weil, qemu-devel, Stefan Hajnoczi, Benoît Canet
On 2014-10-22 at 13:56, Kevin Wolf wrote:
> On 21.10.2014 at 10:51, Max Reitz wrote:
>> The bmap size in block/vdi.c may exceed INT_MAX. Using
>> bdrv_pwrite_sync() (which takes an int byte count) is therefore not a
>> good idea. The second patch of this series fixes this by replacing
>> bdrv_pwrite_sync() by bdrv_write()+bdrv_flush() (we don't need the p in
>> pwrite here).
>>
>> The first patch employs ROUND_UP() and DIV_ROUND_UP() in block/vdi.c, so
>> you are reminded that bmap_size is aligned to BDRV_SECTOR_SIZE for the
>> second patch.
>>
>> See https://bugzilla.redhat.com/show_bug.cgi?id=1154940 for a bug
>> report.
>>
>> I will not include an iotest in this series because this would require
>> qemu to allocate and then write about 2G of data; yes, test 1 in 084
>> fails for me because qemu cannot allocate 4G for the bmap.
>>
>> In fact, I can only test this once I'm home where I have more RAM
>> available (I made the mistake of activating swap space to test this only
>> once).
> Thanks, applied to the block branch.
So I tested it yesterday and it turns out it does not fix the bug. I'm
sorry, could you unapply patch 2?
The reason it doesn't work is, as you personally said to me yesterday,
that bdrv_write() goes through the bdrv_pwrite() path as well. Well, it
does not really, but it does test whether nb_sectors > INT_MAX /
BDRV_SECTOR_SIZE. Therefore, writing the bitmap still fails with EINVAL
(for images >= 512 TB).
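To put numbers on it, here is a quick standalone sketch (assuming the
default 1 MiB block size; constant names are mine, not QEMU's):

    #include <inttypes.h>
    #include <limits.h>
    #include <stdint.h>
    #include <stdio.h>

    #define SECTOR_BYTES 512ULL       /* i.e. BDRV_SECTOR_SIZE */
    #define BLOCK_BYTES  (1ULL << 20) /* default VDI block size */

    int main(void)
    {
        uint64_t image_bytes  = 512ULL << 40;              /* 512 TB image */
        uint64_t blocks       = image_bytes / BLOCK_BYTES; /* 0x20000000 */
        uint64_t bmap_bytes   = blocks * sizeof(uint32_t); /* 2 GB bmap */
        uint64_t bmap_sectors = bmap_bytes / SECTOR_BYTES; /* 4194304 */

        /* bdrv_write() rejects anything above INT_MAX / 512 == 4194303 sectors */
        printf("bmap sectors: %" PRIu64 ", limit: %d\n",
               bmap_sectors, (int)(INT_MAX / SECTOR_BYTES));
        return 0;
    }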
Now, we could either fix VDI; or we simply limit the maximum image size
to half what it is now. I don't see a problem in doing the latter, since
qemu already does not support all image sizes, so there's no reason why
we should not limit the size even further. Also, 512 TB seems plenty today.
So unless someone is against this, I'll adjust VDI_BLOCKS_IN_IMAGE to be
INT_MAX / sizeof(uint32_t) (it's currently UINT32_MAX / sizeof(uint32_t)
== 0x3fffffff). This will be problematic if sizeof(int) >
sizeof(uint32_t), but I doubt that'll happen soon, if at all.
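In code, the change I have in mind is roughly this (untested, macro name as
used above):

    /* was: UINT32_MAX / sizeof(uint32_t) == 0x3fffffff */
    #define VDI_BLOCKS_IN_IMAGE  (INT_MAX / sizeof(uint32_t))  /* 0x1fffffff */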
Max
* Re: [Qemu-devel] [PATCH 0/2] block/vdi: Fix bmap writing error
2014-10-22 11:56 ` [Qemu-devel] [PATCH 0/2] block/vdi: Fix bmap writing error Kevin Wolf
2014-10-23 7:07 ` Max Reitz
@ 2014-10-23 7:07 ` Max Reitz
2014-10-23 13:24 ` Max Reitz
1 sibling, 1 reply; 9+ messages in thread
From: Max Reitz @ 2014-10-23 7:07 UTC (permalink / raw)
To: Kevin Wolf; +Cc: Stefan Weil, qemu-devel, Stefan Hajnoczi, Benoît Canet
On 2014-10-22 at 13:56, Kevin Wolf wrote:
> On 21.10.2014 at 10:51, Max Reitz wrote:
>> The bmap size in block/vdi.c may exceed INT_MAX. Using
>> bdrv_pwrite_sync() (which takes an int byte count) is therefore not a
>> good idea. The second patch of this series fixes this by replacing
>> bdrv_pwrite_sync() by bdrv_write()+bdrv_flush() (we don't need the p in
>> pwrite here).
>>
>> The first patch employs ROUND_UP() and DIV_ROUND_UP() in block/vdi.c, so
>> you are reminded that bmap_size is aligned to BDRV_SECTOR_SIZE for the
>> second patch.
>>
>> See https://bugzilla.redhat.com/show_bug.cgi?id=1154940 for a bug
>> report.
>>
>> I will not include an iotest in this series because this would require
>> qemu to allocate and then write about 2G of data; yes, test 1 in 084
>> fails for me because qemu cannot allocate 4G for the bmap.
>>
>> In fact, I can only test this once I'm home where I have more RAM
>> available (I made the mistake of activating swap space to test this only
>> once).
> Thanks, applied to the block branch.
So I tested it yesterday and it turns out it does not fix the bug. I'm
sorry, could you unapply patch 2?
The reason it doesn't work is, as you personally said to me yesterday,
that bdrv_write() goes through the bdrv_pwrite() path as well. Well, it
does not really, but it does test whether nb_sectors > INT_MAX /
BDRV_SECTOR_SIZE. Therefore, writing the bitmap still fails with EINVAL
(for images >= 512 TB).
Now, we could either actually fix the VDI code; or we simply limit the
maximum image size to half what it is now. I don't see a problem in
doing the latter, since qemu already does not support all image sizes,
so there's no reason why we should not limit the size even further.
Also, 512 TB seems plenty today.
So unless someone is against this, I'll adjust VDI_BLOCKS_IN_IMAGE to be
INT_MAX / sizeof(uint32_t) (it's currently UINT32_MAX / sizeof(uint32_t)
== 0x3fffffff). This will be problematic if sizeof(int) >
sizeof(uint32_t), but I doubt that'll happen soon, if at all.
Max
* Re: [Qemu-devel] [PATCH 0/2] block/vdi: Fix bmap writing error
2014-10-23 7:07 ` Max Reitz
@ 2014-10-23 13:24 ` Max Reitz
0 siblings, 0 replies; 9+ messages in thread
From: Max Reitz @ 2014-10-23 13:24 UTC (permalink / raw)
To: Kevin Wolf; +Cc: Stefan Weil, qemu-devel, Stefan Hajnoczi, Benoît Canet
On 2014-10-23 at 09:07, Max Reitz wrote:
> On 2014-10-22 at 13:56, Kevin Wolf wrote:
>> On 21.10.2014 at 10:51, Max Reitz wrote:
>>> The bmap size in block/vdi.c may exceed INT_MAX. Using
>>> bdrv_pwrite_sync() (which takes an int byte count) is therefore not a
>>> good idea. The second patch of this series fixes this by replacing
>>> bdrv_pwrite_sync() by bdrv_write()+bdrv_flush() (we don't need the p in
>>> pwrite here).
>>>
>>> The first patch employs ROUND_UP() and DIV_ROUND_UP() in
>>> block/vdi.c, so
>>> you are reminded that bmap_size is aligned to BDRV_SECTOR_SIZE for the
>>> second patch.
>>>
>>> See https://bugzilla.redhat.com/show_bug.cgi?id=1154940 for a bug
>>> report.
>>>
>>> I will not include an iotest in this series because this would require
>>> qemu to allocate and then write about 2G of data; yes, test 1 in 084
>>> fails for me because qemu cannot allocate 4G for the bmap.
>>>
>>> In fact, I can only test this once I'm home where I have more RAM
>>> available (I made the mistake of activating swap space to test this
>>> only
>>> once).
>> Thanks, applied to the block branch.
>
> So I tested it yesterday and it turns out it does not fix the bug. I'm
> sorry, could you unapply patch 2?
I see you haven't dropped it yet. Well, I just thought about it and of
course we can keep it in. It just doesn't fix the error its commit
message claims to fix, but it definitely doesn't make matters worse.
Max
> The reason it doesn't work is, as you personally said to me yesterday,
> that bdrv_write() goes through the bdrv_pwrite() path as well. Well,
> it does not really, but it does test whether nb_sectors > INT_MAX /
> BDRV_SECTOR_SIZE. Therefore, writing the bitmap still fails with
> EINVAL (for images >= 512 TB).
>
> Now, we could either actually fix the VDI code; or we simply limit the
> maximum image size to half what it is now. I don't see a problem in
> doing the latter, since qemu already does not support all image sizes,
> so there's no reason why we should not limit the size even further.
> Also, 512 TB seems plenty today.
>
> So unless someone is against this, I'll adjust VDI_BLOCKS_IN_IMAGE to
> be INT_MAX / sizeof(uint32_t) (it's currently UINT32_MAX /
> sizeof(uint32_t) == 0x3fffffff). This will be problematic if
> sizeof(int) > sizeof(uint32_t), but I doubt that'll happen soon, if at
> all.
>
> Max