qemu-devel.nongnu.org archive mirror
From: Max Reitz <mreitz@redhat.com>
To: Jeff Cody <jcody@redhat.com>
Cc: kwolf@redhat.com, stefanha@redhat.com, qemu-devel@nongnu.org,
	qemu-block@nongnu.org
Subject: Re: [Qemu-devel] [Qemu-block] [PATCH v2 1/2] block: vpc - prevent overflow if max_table_entries >= 0x40000000
Date: Wed, 22 Jul 2015 19:29:47 +0200	[thread overview]
Message-ID: <55AFD30B.7040105@redhat.com> (raw)
In-Reply-To: <20150722172608.GA6814@localhost.localdomain>

On 22.07.2015 19:26, Jeff Cody wrote:
> On Wed, Jul 22, 2015 at 07:02:02PM +0200, Max Reitz wrote:
>> On 21.07.2015 18:13, Jeff Cody wrote:
>>> When we allocate the pagetable based on max_table_entries, we multiply
>>> the max table entry value by 4 to accommodate a table of 32-bit integers.
>>> However, max_table_entries is a uint32_t, and the VPC driver accepts
>>> ranges for that entry over 0x40000000.  So during this allocation:
>>>
>>> s->pagetable = qemu_try_blockalign(bs->file, s->max_table_entries * 4);
>>>
>>> The size arg overflows, allocating significantly less memory than
>>> expected.
>>>
>>> Since qemu_try_blockalign() size argument is size_t, cast the
>>> multiplication correctly to prevent overflow.
>>>
>>> The value of "max_table_entries * 4" is used elsewhere in the code as
>>> well, so store the correct value for use in all those cases.
>>>
>>> We also check the max_table_entries value, to make sure that it is <
>>> SIZE_MAX / 4, so we know the pagetable size will fit in size_t.
>>>
>>> Reported-by: Richard W.M. Jones <rjones@redhat.com>
>>> Signed-off-by: Jeff Cody <jcody@redhat.com>
>>> ---
>>>   block/vpc.c | 17 +++++++++++++----
>>>   1 file changed, 13 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/block/vpc.c b/block/vpc.c
>>> index 37572ba..6ef21e6 100644
>>> --- a/block/vpc.c
>>> +++ b/block/vpc.c
>>> @@ -168,6 +168,7 @@ static int vpc_open(BlockDriverState *bs, QDict *options, int flags,
>>>       uint8_t buf[HEADER_SIZE];
>>>       uint32_t checksum;
>>>       uint64_t computed_size;
>>> +    uint64_t pagetable_size;
>>>       int disk_type = VHD_DYNAMIC;
>>>       int ret;
>>>
>>> @@ -269,7 +270,16 @@ static int vpc_open(BlockDriverState *bs, QDict *options, int flags,
>>>               goto fail;
>>>           }
>>>
>>> -        s->pagetable = qemu_try_blockalign(bs->file, s->max_table_entries * 4);
>>> +        if (s->max_table_entries > SIZE_MAX / 4) {
>>> +            error_setg(errp, "Max Table Entries too large (%" PRId32 ")",
>>> +                        s->max_table_entries);
>>> +            ret = -EINVAL;
>>> +            goto fail;
>>> +        }
>>> +
>>> +        pagetable_size = (uint64_t) s->max_table_entries * 4;
>>> +
>>> +        s->pagetable = qemu_try_blockalign(bs->file, pagetable_size);
>>>           if (s->pagetable == NULL) {
>>>               ret = -ENOMEM;
>>>               goto fail;
>>> @@ -277,14 +287,13 @@ static int vpc_open(BlockDriverState *bs, QDict *options, int flags,
>>>
>>>           s->bat_offset = be64_to_cpu(dyndisk_header->table_offset);
>>>
>>> -        ret = bdrv_pread(bs->file, s->bat_offset, s->pagetable,
>>> -                         s->max_table_entries * 4);
>>> +        ret = bdrv_pread(bs->file, s->bat_offset, s->pagetable, pagetable_size);
>>
>> The last parameter of bdrv_pread() is an int, so this will still
>> overflow if s->max_table_entries > INT_MAX / 4.
>>
>
> I thought about checking for INT_MAX overflow as well - I probably
> should.  I wasn't too concerned about it, however, as bdrv_read() will
> return -EINVAL for nb_sectors < 0.

Right, but SIZE_MAX may be larger than 2ull * INT_MAX, so on 64 bit 
systems, pagetable_size might become large enough for 
(int)pagetable_size to be positive, with (int)pagetable_size != 
pagetable_size, which would make bdrv_pread() work just fine, but only 
read a part of the table.

Max

>>>           if (ret < 0) {
>>>               goto fail;
>>>           }
>>>
>>>           s->free_data_block_offset =
>>> -            (s->bat_offset + (s->max_table_entries * 4) + 511) & ~511;
>>> +            (s->bat_offset + pagetable_size + 511) & ~511;
>>
>> Not necessary, but perhaps we should be using ROUND_UP() here...
>>
>
> Sure, why not :)
>
>> Max
>>
>>>           for (i = 0; i < s->max_table_entries; i++) {
>>>               be32_to_cpus(&s->pagetable[i]);
>>>
>>

Thread overview: 8+ messages
2015-07-21 16:13 [Qemu-devel] [PATCH v2 0/2] block: vpc - prevent overflow Jeff Cody
2015-07-21 16:13 ` [Qemu-devel] [PATCH v2 1/2] block: vpc - prevent overflow if max_table_entries >= 0x40000000 Jeff Cody
2015-07-22 17:02   ` [Qemu-devel] [Qemu-block] " Max Reitz
2015-07-22 17:26     ` Jeff Cody
2015-07-22 17:29       ` Max Reitz [this message]
2015-07-22 17:40         ` Jeff Cody
2015-07-22 18:00           ` Max Reitz
2015-07-21 16:13 ` [Qemu-devel] [PATCH v2 2/2] block: qemu-iotests - add check for multiplication overflow in vpc Jeff Cody
