From: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
To: John Snow <jsnow@redhat.com>, P J P <ppandit@redhat.com>
Cc: qemu-ppc@nongnu.org, qemu-devel@nongnu.org, agraf@suse.de
Subject: Re: [Qemu-devel] [PATCH] macio: fix overflow in lba to offset conversion for ATAPI devices
Date: Tue, 5 Jan 2016 08:11:28 +0000
Message-ID: <568B7AB0.307@ilande.co.uk>
In-Reply-To: <568ADE08.8070806@redhat.com>

On 04/01/16 21:03, John Snow wrote:
> On 01/04/2016 03:54 PM, Mark Cave-Ayland wrote:
>> On 04/01/16 20:36, John Snow wrote:
>>
>>> On 01/04/2016 02:15 PM, Mark Cave-Ayland wrote:
>>>> On 04/01/16 19:04, P J P wrote:
>>>>
>>>>> +-- On Mon, 4 Jan 2016, Mark Cave-Ayland wrote --+
>>>>> | /* Calculate current offset */
>>>>> | - offset = (int64_t)(s->lba << 11) + s->io_buffer_index;
>>>>> | + offset = ((int64_t)(s->lba) << 11) + s->io_buffer_index;
>>>>>
>>>>> Maybe ((int64_t)s->lba << 11) ? No parenthesis around s->lba.
>>>>
>>>> Yes that works here too (perhaps I was just being over-cautious).
>>>> Alex/John, please let me know if you want me to resubmit.
>>>>
>>>
>>> PJP's version should work just fine. I won't ask you to resubmit, though...
>>
>> Great, thanks :)
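
(For anyone reading this from the archive later: a minimal standalone
sketch of why the cast has to happen before the shift. These are plain
local variables picked for illustration, not the real IDEState fields,
and the LBA value is just chosen so that the 32-bit shift overflows.)

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    /* 0x180000 sectors * 2048 bytes/sector = 3 GiB, which does not fit
     * in a signed 32-bit int once shifted. */
    int lba = 0x180000;

    /* Shift first, cast second: the shift is still done in 32-bit
     * arithmetic, so it overflows (formally undefined behaviour; on
     * common compilers it wraps to a negative value) before the cast
     * can widen anything. */
    int64_t bad = (int64_t)(lba << 11);

    /* Cast first, shift second: the whole expression is evaluated in
     * 64-bit arithmetic and yields the intended byte offset. */
    int64_t good = (int64_t)lba << 11;

    printf("bad  = %" PRId64 "\n", bad);   /* typically -1073741824 */
    printf("good = %" PRId64 "\n", good);  /* 3221225472 */
    return 0;
}
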
>>
>>> ...But, well, while we're here, I have a question for you:
>>>
>>> So s->lba is an int that we left shift by 11 for a max of (2^43 - 2^11)
>>> then we add it against s->io_buffer_index, a uint64_t, so this statement
>>> could still in theory overflow.
>>>
>>> Except not really, since io_buffer_index is bounded (in general) by
>>> io_buffer_total_len, which is usually (IDE_DMA_BUF_SECTORS*512 + 4) ->
>>> ~132K.
>>>
>>> I don't think there's any rigorous bounds-checking of io_buffer_index,
>>> just ad-hoc checking when we're good enough to remember to do it. And we
>>> don't seem to do it anywhere in macio. Is it worth peppering in an
>>> assert somewhere that io_buffer_index is reasonably small?
>>
>> The DBDMA engine is limited to 16-bit transfers, so the maximum transfer
>> size is 64K, and s->io_buffer_index is used to hold the current position
>> within this transfer, so unless we get some very large disks I think we
>> should be okay here?
>>
>
> For all non-malicious uses of the code, yes.
>
> If I want to apply some rigorous checking to this bound I should just
> add a function to manipulate it centrally in core.c, I think.
That sounds good - any solution that avoids having to maintain changes
across IDE core and macio separately is always good!
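
Purely for illustration (the helper name, the cut-down struct and the
exact policy below are all made up here, not what will actually land in
core.c), a single choke point for advancing the index would already give
us the invariant everywhere:

#include <assert.h>
#include <stdint.h>

/* Cut-down stand-in for IDEState with just the two fields that matter
 * for the bound being discussed above. */
typedef struct {
    int64_t io_buffer_index;
    int64_t io_buffer_total_len;
} IDEStateSketch;

/* Hypothetical central helper: the only place allowed to advance
 * io_buffer_index, so no individual caller (macio, BMDMA, ...) can
 * forget the bounds check. */
static void ide_buffer_index_advance(IDEStateSketch *s, int64_t bytes)
{
    assert(bytes >= 0);
    assert(s->io_buffer_index >= 0);
    assert(bytes <= s->io_buffer_total_len - s->io_buffer_index);
    s->io_buffer_index += bytes;
}

int main(void)
{
    IDEStateSketch s = { .io_buffer_index = 0,
                         .io_buffer_total_len = 131072 + 4 }; /* the ~132K bound mentioned above */
    ide_buffer_index_advance(&s, 2048);        /* fine */
    /* ide_buffer_index_advance(&s, 1 << 20);     would trip the assert */
    return 0;
}
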
> I'll pull this and edit it to PJP's suggestion.
Brilliant - thanks a lot!
ATB,
Mark.