linux-scsi.vger.kernel.org archive mirror
From: Paolo Bonzini <pbonzini@redhat.com>
To: Alan Stern <stern@rowland.harvard.edu>
Cc: Dmitry Vyukov <dvyukov@google.com>,
	linux-scsi@vger.kernel.org, richard@r-senior.demon.co.uk,
	ltuikov@yahoo.com, jbottomley@parallels.com,
	Andrey Konovalov <andreyknvl@google.com>,
	Kostya Serebryany <kcc@google.com>
Subject: Re: Potential out-of-bounds access in drivers/scsi/sd.c
Date: Wed, 04 Sep 2013 16:45:45 +0200
Message-ID: <52274799.4090605@redhat.com>
In-Reply-To: <Pine.LNX.4.44L0.1309041030420.1186-100000@iolanthe.rowland.org>

On 04/09/2013 16:32, Alan Stern wrote:
> On Wed, 4 Sep 2013, Dmitry Vyukov wrote:
> 
>> Hi,
>>
>> We are working on a memory error detector AddressSanitizer for Linux
>> kernel (https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel),
>> it can detect use-after-free and buffer-overflow errors.
> 
> ...
> 
>> The code in sd_read_cache_type does the following:
>>
>> while (offset < len) {
>> ...
>> }
>> ...
>> if ((buffer[offset] & 0x3f) != modepage) {
>>     sd_printk(KERN_ERR, sdkp, "Got wrong page\n");
>>     goto defaults;
>> }
>>
>> When control leaves the while loop, offset >= len, so buffer[offset]
>> reads random garbage out of bounds.
>> In the worst case this can lead to a crash, or if (buffer[offset] & 0x3f)
>> happens to be == modepage, the code will go on to read more garbage.
>>
>> Please help validate and triage this.
> 
> The tool's output is correct.  The patch below should fix it.
> 
> Alan Stern
> 
> 
> 
> Index: usb-3.11/drivers/scsi/sd.c
> ===================================================================
> --- usb-3.11.orig/drivers/scsi/sd.c
> +++ usb-3.11/drivers/scsi/sd.c
> @@ -2419,7 +2419,7 @@ sd_read_cache_type(struct scsi_disk *sdk
>  			}
>  		}
>  
> -		if (modepage == 0x3F) {
> +		if (modepage == 0x3F || offset + 2 >= len) {
>  			sd_printk(KERN_ERR, sdkp, "No Caching mode page "
>  				  "present\n");
>  			goto defaults;

If you do this, the buggy "if" becomes dead code (the loop above doesn't
have any "break", so you know that offset >= len and the new condition
is always true).

So the patch does indeed prevent the bug, but the code can be simplified.

Paolo

Thread overview: 10+ messages
2013-09-04  0:31 Potential out-of-bounds access in drivers/scsi/sd.c Dmitry Vyukov
2013-09-04 14:32 ` Alan Stern
2013-09-04 14:45   ` Paolo Bonzini [this message]
2013-09-04 15:42     ` Alan Stern
2013-09-05 13:38       ` Hannes Reinecke
2013-09-06 15:49         ` [PATCH] SCSI: Fix potential " Alan Stern
2013-09-06 16:24           ` Paolo Bonzini
2013-09-09  6:25             ` Hannes Reinecke
2013-09-04 15:37   ` Potential " Dmitry Vyukov
2013-09-04 15:42     ` Alan Stern
