public inbox for linux-kernel@vger.kernel.org
* x86 pat.c:phys_mem_access_prot_allowed() bogosity
@ 2008-04-27 12:31 Adrian Bunk
  2008-04-27 13:11 ` Ingo Molnar
  0 siblings, 1 reply; 2+ messages in thread
From: Adrian Bunk @ 2008-04-27 12:31 UTC (permalink / raw)
  To: Venkatesh Pallipadi, Suresh Siddha, Ingo Molnar, tglx, hpa; +Cc: linux-kernel

Commit e7f260a276f2c9184fe753732d834b1f6fbe9f17
(x86: PAT use reserve free memtype in mmap of /dev/mem)
added the following gem to arch/x86/mm/pat.c:

<--  snip  -->

...
int phys_mem_access_prot_allowed(struct file *file, unsigned long pfn,
                                unsigned long size, pgprot_t *vma_prot)
{
        u64 offset = ((u64) pfn) << PAGE_SHIFT;
        unsigned long flags = _PAGE_CACHE_UC_MINUS;
        unsigned long ret_flags;
...
...  (nothing that touches ret_flags)
...
        if (flags != _PAGE_CACHE_UC_MINUS) {
                retval = reserve_memtype(offset, offset + size, flags, NULL);
        } else {
                retval = reserve_memtype(offset, offset + size, -1, &ret_flags);
        }

        if (retval < 0)
                return 0;

        flags = ret_flags;

        if (pfn <= max_pfn_mapped &&
            ioremap_change_attr((unsigned long)__va(offset), size, flags) < 0) {
                free_memtype(offset, offset + size);
                printk(KERN_INFO
                "%s:%d /dev/mem ioremap_change_attr failed %s for %Lx-%Lx\n",
                        current->comm, current->pid,
                        cattr_name(flags),
                        offset, offset + size);
                return 0;
        }

        *vma_prot = __pgprot((pgprot_val(*vma_prot) & ~_PAGE_CACHE_MASK) |
                             flags);
        return 1;
}

<--  snip  -->

If (flags != _PAGE_CACHE_UC_MINUS) we pass garbage from the stack to 
ioremap_change_attr() and/or __pgprot().

Spotted by the Coverity checker.

cu
Adrian

-- 

       "Is there not promise of rain?" Ling Tan asked suddenly out
        of the darkness. There had been need of rain for many days.
       "Only a promise," Lao Er said.
                                       Pearl S. Buck - Dragon Seed



* Re: x86 pat.c:phys_mem_access_prot_allowed() bogosity
  2008-04-27 12:31 x86 pat.c:phys_mem_access_prot_allowed() bogosity Adrian Bunk
@ 2008-04-27 13:11 ` Ingo Molnar
  0 siblings, 0 replies; 2+ messages in thread
From: Ingo Molnar @ 2008-04-27 13:11 UTC (permalink / raw)
  To: Adrian Bunk; +Cc: Venkatesh Pallipadi, Suresh Siddha, tglx, hpa, linux-kernel


* Adrian Bunk <bunk@kernel.org> wrote:

> Commit e7f260a276f2c9184fe753732d834b1f6fbe9f17
> (x86: PAT use reserve free memtype in mmap of /dev/mem)
> added the following gem to arch/x86/mm/pat.c:
> 
> <--  snip  -->
> 
> ...
> int phys_mem_access_prot_allowed(struct file *file, unsigned long pfn,
>                                 unsigned long size, pgprot_t *vma_prot)
> {
>         u64 offset = ((u64) pfn) << PAGE_SHIFT;
>         unsigned long flags = _PAGE_CACHE_UC_MINUS;
>         unsigned long ret_flags;
> ...
> ...  (nothing that touches ret_flags)
> ...
>         if (flags != _PAGE_CACHE_UC_MINUS) {
>                 retval = reserve_memtype(offset, offset + size, flags, NULL);
>         } else {
>                 retval = reserve_memtype(offset, offset + size, -1, &ret_flags);
>         }
> 
>         if (retval < 0)
>                 return 0;
> 
>         flags = ret_flags;
> 
>         if (pfn <= max_pfn_mapped &&
>             ioremap_change_attr((unsigned long)__va(offset), size, flags) < 0) {
>                 free_memtype(offset, offset + size);
>                 printk(KERN_INFO
>                 "%s:%d /dev/mem ioremap_change_attr failed %s for %Lx-%Lx\n",
>                         current->comm, current->pid,
>                         cattr_name(flags),
>                         offset, offset + size);
>                 return 0;
>         }
> 
>         *vma_prot = __pgprot((pgprot_val(*vma_prot) & ~_PAGE_CACHE_MASK) |
>                              flags);
>         return 1;
> }
> 
> <--  snip  -->
> 
> If (flags != _PAGE_CACHE_UC_MINUS) we pass garbage from the stack to 
> ioremap_change_attr() and/or __pgprot().
> 
> Spotted by the Coverity checker.

thanks Adrian - i've queued up the fix below.

Venkatesh, the code flow in reserve_memtype() is still not as simple as 
it could be i believe - and the code flow complication directly resulted 
in this bug.

For example we should never pass in a NULL flags pointer - that way we 
could get rid of the NULL pointer checking: just fill in the return 
value unconditionally and don't use it at the return site if it's not 
needed.

Another area to improve would be to merge the return code and the flags 
value - i.e. to not pass in a return value pointer at all. All 
_PAGE_CACHE_* flags are positive integers, so using negatives as a 
failure condition would still be OK. The special '-1 == wildcard' 
meaning for flags could still be kept. Hm?

	Ingo

-------------------->
Subject: x86: PAT fix
From: Ingo Molnar <mingo@elte.hu>
Date: Fri Mar 21 15:42:28 CET 2008

Adrian Bunk noticed the following Coverity report:

> Commit e7f260a276f2c9184fe753732d834b1f6fbe9f17
> (x86: PAT use reserve free memtype in mmap of /dev/mem)
> added the following gem to arch/x86/mm/pat.c:
>
> <--  snip  -->
>
> ...
> int phys_mem_access_prot_allowed(struct file *file, unsigned long pfn,
>                                 unsigned long size, pgprot_t *vma_prot)
> {
>         u64 offset = ((u64) pfn) << PAGE_SHIFT;
>         unsigned long flags = _PAGE_CACHE_UC_MINUS;
>         unsigned long ret_flags;
> ...
> ...  (nothing that touches ret_flags)
> ...
>         if (flags != _PAGE_CACHE_UC_MINUS) {
>                 retval = reserve_memtype(offset, offset + size, flags, NULL);
>         } else {
>                 retval = reserve_memtype(offset, offset + size, -1, &ret_flags);
>         }
>
>         if (retval < 0)
>                 return 0;
>
>         flags = ret_flags;
>
>         if (pfn <= max_pfn_mapped &&
>             ioremap_change_attr((unsigned long)__va(offset), size, flags) < 0) {
>                 free_memtype(offset, offset + size);
>                 printk(KERN_INFO
>                 "%s:%d /dev/mem ioremap_change_attr failed %s for %Lx-%Lx\n",
>                         current->comm, current->pid,
>                         cattr_name(flags),
>                         offset, offset + size);
>                 return 0;
>         }
>
>         *vma_prot = __pgprot((pgprot_val(*vma_prot) & ~_PAGE_CACHE_MASK) |
>                              flags);
>         return 1;
> }
>
> <--  snip  -->
>
> If (flags != _PAGE_CACHE_UC_MINUS) we pass garbage from the stack to
> ioremap_change_attr() and/or __pgprot().
>
> Spotted by the Coverity checker.

the fix simplifies the code as we get rid of the 'ret_flags'
complication.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
---
 arch/x86/mm/pat.c |    5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

Index: linux-x86.q/arch/x86/mm/pat.c
===================================================================
--- linux-x86.q.orig/arch/x86/mm/pat.c
+++ linux-x86.q/arch/x86/mm/pat.c
@@ -510,7 +510,6 @@ int phys_mem_access_prot_allowed(struct 
 {
 	u64 offset = ((u64) pfn) << PAGE_SHIFT;
 	unsigned long flags = _PAGE_CACHE_UC_MINUS;
-	unsigned long ret_flags;
 	int retval;
 
 	if (!range_is_allowed(pfn, size))
@@ -549,14 +548,12 @@ int phys_mem_access_prot_allowed(struct 
 	if (flags != _PAGE_CACHE_UC_MINUS) {
 		retval = reserve_memtype(offset, offset + size, flags, NULL);
 	} else {
-		retval = reserve_memtype(offset, offset + size, -1, &ret_flags);
+		retval = reserve_memtype(offset, offset + size, -1, &flags);
 	}
 
 	if (retval < 0)
 		return 0;
 
-	flags = ret_flags;
-
 	if (pfn <= max_pfn_mapped &&
             ioremap_change_attr((unsigned long)__va(offset), size, flags) < 0) {
 		free_memtype(offset, offset + size);

