* [PATCH] nommu: Push kobjsize() slab-specific logic down to ksize().
@ 2008-05-20  9:59 Paul Mundt
  2008-05-20 15:18 ` David Howells
                   ` (3 more replies)
  0 siblings, 4 replies; 22+ messages in thread
From: Paul Mundt @ 2008-05-20  9:59 UTC (permalink / raw)
  To: Pekka Enberg, David Howells; +Cc: Christoph Lameter, Matt Mackall, linux-mm

Currently SLUB and SLOB both blow up in different ways on shnommu:

SLOB:

Freeing unused kernel memory: 620k freed
------------[ cut here ]------------
kernel BUG at mm/nommu.c:119!
Kernel BUG: 003e [#1]
Modules linked in:

Pid : 1, Comm:                 init
PC is at kobjsize+0x5c/0x8c
PC  : 0c04000c SP  : 0c20dd08 SR  : 40000001                   Not tainted
R0  : 00000000 R1  : 0000000a R2  : 001b8334 R3  : 0c180000
R4  : 00000840 R5  : 0c3a507c R6  : 000110e2 R7  : 00000020
R8  : 0c1e5334 R9  : 0c3a5210 R10 : 0c3a5210 R11 : 0c180000
R12 : 0c3a5250 R13 : 00000000 R14 : 0c20dd08
MACH: 00000221 MACL: 001b8334 GBR : 00000000 PR  : 0c03fff4

Call trace:
[<0c0409ca>] do_mmap_pgoff+0x5ea/0x7f0
[<0c07046a>] load_flat_file+0x326/0x77c
[<0c07090a>] load_flat_binary+0x4a/0x2c8
...


SLUB:

Freeing unused kernel memory: 624k freed
Unable to allocate RAM for process text/data, errno 12
Failed to execute /init
Unable to allocate RAM for process text/data, errno 12
Unable to allocate RAM for process text/data, errno 12
Kernel panic - not syncing: No init found.  Try passing init= option to kernel.
...

In both cases this is due to kobjsize() failures. By checking PageSlab()
before calling in to ksize(), SLAB manages to work ok, as it aggressively
sets these bits across compounds.

In situations where we are setting up private mappings or shared
anonymous mappings (via do_mmap_private()) and the vm_file mmap() fop
hands back an -ENOSYS, a new allocation is made and the file contents
copied over. SLOB's page->index at the time of hitting the above BUG_ON()
is the value of the non-page-aligned address returned by kmalloc() +
the read size. page->index itself is otherwise sane when the object to be
copied is larger than a page in size and aligned.

Both SLOB and SLUB presently have methods to determine whether an object
passed in to their respective ksize() implementations is on one of their own slab
pages, or part of a compound page (heavily used by nommu for anonymous
mappings), leaving only SLAB as the odd one out. The rationale here seems
to be that SLAB's ksize() will trigger a BUG_ON() if !PageSlab(), and so
the page->index shift is defaulted to instead for non-slab pages.

Moving the existing logic in to SLAB's ksize() and simply wrapping in to
ksize() directly seems to do the right thing in all cases, and allows me
to boot with any of the slab allocators enabled, rather than simply SLAB
by itself.

I've done the same !PageSlab() test in SLAB as SLUB does in its ksize(),
which also seems to produce the correct results. Hopefully someone more
familiar with the history of kobjsize()/ksize() interaction can scream if
this is the wrong thing to do. :-)
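
For reference, a minimal sketch of the dispatch this ends up doing -- not the
actual mm/slub.c or mm/slab.c code, and slab_object_size() is just a stand-in
for each allocator's own bookkeeping:

	size_t ksize_sketch(const void *objp)
	{
		struct page *page = virt_to_head_page(objp);

		if (!PageSlab(page))
			/* page allocator backed, e.g. a nommu anonymous mapping */
			return PAGE_SIZE << compound_order(page);

		/* a real slab object: ask the allocator's own metadata */
		return slab_object_size(page, objp);
	}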

Signed-off-by: Paul Mundt <lethal@linux-sh.org>

---

 mm/nommu.c |    8 +-------
 mm/slab.c  |    6 ++++++
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/mm/nommu.c b/mm/nommu.c
index ef8c62c..3e11814 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -112,13 +112,7 @@ unsigned int kobjsize(const void *objp)
 	if (!objp || (unsigned long)objp >= memory_end || !((page = virt_to_page(objp))))
 		return 0;
 
-	if (PageSlab(page))
-		return ksize(objp);
-
-	BUG_ON(page->index < 0);
-	BUG_ON(page->index >= MAX_ORDER);
-
-	return (PAGE_SIZE << page->index);
+	return ksize(objp);
 }
 
 /*
diff --git a/mm/slab.c b/mm/slab.c
index 06236e4..7a012bb 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -4472,10 +4472,16 @@ const struct seq_operations slabstats_op = {
  */
 size_t ksize(const void *objp)
 {
+	struct page *page;
+
 	BUG_ON(!objp);
 	if (unlikely(objp == ZERO_SIZE_PTR))
 		return 0;
 
+	page = virt_to_head_page(objp);
+	if (unlikely(!PageSlab(page)))
+		return PAGE_SIZE << compound_order(page);
+
 	return obj_size(virt_to_cache(objp));
 }
 EXPORT_SYMBOL(ksize);


* Re: [PATCH] nommu: Push kobjsize() slab-specific logic down to ksize().
  2008-05-20  9:59 [PATCH] nommu: Push kobjsize() slab-specific logic down to ksize() Paul Mundt
@ 2008-05-20 15:18 ` David Howells
  2008-05-20 16:52   ` Christoph Lameter
  2008-05-20 16:29 ` Matt Mackall
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 22+ messages in thread
From: David Howells @ 2008-05-20 15:18 UTC (permalink / raw)
  To: Paul Mundt
  Cc: dhowells, Pekka Enberg, Christoph Lameter, Matt Mackall, linux-mm

Paul Mundt <lethal@linux-sh.org> wrote:

> Moving the existing logic in to SLAB's ksize() and simply wrapping in to
> ksize() directly seems to do the right thing in all cases, and allows me
> to boot with any of the slab allocators enabled, rather than simply SLAB
> by itself.
> 
> I've done the same !PageSlab() test in SLAB as SLUB does in its ksize(),
> which also seems to produce the correct results. Hopefully someone more
> familiar with the history of kobjsize()/ksize() interaction can scream if
> this is the wrong thing to do. :-)

That seems reasonable.  I can't test it until I get back to the UK next week.

David


* Re: [PATCH] nommu: Push kobjsize() slab-specific logic down to ksize().
  2008-05-20  9:59 [PATCH] nommu: Push kobjsize() slab-specific logic down to ksize() Paul Mundt
  2008-05-20 15:18 ` David Howells
@ 2008-05-20 16:29 ` Matt Mackall
  2008-05-21  2:43   ` Paul Mundt
  2008-05-21 14:58 ` Pekka Enberg
  2008-05-21 17:13 ` Pekka J Enberg
  3 siblings, 1 reply; 22+ messages in thread
From: Matt Mackall @ 2008-05-20 16:29 UTC (permalink / raw)
  To: Paul Mundt; +Cc: Pekka Enberg, David Howells, Christoph Lameter, linux-mm

On Tue, 2008-05-20 at 18:59 +0900, Paul Mundt wrote:
> Currently SLUB and SLOB both blow up in different ways on shnommu:
> 
> SLOB:
> 
> Freeing unused kernel memory: 620k freed
> ------------[ cut here ]------------
> kernel BUG at mm/nommu.c:119!
> Kernel BUG: 003e [#1]
> Modules linked in:
> 
> Pid : 1, Comm:                 init
> PC is at kobjsize+0x5c/0x8c
> PC  : 0c04000c SP  : 0c20dd08 SR  : 40000001                   Not tainted
> R0  : 00000000 R1  : 0000000a R2  : 001b8334 R3  : 0c180000
> R4  : 00000840 R5  : 0c3a507c R6  : 000110e2 R7  : 00000020
> R8  : 0c1e5334 R9  : 0c3a5210 R10 : 0c3a5210 R11 : 0c180000
> R12 : 0c3a5250 R13 : 00000000 R14 : 0c20dd08
> MACH: 00000221 MACL: 001b8334 GBR : 00000000 PR  : 0c03fff4
> 
> Call trace:
> [<0c0409ca>] do_mmap_pgoff+0x5ea/0x7f0
> [<0c07046a>] load_flat_file+0x326/0x77c
> [<0c07090a>] load_flat_binary+0x4a/0x2c8
> ...
> 
> 
> SLUB:
> 
> Freeing unused kernel memory: 624k freed
> Unable to allocate RAM for process text/data, errno 12
> Failed to execute /init
> Unable to allocate RAM for process text/data, errno 12
> Unable to allocate RAM for process text/data, errno 12
> Kernel panic - not syncing: No init found.  Try passing init= option to kernel.
> ...
> 
> In both cases this is due to kobjsize() failures. By checking PageSlab()
> before calling in to ksize(), SLAB manages to work ok, as it aggressively
> sets these bits across compounds.
> 
> In situations where we are setting up private mappings or shared
> anonymous mappings (via do_mmap_private()) and the vm_file mmap() fop
> hands back an -ENOSYS, a new allocation is made and the file contents
> copied over. SLOB's page->index at the time of hitting the above BUG_ON()
> is the value of the non-page-aligned address returned by kmalloc() +
> the read size. page->index itself is otherwise sane when the object to be
> copied is larger than a page in size and aligned.
> 
> Both SLOB and SLUB presently have methods to determine whether an object
> passed in to their respective ksize() implementations is on one of their own slab
> pages, or part of a compound page (heavily used by nommu for anonymous
> mappings), leaving only SLAB as the odd one out. The rationale here seems
> to be that SLAB's ksize() will trigger a BUG_ON() if !PageSlab(), and so
> the page->index shift is defaulted to instead for non-slab pages.
> 
> Moving the existing logic in to SLAB's ksize() and simply wrapping in to
> ksize() directly seems to do the right thing in all cases, and allows me
> to boot with any of the slab allocators enabled, rather than simply SLAB
> by itself.
> 
> I've done the same !PageSlab() test in SLAB as SLUB does in its ksize(),
> which also seems to produce the correct results. Hopefully someone more
> familiar with the history of kobjsize()/ksize() interaction can scream if
> this is the wrong thing to do. :-)

My investigation of this the last time around led me to the conclusion
that the nommu code here was broken as it can call ksize on objects that
were statically allocated (IIRC, the initial task struct is one such
example).

It also calls ksize on objects that are kmem_cache_alloced, which is
also a no-no. Unfortunately, it's a no-no that just happens to work in
SLAB/SLUB by virtue of implementing kmalloc on top of kmem_cache_alloc. 

With SLOB, the object size for kmem_cache_alloced objects is only
available statically. Further, we can only statically distinguish
between a kmalloc'ed and kmem_cache_alloc'ed object. So when you pass a
kmem_cache_alloc'ed object, we'll end up reading random data outside the
object to find its 'size'. So this might 'work' for SLOB in the sense of
not crashing, but it won't be correct.
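
Schematically, that looks something like this (a rough sketch of the idea
only, not the real mm/slob.c; slob_alloc_sketch() and SIZE_HDR_ALIGN are
made-up names). The requested size lives in a header just in front of
kmalloc'ed objects, so reading it back from anything else returns whatever
happens to be sitting there:

	void *kmalloc_sketch(size_t size, gfp_t gfp)
	{
		/* reserve room for a small size header in front of the object */
		unsigned int *m = slob_alloc_sketch(size + SIZE_HDR_ALIGN, gfp);

		if (!m)
			return NULL;
		*m = size;	/* record the requested size */
		return (void *)((char *)m + SIZE_HDR_ALIGN);
	}

	size_t ksize_sketch(const void *objp)
	{
		/* only meaningful if objp really came from kmalloc_sketch() */
		return *(const unsigned int *)((const char *)objp - SIZE_HDR_ALIGN);
	}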

So I think your patch is progress as it's moving internal details out of
nommu, but really what I think should happen is that SLAB and SLUB should
WARN_ON when ksize is called on something that's not (a) in their arenas
and (b) specifically in their kmalloc slabs. And nommu should be fixed
to not trip those warnings.

If that means we decide to copy things like the initial task struct to
an allocated object for uniformity and mark them __init, so much the
better.

I generally think ksize() is a bad idea for a general interface. If you
allocated an object, you should be able to remember how big you asked it
to be and/or what type of object it was. Nor do I think it's appropriate
for non-allocator code to be aware of internal allocator details like
allocation slop. If you ask for a 5-byte object, get an 8-byte buffer,
then write in byte 6, I think a debugging allocator should be allowed to
complain about accessing unallocated memory.

The only legitimate users in my mind are things like realloc which can
internally take advantage of the allocated buffer being bigger than what
was asked for.
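
For instance, a realloc-style helper can use the slop legitimately -- a hedged
sketch, simplified from what something like krealloc() would do rather than
the actual mm/util.c code:

	void *krealloc_sketch(const void *p, size_t new_size, gfp_t flags)
	{
		size_t ks = p ? ksize(p) : 0;
		void *ret;

		if (ks >= new_size)
			return (void *)p;	/* existing slop already covers the request */

		ret = kmalloc(new_size, flags);
		if (ret && p) {
			memcpy(ret, p, ks);	/* old usable size is all we can copy */
			kfree(p);
		}
		return ret;
	}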

> 
> Signed-off-by: Paul Mundt <lethal@linux-sh.org>

All that said:

Acked-by: Matt Mackall <mpm@selenic.com>

> 
> ---
> 
>  mm/nommu.c |    8 +-------
>  mm/slab.c  |    6 ++++++
>  2 files changed, 7 insertions(+), 7 deletions(-)
> 
> diff --git a/mm/nommu.c b/mm/nommu.c
> index ef8c62c..3e11814 100644
> --- a/mm/nommu.c
> +++ b/mm/nommu.c
> @@ -112,13 +112,7 @@ unsigned int kobjsize(const void *objp)
>  	if (!objp || (unsigned long)objp >= memory_end || !((page = virt_to_page(objp))))
>  		return 0;
>  
> -	if (PageSlab(page))
> -		return ksize(objp);
> -
> -	BUG_ON(page->index < 0);
> -	BUG_ON(page->index >= MAX_ORDER);
> -
> -	return (PAGE_SIZE << page->index);
> +	return ksize(objp);
>  }
>  
>  /*
> diff --git a/mm/slab.c b/mm/slab.c
> index 06236e4..7a012bb 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -4472,10 +4472,16 @@ const struct seq_operations slabstats_op = {
>   */
>  size_t ksize(const void *objp)
>  {
> +	struct page *page;
> +
>  	BUG_ON(!objp);
>  	if (unlikely(objp == ZERO_SIZE_PTR))
>  		return 0;
>  
> +	page = virt_to_head_page(objp);
> +	if (unlikely(!PageSlab(page)))
> +		return PAGE_SIZE << compound_order(page);
> +
>  	return obj_size(virt_to_cache(objp));
>  }
>  EXPORT_SYMBOL(ksize);
-- 
Mathematics is the supreme nostalgia of our time.


* Re: [PATCH] nommu: Push kobjsize() slab-specific logic down to ksize().
  2008-05-20 15:18 ` David Howells
@ 2008-05-20 16:52   ` Christoph Lameter
  2008-05-20 18:23     ` Matt Mackall
  2008-05-22  4:23     ` Pekka Enberg
  0 siblings, 2 replies; 22+ messages in thread
From: Christoph Lameter @ 2008-05-20 16:52 UTC (permalink / raw)
  To: David Howells; +Cc: Paul Mundt, Pekka Enberg, Matt Mackall, linux-mm

On Tue, 20 May 2008, David Howells wrote:

> Paul Mundt <lethal@linux-sh.org> wrote:
> 
> > Moving the existing logic in to SLAB's ksize() and simply wrapping in to
> > ksize() directly seems to do the right thing in all cases, and allows me
> > to boot with any of the slab allocators enabled, rather than simply SLAB
> > by itself.
> > 
> > I've done the same !PageSlab() test in SLAB as SLUB does in its ksize(),
> > which also seems to produce the correct results. Hopefully someone more
> > familiar with the history of kobjsize()/ksize() interaction can scream if
> > this is the wrong thing to do. :-)
> 
> That seems reasonable.  I can't test it until I get back to the UK next week.

Hmm. That means we are sanctioning using ksize on arbitrary objects? SLUB 
supports that but SLAB won't and neither will SLOB. I think we need to stay 
with the strict definition that is needed by SLOB.

It seems also that the existing kobjsize function is wrong:

1. For compound pages the head page needs to be determined.

So do a virt_to_head_page() instead of a virt_to_page().

2. Why is page->index taken as the page order?

Use compound_order instead?

I think the following patch will work for all allocators (can
virt_to_page() really return NULL if the addr is invalid? If so, we may
have to fix virt_to_head_page()):

---
 mm/nommu.c |    8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

Index: linux-2.6/mm/nommu.c
===================================================================
--- linux-2.6.orig/mm/nommu.c	2008-05-20 09:50:25.686495370 -0700
+++ linux-2.6/mm/nommu.c	2008-05-20 09:50:51.797745535 -0700
@@ -109,16 +109,14 @@ unsigned int kobjsize(const void *objp)
 	 * If the object we have should not have ksize performed on it,
 	 * return size of 0
 	 */
-	if (!objp || (unsigned long)objp >= memory_end || !((page = virt_to_page(objp))))
+	if (!objp || (unsigned long)objp >= memory_end ||
+				!((page = virt_to_head_page(objp))))
 		return 0;
 
 	if (PageSlab(page))
 		return ksize(objp);
 
-	BUG_ON(page->index < 0);
-	BUG_ON(page->index >= MAX_ORDER);
-
-	return (PAGE_SIZE << page->index);
+	return PAGE_SIZE << compound_order(page);
 }
 
 /*


* Re: [PATCH] nommu: Push kobjsize() slab-specific logic down to ksize().
  2008-05-20 16:52   ` Christoph Lameter
@ 2008-05-20 18:23     ` Matt Mackall
  2008-05-20 18:51       ` Christoph Lameter
  2008-05-22  4:23     ` Pekka Enberg
  1 sibling, 1 reply; 22+ messages in thread
From: Matt Mackall @ 2008-05-20 18:23 UTC (permalink / raw)
  To: Christoph Lameter; +Cc: David Howells, Paul Mundt, Pekka Enberg, linux-mm

On Tue, 2008-05-20 at 09:52 -0700, Christoph Lameter wrote:
> On Tue, 20 May 2008, David Howells wrote:
> 
> > Paul Mundt <lethal@linux-sh.org> wrote:
> > 
> > > Moving the existing logic in to SLAB's ksize() and simply wrapping in to
> > > ksize() directly seems to do the right thing in all cases, and allows me
> > > to boot with any of the slab allocators enabled, rather than simply SLAB
> > > by itself.
> > > 
> > > I've done the same !PageSlab() test in SLAB as SLUB does in its ksize(),
> > > which also seems to produce the correct results. Hopefully someone more
> > > familiar with the history of kobjsize()/ksize() interaction can scream if
> > > this is the wrong thing to do. :-)
> > 
> > That seems reasonable.  I can't test it until I get back to the UK next week.
> 
> Hmm. That means we are sanctioning using ksize on arbitrary objects? SLUB 
> supports that but SLAB won't and neither will SLOB. I think we need to stay 
> with the strict definition that is needed by SLOB.

Of course SLUB won't be able to tell you the size of objects allocated
statically, through bootmem, etc.

> It seems also that the existing kobjsize function is wrong:
> 
> 1. For compound pages the head page needs to be determined.
> 
> So do a virt_to_head_page() instead of a virt_to_page().
> 
> 2. Why is page->index taken as the page order?
> 
> Use compound_order instead?
> 
> I think the following patch will work for all allocators (can
> virt_to_page() really return NULL if the addr is invalid? If so, we may
> have to fix virt_to_head_page()):
> 
> ---
>  mm/nommu.c |    8 +++-----
>  1 file changed, 3 insertions(+), 5 deletions(-)
> 
> Index: linux-2.6/mm/nommu.c
> ===================================================================
> --- linux-2.6.orig/mm/nommu.c	2008-05-20 09:50:25.686495370 -0700
> +++ linux-2.6/mm/nommu.c	2008-05-20 09:50:51.797745535 -0700
> @@ -109,16 +109,14 @@ unsigned int kobjsize(const void *objp)
>  	 * If the object we have should not have ksize performed on it,
>  	 * return size of 0
>  	 */
> -	if (!objp || (unsigned long)objp >= memory_end || !((page = virt_to_page(objp))))
> +	if (!objp || (unsigned long)objp >= memory_end ||
> +				!((page = virt_to_head_page(objp))))

I think the real problem here is that nommu is way too intimate with the
allocator. This makes it even more so. Paul's approach of pushing this
down into SLAB is a step in the right direction. The next step is
teaching nommu to distinguish (statically) between
kmalloced/kmem_cache_alloced/static objects, which is a somewhat bigger
problem.

-- 
Mathematics is the supreme nostalgia of our time.


* Re: [PATCH] nommu: Push kobjsize() slab-specific logic down to ksize().
  2008-05-20 18:23     ` Matt Mackall
@ 2008-05-20 18:51       ` Christoph Lameter
  2008-05-20 19:00         ` Matt Mackall
  0 siblings, 1 reply; 22+ messages in thread
From: Christoph Lameter @ 2008-05-20 18:51 UTC (permalink / raw)
  To: Matt Mackall; +Cc: David Howells, Paul Mundt, Pekka Enberg, linux-mm

On Tue, 20 May 2008, Matt Mackall wrote:

> > Hmm. That means we are sanctioning using ksize on arbitrary objects? SLUB 
> > supports that but SLAB won't and neither will SLOB. I think we need to stay 
> > with the strict definition that is needed by SLOB.
> 
> Of course SLUB won't be able to tell you the size of objects allocated
> statically, through bootmem, etc.

Right, the function actually gives misleading results even if you just pass
a pointer to an int at an address that is page backed but not using a slab 
allocator. Then PAGE_SIZE will be returned?

So the semantics are screwed up here. kobjsize() should only be called for 
slab objects. 

Remove kobjsize completely and replace with calls to ksize? Callers must 
not call ksize() on non slab objects.


* Re: [PATCH] nommu: Push kobjsize() slab-specific logic down to ksize().
  2008-05-20 18:51       ` Christoph Lameter
@ 2008-05-20 19:00         ` Matt Mackall
  2008-05-20 19:08           ` Christoph Lameter
  0 siblings, 1 reply; 22+ messages in thread
From: Matt Mackall @ 2008-05-20 19:00 UTC (permalink / raw)
  To: Christoph Lameter; +Cc: David Howells, Paul Mundt, Pekka Enberg, linux-mm

On Tue, 2008-05-20 at 11:51 -0700, Christoph Lameter wrote:
> On Tue, 20 May 2008, Matt Mackall wrote:
> 
> > > Hmm. That means we are sanctioning using ksize on arbitrary objects? SLUB 
> > > supports that but SLAB won't and neither will SLOB. I think we need to stay 
> > > with the strict definition that is needed by SLOB.
> > 
> > Of course SLUB won't be able to tell you the size of objects allocated
> > statically, through bootmem, etc.
> 
> Right, the function actually gives misleading results even if you just pass
> a pointer to an int at an address that is page backed but not using a slab 
> allocator. Then PAGE_SIZE will be returned?
> 
> So the semantics are screwed up here. kobjsize() should only be called for 
> slab objects. 
> 
> Remove kobjsize completely and replace with calls to ksize? Callers must 
> not call ksize() on non slab objects.

What'd you think of my idea of adding WARN_ONs to SLAB and SLUB for
these cases? That is, warn whenever ksize() gets a non-kmalloced
address?

-- 
Mathematics is the supreme nostalgia of our time.


* Re: [PATCH] nommu: Push kobjsize() slab-specific logic down to ksize().
  2008-05-20 19:00         ` Matt Mackall
@ 2008-05-20 19:08           ` Christoph Lameter
  2008-05-20 19:14             ` Matt Mackall
  0 siblings, 1 reply; 22+ messages in thread
From: Christoph Lameter @ 2008-05-20 19:08 UTC (permalink / raw)
  To: Matt Mackall; +Cc: David Howells, Paul Mundt, Pekka Enberg, linux-mm

On Tue, 20 May 2008, Matt Mackall wrote:

> > Remove kobjsize completely and replace with calls to ksize? Callers must 
> > not call ksize() on non slab objects.
> 
> What'd you think of my idea of adding WARN_ONs to SLAB and SLUB for
> these cases? That is, warn whenever ksize() gets a non-kmalloced
> address?

How would that work given that both SLUB and SLOB forward >4k allocs to 
the page allocator? So any compound page allocation may be a slab 
allocation. Is there some way to distinguish between
allocations of the page allocator and a slab alloc?



* Re: [PATCH] nommu: Push kobjsize() slab-specific logic down to ksize().
  2008-05-20 19:08           ` Christoph Lameter
@ 2008-05-20 19:14             ` Matt Mackall
  2008-05-20 19:16               ` Christoph Lameter
  0 siblings, 1 reply; 22+ messages in thread
From: Matt Mackall @ 2008-05-20 19:14 UTC (permalink / raw)
  To: Christoph Lameter; +Cc: David Howells, Paul Mundt, Pekka Enberg, linux-mm

On Tue, 2008-05-20 at 12:08 -0700, Christoph Lameter wrote:
> On Tue, 20 May 2008, Matt Mackall wrote:
> 
> > > Remove kobjsize completely and replace with calls to ksize? Callers must 
> > > not call ksize() on non slab objects.
> > 
> > What'd you think of my idea of adding WARN_ONs to SLAB and SLUB for
> > these cases? That is, warn whenever ksize() gets a non-kmalloced
> > address?
> 
> How would that work given that both SLUB and SLOB forward >4k allocs to 
> the page allocator? So any compound page allocation may be a slab 
> allocation. Is there some way to distinguish between
> allocations of the page allocator and a slab alloc?

We can't do it at all for SLOB. But when debugging is turned on, we can
notice (in SLAB and SLUB) whenever anyone asks for the ksize() of
something that lives on a non-kmalloc slab.
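
Something like the following in SLAB's ksize(), say -- a hedged sketch only,
where is_kmalloc_cache() is a hypothetical helper, not an existing kernel
function:

	size_t ksize(const void *objp)
	{
		struct kmem_cache *c = virt_to_cache(objp);

		/*
		 * Debugging aid: complain if this object did not come from a
		 * kmalloc cache (NULL/ZERO_SIZE_PTR handling omitted here).
		 */
		WARN_ON_ONCE(!is_kmalloc_cache(c));

		return obj_size(c);
	}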

-- 
Mathematics is the supreme nostalgia of our time.


* Re: [PATCH] nommu: Push kobjsize() slab-specific logic down to ksize().
  2008-05-20 19:14             ` Matt Mackall
@ 2008-05-20 19:16               ` Christoph Lameter
  2008-05-20 21:22                 ` Matt Mackall
  0 siblings, 1 reply; 22+ messages in thread
From: Christoph Lameter @ 2008-05-20 19:16 UTC (permalink / raw)
  To: Matt Mackall; +Cc: David Howells, Paul Mundt, Pekka Enberg, linux-mm

On Tue, 20 May 2008, Matt Mackall wrote:

> > How would that work given that both SLUB and SLOB forward >4k allocs to 
> > the page allocator? So any compound page allocation may be a slab 
> > allocation. Is there some way to distinguish between
> > allocations of the page allocator and a slab alloc?
> 
> We can't do it at all for SLOB. But when debugging is turned on, we can
> notice (in SLAB and SLUB) whenever anyone asks for the ksize() of
> something that lives on a non-kmalloc slab.

We could mark the pages specially I guess. Add a slab flag for 
kmalloc? PageSlab and PageKmalloc?


* Re: [PATCH] nommu: Push kobjsize() slab-specific logic down to ksize().
  2008-05-20 19:16               ` Christoph Lameter
@ 2008-05-20 21:22                 ` Matt Mackall
  2008-05-21  1:19                   ` Paul Mundt
  0 siblings, 1 reply; 22+ messages in thread
From: Matt Mackall @ 2008-05-20 21:22 UTC (permalink / raw)
  To: Christoph Lameter; +Cc: David Howells, Paul Mundt, Pekka Enberg, linux-mm

On Tue, 2008-05-20 at 12:16 -0700, Christoph Lameter wrote:
> On Tue, 20 May 2008, Matt Mackall wrote:
> 
> > > How would that work given that both SLUB and SLOB forward >4k allocs to 
> > > the page allocator? So any compound page allocation may be a slab 
> > > allocation. Is there some way to distinguish between
> > > allocations of the page allocator and a slab alloc?
> > 
> > We can't do it at all for SLOB. But when debugging is turned on, we can
> > notice (in SLAB and SLUB) whenever anyone asks for the ksize() of
> > something that lives on a non-kmalloc slab.
> 
> We could mark the pages specially I guess. Add a slab flag for 
> kmalloc? PageSlab and PageKmalloc?

No, just warn for the cases where we already have enough information.

-- 
Mathematics is the supreme nostalgia of our time.


* Re: [PATCH] nommu: Push kobjsize() slab-specific logic down to ksize().
  2008-05-20 21:22                 ` Matt Mackall
@ 2008-05-21  1:19                   ` Paul Mundt
  2008-05-21  1:52                     ` Christoph Lameter
  0 siblings, 1 reply; 22+ messages in thread
From: Paul Mundt @ 2008-05-21  1:19 UTC (permalink / raw)
  To: Matt Mackall; +Cc: Christoph Lameter, David Howells, Pekka Enberg, linux-mm

On Tue, May 20, 2008 at 04:22:37PM -0500, Matt Mackall wrote:
> On Tue, 2008-05-20 at 12:16 -0700, Christoph Lameter wrote:
> > On Tue, 20 May 2008, Matt Mackall wrote:
> > > > How would that work given that both SLUB and SLOB forward >4k allocs to 
> > > > the page allocator? So any compound page allocation may be a slab 
> > > > allocation. Is there some way to distinguish between
> > > > allocations of the page allocator and a slab alloc?
> > > 
> > > We can't do it at all for SLOB. But when debugging is turned on, we can
> > > notice (in SLAB and SLUB) whenever anyone asks for the ksize() of
> > > something that lives on a non-kmalloc slab.
> > 
> > We could mark the pages specially I guess. Add a slab flag for 
> > kmalloc? PageSlab and PageKmalloc?
> 
> No, just warn for the cases where we already have enough information.
> 
If we go in that direction, the existing users are already going to
trigger this a lot (i.e., commit 4016a1390d07f15b267eecb20e76a48fd5c524ef).
Without this sort of heuristic in place, simply killing off kobjsize()
and calling in to ksize() directly would be a reasonable option.

Having WARN_ON()'s for !PageSlab() pages in ksize() in SLAB/SLUB would
make these cases more visible, at least.


* Re: [PATCH] nommu: Push kobjsize() slab-specific logic down to ksize().
  2008-05-21  1:19                   ` Paul Mundt
@ 2008-05-21  1:52                     ` Christoph Lameter
  0 siblings, 0 replies; 22+ messages in thread
From: Christoph Lameter @ 2008-05-21  1:52 UTC (permalink / raw)
  To: Paul Mundt; +Cc: Matt Mackall, David Howells, Pekka Enberg, linux-mm

On Wed, 21 May 2008, Paul Mundt wrote:

> Having WARN_ON()'s for !PageSlab() pages in ksize() in SLAB/SLUB would
> make these cases more visible, at least.

SLUB/SLOB do not mark objects allocated via pass-through to the page
allocator with PageSlab. WARN_ON would give false positives.


* Re: [PATCH] nommu: Push kobjsize() slab-specific logic down to ksize().
  2008-05-20 16:29 ` Matt Mackall
@ 2008-05-21  2:43   ` Paul Mundt
  0 siblings, 0 replies; 22+ messages in thread
From: Paul Mundt @ 2008-05-21  2:43 UTC (permalink / raw)
  To: Matt Mackall; +Cc: Pekka Enberg, David Howells, Christoph Lameter, linux-mm

On Tue, May 20, 2008 at 11:29:18AM -0500, Matt Mackall wrote:
> On Tue, 2008-05-20 at 18:59 +0900, Paul Mundt wrote:
> > Moving the existing logic in to SLAB's ksize() and simply wrapping in to
> > ksize() directly seems to do the right thing in all cases, and allows me
> > to boot with any of the slab allocators enabled, rather than simply SLAB
> > by itself.
> > 
> > I've done the same !PageSlab() test in SLAB as SLUB does in its ksize(),
> > which also seems to produce the correct results. Hopefully someone more
> > familiar with the history of kobjsize()/ksize() interaction can scream if
> > this is the wrong thing to do. :-)
> 
> My investigation of this the last time around led me to the conclusion
> that the nommu code here was broken as it can call ksize on objects that
> were statically allocated (IIRC, the initial task struct is one such
> example).
> 
> It also calls ksize on objects that are kmem_cache_alloced, which is
> also a no-no. Unfortunately, it's a no-no that just happens to work in
> SLAB/SLUB by virtue of implementing kmalloc on top of kmem_cache_alloc. 
> 
> With SLOB, the object size for kmem_cache_alloced objects is only
> available statically. Further, we can only statically distinguish
> between a kmalloc'ed and kmem_cache_alloc'ed object. So when you pass a
> kmem_cache_alloc'ed object, we'll end up reading random data outside the
> object to find its 'size'. So this might 'work' for SLOB in the sense of
> not crashing, but it won't be correct.
> 
SLOB is also unique in that it doesn't end up setting __GFP_COMP on
higher-order pass-through allocations via the kmem_cache_alloc path.
So we could at least check to see whether the object is a compound page
or not before deferring to ksize() -- otherwise just default to
PAGE_SIZE. Though this doesn't help for statics that are out of scope of
both kmalloc and kmem_cache_alloc, or things like the blackfin DMA area
past the end of memory. Hmm.


* Re: [PATCH] nommu: Push kobjsize() slab-specific logic down to ksize().
  2008-05-20  9:59 [PATCH] nommu: Push kobjsize() slab-specific logic down to ksize() Paul Mundt
  2008-05-20 15:18 ` David Howells
  2008-05-20 16:29 ` Matt Mackall
@ 2008-05-21 14:58 ` Pekka Enberg
  2008-05-21 15:06   ` Matt Mackall
  2008-05-21 17:13 ` Pekka J Enberg
  3 siblings, 1 reply; 22+ messages in thread
From: Pekka Enberg @ 2008-05-21 14:58 UTC (permalink / raw)
  To: Paul Mundt, Pekka Enberg, David Howells, Christoph Lameter,
	Matt Mackall, linux-mm

(Not really sure if we came to a conclusion with the discussion.)

Paul Mundt wrote:
> diff --git a/mm/nommu.c b/mm/nommu.c
> index ef8c62c..3e11814 100644
> --- a/mm/nommu.c
> +++ b/mm/nommu.c
> @@ -112,13 +112,7 @@ unsigned int kobjsize(const void *objp)
>  	if (!objp || (unsigned long)objp >= memory_end || !((page = virt_to_page(objp))))
>  		return 0;
>  
> -	if (PageSlab(page))
> -		return ksize(objp);
> -
> -	BUG_ON(page->index < 0);
> -	BUG_ON(page->index >= MAX_ORDER);
> -
> -	return (PAGE_SIZE << page->index);
> +	return ksize(objp);
>  }
>  
>  /*
> diff --git a/mm/slab.c b/mm/slab.c
> index 06236e4..7a012bb 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -4472,10 +4472,16 @@ const struct seq_operations slabstats_op = {
>   */
>  size_t ksize(const void *objp)
>  {
> +	struct page *page;
> +
>  	BUG_ON(!objp);
>  	if (unlikely(objp == ZERO_SIZE_PTR))
>  		return 0;
>  
> +	page = virt_to_head_page(objp);
> +	if (unlikely(!PageSlab(page)))
> +		return PAGE_SIZE << compound_order(page);
> +
>  	return obj_size(virt_to_cache(objp));
>  }
>  EXPORT_SYMBOL(ksize);

The patch looks good to me. Christoph, Matt, NAK/ACK?


* Re: [PATCH] nommu: Push kobjsize() slab-specific logic down to ksize().
  2008-05-21 14:58 ` Pekka Enberg
@ 2008-05-21 15:06   ` Matt Mackall
  0 siblings, 0 replies; 22+ messages in thread
From: Matt Mackall @ 2008-05-21 15:06 UTC (permalink / raw)
  To: Pekka Enberg; +Cc: Paul Mundt, David Howells, Christoph Lameter, linux-mm

On Wed, 2008-05-21 at 17:58 +0300, Pekka Enberg wrote:
> (Not really sure if we came to a conclusion with the discussion.)
> 
> Paul Mundt wrote:
> > diff --git a/mm/nommu.c b/mm/nommu.c
> > index ef8c62c..3e11814 100644
> > --- a/mm/nommu.c
> > +++ b/mm/nommu.c
> > @@ -112,13 +112,7 @@ unsigned int kobjsize(const void *objp)
> >  	if (!objp || (unsigned long)objp >= memory_end || !((page = virt_to_page(objp))))
> >  		return 0;
> >  
> > -	if (PageSlab(page))
> > -		return ksize(objp);
> > -
> > -	BUG_ON(page->index < 0);
> > -	BUG_ON(page->index >= MAX_ORDER);
> > -
> > -	return (PAGE_SIZE << page->index);
> > +	return ksize(objp);
> >  }
> >  
> >  /*
> > diff --git a/mm/slab.c b/mm/slab.c
> > index 06236e4..7a012bb 100644
> > --- a/mm/slab.c
> > +++ b/mm/slab.c
> > @@ -4472,10 +4472,16 @@ const struct seq_operations slabstats_op = {
> >   */
> >  size_t ksize(const void *objp)
> >  {
> > +	struct page *page;
> > +
> >  	BUG_ON(!objp);
> >  	if (unlikely(objp == ZERO_SIZE_PTR))
> >  		return 0;
> >  
> > +	page = virt_to_head_page(objp);
> > +	if (unlikely(!PageSlab(page)))
> > +		return PAGE_SIZE << compound_order(page);
> > +
> >  	return obj_size(virt_to_cache(objp));
> >  }
> >  EXPORT_SYMBOL(ksize);
> 
> The patch looks good to me. Christoph, Matt, NAK/ACK?

I did ack this, as I think it's a step in the right direction and it
will get nommu running. But I do think nommu's ksize() usage needs a
major rework.

-- 
Mathematics is the supreme nostalgia of our time.


* Re: [PATCH] nommu: Push kobjsize() slab-specific logic down to ksize().
  2008-05-20  9:59 [PATCH] nommu: Push kobjsize() slab-specific logic down to ksize() Paul Mundt
                   ` (2 preceding siblings ...)
  2008-05-21 14:58 ` Pekka Enberg
@ 2008-05-21 17:13 ` Pekka J Enberg
  2008-05-21 17:27   ` Christoph Lameter
                     ` (2 more replies)
  3 siblings, 3 replies; 22+ messages in thread
From: Pekka J Enberg @ 2008-05-21 17:13 UTC (permalink / raw)
  To: Paul Mundt; +Cc: David Howells, Christoph Lameter, Matt Mackall, linux-mm

Hi!

On Tue, 20 May 2008, Paul Mundt wrote:
> Moving the existing logic in to SLAB's ksize() and simply wrapping in to
> ksize() directly seems to do the right thing in all cases, and allows me
> to boot with any of the slab allocators enabled, rather than simply SLAB
> by itself.
> 
> I've done the same !PageSlab() test in SLAB as SLUB does in its ksize(),
> which also seems to produce the correct results. Hopefully someone more
> familiar with the history of kobjsize()/ksize() interaction can scream if
> this is the wrong thing to do. :-)

As pointed out by Christoph, it is. ksize() works with SLUB and SLOB only
accidentally, because they do page allocator pass-through and thus need to
deal with non-PageSlab pages. SLAB, however, does not do that, which is why
all pages passed to it must have PageSlab set (we ought to add a WARN_ON()
there, btw).

So I suggest we fix up kobjsize() instead. Paul, does the following 
untested patch work for you?

			Pekka

diff --git a/mm/nommu.c b/mm/nommu.c
index ef8c62c..a573aeb 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -115,10 +115,7 @@ unsigned int kobjsize(const void *objp)
 	if (PageSlab(page))
 		return ksize(objp);
 
-	BUG_ON(page->index < 0);
-	BUG_ON(page->index >= MAX_ORDER);
-
-	return (PAGE_SIZE << page->index);
+	return PAGE_SIZE << compound_order(page);
 }
 
 /*
diff --git a/mm/slab.c b/mm/slab.c


* Re: [PATCH] nommu: Push kobjsize() slab-specific logic down to ksize().
  2008-05-21 17:13 ` Pekka J Enberg
@ 2008-05-21 17:27   ` Christoph Lameter
  2008-05-21 19:08   ` Pekka Enberg
  2008-05-21 23:43   ` Paul Mundt
  2 siblings, 0 replies; 22+ messages in thread
From: Christoph Lameter @ 2008-05-21 17:27 UTC (permalink / raw)
  To: Pekka J Enberg; +Cc: Paul Mundt, David Howells, Matt Mackall, linux-mm

On Wed, 21 May 2008, Pekka J Enberg wrote:

> So I suggest we fix up kobjsize() instead. Paul, does the following 
> untested patch work for you?

Regardless of the test: Pekka's patch is a bugfix and should go via 
stable. kobjsize seems to assume that page->index contains the order
of a page. For a pagecache page it contains the page number in the 
mapping. So this should bug frequently if used on arbitrary objects.

> diff --git a/mm/nommu.c b/mm/nommu.c
> index ef8c62c..a573aeb 100644
> --- a/mm/nommu.c
> +++ b/mm/nommu.c
> @@ -115,10 +115,7 @@ unsigned int kobjsize(const void *objp)
>  	if (PageSlab(page))
>  		return ksize(objp);
>  
> -	BUG_ON(page->index < 0);
> -	BUG_ON(page->index >= MAX_ORDER);
> -
> -	return (PAGE_SIZE << page->index);
> +	return PAGE_SIZE << compound_order(page);
>  }
>  
>  /*
> diff --git a/mm/slab.c b/mm/slab.c
> 


* Re: [PATCH] nommu: Push kobjsize() slab-specific logic down to ksize().
  2008-05-21 17:13 ` Pekka J Enberg
  2008-05-21 17:27   ` Christoph Lameter
@ 2008-05-21 19:08   ` Pekka Enberg
  2008-05-21 23:43   ` Paul Mundt
  2 siblings, 0 replies; 22+ messages in thread
From: Pekka Enberg @ 2008-05-21 19:08 UTC (permalink / raw)
  To: Paul Mundt; +Cc: David Howells, Christoph Lameter, Matt Mackall, linux-mm

Pekka J Enberg wrote:
> As pointed out by Christoph, it. ksize() works with SLUB and SLOB 
> accidentally because they do page allocator pass-through and thus need to 
> deal with non-PageSlab pages. SLAB, however, does not do that which is why 
> all pages passed to it must have PageSlab set (we ought to add a WARN_ON() 
> there btw).
> 
> So I suggest we fix up kobjsize() instead. Paul, does the following 
> untested patch work for you?
> 
> diff --git a/mm/nommu.c b/mm/nommu.c
> index ef8c62c..a573aeb 100644
> --- a/mm/nommu.c
> +++ b/mm/nommu.c
> @@ -115,10 +115,7 @@ unsigned int kobjsize(const void *objp)
>  	if (PageSlab(page))
>  		return ksize(objp);
>  
> -	BUG_ON(page->index < 0);
> -	BUG_ON(page->index >= MAX_ORDER);
> -
> -	return (PAGE_SIZE << page->index);
> +	return PAGE_SIZE << compound_order(page);

Hmm, actually this needs more fixing with SLOB as it never sets PageSlab.


* Re: [PATCH] nommu: Push kobjsize() slab-specific logic down to ksize().
  2008-05-21 17:13 ` Pekka J Enberg
  2008-05-21 17:27   ` Christoph Lameter
  2008-05-21 19:08   ` Pekka Enberg
@ 2008-05-21 23:43   ` Paul Mundt
  2008-05-22  0:01     ` Christoph Lameter
  2 siblings, 1 reply; 22+ messages in thread
From: Paul Mundt @ 2008-05-21 23:43 UTC (permalink / raw)
  To: Pekka J Enberg; +Cc: David Howells, Christoph Lameter, Matt Mackall, linux-mm

On Wed, May 21, 2008 at 08:13:35PM +0300, Pekka J Enberg wrote:
> Hi!
> 
> On Tue, 20 May 2008, Paul Mundt wrote:
> > Moving the existing logic in to SLAB's ksize() and simply wrapping in to
> > ksize() directly seems to do the right thing in all cases, and allows me
> > to boot with any of the slab allocators enabled, rather than simply SLAB
> > by itself.
> > 
> > I've done the same !PageSlab() test in SLAB as SLUB does in its ksize(),
> > which also seems to produce the correct results. Hopefully someone more
> > familiar with the history of kobjsize()/ksize() interaction can scream if
> > this is the wrong thing to do. :-)
> 
> As pointed out by Christoph, it. ksize() works with SLUB and SLOB 
> accidentally because they do page allocator pass-through and thus need to 
> deal with non-PageSlab pages. SLAB, however, does not do that which is why 
> all pages passed to it must have PageSlab set (we ought to add a WARN_ON() 
> there btw).
> 
> So I suggest we fix up kobjsize() instead. Paul, does the following 
> untested patch work for you?
> 
It seems to, but I wonder if compound_order() needs to take a
virt_to_head_page(objp) instead of virt_to_page()?


* Re: [PATCH] nommu: Push kobjsize() slab-specific logic down to ksize().
  2008-05-21 23:43   ` Paul Mundt
@ 2008-05-22  0:01     ` Christoph Lameter
  0 siblings, 0 replies; 22+ messages in thread
From: Christoph Lameter @ 2008-05-22  0:01 UTC (permalink / raw)
  To: Paul Mundt; +Cc: Pekka J Enberg, David Howells, Matt Mackall, linux-mm

On Thu, 22 May 2008, Paul Mundt wrote:

> It seems to, but I wonder if compound_order() needs to take a
> virt_to_head_page(objp) instead of virt_to_page()?

compound_order must be run on the head. So yes.


* Re: [PATCH] nommu: Push kobjsize() slab-specific logic down to ksize().
  2008-05-20 16:52   ` Christoph Lameter
  2008-05-20 18:23     ` Matt Mackall
@ 2008-05-22  4:23     ` Pekka Enberg
  1 sibling, 0 replies; 22+ messages in thread
From: Pekka Enberg @ 2008-05-22  4:23 UTC (permalink / raw)
  To: Christoph Lameter; +Cc: David Howells, Paul Mundt, Matt Mackall, linux-mm

Christoph Lameter wrote:
>  mm/nommu.c |    8 +++-----
>  1 file changed, 3 insertions(+), 5 deletions(-)
> 
> Index: linux-2.6/mm/nommu.c
> ===================================================================
> --- linux-2.6.orig/mm/nommu.c	2008-05-20 09:50:25.686495370 -0700
> +++ linux-2.6/mm/nommu.c	2008-05-20 09:50:51.797745535 -0700
> @@ -109,16 +109,14 @@ unsigned int kobjsize(const void *objp)
>  	 * If the object we have should not have ksize performed on it,
>  	 * return size of 0
>  	 */
> -	if (!objp || (unsigned long)objp >= memory_end || !((page = virt_to_page(objp))))
> +	if (!objp || (unsigned long)objp >= memory_end ||
> +				!((page = virt_to_head_page(objp))))
>  		return 0;
>  
>  	if (PageSlab(page))
>  		return ksize(objp);
>  
> -	BUG_ON(page->index < 0);
> -	BUG_ON(page->index >= MAX_ORDER);
> -
> -	return (PAGE_SIZE << page->index);
> +	return PAGE_SIZE << compound_order(page);

Reviewed-by: Pekka Enberg <penberg@cs.helsinki.fi>

Paul, David, please use this patch instead. Mine didn't have 
virt_to_head_page().
