linux-mm.kvack.org archive mirror
* Corruption with MMOTS slub-bulk-allocation-from-per-cpu-partial-pages.patch
@ 2015-06-08 10:16 Jesper Dangaard Brouer
  2015-06-09  0:22 ` Joonsoo Kim
  0 siblings, 1 reply; 3+ messages in thread
From: Jesper Dangaard Brouer @ 2015-06-08 10:16 UTC (permalink / raw)
  To: Christoph Lameter; +Cc: Andrew Morton, linux-mm


It seems the patch (inserted below) from:
 http://ozlabs.org/~akpm/mmots/broken-out/slub-bulk-allocation-from-per-cpu-partial-pages.patch

is not protecting access to c->partial sufficiently, even though the
section runs under local_irq_disable/enable.  When exercising the bulk
API I can make it crash/corrupt memory when compiled with
CONFIG_SLUB_CPU_PARTIAL=y.

First I suspected:
 object = get_freelist(s, c->page);
but the problem goes away with CONFIG_SLUB_CPU_PARTIAL=n.

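For context, the kind of loop that triggers it is roughly the
following (a minimal sketch only -- the cache name, object size, bulk
count and iteration count are made up for illustration, not the
actual test I'm running):

/*
 * Minimal bulk alloc/free smoke test (illustrative sketch).  Assumes
 * the bool-returning kmem_cache_alloc_bulk() from this patchset.
 */
#include <linux/errno.h>
#include <linux/slab.h>

#define BULK 16

static int bulk_smoke_test(void)
{
	struct kmem_cache *s;
	void *objs[BULK];
	int i;

	s = kmem_cache_create("bulk_test", 256, 0, SLAB_HWCACHE_ALIGN, NULL);
	if (!s)
		return -ENOMEM;

	for (i = 0; i < 100000; i++) {
		/* false return means the bulk allocation failed */
		if (!kmem_cache_alloc_bulk(s, GFP_KERNEL, BULK, objs))
			break;
		kmem_cache_free_bulk(s, BULK, objs);
	}

	kmem_cache_destroy(s);
	return 0;
}
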

From: Christoph Lameter <cl@linux.com>
Subject: slub: bulk allocation from per cpu partial pages

Cover all of the per cpu objects available.

Expand the bulk allocation support to drain the per cpu partial pages
while interrupts are off.

Signed-off-by: Christoph Lameter <cl@linux.com>
Cc: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/slub.c |   36 +++++++++++++++++++++++++++++++++---
 1 file changed, 33 insertions(+), 3 deletions(-)

diff -puN mm/slub.c~slub-bulk-allocation-from-per-cpu-partial-pages mm/slub.c
--- a/mm/slub.c~slub-bulk-allocation-from-per-cpu-partial-pages
+++ a/mm/slub.c
@@ -2769,15 +2769,45 @@ bool kmem_cache_alloc_bulk(struct kmem_c
 		while (size) {
 			void *object = c->freelist;
 
-			if (!object)
-				break;
+			if (unlikely(!object)) {
+				/*
+				 * Check if there are remotely freed objects
+				 * available in the page.
+				 */
+				object = get_freelist(s, c->page);
+
+				if (!object) {
+					/*
+				 * All objects in use, let's check if
+					 * we have other per cpu partial
+					 * pages that have available
+					 * objects.
+					 */
+					c->page = c->partial;
+					if (!c->page) {
+						/* No per cpu objects left */
+						c->freelist = NULL;
+						break;
+					}
+
+					/* Next per cpu partial page */
+					c->partial = c->page->next;
+					c->freelist = get_freelist(s,
+							c->page);
+					continue;
+				}
+
+			}
+
 
-			c->freelist = get_freepointer(s, object);
 			*p++ = object;
 			size--;
 
 			if (unlikely(flags & __GFP_ZERO))
 				memset(object, 0, s->object_size);
+
+			c->freelist = get_freepointer(s, object);
+
 		}
 		c->tid = next_tid(c->tid);
 
_


-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Sr. Network Kernel Developer at Red Hat
  Author of http://www.iptv-analyzer.org
  LinkedIn: http://www.linkedin.com/in/brouer


* Re: Corruption with MMOTS slub-bulk-allocation-from-per-cpu-partial-pages.patch
  2015-06-08 10:16 Corruption with MMOTS slub-bulk-allocation-from-per-cpu-partial-pages.patch Jesper Dangaard Brouer
@ 2015-06-09  0:22 ` Joonsoo Kim
  2015-06-10 10:44   ` Jesper Dangaard Brouer
  0 siblings, 1 reply; 3+ messages in thread
From: Joonsoo Kim @ 2015-06-09  0:22 UTC (permalink / raw)
  To: Jesper Dangaard Brouer; +Cc: Christoph Lameter, Andrew Morton, linux-mm

On Mon, Jun 08, 2015 at 12:16:39PM +0200, Jesper Dangaard Brouer wrote:
> 
> It seems the patch (inserted below) from:
>  http://ozlabs.org/~akpm/mmots/broken-out/slub-bulk-allocation-from-per-cpu-partial-pages.patch
> 
> is not protecting access to c->partial sufficiently, even though the
> section runs under local_irq_disable/enable.  When exercising the bulk
> API I can make it crash/corrupt memory when compiled with
> CONFIG_SLUB_CPU_PARTIAL=y.
> 
> First I suspected:
>  object = get_freelist(s, c->page);
> but the problem goes away with CONFIG_SLUB_CPU_PARTIAL=n.
> 
> 
> From: Christoph Lameter <cl@linux.com>
> Subject: slub: bulk allocation from per cpu partial pages
> 
> Cover all of the per cpu objects available.
> 
> Expand the bulk allocation support to drain the per cpu partial pages
> while interrupts are off.
> 
> Signed-off-by: Christoph Lameter <cl@linux.com>
> Cc: Jesper Dangaard Brouer <brouer@redhat.com>
> Cc: Pekka Enberg <penberg@kernel.org>
> Cc: David Rientjes <rientjes@google.com>
> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
> ---
> 
>  mm/slub.c |   36 +++++++++++++++++++++++++++++++++---
>  1 file changed, 33 insertions(+), 3 deletions(-)
> 
> diff -puN mm/slub.c~slub-bulk-allocation-from-per-cpu-partial-pages mm/slub.c
> --- a/mm/slub.c~slub-bulk-allocation-from-per-cpu-partial-pages
> +++ a/mm/slub.c
> @@ -2769,15 +2769,45 @@ bool kmem_cache_alloc_bulk(struct kmem_c
>  		while (size) {
>  			void *object = c->freelist;
>  
> -			if (!object)
> -				break;
> +			if (unlikely(!object)) {
> +				/*
> +				 * Check if there are remotely freed objects
> +				 * available in the page.
> +				 */
> +				object = get_freelist(s, c->page);
> +
> +				if (!object) {
> +					/*
> +				 * All objects in use, let's check if
> +					 * we have other per cpu partial
> +					 * pages that have available
> +					 * objects.
> +					 */
> +					c->page = c->partial;
> +					if (!c->page) {
> +						/* No per cpu objects left */
> +						c->freelist = NULL;
> +						break;
> +					}
> +
> +					/* Next per cpu partial page */
> +					c->partial = c->page->next;
> +					c->freelist = get_freelist(s,
> +							c->page);
> +					continue;
> +				}
> +
> +			}
> +
>  
> -			c->freelist = get_freepointer(s, object);
>  			*p++ = object;
>  			size--;
>  
>  			if (unlikely(flags & __GFP_ZERO))
>  				memset(object, 0, s->object_size);
> +
> +			c->freelist = get_freepointer(s, object);
> +

Hello,

get_freepointer() should be called before zeroing the object.
That may help with your problem.
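
The free pointer is normally stored inside the object itself, so the
memset() wipes it before get_freepointer() can read it.  Roughly, the
tail of the loop should be ordered like this (just a sketch against the
quoted patch, not a tested fix):

			/*
			 * Read the free pointer before any zeroing: it
			 * lives inside the object, so memset() would
			 * destroy the freelist link.
			 */
			c->freelist = get_freepointer(s, object);

			*p++ = object;
			size--;

			if (unlikely(flags & __GFP_ZERO))
				memset(object, 0, s->object_size);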

Thanks.


* Re: Corruption with MMOTS slub-bulk-allocation-from-per-cpu-partial-pages.patch
  2015-06-09  0:22 ` Joonsoo Kim
@ 2015-06-10 10:44   ` Jesper Dangaard Brouer
  0 siblings, 0 replies; 3+ messages in thread
From: Jesper Dangaard Brouer @ 2015-06-10 10:44 UTC (permalink / raw)
  To: Joonsoo Kim; +Cc: Christoph Lameter, Andrew Morton, linux-mm


To Andrew/Christoph: can we drop this patch?  Then I'll base my work
on top of the previous patch, which also needs some bug fixes, as
pointed out by Joonsoo.

(p.s. if so, then also drop
slub-bulk-allocation-from-per-cpu-partial-pages-fix.patch)


On Tue, 9 Jun 2015 09:22:59 +0900
Joonsoo Kim <iamjoonsoo.kim@lge.com> wrote:

> On Mon, Jun 08, 2015 at 12:16:39PM +0200, Jesper Dangaard Brouer wrote:
> > 
> > It seems the patch (inserted below) from:
> >  http://ozlabs.org/~akpm/mmots/broken-out/slub-bulk-allocation-from-per-cpu-partial-pages.patch
> > 
> > is not protecting access to c->partial sufficiently, even though the
> > section runs under local_irq_disable/enable.  When exercising the bulk
> > API I can make it crash/corrupt memory when compiled with
> > CONFIG_SLUB_CPU_PARTIAL=y.
> > 
> > First I suspected:
> >  object = get_freelist(s, c->page);
> > but the problem goes away with CONFIG_SLUB_CPU_PARTIAL=n.
> > 
> > 
> > From: Christoph Lameter <cl@linux.com>
> > Subject: slub: bulk allocation from per cpu partial pages
> > 
> > Cover all of the per cpu objects available.
> > 
> > Expand the bulk allocation support to drain the per cpu partial pages
> > while interrupts are off.
> > 
> > Signed-off-by: Christoph Lameter <cl@linux.com>
> > Cc: Jesper Dangaard Brouer <brouer@redhat.com>
> > Cc: Pekka Enberg <penberg@kernel.org>
> > Cc: David Rientjes <rientjes@google.com>
> > Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> > Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
> > ---
> > 
> >  mm/slub.c |   36 +++++++++++++++++++++++++++++++++---
> >  1 file changed, 33 insertions(+), 3 deletions(-)
> > 
> > diff -puN mm/slub.c~slub-bulk-allocation-from-per-cpu-partial-pages mm/slub.c
> > --- a/mm/slub.c~slub-bulk-allocation-from-per-cpu-partial-pages
> > +++ a/mm/slub.c
> > @@ -2769,15 +2769,45 @@ bool kmem_cache_alloc_bulk(struct kmem_c
> >  		while (size) {
> >  			void *object = c->freelist;
> >  
> > -			if (!object)
> > -				break;
> > +			if (unlikely(!object)) {
> > +				/*
> > +				 * Check if there are remotely freed objects
> > +				 * available in the page.
> > +				 */
> > +				object = get_freelist(s, c->page);
> > +
> > +				if (!object) {
> > +					/*
> > +				 * All objects in use, let's check if
> > +					 * we have other per cpu partial
> > +					 * pages that have available
> > +					 * objects.
> > +					 */
> > +					c->page = c->partial;
> > +					if (!c->page) {
> > +						/* No per cpu objects left */
> > +						c->freelist = NULL;
> > +						break;
> > +					}
> > +
> > +					/* Next per cpu partial page */
> > +					c->partial = c->page->next;
> > +					c->freelist = get_freelist(s,
> > +							c->page);
> > +					continue;
> > +				}
> > +
> > +			}
> > +
> >  
> > -			c->freelist = get_freepointer(s, object);
> >  			*p++ = object;
> >  			size--;
> >  
> >  			if (unlikely(flags & __GFP_ZERO))
> >  				memset(object, 0, s->object_size);
> > +
> > +			c->freelist = get_freepointer(s, object);
> > +
> 
> Hello,
> 
> get_freepointer() should be called before zeroing the object.
> That may help with your problem.

That is a bug, but I'm not invoking it with __GFP_ZERO...

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Sr. Network Kernel Developer at Red Hat
  Author of http://www.iptv-analyzer.org
  LinkedIn: http://www.linkedin.com/in/brouer


Thread overview: 3+ messages
2015-06-08 10:16 Corruption with MMOTS slub-bulk-allocation-from-per-cpu-partial-pages.patch Jesper Dangaard Brouer
2015-06-09  0:22 ` Joonsoo Kim
2015-06-10 10:44   ` Jesper Dangaard Brouer
