From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752683Ab2CDKe4 (ORCPT );
	Sun, 4 Mar 2012 05:34:56 -0500
Received: from mail-pz0-f52.google.com ([209.85.210.52]:57665 "EHLO
	mail-pz0-f52.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1752216Ab2CDKey (ORCPT );
	Sun, 4 Mar 2012 05:34:54 -0500
Authentication-Results: mr.google.com; spf=pass (google.com: domain of
	minchan.kim@gmail.com designates 10.68.223.138 as permitted sender)
	smtp.mail=minchan.kim@gmail.com; dkim=pass header.i=minchan.kim@gmail.com
Date: Sun, 4 Mar 2012 19:34:46 +0900
From: Minchan Kim
To: Namhyung Kim
Cc: Christoph Lameter, Pekka Enberg, Matt Mackall, Namhyung Kim,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH -next] slub: set PG_slab on all of slab pages
Message-ID: <20120304103446.GA9267@barrios>
References: <1330505674-31610-1-git-send-email-namhyung.kim@lge.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <1330505674-31610-1-git-send-email-namhyung.kim@lge.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

Hi Namhyung,

On Wed, Feb 29, 2012 at 05:54:34PM +0900, Namhyung Kim wrote:
> Unlike SLAB, SLUB doesn't set PG_slab on tail pages, so if a user would
> call free_pages() incorrectly on an object in a tail page, she will get
> confused by the undefined result. Setting the flag would help her by
> emitting a warning from bad_page() in such a case.
>
> Reported-by: Sangseok Lee
> Signed-off-by: Namhyung Kim

I read this thread and I feel we haven't reached the right point yet.
I don't think this is a compound page problem. We can hit the same
problem whenever we allocate a high-order page without __GFP_COMP and
then free a middle page of it. Fortunately, we can already catch such
a problem via put_page_testzero() in __free_pages() if you enable
CONFIG_DEBUG_VM.
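To make the scenario concrete, I mean something like this (a hypothetical
sketch, not code from the patch or from any caller in the tree):

	/*
	 * Hypothetical misuse: a 4-page allocation without __GFP_COMP,
	 * followed by a bogus free of a middle page.
	 */
	struct page *page = alloc_pages(GFP_KERNEL, 2);	/* no __GFP_COMP */

	/* ... */

	/*
	 * Freeing a middle page is a caller bug.  The subpages of a
	 * non-compound high-order allocation have refcount 0, so with
	 * CONFIG_DEBUG_VM the put_page_testzero() in __free_pages()
	 * should already trip a VM_BUG_ON here.
	 */
	__free_pages(page + 1, 0);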
Did you try that with CONFIG_DEBUG_VM?

> ---
>  mm/slub.c |   12 ++++++++++--
>  1 files changed, 10 insertions(+), 2 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 33bab2aca882..575baacbec9b 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -1287,6 +1287,7 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
>  	struct page *page;
>  	struct kmem_cache_order_objects oo = s->oo;
>  	gfp_t alloc_gfp;
> +	int i;
>  
>  	flags &= gfp_allowed_mask;
>  
> @@ -1320,6 +1321,9 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
>  	if (!page)
>  		return NULL;
>  
> +	for (i = 0; i < 1 << oo_order(oo); i++)
> +		__SetPageSlab(page + i);
> +
>  	if (kmemcheck_enabled
>  		&& !(s->flags & (SLAB_NOTRACK | DEBUG_DEFAULT_FLAGS))) {
>  		int pages = 1 << oo_order(oo);
> @@ -1369,7 +1373,6 @@ static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
>  
>  	inc_slabs_node(s, page_to_nid(page), page->objects);
>  	page->slab = s;
> -	page->flags |= 1 << PG_slab;
>  
>  	start = page_address(page);
>  
> @@ -1396,6 +1399,7 @@ static void __free_slab(struct kmem_cache *s, struct page *page)
>  {
>  	int order = compound_order(page);
>  	int pages = 1 << order;
> +	int i;
>  
>  	if (kmem_cache_debug(s)) {
>  		void *p;
> @@ -1413,7 +1417,11 @@ static void __free_slab(struct kmem_cache *s, struct page *page)
>  		NR_SLAB_RECLAIMABLE : NR_SLAB_UNRECLAIMABLE,
>  		-pages);
>  
> -	__ClearPageSlab(page);
> +	for (i = 0; i < pages; i++) {
> +		BUG_ON(!PageSlab(page + i));
> +		__ClearPageSlab(page + i);
> +	}
> +
>  	reset_page_mapcount(page);
>  	if (current->reclaim_state)
>  		current->reclaim_state->reclaimed_slab += pages;
> --
> 1.7.9
>
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majordomo@kvack.org.  For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Fight unfair telecom internet charges in Canada: sign http://stopthemeter.ca/
> Don't email: email@kvack.org