* [PATCH] slub: Hold list_lock unconditionally before the call to add_full.
From: Gautham R Shenoy @ 2014-02-07 18:46 UTC
  To: linux-kernel; +Cc: peterz, penberg

Hi,

From the lockdep annotation and the comment that existed before the
lockdep annotations were introduced, 
mm/slub.c:add_full(s, n, page) expects to be called with n->list_lock
held.
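
For context, that expectation is expressed as a lockdep_assert_held() at
the top of the helper. A minimal sketch of the pattern (illustrative
only, not the verbatim mm/slub.c body):

	/* Sketch of the locking contract, not the exact upstream code. */
	static void add_full(struct kmem_cache *s,
			     struct kmem_cache_node *n, struct page *page)
	{
		/* Callers must hold the per-node list lock. */
		lockdep_assert_held(&n->list_lock);

		/* n->full is protected by n->list_lock. */
		list_add(&page->lru, &n->full);
	}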

However, there's a call path in deactivate_slab() when

	 (new.inuse || n->nr_partial <= s->min_partial) &&
	 !(new.freelist) &&
         !(kmem_cache_debug(s))

which ends up calling add_full() without holding
n->list_lock.

This was discovered while onlining/offlining cpus in 3.14-rc1 due to
the lockdep annotations added by commit
c65c1877bd6826ce0d9713d76e30a7bed8e49f38.

Fix this by unconditionally taking the lock
irrespective of the state of kmem_cache_debug(s).

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Pekka Enberg <penberg@kernel.org>
Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
---
 mm/slub.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/slub.c b/mm/slub.c
index 7e3e045..1f723f7 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1882,7 +1882,7 @@ redo:
 		}
 	} else {
 		m = M_FULL;
-		if (kmem_cache_debug(s) && !lock) {
+		if (!lock) {
 			lock = 1;
 			/*
 			 * This also ensures that the scanning of full
-- 
1.8.3.1


* Re: [PATCH] slub: Hold list_lock unconditionally before the call to add_full.
From: David Rientjes @ 2014-02-07 20:46 UTC
  To: Gautham R Shenoy; +Cc: linux-kernel, peterz, penberg

On Sat, 8 Feb 2014, Gautham R Shenoy wrote:

> Hi,
> 
> From the lockdep annotation and the comment that existed before the
> lockdep annotations were introduced, 
> mm/slub.c:add_full(s, n, page) expects to be called with n->list_lock
> held.
> 
> However, there's a call path in deactivate_slab() when
> 
> 	 (new.inuse || n->nr_partial <= s->min_partial) &&
> 	 !(new.freelist) &&
>          !(kmem_cache_debug(s))
> 
> which ends up calling add_full() without holding
> n->list_lock.
> 
> This was discovered while onlining/offlining cpus in 3.14-rc1 due to
> the lockdep annotations added by commit
> c65c1877bd6826ce0d9713d76e30a7bed8e49f38.
> 
> Fix this by unconditionally taking the lock
> irrespective of the state of kmem_cache_debug(s).
> 
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Pekka Enberg <penberg@kernel.org>
> Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>

No, it's not needed unless kmem_cache_debug(s) is actually set, 
specifically s->flags & SLAB_STORE_USER.
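
To make that concrete: the full-slab bookkeeping only exists for
SLAB_STORE_USER caches, so there is nothing for list_lock to protect on
the !kmem_cache_debug(s) path. Roughly (a sketch based on the remark
above, not a verbatim copy of mm/slub.c or of the patch linked below):

	static void add_full(struct kmem_cache *s,
			     struct kmem_cache_node *n, struct page *page)
	{
		/* No n->full tracking for this cache: nothing needs the lock. */
		if (!(s->flags & SLAB_STORE_USER))
			return;

		/* Only this debug-only list update requires n->list_lock. */
		lockdep_assert_held(&n->list_lock);
		list_add(&page->lru, &n->full);
	}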

You want the patch at http://marc.info/?l=linux-kernel&m=139147105027693 
instead which is already in -mm and linux-next.

* Re: [PATCH] slub: Hold list_lock unconditionally before the call to add_full.
From: Gautham R Shenoy @ 2014-02-08  3:00 UTC
  To: David Rientjes; +Cc: Gautham R Shenoy, linux-kernel, peterz, penberg

On Fri, Feb 07, 2014 at 12:46:19PM -0800, David Rientjes wrote:
> On Sat, 8 Feb 2014, Gautham R Shenoy wrote:
> 
> > Hi,
> > 
> > From the lockdep annotation and the comment that existed before the
> > lockdep annotations were introduced, 
> > mm/slub.c:add_full(s, n, page) expects to be called with n->list_lock
> > held.
> > 
> > However, there's a call path in deactivate_slab() when
> > 
> > 	 (new.inuse || n->nr_partial <= s->min_partial) &&
> > 	 !(new.freelist) &&
> >          !(kmem_cache_debug(s))
> > 
> > which ends up calling add_full() without holding
> > n->list_lock.
> > 
> > This was discovered while onlining/offlining cpus in 3.14-rc1 due to
> > the lockdep annotations added by commit
> > c65c1877bd6826ce0d9713d76e30a7bed8e49f38.
> > 
> > Fix this by unconditionally taking the lock
> > irrespective of the state of kmem_cache_debug(s).
> > 
> > Cc: Peter Zijlstra <peterz@infradead.org>
> > Cc: Pekka Enberg <penberg@kernel.org>
> > Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
> 
> No, it's not needed unless kmem_cache_debug(s) is actually set, 
> specifically s->flags & SLAB_STORE_USER.
> 
> You want the patch at http://marc.info/?l=linux-kernel&m=139147105027693 
> instead which is already in -mm and linux-next.
>

Ah, thanks! Wasn't aware of this fix. Shall apply this one.

--
Thanks and Regards
gautham. 

