From: Michael Wang <wangyun@linux.vnet.ibm.com>
To: LKML <linux-kernel@vger.kernel.org>, linux-mm@kvack.org
Cc: Matt Mackall <mpm@selenic.com>, Pekka Enberg <penberg@kernel.org>,
Christoph Lameter <cl@linux-foundation.org>
Subject: [PATCH] slab: fix the DEADLOCK issue on l3 alien lock
Date: Wed, 05 Sep 2012 10:33:18 +0800
Message-ID: <5046B9EE.7000804@linux.vnet.ibm.com>
In-Reply-To: <5044692D.7080608@linux.vnet.ibm.com>

A DEADLOCK will be reported when running a kernel with NUMA and LOCKDEP
enabled. The call chain behind this false positive is:
kmem_cache_free() //free obj in cachep
-> cache_free_alien() //acquire cachep's l3 alien lock
-> __drain_alien_cache()
-> free_block()
-> slab_destroy()
-> kmem_cache_free() //free slab in cachep->slabp_cache
-> cache_free_alien() //acquire cachep->slabp_cache's l3 alien lock
Since cachep's and cachep->slabp_cache's l3 alien locks are in the same
lock class, a false report is generated.
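(For illustration only, not part of the patch: a minimal, hypothetical
sketch of the pattern lockdep complains about. Lockdep puts every lock
initialised at the same spin_lock_init() call site into one class, so
nesting two such locks from different objects looks like recursive
locking. The struct and function names below are made up.)

    #include <linux/spinlock.h>

    struct node_cache {                 /* stands in for the per-node l3 data */
            spinlock_t alien_lock;
    };

    static void init_node_cache(struct node_cache *nc)
    {
            spin_lock_init(&nc->alien_lock);   /* all instances share one class */
    }

    static void nested_same_class(struct node_cache *a, struct node_cache *b)
    {
            spin_lock(&a->alien_lock);
            spin_lock(&b->alien_lock);   /* same class nested -> DEADLOCK report */
            spin_unlock(&b->alien_lock);
            spin_unlock(&a->alien_lock);
    }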
This should not happen, since we already have init_lock_keys(), which
reassigns the lock class for both the l3 list and the l3 alien locks.
However, init_lock_keys() was invoked in the wrong place: before
enable_cpucache() is invoked on each cache. Until slab_state is set to
FULL, caches are created without enable_cpucache() being invoked on them,
so their l3 alien structures do not exist yet. Although init_lock_keys()
runs, it cannot change the l3 alien lock class, because those locks are
only created when enable_cpucache() is invoked later.
This patch invokes init_lock_keys() after enable_cpucache() is done,
instead of before, to avoid the false DEADLOCK report.
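(Again for illustration only: a rough sketch of the init_lock_keys() idea
with made-up names, not the actual mm/slab.c code. Giving the locks of the
slab-management (kmalloc) caches their own class key turns the nested
acquisition into two different classes, so lockdep no longer treats it as
recursion. This can only work once the alien locks actually exist, which
is why the call has to happen after enable_cpucache().)

    #include <linux/lockdep.h>

    static struct lock_class_key kmalloc_alien_key;

    static void annotate_kmalloc_cache(struct node_cache *nc)
    {
            /* move this lock out of the default spin_lock_init() class */
            lockdep_set_class(&nc->alien_lock, &kmalloc_alien_key);
    }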
Signed-off-by: Michael Wang <wangyun@linux.vnet.ibm.com>
---
mm/slab.c | 6 +++---
1 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/mm/slab.c b/mm/slab.c
index d4715e5..cc679ef 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1780,9 +1780,6 @@ void __init kmem_cache_init_late(void)
slab_state = UP;
- /* Annotate slab for lockdep -- annotate the malloc caches */
- init_lock_keys();
-
/* 6) resize the head arrays to their final sizes */
mutex_lock(&slab_mutex);
list_for_each_entry(cachep, &slab_caches, list)
@@ -1790,6 +1787,9 @@ void __init kmem_cache_init_late(void)
BUG();
mutex_unlock(&slab_mutex);
+ /* Annotate slab for lockdep -- annotate the malloc caches */
+ init_lock_keys();
+
/* Done! */
slab_state = FULL;
--
1.7.4.1