From: Steven Rostedt <rostedt@goodmis.org>
To: linux-kernel@vger.kernel.org,
	linux-rt-users <linux-rt-users@vger.kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>,
	Carsten Emde <C.Emde@osadl.org>, John Kacur <jkacur@redhat.com>
Subject: [PATCH RT 3/8] mm: slab: Fix potential deadlock
Date: Wed, 10 Oct 2012 09:34:09 -0400	[thread overview]
Message-ID: <20121010133523.438476409@goodmis.org> (raw)
In-Reply-To: <20121010133406.053812290@goodmis.org>


From: Thomas Gleixner <tglx@linutronix.de>

 =============================================
[ INFO: possible recursive locking detected ]
 3.6.0-rt1+ #49 Not tainted
 ---------------------------------------------
 swapper/0/1 is trying to acquire lock:
 lock_slab_on+0x72/0x77

 but task is already holding lock:
 __local_lock_irq+0x24/0x77

 other info that might help us debug this:
  Possible unsafe locking scenario:

        CPU0
        ----
   lock(&per_cpu(slab_lock, __cpu).lock);
   lock(&per_cpu(slab_lock, __cpu).lock);

  *** DEADLOCK ***

  May be due to missing lock nesting notation

 2 locks held by swapper/0/1:
 kmem_cache_create+0x33/0x89
 __local_lock_irq+0x24/0x77

 stack backtrace:
 Pid: 1, comm: swapper/0 Not tainted 3.6.0-rt1+ #49
 Call Trace:
 __lock_acquire+0x9a4/0xdc4
 ? __local_lock_irq+0x24/0x77
 ? lock_slab_on+0x72/0x77
 lock_acquire+0xc4/0x108
 ? lock_slab_on+0x72/0x77
 ? unlock_slab_on+0x5b/0x5b
 rt_spin_lock+0x36/0x3d
 ? lock_slab_on+0x72/0x77
 ? migrate_disable+0x85/0x93
 lock_slab_on+0x72/0x77
 do_ccupdate_local+0x19/0x44
 slab_on_each_cpu+0x36/0x5a
 do_tune_cpucache+0xc1/0x305
 enable_cpucache+0x8c/0xb5
 setup_cpu_cache+0x28/0x182
 __kmem_cache_create+0x34b/0x380
 ? shmem_mount+0x1a/0x1a
 kmem_cache_create+0x4a/0x89
 ? shmem_mount+0x1a/0x1a
 shmem_init+0x3e/0xd4
 kernel_init+0x11c/0x214
 kernel_thread_helper+0x4/0x10
 ? retint_restore_args+0x13/0x13
 ? start_kernel+0x3bc/0x3bc
 ? gs_change+0x13/0x13

It's not a missing annotation; the code is simply wrong and needs to be
fixed. Instead of nesting the local and the remote CPU locks, acquire
only the remote CPU lock, which is sufficient protection for this
procedure.

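A rough userspace analogue of the two locking schemes (a sketch only:
pthread mutexes stand in for the per-CPU slab_lock instances, and the
function names below are illustrative, not the kernel's macros). The
point is that the old path nests two locks of the same class, which is
what lockdep flags above, while the fixed path takes a single lock:

  #include <pthread.h>
  #include <stdio.h>

  #define NR_CPUS 4

  /* stand-in for per_cpu(slab_lock, cpu).lock - all instances belong
   * to the same lock class in lockdep terms */
  static pthread_mutex_t slab_lock[NR_CPUS] = {
  	PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
  	PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER
  };

  /* old scheme: for a remote cpu, the local lock is taken first and the
   * remote lock is nested inside it - two locks of the same class */
  static void lock_slab_on_old(int this_cpu, int cpu)
  {
  	pthread_mutex_lock(&slab_lock[this_cpu]);	/* local_lock_irq()    */
  	if (cpu != this_cpu)
  		pthread_mutex_lock(&slab_lock[cpu]);	/* nested remote lock  */
  }

  /* fixed scheme: take only the target cpu's lock, local or remote,
   * which is all the protection do_ccupdate_local() needs */
  static void lock_slab_on_new(int cpu)
  {
  	pthread_mutex_lock(&slab_lock[cpu]);		/* local_lock_irq_on() */
  }

  static void unlock_slab_on_new(int cpu)
  {
  	pthread_mutex_unlock(&slab_lock[cpu]);	/* local_unlock_irq_on() */
  }

  int main(void)
  {
  	/* fixed scheme: a single lock, nothing nested */
  	lock_slab_on_new(2);
  	unlock_slab_on_new(2);

  	/* old scheme: nests slab_lock[0] and slab_lock[2], same class -
  	 * in the kernel this is the pattern lockdep complains about */
  	lock_slab_on_old(0, 2);
  	pthread_mutex_unlock(&slab_lock[2]);
  	pthread_mutex_unlock(&slab_lock[0]);

  	printf("only the old scheme nests same-class locks\n");
  	return 0;
  }
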
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable-rt@vger.kernel.org
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 include/linux/locallock.h |    8 ++++++++
 mm/slab.c                 |   10 ++--------
 2 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/include/linux/locallock.h b/include/linux/locallock.h
index 8fbc393..0161fbb 100644
--- a/include/linux/locallock.h
+++ b/include/linux/locallock.h
@@ -96,6 +96,9 @@ static inline void __local_lock_irq(struct local_irq_lock *lv)
 #define local_lock_irq(lvar)						\
 	do { __local_lock_irq(&get_local_var(lvar)); } while (0)
 
+#define local_lock_irq_on(lvar, cpu)					\
+	do { __local_lock_irq(&per_cpu(lvar, cpu)); } while (0)
+
 static inline void __local_unlock_irq(struct local_irq_lock *lv)
 {
 	LL_WARN(!lv->nestcnt);
@@ -111,6 +114,11 @@ static inline void __local_unlock_irq(struct local_irq_lock *lv)
 		put_local_var(lvar);					\
 	} while (0)
 
+#define local_unlock_irq_on(lvar, cpu)					\
+	do {								\
+		__local_unlock_irq(&per_cpu(lvar, cpu));		\
+	} while (0)
+
 static inline int __local_lock_irqsave(struct local_irq_lock *lv)
 {
 	if (lv->owner != current) {
diff --git a/mm/slab.c b/mm/slab.c
index 64eb636..09addf6 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -751,18 +751,12 @@ slab_on_each_cpu(void (*func)(void *arg, int this_cpu), void *arg)
 
 static void lock_slab_on(unsigned int cpu)
 {
-	if (cpu == smp_processor_id())
-		local_lock_irq(slab_lock);
-	else
-		local_spin_lock_irq(slab_lock, &per_cpu(slab_lock, cpu).lock);
+	local_lock_irq_on(slab_lock, cpu);
 }
 
 static void unlock_slab_on(unsigned int cpu)
 {
-	if (cpu == smp_processor_id())
-		local_unlock_irq(slab_lock);
-	else
-		local_spin_unlock_irq(slab_lock, &per_cpu(slab_lock, cpu).lock);
+	local_unlock_irq_on(slab_lock, cpu);
 }
 #endif
 
-- 
1.7.10.4
