public inbox for linux-kernel@vger.kernel.org
From: Mike Galbraith <umgwanakikbuti@gmail.com>
To: Thomas Gleixner <tglx@linutronix.de>,
	LKML <linux-kernel@vger.kernel.org>
Cc: linux-rt-users <linux-rt-users@vger.kernel.org>,
	Sebastian Sewior <bigeasy@linutronix.de>,
	Steven Rostedt <rostedt@goodmis.org>
Subject: Re: [ANNOUNCE] v4.4.3-rt9
Date: Mon, 29 Feb 2016 15:27:40 +0100	[thread overview]
Message-ID: <1456756060.3488.120.camel@gmail.com> (raw)
In-Reply-To: <alpine.DEB.2.11.1602291212180.3638@nanos>

On Mon, 2016-02-29 at 13:46 +0100, Thomas Gleixner wrote:
> Dear RT folks!
> 
> I'm pleased to announce the v4.4.3-rt9 patch set. v4.4.2-rt7 and v4.4.3-rt8
> are non-announced updates to incorporate the linux-4.4.y stable tree.
> 
> There is one change caused by the 4.4.3 update:
> 
>   The relaxed handling of dump_stack() on RT has been dropped as there is
>   actually a potential deadlock lurking around the corner. See: commit
>   d7ce36924344 upstream. This does not affect the other facilities which
>   gather stack traces.

Hrm.  I had rolled that dropped bit forward, as below.  I was given
cause to do a very large pile of ltp oom4 testing (rt kernels will
livelock because waitqueue workers wait for kthreadd to get memory to
spawn a kworker thread while a stuck kworker holds the manager mutex,
unless workers are run as rt tasks to keep the pool from being depleted
in the first place), which gave it oodles of exercise, and all _seemed_
well.  Only seemed?

--- a/lib/dump_stack.c	2016-02-29 14:20:29.512510444 +0100
+++ b/lib/dump_stack.c	2016-02-26 13:03:15.755297038 +0100
@@ -8,6 +8,7 @@
 #include <linux/sched.h>
 #include <linux/smp.h>
 #include <linux/atomic.h>
+#include <linux/locallock.h>
 
 static void __dump_stack(void)
 {
@@ -22,6 +23,7 @@ static void __dump_stack(void)
  */
 #ifdef CONFIG_SMP
 static atomic_t dump_lock = ATOMIC_INIT(-1);
+static DEFINE_LOCAL_IRQ_LOCK(dump_stack_irq_lock);
 
 asmlinkage __visible void dump_stack(void)
 {
@@ -35,7 +37,7 @@ asmlinkage __visible void dump_stack(voi
 	 * against other CPUs
 	 */
 retry:
-	local_irq_save(flags);
+	local_lock_irqsave(dump_stack_irq_lock, flags);
 	cpu = smp_processor_id();
 	old = atomic_cmpxchg(&dump_lock, -1, cpu);
 	if (old == -1) {
@@ -43,7 +45,7 @@ retry:
 	} else if (old == cpu) {
 		was_locked = 1;
 	} else {
-		local_irq_restore(flags);
+		local_unlock_irqrestore(dump_stack_irq_lock, flags);
 		cpu_relax();
 		goto retry;
 	}
@@ -53,7 +55,7 @@ retry:
 	if (!was_locked)
 		atomic_set(&dump_lock, -1);
 
-	local_irq_restore(flags);
+	local_unlock_irqrestore(dump_stack_irq_lock, flags);
 }
 #else
 asmlinkage __visible void dump_stack(void)


Thread overview: 6+ messages
2016-02-29 12:46 [ANNOUNCE] v4.4.3-rt9 Thomas Gleixner
2016-02-29 14:27 ` Mike Galbraith [this message]
2016-02-29 15:00   ` Thomas Gleixner
2016-02-29 15:09     ` Mike Galbraith
2016-02-29 19:54 ` Steven Rostedt
2016-02-29 20:11   ` Thomas Gleixner
