* [PATCH RT 0/8] [ANNOUNCE] 3.4.13-rt22-rc1 stable review
From: Steven Rostedt @ 2012-10-10 13:34 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users; +Cc: Thomas Gleixner, Carsten Emde, John Kacur


Dear RT Folks,

This is the RT stable review cycle of patch 3.4.13-rt22-rc1.

Please scream at me if I messed something up. Please test the patches too.

The -rc release will be uploaded to kernel.org and will be deleted when
the final release is out. This is just a review release (or release candidate).

The pre-releases will not be pushed to the git repository; only the
final release will be.

If all goes well, this release candidate will be converted to the next
main release on 10/15/2012.

Enjoy,

-- Steve


To build 3.4.13-rt22-rc1 directly, the following patches should be applied:

  http://www.kernel.org/pub/linux/kernel/v3.x/linux-3.4.tar.xz

  http://www.kernel.org/pub/linux/kernel/v3.x/patch-3.4.13.xz

  http://www.kernel.org/pub/linux/kernel/projects/rt/3.4/patch-3.4.13-rt22-rc1.patch.xz

You can also build from 3.4.13-rt21 by applying the incremental patch:

http://www.kernel.org/pub/linux/kernel/projects/rt/3.4/incr/patch-3.4.13-rt21-rt22-rc1.patch.xz


Changes from 3.4.13-rt21:

---


Steven Rostedt (2):
      softirq: Init softirq local lock after per cpu section is set up
      Linux 3.4.13-rt22-rc1

Thomas Gleixner (6):
      random: Make it work on rt
      mm: slab: Fix potential deadlock
      mm: page_alloc: Use local_lock_on() instead of plain spinlock
      rt: rwsem/rwlock: lockdep annotations
      sched: Better debug output for might sleep
      stomp_machine: Use mutex_trylock when called from inactive cpu

----
 drivers/char/random.c     |   10 ++++++----
 include/linux/irqdesc.h   |    1 +
 include/linux/locallock.h |   19 +++++++++++++++++++
 include/linux/random.h    |    2 +-
 include/linux/sched.h     |    4 ++++
 init/main.c               |    2 +-
 kernel/irq/handle.c       |    7 +++++--
 kernel/irq/manage.c       |    6 ++++++
 kernel/rt.c               |   46 ++++++++++++++++++++++++---------------------
 kernel/sched/core.c       |   23 +++++++++++++++++++++--
 kernel/stop_machine.c     |   13 +++++++++----
 localversion-rt           |    2 +-
 mm/page_alloc.c           |    4 ++--
 mm/slab.c                 |   10 ++--------
 14 files changed, 103 insertions(+), 46 deletions(-)


* [PATCH RT 1/8] random: Make it work on rt
From: Steven Rostedt @ 2012-10-10 13:34 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users; +Cc: Thomas Gleixner, Carsten Emde, John Kacur


From: Thomas Gleixner <tglx@linutronix.de>

Delegate the random insertion to the forced threaded interrupt
handler. Store the return IP of the hard interrupt handler in the irq
descriptor and feed it into the random generator as a source of
entropy.
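
In outline, the flow after the change looks like this (a condensed
sketch of the diff below; unrelated detail elided):

	/* Hard interrupt context: only capture where we came from. */
	irqreturn_t
	handle_irq_event_percpu(struct irq_desc *desc, struct irqaction *action)
	{
		struct pt_regs *regs = get_irq_regs();
		u64 ip = regs ? instruction_pointer(regs) : 0;
		...
	#ifndef CONFIG_PREEMPT_RT_FULL
		add_interrupt_randomness(irq, flags, ip);
	#else
		desc->random_ip = ip;	/* defer the mixing to the irq thread */
	#endif
		...
	}

	/* Forced threaded handler: may take sleeping locks on RT, so the
	 * pool mixing happens here, pinned to this CPU's fast_pool. */
	static int irq_thread(void *data)
	{
		...
	#ifdef CONFIG_PREEMPT_RT_FULL
		migrate_disable();
		add_interrupt_randomness(action->irq, 0,
					 desc->random_ip ^ (u64) action);
		migrate_enable();
	#endif
		...
	}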

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable-rt@vger.kernel.org
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 drivers/char/random.c   |   10 ++++++----
 include/linux/irqdesc.h |    1 +
 include/linux/random.h  |    2 +-
 kernel/irq/handle.c     |    7 +++++--
 kernel/irq/manage.c     |    6 ++++++
 5 files changed, 19 insertions(+), 7 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index feae549..f786798 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -745,18 +745,16 @@ EXPORT_SYMBOL_GPL(add_input_randomness);
 
 static DEFINE_PER_CPU(struct fast_pool, irq_randomness);
 
-void add_interrupt_randomness(int irq, int irq_flags)
+void add_interrupt_randomness(int irq, int irq_flags, __u64 ip)
 {
 	struct entropy_store	*r;
 	struct fast_pool	*fast_pool = &__get_cpu_var(irq_randomness);
-	struct pt_regs		*regs = get_irq_regs();
 	unsigned long		now = jiffies;
 	__u32			input[4], cycles = get_cycles();
 
 	input[0] = cycles ^ jiffies;
 	input[1] = irq;
-	if (regs) {
-		__u64 ip = instruction_pointer(regs);
+	if (ip) {
 		input[2] = ip;
 		input[3] = ip >> 32;
 	}
@@ -770,7 +768,11 @@ void add_interrupt_randomness(int irq, int irq_flags)
 	fast_pool->last = now;
 
 	r = nonblocking_pool.initialized ? &input_pool : &nonblocking_pool;
+#ifndef CONFIG_PREEMPT_RT_FULL
 	__mix_pool_bytes(r, &fast_pool->pool, sizeof(fast_pool->pool), NULL);
+#else
+	mix_pool_bytes(r, &fast_pool->pool, sizeof(fast_pool->pool), NULL);
+#endif
 	/*
 	 * If we don't have a valid cycle counter, and we see
 	 * back-to-back timer interrupts, then skip giving credit for
diff --git a/include/linux/irqdesc.h b/include/linux/irqdesc.h
index 9a323d1..5bf5add 100644
--- a/include/linux/irqdesc.h
+++ b/include/linux/irqdesc.h
@@ -52,6 +52,7 @@ struct irq_desc {
 	unsigned int		irq_count;	/* For detecting broken IRQs */
 	unsigned long		last_unhandled;	/* Aging timer for unhandled count */
 	unsigned int		irqs_unhandled;
+	u64			random_ip;
 	raw_spinlock_t		lock;
 	struct cpumask		*percpu_enabled;
 #ifdef CONFIG_SMP
diff --git a/include/linux/random.h b/include/linux/random.h
index ac621ce..9ae01e2 100644
--- a/include/linux/random.h
+++ b/include/linux/random.h
@@ -51,7 +51,7 @@ struct rnd_state {
 extern void add_device_randomness(const void *, unsigned int);
 extern void add_input_randomness(unsigned int type, unsigned int code,
 				 unsigned int value);
-extern void add_interrupt_randomness(int irq, int irq_flags);
+extern void add_interrupt_randomness(int irq, int irq_flags, __u64 ip);
 
 extern void get_random_bytes(void *buf, int nbytes);
 extern void get_random_bytes_arch(void *buf, int nbytes);
diff --git a/kernel/irq/handle.c b/kernel/irq/handle.c
index 311c4e6..7f50c55 100644
--- a/kernel/irq/handle.c
+++ b/kernel/irq/handle.c
@@ -132,6 +132,8 @@ static void irq_wake_thread(struct irq_desc *desc, struct irqaction *action)
 irqreturn_t
 handle_irq_event_percpu(struct irq_desc *desc, struct irqaction *action)
 {
+	struct pt_regs *regs = get_irq_regs();
+	u64 ip = regs ? instruction_pointer(regs) : 0;
 	irqreturn_t retval = IRQ_NONE;
 	unsigned int flags = 0, irq = desc->irq_data.irq;
 
@@ -173,8 +175,9 @@ handle_irq_event_percpu(struct irq_desc *desc, struct irqaction *action)
 	} while (action);
 
 #ifndef CONFIG_PREEMPT_RT_FULL
-	/* FIXME: Can we unbreak that ? */
-	add_interrupt_randomness(irq, flags);
+	add_interrupt_randomness(irq, flags, ip);
+#else
+	desc->random_ip = ip;
 #endif
 
 	if (!noirqdebug)
diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index ede56ac..90bf44a 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -814,6 +814,12 @@ static int irq_thread(void *data)
 		if (!noirqdebug)
 			note_interrupt(action->irq, desc, action_ret);
 
+#ifdef CONFIG_PREEMPT_RT_FULL
+		migrate_disable();
+		add_interrupt_randomness(action->irq, 0,
+					 desc->random_ip ^ (u64) action);
+		migrate_enable();
+#endif
 		wake_threads_waitq(desc);
 	}
 
-- 
1.7.10.4




* [PATCH RT 2/8] softirq: Init softirq local lock after per cpu section is set up
From: Steven Rostedt @ 2012-10-10 13:34 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, John Kacur, Clark Williams


From: Steven Rostedt <rostedt@goodmis.org>

I discovered this bug when booting 3.4-rt on my powerpc box. It crashed
with the following report:

------------[ cut here ]------------
kernel BUG at /work/rt/stable-rt.git/kernel/rtmutex_common.h:75!
Oops: Exception in kernel mode, sig: 5 [#1]
PREEMPT SMP NR_CPUS=64 NUMA PA Semi PWRficient
Modules linked in:
NIP: c0000000004aa03c LR: c0000000004aa01c CTR: c00000000009b2ac
REGS: c00000003e8d7950 TRAP: 0700   Not tainted  (3.4.11-test-rt19)
MSR: 9000000000029032 <SF,HV,EE,ME,IR,DR,RI>  CR: 24000082  XER: 20000000
SOFTE: 0
TASK = c00000003e8fdcd0[11] 'ksoftirqd/1' THREAD: c00000003e8d4000 CPU: 1
GPR00: 0000000000000001 c00000003e8d7bd0 c000000000d6cbb0 0000000000000000
GPR04: c00000003e8fdcd0 0000000000000000 0000000024004082 c000000000011454
GPR08: 0000000000000000 0000000080000001 c00000003e8fdcd1 0000000000000000
GPR12: 0000000024000084 c00000000fff0280 ffffffffffffffff 000000003ffffad8
GPR16: ffffffffffffffff 000000000072c798 0000000000000060 0000000000000000
GPR20: 0000000000642741 000000000072c858 000000003ffffaf0 0000000000000417
GPR24: 000000000072dcd0 c00000003e7ff990 0000000000000000 0000000000000001
GPR28: 0000000000000000 c000000000792340 c000000000ccec78 c000000001182338
NIP [c0000000004aa03c] .wakeup_next_waiter+0x44/0xb8
LR [c0000000004aa01c] .wakeup_next_waiter+0x24/0xb8
Call Trace:
[c00000003e8d7bd0] [c0000000004aa01c] .wakeup_next_waiter+0x24/0xb8 (unreliable)
[c00000003e8d7c60] [c0000000004a0320] .rt_spin_lock_slowunlock+0x8c/0xe4
[c00000003e8d7ce0] [c0000000004a07cc] .rt_spin_unlock+0x54/0x64
[c00000003e8d7d60] [c0000000000636bc] .__thread_do_softirq+0x130/0x174
[c00000003e8d7df0] [c00000000006379c] .run_ksoftirqd+0x9c/0x1a4
[c00000003e8d7ea0] [c000000000080b68] .kthread+0xa8/0xb4
[c00000003e8d7f90] [c00000000001c2f8] .kernel_thread+0x54/0x70
Instruction dump:
60000000 e86d01c8 38630730 4bff7061 60000000 ebbf0008 7c7c1b78 e81d0040
7fe00278 7c000074 7800d182 68000001 <0b000000> e88d01c8 387d0010 38840738

The rtmutex_common.h:75 is:

rt_mutex_top_waiter(struct rt_mutex *lock)
{
	struct rt_mutex_waiter *w;

	w = plist_first_entry(&lock->wait_list, struct rt_mutex_waiter,
			       list_entry);
	BUG_ON(w->lock != lock);

	return w;
}

Where the waiter->lock is corrupted. I saw various other random bugs
that all had to do with the softirq lock and plist. As a plist needs to
be initialized before it is used, I investigated how this lock is
initialized. It's initialized with:

void __init softirq_early_init(void)
{
	local_irq_lock_init(local_softirq_lock);
}

Where:

#define local_irq_lock_init(lvar)					\
	do {								\
		int __cpu;						\
		for_each_possible_cpu(__cpu)				\
			spin_lock_init(&per_cpu(lvar, __cpu).lock);	\
	} while (0)

As the softirq lock is a local_irq_lock, which is a per_cpu lock, the
initialization is done for all per_cpu versions of the lock. But let's
look at where softirq_early_init() is called from.

In init/main.c: start_kernel()

/*
 * Interrupts are still disabled. Do necessary setups, then
 * enable them
 */
	softirq_early_init();
	tick_init();
	boot_cpu_init();
	page_address_init();
	printk(KERN_NOTICE "%s", linux_banner);
	setup_arch(&command_line);
	mm_init_owner(&init_mm, &init_task);
	mm_init_cpumask(&init_mm);
	setup_command_line(command_line);
	setup_nr_cpu_ids();
	setup_per_cpu_areas();
	smp_prepare_boot_cpu();	/* arch-specific boot-cpu hooks */

One of the first things called is the initialization of the softirq
lock. But if you look further down, you see that the per_cpu areas have
not been set up yet. Thus initializing a local_irq_lock() before the
per_cpu section is set up may not work, as it initializes the per-CPU
locks before the per-CPU areas exist.

By moving the softirq_early_init() right after setup_per_cpu_areas(),
the kernel boots fine.
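
With the move, the relevant ordering in start_kernel() becomes (a
sketch of the diff below):

	setup_nr_cpu_ids();
	setup_per_cpu_areas();	/* per-CPU memory exists from here on */
	softirq_early_init();	/* now safe: spin_lock_init() on each CPU's lock */
	smp_prepare_boot_cpu();	/* arch-specific boot-cpu hooks */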

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Cc: Clark Williams <clark@redhat.com>
Cc: John Kacur <jkacur@redhat.com>
Cc: Carsten Emde <cbe@osadl.org>
Cc: vomlehn@texas.net
Cc: stable-rt@vger.kernel.org
Link: http://lkml.kernel.org/r/1349362924.6755.18.camel@gandalf.local.home
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 init/main.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/init/main.c b/init/main.c
index f07f2b0..cee1a91 100644
--- a/init/main.c
+++ b/init/main.c
@@ -490,7 +490,6 @@ asmlinkage void __init start_kernel(void)
  * Interrupts are still disabled. Do necessary setups, then
  * enable them
  */
-	softirq_early_init();
 	tick_init();
 	boot_cpu_init();
 	page_address_init();
@@ -501,6 +500,7 @@ asmlinkage void __init start_kernel(void)
 	setup_command_line(command_line);
 	setup_nr_cpu_ids();
 	setup_per_cpu_areas();
+	softirq_early_init();
 	smp_prepare_boot_cpu();	/* arch-specific boot-cpu hooks */
 
 	build_all_zonelists(NULL);
-- 
1.7.10.4




* [PATCH RT 3/8] mm: slab: Fix potential deadlock
From: Steven Rostedt @ 2012-10-10 13:34 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users; +Cc: Thomas Gleixner, Carsten Emde, John Kacur


From: Thomas Gleixner <tglx@linutronix.de>

 =============================================
 [ INFO: possible recursive locking detected ]
 3.6.0-rt1+ #49 Not tainted
 ---------------------------------------------
 swapper/0/1 is trying to acquire lock:
 lock_slab_on+0x72/0x77

 but task is already holding lock:
 __local_lock_irq+0x24/0x77

 other info that might help us debug this:
  Possible unsafe locking scenario:

        CPU0
        ----
   lock(&per_cpu(slab_lock, __cpu).lock);
   lock(&per_cpu(slab_lock, __cpu).lock);

  *** DEADLOCK ***

  May be due to missing lock nesting notation

 2 locks held by swapper/0/1:
 kmem_cache_create+0x33/0x89
 __local_lock_irq+0x24/0x77

 stack backtrace:
 Pid: 1, comm: swapper/0 Not tainted 3.6.0-rt1+ #49
 Call Trace:
 __lock_acquire+0x9a4/0xdc4
 ? __local_lock_irq+0x24/0x77
 ? lock_slab_on+0x72/0x77
 lock_acquire+0xc4/0x108
 ? lock_slab_on+0x72/0x77
 ? unlock_slab_on+0x5b/0x5b
 rt_spin_lock+0x36/0x3d
 ? lock_slab_on+0x72/0x77
 ? migrate_disable+0x85/0x93
 lock_slab_on+0x72/0x77
 do_ccupdate_local+0x19/0x44
 slab_on_each_cpu+0x36/0x5a
 do_tune_cpucache+0xc1/0x305
 enable_cpucache+0x8c/0xb5
 setup_cpu_cache+0x28/0x182
 __kmem_cache_create+0x34b/0x380
 ? shmem_mount+0x1a/0x1a
 kmem_cache_create+0x4a/0x89
 ? shmem_mount+0x1a/0x1a
 shmem_init+0x3e/0xd4
 kernel_init+0x11c/0x214
 kernel_thread_helper+0x4/0x10
 ? retint_restore_args+0x13/0x13
 ? start_kernel+0x3bc/0x3bc
 ? gs_change+0x13/0x13

It's not a missing annotation. It's simply wrong code and needs to be
fixed. Instead of nesting the local and the remote cpu lock, simply
acquire only the remote cpu lock, which is sufficient protection for
this procedure.
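
With the local_lock_irq_on()/local_unlock_irq_on() helpers added below,
the local and the remote case go through the same per-CPU addressing
(condensed from the diff):

	static void lock_slab_on(unsigned int cpu)
	{
		/* Correct for cpu == smp_processor_id() as well. */
		local_lock_irq_on(slab_lock, cpu);
	}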

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable-rt@vger.kernel.org
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 include/linux/locallock.h |    8 ++++++++
 mm/slab.c                 |   10 ++--------
 2 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/include/linux/locallock.h b/include/linux/locallock.h
index 8fbc393..0161fbb 100644
--- a/include/linux/locallock.h
+++ b/include/linux/locallock.h
@@ -96,6 +96,9 @@ static inline void __local_lock_irq(struct local_irq_lock *lv)
 #define local_lock_irq(lvar)						\
 	do { __local_lock_irq(&get_local_var(lvar)); } while (0)
 
+#define local_lock_irq_on(lvar, cpu)					\
+	do { __local_lock_irq(&per_cpu(lvar, cpu)); } while (0)
+
 static inline void __local_unlock_irq(struct local_irq_lock *lv)
 {
 	LL_WARN(!lv->nestcnt);
@@ -111,6 +114,11 @@ static inline void __local_unlock_irq(struct local_irq_lock *lv)
 		put_local_var(lvar);					\
 	} while (0)
 
+#define local_unlock_irq_on(lvar, cpu)					\
+	do {								\
+		__local_unlock_irq(&per_cpu(lvar, cpu));		\
+	} while (0)
+
 static inline int __local_lock_irqsave(struct local_irq_lock *lv)
 {
 	if (lv->owner != current) {
diff --git a/mm/slab.c b/mm/slab.c
index 64eb636..09addf6 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -751,18 +751,12 @@ slab_on_each_cpu(void (*func)(void *arg, int this_cpu), void *arg)
 
 static void lock_slab_on(unsigned int cpu)
 {
-	if (cpu == smp_processor_id())
-		local_lock_irq(slab_lock);
-	else
-		local_spin_lock_irq(slab_lock, &per_cpu(slab_lock, cpu).lock);
+	local_lock_irq_on(slab_lock, cpu);
 }
 
 static void unlock_slab_on(unsigned int cpu)
 {
-	if (cpu == smp_processor_id())
-		local_unlock_irq(slab_lock);
-	else
-		local_spin_unlock_irq(slab_lock, &per_cpu(slab_lock, cpu).lock);
+	local_unlock_irq_on(slab_lock, cpu);
 }
 #endif
 
-- 
1.7.10.4


* [PATCH RT 4/8] mm: page_alloc: Use local_lock_on() instead of plain spinlock
From: Steven Rostedt @ 2012-10-10 13:34 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users; +Cc: Thomas Gleixner, Carsten Emde, John Kacur


From: Thomas Gleixner <tglx@linutronix.de>

The plain spinlock, while sufficient, does not update the local_lock
internals. Use a proper local_lock function instead to ease debugging.
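
For reference, a local_irq_lock bundles debug state with the raw
spinlock -- roughly (a sketch; field names as used by the locallock.h
helpers quoted elsewhere in this series):

	struct local_irq_lock {
		spinlock_t		lock;
		struct task_struct	*owner;
		int			nestcnt;
		unsigned long		flags;
	};

Taking .lock directly leaves owner and nestcnt untouched, so the
LL_WARN() checks in the other locallock helpers cannot see that the
lock is held; going through __local_lock_irqsave() keeps that state
consistent.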

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable-rt@vger.kernel.org
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 include/linux/locallock.h |   11 +++++++++++
 mm/page_alloc.c           |    4 ++--
 2 files changed, 13 insertions(+), 2 deletions(-)

diff --git a/include/linux/locallock.h b/include/linux/locallock.h
index 0161fbb..f1804a3 100644
--- a/include/linux/locallock.h
+++ b/include/linux/locallock.h
@@ -137,6 +137,12 @@ static inline int __local_lock_irqsave(struct local_irq_lock *lv)
 		_flags = __get_cpu_var(lvar).flags;			\
 	} while (0)
 
+#define local_lock_irqsave_on(lvar, _flags, cpu)			\
+	do {								\
+		__local_lock_irqsave(&per_cpu(lvar, cpu));		\
+		_flags = per_cpu(lvar, cpu).flags;			\
+	} while (0)
+
 static inline int __local_unlock_irqrestore(struct local_irq_lock *lv,
 					    unsigned long flags)
 {
@@ -156,6 +162,11 @@ static inline int __local_unlock_irqrestore(struct local_irq_lock *lv,
 			put_local_var(lvar);				\
 	} while (0)
 
+#define local_unlock_irqrestore_on(lvar, flags, cpu)			\
+	do {								\
+		__local_unlock_irqrestore(&per_cpu(lvar, cpu), flags);	\
+	} while (0)
+
 #define local_spin_trylock_irq(lvar, lock)				\
 	({								\
 		int __locked;						\
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e097a56..9be717b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -221,9 +221,9 @@ static DEFINE_LOCAL_IRQ_LOCK(pa_lock);
 
 #ifdef CONFIG_PREEMPT_RT_BASE
 # define cpu_lock_irqsave(cpu, flags)		\
-	spin_lock_irqsave(&per_cpu(pa_lock, cpu).lock, flags)
+	local_lock_irqsave_on(pa_lock, flags, cpu)
 # define cpu_unlock_irqrestore(cpu, flags)		\
-	spin_unlock_irqrestore(&per_cpu(pa_lock, cpu).lock, flags)
+	local_unlock_irqrestore_on(pa_lock, flags, cpu)
 #else
 # define cpu_lock_irqsave(cpu, flags)		local_irq_save(flags)
 # define cpu_unlock_irqrestore(cpu, flags)	local_irq_restore(flags)
-- 
1.7.10.4


* [PATCH RT 5/8] rt: rwsem/rwlock: lockdep annotations
From: Steven Rostedt @ 2012-10-10 13:34 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users; +Cc: Thomas Gleixner, Carsten Emde, John Kacur


From: Thomas Gleixner <tglx@linutronix.de>

rwlocks and rwsems on RT do not allow multiple readers. Annotate the
lockdep acquire functions accordingly.
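
Concretely: upstream annotates readers with the shared form,
rwlock_acquire_read(). On RT a reader exclusively owns the underlying
rt_mutex, so the annotation switches to the exclusive form and is only
issued for the outermost lock/unlock, matching read_depth (sketch,
condensed from the diff below):

	void __lockfunc rt_read_lock(rwlock_t *rwlock)
	{
		struct rt_mutex *lock = &rwlock->lock;

		/* recursive read locks succeed when current owns the lock */
		if (rt_mutex_owner(lock) != current) {
			rwlock_acquire(&rwlock->dep_map, 0, 0, _RET_IP_);
			__rt_spin_lock(lock);
		}
		rwlock->read_depth++;
	}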

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable-rt@vger.kernel.org
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/rt.c |   46 +++++++++++++++++++++++++---------------------
 1 file changed, 25 insertions(+), 21 deletions(-)

diff --git a/kernel/rt.c b/kernel/rt.c
index 092d6b3..aa10504 100644
--- a/kernel/rt.c
+++ b/kernel/rt.c
@@ -216,15 +216,17 @@ int __lockfunc rt_read_trylock(rwlock_t *rwlock)
 	 * write locked.
 	 */
 	migrate_disable();
-	if (rt_mutex_owner(lock) != current)
+	if (rt_mutex_owner(lock) != current) {
 		ret = rt_mutex_trylock(lock);
-	else if (!rwlock->read_depth)
+		if (ret)
+			rwlock_acquire(&rwlock->dep_map, 0, 1, _RET_IP_);
+	} else if (!rwlock->read_depth) {
 		ret = 0;
+	}
 
-	if (ret) {
+	if (ret)
 		rwlock->read_depth++;
-		rwlock_acquire_read(&rwlock->dep_map, 0, 1, _RET_IP_);
-	} else
+	else
 		migrate_enable();
 
 	return ret;
@@ -242,13 +244,13 @@ void __lockfunc rt_read_lock(rwlock_t *rwlock)
 {
 	struct rt_mutex *lock = &rwlock->lock;
 
-	rwlock_acquire_read(&rwlock->dep_map, 0, 0, _RET_IP_);
-
 	/*
 	 * recursive read locks succeed when current owns the lock
 	 */
-	if (rt_mutex_owner(lock) != current)
+	if (rt_mutex_owner(lock) != current) {
+		rwlock_acquire(&rwlock->dep_map, 0, 0, _RET_IP_);
 		__rt_spin_lock(lock);
+	}
 	rwlock->read_depth++;
 }
 
@@ -264,11 +266,11 @@ EXPORT_SYMBOL(rt_write_unlock);
 
 void __lockfunc rt_read_unlock(rwlock_t *rwlock)
 {
-	rwlock_release(&rwlock->dep_map, 1, _RET_IP_);
-
 	/* Release the lock only when read_depth is down to 0 */
-	if (--rwlock->read_depth == 0)
+	if (--rwlock->read_depth == 0) {
+		rwlock_release(&rwlock->dep_map, 1, _RET_IP_);
 		__rt_spin_unlock(&rwlock->lock);
+	}
 }
 EXPORT_SYMBOL(rt_read_unlock);
 
@@ -315,9 +317,10 @@ EXPORT_SYMBOL(rt_up_write);
 
 void  rt_up_read(struct rw_semaphore *rwsem)
 {
-	rwsem_release(&rwsem->dep_map, 1, _RET_IP_);
-	if (--rwsem->read_depth == 0)
+	if (--rwsem->read_depth == 0) {
+		rwsem_release(&rwsem->dep_map, 1, _RET_IP_);
 		rt_mutex_unlock(&rwsem->lock);
+	}
 }
 EXPORT_SYMBOL(rt_up_read);
 
@@ -366,15 +369,16 @@ int  rt_down_read_trylock(struct rw_semaphore *rwsem)
 	 * but not when read_depth == 0 which means that the rwsem is
 	 * write locked.
 	 */
-	if (rt_mutex_owner(lock) != current)
+	if (rt_mutex_owner(lock) != current) {
 		ret = rt_mutex_trylock(&rwsem->lock);
-	else if (!rwsem->read_depth)
+		if (ret)
+			rwsem_acquire(&rwsem->dep_map, 0, 1, _RET_IP_);
+	} else if (!rwsem->read_depth) {
 		ret = 0;
+	}
 
-	if (ret) {
+	if (ret)
 		rwsem->read_depth++;
-		rwsem_acquire(&rwsem->dep_map, 0, 1, _RET_IP_);
-	}
 	return ret;
 }
 EXPORT_SYMBOL(rt_down_read_trylock);
@@ -383,10 +387,10 @@ static void __rt_down_read(struct rw_semaphore *rwsem, int subclass)
 {
 	struct rt_mutex *lock = &rwsem->lock;
 
-	rwsem_acquire_read(&rwsem->dep_map, subclass, 0, _RET_IP_);


* [PATCH RT 6/8] sched: Better debug output for might sleep
From: Steven Rostedt @ 2012-10-10 13:34 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users; +Cc: Thomas Gleixner, Carsten Emde, John Kacur


From: Thomas Gleixner <tglx@linutronix.de>

might_sleep() can tell us where interrupts have been disabled, but we
have no idea what disabled preemption. Add some debug infrastructure.
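
The idea, condensed from the diff below: latch the caller's IP when
preemption first gets disabled, then print it from the debug splats:

	void __kprobes add_preempt_count(int val)
	{
		...
		if (preempt_count() == val) {	/* 0 -> disabled transition */
			unsigned long ip = get_parent_ip(CALLER_ADDR1);
	#ifdef CONFIG_DEBUG_PREEMPT
			current->preempt_disable_ip = ip;
	#endif
			trace_preempt_off(CALLER_ADDR0, ip);
		}
	}

	/* later, from __schedule_bug() / __might_sleep(): */
	pr_err("Preemption disabled at:");
	print_ip_sym(current->preempt_disable_ip);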

Cc: stable-rt@vger.kernel.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 include/linux/sched.h |    4 ++++
 kernel/sched/core.c   |   23 +++++++++++++++++++++--
 2 files changed, 25 insertions(+), 2 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 6f342d8..f291347 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1655,6 +1655,10 @@ struct task_struct {
 	int kmap_idx;
 	pte_t kmap_pte[KM_TYPE_NR];
 #endif
+
+#ifdef CONFIG_DEBUG_PREEMPT
+	unsigned long preempt_disable_ip;
+#endif
 };
 
 #ifdef CONFIG_PREEMPT_RT_FULL
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 66d4dea..263fd96 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3215,8 +3215,13 @@ void __kprobes add_preempt_count(int val)
 	DEBUG_LOCKS_WARN_ON((preempt_count() & PREEMPT_MASK) >=
 				PREEMPT_MASK - 10);
 #endif
-	if (preempt_count() == val)
-		trace_preempt_off(CALLER_ADDR0, get_parent_ip(CALLER_ADDR1));
+	if (preempt_count() == val) {
+		unsigned long ip = get_parent_ip(CALLER_ADDR1);
+#ifdef CONFIG_DEBUG_PREEMPT
+		current->preempt_disable_ip = ip;
+#endif
+		trace_preempt_off(CALLER_ADDR0, ip);
+	}
 }
 EXPORT_SYMBOL(add_preempt_count);
 
@@ -3259,6 +3264,13 @@ static noinline void __schedule_bug(struct task_struct *prev)
 	print_modules();
 	if (irqs_disabled())
 		print_irqtrace_events(prev);
+#ifdef DEBUG_PREEMPT
+	if (in_atomic_preempt_off()) {
+		pr_err("Preemption disabled at:");
+		print_ip_sym(current->preempt_disable_ip);
+		pr_cont("\n");
+	}
+#endif
 	dump_stack();
 }
 
@@ -7479,6 +7491,13 @@ void __might_sleep(const char *file, int line, int preempt_offset)
 	debug_show_held_locks(current);
 	if (irqs_disabled())
 		print_irqtrace_events(current);
+#ifdef DEBUG_PREEMPT
+	if (!preempt_count_equals(preempt_offset)) {
+		pr_err("Preemption disabled at:");
+		print_ip_sym(current->preempt_disable_ip);
+		pr_cont("\n");
+	}
+#endif
 	dump_stack();
 }
 EXPORT_SYMBOL(__might_sleep);
-- 
1.7.10.4


* [PATCH RT 7/8] stomp_machine: Use mutex_trylock when called from inactive cpu
From: Steven Rostedt @ 2012-10-10 13:34 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users; +Cc: Thomas Gleixner, Carsten Emde, John Kacur


From: Thomas Gleixner <tglx@linutronix.de>

If the stop machinery is called from an inactive CPU we cannot use
mutex_lock(), because some other stomp machine invocation might be in
progress and the mutex can be contended. We cannot schedule from this
context, so trylock and loop.
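
The core of it, condensed from the diff below: callers pass a new
'inactive' flag and queue_stop_cpus_work() polls the mutex in that
case instead of blocking:

	if (!inactive) {
		mutex_lock(&stopper_lock);
	} else {
		/* We cannot schedule here, so busy wait. */
		while (!mutex_trylock(&stopper_lock))
			cpu_relax();
	}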

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable-rt@vger.kernel.org
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/stop_machine.c |   13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c
index 561ba3a..e98c70b 100644
--- a/kernel/stop_machine.c
+++ b/kernel/stop_machine.c
@@ -158,7 +158,7 @@ static DEFINE_PER_CPU(struct cpu_stop_work, stop_cpus_work);
 
 static void queue_stop_cpus_work(const struct cpumask *cpumask,
 				 cpu_stop_fn_t fn, void *arg,
-				 struct cpu_stop_done *done)
+				 struct cpu_stop_done *done, bool inactive)
 {
 	struct cpu_stop_work *work;
 	unsigned int cpu;
@@ -175,7 +175,12 @@ static void queue_stop_cpus_work(const struct cpumask *cpumask,
 	 * Make sure that all work is queued on all cpus before we
 	 * any of the cpus can execute it.
 	 */
-	mutex_lock(&stopper_lock);
+	if (!inactive) {
+		mutex_lock(&stopper_lock);
+	} else {
+		while (!mutex_trylock(&stopper_lock))
+			cpu_relax();
+	}
 	for_each_cpu(cpu, cpumask)
 		cpu_stop_queue_work(&per_cpu(cpu_stopper, cpu),
 				    &per_cpu(stop_cpus_work, cpu));
@@ -188,7 +193,7 @@ static int __stop_cpus(const struct cpumask *cpumask,
 	struct cpu_stop_done done;
 
 	cpu_stop_init_done(&done, cpumask_weight(cpumask));
-	queue_stop_cpus_work(cpumask, fn, arg, &done);
+	queue_stop_cpus_work(cpumask, fn, arg, &done, false);
 	wait_for_stop_done(&done);
 	return done.executed ? done.ret : -ENOENT;
 }
@@ -601,7 +606,7 @@ int stop_machine_from_inactive_cpu(int (*fn)(void *), void *data,
 	set_state(&smdata, STOPMACHINE_PREPARE);
 	cpu_stop_init_done(&done, num_active_cpus());
 	queue_stop_cpus_work(cpu_active_mask, stop_machine_cpu_stop, &smdata,
-			     &done);
+			     &done, true);
 	ret = stop_machine_cpu_stop(&smdata);
 
 	/* Busy wait for completion. */
-- 
1.7.10.4


* [PATCH RT 8/8] Linux 3.4.13-rt22-rc1
From: Steven Rostedt @ 2012-10-10 13:34 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users; +Cc: Thomas Gleixner, Carsten Emde, John Kacur


From: Steven Rostedt <srostedt@redhat.com>

---
 localversion-rt |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/localversion-rt b/localversion-rt
index 6c6cde1..e0a34f0 100644
--- a/localversion-rt
+++ b/localversion-rt
@@ -1 +1 @@
--rt21
+-rt22-rc1
-- 
1.7.10.4


