linux-rt-users.vger.kernel.org archive mirror
* [PATCH RT 00/11] Linux 3.12.61-rt82-rc1
@ 2016-07-12 16:49 Steven Rostedt
  2016-07-12 16:49 ` [PATCH RT 01/11] kvm, rt: change async pagefault code locking for PREEMPT_RT Steven Rostedt
                   ` (11 more replies)
  0 siblings, 12 replies; 13+ messages in thread
From: Steven Rostedt @ 2016-07-12 16:49 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker


Dear RT Folks,

This is the RT stable review cycle of patch 3.12.61-rt82-rc1.

Please scream at me if I messed something up. Please test the patches too.

The -rc release will be uploaded to kernel.org and will be deleted when
the final release is out. This is just a review release (or release candidate).

The pre-releases will not be pushed to the git repository; only the
final release will be.

If all goes well, this patch will be converted to the next main release
on 7/14/2016.

Enjoy,

-- Steve


To build 3.12.61-rt82-rc1 directly, the following patches should be applied:

  http://www.kernel.org/pub/linux/kernel/v3.x/linux-3.12.tar.xz

  http://www.kernel.org/pub/linux/kernel/v3.x/patch-3.12.61.xz

  http://www.kernel.org/pub/linux/kernel/projects/rt/3.12/patch-3.12.61-rt82-rc1.patch.xz

You can also build from 3.12.61-rt81 by applying the incremental patch:

http://www.kernel.org/pub/linux/kernel/projects/rt/3.12/incr/patch-3.12.61-rt81-rt82-rc1.patch.xz


Changes from 3.12.61-rt81:

---


Corey Minyard (1):
      x86: Fix an RT MCE crash

Josh Cartwright (1):
      list_bl: fixup bogus lockdep warning

Luiz Capitulino (1):
      mm: perform lru_add_drain_all() remotely

Rik van Riel (1):
      kvm, rt: change async pagefault code locking for PREEMPT_RT

Sebastian Andrzej Siewior (6):
      net: dev: always take qdisc's busylock in __dev_xmit_skb()
      ARM: imx: always use TWD on IMX6Q
      kernel/printk: Don't try to print from IRQ/NMI region
      arm: lazy preempt: correct resched condition
      locallock: add local_lock_on()
      trace: correct off by one while recording the trace-event

Steven Rostedt (Red Hat) (1):
      Linux 3.12.61-rt82-rc1

----
 arch/arm/kernel/entry-armv.S     |  6 +++++-
 arch/arm/mach-imx/Kconfig        |  2 +-
 arch/x86/kernel/cpu/mcheck/mce.c |  3 ++-
 arch/x86/kernel/kvm.c            | 37 +++++++++++++++++++------------------
 include/linux/list_bl.h          | 12 +++++++-----
 include/linux/locallock.h        |  6 ++++++
 include/trace/ftrace.h           |  3 +++
 kernel/printk/printk.c           | 10 ++++++++++
 localversion-rt                  |  2 +-
 mm/swap.c                        | 37 ++++++++++++++++++++++++++++++-------
 net/core/dev.c                   |  4 ++++
 11 files changed, 88 insertions(+), 34 deletions(-)

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH RT 01/11] kvm, rt: change async pagefault code locking for PREEMPT_RT
  2016-07-12 16:49 [PATCH RT 00/11] Linux 3.12.61-rt82-rc1 Steven Rostedt
@ 2016-07-12 16:49 ` Steven Rostedt
  2016-07-12 16:49 ` [PATCH RT 02/11] net: dev: always take qdiscs busylock in __dev_xmit_skb() Steven Rostedt
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Steven Rostedt @ 2016-07-12 16:49 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Rik van Riel, Paolo Bonzini

[-- Attachment #1: 0001-kvm-rt-change-async-pagefault-code-locking-for-PREEM.patch --]
[-- Type: text/plain, Size: 4519 bytes --]

3.12.61-rt82-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Rik van Riel <riel@redhat.com>

The async pagefault wake code can run from the idle task in exception
context, so everything here needs to be made non-preemptible.

Conversion to a simple wait queue and raw spinlock does the trick.

Signed-off-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 arch/x86/kernel/kvm.c | 37 +++++++++++++++++++------------------
 1 file changed, 19 insertions(+), 18 deletions(-)

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index e72593338df6..7e640682699d 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -35,6 +35,7 @@
 #include <linux/slab.h>
 #include <linux/kprobes.h>
 #include <linux/debugfs.h>
+#include <linux/wait-simple.h>
 #include <asm/timer.h>
 #include <asm/cpu.h>
 #include <asm/traps.h>
@@ -90,14 +91,14 @@ static void kvm_io_delay(void)
 
 struct kvm_task_sleep_node {
 	struct hlist_node link;
-	wait_queue_head_t wq;
+	struct swait_head wq;
 	u32 token;
 	int cpu;
 	bool halted;
 };
 
 static struct kvm_task_sleep_head {
-	spinlock_t lock;
+	raw_spinlock_t lock;
 	struct hlist_head list;
 } async_pf_sleepers[KVM_TASK_SLEEP_HASHSIZE];
 
@@ -121,17 +122,17 @@ void kvm_async_pf_task_wait(u32 token)
 	u32 key = hash_32(token, KVM_TASK_SLEEP_HASHBITS);
 	struct kvm_task_sleep_head *b = &async_pf_sleepers[key];
 	struct kvm_task_sleep_node n, *e;
-	DEFINE_WAIT(wait);
+	DEFINE_SWAITER(wait);
 
 	rcu_irq_enter();
 
-	spin_lock(&b->lock);
+	raw_spin_lock(&b->lock);
 	e = _find_apf_task(b, token);
 	if (e) {
 		/* dummy entry exist -> wake up was delivered ahead of PF */
 		hlist_del(&e->link);
 		kfree(e);
-		spin_unlock(&b->lock);
+		raw_spin_unlock(&b->lock);
 
 		rcu_irq_exit();
 		return;
@@ -140,13 +141,13 @@ void kvm_async_pf_task_wait(u32 token)
 	n.token = token;
 	n.cpu = smp_processor_id();
 	n.halted = is_idle_task(current) || preempt_count() > 1;
-	init_waitqueue_head(&n.wq);
+	init_swait_head(&n.wq);
 	hlist_add_head(&n.link, &b->list);
-	spin_unlock(&b->lock);
+	raw_spin_unlock(&b->lock);
 
 	for (;;) {
 		if (!n.halted)
-			prepare_to_wait(&n.wq, &wait, TASK_UNINTERRUPTIBLE);
+			swait_prepare(&n.wq, &wait, TASK_UNINTERRUPTIBLE);
 		if (hlist_unhashed(&n.link))
 			break;
 
@@ -165,7 +166,7 @@ void kvm_async_pf_task_wait(u32 token)
 		}
 	}
 	if (!n.halted)
-		finish_wait(&n.wq, &wait);
+		swait_finish(&n.wq, &wait);
 
 	rcu_irq_exit();
 	return;
@@ -177,8 +178,8 @@ static void apf_task_wake_one(struct kvm_task_sleep_node *n)
 	hlist_del_init(&n->link);
 	if (n->halted)
 		smp_send_reschedule(n->cpu);
-	else if (waitqueue_active(&n->wq))
-		wake_up(&n->wq);
+	else if (swaitqueue_active(&n->wq))
+		swait_wake(&n->wq);
 }
 
 static void apf_task_wake_all(void)
@@ -188,14 +189,14 @@ static void apf_task_wake_all(void)
 	for (i = 0; i < KVM_TASK_SLEEP_HASHSIZE; i++) {
 		struct hlist_node *p, *next;
 		struct kvm_task_sleep_head *b = &async_pf_sleepers[i];
-		spin_lock(&b->lock);
+		raw_spin_lock(&b->lock);
 		hlist_for_each_safe(p, next, &b->list) {
 			struct kvm_task_sleep_node *n =
 				hlist_entry(p, typeof(*n), link);
 			if (n->cpu == smp_processor_id())
 				apf_task_wake_one(n);
 		}
-		spin_unlock(&b->lock);
+		raw_spin_unlock(&b->lock);
 	}
 }
 
@@ -211,7 +212,7 @@ void kvm_async_pf_task_wake(u32 token)
 	}
 
 again:
-	spin_lock(&b->lock);
+	raw_spin_lock(&b->lock);
 	n = _find_apf_task(b, token);
 	if (!n) {
 		/*
@@ -224,17 +225,17 @@ again:
 			 * Allocation failed! Busy wait while other cpu
 			 * handles async PF.
 			 */
-			spin_unlock(&b->lock);
+			raw_spin_unlock(&b->lock);
 			cpu_relax();
 			goto again;
 		}
 		n->token = token;
 		n->cpu = smp_processor_id();
-		init_waitqueue_head(&n->wq);
+		init_swait_head(&n->wq);
 		hlist_add_head(&n->link, &b->list);
 	} else
 		apf_task_wake_one(n);
-	spin_unlock(&b->lock);
+	raw_spin_unlock(&b->lock);
 	return;
 }
 EXPORT_SYMBOL_GPL(kvm_async_pf_task_wake);
@@ -484,7 +485,7 @@ void __init kvm_guest_init(void)
 	paravirt_ops_setup();
 	register_reboot_notifier(&kvm_pv_reboot_nb);
 	for (i = 0; i < KVM_TASK_SLEEP_HASHSIZE; i++)
-		spin_lock_init(&async_pf_sleepers[i].lock);
+		raw_spin_lock_init(&async_pf_sleepers[i].lock);
 	if (kvm_para_has_feature(KVM_FEATURE_ASYNC_PF))
 		x86_init.irqs.trap_init = kvm_apf_trap_init;
 
-- 
2.8.1

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH RT 02/11] net: dev: always take qdiscs busylock in __dev_xmit_skb()
  2016-07-12 16:49 [PATCH RT 00/11] Linux 3.12.61-rt82-rc1 Steven Rostedt
  2016-07-12 16:49 ` [PATCH RT 01/11] kvm, rt: change async pagefault code locking for PREEMPT_RT Steven Rostedt
@ 2016-07-12 16:49 ` Steven Rostedt
  2016-07-12 16:49 ` [PATCH RT 03/11] list_bl: fixup bogus lockdep warning Steven Rostedt
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Steven Rostedt @ 2016-07-12 16:49 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker

[-- Attachment #1: 0002-net-dev-always-take-qdisc-s-busylock-in-__dev_xmit_s.patch --]
[-- Type: text/plain, Size: 1421 bytes --]

3.12.61-rt82-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

The root lock is dropped after the __QDISC___STATE_RUNNING bit is set and
before dev_hard_start_xmit() is invoked. If this task is now preempted by a
task with a higher priority, the higher-priority task won't be able to
submit packets to the NIC directly; instead they will be enqueued into the
qdisc. The NIC will remain idle until the higher-priority task(s) leave the
CPU and the lower-priority task gets back and finishes the job.

If we always take the busylock, we ensure that the RT task can boost the
low-priority task and submit the packet.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 net/core/dev.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/net/core/dev.c b/net/core/dev.c
index ee08da0fe0d7..c34af0bf3c0e 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -2713,7 +2713,11 @@ static inline int __dev_xmit_skb(struct sk_buff *skb, struct Qdisc *q,
 	 * This permits __QDISC_STATE_RUNNING owner to get the lock more often
 	 * and dequeue packets faster.
 	 */
+#ifdef CONFIG_PREEMPT_RT_FULL
+	contended = true;
+#else
 	contended = qdisc_is_running(q);
+#endif
 	if (unlikely(contended))
 		spin_lock(&q->busylock);
 
-- 
2.8.1



^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH RT 03/11] list_bl: fixup bogus lockdep warning
  2016-07-12 16:49 [PATCH RT 00/11] Linux 3.12.61-rt82-rc1 Steven Rostedt
  2016-07-12 16:49 ` [PATCH RT 01/11] kvm, rt: change async pagefault code locking for PREEMPT_RT Steven Rostedt
  2016-07-12 16:49 ` [PATCH RT 02/11] net: dev: always take qdiscs busylock in __dev_xmit_skb() Steven Rostedt
@ 2016-07-12 16:49 ` Steven Rostedt
  2016-07-12 16:49 ` [PATCH RT 04/11] ARM: imx: always use TWD on IMX6Q Steven Rostedt
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Steven Rostedt @ 2016-07-12 16:49 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Luis Claudio R. Goncalves,
	Josh Cartwright

[-- Attachment #1: 0003-list_bl-fixup-bogus-lockdep-warning.patch --]
[-- Type: text/plain, Size: 3030 bytes --]

3.12.61-rt82-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Josh Cartwright <joshc@ni.com>

At first glance, the use of 'static inline' seems appropriate for
INIT_HLIST_BL_HEAD().

However, when a 'static inline' function invocation is inlined by gcc,
all callers share any static local data declared within that inline
function.

This presents a problem for how lockdep classes are set up.
raw_spinlocks, for example, are initialized like this when
CONFIG_DEBUG_SPINLOCK is enabled:

	# define raw_spin_lock_init(lock)				\
	do {								\
		static struct lock_class_key __key;			\
									\
		__raw_spin_lock_init((lock), #lock, &__key);		\
	} while (0)

When this macro is expanded into a 'static inline' caller, like
INIT_HLIST_BL_HEAD():

	static inline void INIT_HLIST_BL_HEAD(struct hlist_bl_head *h)
	{
		h->first = NULL;
		raw_spin_lock_init(&h->lock);
	}

...the static local lock_class_key object becomes a static local of the
inline function itself.

For compilation units which invoke INIT_HLIST_BL_HEAD() more than once,
all of the invocations therefore share this same static local object.

This can lead to some very confusing lockdep splats (example below).
Solve this problem by forcing the INIT_HLIST_BL_HEAD() to be a macro,
which prevents the lockdep class object sharing.

 =============================================
 [ INFO: possible recursive locking detected ]
 4.4.4-rt11 #4 Not tainted
 ---------------------------------------------
 kswapd0/59 is trying to acquire lock:
  (&h->lock#2){+.+.-.}, at: mb_cache_shrink_scan

 but task is already holding lock:
  (&h->lock#2){+.+.-.}, at:  mb_cache_shrink_scan

 other info that might help us debug this:
  Possible unsafe locking scenario:

        CPU0
        ----
   lock(&h->lock#2);
   lock(&h->lock#2);

  *** DEADLOCK ***

  May be due to missing lock nesting notation

 2 locks held by kswapd0/59:
  #0:  (shrinker_rwsem){+.+...}, at: rt_down_read_trylock
  #1:  (&h->lock#2){+.+.-.}, at: mb_cache_shrink_scan

Reported-by: Luis Claudio R. Goncalves <lclaudio@uudg.org>
Tested-by: Luis Claudio R. Goncalves <lclaudio@uudg.org>
Signed-off-by: Josh Cartwright <joshc@ni.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 include/linux/list_bl.h | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/include/linux/list_bl.h b/include/linux/list_bl.h
index d8876a0cf036..017d0f1c1eb4 100644
--- a/include/linux/list_bl.h
+++ b/include/linux/list_bl.h
@@ -42,13 +42,15 @@ struct hlist_bl_node {
 	struct hlist_bl_node *next, **pprev;
 };
 
-static inline void INIT_HLIST_BL_HEAD(struct hlist_bl_head *h)
-{
-	h->first = NULL;
 #ifdef CONFIG_PREEMPT_RT_BASE
-	raw_spin_lock_init(&h->lock);
+#define INIT_HLIST_BL_HEAD(h)		\
+do {					\
+	(h)->first = NULL;		\
+	raw_spin_lock_init(&(h)->lock);	\
+} while (0)
+#else
+#define INIT_HLIST_BL_HEAD(h) (h)->first = NULL
 #endif
-}
 
 static inline void INIT_HLIST_BL_NODE(struct hlist_bl_node *h)
 {
-- 
2.8.1

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH RT 04/11] ARM: imx: always use TWD on IMX6Q
  2016-07-12 16:49 [PATCH RT 00/11] Linux 3.12.61-rt82-rc1 Steven Rostedt
                   ` (2 preceding siblings ...)
  2016-07-12 16:49 ` [PATCH RT 03/11] list_bl: fixup bogus lockdep warning Steven Rostedt
@ 2016-07-12 16:49 ` Steven Rostedt
  2016-07-12 16:49 ` [PATCH RT 05/11] kernel/printk: Dont try to print from IRQ/NMI region Steven Rostedt
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Steven Rostedt @ 2016-07-12 16:49 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker

[-- Attachment #1: 0004-ARM-imx-always-use-TWD-on-IMX6Q.patch --]
[-- Type: text/plain, Size: 1093 bytes --]

3.12.61-rt82-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

There is no reason to limit the TWD to SMP kernels if the hardware has it
available.
On a Wandboard i.MX6SOLO running PREEMPT-RT and cyclictest, the maximum
latency I see immediately after start, in idle, is:
UP : ~90us
SMP: ~50us
UP + TWD: ~20us
Based on these numbers I prefer the TWD over the slightly slower MXC
timer.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 arch/arm/mach-imx/Kconfig | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm/mach-imx/Kconfig b/arch/arm/mach-imx/Kconfig
index 29a8af6922a8..79671485618a 100644
--- a/arch/arm/mach-imx/Kconfig
+++ b/arch/arm/mach-imx/Kconfig
@@ -794,7 +794,7 @@ config SOC_IMX6Q
 	select COMMON_CLK
 	select CPU_V7
 	select HAVE_ARM_SCU if SMP
-	select HAVE_ARM_TWD if SMP
+	select HAVE_ARM_TWD
 	select HAVE_IMX_ANATOP
 	select HAVE_IMX_GPC
 	select HAVE_IMX_MMDC
-- 
2.8.1

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH RT 05/11] kernel/printk: Dont try to print from IRQ/NMI region
  2016-07-12 16:49 [PATCH RT 00/11] Linux 3.12.61-rt82-rc1 Steven Rostedt
                   ` (3 preceding siblings ...)
  2016-07-12 16:49 ` [PATCH RT 04/11] ARM: imx: always use TWD on IMX6Q Steven Rostedt
@ 2016-07-12 16:49 ` Steven Rostedt
  2016-07-12 16:49 ` [PATCH RT 06/11] arm: lazy preempt: correct resched condition Steven Rostedt
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Steven Rostedt @ 2016-07-12 16:49 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker

[-- Attachment #1: 0005-kernel-printk-Don-t-try-to-print-from-IRQ-NMI-region.patch --]
[-- Type: text/plain, Size: 1409 bytes --]

3.12.61-rt82-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

On -RT we try to acquire sleeping locks, which might lead to warnings
from lockdep or a warn_on() from spin_try_lock() (which is an rtmutex
on RT).
In general we don't print from an IRQ-off region, so we should not
attempt it via console_unblank() / bust_spinlocks() either.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/printk/printk.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
index 9a751f6c471e..7283909a9943 100644
--- a/kernel/printk/printk.c
+++ b/kernel/printk/printk.c
@@ -1293,6 +1293,11 @@ static void call_console_drivers(int level, const char *text, size_t len)
 	if (!console_drivers)
 		return;
 
+	if (IS_ENABLED(CONFIG_PREEMPT_RT_BASE)) {
+		if (in_irq() || in_nmi())
+			return;
+	}
+
 	migrate_disable();
 	for_each_console(con) {
 		if (exclusive_console && con != exclusive_console)
@@ -2215,6 +2220,11 @@ void console_unblank(void)
 {
 	struct console *c;
 
+	if (IS_ENABLED(CONFIG_PREEMPT_RT_BASE)) {
+		if (in_irq() || in_nmi())
+			return;
+	}
+
 	/*
 	 * console_unblank can no longer be called in interrupt context unless
 	 * oops_in_progress is set to 1..
-- 
2.8.1

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH RT 06/11] arm: lazy preempt: correct resched condition
  2016-07-12 16:49 [PATCH RT 00/11] Linux 3.12.61-rt82-rc1 Steven Rostedt
                   ` (4 preceding siblings ...)
  2016-07-12 16:49 ` [PATCH RT 05/11] kernel/printk: Dont try to print from IRQ/NMI region Steven Rostedt
@ 2016-07-12 16:49 ` Steven Rostedt
  2016-07-12 16:49 ` [PATCH RT 07/11] locallock: add local_lock_on() Steven Rostedt
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Steven Rostedt @ 2016-07-12 16:49 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, stable-rt

[-- Attachment #1: 0006-arm-lazy-preempt-correct-resched-condition.patch --]
[-- Type: text/plain, Size: 1191 bytes --]

3.12.61-rt82-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

When we come out of preempt_schedule_irq() we check for NEED_RESCHED
and, if it is set, call that function again, because the preemption
counter has to be zero at this point.
However, the lazy-preempt counter might not be zero; therefore we have
to check the counter before looking at the need_resched_lazy flag.

Cc: stable-rt@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 arch/arm/kernel/entry-armv.S | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S
index 8c5e809c1f07..96eb4d26a5c1 100644
--- a/arch/arm/kernel/entry-armv.S
+++ b/arch/arm/kernel/entry-armv.S
@@ -234,7 +234,11 @@ svc_preempt:
 	bne	1b
 	tst	r0, #_TIF_NEED_RESCHED_LAZY
 	moveq	pc, r8				@ go again
-	b	1b
+	ldr	r0, [tsk, #TI_PREEMPT_LAZY]	@ get preempt lazy count
+	teq	r0, #0				@ if preempt lazy count != 0
+	beq	1b
+	mov	pc, r8				@ go again
+
 #endif
 
 __und_fault:
-- 
2.8.1

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH RT 07/11] locallock: add local_lock_on()
  2016-07-12 16:49 [PATCH RT 00/11] Linux 3.12.61-rt82-rc1 Steven Rostedt
                   ` (5 preceding siblings ...)
  2016-07-12 16:49 ` [PATCH RT 06/11] arm: lazy preempt: correct resched condition Steven Rostedt
@ 2016-07-12 16:49 ` Steven Rostedt
  2016-07-12 16:49 ` [PATCH RT 08/11] mm: perform lru_add_drain_all() remotely Steven Rostedt
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Steven Rostedt @ 2016-07-12 16:49 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker

[-- Attachment #1: 0007-locallock-add-local_lock_on.patch --]
[-- Type: text/plain, Size: 1283 bytes --]

3.12.61-rt82-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 include/linux/locallock.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/include/linux/locallock.h b/include/linux/locallock.h
index 21653e9bfa20..015271ff8ec8 100644
--- a/include/linux/locallock.h
+++ b/include/linux/locallock.h
@@ -66,6 +66,9 @@ static inline void __local_lock(struct local_irq_lock *lv)
 #define local_lock(lvar)					\
 	do { __local_lock(&get_local_var(lvar)); } while (0)
 
+#define local_lock_on(lvar, cpu)				\
+	do { __local_lock(&per_cpu(lvar, cpu)); } while (0)
+
 static inline int __local_trylock(struct local_irq_lock *lv)
 {
 	if (lv->owner != current && spin_trylock_local(&lv->lock)) {
@@ -104,6 +107,9 @@ static inline void __local_unlock(struct local_irq_lock *lv)
 		put_local_var(lvar);				\
 	} while (0)
 
+#define local_unlock_on(lvar, cpu)                       \
+	do { __local_unlock(&per_cpu(lvar, cpu)); } while (0)
+
 static inline void __local_lock_irq(struct local_irq_lock *lv)
 {
 	spin_lock_irqsave(&lv->lock, lv->flags);
-- 
2.8.1

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH RT 08/11] mm: perform lru_add_drain_all() remotely
  2016-07-12 16:49 [PATCH RT 00/11] Linux 3.12.61-rt82-rc1 Steven Rostedt
                   ` (6 preceding siblings ...)
  2016-07-12 16:49 ` [PATCH RT 07/11] locallock: add local_lock_on() Steven Rostedt
@ 2016-07-12 16:49 ` Steven Rostedt
  2016-07-12 16:49 ` [PATCH RT 09/11] trace: correct off by one while recording the trace-event Steven Rostedt
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Steven Rostedt @ 2016-07-12 16:49 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Rik van Riel, Luiz Capitulino

[-- Attachment #1: 0008-mm-perform-lru_add_drain_all-remotely.patch --]
[-- Type: text/plain, Size: 3192 bytes --]

3.12.61-rt82-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Luiz Capitulino <lcapitulino@redhat.com>

lru_add_drain_all() works by scheduling lru_add_drain_cpu() to run
on all CPUs that have non-empty LRU pagevecs and then waiting for
the scheduled work to complete. However, workqueue threads may never
have the chance to run on a CPU that's running a SCHED_FIFO task.
This causes lru_add_drain_all() to block forever.

This commit solves this problem by changing lru_add_drain_all()
to drain the LRU pagevecs of remote CPUs. This is done by grabbing
swapvec_lock and calling lru_add_drain_cpu().

PS: This is based on an idea and initial implementation by
    Rik van Riel.

Signed-off-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 mm/swap.c | 37 ++++++++++++++++++++++++++++++-------
 1 file changed, 30 insertions(+), 7 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index 8ab73ba62a68..05e75d61c707 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -718,9 +718,15 @@ void lru_add_drain_cpu(int cpu)
 		unsigned long flags;
 
 		/* No harm done if a racing interrupt already did this */
+#ifdef CONFIG_PREEMPT_RT_BASE
+		local_lock_irqsave_on(rotate_lock, flags, cpu);
+		pagevec_move_tail(pvec);
+		local_unlock_irqrestore_on(rotate_lock, flags, cpu);
+#else
 		local_lock_irqsave(rotate_lock, flags);
 		pagevec_move_tail(pvec);
 		local_unlock_irqrestore(rotate_lock, flags);
+#endif
 	}
 
 	pvec = &per_cpu(lru_deactivate_pvecs, cpu);
@@ -763,12 +769,32 @@ void lru_add_drain(void)
 	local_unlock_cpu(swapvec_lock);
 }
 
+
+#ifdef CONFIG_PREEMPT_RT_BASE
+static inline void remote_lru_add_drain(int cpu, struct cpumask *has_work)
+{
+	local_lock_on(swapvec_lock, cpu);
+	lru_add_drain_cpu(cpu);
+	local_unlock_on(swapvec_lock, cpu);
+}
+
+#else
+
 static void lru_add_drain_per_cpu(struct work_struct *dummy)
 {
 	lru_add_drain();
 }
 
 static DEFINE_PER_CPU(struct work_struct, lru_add_drain_work);
+static inline void remote_lru_add_drain(int cpu, struct cpumask *has_work)
+{
+	struct work_struct *work = &per_cpu(lru_add_drain_work, cpu);
+
+	INIT_WORK(work, lru_add_drain_per_cpu);
+	schedule_work_on(cpu, work);
+	cpumask_set_cpu(cpu, has_work);
+}
+#endif
 
 void lru_add_drain_all(void)
 {
@@ -781,20 +807,17 @@ void lru_add_drain_all(void)
 	cpumask_clear(&has_work);
 
 	for_each_online_cpu(cpu) {
-		struct work_struct *work = &per_cpu(lru_add_drain_work, cpu);
-
 		if (pagevec_count(&per_cpu(lru_add_pvec, cpu)) ||
 		    pagevec_count(&per_cpu(lru_rotate_pvecs, cpu)) ||
 		    pagevec_count(&per_cpu(lru_deactivate_pvecs, cpu)) ||
-		    need_activate_page_drain(cpu)) {
-			INIT_WORK(work, lru_add_drain_per_cpu);
-			schedule_work_on(cpu, work);
-			cpumask_set_cpu(cpu, &has_work);
-		}
+		    need_activate_page_drain(cpu))
+			remote_lru_add_drain(cpu, &has_work);
 	}
 
+#ifndef CONFIG_PREEMPT_RT_BASE
 	for_each_cpu(cpu, &has_work)
 		flush_work(&per_cpu(lru_add_drain_work, cpu));
+#endif
 
 	put_online_cpus();
 	mutex_unlock(&lock);
-- 
2.8.1

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH RT 09/11] trace: correct off by one while recording the trace-event
  2016-07-12 16:49 [PATCH RT 00/11] Linux 3.12.61-rt82-rc1 Steven Rostedt
                   ` (7 preceding siblings ...)
  2016-07-12 16:49 ` [PATCH RT 08/11] mm: perform lru_add_drain_all() remotely Steven Rostedt
@ 2016-07-12 16:49 ` Steven Rostedt
  2016-07-12 16:50 ` [PATCH RT 10/11] x86: Fix an RT MCE crash Steven Rostedt
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Steven Rostedt @ 2016-07-12 16:49 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, stable-rt

[-- Attachment #1: 0009-trace-correct-off-by-one-while-recording-the-trace-e.patch --]
[-- Type: text/plain, Size: 1205 bytes --]

3.12.61-rt82-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

Trace events like raw_syscalls always show a preempt count of one. The
reason is that on PREEMPT kernels rcu_read_lock_sched_notrace()
increases the preemption counter, and the function recording the
counter is called within the RCU section.

Cc: stable-rt@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
[ Changed this to upstream version. See commit e947841c0dce ]
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 include/trace/ftrace.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/include/trace/ftrace.h b/include/trace/ftrace.h
index 645d749d3c9c..1c74dcd4c76e 100644
--- a/include/trace/ftrace.h
+++ b/include/trace/ftrace.h
@@ -536,6 +536,9 @@ ftrace_raw_event_##call(void *__data, proto)				\
 									\
 	local_save_flags(irq_flags);					\
 	pc = preempt_count();						\
+	/* Account for tracepoint preempt disable */			\
+	if (IS_ENABLED(CONFIG_PREEMPT))					\
+		pc--;							\
 									\
 	__data_size = ftrace_get_offsets_##call(&__data_offsets, args); \
 									\
-- 
2.8.1

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH RT 10/11] x86: Fix an RT MCE crash
  2016-07-12 16:49 [PATCH RT 00/11] Linux 3.12.61-rt82-rc1 Steven Rostedt
                   ` (8 preceding siblings ...)
  2016-07-12 16:49 ` [PATCH RT 09/11] trace: correct off by one while recording the trace-event Steven Rostedt
@ 2016-07-12 16:50 ` Steven Rostedt
  2016-07-12 16:50 ` [PATCH RT 11/11] Linux 3.12.61-rt82-rc1 Steven Rostedt
  2016-07-12 23:23 ` Linux 3.12.61-rt82-rc2 Steven Rostedt
  11 siblings, 0 replies; 13+ messages in thread
From: Steven Rostedt @ 2016-07-12 16:50 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker, Borislav Petkov, Corey Minyard

[-- Attachment #1: 0010-x86-Fix-an-RT-MCE-crash.patch --]
[-- Type: text/plain, Size: 1272 bytes --]

3.12.61-rt82-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Corey Minyard <cminyard@mvista.com>

On some x86 systems an MCE interrupt would come in before the kernel
was ready for it.  Looking at the latest RT code, it has similar
(but not quite the same) code, except it adds a bool that tells whether
MCE handling is initialized.  That was required because they had
switched to using swork instead of a kernel thread.  Here, just
checking whether the thread is NULL is good enough to tell if
MCE handling is initialized.

Suggested-by: Borislav Petkov <bp@alien8.de>
Signed-off-by: Corey Minyard <cminyard@mvista.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 arch/x86/kernel/cpu/mcheck/mce.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/mcheck/mce.c b/arch/x86/kernel/cpu/mcheck/mce.c
index 3a7ab0b08cdf..9901b77ed819 100644
--- a/arch/x86/kernel/cpu/mcheck/mce.c
+++ b/arch/x86/kernel/cpu/mcheck/mce.c
@@ -1403,7 +1403,8 @@ static int mce_notify_work_init(void)
 
 static void mce_notify_work(void)
 {
-	wake_up_process(mce_notify_helper);
+	if (mce_notify_helper)
+		wake_up_process(mce_notify_helper);
 }
 #else
 static void mce_notify_work(void)
-- 
2.8.1

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH RT 11/11] Linux 3.12.61-rt82-rc1
  2016-07-12 16:49 [PATCH RT 00/11] Linux 3.12.61-rt82-rc1 Steven Rostedt
                   ` (9 preceding siblings ...)
  2016-07-12 16:50 ` [PATCH RT 10/11] x86: Fix an RT MCE crash Steven Rostedt
@ 2016-07-12 16:50 ` Steven Rostedt
  2016-07-12 23:23 ` Linux 3.12.61-rt82-rc2 Steven Rostedt
  11 siblings, 0 replies; 13+ messages in thread
From: Steven Rostedt @ 2016-07-12 16:50 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker

[-- Attachment #1: 0011-Linux-3.12.61-rt82-rc1.patch --]
[-- Type: text/plain, Size: 412 bytes --]

3.12.61-rt82-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

---
 localversion-rt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/localversion-rt b/localversion-rt
index 8269ec129c0c..ef83083bad25 100644
--- a/localversion-rt
+++ b/localversion-rt
@@ -1 +1 @@
--rt81
+-rt82-rc1
-- 
2.8.1


* Linux 3.12.61-rt82-rc2
  2016-07-12 16:49 [PATCH RT 00/11] Linux 3.12.61-rt82-rc1 Steven Rostedt
                   ` (10 preceding siblings ...)
  2016-07-12 16:50 ` [PATCH RT 11/11] Linux 3.12.61-rt82-rc1 Steven Rostedt
@ 2016-07-12 23:23 ` Steven Rostedt
  11 siblings, 0 replies; 13+ messages in thread
From: Steven Rostedt @ 2016-07-12 23:23 UTC (permalink / raw)
  To: linux-kernel, linux-rt-users
  Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
	John Kacur, Paul Gortmaker


Dear RT Folks,

This is the RT stable review cycle of patch 3.12.61-rt82-rc2.

Please scream at me if I messed something up. Please test the patches too.

The -rc release will be uploaded to kernel.org and will be deleted when
the final release is out. This is just a review release (or release candidate).

The pre-releases will not be pushed to the git repository; only the
final release will be.

If all goes well, this patch will be converted to the next main release
on 7/14/2016.

The only difference from v1 is the removal of "ARM: imx: always use
TWD on IMX6Q".

Enjoy,

-- Steve


To build 3.12.61-rt82-rc2 directly, the following patches should be applied:

  http://www.kernel.org/pub/linux/kernel/v3.x/linux-3.12.tar.xz

  http://www.kernel.org/pub/linux/kernel/v3.x/patch-3.12.61.xz

  http://www.kernel.org/pub/linux/kernel/projects/rt/3.12/patch-3.12.61-rt82-rc2.patch.xz

You can also build from 3.12.61-rt81 by applying the incremental patch:

http://www.kernel.org/pub/linux/kernel/projects/rt/3.12/incr/patch-3.12.61-rt81-rt82-rc2.patch.xz


Changes from 3.12.61-rt81:

---


Corey Minyard (1):
      x86: Fix an RT MCE crash

Josh Cartwright (1):
      list_bl: fixup bogus lockdep warning

Luiz Capitulino (1):
      mm: perform lru_add_drain_all() remotely

Rik van Riel (1):
      kvm, rt: change async pagefault code locking for PREEMPT_RT

Sebastian Andrzej Siewior (5):
      net: dev: always take qdisc's busylock in __dev_xmit_skb()
      kernel/printk: Don't try to print from IRQ/NMI region
      arm: lazy preempt: correct resched condition
      locallock: add local_lock_on()
      trace: correct off by one while recording the trace-event

Steven Rostedt (Red Hat) (1):
      Linux 3.12.61-rt82-rc2

----
 arch/arm/kernel/entry-armv.S     |  6 +++++-
 arch/x86/kernel/cpu/mcheck/mce.c |  3 ++-
 arch/x86/kernel/kvm.c            | 37 +++++++++++++++++++------------------
 include/linux/list_bl.h          | 12 +++++++-----
 include/linux/locallock.h        |  6 ++++++
 include/trace/ftrace.h           |  3 +++
 kernel/printk/printk.c           | 10 ++++++++++
 localversion-rt                  |  2 +-
 mm/swap.c                        | 37 ++++++++++++++++++++++++++++++-------
 net/core/dev.c                   |  4 ++++
 10 files changed, 87 insertions(+), 33 deletions(-)


end of thread, other threads:[~2016-07-12 23:23 UTC | newest]

Thread overview: 13+ messages
2016-07-12 16:49 [PATCH RT 00/11] Linux 3.12.61-rt82-rc1 Steven Rostedt
2016-07-12 16:49 ` [PATCH RT 01/11] kvm, rt: change async pagefault code locking for PREEMPT_RT Steven Rostedt
2016-07-12 16:49 ` [PATCH RT 02/11] net: dev: always take qdiscs busylock in __dev_xmit_skb() Steven Rostedt
2016-07-12 16:49 ` [PATCH RT 03/11] list_bl: fixup bogus lockdep warning Steven Rostedt
2016-07-12 16:49 ` [PATCH RT 04/11] ARM: imx: always use TWD on IMX6Q Steven Rostedt
2016-07-12 16:49 ` [PATCH RT 05/11] kernel/printk: Dont try to print from IRQ/NMI region Steven Rostedt
2016-07-12 16:49 ` [PATCH RT 06/11] arm: lazy preempt: correct resched condition Steven Rostedt
2016-07-12 16:49 ` [PATCH RT 07/11] locallock: add local_lock_on() Steven Rostedt
2016-07-12 16:49 ` [PATCH RT 08/11] mm: perform lru_add_drain_all() remotely Steven Rostedt
2016-07-12 16:49 ` [PATCH RT 09/11] trace: correct off by one while recording the trace-event Steven Rostedt
2016-07-12 16:50 ` [PATCH RT 10/11] x86: Fix an RT MCE crash Steven Rostedt
2016-07-12 16:50 ` [PATCH RT 11/11] Linux 3.12.61-rt82-rc1 Steven Rostedt
2016-07-12 23:23 ` Linux 3.12.61-rt82-rc2 Steven Rostedt
