* [PATCH RT 0/8] Linux 4.4.97-rt111-rc1
@ 2017-11-21 16:06 Steven Rostedt
2017-11-21 16:06 ` [PATCH RT 1/8] timer/hrtimer: check properly for a running timer Steven Rostedt
` (7 more replies)
0 siblings, 8 replies; 12+ messages in thread
From: Steven Rostedt @ 2017-11-21 16:06 UTC (permalink / raw)
To: linux-kernel, linux-rt-users
Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
tom.zanussi, Alex Shi
Dear RT Folks,
This is the RT stable review cycle of patch 4.4.97-rt111-rc1.
Please scream at me if I messed something up. Please test the patches too.
The -rc release will be uploaded to kernel.org and will be deleted when
the final release is out. This is just a review release (or release candidate).
The pre-releases will not be pushed to the git repository; only the
final release will be.
If all goes well, this patch will be converted to the next main release
on 11/27/2017.
Enjoy,
-- Steve
To build 4.4.97-rt111-rc1 directly, the following patches should be applied:
http://www.kernel.org/pub/linux/kernel/v4.x/linux-4.4.tar.xz
http://www.kernel.org/pub/linux/kernel/v4.x/patch-4.4.97.xz
http://www.kernel.org/pub/linux/kernel/projects/rt/4.4/patch-4.4.97-rt111-rc1.patch.xz
You can also build from 4.4.97-rt110 by applying the incremental patch:
http://www.kernel.org/pub/linux/kernel/projects/rt/4.4/incr/patch-4.4.97-rt110-rt111-rc1.patch.xz
Changes from 4.4.97-rt110:
---
Peter Zijlstra (1):
sched: Remove TASK_ALL
Sebastian Andrzej Siewior (4):
timer/hrtimer: check properly for a running timer
random: avoid preempt_disable()ed section
sched/migrate disable: handle updated task-mask mg-dis section
kernel/locking: use an exclusive wait_q for sleepers
Steven Rostedt (VMware) (1):
Linux 4.4.97-rt111-rc1
Thomas Gleixner (2):
rtmutex: Make lock_killable work
sched: Prevent task state corruption by spurious lock wakeup
----
drivers/char/random.c | 10 +++---
include/linux/hrtimer.h | 8 ++++-
include/linux/sched.h | 19 ++++++++++--
kernel/fork.c | 1 +
kernel/locking/rtmutex.c | 21 +++++--------
kernel/sched/core.c | 81 +++++++++++++++++++++++++++++++++++++++++-------
localversion-rt | 2 +-
7 files changed, 109 insertions(+), 33 deletions(-)
* [PATCH RT 1/8] timer/hrtimer: check properly for a running timer
2017-11-21 16:06 [PATCH RT 0/8] Linux 4.4.97-rt111-rc1 Steven Rostedt
@ 2017-11-21 16:06 ` Steven Rostedt
2017-11-21 16:06 ` [PATCH RT 2/8] rtmutex: Make lock_killable work Steven Rostedt
` (6 subsequent siblings)
7 siblings, 0 replies; 12+ messages in thread
From: Steven Rostedt @ 2017-11-21 16:06 UTC (permalink / raw)
To: linux-kernel, linux-rt-users
Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
tom.zanussi, Alex Shi, stable-rt, Alexander Gerasiov
[-- Attachment #1: 0001-timer-hrtimer-check-properly-for-a-running-timer.patch --]
[-- Type: text/plain, Size: 1344 bytes --]
4.4.97-rt111-rc1 stable review patch.
If anyone has any objections, please let me know.
------------------
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
hrtimer_callback_running() checks only whether a timer is running on a
CPU in hardirq context. This is okay for !RT. In an RT environment we
move most timers to the timer softirq, and therefore we also need to
check whether the timer is running in softirq context.
Cc: stable-rt@vger.kernel.org
Reported-by: Alexander Gerasiov <gq@cs.msu.su>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
include/linux/hrtimer.h | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/include/linux/hrtimer.h b/include/linux/hrtimer.h
index 8fbcdfa5dc77..ff317006d3e8 100644
--- a/include/linux/hrtimer.h
+++ b/include/linux/hrtimer.h
@@ -455,7 +455,13 @@ static inline int hrtimer_is_queued(struct hrtimer *timer)
*/
static inline int hrtimer_callback_running(const struct hrtimer *timer)
{
- return timer->base->cpu_base->running == timer;
+ if (timer->base->cpu_base->running == timer)
+ return 1;
+#ifdef CONFIG_PREEMPT_RT_BASE
+ if (timer->base->cpu_base->running_soft == timer)
+ return 1;
+#endif
+ return 0;
}
/* Forward a hrtimer so it expires after now: */
--
2.13.2
* [PATCH RT 2/8] rtmutex: Make lock_killable work
2017-11-21 16:06 [PATCH RT 0/8] Linux 4.4.97-rt111-rc1 Steven Rostedt
2017-11-21 16:06 ` [PATCH RT 1/8] timer/hrtimer: check properly for a running timer Steven Rostedt
@ 2017-11-21 16:06 ` Steven Rostedt
2017-11-21 16:06 ` [PATCH RT 3/8] random: avoid preempt_disable()ed section Steven Rostedt
` (5 subsequent siblings)
7 siblings, 0 replies; 12+ messages in thread
From: Steven Rostedt @ 2017-11-21 16:06 UTC (permalink / raw)
To: linux-kernel, linux-rt-users
Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
tom.zanussi, Alex Shi, rt-stable
[-- Attachment #1: 0002-rtmutex-Make-lock_killable-work.patch --]
[-- Type: text/plain, Size: 1439 bytes --]
4.4.97-rt111-rc1 stable review patch.
If anyone has any objections, please let me know.
------------------
From: Thomas Gleixner <tglx@linutronix.de>
Locking an rt mutex killable does not work because signal handling is
restricted to TASK_INTERRUPTIBLE.
Use signal_pending_state() unconditionally.
Cc: rt-stable@vger.kernel.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
kernel/locking/rtmutex.c | 19 +++++++------------
1 file changed, 7 insertions(+), 12 deletions(-)
diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index 0e9a6260441d..552dc6dd3a79 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -1672,18 +1672,13 @@ __rt_mutex_slowlock(struct rt_mutex *lock, int state,
if (try_to_take_rt_mutex(lock, current, waiter))
break;
- /*
- * TASK_INTERRUPTIBLE checks for signals and
- * timeout. Ignored otherwise.
- */
- if (unlikely(state == TASK_INTERRUPTIBLE)) {
- /* Signal pending? */
- if (signal_pending(current))
- ret = -EINTR;
- if (timeout && !timeout->task)
- ret = -ETIMEDOUT;
- if (ret)
- break;
+ if (timeout && !timeout->task) {
+ ret = -ETIMEDOUT;
+ break;
+ }
+ if (signal_pending_state(state, current)) {
+ ret = -EINTR;
+ break;
}
if (ww_ctx && ww_ctx->acquired > 0) {
--
2.13.2
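The fix works because signal_pending_state() encodes the state-dependent signal rules in one place. A minimal userspace sketch of that logic (illustrative bit values and stubbed per-task predicates, not the kernel's actual definitions):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative task-state bits; only the set relationships matter here. */
enum {
    TASK_INTERRUPTIBLE   = 0x01,
    TASK_UNINTERRUPTIBLE = 0x02,
    TASK_WAKEKILL        = 0x80,
    TASK_KILLABLE        = TASK_WAKEKILL | TASK_UNINTERRUPTIBLE,
};

/* Stand-in for the per-task signal predicates. */
struct task_model {
    bool any_signal_pending;
    bool fatal_signal_pending;
};

/*
 * Mirrors the shape of the kernel's signal_pending_state(): a pending
 * signal breaks the sleep if the sleep is interruptible, or if the
 * sleep is killable and the pending signal is fatal.  Plain
 * uninterruptible sleeps ignore signals entirely.
 */
static int signal_pending_state(long state, const struct task_model *p)
{
    if (!(state & (TASK_INTERRUPTIBLE | TASK_WAKEKILL)))
        return 0;
    if (!p->any_signal_pending)
        return 0;
    return (state & TASK_INTERRUPTIBLE) || p->fatal_signal_pending;
}
```

Because the check is state-aware, __rt_mutex_slowlock() can call it for every sleep state and let killable sleeps react to fatal signals, which the old TASK_INTERRUPTIBLE-only branch could not.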
* [PATCH RT 3/8] random: avoid preempt_disable()ed section
2017-11-21 16:06 [PATCH RT 0/8] Linux 4.4.97-rt111-rc1 Steven Rostedt
2017-11-21 16:06 ` [PATCH RT 1/8] timer/hrtimer: check properly for a running timer Steven Rostedt
2017-11-21 16:06 ` [PATCH RT 2/8] rtmutex: Make lock_killable work Steven Rostedt
@ 2017-11-21 16:06 ` Steven Rostedt
2017-11-21 16:06 ` [PATCH RT 4/8] sched: Prevent task state corruption by spurious lock wakeup Steven Rostedt
` (4 subsequent siblings)
7 siblings, 0 replies; 12+ messages in thread
From: Steven Rostedt @ 2017-11-21 16:06 UTC (permalink / raw)
To: linux-kernel, linux-rt-users
Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
tom.zanussi, Alex Shi
[-- Attachment #1: 0003-random-avoid-preempt_disable-ed-section.patch --]
[-- Type: text/plain, Size: 2160 bytes --]
4.4.97-rt111-rc1 stable review patch.
If anyone has any objections, please let me know.
------------------
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
extract_crng() will use sleeping locks while in a preempt_disable()
section due to get_cpu_var().
Work around it with local_locks.
Cc: stable-rt@vger.kernel.org # where it applies to
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
drivers/char/random.c | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/drivers/char/random.c b/drivers/char/random.c
index fecc40a69df8..b41745c5962c 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -260,6 +260,7 @@
#include <linux/irq.h>
#include <linux/syscalls.h>
#include <linux/completion.h>
+#include <linux/locallock.h>
#include <asm/processor.h>
#include <asm/uaccess.h>
@@ -1796,6 +1797,7 @@ int random_int_secret_init(void)
static DEFINE_PER_CPU(__u32 [MD5_DIGEST_WORDS], get_random_int_hash)
__aligned(sizeof(unsigned long));
+static DEFINE_LOCAL_IRQ_LOCK(hash_entropy_int_lock);
/*
* Get a random word for internal kernel use only. Similar to urandom but
@@ -1811,12 +1813,12 @@ unsigned int get_random_int(void)
if (arch_get_random_int(&ret))
return ret;
- hash = get_cpu_var(get_random_int_hash);
+ hash = &get_locked_var(hash_entropy_int_lock, get_random_int_hash);
hash[0] += current->pid + jiffies + random_get_entropy();
md5_transform(hash, random_int_secret);
ret = hash[0];
- put_cpu_var(get_random_int_hash);
+ put_locked_var(hash_entropy_int_lock, get_random_int_hash);
return ret;
}
@@ -1833,12 +1835,12 @@ unsigned long get_random_long(void)
if (arch_get_random_long(&ret))
return ret;
- hash = get_cpu_var(get_random_int_hash);
+ hash = &get_locked_var(hash_entropy_int_lock, get_random_int_hash);
hash[0] += current->pid + jiffies + random_get_entropy();
md5_transform(hash, random_int_secret);
ret = *(unsigned long *)hash;
- put_cpu_var(get_random_int_hash);
+ put_locked_var(hash_entropy_int_lock, get_random_int_hash);
return ret;
}
--
2.13.2
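The difference between the two protection schemes can be sketched in a toy model (the model_* names are hypothetical; the real get_cpu_var()/get_locked_var() are kernel macros):

```c
#include <assert.h>
#include <stdbool.h>

static int preempt_count;       /* >0 means sleeping is forbidden */
static bool lock_held;          /* the local_lock protecting the buffer */
static unsigned int scratch[4]; /* stands in for get_random_int_hash */

/* get_cpu_var()/put_cpu_var(): pin the per-CPU data by disabling
 * preemption, so any sleeping lock taken in between is illegal on RT. */
static unsigned int *model_get_cpu_var(void) { preempt_count++; return scratch; }
static void model_put_cpu_var(void) { preempt_count--; }

/* get_locked_var()/put_locked_var(): serialize through a local lock,
 * which on RT is itself a sleeping lock, so the section stays
 * preemptible and is allowed to sleep. */
static unsigned int *model_get_locked_var(void) { lock_held = true; return scratch; }
static void model_put_locked_var(void) { lock_held = false; }

static bool may_sleep(void) { return preempt_count == 0; }
```

This is why swapping get_cpu_var() for get_locked_var() removes the "sleeping lock inside preempt_disable()" splat while still serializing access to the per-CPU hash.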
* [PATCH RT 4/8] sched: Prevent task state corruption by spurious lock wakeup
2017-11-21 16:06 [PATCH RT 0/8] Linux 4.4.97-rt111-rc1 Steven Rostedt
` (2 preceding siblings ...)
2017-11-21 16:06 ` [PATCH RT 3/8] random: avoid preempt_disable()ed section Steven Rostedt
@ 2017-11-21 16:06 ` Steven Rostedt
2017-11-21 16:06 ` [PATCH RT 5/8] sched: Remove TASK_ALL Steven Rostedt
` (3 subsequent siblings)
7 siblings, 0 replies; 12+ messages in thread
From: Steven Rostedt @ 2017-11-21 16:06 UTC (permalink / raw)
To: linux-kernel, linux-rt-users
Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
tom.zanussi, Alex Shi, stable-rt, Mathias Koehrer, David Hauck
[-- Attachment #1: 0004-sched-Prevent-task-state-corruption-by-spurious-lock.patch --]
[-- Type: text/plain, Size: 2756 bytes --]
4.4.97-rt111-rc1 stable review patch.
If anyone has any objections, please let me know.
------------------
From: Thomas Gleixner <tglx@linutronix.de>
Mathias and others reported GDB failures on RT.
The following scenario leads to task state corruption:
CPU0                                    CPU1

T1->state = TASK_XXX;
spin_lock(&lock)
  rt_spin_lock_slowlock(&lock->rtmutex)
    raw_spin_lock(&rtm->wait_lock);
    T1->saved_state = current->state;
    T1->state = TASK_UNINTERRUPTIBLE;
                                        spin_unlock(&lock)
    task_blocks_on_rt_mutex(rtm)        rt_spin_lock_slowunlock(&lock->rtmutex)
      queue_waiter(rtm)                   raw_spin_lock(&rtm->wait_lock);
      pi_chain_walk(rtm)
        raw_spin_unlock(&rtm->wait_lock);
                                          wake_top_waiter(T1)

    raw_spin_lock(&rtm->wait_lock);
    for (;;) {
      if (__try_to_take_rt_mutex())   <- Succeeds
        break;
      ...
    }
    T1->state = T1->saved_state;
                                          try_to_wake_up(T1)
                                            ttwu_do_wakeup(T1)
                                              T1->state = TASK_RUNNING;
In most cases this is harmless, because waiting for some event, which is
the usual reason for TASK_[UN]INTERRUPTIBLE, has to be safe against other
forms of spurious wakeups anyway.
But in case of TASK_TRACED this is actually fatal, because the task loses
the TASK_TRACED state. In consequence it fails to consume SIGSTOP which was
sent from the debugger and actually delivers SIGSTOP to the task which
breaks the ptrace mechanics and brings the debugger into an unexpected
state.
The TASK_TRACED state should prevent getting there due to the state
matching logic in try_to_wake_up(). But that's not true because
wake_up_lock_sleeper() uses TASK_ALL as state mask. That's bogus because
lock sleepers always use TASK_UNINTERRUPTIBLE, so the wakeup should use
that as well.
The cure is way simpler than figuring it out:
Change the mask used in wake_up_lock_sleeper() from TASK_ALL to
TASK_UNINTERRUPTIBLE.
Cc: stable-rt@vger.kernel.org
Reported-by: Mathias Koehrer <mathias.koehrer@etas.com>
Reported-by: David Hauck <davidh@netacquire.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
kernel/sched/core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index ed9550c87f66..970b893a1d15 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2212,7 +2212,7 @@ EXPORT_SYMBOL(wake_up_process);
*/
int wake_up_lock_sleeper(struct task_struct *p)
{
- return try_to_wake_up(p, TASK_ALL, WF_LOCK_SLEEPER);
+ return try_to_wake_up(p, TASK_UNINTERRUPTIBLE, WF_LOCK_SLEEPER);
}
int wake_up_state(struct task_struct *p, unsigned int state)
--
2.13.2
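The effect of the mask change can be seen in a toy model of try_to_wake_up()'s state filter (state bit values as in v4.4's include/linux/sched.h; the wakeup function itself is a heavy simplification):

```c
#include <assert.h>

/* Task state bits as in v4.4's include/linux/sched.h. */
#define TASK_RUNNING            0
#define TASK_INTERRUPTIBLE      1
#define TASK_UNINTERRUPTIBLE    2
#define __TASK_STOPPED          4
#define __TASK_TRACED           8
#define TASK_NORMAL             (TASK_INTERRUPTIBLE | TASK_UNINTERRUPTIBLE)
#define TASK_ALL                (TASK_NORMAL | __TASK_STOPPED | __TASK_TRACED)

/*
 * Toy model of try_to_wake_up(): the wakeup only proceeds if the
 * task's current state is covered by the caller's state mask.
 */
static int model_try_to_wake_up(int *task_state, int state_mask)
{
    if (!(*task_state & state_mask))
        return 0;               /* wakeup filtered out */
    *task_state = TASK_RUNNING;
    return 1;
}
```

With TASK_ALL the lock wakeup matches a traced task and wipes TASK_TRACED; with TASK_UNINTERRUPTIBLE the filter leaves it alone, which is the whole fix.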
* [PATCH RT 5/8] sched: Remove TASK_ALL
2017-11-21 16:06 [PATCH RT 0/8] Linux 4.4.97-rt111-rc1 Steven Rostedt
` (3 preceding siblings ...)
2017-11-21 16:06 ` [PATCH RT 4/8] sched: Prevent task state corruption by spurious lock wakeup Steven Rostedt
@ 2017-11-21 16:06 ` Steven Rostedt
2017-11-21 16:19 ` Peter Zijlstra
2017-11-21 16:07 ` [PATCH RT 6/8] sched/migrate disable: handle updated task-mask mg-dis section Steven Rostedt
` (2 subsequent siblings)
7 siblings, 1 reply; 12+ messages in thread
From: Steven Rostedt @ 2017-11-21 16:06 UTC (permalink / raw)
To: linux-kernel, linux-rt-users
Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
tom.zanussi, Alex Shi, stable-rt, Peter Zijlstra (Intel)
[-- Attachment #1: 0005-sched-Remove-TASK_ALL.patch --]
[-- Type: text/plain, Size: 1067 bytes --]
4.4.97-rt111-rc1 stable review patch.
If anyone has any objections, please let me know.
------------------
From: Peter Zijlstra <peterz@infradead.org>
It's unused:
$ git grep "\<TASK_ALL\>" | wc -l
1
And dangerous, kill the bugger.
Cc: stable-rt@vger.kernel.org
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
include/linux/sched.h | 1 -
1 file changed, 1 deletion(-)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index b7b001e26509..56ccd0a3dd49 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -234,7 +234,6 @@ extern char ___assert_task_state[1 - 2*!!(
/* Convenience macros for the sake of wake_up */
#define TASK_NORMAL (TASK_INTERRUPTIBLE | TASK_UNINTERRUPTIBLE)
-#define TASK_ALL (TASK_NORMAL | __TASK_STOPPED | __TASK_TRACED)
/* get_task_state() */
#define TASK_REPORT (TASK_RUNNING | TASK_INTERRUPTIBLE | \
--
2.13.2
* [PATCH RT 6/8] sched/migrate disable: handle updated task-mask mg-dis section
2017-11-21 16:06 [PATCH RT 0/8] Linux 4.4.97-rt111-rc1 Steven Rostedt
` (4 preceding siblings ...)
2017-11-21 16:06 ` [PATCH RT 5/8] sched: Remove TASK_ALL Steven Rostedt
@ 2017-11-21 16:07 ` Steven Rostedt
2017-11-21 16:07 ` [PATCH RT 8/8] Linux 4.4.97-rt111-rc1 Steven Rostedt
[not found] ` <20171121160706.703915626@goodmis.org>
7 siblings, 0 replies; 12+ messages in thread
From: Steven Rostedt @ 2017-11-21 16:07 UTC (permalink / raw)
To: linux-kernel, linux-rt-users
Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
tom.zanussi, Alex Shi, stable-rt
[-- Attachment #1: 0006-sched-migrate-disable-handle-updated-task-mask-mg-di.patch --]
[-- Type: text/plain, Size: 3838 bytes --]
4.4.97-rt111-rc1 stable review patch.
If anyone has any objections, please let me know.
------------------
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
If a task's cpumask changes while the task is in a migrate_disable()
section then we don't react to it after migrate_enable(). It matters,
however, if the current CPU is no longer part of the cpumask. We also
miss the ->set_cpus_allowed() callback.
This patch fixes it by setting task->migrate_disable_update once we hit
this "delayed" hook.
This bug was introduced while fixing an unrelated issue in
migrate_disable() in v4.4-rt3 (update_migrate_disable() got removed
during that).
Cc: stable-rt@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
include/linux/sched.h | 1 +
kernel/sched/core.c | 59 +++++++++++++++++++++++++++++++++++++++++++++------
2 files changed, 54 insertions(+), 6 deletions(-)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 56ccd0a3dd49..331cdbfc6431 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1438,6 +1438,7 @@ struct task_struct {
unsigned int policy;
#ifdef CONFIG_PREEMPT_RT_FULL
int migrate_disable;
+ int migrate_disable_update;
# ifdef CONFIG_SCHED_DEBUG
int migrate_disable_atomic;
# endif
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 970b893a1d15..bea476417297 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1212,18 +1212,14 @@ void set_cpus_allowed_common(struct task_struct *p, const struct cpumask *new_ma
p->nr_cpus_allowed = cpumask_weight(new_mask);
}
-void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
+static void __do_set_cpus_allowed_tail(struct task_struct *p,
+ const struct cpumask *new_mask)
{
struct rq *rq = task_rq(p);
bool queued, running;
lockdep_assert_held(&p->pi_lock);
- if (__migrate_disabled(p)) {
- cpumask_copy(&p->cpus_allowed, new_mask);
- return;
- }
-
queued = task_on_rq_queued(p);
running = task_current(rq, p);
@@ -1246,6 +1242,20 @@ void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
enqueue_task(rq, p, ENQUEUE_RESTORE);
}
+void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
+{
+ if (__migrate_disabled(p)) {
+ lockdep_assert_held(&p->pi_lock);
+
+ cpumask_copy(&p->cpus_allowed, new_mask);
+#if defined(CONFIG_PREEMPT_RT_FULL) && defined(CONFIG_SMP)
+ p->migrate_disable_update = 1;
+#endif
+ return;
+ }
+ __do_set_cpus_allowed_tail(p, new_mask);
+}
+
static DEFINE_PER_CPU(struct cpumask, sched_cpumasks);
static DEFINE_MUTEX(sched_down_mutex);
static cpumask_t sched_down_cpumask;
@@ -3231,6 +3241,43 @@ void migrate_enable(void)
*/
p->migrate_disable = 0;
+ if (p->migrate_disable_update) {
+ unsigned long flags;
+ struct rq *rq;
+
+ rq = task_rq_lock(p, &flags);
+ update_rq_clock(rq);
+
+ __do_set_cpus_allowed_tail(p, &p->cpus_allowed);
+ task_rq_unlock(rq, p, &flags);
+
+ p->migrate_disable_update = 0;
+
+ WARN_ON(smp_processor_id() != task_cpu(p));
+ if (!cpumask_test_cpu(task_cpu(p), &p->cpus_allowed)) {
+ const struct cpumask *cpu_valid_mask = cpu_active_mask;
+ struct migration_arg arg;
+ unsigned int dest_cpu;
+
+ if (p->flags & PF_KTHREAD) {
+ /*
+ * Kernel threads are allowed on online && !active CPUs
+ */
+ cpu_valid_mask = cpu_online_mask;
+ }
+ dest_cpu = cpumask_any_and(cpu_valid_mask, &p->cpus_allowed);
+ arg.task = p;
+ arg.dest_cpu = dest_cpu;
+
+ unpin_current_cpu();
+ preempt_lazy_enable();
+ preempt_enable();
+ stop_one_cpu(task_cpu(p), migration_cpu_stop, &arg);
+ tlb_migrate_finish(p->mm);
+ return;
+ }
+ }
+
unpin_current_cpu();
preempt_enable();
preempt_lazy_enable();
--
2.13.2
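The control flow of the fix condenses into a small model (the model_* names are hypothetical and cpumasks are reduced to plain bitmasks; the real migrate_enable() additionally migrates the task off a now-forbidden CPU):

```c
#include <assert.h>

/* Toy task: cpumasks reduced to unsigned bitmasks. */
struct task_model {
    int migrate_disable;
    int migrate_disable_update;
    unsigned int cpus_allowed;   /* mask requested by the caller */
    unsigned int effective_mask; /* mask the scheduler acts on */
};

/* Mirrors do_set_cpus_allowed(): inside a migrate_disable() section the
 * new mask is only recorded and flagged, not applied. */
static void model_set_cpus_allowed(struct task_model *p, unsigned int mask)
{
    p->cpus_allowed = mask;
    if (p->migrate_disable) {
        p->migrate_disable_update = 1; /* defer the real work */
        return;
    }
    p->effective_mask = mask;
}

/* Mirrors the new tail of migrate_enable(): apply the deferred update. */
static void model_migrate_enable(struct task_model *p)
{
    p->migrate_disable = 0;
    if (p->migrate_disable_update) {
        p->migrate_disable_update = 0;
        p->effective_mask = p->cpus_allowed;
    }
}
```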
* [PATCH RT 8/8] Linux 4.4.97-rt111-rc1
2017-11-21 16:06 [PATCH RT 0/8] Linux 4.4.97-rt111-rc1 Steven Rostedt
` (5 preceding siblings ...)
2017-11-21 16:07 ` [PATCH RT 6/8] sched/migrate disable: handle updated task-mask mg-dis section Steven Rostedt
@ 2017-11-21 16:07 ` Steven Rostedt
[not found] ` <20171121160706.703915626@goodmis.org>
7 siblings, 0 replies; 12+ messages in thread
From: Steven Rostedt @ 2017-11-21 16:07 UTC (permalink / raw)
To: linux-kernel, linux-rt-users
Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
tom.zanussi, Alex Shi
[-- Attachment #1: 0008-Linux-4.4.97-rt111-rc1.patch --]
[-- Type: text/plain, Size: 414 bytes --]
4.4.97-rt111-rc1 stable review patch.
If anyone has any objections, please let me know.
------------------
From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>
---
localversion-rt | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/localversion-rt b/localversion-rt
index b3e668a8fb94..ff68eff1428c 100644
--- a/localversion-rt
+++ b/localversion-rt
@@ -1 +1 @@
--rt110
+-rt111-rc1
--
2.13.2
* Re: [PATCH RT 5/8] sched: Remove TASK_ALL
2017-11-21 16:06 ` [PATCH RT 5/8] sched: Remove TASK_ALL Steven Rostedt
@ 2017-11-21 16:19 ` Peter Zijlstra
2017-11-21 16:27 ` Steven Rostedt
0 siblings, 1 reply; 12+ messages in thread
From: Peter Zijlstra @ 2017-11-21 16:19 UTC (permalink / raw)
To: Steven Rostedt
Cc: linux-kernel, linux-rt-users, Thomas Gleixner, Carsten Emde,
Sebastian Andrzej Siewior, John Kacur, Paul Gortmaker,
Julia Cartwright, Daniel Wagner, tom.zanussi, Alex Shi, stable-rt
On Tue, Nov 21, 2017 at 11:06:59AM -0500, Steven Rostedt wrote:
> 4.4.97-rt111-rc1 stable review patch.
> If anyone has any objections, please let me know.
No real objection, just curious as to why it's -stable material.
* Re: [PATCH RT 7/8] kernel/locking: use an exclusive wait_q for sleepers
[not found] ` <20171121160706.703915626@goodmis.org>
@ 2017-11-21 16:20 ` Steven Rostedt
0 siblings, 0 replies; 12+ messages in thread
From: Steven Rostedt @ 2017-11-21 16:20 UTC (permalink / raw)
To: linux-kernel, linux-rt-users
Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
John Kacur, Paul Gortmaker, Julia Cartwright, Daniel Wagner,
tom.zanussi, Alex Shi, Mike Galbraith, stable-rt
On Tue, 21 Nov 2017 11:07:01 -0500
Steven Rostedt <rostedt@goodmis.org> wrote:
> 4.4.97-rt111-rc1 stable review patch.
> If anyone has any objections, please let me know.
>
> ------------------
>
> From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
>
> If a task is queued as a sleeper for a wakeup and never goes to
> schedule() (because it just obtained the lock) then it will receive a
> spurious wake up which is not "bad", it is accounted for. Until that
> wake up happens this task can not be enqueued for any wake ups handled
> by the WAKE_Q infrastructure (because a task can only be enqueued
> once). This wouldn't be bad if we used the same wakeup mechanism for
> the wake up of sleepers as we do for "normal" wake ups. But we don't…
Strange characters in the change log made quilt mail fail to send to the
list (bad encoding, because quilt mail can't handle this).
-- Steve
>
> So.
>    T1                         T2                      T3
>    spin_lock(x)                                       spin_unlock(x);
>                                                       wake_q_add_sleeper(q1, T1)
>    spin_unlock(x)
>    set_state(TASK_INTERRUPTIBLE)
>    if (!condition)
>       schedule()
>                               condition = true
>                               wake_q_add(q2, T1)
>                               // T1 not added, still enqueued
>                               wake_up_q(q2)
>                                                       wake_up_q_sleeper(q1)
>                                                       // T1 not woken up, wrong task state
>
> In order to solve this race this patch adds a wake_q_node for the
> sleeper case.
>
> Reported-by: Mike Galbraith <efault@gmx.de>
> Cc: stable-rt@vger.kernel.org
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
> ---
> include/linux/sched.h | 17 +++++++++++++++--
> kernel/fork.c | 1 +
> kernel/locking/rtmutex.c | 2 +-
> kernel/sched/core.c | 20 ++++++++++++++++----
> 4 files changed, 33 insertions(+), 7 deletions(-)
>
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index 331cdbfc6431..f37654adf12a 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -979,8 +979,20 @@ struct wake_q_head {
> #define WAKE_Q(name) \
> struct wake_q_head name = { WAKE_Q_TAIL, &name.first }
>
> -extern void wake_q_add(struct wake_q_head *head,
> - struct task_struct *task);
> +extern void __wake_q_add(struct wake_q_head *head,
> + struct task_struct *task, bool sleeper);
> +static inline void wake_q_add(struct wake_q_head *head,
> + struct task_struct *task)
> +{
> + __wake_q_add(head, task, false);
> +}
> +
> +static inline void wake_q_add_sleeper(struct wake_q_head *head,
> + struct task_struct *task)
> +{
> + __wake_q_add(head, task, true);
> +}
> +
> extern void __wake_up_q(struct wake_q_head *head, bool sleeper);
>
> static inline void wake_up_q(struct wake_q_head *head)
> @@ -1640,6 +1652,7 @@ struct task_struct {
> raw_spinlock_t pi_lock;
>
> struct wake_q_node wake_q;
> + struct wake_q_node wake_q_sleeper;
>
> #ifdef CONFIG_RT_MUTEXES
> /* PI waiters blocked on a rt_mutex held by this task */
> diff --git a/kernel/fork.c b/kernel/fork.c
> index 0a873f52999f..368e770abee6 100644
> --- a/kernel/fork.c
> +++ b/kernel/fork.c
> @@ -395,6 +395,7 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node)
> tsk->splice_pipe = NULL;
> tsk->task_frag.page = NULL;
> tsk->wake_q.next = NULL;
> + tsk->wake_q_sleeper.next = NULL;
>
> account_kernel_stack(ti, 1);
>
> diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
> index 552dc6dd3a79..b5b89c51f27e 100644
> --- a/kernel/locking/rtmutex.c
> +++ b/kernel/locking/rtmutex.c
> @@ -1557,7 +1557,7 @@ static void mark_wakeup_next_waiter(struct wake_q_head *wake_q,
> raw_spin_unlock(¤t->pi_lock);
>
> if (waiter->savestate)
> - wake_q_add(wake_sleeper_q, waiter->task);
> + wake_q_add_sleeper(wake_sleeper_q, waiter->task);
> else
> wake_q_add(wake_q, waiter->task);
> }
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index bea476417297..ed0f841d4d5c 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -523,9 +523,15 @@ static bool set_nr_if_polling(struct task_struct *p)
> #endif
> #endif
>
> -void wake_q_add(struct wake_q_head *head, struct task_struct *task)
> +void __wake_q_add(struct wake_q_head *head, struct task_struct *task,
> + bool sleeper)
> {
> - struct wake_q_node *node = &task->wake_q;
> + struct wake_q_node *node;
> +
> + if (sleeper)
> + node = &task->wake_q_sleeper;
> + else
> + node = &task->wake_q;
>
> /*
> * Atomically grab the task, if ->wake_q is !nil already it means
> @@ -554,11 +560,17 @@ void __wake_up_q(struct wake_q_head *head, bool sleeper)
> while (node != WAKE_Q_TAIL) {
> struct task_struct *task;
>
> - task = container_of(node, struct task_struct, wake_q);
> + if (sleeper)
> + task = container_of(node, struct task_struct, wake_q_sleeper);
> + else
> + task = container_of(node, struct task_struct, wake_q);
> BUG_ON(!task);
> /* task can safely be re-inserted now */
> node = node->next;
> - task->wake_q.next = NULL;
> + if (sleeper)
> + task->wake_q_sleeper.next = NULL;
> + else
> + task->wake_q.next = NULL;
>
> /*
> * wake_up_process() implies a wmb() to pair with the queueing
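The "a task can only be enqueued once" constraint comes from each task owning exactly one wake_q_node, grabbed via a cmpxchg on its next pointer. A single-threaded sketch of why a second node fixes the race (names modeled on the patch; the model_* helper is hypothetical and the atomicity is elided):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct wake_q_node { struct wake_q_node *next; };

/* Sentinel marking the last queued node, as in the kernel's wake-q code. */
#define WAKE_Q_TAIL ((struct wake_q_node *)0x1)

/* A task owns one node per wake queue it can sit on concurrently. */
struct task_model {
    struct wake_q_node wake_q;
    struct wake_q_node wake_q_sleeper;
};

/*
 * Models the grab in __wake_q_add(): in the kernel this is a cmpxchg of
 * node->next from NULL; a non-NULL next means the task is already on
 * some wake queue and the add is silently dropped.
 */
static bool model_wake_q_grab(struct wake_q_node *node)
{
    if (node->next != NULL)
        return false;   /* already queued, add is lost */
    node->next = WAKE_Q_TAIL;
    return true;
}
```

With only the single wake_q node, the sleeper enqueue from mark_wakeup_next_waiter() and a later regular wake_q_add() would collide exactly as in the T1/T2/T3 scenario; the separate wake_q_sleeper node keeps the two queues independent.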
* Re: [PATCH RT 5/8] sched: Remove TASK_ALL
2017-11-21 16:19 ` Peter Zijlstra
@ 2017-11-21 16:27 ` Steven Rostedt
2017-11-21 17:26 ` Thomas Gleixner
0 siblings, 1 reply; 12+ messages in thread
From: Steven Rostedt @ 2017-11-21 16:27 UTC (permalink / raw)
To: Peter Zijlstra
Cc: linux-kernel, linux-rt-users, Thomas Gleixner, Carsten Emde,
Sebastian Andrzej Siewior, John Kacur, Paul Gortmaker,
Julia Cartwright, Daniel Wagner, tom.zanussi, Alex Shi, stable-rt
On Tue, 21 Nov 2017 17:19:07 +0100
Peter Zijlstra <peterz@infradead.org> wrote:
> On Tue, Nov 21, 2017 at 11:06:59AM -0500, Steven Rostedt wrote:
> > 4.4.97-rt111-rc1 stable review patch.
> > If anyone has any objections, please let me know.
>
> No real objection, just curious as to why it's -stable material.
Good question ;-) But it was marked with the stable-rt tag, Thomas
said it was dangerous, and it applied nicely.
-- Steve
* Re: [PATCH RT 5/8] sched: Remove TASK_ALL
2017-11-21 16:27 ` Steven Rostedt
@ 2017-11-21 17:26 ` Thomas Gleixner
0 siblings, 0 replies; 12+ messages in thread
From: Thomas Gleixner @ 2017-11-21 17:26 UTC (permalink / raw)
To: Steven Rostedt
Cc: Peter Zijlstra, linux-kernel, linux-rt-users, Carsten Emde,
Sebastian Andrzej Siewior, John Kacur, Paul Gortmaker,
Julia Cartwright, Daniel Wagner, tom.zanussi, Alex Shi, stable-rt
On Tue, 21 Nov 2017, Steven Rostedt wrote:
> On Tue, 21 Nov 2017 17:19:07 +0100
> Peter Zijlstra <peterz@infradead.org> wrote:
>
> > On Tue, Nov 21, 2017 at 11:06:59AM -0500, Steven Rostedt wrote:
> > > 4.4.97-rt111-rc1 stable review patch.
> > > If anyone has any objections, please let me know.
> >
> > No real objection, just curious as to why it's -stable material.
>
> Good question ;-) But it was marked with the stable-rt tag, Thomas
> said it was dangerous, and it applied nicely.
Yes, it's dangerous and I wanted to make sure that it does not get reused
by chance.
Thanks,
tglx