* [PATCH RT 0/3] Linux 3.10.47-rt50-rc1
@ 2014-07-14 20:04 Steven Rostedt
2014-07-14 20:04 ` [PATCH RT 1/3] sched: Do not clear PF_NO_SETAFFINITY flag in select_fallback_rq() Steven Rostedt
` (3 more replies)
0 siblings, 4 replies; 9+ messages in thread
From: Steven Rostedt @ 2014-07-14 20:04 UTC (permalink / raw)
To: linux-kernel, linux-rt-users
Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
John Kacur, Paul Gortmaker
Dear RT Folks,
This is the RT stable review cycle of patch 3.10.47-rt50-rc1.
Please scream at me if I messed something up. Please test the patches too.
The -rc release will be uploaded to kernel.org and will be deleted when
the final release is out. This is just a review release (or release candidate).
The pre-releases will not be pushed to the git repository; only the
final release will be.
If all goes well, this patch will be converted to the next main release
on 7/17/2014.
Enjoy,
-- Steve
To build 3.10.47-rt50-rc1 directly, the following patches should be applied:
http://www.kernel.org/pub/linux/kernel/v3.x/linux-3.10.tar.xz
http://www.kernel.org/pub/linux/kernel/v3.x/patch-3.10.47.xz
http://www.kernel.org/pub/linux/kernel/projects/rt/3.10/patch-3.10.47-rt50-rc1.patch.xz
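For example, one way to fetch and apply the three pieces above (just a
sketch; it assumes wget, xz and patch are available, and the file names
simply follow the URLs):

  wget http://www.kernel.org/pub/linux/kernel/v3.x/linux-3.10.tar.xz
  wget http://www.kernel.org/pub/linux/kernel/v3.x/patch-3.10.47.xz
  wget http://www.kernel.org/pub/linux/kernel/projects/rt/3.10/patch-3.10.47-rt50-rc1.patch.xz

  # Unpack the base tree, then apply the stable patch and the RT patch in order.
  tar xf linux-3.10.tar.xz
  cd linux-3.10
  xzcat ../patch-3.10.47.xz | patch -p1
  xzcat ../patch-3.10.47-rt50-rc1.patch.xz | patch -p1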
You can also build from 3.10.47-rt49 by applying the incremental patch:
http://www.kernel.org/pub/linux/kernel/projects/rt/3.10/incr/patch-3.10.47-rt49-rt50-rc1.patch.xz
Changes from 3.10.47-rt49:
---
Steven Rostedt (1):
sched: Do not clear PF_NO_SETAFFINITY flag in select_fallback_rq()
Steven Rostedt (Red Hat) (1):
Linux 3.10.47-rt50-rc1
Thomas Gleixner (1):
workqueue: Prevent deadlock/stall on RT
----
kernel/sched/core.c | 13 +++++-------
kernel/workqueue.c | 61 +++++++++++++++++++++++++++++++++++++++++------------
localversion-rt | 2 +-
3 files changed, 54 insertions(+), 22 deletions(-)
* [PATCH RT 1/3] sched: Do not clear PF_NO_SETAFFINITY flag in select_fallback_rq()
2014-07-14 20:04 [PATCH RT 0/3] Linux 3.10.47-rt50-rc1 Steven Rostedt
@ 2014-07-14 20:04 ` Steven Rostedt
2014-07-14 20:04 ` [PATCH RT 2/3] workqueue: Prevent deadlock/stall on RT Steven Rostedt
` (2 subsequent siblings)
3 siblings, 0 replies; 9+ messages in thread
From: Steven Rostedt @ 2014-07-14 20:04 UTC (permalink / raw)
To: linux-kernel, linux-rt-users
Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
John Kacur, Paul Gortmaker, stable-rt, Clark Williams, Peter Zijlstra

[-- Attachment #1: 0001-sched-Do-not-clear-PF_NO_SETAFFINITY-flag-in-select_.patch --]
[-- Type: text/plain, Size: 1688 bytes --]

3.10.47-rt50-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Steven Rostedt <rostedt@goodmis.org>

I talked with Peter Zijlstra about this, and he told me that the
clearing of the PF_NO_SETAFFINITY flag was to deal with the optimization
of migrate_disable/enable() that ignores tasks that have that flag set.
But that optimization was removed when I did a rework of the cpu hotplug
code.

I found that ignoring tasks that had that flag set would cause those
tasks to not sync with the hotplug code and cause the kernel to crash.
Thus they could no longer be treated specially, and those tasks had to
go through the same work as tasks without that flag set.

Now that those tasks are not treated specially, there's no reason to
clear the flag.

This may still need more testing, as the migrate_me() code does not
ignore those flags.

Cc: stable-rt@vger.kernel.org
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Clark Williams <williams@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20140701111444.0cfebaa1@gandalf.local.home
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 kernel/sched/core.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index a060a092e92b..b8acecc0600f 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1289,12 +1289,6 @@ out:
 		}
 	}
 
-	/*
-	 * Clear PF_NO_SETAFFINITY, otherwise we wreckage
-	 * migrate_disable/enable. See optimization for
-	 * PF_NO_SETAFFINITY tasks there.
-	 */
-	p->flags &= ~PF_NO_SETAFFINITY;
 	return dest_cpu;
 }
 
-- 
2.0.0
* [PATCH RT 2/3] workqueue: Prevent deadlock/stall on RT
2014-07-14 20:04 [PATCH RT 0/3] Linux 3.10.47-rt50-rc1 Steven Rostedt
2014-07-14 20:04 ` [PATCH RT 1/3] sched: Do not clear PF_NO_SETAFFINITY flag in select_fallback_rq() Steven Rostedt
@ 2014-07-14 20:04 ` Steven Rostedt
2014-07-14 20:04 ` [PATCH RT 3/3] Linux 3.10.47-rt50-rc1 Steven Rostedt
2014-07-15  0:09 ` [PATCH RT 0/3] " Corey Minyard
3 siblings, 0 replies; 9+ messages in thread
From: Steven Rostedt @ 2014-07-14 20:04 UTC (permalink / raw)
To: linux-kernel, linux-rt-users
Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
John Kacur, Paul Gortmaker, Richard Weinberger, stable-rt

[-- Attachment #1: 0002-workqueue-Prevent-deadlock-stall-on-RT.patch --]
[-- Type: text/plain, Size: 7002 bytes --]

3.10.47-rt50-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Thomas Gleixner <tglx@linutronix.de>

Austin reported an XFS deadlock/stall on RT where scheduled work never
gets executed and tasks wait for each other forever.

The underlying problem is the modification of the RT code to the
handling of workers which are about to go to sleep. In mainline, a
worker thread which goes to sleep wakes an idle worker if there is more
work to do. This happens from the guts of the schedule() function. On RT
this must happen outside of the scheduler, and the accessed data
structures are not protected against scheduling due to the
spinlock-to-rtmutex conversion. So the naive solution was to move the
code outside of the scheduler and protect the data structures with the
pool lock. That approach turned out to be a little too naive, as we
cannot call into that code when the thread blocks on a lock: it is not
allowed to block on two locks in parallel. So we don't call into the
worker wakeup magic when the worker is blocked on a lock, which causes
the deadlock/stall observed by Austin and Mike.

Looking deeper into that worker code, it turns out that the only
relevant data structure which needs to be protected is the list of idle
workers which can be woken up.

So the solution is to protect the list manipulation operations with
preempt_disable/enable pairs on RT and to call unconditionally into the
worker code even when the worker is blocked on a lock. The preemption
protection is safe, as there is nothing which can fiddle with the list
outside of thread context.

Reported-and_tested-by: Austin Schuh <austin@peloton-tech.com>
Reported-and_tested-by: Mike Galbraith <umgwanakikbuti@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: http://vger.kernel.org/r/alpine.DEB.2.10.1406271249510.5170@nanos
Cc: Richard Weinberger <richard.weinberger@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: stable-rt@vger.kernel.org
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/sched/core.c |  7 ++++--
 kernel/workqueue.c  | 61 +++++++++++++++++++++++++++++++++++++++++------------
 2 files changed, 53 insertions(+), 15 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index b8acecc0600f..f7aa4ca0cedb 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3217,9 +3217,8 @@ need_resched:
 
 static inline void sched_submit_work(struct task_struct *tsk)
 {
-	if (!tsk->state || tsk_is_pi_blocked(tsk))
+	if (!tsk->state)
 		return;
-
 	/*
 	 * If a worker went to sleep, notify and ask workqueue whether
 	 * it wants to wake up a task to maintain concurrency.
@@ -3227,6 +3226,10 @@ static inline void sched_submit_work(struct task_struct *tsk)
 	 */
 	if (tsk->flags & PF_WQ_WORKER)
 		wq_worker_sleeping(tsk);
+
+	if (tsk_is_pi_blocked(tsk))
+		return;
+
 	/*
 	 * If we are going to sleep and we have plugged IO queued,
 	 * make sure to submit it to avoid deadlocks.
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 318c86593597..8f080af2d863 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -125,6 +125,11 @@ enum {
  *    cpu or grabbing pool->lock is enough for read access.  If
  *    POOL_DISASSOCIATED is set, it's identical to L.
  *
+ *    On RT we need the extra protection via rt_lock_idle_list() for
+ *    the list manipulations against read access from
+ *    wq_worker_sleeping(). All other places are nicely serialized via
+ *    pool->lock.
+ *
  * MG: pool->manager_mutex and pool->lock protected.  Writes require both
  *     locks.  Reads can happen under either lock.
  *
@@ -395,6 +400,31 @@ static void copy_workqueue_attrs(struct workqueue_attrs *to,
 		if (({ assert_rcu_or_wq_mutex(wq); false; })) { }	\
 		else
 
+#ifdef CONFIG_PREEMPT_RT_BASE
+static inline void rt_lock_idle_list(struct worker_pool *pool)
+{
+	preempt_disable();
+}
+static inline void rt_unlock_idle_list(struct worker_pool *pool)
+{
+	preempt_enable();
+}
+static inline void sched_lock_idle_list(struct worker_pool *pool) { }
+static inline void sched_unlock_idle_list(struct worker_pool *pool) { }
+#else
+static inline void rt_lock_idle_list(struct worker_pool *pool) { }
+static inline void rt_unlock_idle_list(struct worker_pool *pool) { }
+static inline void sched_lock_idle_list(struct worker_pool *pool)
+{
+	spin_lock_irq(&pool->lock);
+}
+static inline void sched_unlock_idle_list(struct worker_pool *pool)
+{
+	spin_unlock_irq(&pool->lock);
+}
+#endif
+
+
 #ifdef CONFIG_DEBUG_OBJECTS_WORK
 
 static struct debug_obj_descr work_debug_descr;
@@ -785,10 +815,16 @@ static struct worker *first_worker(struct worker_pool *pool)
  */
 static void wake_up_worker(struct worker_pool *pool)
 {
-	struct worker *worker = first_worker(pool);
+	struct worker *worker;
+
+	rt_lock_idle_list(pool);
+
+	worker = first_worker(pool);
 
 	if (likely(worker))
 		wake_up_process(worker->task);
+
+	rt_unlock_idle_list(pool);
 }
 
 /**
@@ -816,7 +852,7 @@ void wq_worker_running(struct task_struct *task)
  */
 void wq_worker_sleeping(struct task_struct *task)
 {
-	struct worker *next, *worker = kthread_data(task);
+	struct worker *worker = kthread_data(task);
 	struct worker_pool *pool;
 
 	/*
@@ -833,25 +869,18 @@ void wq_worker_sleeping(struct task_struct *task)
 		return;
 
 	worker->sleeping = 1;
-	spin_lock_irq(&pool->lock);
+
 	/*
 	 * The counterpart of the following dec_and_test, implied mb,
 	 * worklist not empty test sequence is in insert_work().
 	 * Please read comment there.
-	 *
-	 * NOT_RUNNING is clear.  This means that we're bound to and
-	 * running on the local cpu w/ rq lock held and preemption
-	 * disabled, which in turn means that none else could be
-	 * manipulating idle_list, so dereferencing idle_list without pool
-	 * lock is safe.
 	 */
 	if (atomic_dec_and_test(&pool->nr_running) &&
 	    !list_empty(&pool->worklist)) {
-		next = first_worker(pool);
-		if (next)
-			wake_up_process(next->task);
+		sched_lock_idle_list(pool);
+		wake_up_worker(pool);
+		sched_unlock_idle_list(pool);
 	}
-	spin_unlock_irq(&pool->lock);
 }
 
 /**
@@ -1553,7 +1582,9 @@ static void worker_enter_idle(struct worker *worker)
 	worker->last_active = jiffies;
 
 	/* idle_list is LIFO */
+	rt_lock_idle_list(pool);
 	list_add(&worker->entry, &pool->idle_list);
+	rt_unlock_idle_list(pool);
 
 	if (too_many_workers(pool) && !timer_pending(&pool->idle_timer))
 		mod_timer(&pool->idle_timer, jiffies + IDLE_WORKER_TIMEOUT);
@@ -1586,7 +1617,9 @@ static void worker_leave_idle(struct worker *worker)
 		return;
 	worker_clr_flags(worker, WORKER_IDLE);
 	pool->nr_idle--;
+	rt_lock_idle_list(pool);
 	list_del_init(&worker->entry);
+	rt_unlock_idle_list(pool);
 }
 
 /**
@@ -1829,7 +1862,9 @@ static void destroy_worker(struct worker *worker)
 	 */
 	get_task_struct(worker->task);
 
+	rt_lock_idle_list(pool);
 	list_del_init(&worker->entry);
+	rt_unlock_idle_list(pool);
 	worker->flags |= WORKER_DIE;
 
 	idr_remove(&pool->worker_idr, worker->id);
-- 
2.0.0
* [PATCH RT 3/3] Linux 3.10.47-rt50-rc1
2014-07-14 20:04 [PATCH RT 0/3] Linux 3.10.47-rt50-rc1 Steven Rostedt
2014-07-14 20:04 ` [PATCH RT 1/3] sched: Do not clear PF_NO_SETAFFINITY flag in select_fallback_rq() Steven Rostedt
2014-07-14 20:04 ` [PATCH RT 2/3] workqueue: Prevent deadlock/stall on RT Steven Rostedt
@ 2014-07-14 20:04 ` Steven Rostedt
2014-07-15  0:09 ` [PATCH RT 0/3] " Corey Minyard
3 siblings, 0 replies; 9+ messages in thread
From: Steven Rostedt @ 2014-07-14 20:04 UTC (permalink / raw)
To: linux-kernel, linux-rt-users
Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
John Kacur, Paul Gortmaker

[-- Attachment #1: 0003-Linux-3.10.47-rt50-rc1.patch --]
[-- Type: text/plain, Size: 412 bytes --]

3.10.47-rt50-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

---
 localversion-rt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/localversion-rt b/localversion-rt
index 4b7dca68a5b4..e8a9a36bb066 100644
--- a/localversion-rt
+++ b/localversion-rt
@@ -1 +1 @@
--rt49
+-rt50-rc1
-- 
2.0.0
* Re: [PATCH RT 0/3] Linux 3.10.47-rt50-rc1
2014-07-14 20:04 [PATCH RT 0/3] Linux 3.10.47-rt50-rc1 Steven Rostedt
` (2 preceding siblings ...)
2014-07-14 20:04 ` [PATCH RT 3/3] Linux 3.10.47-rt50-rc1 Steven Rostedt
@ 2014-07-15  0:09 ` Corey Minyard
2014-07-15  0:53 ` Steven Rostedt
3 siblings, 1 reply; 9+ messages in thread
From: Corey Minyard @ 2014-07-15  0:09 UTC (permalink / raw)
To: Steven Rostedt, linux-kernel, linux-rt-users
Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
John Kacur, Paul Gortmaker

Can we get:

  tracing-use-migrate_disable-to-prevent-beeing-pushed.patch

into 3.10, also?

Thanks,

-corey
* Re: [PATCH RT 0/3] Linux 3.10.47-rt50-rc1
2014-07-15  0:09 ` [PATCH RT 0/3] " Corey Minyard
@ 2014-07-15  0:53 ` Steven Rostedt
2014-07-16 14:31 ` Corey Minyard
0 siblings, 1 reply; 9+ messages in thread
From: Steven Rostedt @ 2014-07-15  0:53 UTC (permalink / raw)
To: Corey Minyard
Cc: linux-kernel, linux-rt-users, Thomas Gleixner, Carsten Emde,
Sebastian Andrzej Siewior, John Kacur, Paul Gortmaker

On Mon, 14 Jul 2014 19:09:48 -0500
Corey Minyard <cminyard@mvista.com> wrote:

> Can we get:
>
>   tracing-use-migrate_disable-to-prevent-beeing-pushed.patch

Sure, except that patch is buggy:

-	preempt_disable();
+	migrate_disable();
 	/* The update must run on the CPU that is being updated. */
 	if (cpu_id == smp_processor_id() || !cpu_online(cpu_id))
 		rb_update_pages(cpu_buffer);
 	else {
-		/*
-		 * Can not disable preemption for schedule_work_on()
-		 * on PREEMPT_RT.
-		 */
-		preempt_enable();
 		schedule_work_on(cpu_id,
 				 &cpu_buffer->update_pages_work);
 		wait_for_completion(&cpu_buffer->update_done);
-		preempt_disable();
 	}
-	preempt_enable();
+	migrate_enable();

migrate_disable() on non-PREEMPT_RT is preempt_disable(). You can't
call wait_for_completion() with preemption disabled.

When that gets fixed in mainline -rt, I'll add it to the stable
branches too.

-- Steve

>
> into 3.10, also?
>
* Re: [PATCH RT 0/3] Linux 3.10.47-rt50-rc1
2014-07-15  0:53 ` Steven Rostedt
@ 2014-07-16 14:31 ` Corey Minyard
2014-07-16 15:22 ` Steven Rostedt
0 siblings, 1 reply; 9+ messages in thread
From: Corey Minyard @ 2014-07-16 14:31 UTC (permalink / raw)
To: Steven Rostedt
Cc: linux-kernel, linux-rt-users, Thomas Gleixner, Carsten Emde,
Sebastian Andrzej Siewior, John Kacur, Paul Gortmaker

On 07/14/2014 07:53 PM, Steven Rostedt wrote:
> On Mon, 14 Jul 2014 19:09:48 -0500
> Corey Minyard <cminyard@mvista.com> wrote:
>
>> Can we get:
>>
>>   tracing-use-migrate_disable-to-prevent-beeing-pushed.patch
>
> Sure, except that patch is buggy:
>
> migrate_disable() on non-PREEMPT_RT is preempt_disable(). You can't
> call wait_for_completion() with preemption disabled.
>
> When that gets fixed in mainline -rt, I'll add it to the stable
> branches too.

I originally did a patch that just always did the else clause (the
schedule_work_on() and wait_for_completion()) on all CPUs. That seemed
to work just fine and simplifies the code a bit and gets rid of all the
preempt/migrate calls. You could try that approach, or I could submit
something if you liked.

-corey
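A minimal sketch of the approach Corey describes, using only the
identifiers visible in the snippet Steven quoted above (cpu_id,
cpu_buffer, rb_update_pages(), update_pages_work, update_done); the
actual patch may well look different:

	/*
	 * Sketch: drop the preempt/migrate disabling entirely. An
	 * offline CPU cannot run work, so update its buffer directly;
	 * otherwise always punt to a work item on the target CPU and
	 * wait for it, even when we already run on that CPU.
	 */
	if (!cpu_online(cpu_id))
		rb_update_pages(cpu_buffer);
	else {
		schedule_work_on(cpu_id,
				 &cpu_buffer->update_pages_work);
		wait_for_completion(&cpu_buffer->update_done);
	}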
* Re: [PATCH RT 0/3] Linux 3.10.47-rt50-rc1
2014-07-16 14:31 ` Corey Minyard
@ 2014-07-16 15:22 ` Steven Rostedt
0 siblings, 0 replies; 9+ messages in thread
From: Steven Rostedt @ 2014-07-16 15:22 UTC (permalink / raw)
To: Corey Minyard
Cc: linux-kernel, linux-rt-users, Thomas Gleixner, Carsten Emde,
Sebastian Andrzej Siewior, John Kacur, Paul Gortmaker

On Wed, 16 Jul 2014 09:31:17 -0500
Corey Minyard <cminyard@mvista.com> wrote:

> I originally did a patch that just always did the else clause (the
> schedule_work_on() and wait_for_completion()) on all CPUs. That seemed
> to work just fine and simplifies the code a bit and gets rid of all the
> preempt/migrate calls. You could try that approach, or I could submit
> something if you liked.

Yeah, perhaps that's the way to go. But it needs to go to mainline
before it goes to -rt. Can you resend it against my for-next branch?

Thanks,

-- Steve
* [PATCH RT 0/3] Linux 3.12.24-rt38-rc1
@ 2014-07-14 20:03 Steven Rostedt
2014-07-14 20:03 ` [PATCH RT 1/3] sched: Do not clear PF_NO_SETAFFINITY flag in select_fallback_rq() Steven Rostedt
0 siblings, 1 reply; 9+ messages in thread
From: Steven Rostedt @ 2014-07-14 20:03 UTC (permalink / raw)
To: linux-kernel, linux-rt-users
Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
John Kacur, Paul Gortmaker
Dear RT Folks,
This is the RT stable review cycle of patch 3.12.24-rt38-rc1.
Please scream at me if I messed something up. Please test the patches too.
The -rc release will be uploaded to kernel.org and will be deleted when
the final release is out. This is just a review release (or release candidate).
The pre-releases will not be pushed to the git repository; only the
final release will be.
If all goes well, this patch will be converted to the next main release
on 7/17/2014.
Enjoy,
-- Steve
To build 3.12.24-rt38-rc1 directly, the following patches should be applied:
http://www.kernel.org/pub/linux/kernel/v3.x/linux-3.12.tar.xz
http://www.kernel.org/pub/linux/kernel/v3.x/patch-3.12.24.xz
http://www.kernel.org/pub/linux/kernel/projects/rt/3.12/patch-3.12.24-rt38-rc1.patch.xz
You can also build from 3.12.24-rt37 by applying the incremental patch:
http://www.kernel.org/pub/linux/kernel/projects/rt/3.12/incr/patch-3.12.24-rt37-rt38-rc1.patch.xz
Changes from 3.12.24-rt37:
---
Steven Rostedt (1):
sched: Do not clear PF_NO_SETAFFINITY flag in select_fallback_rq()
Steven Rostedt (Red Hat) (1):
Linux 3.12.24-rt38-rc1
Thomas Gleixner (1):
workqueue: Prevent deadlock/stall on RT
----
kernel/sched/core.c | 13 +++++-------
kernel/workqueue.c | 61 +++++++++++++++++++++++++++++++++++++++++------------
localversion-rt | 2 +-
3 files changed, 54 insertions(+), 22 deletions(-)
* [PATCH RT 1/3] sched: Do not clear PF_NO_SETAFFINITY flag in select_fallback_rq()
2014-07-14 20:03 [PATCH RT 0/3] Linux 3.12.24-rt38-rc1 Steven Rostedt
@ 2014-07-14 20:03 ` Steven Rostedt
0 siblings, 0 replies; 9+ messages in thread
From: Steven Rostedt @ 2014-07-14 20:03 UTC (permalink / raw)
To: linux-kernel, linux-rt-users
Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
John Kacur, Paul Gortmaker, stable-rt, Clark Williams, Peter Zijlstra

[-- Attachment #1: 0001-sched-Do-not-clear-PF_NO_SETAFFINITY-flag-in-select_.patch --]
[-- Type: text/plain, Size: 1690 bytes --]

3.12.24-rt38-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Steven Rostedt <rostedt@goodmis.org>

I talked with Peter Zijlstra about this, and he told me that the
clearing of the PF_NO_SETAFFINITY flag was to deal with the optimization
of migrate_disable/enable() that ignores tasks that have that flag set.
But that optimization was removed when I did a rework of the cpu hotplug
code.

I found that ignoring tasks that had that flag set would cause those
tasks to not sync with the hotplug code and cause the kernel to crash.
Thus they could no longer be treated specially, and those tasks had to
go through the same work as tasks without that flag set.

Now that those tasks are not treated specially, there's no reason to
clear the flag.

This may still need more testing, as the migrate_me() code does not
ignore those flags.

Cc: stable-rt@vger.kernel.org
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Clark Williams <williams@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20140701111444.0cfebaa1@gandalf.local.home
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 kernel/sched/core.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index f6f3b3d72578..400ae9869c0e 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1289,12 +1289,6 @@ out:
 		}
 	}
 
-	/*
-	 * Clear PF_NO_SETAFFINITY, otherwise we wreckage
-	 * migrate_disable/enable. See optimization for
-	 * PF_NO_SETAFFINITY tasks there.
-	 */
-	p->flags &= ~PF_NO_SETAFFINITY;
 	return dest_cpu;
 }
 
-- 
2.0.0