* [PATCH RT 0/3] Linux 3.12.24-rt38-rc1
@ 2014-07-14 20:03 Steven Rostedt
2014-07-14 20:03 ` [PATCH RT 1/3] sched: Do not clear PF_NO_SETAFFINITY flag in select_fallback_rq() Steven Rostedt
` (2 more replies)
0 siblings, 3 replies; 5+ messages in thread
From: Steven Rostedt @ 2014-07-14 20:03 UTC (permalink / raw)
To: linux-kernel, linux-rt-users
Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
John Kacur, Paul Gortmaker
Dear RT Folks,
This is the RT stable review cycle of patch 3.12.24-rt38-rc1.
Please scream at me if I messed something up. Please test the patches too.
The -rc release will be uploaded to kernel.org and will be deleted when
the final release is out. This is just a review release (or release candidate).
The pre-releases will not be pushed to the git repository; only the
final release will be.
If all goes well, this patch will be converted to the next main release
on 7/17/2014.
Enjoy,
-- Steve
To build 3.12.24-rt38-rc1 directly, the following patches should be applied:
http://www.kernel.org/pub/linux/kernel/v3.x/linux-3.12.tar.xz
http://www.kernel.org/pub/linux/kernel/v3.x/patch-3.12.24.xz
http://www.kernel.org/pub/linux/kernel/projects/rt/3.12/patch-3.12.24-rt38-rc1.patch.xz
You can also build from 3.12.24-rt37 by applying the incremental patch:
http://www.kernel.org/pub/linux/kernel/projects/rt/3.12/incr/patch-3.12.24-rt37-rt38-rc1.patch.xz
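For reference, the build usually looks something like the sketch below. This is
only an illustration, not part of the release: it assumes wget and the xz tools
are installed, a tar that understands .xz archives, and that the file names
match the URLs above.

  wget http://www.kernel.org/pub/linux/kernel/v3.x/linux-3.12.tar.xz
  wget http://www.kernel.org/pub/linux/kernel/v3.x/patch-3.12.24.xz
  wget http://www.kernel.org/pub/linux/kernel/projects/rt/3.12/patch-3.12.24-rt38-rc1.patch.xz
  tar xf linux-3.12.tar.xz                               # unpack the base 3.12 tree
  cd linux-3.12
  xzcat ../patch-3.12.24.xz | patch -p1                  # bring the tree to 3.12.24
  xzcat ../patch-3.12.24-rt38-rc1.patch.xz | patch -p1   # apply the -rt38-rc1 patch

The incremental patch above can be applied the same way, from the top of an
existing 3.12.24-rt37 tree.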
Changes from 3.12.24-rt37:
---
Steven Rostedt (1):
sched: Do not clear PF_NO_SETAFFINITY flag in select_fallback_rq()
Steven Rostedt (Red Hat) (1):
Linux 3.12.24-rt38-rc1
Thomas Gleixner (1):
workqueue: Prevent deadlock/stall on RT
----
kernel/sched/core.c | 13 +++++-------
kernel/workqueue.c | 61 +++++++++++++++++++++++++++++++++++++++++------------
localversion-rt | 2 +-
3 files changed, 54 insertions(+), 22 deletions(-)
* [PATCH RT 1/3] sched: Do not clear PF_NO_SETAFFINITY flag in select_fallback_rq()
2014-07-14 20:03 [PATCH RT 0/3] Linux 3.12.24-rt38-rc1 Steven Rostedt
@ 2014-07-14 20:03 ` Steven Rostedt
2014-07-14 20:03 ` [PATCH RT 2/3] workqueue: Prevent deadlock/stall on RT Steven Rostedt
2014-07-14 20:03 ` [PATCH RT 3/3] Linux 3.12.24-rt38-rc1 Steven Rostedt
2 siblings, 0 replies; 5+ messages in thread
From: Steven Rostedt @ 2014-07-14 20:03 UTC (permalink / raw)
To: linux-kernel, linux-rt-users
Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
John Kacur, Paul Gortmaker, stable-rt, Clark Williams, Peter Zijlstra

[-- Attachment #1: 0001-sched-Do-not-clear-PF_NO_SETAFFINITY-flag-in-select_.patch --]
[-- Type: text/plain, Size: 1690 bytes --]

3.12.24-rt38-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Steven Rostedt <rostedt@goodmis.org>

I talked with Peter Zijlstra about this, and he told me that the clearing
of the PF_NO_SETAFFINITY flag was there to deal with the optimization in
migrate_disable/enable() that ignores tasks which have that flag set. But
that optimization was removed when I reworked the cpu hotplug code.

I found that ignoring tasks with that flag set caused them not to sync
with the hotplug code and crashed the kernel. They therefore can no longer
be treated specially and have to go through the same work as tasks without
the flag.

Now that those tasks are no longer treated specially, there is no reason
to clear the flag. This may still need testing, as the migrate_me() code
does not ignore those flags.

Cc: stable-rt@vger.kernel.org
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Clark Williams <williams@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20140701111444.0cfebaa1@gandalf.local.home
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 kernel/sched/core.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index f6f3b3d72578..400ae9869c0e 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1289,12 +1289,6 @@ out:
                }
        }

-       /*
-        * Clear PF_NO_SETAFFINITY, otherwise we wreckage
-        * migrate_disable/enable. See optimization for
-        * PF_NO_SETAFFINITY tasks there.
-        */
-       p->flags &= ~PF_NO_SETAFFINITY;
        return dest_cpu;
 }

-- 
2.0.0
* [PATCH RT 2/3] workqueue: Prevent deadlock/stall on RT
2014-07-14 20:03 [PATCH RT 0/3] Linux 3.12.24-rt38-rc1 Steven Rostedt
2014-07-14 20:03 ` [PATCH RT 1/3] sched: Do not clear PF_NO_SETAFFINITY flag in select_fallback_rq() Steven Rostedt
@ 2014-07-14 20:03 ` Steven Rostedt
2014-07-14 20:03 ` [PATCH RT 3/3] Linux 3.12.24-rt38-rc1 Steven Rostedt
2 siblings, 0 replies; 5+ messages in thread
From: Steven Rostedt @ 2014-07-14 20:03 UTC (permalink / raw)
To: linux-kernel, linux-rt-users
Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
John Kacur, Paul Gortmaker, Richard Weinberger, stable-rt

[-- Attachment #1: 0002-workqueue-Prevent-deadlock-stall-on-RT.patch --]
[-- Type: text/plain, Size: 7000 bytes --]

3.12.24-rt38-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Thomas Gleixner <tglx@linutronix.de>

Austin reported an XFS deadlock/stall on RT where scheduled work never
gets executed and tasks wait on each other forever.

The underlying problem is the RT modification to the handling of workers
which are about to go to sleep. In mainline, a worker thread which goes
to sleep wakes an idle worker if there is more work to do. This happens
from the guts of the schedule() function. On RT this must happen outside
of the scheduler, and the accessed data structures are not protected
against scheduling due to the spinlock-to-rtmutex conversion. So the
naive solution was to move the code outside of the scheduler and protect
the data structures with the pool lock.

That approach turned out to be a little too naive, as we cannot call into
that code when the thread blocks on a lock: a task is not allowed to
block on two locks in parallel. So we don't call into the worker wakeup
magic when the worker is blocked on a lock, which causes the
deadlock/stall observed by Austin and Mike.

Looking deeper into the worker code, it turns out that the only relevant
data structure which needs to be protected is the list of idle workers
which can be woken up.

So the solution is to protect the list manipulation operations with
preempt_enable/disable pairs on RT and to call unconditionally into the
worker code even when the worker is blocked on a lock. The preemption
protection is safe, as there is nothing which can fiddle with the list
outside of thread context.

Reported-and-tested-by: Austin Schuh <austin@peloton-tech.com>
Reported-and-tested-by: Mike Galbraith <umgwanakikbuti@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: http://vger.kernel.org/r/alpine.DEB.2.10.1406271249510.5170@nanos
Cc: Richard Weinberger <richard.weinberger@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: stable-rt@vger.kernel.org
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/sched/core.c |  7 ++++--
 kernel/workqueue.c  | 61 +++++++++++++++++++++++++++++++++++++++++------------
 2 files changed, 53 insertions(+), 15 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 400ae9869c0e..5e741c96af15 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2645,9 +2645,8 @@ need_resched:

 static inline void sched_submit_work(struct task_struct *tsk)
 {
-       if (!tsk->state || tsk_is_pi_blocked(tsk))
+       if (!tsk->state)
                return;
-
        /*
         * If a worker went to sleep, notify and ask workqueue whether
         * it wants to wake up a task to maintain concurrency.
@@ -2655,6 +2654,10 @@ static inline void sched_submit_work(struct task_struct *tsk)
         */
        if (tsk->flags & PF_WQ_WORKER)
                wq_worker_sleeping(tsk);
+
+       if (tsk_is_pi_blocked(tsk))
+               return;
+
        /*
         * If we are going to sleep and we have plugged IO queued,
         * make sure to submit it to avoid deadlocks.
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index be0ef50a2395..505b55b3c7ae 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -126,6 +126,11 @@ enum {
  *      cpu or grabbing pool->lock is enough for read access. If
  *      POOL_DISASSOCIATED is set, it's identical to L.
  *
+ *      On RT we need the extra protection via rt_lock_idle_list() for
+ *      the list manipulations against read access from
+ *      wq_worker_sleeping(). All other places are nicely serialized via
+ *      pool->lock.
+ *
  * MG: pool->manager_mutex and pool->lock protected. Writes require both
  *     locks. Reads can happen under either lock.
  *
@@ -409,6 +414,31 @@ static void copy_workqueue_attrs(struct workqueue_attrs *to,
                   if (({ assert_rcu_or_wq_mutex(wq); false; })) { }   \
                   else

+#ifdef CONFIG_PREEMPT_RT_BASE
+static inline void rt_lock_idle_list(struct worker_pool *pool)
+{
+       preempt_disable();
+}
+static inline void rt_unlock_idle_list(struct worker_pool *pool)
+{
+       preempt_enable();
+}
+static inline void sched_lock_idle_list(struct worker_pool *pool) { }
+static inline void sched_unlock_idle_list(struct worker_pool *pool) { }
+#else
+static inline void rt_lock_idle_list(struct worker_pool *pool) { }
+static inline void rt_unlock_idle_list(struct worker_pool *pool) { }
+static inline void sched_lock_idle_list(struct worker_pool *pool)
+{
+       spin_lock_irq(&pool->lock);
+}
+static inline void sched_unlock_idle_list(struct worker_pool *pool)
+{
+       spin_unlock_irq(&pool->lock);
+}
+#endif
+
+
 #ifdef CONFIG_DEBUG_OBJECTS_WORK

 static struct debug_obj_descr work_debug_descr;
@@ -801,10 +831,16 @@ static struct worker *first_worker(struct worker_pool *pool)
  */
 static void wake_up_worker(struct worker_pool *pool)
 {
-       struct worker *worker = first_worker(pool);
+       struct worker *worker;
+
+       rt_lock_idle_list(pool);
+
+       worker = first_worker(pool);

        if (likely(worker))
                wake_up_process(worker->task);
+
+       rt_unlock_idle_list(pool);
 }

 /**
@@ -832,7 +868,7 @@ void wq_worker_running(struct task_struct *task)
  */
 void wq_worker_sleeping(struct task_struct *task)
 {
-       struct worker *next, *worker = kthread_data(task);
+       struct worker *worker = kthread_data(task);
        struct worker_pool *pool;

        /*
@@ -849,25 +885,18 @@ void wq_worker_sleeping(struct task_struct *task)
                return;

        worker->sleeping = 1;
-       spin_lock_irq(&pool->lock);
+
        /*
         * The counterpart of the following dec_and_test, implied mb,
         * worklist not empty test sequence is in insert_work().
         * Please read comment there.
-        *
-        * NOT_RUNNING is clear. This means that we're bound to and
-        * running on the local cpu w/ rq lock held and preemption
-        * disabled, which in turn means that none else could be
-        * manipulating idle_list, so dereferencing idle_list without pool
-        * lock is safe.
         */
        if (atomic_dec_and_test(&pool->nr_running) &&
            !list_empty(&pool->worklist)) {
-               next = first_worker(pool);
-               if (next)
-                       wake_up_process(next->task);
+               sched_lock_idle_list(pool);
+               wake_up_worker(pool);
+               sched_unlock_idle_list(pool);
        }
-       spin_unlock_irq(&pool->lock);
 }

 /**
@@ -1571,7 +1600,9 @@ static void worker_enter_idle(struct worker *worker)
        worker->last_active = jiffies;

        /* idle_list is LIFO */
+       rt_lock_idle_list(pool);
        list_add(&worker->entry, &pool->idle_list);
+       rt_unlock_idle_list(pool);

        if (too_many_workers(pool) && !timer_pending(&pool->idle_timer))
                mod_timer(&pool->idle_timer, jiffies + IDLE_WORKER_TIMEOUT);
@@ -1604,7 +1635,9 @@ static void worker_leave_idle(struct worker *worker)
                return;
        worker_clr_flags(worker, WORKER_IDLE);
        pool->nr_idle--;
+       rt_lock_idle_list(pool);
        list_del_init(&worker->entry);
+       rt_unlock_idle_list(pool);
 }

 /**
@@ -1849,7 +1882,9 @@ static void destroy_worker(struct worker *worker)
         */
        get_task_struct(worker->task);

+       rt_lock_idle_list(pool);
        list_del_init(&worker->entry);
+       rt_unlock_idle_list(pool);
        worker->flags |= WORKER_DIE;

        idr_remove(&pool->worker_idr, worker->id);
-- 
2.0.0
* [PATCH RT 3/3] Linux 3.12.24-rt38-rc1
2014-07-14 20:03 [PATCH RT 0/3] Linux 3.12.24-rt38-rc1 Steven Rostedt
2014-07-14 20:03 ` [PATCH RT 1/3] sched: Do not clear PF_NO_SETAFFINITY flag in select_fallback_rq() Steven Rostedt
2014-07-14 20:03 ` [PATCH RT 2/3] workqueue: Prevent deadlock/stall on RT Steven Rostedt
@ 2014-07-14 20:03 ` Steven Rostedt
2 siblings, 0 replies; 5+ messages in thread
From: Steven Rostedt @ 2014-07-14 20:03 UTC (permalink / raw)
To: linux-kernel, linux-rt-users
Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
John Kacur, Paul Gortmaker

[-- Attachment #1: 0003-Linux-3.12.24-rt38-rc1.patch --]
[-- Type: text/plain, Size: 412 bytes --]

3.12.24-rt38-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>

---
 localversion-rt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/localversion-rt b/localversion-rt
index a3b2408c1da6..625367387621 100644
--- a/localversion-rt
+++ b/localversion-rt
@@ -1 +1 @@
--rt37
+-rt38-rc1
-- 
2.0.0
* [PATCH RT 0/3] Linux 3.10.47-rt50-rc1
@ 2014-07-14 20:04 Steven Rostedt
2014-07-14 20:04 ` [PATCH RT 2/3] workqueue: Prevent deadlock/stall on RT Steven Rostedt
0 siblings, 1 reply; 5+ messages in thread
From: Steven Rostedt @ 2014-07-14 20:04 UTC (permalink / raw)
To: linux-kernel, linux-rt-users
Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
John Kacur, Paul Gortmaker
Dear RT Folks,
This is the RT stable review cycle of patch 3.10.47-rt50-rc1.
Please scream at me if I messed something up. Please test the patches too.
The -rc release will be uploaded to kernel.org and will be deleted when
the final release is out. This is just a review release (or release candidate).
The pre-releases will not be pushed to the git repository; only the
final release will be.
If all goes well, this patch will be converted to the next main release
on 7/17/2014.
Enjoy,
-- Steve
To build 3.10.47-rt50-rc1 directly, the following patches should be applied:
http://www.kernel.org/pub/linux/kernel/v3.x/linux-3.10.tar.xz
http://www.kernel.org/pub/linux/kernel/v3.x/patch-3.10.47.xz
http://www.kernel.org/pub/linux/kernel/projects/rt/3.10/patch-3.10.47-rt50-rc1.patch.xz
You can also build from 3.10.47-rt49 by applying the incremental patch:
http://www.kernel.org/pub/linux/kernel/projects/rt/3.10/incr/patch-3.10.47-rt49-rt50-rc1.patch.xz
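As with the 3.12 series above, the incremental route is a one-liner from the
top of an existing tree (a sketch only, assuming the xz tools and a
3.10.47-rt49 source tree):

  xzcat patch-3.10.47-rt49-rt50-rc1.patch.xz | patch -p1   # run from the top of a 3.10.47-rt49 tree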
Changes from 3.10.47-rt49:
---
Steven Rostedt (1):
sched: Do not clear PF_NO_SETAFFINITY flag in select_fallback_rq()
Steven Rostedt (Red Hat) (1):
Linux 3.10.47-rt50-rc1
Thomas Gleixner (1):
workqueue: Prevent deadlock/stall on RT
----
kernel/sched/core.c | 13 +++++-------
kernel/workqueue.c | 61 +++++++++++++++++++++++++++++++++++++++++------------
localversion-rt | 2 +-
3 files changed, 54 insertions(+), 22 deletions(-)
* [PATCH RT 2/3] workqueue: Prevent deadlock/stall on RT
2014-07-14 20:04 [PATCH RT 0/3] Linux 3.10.47-rt50-rc1 Steven Rostedt
@ 2014-07-14 20:04 ` Steven Rostedt
0 siblings, 0 replies; 5+ messages in thread
From: Steven Rostedt @ 2014-07-14 20:04 UTC (permalink / raw)
To: linux-kernel, linux-rt-users
Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior,
John Kacur, Paul Gortmaker, Richard Weinberger, stable-rt

[-- Attachment #1: 0002-workqueue-Prevent-deadlock-stall-on-RT.patch --]
[-- Type: text/plain, Size: 7002 bytes --]

3.10.47-rt50-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Thomas Gleixner <tglx@linutronix.de>

Austin reported an XFS deadlock/stall on RT where scheduled work never
gets executed and tasks wait on each other forever.

The underlying problem is the RT modification to the handling of workers
which are about to go to sleep. In mainline, a worker thread which goes
to sleep wakes an idle worker if there is more work to do. This happens
from the guts of the schedule() function. On RT this must happen outside
of the scheduler, and the accessed data structures are not protected
against scheduling due to the spinlock-to-rtmutex conversion. So the
naive solution was to move the code outside of the scheduler and protect
the data structures with the pool lock.

That approach turned out to be a little too naive, as we cannot call into
that code when the thread blocks on a lock: a task is not allowed to
block on two locks in parallel. So we don't call into the worker wakeup
magic when the worker is blocked on a lock, which causes the
deadlock/stall observed by Austin and Mike.

Looking deeper into the worker code, it turns out that the only relevant
data structure which needs to be protected is the list of idle workers
which can be woken up.

So the solution is to protect the list manipulation operations with
preempt_enable/disable pairs on RT and to call unconditionally into the
worker code even when the worker is blocked on a lock. The preemption
protection is safe, as there is nothing which can fiddle with the list
outside of thread context.

Reported-and-tested-by: Austin Schuh <austin@peloton-tech.com>
Reported-and-tested-by: Mike Galbraith <umgwanakikbuti@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: http://vger.kernel.org/r/alpine.DEB.2.10.1406271249510.5170@nanos
Cc: Richard Weinberger <richard.weinberger@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: stable-rt@vger.kernel.org
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/sched/core.c |  7 ++++--
 kernel/workqueue.c  | 61 +++++++++++++++++++++++++++++++++++++++++------------
 2 files changed, 53 insertions(+), 15 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index b8acecc0600f..f7aa4ca0cedb 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3217,9 +3217,8 @@ need_resched:

 static inline void sched_submit_work(struct task_struct *tsk)
 {
-       if (!tsk->state || tsk_is_pi_blocked(tsk))
+       if (!tsk->state)
                return;
-
        /*
         * If a worker went to sleep, notify and ask workqueue whether
         * it wants to wake up a task to maintain concurrency.
@@ -3227,6 +3226,10 @@ static inline void sched_submit_work(struct task_struct *tsk)
         */
        if (tsk->flags & PF_WQ_WORKER)
                wq_worker_sleeping(tsk);
+
+       if (tsk_is_pi_blocked(tsk))
+               return;
+
        /*
         * If we are going to sleep and we have plugged IO queued,
         * make sure to submit it to avoid deadlocks.
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 318c86593597..8f080af2d863 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -125,6 +125,11 @@ enum {
  *      cpu or grabbing pool->lock is enough for read access. If
  *      POOL_DISASSOCIATED is set, it's identical to L.
  *
+ *      On RT we need the extra protection via rt_lock_idle_list() for
+ *      the list manipulations against read access from
+ *      wq_worker_sleeping(). All other places are nicely serialized via
+ *      pool->lock.
+ *
  * MG: pool->manager_mutex and pool->lock protected. Writes require both
  *     locks. Reads can happen under either lock.
  *
@@ -395,6 +400,31 @@ static void copy_workqueue_attrs(struct workqueue_attrs *to,
                   if (({ assert_rcu_or_wq_mutex(wq); false; })) { }   \
                   else

+#ifdef CONFIG_PREEMPT_RT_BASE
+static inline void rt_lock_idle_list(struct worker_pool *pool)
+{
+       preempt_disable();
+}
+static inline void rt_unlock_idle_list(struct worker_pool *pool)
+{
+       preempt_enable();
+}
+static inline void sched_lock_idle_list(struct worker_pool *pool) { }
+static inline void sched_unlock_idle_list(struct worker_pool *pool) { }
+#else
+static inline void rt_lock_idle_list(struct worker_pool *pool) { }
+static inline void rt_unlock_idle_list(struct worker_pool *pool) { }
+static inline void sched_lock_idle_list(struct worker_pool *pool)
+{
+       spin_lock_irq(&pool->lock);
+}
+static inline void sched_unlock_idle_list(struct worker_pool *pool)
+{
+       spin_unlock_irq(&pool->lock);
+}
+#endif
+
+
 #ifdef CONFIG_DEBUG_OBJECTS_WORK

 static struct debug_obj_descr work_debug_descr;
@@ -785,10 +815,16 @@ static struct worker *first_worker(struct worker_pool *pool)
  */
 static void wake_up_worker(struct worker_pool *pool)
 {
-       struct worker *worker = first_worker(pool);
+       struct worker *worker;
+
+       rt_lock_idle_list(pool);
+
+       worker = first_worker(pool);

        if (likely(worker))
                wake_up_process(worker->task);
+
+       rt_unlock_idle_list(pool);
 }

 /**
@@ -816,7 +852,7 @@ void wq_worker_running(struct task_struct *task)
  */
 void wq_worker_sleeping(struct task_struct *task)
 {
-       struct worker *next, *worker = kthread_data(task);
+       struct worker *worker = kthread_data(task);
        struct worker_pool *pool;

        /*
@@ -833,25 +869,18 @@ void wq_worker_sleeping(struct task_struct *task)
                return;

        worker->sleeping = 1;
-       spin_lock_irq(&pool->lock);
+
        /*
         * The counterpart of the following dec_and_test, implied mb,
         * worklist not empty test sequence is in insert_work().
         * Please read comment there.
-        *
-        * NOT_RUNNING is clear. This means that we're bound to and
-        * running on the local cpu w/ rq lock held and preemption
-        * disabled, which in turn means that none else could be
-        * manipulating idle_list, so dereferencing idle_list without pool
-        * lock is safe.
         */
        if (atomic_dec_and_test(&pool->nr_running) &&
            !list_empty(&pool->worklist)) {
-               next = first_worker(pool);
-               if (next)
-                       wake_up_process(next->task);
+               sched_lock_idle_list(pool);
+               wake_up_worker(pool);
+               sched_unlock_idle_list(pool);
        }
-       spin_unlock_irq(&pool->lock);
 }

 /**
@@ -1553,7 +1582,9 @@ static void worker_enter_idle(struct worker *worker)
        worker->last_active = jiffies;

        /* idle_list is LIFO */
+       rt_lock_idle_list(pool);
        list_add(&worker->entry, &pool->idle_list);
+       rt_unlock_idle_list(pool);

        if (too_many_workers(pool) && !timer_pending(&pool->idle_timer))
                mod_timer(&pool->idle_timer, jiffies + IDLE_WORKER_TIMEOUT);
@@ -1586,7 +1617,9 @@ static void worker_leave_idle(struct worker *worker)
                return;
        worker_clr_flags(worker, WORKER_IDLE);
        pool->nr_idle--;
+       rt_lock_idle_list(pool);
        list_del_init(&worker->entry);
+       rt_unlock_idle_list(pool);
 }

 /**
@@ -1829,7 +1862,9 @@ static void destroy_worker(struct worker *worker)
         */
        get_task_struct(worker->task);

+       rt_lock_idle_list(pool);
        list_del_init(&worker->entry);
+       rt_unlock_idle_list(pool);
        worker->flags |= WORKER_DIE;

        idr_remove(&pool->worker_idr, worker->id);
-- 
2.0.0