* [PATCH workqueue wq/for-3.19-fixes] workqueue: fix subtle pool management issue which can stall whole worker_pool

From: Tejun Heo <tj@kernel.org>
Date: 2015-01-16 19:32 UTC
To: Eric Sandeen, Lai Jiangshan
Cc: Eric Sandeen, xfs-oss, linux-kernel

>From 29187a9eeaf362d8422e62e17a22a6e115277a49 Mon Sep 17 00:00:00 2001
From: Tejun Heo <tj@kernel.org>
Date: Fri, 16 Jan 2015 14:21:16 -0500

A worker_pool's forward progress is guaranteed by the fact that the
last idle worker assumes the manager role to create more workers and
summon the rescuers if creating workers doesn't succeed in a timely
manner before proceeding to execute work items.

This manager role is implemented in manage_workers(), which indicates
whether the worker may proceed to work item execution with its return
value.  This is necessary because multiple workers may contend for the
manager role, and, if there already is a manager, others should
proceed to work item execution.

Unfortunately, the function also indicates that the worker may proceed
to work item execution if need_to_create_worker() is false at the head
of the function.  need_to_create_worker() tests the following
conditions.

	pending work items && !nr_running && !nr_idle

The first and third conditions are protected by pool->lock and thus
won't change while holding pool->lock; however, nr_running can change
asynchronously as other workers block and resume, and while it's
likely to be zero, as someone woke this worker up in the first place,
some other workers could have become runnable in between, making it
non-zero.

If this happens, manage_workers() could return false even with zero
nr_idle, making the worker, the last idle one, proceed to execute work
items.  If all workers of the pool then end up blocking on a resource
which can only be released by a work item which is pending on that
pool, the whole pool can deadlock, as there's no one left to create
more workers or summon the rescuers.

This patch fixes the problem by removing the early exit condition from
maybe_create_worker() and making manage_workers() return false iff
there's already another manager, which ensures that the last worker
doesn't start executing work items.

We could leave the early exit condition alone and just ignore the
return value, but the only reason it was put there is that
manage_workers() used to perform both creation and destruction of
workers, so the function could be invoked while the pool was trying to
reduce the number of workers.  Now that manage_workers() is called
only when more workers are needed, the only cases that trigger this
early exit condition are rare races, rendering it pointless.

Tested with a simulated workload and modified workqueue code which
trigger the pool deadlock reliably without this patch.

Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Eric Sandeen <sandeen@sandeen.net>
Link: http://lkml.kernel.org/g/54B019F4.8030009@sandeen.net
Cc: Dave Chinner <david@fromorbit.com>
Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
Cc: stable@vger.kernel.org
---
Hello,

It took quite some effort to reproduce the issue and verify the fix,
but this works.  Applying to wq/for-3.19-fixes.

Thanks.
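For reference, the check in question boils down to roughly the
following.  This is a condensed paraphrase of the 3.19-era helpers in
kernel/workqueue.c (the real code composes need_more_worker() and
may_start_working() rather than inlining the terms), so treat it as a
sketch, not the verbatim source:

	/*
	 * Condensed paraphrase of need_to_create_worker() and the
	 * helpers it is built from (need_more_worker(),
	 * may_start_working()) as of 3.19-era kernel/workqueue.c.
	 * Illustrative only.
	 */
	static bool need_to_create_worker(struct worker_pool *pool)
	{
		return !list_empty(&pool->worklist) &&	  /* pending work items */
		       !atomic_read(&pool->nr_running) && /* no worker currently running */
		       !pool->nr_idle;			  /* no idle worker left to wake */
	}

Only the nr_running term can flip without pool->lock held, which is
exactly the race the patch description walks through.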
 kernel/workqueue.c | 25 ++++++++-----------------
 1 file changed, 8 insertions(+), 17 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 6202b08..beeeac9 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1841,17 +1841,11 @@ static void pool_mayday_timeout(unsigned long __pool)
  * spin_lock_irq(pool->lock) which may be released and regrabbed
  * multiple times.  Does GFP_KERNEL allocations.  Called only from
  * manager.
- *
- * Return:
- * %false if no action was taken and pool->lock stayed locked, %true
- * otherwise.
  */
-static bool maybe_create_worker(struct worker_pool *pool)
+static void maybe_create_worker(struct worker_pool *pool)
 __releases(&pool->lock)
 __acquires(&pool->lock)
 {
-	if (!need_to_create_worker(pool))
-		return false;
 restart:
 	spin_unlock_irq(&pool->lock);

@@ -1877,7 +1871,6 @@ restart:
 	 */
 	if (need_to_create_worker(pool))
 		goto restart;
-	return true;
 }

 /**
@@ -1897,16 +1890,14 @@ restart:
  * multiple times.  Does GFP_KERNEL allocations.
  *
  * Return:
- * %false if the pool don't need management and the caller can safely start
- * processing works, %true indicates that the function released pool->lock
- * and reacquired it to perform some management function and that the
- * conditions that the caller verified while holding the lock before
- * calling the function might no longer be true.
+ * %false if the pool doesn't need management and the caller can safely
+ * start processing works, %true if management function was performed and
+ * the conditions that the caller verified before calling the function may
+ * no longer be true.
  */
 static bool manage_workers(struct worker *worker)
 {
 	struct worker_pool *pool = worker->pool;
-	bool ret = false;

 	/*
 	 * Anyone who successfully grabs manager_arb wins the arbitration
@@ -1919,12 +1910,12 @@ static bool manage_workers(struct worker *worker)
 	 * actual management, the pool may stall indefinitely.
 	 */
 	if (!mutex_trylock(&pool->manager_arb))
-		return ret;
+		return false;

-	ret |= maybe_create_worker(pool);
+	maybe_create_worker(pool);

 	mutex_unlock(&pool->manager_arb);
-	return ret;
+	return true;
 }

 /**
--
2.1.0
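For context on why the return value matters: the manager gate in
worker_thread() loops back and re-verifies everything whenever
manage_workers() reports that it did something.  A simplified
paraphrase of the 3.19-era caller (not the verbatim code):

	/*
	 * Simplified shape of the manager gate in worker_thread().
	 * A true return from manage_workers() means pool->lock was
	 * dropped and retaken, so all prior checks are stale and the
	 * worker must re-verify before executing work items.
	 */
	recheck:
		/* no more work to do?  go back to sleep */
		if (!need_more_worker(pool))
			goto sleep;

		/* may need a new worker?  try to become the manager */
		if (unlikely(!may_start_working(pool)) && manage_workers(worker))
			goto recheck;

With the patch, the last idle worker that wins the manager role always
gets a true return and loops back to re-verify the pool state, instead
of proceeding on a transiently non-zero nr_running reading.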
* Re: [PATCH workqueue wq/for-3.19-fixes] workqueue: fix subtle pool management issue which can stall whole worker_pool

From: Lai Jiangshan <laijs@cn.fujitsu.com>
Date: 2015-01-19 2:15 UTC
To: Tejun Heo, Eric Sandeen
Cc: Eric Sandeen, xfs-oss, linux-kernel

On 01/17/2015 03:32 AM, Tejun Heo wrote:
>>From 29187a9eeaf362d8422e62e17a22a6e115277a49 Mon Sep 17 00:00:00 2001
> From: Tejun Heo <tj@kernel.org>
> Date: Fri, 16 Jan 2015 14:21:16 -0500
>
> A worker_pool's forward progress is guaranteed by the fact that the
> last idle worker assumes the manager role to create more workers and
> summon the rescuers if creating workers doesn't succeed in a timely
> manner before proceeding to execute work items.
>
> This manager role is implemented in manage_workers(), which indicates
> whether the worker may proceed to work item execution with its return
> value.  This is necessary because multiple workers may contend for the
> manager role, and, if there already is a manager, others should
> proceed to work item execution.
>
> Unfortunately, the function also indicates that the worker may proceed
> to work item execution if need_to_create_worker() is false at the head
> of the function.  need_to_create_worker() tests the following
> conditions.
>
> 	pending work items && !nr_running && !nr_idle
>
> The first and third conditions are protected by pool->lock and thus
> won't change while holding pool->lock; however, nr_running can change
> asynchronously as other workers block and resume, and while it's
> likely to be zero, as someone woke this worker up in the first place,
> some other workers could have become runnable in between, making it
> non-zero.

I had sent a similar patch earlier:
https://lkml.org/lkml/2014/7/10/446

It is a shame that I did not think deeply enough at the time.

> If this happens, manage_workers() could return false even with zero
> nr_idle, making the worker, the last idle one, proceed to execute work
> items.  If all workers of the pool then end up blocking on a resource
> which can only be released by a work item which is pending on that
> pool, the whole pool can deadlock, as there's no one left to create
> more workers or summon the rescuers.

How is nr_running decreased to zero in this case?
(The decrement of nr_running is also protected by "X".)
(I just checked the cpu-hotplug code again and found no suspect.)

> -static bool maybe_create_worker(struct worker_pool *pool)
> +static void maybe_create_worker(struct worker_pool *pool)
>  __releases(&pool->lock)
>  __acquires(&pool->lock)
>  {
> -	if (!need_to_create_worker(pool))
> -		return false;

This is the only place that returns false; if there were a bug, it
would be here.  (But pool->lock is still held, with no release from
the start of the function up to this point.)

My doubt might be unfounded, but this is a good cleanup in any case.

Acked-by: Lai Jiangshan <laijs@cn.fujitsu.com>

Thanks,
Lai

>  restart:
>  	spin_unlock_irq(&pool->lock);
>
> @@ -1877,7 +1871,6 @@ restart:
>  	 */
>  	if (need_to_create_worker(pool))
>  		goto restart;
> -	return true;
>  }
>
>  /**
> @@ -1897,16 +1890,14 @@ restart:
>   * multiple times.  Does GFP_KERNEL allocations.
>   *
>   * Return:
> - * %false if the pool don't need management and the caller can safely start
> - * processing works, %true indicates that the function released pool->lock
> - * and reacquired it to perform some management function and that the
> - * conditions that the caller verified while holding the lock before
> - * calling the function might no longer be true.
> + * %false if the pool doesn't need management and the caller can safely
> + * start processing works, %true if management function was performed and
> + * the conditions that the caller verified before calling the function may
> + * no longer be true.
>   */
>  static bool manage_workers(struct worker *worker)
>  {
>  	struct worker_pool *pool = worker->pool;
> -	bool ret = false;
>
>  	/*
>  	 * Anyone who successfully grabs manager_arb wins the arbitration
> @@ -1919,12 +1910,12 @@ static bool manage_workers(struct worker *worker)
>  	 * actual management, the pool may stall indefinitely.
>  	 */
>  	if (!mutex_trylock(&pool->manager_arb))
> -		return ret;
> +		return false;
>
> -	ret |= maybe_create_worker(pool);
> +	maybe_create_worker(pool);
>
>  	mutex_unlock(&pool->manager_arb);
> -	return ret;
> +	return true;
>  }
>
>  /**
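On Lai's question of how nr_running changes outside pool->lock: it is
adjusted from scheduler hooks as workers block and wake, which is why
the manager's reading of it can go stale.  A rough paraphrase of the
3.19-era hooks, simplified from memory (the WORKER_NOT_RUNNING early
exits and other bookkeeping are omitted; kernel/workqueue.c has the
authoritative versions):

	/* called from the scheduler when a worker is about to block */
	struct task_struct *wq_worker_sleeping(struct task_struct *task, int cpu)
	{
		struct worker *worker = kthread_data(task), *to_wakeup = NULL;
		struct worker_pool *pool = worker->pool;

		/*
		 * Last running worker blocking with work still pending:
		 * wake an idle worker, if any, to keep the pool moving.
		 */
		if (atomic_dec_and_test(&pool->nr_running) &&
		    !list_empty(&pool->worklist))
			to_wakeup = first_idle_worker(pool);
		return to_wakeup ? to_wakeup->task : NULL;
	}

	/* called from the scheduler when a worker wakes up */
	void wq_worker_waking_up(struct task_struct *task, int cpu)
	{
		struct worker *worker = kthread_data(task);

		if (!(worker->flags & WORKER_NOT_RUNNING))
			atomic_inc(&worker->pool->nr_running);
	}

Neither hook takes pool->lock, so a worker waking up elsewhere can bump
nr_running between the manager's need_to_create_worker() check and its
decision, which is the window the patch closes.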