From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: Tejun Heo <tj@kernel.org>
Cc: linux-kernel@vger.kernel.org, axboe@kernel.dk, jmoyer@redhat.com,
zab@redhat.com
Subject: Re: [PATCH 14/31] workqueue: replace POOL_MANAGING_WORKERS flag with worker_pool->manager_mutex
Date: Sun, 10 Mar 2013 18:09:38 +0800 [thread overview]
Message-ID: <513C5BE2.8090409@cn.fujitsu.com> (raw)
In-Reply-To: <1362194662-2344-15-git-send-email-tj@kernel.org>
On 02/03/13 11:24, Tejun Heo wrote:
> POOL_MANAGING_WORKERS is used to synchronize the manager role.
> Synchronizing among workers doesn't need blocking and that's why it's
> implemented as a flag.
>
> It got converted to a mutex a while back to add blocking wait from CPU
> hotplug path - 6037315269 ("workqueue: use mutex for global_cwq
> manager exclusion"). Later it turned out that synchronization among
> workers and cpu hotplug need to be done separately. Eventually,
> POOL_MANAGING_WORKERS is restored and workqueue->manager_mutex got
> morphed into workqueue->assoc_mutex - 552a37e936 ("workqueue: restore
> POOL_MANAGING_WORKERS") and b2eb83d123 ("workqueue: rename
> manager_mutex to assoc_mutex").
>
> Now, we're gonna need to be able to lock out managers from
> destroy_workqueue() to support multiple unbound pools with custom
> attributes making it again necessary to be able to block on the
> manager role. This patch replaces POOL_MANAGING_WORKERS with
> worker_pool->manager_mutex.
>
> This patch doesn't introduce any behavior changes.
>
> Signed-off-by: Tejun Heo <tj@kernel.org>
> ---
> kernel/workqueue.c | 13 ++++++-------
> 1 file changed, 6 insertions(+), 7 deletions(-)
>
> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> index 2645218..68b3443 100644
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -64,7 +64,6 @@ enum {
> * create_worker() is in progress.
> */
> POOL_MANAGE_WORKERS = 1 << 0, /* need to manage workers */
> - POOL_MANAGING_WORKERS = 1 << 1, /* managing workers */
> POOL_DISASSOCIATED = 1 << 2, /* cpu can't serve workers */
> POOL_FREEZING = 1 << 3, /* freeze in progress */
>
> @@ -145,6 +144,7 @@ struct worker_pool {
> DECLARE_HASHTABLE(busy_hash, BUSY_WORKER_HASH_ORDER);
> /* L: hash of busy workers */
>
> + struct mutex manager_mutex; /* the holder is the manager */
> struct mutex assoc_mutex; /* protect POOL_DISASSOCIATED */
> struct ida worker_ida; /* L: for worker IDs */
>
> @@ -702,7 +702,7 @@ static bool need_to_manage_workers(struct worker_pool *pool)
> /* Do we have too many workers and should some go away? */
> static bool too_many_workers(struct worker_pool *pool)
> {
> - bool managing = pool->flags & POOL_MANAGING_WORKERS;
> + bool managing = mutex_is_locked(&pool->manager_mutex);
> int nr_idle = pool->nr_idle + managing; /* manager is considered idle */
> int nr_busy = pool->nr_workers - nr_idle;
>
> @@ -2027,15 +2027,13 @@ static bool manage_workers(struct worker *worker)
> struct worker_pool *pool = worker->pool;
> bool ret = false;
>
> - if (pool->flags & POOL_MANAGING_WORKERS)
> + if (!mutex_trylock(&pool->manager_mutex))
> return ret;
>
> - pool->flags |= POOL_MANAGING_WORKERS;
If mutex_trylock(&pool->manager_mutex) fails, it does not mean
the pool is managing workers (although in the current code it does),
so I recommend keeping POOL_MANAGING_WORKERS.
I suggest that you reuse assoc_mutex for your purpose in the later
patches (and rename assoc_mutex back to manager_mutex).
> -
> /*
> * To simplify both worker management and CPU hotplug, hold off
> * management while hotplug is in progress. CPU hotplug path can't
> - * grab %POOL_MANAGING_WORKERS to achieve this because that can
> + * grab @pool->manager_mutex to achieve this because that can
> * lead to idle worker depletion (all become busy thinking someone
> * else is managing) which in turn can result in deadlock under
> * extreme circumstances. Use @pool->assoc_mutex to synchronize
> @@ -2075,8 +2073,8 @@ static bool manage_workers(struct worker *worker)
> ret |= maybe_destroy_workers(pool);
> ret |= maybe_create_worker(pool);
>
> - pool->flags &= ~POOL_MANAGING_WORKERS;
> mutex_unlock(&pool->assoc_mutex);
> + mutex_unlock(&pool->manager_mutex);
> return ret;
> }
>
> @@ -3805,6 +3803,7 @@ static int __init init_workqueues(void)
> setup_timer(&pool->mayday_timer, pool_mayday_timeout,
> (unsigned long)pool);
>
> + mutex_init(&pool->manager_mutex);
> mutex_init(&pool->assoc_mutex);
> ida_init(&pool->worker_ida);
>