From: Tejun Heo <tj@kernel.org>
To: linux-kernel@vger.kernel.org
Cc: laijs@cn.fujitsu.com, Tejun Heo <tj@kernel.org>
Subject: [PATCH 3/7] workqueue: better define locking rules around worker creation / destruction
Date: Wed, 13 Mar 2013 19:57:21 -0700	[thread overview]
Message-ID: <1363229845-6831-4-git-send-email-tj@kernel.org> (raw)
In-Reply-To: <1363229845-6831-1-git-send-email-tj@kernel.org>

When a manager creates or destroys workers, the operations are always
done with manager_mutex held; however, initial worker creation and
worker destruction during pool release don't grab the mutex.  They are
still correct: initial worker creation doesn't require
synchronization, and grabbing manager_arb provides enough exclusion
for the pool release path.

Still, let's make everyone follow the same rules for consistency, and
so that lockdep annotations can be added.

Update create_and_start_worker() and put_unbound_pool() to grab
manager_mutex around thread creation and destruction respectively and
add lockdep assertions to create_worker() and destroy_worker().

This patch doesn't introduce any visible behavior changes.

Signed-off-by: Tejun Heo <tj@kernel.org>
---
 kernel/workqueue.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index cac7106..ce1ab06 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1715,6 +1715,8 @@ static struct worker *create_worker(struct worker_pool *pool)
 	struct worker *worker = NULL;
 	int id = -1;
 
+	lockdep_assert_held(&pool->manager_mutex);
+
 	spin_lock_irq(&pool->lock);
 	while (ida_get_new(&pool->worker_ida, &id)) {
 		spin_unlock_irq(&pool->lock);
@@ -1796,12 +1798,14 @@ static void start_worker(struct worker *worker)
  * create_and_start_worker - create and start a worker for a pool
  * @pool: the target pool
  *
- * Create and start a new worker for @pool.
+ * Grab the managership of @pool and create and start a new worker for it.
  */
 static int create_and_start_worker(struct worker_pool *pool)
 {
 	struct worker *worker;
 
+	mutex_lock(&pool->manager_mutex);
+
 	worker = create_worker(pool);
 	if (worker) {
 		spin_lock_irq(&pool->lock);
@@ -1809,6 +1813,8 @@ static int create_and_start_worker(struct worker_pool *pool)
 		spin_unlock_irq(&pool->lock);
 	}
 
+	mutex_unlock(&pool->manager_mutex);
+
 	return worker ? 0 : -ENOMEM;
 }
 
@@ -1826,6 +1832,9 @@ static void destroy_worker(struct worker *worker)
 	struct worker_pool *pool = worker->pool;
 	int id = worker->id;
 
+	lockdep_assert_held(&pool->manager_mutex);
+	lockdep_assert_held(&pool->lock);
+
 	/* sanity check frenzy */
 	if (WARN_ON(worker->current_work) ||
 	    WARN_ON(!list_empty(&worker->scheduled)))
@@ -3531,6 +3540,7 @@ static void put_unbound_pool(struct worker_pool *pool)
 	 * manager_mutex.
 	 */
 	mutex_lock(&pool->manager_arb);
+	mutex_lock(&pool->manager_mutex);
 	spin_lock_irq(&pool->lock);
 
 	while ((worker = first_worker(pool)))
@@ -3538,6 +3548,7 @@ static void put_unbound_pool(struct worker_pool *pool)
 	WARN_ON(pool->nr_workers || pool->nr_idle);
 
 	spin_unlock_irq(&pool->lock);
+	mutex_unlock(&pool->manager_mutex);
 	mutex_unlock(&pool->manager_arb);
 
 	/* shut down the timers */
-- 
1.8.1.4


Thread overview: 13+ messages
2013-03-14  2:57 [PATCHSET wq/for-3.10] workqueue: break up workqueue_lock into multiple locks Tejun Heo
2013-03-14  2:57 ` [PATCH 1/7] workqueue: rename worker_pool->assoc_mutex to ->manager_mutex Tejun Heo
2013-03-14 16:00   ` Lai Jiangshan
2013-03-14 16:41     ` Tejun Heo
2013-03-14  2:57 ` [PATCH 2/7] workqueue: factor out initial worker creation into create_and_start_worker() Tejun Heo
2013-03-14  2:57 ` Tejun Heo [this message]
2013-03-14  2:57 ` [PATCH 4/7] workqueue: relocate global variable defs and function decls in workqueue.c Tejun Heo
2013-03-14  2:57 ` [PATCH 5/7] workqueue: separate out pool and workqueue locking into wq_mutex Tejun Heo
2013-03-14  2:57 ` [PATCH 6/7] workqueue: separate out pool_workqueue locking into pwq_lock Tejun Heo
2013-03-14  2:57 ` [PATCH 7/7] workqueue: rename workqueue_lock to wq_mayday_lock Tejun Heo
2013-03-18 19:51 ` [PATCHSET wq/for-3.10] workqueue: break up workqueue_lock into multiple locks Tejun Heo
2013-03-20 14:01   ` JoonSoo Kim
2013-03-20 14:38     ` Tejun Heo
