* [PATCH 0/3] workqueue: manager cleanup
@ 2014-07-10 16:01 Lai Jiangshan
2014-07-10 16:01 ` [PATCH 1/3] workqueue: remove the first check and the return value of maybe_create_worker() Lai Jiangshan
` (3 more replies)
0 siblings, 4 replies; 14+ messages in thread
From: Lai Jiangshan @ 2014-07-10 16:01 UTC (permalink / raw)
To: linux-kernel; +Cc: Tejun Heo, Lai Jiangshan
Hi, TJ
These are the cleanups you asked for during the previous development cycle,
especially patch3, which merges start_worker() into create_worker().
Thanks,
Lai
Lai Jiangshan (3):
workqueue: remove the first check and the return value of
maybe_create_worker()
workqueue: remove the guarantee and the retrying of
maybe_create_worker()
workqueue: unfold start_worker() into create_worker()
kernel/workqueue.c | 113 +++++++++++----------------------------------------
1 files changed, 25 insertions(+), 88 deletions(-)
--
1.7.4.4
^ permalink raw reply [flat|nested] 14+ messages in thread
* [PATCH 1/3] workqueue: remove the first check and the return value of maybe_create_worker()
2014-07-10 16:01 [PATCH 0/3] workqueue: manager cleanup Lai Jiangshan
@ 2014-07-10 16:01 ` Lai Jiangshan
2014-07-11 15:03 ` Tejun Heo
2014-07-10 16:01 ` [PATCH 2/3] workqueue: remove the guarantee and the retrying " Lai Jiangshan
` (2 subsequent siblings)
3 siblings, 1 reply; 14+ messages in thread
From: Lai Jiangshan @ 2014-07-10 16:01 UTC (permalink / raw)
To: linux-kernel; +Cc: Tejun Heo, Lai Jiangshan
The pool->lock is held from the point where worker_thread() checks the
condition until maybe_create_worker() is called, so the condition cannot
change in between and the first check inside maybe_create_worker() is unneeded.
After this unneeded check is removed, maybe_create_worker() can only return
true, so we remove its return value and adjust the call site.
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
---
kernel/workqueue.c | 18 +++++-------------
1 files changed, 5 insertions(+), 13 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index f344334..92f7ea0c 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1887,17 +1887,11 @@ static void pool_mayday_timeout(unsigned long __pool)
* spin_lock_irq(pool->lock) which may be released and regrabbed
* multiple times. Does GFP_KERNEL allocations. Called only from
* manager.
- *
- * Return:
- * %false if no action was taken and pool->lock stayed locked, %true
- * otherwise.
*/
-static bool maybe_create_worker(struct worker_pool *pool)
+static void maybe_create_worker(struct worker_pool *pool)
__releases(&pool->lock)
__acquires(&pool->lock)
{
- if (!need_to_create_worker(pool))
- return false;
restart:
spin_unlock_irq(&pool->lock);
@@ -1914,7 +1908,7 @@ restart:
start_worker(worker);
if (WARN_ON_ONCE(need_to_create_worker(pool)))
goto restart;
- return true;
+ return;
}
if (!need_to_create_worker(pool))
@@ -1930,7 +1924,6 @@ restart:
spin_lock_irq(&pool->lock);
if (need_to_create_worker(pool))
goto restart;
- return true;
}
/**
@@ -1959,7 +1952,6 @@ restart:
static bool manage_workers(struct worker *worker)
{
struct worker_pool *pool = worker->pool;
- bool ret = false;
/*
* Anyone who successfully grabs manager_arb wins the arbitration
@@ -1972,12 +1964,12 @@ static bool manage_workers(struct worker *worker)
* actual management, the pool may stall indefinitely.
*/
if (!mutex_trylock(&pool->manager_arb))
- return ret;
+ return false;
- ret |= maybe_create_worker(pool);
+ maybe_create_worker(pool);
mutex_unlock(&pool->manager_arb);
- return ret;
+ return true;
}
/**
--
1.7.4.4
* [PATCH 2/3] workqueue: remove the guarantee and the retrying of maybe_create_worker()
2014-07-10 16:01 [PATCH 0/3] workqueue: manager cleanup Lai Jiangshan
2014-07-10 16:01 ` [PATCH 1/3] workqueue: remove the first check and the return value of maybe_create_worker() Lai Jiangshan
@ 2014-07-10 16:01 ` Lai Jiangshan
2014-07-11 15:02 ` Tejun Heo
2014-07-10 16:01 ` [PATCH 3/3] workqueue: unfold start_worker() into create_worker() Lai Jiangshan
2014-07-14 4:05 ` [PATCH 0/1 V2] workqueue: manager cleanup Lai Jiangshan
3 siblings, 1 reply; 14+ messages in thread
From: Lai Jiangshan @ 2014-07-10 16:01 UTC (permalink / raw)
To: linux-kernel; +Cc: Tejun Heo, Lai Jiangshan
maybe_create_worker() provides a strong guarantee that there is at least
one idle worker on return from this function. The guarantee is enforced
via the check and the retrying in this function.
But the caller (worker_thread()) also has the same check and retrying,
so the guarantee is not really required and the check and the retrying in
maybe_create_worker() are unnecessary and redundant.
So we remove the guarantee as well as the check and the retrying. The caller
takes responsibility to retry when needed.
The only trade-off is that the mayday timer will be removed and re-added
across retries. Since retrying is expected to be a rare case, the trade-off
is acceptable.
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
---
kernel/workqueue.c | 49 ++++++++++++++++++-------------------------------
1 files changed, 18 insertions(+), 31 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 92f7ea0c..4fdd6d0 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1875,14 +1875,17 @@ static void pool_mayday_timeout(unsigned long __pool)
* @pool: pool to create a new worker for
*
* Create a new worker for @pool if necessary. @pool is guaranteed to
- * have at least one idle worker on return from this function. If
- * creating a new worker takes longer than MAYDAY_INTERVAL, mayday is
+ * make some progresses on return from this function.
+ * 1) success to create a new idle worker. Or
+ * 2) cool down a while after it failed. Or
+ * 3) condition changed (no longer need to create worker) after it failed.
+ * In any case, the caller will recheck the condition and retry when needed,
+ * so this function doesn't need to retry.
+ *
+ * If creating a new worker takes longer than MAYDAY_INTERVAL, mayday is
* sent to all rescuers with works scheduled on @pool to resolve
* possible allocation deadlock.
*
- * On return, need_to_create_worker() is guaranteed to be %false and
- * may_start_working() %true.
- *
* LOCKING:
* spin_lock_irq(pool->lock) which may be released and regrabbed
* multiple times. Does GFP_KERNEL allocations. Called only from
@@ -1892,38 +1895,26 @@ static void maybe_create_worker(struct worker_pool *pool)
__releases(&pool->lock)
__acquires(&pool->lock)
{
-restart:
+ struct worker *worker;
+
spin_unlock_irq(&pool->lock);
/* if we don't make progress in MAYDAY_INITIAL_TIMEOUT, call for help */
mod_timer(&pool->mayday_timer, jiffies + MAYDAY_INITIAL_TIMEOUT);
- while (true) {
- struct worker *worker;
-
- worker = create_worker(pool);
- if (worker) {
- del_timer_sync(&pool->mayday_timer);
- spin_lock_irq(&pool->lock);
- start_worker(worker);
- if (WARN_ON_ONCE(need_to_create_worker(pool)))
- goto restart;
- return;
- }
-
- if (!need_to_create_worker(pool))
- break;
+ worker = create_worker(pool);
+ if (worker) {
+ del_timer_sync(&pool->mayday_timer);
+ spin_lock_irq(&pool->lock);
+ start_worker(worker);
+ return;
+ }
+ if (need_to_create_worker(pool))
schedule_timeout_interruptible(CREATE_COOLDOWN);
- if (!need_to_create_worker(pool))
- break;
- }
-
del_timer_sync(&pool->mayday_timer);
spin_lock_irq(&pool->lock);
- if (need_to_create_worker(pool))
- goto restart;
}
/**
@@ -1934,10 +1925,6 @@ restart:
* to. At any given time, there can be only zero or one manager per
* pool. The exclusion is handled automatically by this function.
*
- * The caller can safely start processing works on false return. On
- * true return, it's guaranteed that need_to_create_worker() is false
- * and may_start_working() is true.
- *
* CONTEXT:
* spin_lock_irq(pool->lock) which may be released and regrabbed
* multiple times. Does GFP_KERNEL allocations.
--
1.7.4.4
* [PATCH 3/3] workqueue: unfold start_worker() into create_worker()
2014-07-10 16:01 [PATCH 0/3] workqueue: manager cleanup Lai Jiangshan
2014-07-10 16:01 ` [PATCH 1/3] workqueue: remove the first check and the return value of maybe_create_worker() Lai Jiangshan
2014-07-10 16:01 ` [PATCH 2/3] workqueue: remove the guarantee and the retrying " Lai Jiangshan
@ 2014-07-10 16:01 ` Lai Jiangshan
2014-07-14 4:05 ` [PATCH 0/1 V2] workqueue: manager cleanup Lai Jiangshan
3 siblings, 0 replies; 14+ messages in thread
From: Lai Jiangshan @ 2014-07-10 16:01 UTC (permalink / raw)
To: linux-kernel; +Cc: Tejun Heo, Lai Jiangshan
Simply unfold the code of start_worker() into create_worker() and
remove the original start_worker() and create_and_start_worker().
maybe_create_worker() also becomes decently shorter.
The only trade-off is the introduced overhead: pool->lock is released
and regrabbed after the newly created worker is started.
The overhead is acceptable since the manager is a slow path.
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
---
kernel/workqueue.c | 68 ++++++++++------------------------------------------
1 files changed, 13 insertions(+), 55 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 4fdd6d0..c820057 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1553,7 +1553,7 @@ static void worker_enter_idle(struct worker *worker)
(worker->hentry.next || worker->hentry.pprev)))
return;
- /* can't use worker_set_flags(), also called from start_worker() */
+ /* can't use worker_set_flags(), also called from create_worker() */
worker->flags |= WORKER_IDLE;
pool->nr_idle++;
worker->last_active = jiffies;
@@ -1674,8 +1674,7 @@ static void worker_detach_from_pool(struct worker *worker,
* create_worker - create a new workqueue worker
* @pool: pool the new worker will belong to
*
- * Create a new worker which is attached to @pool. The new worker must be
- * started by start_worker().
+ * Create and start a new worker which is attached to @pool.
*
* CONTEXT:
* Might sleep. Does GFP_KERNEL allocations.
@@ -1720,6 +1719,13 @@ static struct worker *create_worker(struct worker_pool *pool)
/* successful, attach the worker to the pool */
worker_attach_to_pool(worker, pool);
+ /* start the newly created worker */
+ spin_lock_irq(&pool->lock);
+ worker->pool->nr_workers++;
+ worker_enter_idle(worker);
+ wake_up_process(worker->task);
+ spin_unlock_irq(&pool->lock);
+
return worker;
fail:
@@ -1730,44 +1736,6 @@ fail:
}
/**
- * start_worker - start a newly created worker
- * @worker: worker to start
- *
- * Make the pool aware of @worker and start it.
- *
- * CONTEXT:
- * spin_lock_irq(pool->lock).
- */
-static void start_worker(struct worker *worker)
-{
- worker->pool->nr_workers++;
- worker_enter_idle(worker);
- wake_up_process(worker->task);
-}
-
-/**
- * create_and_start_worker - create and start a worker for a pool
- * @pool: the target pool
- *
- * Grab the managership of @pool and create and start a new worker for it.
- *
- * Return: 0 on success. A negative error code otherwise.
- */
-static int create_and_start_worker(struct worker_pool *pool)
-{
- struct worker *worker;
-
- worker = create_worker(pool);
- if (worker) {
- spin_lock_irq(&pool->lock);
- start_worker(worker);
- spin_unlock_irq(&pool->lock);
- }
-
- return worker ? 0 : -ENOMEM;
-}
-
-/**
* destroy_worker - destroy a workqueue worker
* @worker: worker to be destroyed
*
@@ -1895,22 +1863,12 @@ static void maybe_create_worker(struct worker_pool *pool)
__releases(&pool->lock)
__acquires(&pool->lock)
{
- struct worker *worker;
-
spin_unlock_irq(&pool->lock);
/* if we don't make progress in MAYDAY_INITIAL_TIMEOUT, call for help */
mod_timer(&pool->mayday_timer, jiffies + MAYDAY_INITIAL_TIMEOUT);
- worker = create_worker(pool);
- if (worker) {
- del_timer_sync(&pool->mayday_timer);
- spin_lock_irq(&pool->lock);
- start_worker(worker);
- return;
- }
-
- if (need_to_create_worker(pool))
+ if (!create_worker(pool) && need_to_create_worker(pool))
schedule_timeout_interruptible(CREATE_COOLDOWN);
del_timer_sync(&pool->mayday_timer);
@@ -3524,7 +3482,7 @@ static struct worker_pool *get_unbound_pool(const struct workqueue_attrs *attrs)
goto fail;
/* create and start the initial worker */
- if (create_and_start_worker(pool) < 0)
+ if (!create_worker(pool))
goto fail;
/* install */
@@ -4598,7 +4556,7 @@ static int workqueue_cpu_up_callback(struct notifier_block *nfb,
for_each_cpu_worker_pool(pool, cpu) {
if (pool->nr_workers)
continue;
- if (create_and_start_worker(pool) < 0)
+ if (!create_worker(pool))
return NOTIFY_BAD;
}
break;
@@ -4898,7 +4856,7 @@ static int __init init_workqueues(void)
for_each_cpu_worker_pool(pool, cpu) {
pool->flags &= ~POOL_DISASSOCIATED;
- BUG_ON(create_and_start_worker(pool) < 0);
+ BUG_ON(!create_worker(pool));
}
}
--
1.7.4.4
* Re: [PATCH 2/3] workqueue: remove the guarantee and the retrying of maybe_create_worker()
2014-07-10 16:01 ` [PATCH 2/3] workqueue: remove the guarantee and the retrying " Lai Jiangshan
@ 2014-07-11 15:02 ` Tejun Heo
0 siblings, 0 replies; 14+ messages in thread
From: Tejun Heo @ 2014-07-11 15:02 UTC (permalink / raw)
To: Lai Jiangshan; +Cc: linux-kernel
On Fri, Jul 11, 2014 at 12:01:04AM +0800, Lai Jiangshan wrote:
> maybe_create_worker() provides a strong guarantee that there is at least
> one idle worker on return from this function. The guarantee is enforced
> via the check and the retrying in this function.
>
> But the caller (worker_thread()) also has the same check and retrying,
> so the guarantee is not really required and the check and the retrying in
> maybe_create_worker() are unnecessary and redundant.
>
> So we remove the guarantee as well as the check and the retrying. The caller
> takes responsibility to retry when needed.
>
> The only trade-off is that the mayday timer will be removed and re-added
> across retries. Since retrying is expected to be a rare case, the trade-off
> is acceptable.
>
> Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
> ---
> kernel/workqueue.c | 49 ++++++++++++++++++-------------------------------
> 1 files changed, 18 insertions(+), 31 deletions(-)
>
> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> index 92f7ea0c..4fdd6d0 100644
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -1875,14 +1875,17 @@ static void pool_mayday_timeout(unsigned long __pool)
> * @pool: pool to create a new worker for
> *
> * Create a new worker for @pool if necessary. @pool is guaranteed to
> - * have at least one idle worker on return from this function. If
> - * creating a new worker takes longer than MAYDAY_INTERVAL, mayday is
> + * make some progresses on return from this function.
Hmmm.... not really.
> + * 1) success to create a new idle worker. Or
Another work item can race and occupy that worker before this worker
regrabs pool->lock, which is safe because
> + * 2) cool down a while after it failed. Or
> + * 3) condition changed (no longer need to create worker) after it failed.
> + * In any case, the caller will recheck the condition and retry when needed,
> + * so this function doesn't need to retry.
the caller checks the condition again but the above 1/2/3 are
confusing at best.
...
> + worker = create_worker(pool);
> + if (worker) {
> + del_timer_sync(&pool->mayday_timer);
> + spin_lock_irq(&pool->lock);
> + start_worker(worker);
> + return;
> + }
>
> + if (need_to_create_worker(pool))
> schedule_timeout_interruptible(CREATE_COOLDOWN);
Ugh, so we're now inserting a delay in the inner function and looping
from the outside? That's messy. I'd prefer removing the outer loop
instead.
Thanks.
--
tejun
* Re: [PATCH 1/3] workqueue: remove the first check and the return value of maybe_create_worker()
2014-07-10 16:01 ` [PATCH 1/3] workqueue: remove the first check and the return value of maybe_create_worker() Lai Jiangshan
@ 2014-07-11 15:03 ` Tejun Heo
2014-07-14 2:30 ` Lai Jiangshan
0 siblings, 1 reply; 14+ messages in thread
From: Tejun Heo @ 2014-07-11 15:03 UTC (permalink / raw)
To: Lai Jiangshan; +Cc: linux-kernel
On Fri, Jul 11, 2014 at 12:01:03AM +0800, Lai Jiangshan wrote:
> @@ -1887,17 +1887,11 @@ static void pool_mayday_timeout(unsigned long __pool)
> * spin_lock_irq(pool->lock) which may be released and regrabbed
> * multiple times. Does GFP_KERNEL allocations. Called only from
> * manager.
> - *
> - * Return:
> - * %false if no action was taken and pool->lock stayed locked, %true
> - * otherwise.
> */
> -static bool maybe_create_worker(struct worker_pool *pool)
> +static void maybe_create_worker(struct worker_pool *pool)
We probably should drop "maybe_" from the function name?
--
tejun
* Re: [PATCH 1/3] workqueue: remove the first check and the return value of maybe_create_worker()
2014-07-11 15:03 ` Tejun Heo
@ 2014-07-14 2:30 ` Lai Jiangshan
2014-07-14 14:40 ` Tejun Heo
0 siblings, 1 reply; 14+ messages in thread
From: Lai Jiangshan @ 2014-07-14 2:30 UTC (permalink / raw)
To: Tejun Heo; +Cc: linux-kernel
On 07/11/2014 11:03 PM, Tejun Heo wrote:
> On Fri, Jul 11, 2014 at 12:01:03AM +0800, Lai Jiangshan wrote:
>> @@ -1887,17 +1887,11 @@ static void pool_mayday_timeout(unsigned long __pool)
>> * spin_lock_irq(pool->lock) which may be released and regrabbed
>> * multiple times. Does GFP_KERNEL allocations. Called only from
>> * manager.
>> - *
>> - * Return:
>> - * %false if no action was taken and pool->lock stayed locked, %true
>> - * otherwise.
>> */
>> -static bool maybe_create_worker(struct worker_pool *pool)
>> +static void maybe_create_worker(struct worker_pool *pool)
>
> We probably should drop "maybe_" from the function name?
>
We already have create_worker(). And maybe_create_worker() does not always create a worker.
It may not create one if the condition has changed after it fails the first
time.
* [PATCH 0/1 V2] workqueue: manager cleanup
2014-07-10 16:01 [PATCH 0/3] workqueue: manager cleanup Lai Jiangshan
` (2 preceding siblings ...)
2014-07-10 16:01 ` [PATCH 3/3] workqueue: unfold start_worker() into create_worker() Lai Jiangshan
@ 2014-07-14 4:05 ` Lai Jiangshan
2014-07-14 4:05 ` [PATCH 1/1 V2] workqueue: unfold start_worker() into create_worker() Lai Jiangshan
3 siblings, 1 reply; 14+ messages in thread
From: Lai Jiangshan @ 2014-07-14 4:05 UTC (permalink / raw)
To: linux-kernel; +Cc: Tejun Heo, Lai Jiangshan
Hi, TJ,
I dropped patch1 & patch2 of V1; only patch3 is kept and rebased. The
new patch depends on last night's patch:
"workqueue: remove the del_timer_sync()s in maybe_create_worker()".
Thanks,
Lai
Lai Jiangshan (1):
workqueue: unfold start_worker() into create_worker()
kernel/workqueue.c | 69 ++++++++++------------------------------------------
1 files changed, 13 insertions(+), 56 deletions(-)
--
1.7.4.4
* [PATCH 1/1 V2] workqueue: unfold start_worker() into create_worker()
2014-07-14 4:05 ` [PATCH 0/1 V2] workqueue: manager cleanup Lai Jiangshan
@ 2014-07-14 4:05 ` Lai Jiangshan
2014-07-16 1:28 ` Lai Jiangshan
0 siblings, 1 reply; 14+ messages in thread
From: Lai Jiangshan @ 2014-07-14 4:05 UTC (permalink / raw)
To: linux-kernel; +Cc: Tejun Heo, Lai Jiangshan
Simply unfold the code of start_worker() into create_worker() and
remove the original start_worker() and create_and_start_worker().
The only trade-off is the introduced overhead: pool->lock is released
and re-grabbed after the newly created worker is started.
The overhead is acceptable since the manager is a slow path.
And because of this new locking behavior, the newly created worker
may grab the lock earlier than the manager and go off to process
work items. In this case, need_to_create_worker() may be
true, as expected, and the manager restarts without complaint.
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
---
kernel/workqueue.c | 69 ++++++++++------------------------------------------
1 files changed, 13 insertions(+), 56 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 2b3c12c..5b68527 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1553,7 +1553,7 @@ static void worker_enter_idle(struct worker *worker)
(worker->hentry.next || worker->hentry.pprev)))
return;
- /* can't use worker_set_flags(), also called from start_worker() */
+ /* can't use worker_set_flags(), also called from create_worker() */
worker->flags |= WORKER_IDLE;
pool->nr_idle++;
worker->last_active = jiffies;
@@ -1674,8 +1674,7 @@ static void worker_detach_from_pool(struct worker *worker,
* create_worker - create a new workqueue worker
* @pool: pool the new worker will belong to
*
- * Create a new worker which is attached to @pool. The new worker must be
- * started by start_worker().
+ * Create and start a new worker which is attached to @pool.
*
* CONTEXT:
* Might sleep. Does GFP_KERNEL allocations.
@@ -1720,6 +1719,13 @@ static struct worker *create_worker(struct worker_pool *pool)
/* successful, attach the worker to the pool */
worker_attach_to_pool(worker, pool);
+ /* start the newly created worker */
+ spin_lock_irq(&pool->lock);
+ worker->pool->nr_workers++;
+ worker_enter_idle(worker);
+ wake_up_process(worker->task);
+ spin_unlock_irq(&pool->lock);
+
return worker;
fail:
@@ -1730,44 +1736,6 @@ fail:
}
/**
- * start_worker - start a newly created worker
- * @worker: worker to start
- *
- * Make the pool aware of @worker and start it.
- *
- * CONTEXT:
- * spin_lock_irq(pool->lock).
- */
-static void start_worker(struct worker *worker)
-{
- worker->pool->nr_workers++;
- worker_enter_idle(worker);
- wake_up_process(worker->task);
-}
-
-/**
- * create_and_start_worker - create and start a worker for a pool
- * @pool: the target pool
- *
- * Grab the managership of @pool and create and start a new worker for it.
- *
- * Return: 0 on success. A negative error code otherwise.
- */
-static int create_and_start_worker(struct worker_pool *pool)
-{
- struct worker *worker;
-
- worker = create_worker(pool);
- if (worker) {
- spin_lock_irq(&pool->lock);
- start_worker(worker);
- spin_unlock_irq(&pool->lock);
- }
-
- return worker ? 0 : -ENOMEM;
-}
-
-/**
* destroy_worker - destroy a workqueue worker
* @worker: worker to be destroyed
*
@@ -1905,18 +1873,7 @@ restart:
mod_timer(&pool->mayday_timer, jiffies + MAYDAY_INITIAL_TIMEOUT);
while (true) {
- struct worker *worker;
-
- worker = create_worker(pool);
- if (worker) {
- spin_lock_irq(&pool->lock);
- start_worker(worker);
- if (WARN_ON_ONCE(need_to_create_worker(pool)))
- goto restart;
- return true;
- }
-
- if (!need_to_create_worker(pool))
+ if (create_worker(pool) || !need_to_create_worker(pool))
break;
schedule_timeout_interruptible(CREATE_COOLDOWN);
@@ -3543,7 +3500,7 @@ static struct worker_pool *get_unbound_pool(const struct workqueue_attrs *attrs)
goto fail;
/* create and start the initial worker */
- if (create_and_start_worker(pool) < 0)
+ if (!create_worker(pool))
goto fail;
/* install */
@@ -4617,7 +4574,7 @@ static int workqueue_cpu_up_callback(struct notifier_block *nfb,
for_each_cpu_worker_pool(pool, cpu) {
if (pool->nr_workers)
continue;
- if (create_and_start_worker(pool) < 0)
+ if (!create_worker(pool))
return NOTIFY_BAD;
}
break;
@@ -4917,7 +4874,7 @@ static int __init init_workqueues(void)
for_each_cpu_worker_pool(pool, cpu) {
pool->flags &= ~POOL_DISASSOCIATED;
- BUG_ON(create_and_start_worker(pool) < 0);
+ BUG_ON(!create_worker(pool));
}
}
--
1.7.4.4
* Re: [PATCH 1/3] workqueue: remove the first check and the return value of maybe_create_worker()
2014-07-14 2:30 ` Lai Jiangshan
@ 2014-07-14 14:40 ` Tejun Heo
0 siblings, 0 replies; 14+ messages in thread
From: Tejun Heo @ 2014-07-14 14:40 UTC (permalink / raw)
To: Lai Jiangshan; +Cc: linux-kernel
On Mon, Jul 14, 2014 at 10:30:11AM +0800, Lai Jiangshan wrote:
> We already have create_worker(). And maybe_create_worker() does not always create a worker.
Well, it does unless it fails. Explicit maybe_ there is quite
misleading.
> It may not create one if the condition has changed after it fails the
> first time.
It's misleading. Please come up with a different name.
Thanks.
--
tejun
* Re: [PATCH 1/1 V2] workqueue: unfold start_worker() into create_worker()
2014-07-14 4:05 ` [PATCH 1/1 V2] workqueue: unfold start_worker() into create_worker() Lai Jiangshan
@ 2014-07-16 1:28 ` Lai Jiangshan
2014-07-18 22:57 ` Tejun Heo
0 siblings, 1 reply; 14+ messages in thread
From: Lai Jiangshan @ 2014-07-16 1:28 UTC (permalink / raw)
To: Lai Jiangshan; +Cc: linux-kernel, Tejun Heo
On 07/14/2014 12:05 PM, Lai Jiangshan wrote:
> Simply unfold the code of start_worker() into create_worker() and
> remove the original start_worker() and create_and_start_worker().
>
> The only trade-off is the introduced overhead: pool->lock is released
> and re-grabbed after the newly created worker is started.
> The overhead is acceptable since the manager is a slow path.
Hi, TJ
Will you accept this trade-off and the patch?
If so, I will rebase this patch without any dependency on other patches.
Thanks,
Lai
>
> And because of this new locking behavior, the newly created worker
> may grab the lock earlier than the manager and go off to process
> work items. In this case, need_to_create_worker() may be
> true, as expected, and the manager restarts without complaint.
>
> Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
> ---
> kernel/workqueue.c | 69 ++++++++++------------------------------------------
> 1 files changed, 13 insertions(+), 56 deletions(-)
>
> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> index 2b3c12c..5b68527 100644
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -1553,7 +1553,7 @@ static void worker_enter_idle(struct worker *worker)
> (worker->hentry.next || worker->hentry.pprev)))
> return;
>
> - /* can't use worker_set_flags(), also called from start_worker() */
> + /* can't use worker_set_flags(), also called from create_worker() */
> worker->flags |= WORKER_IDLE;
> pool->nr_idle++;
> worker->last_active = jiffies;
> @@ -1674,8 +1674,7 @@ static void worker_detach_from_pool(struct worker *worker,
> * create_worker - create a new workqueue worker
> * @pool: pool the new worker will belong to
> *
> - * Create a new worker which is attached to @pool. The new worker must be
> - * started by start_worker().
> + * Create and start a new worker which is attached to @pool.
> *
> * CONTEXT:
> * Might sleep. Does GFP_KERNEL allocations.
> @@ -1720,6 +1719,13 @@ static struct worker *create_worker(struct worker_pool *pool)
> /* successful, attach the worker to the pool */
> worker_attach_to_pool(worker, pool);
>
> + /* start the newly created worker */
> + spin_lock_irq(&pool->lock);
> + worker->pool->nr_workers++;
> + worker_enter_idle(worker);
> + wake_up_process(worker->task);
> + spin_unlock_irq(&pool->lock);
> +
> return worker;
>
> fail:
> @@ -1730,44 +1736,6 @@ fail:
> }
>
> /**
> - * start_worker - start a newly created worker
> - * @worker: worker to start
> - *
> - * Make the pool aware of @worker and start it.
> - *
> - * CONTEXT:
> - * spin_lock_irq(pool->lock).
> - */
> -static void start_worker(struct worker *worker)
> -{
> - worker->pool->nr_workers++;
> - worker_enter_idle(worker);
> - wake_up_process(worker->task);
> -}
> -
> -/**
> - * create_and_start_worker - create and start a worker for a pool
> - * @pool: the target pool
> - *
> - * Grab the managership of @pool and create and start a new worker for it.
> - *
> - * Return: 0 on success. A negative error code otherwise.
> - */
> -static int create_and_start_worker(struct worker_pool *pool)
> -{
> - struct worker *worker;
> -
> - worker = create_worker(pool);
> - if (worker) {
> - spin_lock_irq(&pool->lock);
> - start_worker(worker);
> - spin_unlock_irq(&pool->lock);
> - }
> -
> - return worker ? 0 : -ENOMEM;
> -}
> -
> -/**
> * destroy_worker - destroy a workqueue worker
> * @worker: worker to be destroyed
> *
> @@ -1905,18 +1873,7 @@ restart:
> mod_timer(&pool->mayday_timer, jiffies + MAYDAY_INITIAL_TIMEOUT);
>
> while (true) {
> - struct worker *worker;
> -
> - worker = create_worker(pool);
> - if (worker) {
> - spin_lock_irq(&pool->lock);
> - start_worker(worker);
> - if (WARN_ON_ONCE(need_to_create_worker(pool)))
> - goto restart;
> - return true;
> - }
> -
> - if (!need_to_create_worker(pool))
> + if (create_worker(pool) || !need_to_create_worker(pool))
> break;
>
> schedule_timeout_interruptible(CREATE_COOLDOWN);
> @@ -3543,7 +3500,7 @@ static struct worker_pool *get_unbound_pool(const struct workqueue_attrs *attrs)
> goto fail;
>
> /* create and start the initial worker */
> - if (create_and_start_worker(pool) < 0)
> + if (!create_worker(pool))
> goto fail;
>
> /* install */
> @@ -4617,7 +4574,7 @@ static int workqueue_cpu_up_callback(struct notifier_block *nfb,
> for_each_cpu_worker_pool(pool, cpu) {
> if (pool->nr_workers)
> continue;
> - if (create_and_start_worker(pool) < 0)
> + if (!create_worker(pool))
> return NOTIFY_BAD;
> }
> break;
> @@ -4917,7 +4874,7 @@ static int __init init_workqueues(void)
>
> for_each_cpu_worker_pool(pool, cpu) {
> pool->flags &= ~POOL_DISASSOCIATED;
> - BUG_ON(create_and_start_worker(pool) < 0);
> + BUG_ON(!create_worker(pool));
> }
> }
>
* Re: [PATCH 1/1 V2] workqueue: unfold start_worker() into create_worker()
2014-07-16 1:28 ` Lai Jiangshan
@ 2014-07-18 22:57 ` Tejun Heo
2014-07-22 5:03 ` [PATCH V3] " Lai Jiangshan
0 siblings, 1 reply; 14+ messages in thread
From: Tejun Heo @ 2014-07-18 22:57 UTC (permalink / raw)
To: Lai Jiangshan; +Cc: linux-kernel
Sorry about the delay.
On Wed, Jul 16, 2014 at 09:28:55AM +0800, Lai Jiangshan wrote:
> On 07/14/2014 12:05 PM, Lai Jiangshan wrote:
> > Simply unfold the code of start_worker() into create_worker() and
> > remove the original start_worker() and create_and_start_worker().
> >
> The only trade-off is the introduced overhead: pool->lock is released
> and re-grabbed after the newly created worker is started.
> The overhead is acceptable since the manager is a slow path.
>
> Hi, TJ
>
> Will you accept this trade-off and the patch?
> If so, I will rebase this patch without any dependence on other patch.
Yeap, that's fine.
Thanks!
--
tejun
* [PATCH V3] workqueue: unfold start_worker() into create_worker()
2014-07-18 22:57 ` Tejun Heo
@ 2014-07-22 5:03 ` Lai Jiangshan
2014-07-22 15:05 ` Tejun Heo
0 siblings, 1 reply; 14+ messages in thread
From: Lai Jiangshan @ 2014-07-22 5:03 UTC (permalink / raw)
To: linux-kernel; +Cc: Tejun Heo, Lai Jiangshan
Simply unfold the code of start_worker() into create_worker() and
remove the original start_worker() and create_and_start_worker().
The only trade-off is the introduced overhead: pool->lock is released
and regrabbed after the newly created worker is started.
The overhead is acceptable since the manager is a slow path.
And because of this new locking behavior, the newly created worker
may grab the lock earlier than the manager and go off to process
work items. In this case, the recheck of need_to_create_worker() may be
true, as expected, and the manager restarts without complaint.
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
---
kernel/workqueue.c | 74 ++++++++++++----------------------------------------
1 files changed, 17 insertions(+), 57 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 40b1f00..bf837b0 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1540,7 +1540,7 @@ static void worker_enter_idle(struct worker *worker)
(worker->hentry.next || worker->hentry.pprev)))
return;
- /* can't use worker_set_flags(), also called from start_worker() */
+ /* can't use worker_set_flags(), also called from create_worker() */
worker->flags |= WORKER_IDLE;
pool->nr_idle++;
worker->last_active = jiffies;
@@ -1661,8 +1661,7 @@ static void worker_detach_from_pool(struct worker *worker,
* create_worker - create a new workqueue worker
* @pool: pool the new worker will belong to
*
- * Create a new worker which is attached to @pool. The new worker must be
- * started by start_worker().
+ * Create and start a new worker which is attached to @pool.
*
* CONTEXT:
* Might sleep. Does GFP_KERNEL allocations.
@@ -1707,6 +1706,13 @@ static struct worker *create_worker(struct worker_pool *pool)
/* successful, attach the worker to the pool */
worker_attach_to_pool(worker, pool);
+ /* start the newly created worker */
+ spin_lock_irq(&pool->lock);
+ worker->pool->nr_workers++;
+ worker_enter_idle(worker);
+ wake_up_process(worker->task);
+ spin_unlock_irq(&pool->lock);
+
return worker;
fail:
@@ -1717,44 +1723,6 @@ fail:
}
/**
- * start_worker - start a newly created worker
- * @worker: worker to start
- *
- * Make the pool aware of @worker and start it.
- *
- * CONTEXT:
- * spin_lock_irq(pool->lock).
- */
-static void start_worker(struct worker *worker)
-{
- worker->pool->nr_workers++;
- worker_enter_idle(worker);
- wake_up_process(worker->task);
-}
-
-/**
- * create_and_start_worker - create and start a worker for a pool
- * @pool: the target pool
- *
- * Grab the managership of @pool and create and start a new worker for it.
- *
- * Return: 0 on success. A negative error code otherwise.
- */
-static int create_and_start_worker(struct worker_pool *pool)
-{
- struct worker *worker;
-
- worker = create_worker(pool);
- if (worker) {
- spin_lock_irq(&pool->lock);
- start_worker(worker);
- spin_unlock_irq(&pool->lock);
- }
-
- return worker ? 0 : -ENOMEM;
-}
-
-/**
* destroy_worker - destroy a workqueue worker
* @worker: worker to be destroyed
*
@@ -1892,19 +1860,7 @@ restart:
mod_timer(&pool->mayday_timer, jiffies + MAYDAY_INITIAL_TIMEOUT);
while (true) {
- struct worker *worker;
-
- worker = create_worker(pool);
- if (worker) {
- del_timer_sync(&pool->mayday_timer);
- spin_lock_irq(&pool->lock);
- start_worker(worker);
- if (WARN_ON_ONCE(need_to_create_worker(pool)))
- goto restart;
- return true;
- }
-
- if (!need_to_create_worker(pool))
+ if (create_worker(pool) || !need_to_create_worker(pool))
break;
schedule_timeout_interruptible(CREATE_COOLDOWN);
@@ -1915,6 +1871,10 @@ restart:
del_timer_sync(&pool->mayday_timer);
spin_lock_irq(&pool->lock);
+ /*
+ * recheck, even when a new worker was just successfully created,
+ * because pool->lock was dropped after the new worker was started
+ */
if (need_to_create_worker(pool))
goto restart;
return true;
@@ -3537,7 +3497,7 @@ static struct worker_pool *get_unbound_pool(const struct workqueue_attrs *attrs)
goto fail;
/* create and start the initial worker */
- if (create_and_start_worker(pool) < 0)
+ if (!create_worker(pool))
goto fail;
/* install */
@@ -4611,7 +4571,7 @@ static int workqueue_cpu_up_callback(struct notifier_block *nfb,
for_each_cpu_worker_pool(pool, cpu) {
if (pool->nr_workers)
continue;
- if (create_and_start_worker(pool) < 0)
+ if (!create_worker(pool))
return NOTIFY_BAD;
}
break;
@@ -4911,7 +4871,7 @@ static int __init init_workqueues(void)
for_each_cpu_worker_pool(pool, cpu) {
pool->flags &= ~POOL_DISASSOCIATED;
- BUG_ON(create_and_start_worker(pool) < 0);
+ BUG_ON(!create_worker(pool));
}
}
--
1.7.4.4
* Re: [PATCH V3] workqueue: unfold start_worker() into create_worker()
2014-07-22 5:03 ` [PATCH V3] " Lai Jiangshan
@ 2014-07-22 15:05 ` Tejun Heo
0 siblings, 0 replies; 14+ messages in thread
From: Tejun Heo @ 2014-07-22 15:05 UTC (permalink / raw)
To: Lai Jiangshan; +Cc: linux-kernel
On Tue, Jul 22, 2014 at 01:03:02PM +0800, Lai Jiangshan wrote:
> Simply unfold the code of start_worker() into create_worker() and
> remove the original start_worker() and create_and_start_worker().
>
> The only trade-off is the introduced overhead that the pool->lock
> is released and re-grabbed after the newly created worker is started.
> The overhead is acceptable since the manager is a slow path.
>
> And because of this new locking behavior, the newly created worker
> may grab the lock earlier than the manager and go on to process
> work items. In this case, the recheck of need_to_create_worker() may be
> true as expected and the manager goes to restart without complaint.
>
> Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Applied to wq/for-3.17.
Thanks.
--
tejun
end of thread, other threads:[~2014-07-22 15:05 UTC | newest]
Thread overview: 14+ messages
2014-07-10 16:01 [PATCH 0/3] workqueue: manager cleanup Lai Jiangshan
2014-07-10 16:01 ` [PATCH 1/3] workqueue: remove the first check and the return value of maybe_create_worker() Lai Jiangshan
2014-07-11 15:03 ` Tejun Heo
2014-07-14 2:30 ` Lai Jiangshan
2014-07-14 14:40 ` Tejun Heo
2014-07-10 16:01 ` [PATCH 2/3] workqueue: remove the guarantee and the retrying " Lai Jiangshan
2014-07-11 15:02 ` Tejun Heo
2014-07-10 16:01 ` [PATCH 3/3] workqueue: unfold start_worker() into create_worker() Lai Jiangshan
2014-07-14 4:05 ` [PATCH 0/1 V2] workqueue: manager cleanup Lai Jiangshan
2014-07-14 4:05 ` [PATCH 1/1 V2] workqueue: unfold start_worker() into create_worker() Lai Jiangshan
2014-07-16 1:28 ` Lai Jiangshan
2014-07-18 22:57 ` Tejun Heo
2014-07-22 5:03 ` [PATCH V3] " Lai Jiangshan
2014-07-22 15:05 ` Tejun Heo