public inbox for linux-kernel@vger.kernel.org
* [PATCH v8 0/5] workqueue: destroy_worker() vs isolated CPUs
@ 2023-01-12 16:14 Valentin Schneider
  2023-01-12 16:14 ` [PATCH v8 1/5] workqueue: Protects wq_unbound_cpumask with wq_pool_attach_mutex Valentin Schneider
                   ` (5 more replies)
  0 siblings, 6 replies; 7+ messages in thread
From: Valentin Schneider @ 2023-01-12 16:14 UTC (permalink / raw)
  To: linux-kernel
  Cc: Tejun Heo, Lai Jiangshan, Peter Zijlstra, Frederic Weisbecker,
	Juri Lelli, Phil Auld, Marcelo Tosatti

Hi folks,

This version only brings a small change: getting rid of wq_manager_inactive()
for (somewhat) saner wq_pool_attach_mutex acquisition.

range-diff with previous version
================================

1:  2448692cdc707 = 1:  2448692cdc707 workqueue: Protects wq_unbound_cpumask with wq_pool_attach_mutex
2:  55d7ac5db1560 = 2:  55d7ac5db1560 workqueue: Factorize unbind/rebind_workers() logic
3:  d35d1e33d9621 = 3:  d35d1e33d9621 workqueue: Convert the idle_timer to a timer + work_struct
-:  ------------- > 4:  d596651130433 workqueue: Don't hold any lock while rcuwait'ing for !POOL_MANAGER_ACTIVE
4:  d1ce4e27cbd20 ! 5:  6b6961c5ded12 workqueue: Unbind kworkers before sending them to exit()
    @@ kernel/workqueue.c: static int init_worker_pool(struct worker_pool *pool)
      
      	ida_init(&pool->worker_ida);
      	INIT_HLIST_NODE(&pool->hash_node);
    -@@ kernel/workqueue.c: static bool wq_manager_inactive(struct worker_pool *pool)
    +@@ kernel/workqueue.c: static void rcu_free_pool(struct rcu_head *rcu)
      static void put_unbound_pool(struct worker_pool *pool)
      {
      	DECLARE_COMPLETION_ONSTACK(detach_completion);
    @@ kernel/workqueue.c: static bool wq_manager_inactive(struct worker_pool *pool)
      
      	if (--pool->refcnt)
     @@ kernel/workqueue.c: static void put_unbound_pool(struct worker_pool *pool)
    - 			   TASK_UNINTERRUPTIBLE);
    - 	pool->flags |= POOL_MANAGER_ACTIVE;
    - 
    -+	/*
    -+	 * We need to hold wq_pool_attach_mutex() while destroying the workers,
    -+	 * but we can't grab it in rcuwait_wait_event() as it can clobber
    -+	 * current's task state. We can drop pool->lock here as we've set
    -+	 * POOL_MANAGER_ACTIVE, no one else can steal our manager position.
    -+	 */
    -+	raw_spin_unlock_irq(&pool->lock);
    -+	mutex_lock(&wq_pool_attach_mutex);
    -+	raw_spin_lock_irq(&pool->lock);
    + 		rcuwait_wait_event(&manager_wait,
    + 				   !(pool->flags & POOL_MANAGER_ACTIVE),
    + 				   TASK_UNINTERRUPTIBLE);
     +
    ++		mutex_lock(&wq_pool_attach_mutex);
    + 		raw_spin_lock_irq(&pool->lock);
    + 		if (!(pool->flags & POOL_MANAGER_ACTIVE)) {
    + 			pool->flags |= POOL_MANAGER_ACTIVE;
    + 			break;
    + 		}
    + 		raw_spin_unlock_irq(&pool->lock);
    ++		mutex_unlock(&wq_pool_attach_mutex);
    + 	}
    + 
      	while ((worker = first_idle_worker(pool)))
     -		destroy_worker(worker);
     +		set_worker_dying(worker, &cull_list);

Revisions
=========

v7 -> v8
++++++++

o Nuke wq_manager_inactive() (Tejun)

v6 -> v7
++++++++

o Rebased onto v6.2-rc3

o Dropped work pending check in worker_enter_idle() (Tejun)
o Overall comment cleanup (Tejun)

o put_unbound_pool() locking issue (Lai)
  Unfortunately the mutex cannot be acquired from within wq_manager_inactive()
  as rcuwait_wait_event() sets the task state to TASK_UNINTERRUPTIBLE before
  invoking it, so grabbing the mutex could clobber the task state.

  I've gone with dropping the pool->lock and reacquiring the two locks in the
  right order after we've become the manager, see comments.

o Applied Lai's RB

v5 -> v6
++++++++

o Rebase onto v6.1-rc7
o Get rid of worker_pool.idle_cull_list; only do minimal amount of work in the
  timer callback (Tejun)
o Dropped the too_many_workers() -> nr_workers_to_cull() change

v4 -> v5
++++++++

o Rebase onto v6.1-rc6

o Overall renaming from "reaping" to "cull"
  I somehow convinced myself this was more appropriate
  
o Split the dwork into timer callback + work item (Tejun)

  I didn't want to have redundant operations happen in the timer callback and in
  the work item, so I made the timer callback detect which workers are "ripe"
  enough and then toss them to a worker for removal.

  This however means we release the pool->lock before getting to actually doing
  anything to those idle workers, which means they can wake up in the meantime.
  The new worker_pool.idle_cull_list is there for that reason.

  The alternative was to have the timer callback detect if any worker was ripe
  enough, kick the work item if so, and have the work item do the same thing
  again, which I didn't like.

RFCv3 -> v4
+++++++++++

o Rebase onto v6.0
o Split into more patches for reviewability
o Take dying workers out of the pool->workers as suggested by Lai

RFCv2 -> RFCv3
++++++++++++++

o Rebase onto v5.19
o Add new patch (1/3) around accessing wq_unbound_cpumask

o Prevent WORKER_DIE workers from kfree()'ing themselves before the idle reaper
  gets to handle them (Tejun)

  Bit of an aside on that: I've been struggling to convince myself this can
  happen due to spurious wakeups and would like some help here.

  Idle workers are TASK_UNINTERRUPTIBLE, so they can't be woken up by
  signals. That state is set *under* pool->lock, and all wakeups (before this
  patch) are also done while holding pool->lock.
  
  wake_up_worker() is done under pool->lock AND only wakes a worker on the
  pool->idle_list. Thus the to-be-woken worker *cannot* have WORKER_DIE, though
  it could gain it *after* being woken but *before* it runs, e.g.:
                          
  LOCK pool->lock
  wake_up_worker(pool)
      wake_up_process(p)
  UNLOCK pool->lock
                        idle_reaper_fn()
                          LOCK pool->lock
                          destroy_worker(worker, list);
                          UNLOCK pool->lock
                                              worker_thread()
                                                goto woke_up;
                                                LOCK pool->lock
                                                READ worker->flags & WORKER_DIE
                                                  UNLOCK pool->lock
                                                  ...
                                                  kfree(worker);
                          reap_worker(worker);
                            // Uh-oh
  ... But IMO that's not a spurious wakeup, that's a concurrency issue. I don't
  see any spurious/unexpected worker wakeup happening once a worker is off the
  pool->idle_list.
  

RFCv1 -> RFCv2
++++++++++++++

o Change the pool->timer into a delayed_work to have a sleepable context for
  unbinding kworkers

Cheers,
Valentin

Lai Jiangshan (1):
  workqueue: Protects wq_unbound_cpumask with wq_pool_attach_mutex

Valentin Schneider (4):
  workqueue: Factorize unbind/rebind_workers() logic
  workqueue: Convert the idle_timer to a timer + work_struct
  workqueue: Don't hold any lock while rcuwait'ing for
    !POOL_MANAGER_ACTIVE
  workqueue: Unbind kworkers before sending them to exit()

 kernel/workqueue.c | 234 ++++++++++++++++++++++++++++++++-------------
 1 file changed, 166 insertions(+), 68 deletions(-)

--
2.31.1


Thread overview:
2023-01-12 16:14 [PATCH v8 0/5] workqueue: destroy_worker() vs isolated CPUs Valentin Schneider
2023-01-12 16:14 ` [PATCH v8 1/5] workqueue: Protects wq_unbound_cpumask with wq_pool_attach_mutex Valentin Schneider
2023-01-12 16:14 ` [PATCH v8 2/5] workqueue: Factorize unbind/rebind_workers() logic Valentin Schneider
2023-01-12 16:14 ` [PATCH v8 3/5] workqueue: Convert the idle_timer to a timer + work_struct Valentin Schneider
2023-01-12 16:14 ` [PATCH v8 4/5] workqueue: Don't hold any lock while rcuwait'ing for !POOL_MANAGER_ACTIVE Valentin Schneider
2023-01-12 16:14 ` [PATCH v8 5/5] workqueue: Unbind kworkers before sending them to exit() Valentin Schneider
2023-01-12 16:22 ` [PATCH v8 0/5] workqueue: destroy_worker() vs isolated CPUs Tejun Heo
