public inbox for linux-kernel@vger.kernel.org
From: Shrikanth Hegde <sshegde@linux.ibm.com>
To: paulmck@kernel.org, Tejun Heo <tj@kernel.org>,
	Vasily Gorbik <gor@linux.ibm.com>
Cc: Srikar Dronamraju <srikar@linux.ibm.com>,
	Boqun Feng <boqun@kernel.org>,
	Frederic Weisbecker <frederic@kernel.org>,
	Neeraj Upadhyay <neeraj.upadhyay@kernel.org>,
	Joel Fernandes <joelagnelf@nvidia.com>,
	Uladzislau Rezki <urezki@gmail.com>,
	rcu@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-s390@vger.kernel.org,
	Lai Jiangshan <jiangshanlai@gmail.com>,
	samir@linux.ibm.com
Subject: Re: BUG: workqueue lockup - SRCU schedules work on not-online CPUs during size transition
Date: Wed, 29 Apr 2026 23:14:56 +0530	[thread overview]
Message-ID: <3f6d1123-6e1a-4566-8be7-ce95efe0609c@linux.ibm.com> (raw)
In-Reply-To: <3b1563df-b1aa-40b9-b83e-650d967df09c@paulmck-laptop>


I have limited understanding of RCU and workqueues, but here are my two cents.

On 4/29/26 10:48 PM, Paul E. McKenney wrote:
> On Wed, Apr 29, 2026 at 07:08:23PM +0200, Vasily Gorbik wrote:
>> On Wed, Apr 29, 2026 at 08:30:38PM +0530, Srikar Dronamraju wrote:
>>> * Tejun Heo <tj@kernel.org> [2026-04-10 08:53:30]:
>>>> Hello,
>>>>
>>>>> Seems that we (mostly Paul) have our own trick to track whether a CPU
>>>>> has ever been onlined in RCU, see rcu_cpu_beenfullyonline(). Paul also
>>>>> used it in his fix [1]. And I think it won't be that hard to copy it
>>>>> into workqueue and let queue_work_on() use it so that if the user queues
>>>>> a work on a never-onlined CPU, it can detect it (with a warning?) and do
>>>>> something?
>>>>
>>>> The easiest way to do this is just creating the initial workers for all
>>>> possible pools. Please see below. However, the downside is that it's going
>>>> to create all workers for all possible cpus. This isn't a problem for
>>>> anybody else but these IBM mainframes often come up with a lot of possible
>>>> but not-yet-or-ever-online CPUs for capacity management, so the cost may not
>>>> be negligible on some configurations.
>>>>
>>>> IBM folks, is that okay?
>>>
>>> Even on PowerPC LPARs, it's not uncommon to have possible cpus != online cpus
>>> at boot.  However, your approach will work.
>>>
>>> And Samir has already tested the same too and reported here
>>> https://lkml.kernel.org/r/1b89c25b-7c1d-4ed8-adf3-ac504b6f086a@linux.ibm.com
>>>
>>>> From: Tejun Heo <tj@kernel.org>
>>>> Subject: workqueue: Create workers for all possible CPUs on init
>>>>
>>>> Per-CPU worker pools are initialized for every possible CPU during early boot,
>>>> but workqueue_init() only creates initial workers for online CPUs. On systems
>>>> where possible CPUs outnumber online CPUs (e.g. s390 LPARs with 76 online and
>>>> 400 possible CPUs), the pools for never-onlined CPUs have POOL_DISASSOCIATED
>>>> set but no workers. Any work item queued on such a CPU hangs indefinitely.
>>>>
>>>> This was exposed by 61bbcfb50514 ("srcu: Push srcu_node allocation to GP when
>>>> non-preemptible") which made SRCU schedule callbacks on all possible CPUs
>>>> during size transitions, triggering workqueue lockup warnings for all
>>>> never-onlined CPUs.
>>>>
>>>> Create workers for all possible CPUs during init, not just online ones. For
>>>> online CPUs, the behavior is unchanged - POOL_DISASSOCIATED is cleared and the
>>>> worker is bound to the CPU. For not-yet-online CPUs, POOL_DISASSOCIATED
>>>> remains set, so worker_attach_to_pool() marks the worker UNBOUND and it can
>>>> execute on any CPU. When the CPU later comes online, rebind_workers() handles
>>>> the transition to associated operation as usual.
>>>>
>>>
>>> With this patch, if a CPU has been onlined once, it should be ok to queue
>>> the work on that CPU even if it's offline now.
>>
>> That already seems to hold without this patch, what this patch newly
>> covers is queueing on CPUs that have never been online.
>>
>> Do we actually need to create workers for every possible CPU at boot?
>> On the s390 LPAR in question (76 online / 400 possible) that's a few
>> hundred extra kthreads kept around for the life of the system.
>> That's probably the same on PowerPC.
>>
>> Wouldn't Paul's SRCU-side fix [1] alone be enough here for PowerPC
>> as well? I retested it on s390 (76/400) and on x86 KVM with
>> --smp 16,maxcpus=255 and the lockup didn't reproduce in either case.
>>
>> [1] https://lore.kernel.org/rcu/ed1fa6cd-7343-4ca3-8b9d-d699ca496f83@paulmck-laptop/
> 
> Just to emphasize that SRCU really was buggy before my fix.  The
> queue_work_on() kernel-doc header clearly states the rules.  The bug
> is even more embarrassing given just who it was that wrote those two
> sentences.  ;-)
> 

That mask = ~0 really looks uncomfortable to me. What does it mean?
Without proper checks it might even end up sending work to CPUs that
are not possible.

Shouldn't it use either cpumask_setall() or cpu_online_mask instead?

Your current patch using rcu_cpu_beenfullyonline() indicates that the
code around srcu_schedule_cbs_sdp() already handles hotplug, right?
In that case, would just setting mask = cpu_online_mask work?


> 							Thanx, Paul
> 
> /**
>   * queue_work_on - queue work on specific cpu
>   * @cpu: CPU number to execute work on
>   * @wq: workqueue to use
>   * @work: work to queue
>   *
>   * We queue the work to a specific CPU, the caller must ensure it
>   * can't go away.  Callers that fail to ensure that the specified
>   * CPU cannot go away will execute on a randomly chosen CPU.
>   * But note well that callers specifying a CPU that never has been
>   * online will get a splat.
>   *
>   * Return: %false if @work was already on a queue, %true otherwise.
>   */


In that case, making offline CPUs have an unbound workqueue seems wrong, no?

It might encourage more users to abuse the queue_work_on() interface to
queue work on offline CPUs without any checks, and the onus then falls
on the workqueue code to dispatch it to unbound workqueues.

So I think it is better to put the guardrails in SRCU instead of making
any change in the workqueue.
