public inbox for linux-kernel@vger.kernel.org
From: Boqun Feng <boqun@kernel.org>
To: Vasily Gorbik <gor@linux.ibm.com>
Cc: "Paul E. McKenney" <paulmck@kernel.org>,
	Frederic Weisbecker <frederic@kernel.org>,
	Neeraj Upadhyay <neeraj.upadhyay@kernel.org>,
	Joel Fernandes <joelagnelf@nvidia.com>,
	Uladzislau Rezki <urezki@gmail.com>,
	rcu@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-s390@vger.kernel.org, Tejun Heo <tj@kernel.org>,
	Lai Jiangshan <jiangshanlai@gmail.com>
Subject: Re: BUG: workqueue lockup - SRCU schedules work on not-online CPUs during size transition
Date: Thu, 9 Apr 2026 10:26:49 -0700	[thread overview]
Message-ID: <adfhWQr1yFImSM2Q@tardis.local> (raw)
In-Reply-To: <ttd89ul@ub.hpns>

On Thu, Apr 09, 2026 at 03:08:45PM +0200, Vasily Gorbik wrote:
> Commit 61bbcfb50514 ("srcu: Push srcu_node allocation to GP when
> non-preemptible") defers srcu_node tree allocation when called under
> raw spinlock, putting SRCU through ~6 transitional grace periods
> (SRCU_SIZE_ALLOC to SRCU_SIZE_BIG). During this transition srcu_gp_end()
> uses mask = ~0, which makes srcu_schedule_cbs_snp() call queue_work_on()
> for every possible CPU. Since rcu_gp_wq is WQ_PERCPU, work targets
> per-CPU pools directly - pools for not-online CPUs have no workers,

[Cc workqueue]

Hmm.. I thought for offline CPUs the corresponding worker pool becomes
an unbound one, hence there are still workers?

Regards,
Boqun

> work accumulates, workqueue lockup detector fires.
> 
> Before 61bbcfb50514, GFP_ATOMIC allocation went straight to
> SRCU_SIZE_BIG, the mask = ~0 path was never reached.
> 
> Affects systems with convert_to_big active (auto when nr_cpu_ids >= 128)
> and possible CPUs > online CPUs. Hit on s390 LPAR (76 online, 400 possible),
> where possible CPUs > online CPUs is the usual case.
> Also reproducible on x86 KVM --smp 16,maxcpus=255 (CONFIG_NR_CPUS=256)
> or simply -smp 1,maxcpus=2 with srcutree.convert_to_big=1
> or --smp 16,maxcpus=64 with srcutree.big_cpu_lim=32 (CONFIG_NR_CPUS=64)
> 
> s390 log (76 online CPUs, 400 possible, all pools 76-399 stuck):
> 
>   BUG: workqueue lockup - pool cpus=76 node=0 flags=0x4 nice=0 stuck for 1842s!
>   BUG: workqueue lockup - pool cpus=77 node=0 flags=0x4 nice=0 stuck for 1842s!
>   ...
>   BUG: workqueue lockup - pool cpus=399 node=0 flags=0x4 nice=0 stuck for 1842s!
>   Showing busy workqueues and worker pools:
>   workqueue rcu_gp: flags=0x108
>     pwq 306: cpus=76 node=0 flags=0x4 nice=0 active=3 refcnt=4
>       pending: 3*srcu_invoke_callbacks
>     pwq 310: cpus=77 node=0 flags=0x4 nice=0 active=3 refcnt=4
>       pending: 3*srcu_invoke_callbacks
>     ...
>     pwq 1598: cpus=399 node=0 flags=0x4 nice=0 active=3 refcnt=4
>       pending: 3*srcu_invoke_callbacks
> 
> Not sure if replacing mask = ~0 with something derived from
> cpu_online_mask would be racy in that context.
> 
> [1] https://lore.kernel.org/rcu/acRho9L4zA2MRuxc@tardis.local
> [2] https://lore.kernel.org/rcu/fe28d664-3872-40f6-83c6-818627ad5b7d@paulmck-laptop


Thread overview: 17+ messages
2026-04-09 13:08 BUG: workqueue lockup - SRCU schedules work on not-online CPUs during size transition Vasily Gorbik
2026-04-09 17:22 ` Paul E. McKenney
2026-04-09 19:15   ` Vasily Gorbik
2026-04-09 20:10     ` Paul E. McKenney
2026-04-10  4:03       ` Paul E. McKenney
2026-04-09 17:26 ` Boqun Feng [this message]
2026-04-09 17:40   ` Boqun Feng
2026-04-09 17:47     ` Tejun Heo
2026-04-09 17:48       ` Tejun Heo
2026-04-09 18:04         ` Paul E. McKenney
2026-04-09 18:09           ` Tejun Heo
2026-04-09 18:15             ` Paul E. McKenney
2026-04-09 18:10       ` Boqun Feng
2026-04-09 18:27         ` Paul E. McKenney
2026-04-10 18:53         ` Tejun Heo
2026-04-10 19:17           ` Paul E. McKenney
2026-04-10 19:29             ` Tejun Heo
