Message-ID: <4ed2b000-2306-49fe-87b8-8bfd7f6b6d43@kernel.org>
Date: Thu, 7 May 2026 12:20:33 +0200
X-Mailing-List: linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 4/5] workqueue: Show all busy workers in stall diagnostics
To: Breno Leitao, Tejun Heo, Lai Jiangshan, Andrew Morton
Cc: linux-kernel@vger.kernel.org, Omar Sandoval, Song Liu, Danielle Costantino, kasan-dev@googlegroups.com, Petr Mladek, kernel-team@meta.com
From: Jiri Slaby
References: <20260305-wqstall_start-at-v2-0-b60863ee0899@debian.org>
 <20260305-wqstall_start-at-v2-4-b60863ee0899@debian.org>
In-Reply-To:
 <20260305-wqstall_start-at-v2-4-b60863ee0899@debian.org>

On 05. 03. 26, 17:15, Breno Leitao wrote:
> show_cpu_pool_hog() only prints workers whose task is currently running
> on the CPU (task_is_running()). This misses workers that are busy
> processing a work item but are sleeping or blocked — for example, a
> worker that clears PF_WQ_WORKER and enters wait_event_idle(). Such a
> worker still occupies a pool slot and prevents progress, yet produces
> an empty backtrace section in the watchdog output.
> 
> This is happening on real arm64 systems, where
> toggle_allocation_gate() IPIs every single CPU in the machine (which
> lacks NMI), causing workqueue stalls that show empty backtraces because
> toggle_allocation_gate() is sleeping in wait_event_idle().
> 
> Remove the task_is_running() filter so every in-flight worker in the
> pool's busy_hash is dumped. The busy_hash is protected by pool->lock,
> which is already held.
> 
> Signed-off-by: Breno Leitao
> ---
>  kernel/workqueue.c | 28 +++++++++++++---------------
>  1 file changed, 13 insertions(+), 15 deletions(-)
> 
> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> index 56d8af13843f8..09b9ad78d566c 100644
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -7583,9 +7583,9 @@ MODULE_PARM_DESC(panic_on_stall_time, "Panic if stall exceeds this many seconds
> 
>  /*
>   * Show workers that might prevent the processing of pending work items.
> - * The only candidates are CPU-bound workers in the running state.
> - * Pending work items should be handled by another idle worker
> - * in all other situations.
> + * A busy worker that is not running on the CPU (e.g. sleeping in
> + * wait_event_idle() with PF_WQ_WORKER cleared) can stall the pool just as
> + * effectively as a CPU-bound one, so dump every in-flight worker.
>  */
>  static void show_cpu_pool_hog(struct worker_pool *pool)
>  {
> @@ -7596,19 +7596,17 @@ static void show_cpu_pool_hog(struct worker_pool *pool)
>  	raw_spin_lock_irqsave(&pool->lock, irq_flags);
>  
>  	hash_for_each(pool->busy_hash, bkt, worker, hentry) {
> -		if (task_is_running(worker->task)) {

We see dumps from non-existent CPUs on 7.0, like:

  BUG: workqueue lockup - pool cpus=144 node=0 flags=0x4 nice=0 stuck for 168224s!
  ...
  Showing busy workqueues and worker pools:
  workqueue rcu_gp: flags=0x108
    pwq 578: cpus=144 node=0 flags=0x4 nice=0 active=3 refcnt=4

in:
  https://bugzilla.suse.com/show_bug.cgi?id=1263947

Can this (or another patch from the series) cause this? Should there be
something like cpu_online() instead of task_is_running() somewhere?

> -			/*
> -			 * Defer printing to avoid deadlocks in console
> -			 * drivers that queue work while holding locks
> -			 * also taken in their write paths.
> -			 */
> -			printk_deferred_enter();
> +		/*
> +		 * Defer printing to avoid deadlocks in console
> +		 * drivers that queue work while holding locks
> +		 * also taken in their write paths.
> +		 */
> +		printk_deferred_enter();
>  
> -			pr_info("pool %d:\n", pool->id);
> -			sched_show_task(worker->task);
> +		pr_info("pool %d:\n", pool->id);
> +		sched_show_task(worker->task);
>  
> -			printk_deferred_exit();
> -		}
> +		printk_deferred_exit();
>  	}
>  
>  	raw_spin_unlock_irqrestore(&pool->lock, irq_flags);

thanks,
-- 
js
suse labs