From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 01 Apr 2026 10:20:13 -1000
Message-ID: <604e3d6aea8767a245160e8c6d3b4b4c@kernel.org>
From: Tejun Heo
To: Matthew Brost
Cc: Waiman Long, intel-xe@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
 linux-kernel@vger.kernel.org, Carlos Santa, Ryan Neph, stable@vger.kernel.org,
 Lai Jiangshan
Subject: Re: [PATCH v2] workqueue: Add pool_workqueue to pending_pwqs list when unplugging multiple inactive works
In-Reply-To: <20260401010739.1053192-1-matthew.brost@intel.com>
References: <20260401010739.1053192-1-matthew.brost@intel.com>

Hello,

Applied to wq/for-7.0-fixes with the comment updated as below.

Thanks.

--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1852,12 +1852,11 @@
 	if (pwq_activate_first_inactive(pwq, true)) {
 		/*
-		 * pwq is unbound. Additional inactive work_items need
-		 * to reinsert the pwq into nna->pending_pwqs, which
-		 * was skipped while pwq->plugged was true. See
-		 * pwq_tryinc_nr_active() for additional details.
+		 * While plugged, queueing skips activation which
+		 * includes bumping the nr_active count and adding the
+		 * pwq to nna->pending_pwqs if the count can't be
+		 * obtained. We need to restore both for the pwq being
+		 * unplugged. The first call activates the first
+		 * inactive work item and the second, if there are more
+		 * inactive, puts the pwq on pending_pwqs.
 		 */

-- 
tejun
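[Editor's note] The two-step restore that the updated comment describes can be modeled with a small toy sketch. The structs and helpers below (`nna`, `pwq`, `tryinc_nr_active`, `activate_first_inactive`, `unplug`) are simplified stand-ins for the kernel's `wq_node_nr_active`, `pool_workqueue`, `pwq_tryinc_nr_active()` and `pwq_activate_first_inactive()`, not the real implementation:

```c
/*
 * Toy model of the plugged pool_workqueue behavior: while plugged,
 * queueing skips activation entirely; on unplug, both the nr_active
 * count and membership on pending_pwqs must be restored.
 * All names and structures here are simplified illustrations.
 */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct nna {                    /* stand-in for wq_node_nr_active */
	int nr_active;          /* currently active work items */
	int max;                /* per-node concurrency cap */
	bool pwq_pending;       /* is our pwq on nna->pending_pwqs? */
};

struct pwq {                    /* stand-in for pool_workqueue */
	bool plugged;
	int nr_inactive;        /* queued-but-not-activated items */
	struct nna *nna;
};

/* Simplified pwq_tryinc_nr_active(): grab an nr_active slot if allowed. */
static bool tryinc_nr_active(struct pwq *pwq)
{
	if (pwq->plugged)
		return false;   /* while plugged, activation is skipped */
	if (pwq->nna->nr_active < pwq->nna->max) {
		pwq->nna->nr_active++;
		return true;
	}
	return false;
}

/* Simplified pwq_activate_first_inactive(): activate one inactive item. */
static bool activate_first_inactive(struct pwq *pwq)
{
	if (pwq->nr_inactive > 0 && tryinc_nr_active(pwq)) {
		pwq->nr_inactive--;
		return true;
	}
	return false;
}

/* Unplug: activate the first inactive item, and if more inactive items
 * remain, put the pwq back on pending_pwqs so they get their turn. */
static void unplug(struct pwq *pwq)
{
	pwq->plugged = false;
	if (activate_first_inactive(pwq)) {
		if (pwq->nr_inactive > 0)
			pwq->nna->pwq_pending = true;
	}
}
```

Driving the model with three queued-but-inactive items shows the behavior the patch fixes: one item is activated immediately and the pwq lands on the pending list for the rest.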