From mboxrd@z Thu Jan 1 00:00:00 1970
From: Greg Kroah-Hartman
To: stable@vger.kernel.org
Cc: Greg Kroah-Hartman, patches@lists.linux.dev, Valentin Schneider,
    Lai Jiangshan, Tejun Heo, Sasha Levin
Subject: [PATCH 6.2 0192/1001] workqueue: Protects wq_unbound_cpumask with wq_pool_attach_mutex
Date: Tue, 7 Mar 2023 17:49:24 +0100
Message-Id: <20230307170030.222294811@linuxfoundation.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230307170022.094103862@linuxfoundation.org>
References: <20230307170022.094103862@linuxfoundation.org>
User-Agent: quilt/0.67
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Precedence: bulk
X-Mailing-List: stable@vger.kernel.org

From: Lai Jiangshan

[ Upstream commit 99c621ef243bda726fb8d982a274ded96570b410 ]

When unbind_workers() reads wq_unbound_cpumask to set the affinity of
freshly-unbound kworkers, it only holds wq_pool_attach_mutex. This isn't
sufficient, as wq_unbound_cpumask is only protected by wq_pool_mutex.

Make wq_unbound_cpumask protected with wq_pool_attach_mutex as well, and
remove the need for the temporary saved_cpumask.

Fixes: 10a5a651e3af ("workqueue: Restrict kworker in the offline CPU pool running on housekeeping CPUs")
Reported-by: Valentin Schneider
Signed-off-by: Lai Jiangshan
Signed-off-by: Tejun Heo
Signed-off-by: Sasha Levin
---
 kernel/workqueue.c | 41 ++++++++++++++++-------------------------
 1 file changed, 16 insertions(+), 25 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 07895deca2711..76ea87b0251ce 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -326,7 +326,7 @@ static struct rcuwait manager_wait = __RCUWAIT_INITIALIZER(manager_wait);
 static LIST_HEAD(workqueues);		/* PR: list of all workqueues */
 static bool workqueue_freezing;		/* PL: have wqs started freezing? */
 
-/* PL: allowable cpus for unbound wqs and work items */
+/* PL&A: allowable cpus for unbound wqs and work items */
 static cpumask_var_t wq_unbound_cpumask;
 
 /* CPU where unbound work was last round robin scheduled from this CPU */
@@ -3952,7 +3952,8 @@ static void apply_wqattrs_cleanup(struct apply_wqattrs_ctx *ctx)
 /* allocate the attrs and pwqs for later installation */
 static struct apply_wqattrs_ctx *
 apply_wqattrs_prepare(struct workqueue_struct *wq,
-		      const struct workqueue_attrs *attrs)
+		      const struct workqueue_attrs *attrs,
+		      const cpumask_var_t unbound_cpumask)
 {
 	struct apply_wqattrs_ctx *ctx;
 	struct workqueue_attrs *new_attrs, *tmp_attrs;
@@ -3968,14 +3969,15 @@ apply_wqattrs_prepare(struct workqueue_struct *wq,
 		goto out_free;
 
 	/*
-	 * Calculate the attrs of the default pwq.
+	 * Calculate the attrs of the default pwq with unbound_cpumask
+	 * which is wq_unbound_cpumask or to set to wq_unbound_cpumask.
 	 * If the user configured cpumask doesn't overlap with the
 	 * wq_unbound_cpumask, we fallback to the wq_unbound_cpumask.
 	 */
 	copy_workqueue_attrs(new_attrs, attrs);
-	cpumask_and(new_attrs->cpumask, new_attrs->cpumask, wq_unbound_cpumask);
+	cpumask_and(new_attrs->cpumask, new_attrs->cpumask, unbound_cpumask);
 	if (unlikely(cpumask_empty(new_attrs->cpumask)))
-		cpumask_copy(new_attrs->cpumask, wq_unbound_cpumask);
+		cpumask_copy(new_attrs->cpumask, unbound_cpumask);
 
 	/*
 	 * We may create multiple pwqs with differing cpumasks. Make a
@@ -4072,7 +4074,7 @@ static int apply_workqueue_attrs_locked(struct workqueue_struct *wq,
 		wq->flags &= ~__WQ_ORDERED;
 	}
 
-	ctx = apply_wqattrs_prepare(wq, attrs);
+	ctx = apply_wqattrs_prepare(wq, attrs, wq_unbound_cpumask);
 	if (!ctx)
 		return -ENOMEM;
 
@@ -5334,7 +5336,7 @@ void thaw_workqueues(void)
 }
 #endif /* CONFIG_FREEZER */
 
-static int workqueue_apply_unbound_cpumask(void)
+static int workqueue_apply_unbound_cpumask(const cpumask_var_t unbound_cpumask)
 {
 	LIST_HEAD(ctxs);
 	int ret = 0;
@@ -5350,7 +5352,7 @@ static int workqueue_apply_unbound_cpumask(void)
 		if (wq->flags & __WQ_ORDERED)
 			continue;
 
-		ctx = apply_wqattrs_prepare(wq, wq->unbound_attrs);
+		ctx = apply_wqattrs_prepare(wq, wq->unbound_attrs, unbound_cpumask);
 		if (!ctx) {
 			ret = -ENOMEM;
 			break;
@@ -5365,6 +5367,11 @@ static int workqueue_apply_unbound_cpumask(void)
 		apply_wqattrs_cleanup(ctx);
 	}
 
+	if (!ret) {
+		mutex_lock(&wq_pool_attach_mutex);
+		cpumask_copy(wq_unbound_cpumask, unbound_cpumask);
+		mutex_unlock(&wq_pool_attach_mutex);
+	}
 	return ret;
 }
 
@@ -5383,7 +5390,6 @@ static int workqueue_apply_unbound_cpumask(void)
 int workqueue_set_unbound_cpumask(cpumask_var_t cpumask)
 {
 	int ret = -EINVAL;
-	cpumask_var_t saved_cpumask;
 
 	/*
 	 * Not excluding isolated cpus on purpose.
@@ -5397,23 +5403,8 @@ int workqueue_set_unbound_cpumask(cpumask_var_t cpumask)
 			goto out_unlock;
 		}
 
-		if (!zalloc_cpumask_var(&saved_cpumask, GFP_KERNEL)) {
-			ret = -ENOMEM;
-			goto out_unlock;
-		}
-
-		/* save the old wq_unbound_cpumask. */
-		cpumask_copy(saved_cpumask, wq_unbound_cpumask);
-
-		/* update wq_unbound_cpumask at first and apply it to wqs. */
-		cpumask_copy(wq_unbound_cpumask, cpumask);
-		ret = workqueue_apply_unbound_cpumask();
-
-		/* restore the wq_unbound_cpumask when failed. */
-		if (ret < 0)
-			cpumask_copy(wq_unbound_cpumask, saved_cpumask);
+		ret = workqueue_apply_unbound_cpumask(cpumask);
 
-		free_cpumask_var(saved_cpumask);
 out_unlock:
 		apply_wqattrs_unlock();
 	}
-- 
2.39.2
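
[Editorial note, not part of the original mail] The "PL&A" annotation the patch adds means the field is written only while both wq_pool_mutex and wq_pool_attach_mutex are held, so a reader holding either mutex sees a stable value; that is what makes it safe for unbind_workers(), which holds only wq_pool_attach_mutex, to read wq_unbound_cpumask. Below is a minimal, hypothetical userspace sketch of that "write under both locks, read under either" pattern using pthread mutexes and a plain integer mask in place of the kernel's cpumask; the names set_unbound_mask() and read_unbound_mask() are illustrative and do not appear in workqueue.c.

```c
/*
 * Illustrative only: models the locking rule from the patch with pthread
 * mutexes in userspace; this is not kernel code.
 *
 * Build with: cc -pthread sketch.c -o sketch
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t pool_mutex = PTHREAD_MUTEX_INITIALIZER;        /* stands in for wq_pool_mutex */
static pthread_mutex_t pool_attach_mutex = PTHREAD_MUTEX_INITIALIZER; /* stands in for wq_pool_attach_mutex */
static unsigned long unbound_mask = 0xff;                             /* stands in for wq_unbound_cpumask */

/*
 * Writer: like workqueue_set_unbound_cpumask(), runs with pool_mutex held
 * and additionally takes pool_attach_mutex around the update, so the mask
 * is protected by both locks ("PL&A").
 */
static void set_unbound_mask(unsigned long new_mask)
{
	pthread_mutex_lock(&pool_mutex);
	pthread_mutex_lock(&pool_attach_mutex);
	unbound_mask = new_mask;
	pthread_mutex_unlock(&pool_attach_mutex);
	pthread_mutex_unlock(&pool_mutex);
}

/*
 * Reader: like unbind_workers(), holds only pool_attach_mutex. That is now
 * sufficient because every writer also holds this mutex while updating.
 */
static unsigned long read_unbound_mask(void)
{
	unsigned long mask;

	pthread_mutex_lock(&pool_attach_mutex);
	mask = unbound_mask;
	pthread_mutex_unlock(&pool_attach_mutex);
	return mask;
}

int main(void)
{
	set_unbound_mask(0x0f);
	printf("mask = %#lx\n", read_unbound_mask());
	return 0;
}
```

The point of the pattern is that a reader needs only the cheaper, more widely held lock (here pool_attach_mutex) to be serialized against every update, because no writer can change the value without also taking that lock.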