From: Frederic Weisbecker
To: LKML
Cc: Frederic Weisbecker, Michal Koutný, Andrew Morton, Bjorn Helgaas,
    Catalin Marinas, Chen Ridong, Danilo Krummrich, David S. Miller,
    Eric Dumazet, Gabriele Monaco, Greg Kroah-Hartman, Ingo Molnar,
    Jakub Kicinski, Jens Axboe, Johannes Weiner, Lai Jiangshan,
    Marco Crivellari, Michal Hocko, Muchun Song, Paolo Abeni,
    Peter Zijlstra, Phil Auld, Rafael J. Wysocki, Roman Gushchin,
    Shakeel Butt, Simon Horman, Tejun Heo, Thomas Gleixner,
    Vlastimil Babka, Waiman Long, Will Deacon,
    cgroups@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-block@vger.kernel.org, linux-mm@kvack.org,
    linux-pci@vger.kernel.org, netdev@vger.kernel.org
Subject: [PATCH 25/33] kthread: Include unbound kthreads in the managed affinity list
Date: Wed, 24 Dec 2025 14:45:12 +0100
Message-ID: <20251224134520.33231-26-frederic@kernel.org>
In-Reply-To: <20251224134520.33231-1-frederic@kernel.org>
References: <20251224134520.33231-1-frederic@kernel.org>

The managed affinity list currently contains only unbound kthreads that
have affinity preferences. Unbound kthreads that are globally affine by
default are left off the list because their affinity is already managed
by the scheduler (through the fallback housekeeping mask) and by cpuset.

However, in order to preserve the preferred affinity of kthreads, cpuset
will delegate the propagation of isolated partition updates to the
housekeeping and kthread code. Prepare for that by including all unbound
kthreads in the managed affinity list.

Signed-off-by: Frederic Weisbecker
---
 kernel/kthread.c | 70 ++++++++++++++++++++++++++++--------------------
 1 file changed, 41 insertions(+), 29 deletions(-)
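
Before the diff, a hedged illustration may help. The sketch below models
the two behavioural changes as a standalone userspace C program: the new
NUMA_NO_NODE fallback in kthread_fetch_affinity() and the new filter in
kthreads_online_cpu(). It is an illustration only, not kernel code:
cpumask_t is shrunk to a 64-bit word, and the housekeeping mask, the node
topology, struct kthread_model, fetch_affinity() and needs_hotplug_repin()
are all invented for the example.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUMA_NO_NODE (-1)

typedef uint64_t cpumask_t;                     /* toy stand-in: one bit per CPU */

static const cpumask_t housekeeping = 0x0f;     /* pretend CPUs 0-3 are housekeepers */
static const cpumask_t node_mask[2] = { 0x33, 0xcc }; /* invented 2-node topology */

struct kthread_model {
	const cpumask_t *preferred_affinity;    /* set via kthread_affine_preferred() */
	int node;                               /* NUMA_NO_NODE if globally unbound */
};

/*
 * Mirrors the patched kthread_fetch_affinity(): pick a preference source,
 * then restrict it to housekeeping CPUs. The new behaviour is the
 * NUMA_NO_NODE branch, which falls back to the housekeeping mask instead
 * of warning and returning early.
 */
static cpumask_t fetch_affinity(const struct kthread_model *k)
{
	cpumask_t pref;

	if (k->preferred_affinity)
		pref = *k->preferred_affinity;
	else if (k->node == NUMA_NO_NODE)
		pref = housekeeping;
	else
		pref = node_mask[k->node];

	return pref & housekeeping;
}

/*
 * Mirrors the new filter in kthreads_online_cpu(): only kthreads with an
 * explicit preference (a mask or a node) need re-pinning when a CPU comes
 * online; the rest already sit on the housekeeping mask.
 */
static bool needs_hotplug_repin(const struct kthread_model *k)
{
	return k->preferred_affinity || k->node != NUMA_NO_NODE;
}

int main(void)
{
	struct kthread_model plain = { NULL, NUMA_NO_NODE };
	struct kthread_model node1 = { NULL, 1 };

	printf("plain unbound:  mask=%#llx repin=%d\n",
	       (unsigned long long)fetch_affinity(&plain), needs_hotplug_repin(&plain));
	printf("node 1 kthread: mask=%#llx repin=%d\n",
	       (unsigned long long)fetch_affinity(&node1), needs_hotplug_repin(&node1));
	return 0;
}

Under this toy model, both kinds of kthreads end up on
kthread_affinity_list, but only the second kind does any work in the
hotplug callback.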

diff --git a/kernel/kthread.c b/kernel/kthread.c
index f1e4f1f35cae..51c0908d3d02 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -365,9 +365,10 @@ static void kthread_fetch_affinity(struct kthread *kthread, struct cpumask *cpum
 	if (kthread->preferred_affinity) {
 		pref = kthread->preferred_affinity;
 	} else {
-		if (WARN_ON_ONCE(kthread->node == NUMA_NO_NODE))
-			return;
-		pref = cpumask_of_node(kthread->node);
+		if (kthread->node == NUMA_NO_NODE)
+			pref = housekeeping_cpumask(HK_TYPE_KTHREAD);
+		else
+			pref = cpumask_of_node(kthread->node);
 	}
 
 	cpumask_and(cpumask, pref, housekeeping_cpumask(HK_TYPE_KTHREAD));
@@ -380,32 +381,29 @@ static void kthread_affine_node(void)
 	struct kthread *kthread = to_kthread(current);
 	cpumask_var_t affinity;
 
-	WARN_ON_ONCE(kthread_is_per_cpu(current));
+	if (WARN_ON_ONCE(kthread_is_per_cpu(current)))
+		return;
 
-	if (kthread->node == NUMA_NO_NODE) {
-		housekeeping_affine(current, HK_TYPE_KTHREAD);
-	} else {
-		if (!zalloc_cpumask_var(&affinity, GFP_KERNEL)) {
-			WARN_ON_ONCE(1);
-			return;
-		}
-
-		mutex_lock(&kthread_affinity_lock);
-		WARN_ON_ONCE(!list_empty(&kthread->affinity_node));
-		list_add_tail(&kthread->affinity_node, &kthread_affinity_list);
-		/*
-		 * The node cpumask is racy when read from kthread() but:
-		 * - a racing CPU going down will either fail on the subsequent
-		 *   call to set_cpus_allowed_ptr() or be migrated to housekeepers
-		 *   afterwards by the scheduler.
-		 * - a racing CPU going up will be handled by kthreads_online_cpu()
-		 */
-		kthread_fetch_affinity(kthread, affinity);
-		set_cpus_allowed_ptr(current, affinity);
-		mutex_unlock(&kthread_affinity_lock);
-
-		free_cpumask_var(affinity);
+	if (!zalloc_cpumask_var(&affinity, GFP_KERNEL)) {
+		WARN_ON_ONCE(1);
+		return;
 	}
+
+	mutex_lock(&kthread_affinity_lock);
+	WARN_ON_ONCE(!list_empty(&kthread->affinity_node));
+	list_add_tail(&kthread->affinity_node, &kthread_affinity_list);
+	/*
+	 * The node cpumask is racy when read from kthread() but:
+	 * - a racing CPU going down will either fail on the subsequent
+	 *   call to set_cpus_allowed_ptr() or be migrated to housekeepers
+	 *   afterwards by the scheduler.
+	 * - a racing CPU going up will be handled by kthreads_online_cpu()
+	 */
+	kthread_fetch_affinity(kthread, affinity);
+	set_cpus_allowed_ptr(current, affinity);
+	mutex_unlock(&kthread_affinity_lock);
+
+	free_cpumask_var(affinity);
 }
 
 static int kthread(void *_create)
@@ -919,8 +917,22 @@ static int kthreads_online_cpu(unsigned int cpu)
 			ret = -EINVAL;
 			continue;
 		}
-		kthread_fetch_affinity(k, affinity);
-		set_cpus_allowed_ptr(k->task, affinity);
+
+		/*
+		 * Unbound kthreads without preferred affinity are already affine
+		 * to housekeeping, whether those CPUs are online or not. So no need
+		 * to handle newly online CPUs for them.
+		 *
+		 * But kthreads with a preferred affinity or node are different:
+		 * if none of their preferred CPUs are online and part of
+		 * housekeeping at the same time, they must be affine to housekeeping.
+		 * But as soon as one of their preferred CPUs becomes online, they
+		 * must be affine to them.
+		 */
+		if (k->preferred_affinity || k->node != NUMA_NO_NODE) {
+			kthread_fetch_affinity(k, affinity);
+			set_cpus_allowed_ptr(k->task, affinity);
+		}
 	}
 
 	free_cpumask_var(affinity);
-- 
2.51.1
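
One design note on the patch above: registering globally unbound
kthreads on kthread_affinity_list costs essentially nothing on the
CPU-hotplug path, since kthreads_online_cpu() now skips entries that
have neither a preferred mask nor a node. What the larger list buys,
per the changelog, is a single place for the housekeeping and kthread
code to walk once cpuset delegates isolated-partition update
propagation to them in a later patch of this series.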