From: Frederic Weisbecker
To: LKML
Cc: Frederic Weisbecker, Michal Koutný, Andrew Morton, Bjorn Helgaas,
	Catalin Marinas, Chen Ridong, Danilo Krummrich, David S. Miller,
	Eric Dumazet, Gabriele Monaco, Greg Kroah-Hartman, Ingo Molnar,
	Jakub Kicinski, Jens Axboe, Johannes Weiner, Lai Jiangshan,
	Marco Crivellari, Michal Hocko, Muchun Song, Paolo Abeni,
	Peter Zijlstra, Phil Auld, Rafael J. Wysocki,
	Roman Gushchin, Shakeel Butt, Simon Horman, Tejun Heo,
	Thomas Gleixner, Vlastimil Babka, Waiman Long, Will Deacon,
	cgroups@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-block@vger.kernel.org, linux-mm@kvack.org,
	linux-pci@vger.kernel.org, netdev@vger.kernel.org
Subject: [PATCH 24/33] kthread: Refine naming of affinity related fields
Date: Wed, 24 Dec 2025 14:45:11 +0100
Message-ID: <20251224134520.33231-25-frederic@kernel.org>
X-Mailer: git-send-email 2.51.1
In-Reply-To: <20251224134520.33231-1-frederic@kernel.org>
References: <20251224134520.33231-1-frederic@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The kthread preferred affinity related fields use "hotplug" as the base of
their naming because the affinity management was initially only meant to
deal with CPU hotplug. The scope of this role is now going to broaden and
also cover cpuset isolated partition updates. Rename the fields accordingly.

Signed-off-by: Frederic Weisbecker
---
 kernel/kthread.c | 38 +++++++++++++++++++-------------------
 1 file changed, 19 insertions(+), 19 deletions(-)
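[ Not part of the patch: for readers unfamiliar with the preferred-affinity
  machinery whose bookkeeping is renamed here, a minimal usage sketch follows.
  The driver-side names (my_worker_fn, my_worker_start) and the per-node policy
  are hypothetical; kthread_create(), kthread_affine_preferred(),
  cpumask_of_node() and wake_up_process() are the real APIs involved. ]

#include <linux/kthread.h>
#include <linux/topology.h>
#include <linux/delay.h>
#include <linux/printk.h>
#include <linux/err.h>

static int my_worker_fn(void *data)
{
	while (!kthread_should_stop()) {
		/* ... do the actual work here ... */
		msleep_interruptible(1000);
	}
	return 0;
}

static struct task_struct *my_worker_start(int nid)
{
	struct task_struct *t;

	t = kthread_create(my_worker_fn, NULL, "my_worker/%d", nid);
	if (IS_ERR(t))
		return t;

	/*
	 * Record the preferred mask before the thread first runs: this also
	 * links the kthread into the global affinity list (renamed to
	 * kthread_affinity_list by this patch) so that the kthread core can
	 * re-apply the mask when preferred CPUs come back online.
	 */
	if (kthread_affine_preferred(t, cpumask_of_node(nid)))
		pr_warn("my_worker: preferred affinity not applied\n");

	wake_up_process(t);
	return t;
}

[ Such a kthread falls back to the housekeeping CPUs while its whole
  preferred set is offline, and kthreads_online_cpu() (the renamed list walk
  below) restores the preferred mask as those CPUs return. ]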
diff --git a/kernel/kthread.c b/kernel/kthread.c
index 99a3808d086f..f1e4f1f35cae 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -35,8 +35,8 @@ static DEFINE_SPINLOCK(kthread_create_lock);
 static LIST_HEAD(kthread_create_list);
 struct task_struct *kthreadd_task;
 
-static LIST_HEAD(kthreads_hotplug);
-static DEFINE_MUTEX(kthreads_hotplug_lock);
+static LIST_HEAD(kthread_affinity_list);
+static DEFINE_MUTEX(kthread_affinity_lock);
 
 struct kthread_create_info
 {
@@ -69,7 +69,7 @@ struct kthread {
 	/* To store the full name if task comm is truncated. */
 	char *full_name;
 	struct task_struct *task;
-	struct list_head hotplug_node;
+	struct list_head affinity_node;
 	struct cpumask *preferred_affinity;
 };
 
@@ -128,7 +128,7 @@ bool set_kthread_struct(struct task_struct *p)
 
 	init_completion(&kthread->exited);
 	init_completion(&kthread->parked);
-	INIT_LIST_HEAD(&kthread->hotplug_node);
+	INIT_LIST_HEAD(&kthread->affinity_node);
 	p->vfork_done = &kthread->exited;
 
 	kthread->task = p;
@@ -323,10 +323,10 @@ void __noreturn kthread_exit(long result)
 {
 	struct kthread *kthread = to_kthread(current);
 	kthread->result = result;
-	if (!list_empty(&kthread->hotplug_node)) {
-		mutex_lock(&kthreads_hotplug_lock);
-		list_del(&kthread->hotplug_node);
-		mutex_unlock(&kthreads_hotplug_lock);
+	if (!list_empty(&kthread->affinity_node)) {
+		mutex_lock(&kthread_affinity_lock);
+		list_del(&kthread->affinity_node);
+		mutex_unlock(&kthread_affinity_lock);
 
 		if (kthread->preferred_affinity) {
 			kfree(kthread->preferred_affinity);
@@ -390,9 +390,9 @@ static void kthread_affine_node(void)
 			return;
 		}
 
-		mutex_lock(&kthreads_hotplug_lock);
-		WARN_ON_ONCE(!list_empty(&kthread->hotplug_node));
-		list_add_tail(&kthread->hotplug_node, &kthreads_hotplug);
+		mutex_lock(&kthread_affinity_lock);
+		WARN_ON_ONCE(!list_empty(&kthread->affinity_node));
+		list_add_tail(&kthread->affinity_node, &kthread_affinity_list);
 		/*
 		 * The node cpumask is racy when read from kthread() but:
 		 * - a racing CPU going down will either fail on the subsequent
@@ -402,7 +402,7 @@ static void kthread_affine_node(void)
 		 */
 		kthread_fetch_affinity(kthread, affinity);
 		set_cpus_allowed_ptr(current, affinity);
-		mutex_unlock(&kthreads_hotplug_lock);
+		mutex_unlock(&kthread_affinity_lock);
 
 		free_cpumask_var(affinity);
 	}
@@ -873,16 +873,16 @@ int kthread_affine_preferred(struct task_struct *p, const struct cpumask *mask)
 		goto out;
 	}
 
-	mutex_lock(&kthreads_hotplug_lock);
+	mutex_lock(&kthread_affinity_lock);
 	cpumask_copy(kthread->preferred_affinity, mask);
-	WARN_ON_ONCE(!list_empty(&kthread->hotplug_node));
-	list_add_tail(&kthread->hotplug_node, &kthreads_hotplug);
+	WARN_ON_ONCE(!list_empty(&kthread->affinity_node));
+	list_add_tail(&kthread->affinity_node, &kthread_affinity_list);
 	kthread_fetch_affinity(kthread, affinity);
 	scoped_guard (raw_spinlock_irqsave, &p->pi_lock)
 		set_cpus_allowed_force(p, affinity);
-	mutex_unlock(&kthreads_hotplug_lock);
+	mutex_unlock(&kthread_affinity_lock);
 
 out:
 	free_cpumask_var(affinity);
@@ -903,9 +903,9 @@ static int kthreads_online_cpu(unsigned int cpu)
 	struct kthread *k;
 	int ret;
 
-	guard(mutex)(&kthreads_hotplug_lock);
+	guard(mutex)(&kthread_affinity_lock);
 
-	if (list_empty(&kthreads_hotplug))
+	if (list_empty(&kthread_affinity_list))
 		return 0;
 
 	if (!zalloc_cpumask_var(&affinity, GFP_KERNEL))
@@ -913,7 +913,7 @@ static int kthreads_online_cpu(unsigned int cpu)
 
 	ret = 0;
 
-	list_for_each_entry(k, &kthreads_hotplug, hotplug_node) {
+	list_for_each_entry(k, &kthread_affinity_list, affinity_node) {
 		if (WARN_ON_ONCE((k->task->flags & PF_NO_SETAFFINITY) ||
 				 kthread_is_per_cpu(k->task))) {
 			ret = -EINVAL;
-- 
2.51.1