From mboxrd@z Thu Jan  1 00:00:00 1970
From: Tony Luck
To: Fenghua Yu, Reinette Chatre, Maciej Wieczor-Retman, Peter Newman,
	James Morse, Babu Moger, Drew Fustini, Dave Martin, Chen Yu
Cc: Borislav Petkov, x86@kernel.org, linux-kernel@vger.kernel.org,
	patches@lists.linux.dev, Tony Luck
Subject: [PATCH 1/4] fs/resctrl: Move functions to avoid forward references in subsequent fixes
Date: Fri, 8 May 2026 11:21:40 -0700
Message-ID: <20260508182143.14592-2-tony.luck@intel.com>
X-Mailer: git-send-email 2.54.0
In-Reply-To: <20260508182143.14592-1-tony.luck@intel.com>
References: <20260508182143.14592-1-tony.luck@intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

No functional change. Just pull some functions before rdt_get_tree().
Signed-off-by: Tony Luck
---
 fs/resctrl/rdtgroup.c | 376 +++++++++++++++++++++---------------------
 1 file changed, 188 insertions(+), 188 deletions(-)

diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c
index 5dfdaa6f9d8f..a6376a3fc4c3 100644
--- a/fs/resctrl/rdtgroup.c
+++ b/fs/resctrl/rdtgroup.c
@@ -2782,6 +2782,194 @@ static void schemata_list_destroy(void)
 	}
 }
 
+/*
+ * Move tasks from one to the other group. If @from is NULL, then all tasks
+ * in the systems are moved unconditionally (used for teardown).
+ *
+ * If @mask is not NULL the cpus on which moved tasks are running are set
+ * in that mask so the update smp function call is restricted to affected
+ * cpus.
+ */
+static void rdt_move_group_tasks(struct rdtgroup *from, struct rdtgroup *to,
+				 struct cpumask *mask)
+{
+	struct task_struct *p, *t;
+
+	read_lock(&tasklist_lock);
+	for_each_process_thread(p, t) {
+		if (!from || is_closid_match(t, from) ||
+		    is_rmid_match(t, from)) {
+			resctrl_arch_set_closid_rmid(t, to->closid,
+						     to->mon.rmid);
+
+			/*
+			 * Order the closid/rmid stores above before the loads
+			 * in task_curr(). This pairs with the full barrier
+			 * between the rq->curr update and
+			 * resctrl_arch_sched_in() during context switch.
+			 */
+			smp_mb();
+
+			/*
+			 * If the task is on a CPU, set the CPU in the mask.
+			 * The detection is inaccurate as tasks might move or
+			 * schedule before the smp function call takes place.
+			 * In such a case the function call is pointless, but
+			 * there is no other side effect.
+			 */
+			if (IS_ENABLED(CONFIG_SMP) && mask && task_curr(t))
+				cpumask_set_cpu(task_cpu(t), mask);
+		}
+	}
+	read_unlock(&tasklist_lock);
+}
+
+static void free_all_child_rdtgrp(struct rdtgroup *rdtgrp)
+{
+	struct rdtgroup *sentry, *stmp;
+	struct list_head *head;
+
+	head = &rdtgrp->mon.crdtgrp_list;
+	list_for_each_entry_safe(sentry, stmp, head, mon.crdtgrp_list) {
+		rdtgroup_unassign_cntrs(sentry);
+		free_rmid(sentry->closid, sentry->mon.rmid);
+		list_del(&sentry->mon.crdtgrp_list);
+
+		if (atomic_read(&sentry->waitcount) != 0)
+			sentry->flags = RDT_DELETED;
+		else
+			rdtgroup_remove(sentry);
+	}
+}
+
+/*
+ * Forcibly remove all of subdirectories under root.
+ */
+static void rmdir_all_sub(void)
+{
+	struct rdtgroup *rdtgrp, *tmp;
+
+	/* Move all tasks to the default resource group */
+	rdt_move_group_tasks(NULL, &rdtgroup_default, NULL);
+
+	list_for_each_entry_safe(rdtgrp, tmp, &rdt_all_groups, rdtgroup_list) {
+		/* Free any child rmids */
+		free_all_child_rdtgrp(rdtgrp);
+
+		/* Remove each rdtgroup other than root */
+		if (rdtgrp == &rdtgroup_default)
+			continue;
+
+		if (rdtgrp->mode == RDT_MODE_PSEUDO_LOCKSETUP ||
+		    rdtgrp->mode == RDT_MODE_PSEUDO_LOCKED)
+			rdtgroup_pseudo_lock_remove(rdtgrp);
+
+		/*
+		 * Give any CPUs back to the default group. We cannot copy
+		 * cpu_online_mask because a CPU might have executed the
+		 * offline callback already, but is still marked online.
+		 */
+		cpumask_or(&rdtgroup_default.cpu_mask,
+			   &rdtgroup_default.cpu_mask, &rdtgrp->cpu_mask);
+
+		rdtgroup_unassign_cntrs(rdtgrp);
+
+		free_rmid(rdtgrp->closid, rdtgrp->mon.rmid);
+
+		kernfs_remove(rdtgrp->kn);
+		list_del(&rdtgrp->rdtgroup_list);
+
+		if (atomic_read(&rdtgrp->waitcount) != 0)
+			rdtgrp->flags = RDT_DELETED;
+		else
+			rdtgroup_remove(rdtgrp);
+	}
+	/* Notify online CPUs to update per cpu storage and PQR_ASSOC MSR */
+	update_closid_rmid(cpu_online_mask, &rdtgroup_default);
+
+	kernfs_remove(kn_info);
+	kernfs_remove(kn_mongrp);
+	kernfs_remove(kn_mondata);
+}
+
+/**
+ * mon_get_kn_priv() - Get the mon_data priv data for this event.
+ *
+ * The same values are used across the mon_data directories of all control and
+ * monitor groups for the same event in the same domain. Keep a list of
+ * allocated structures and re-use an existing one with the same values for
+ * @rid, @domid, etc.
+ *
+ * @rid:    The resource id for the event file being created.
+ * @domid:  The domain id for the event file being created.
+ * @mevt:   The type of event file being created.
+ * @do_sum: Whether SNC summing monitors are being created. Only set
+ *	    when @rid == RDT_RESOURCE_L3.
+ *
+ * Return: Pointer to mon_data private data of the event, NULL on failure.
+ */
+static struct mon_data *mon_get_kn_priv(enum resctrl_res_level rid, int domid,
+					struct mon_evt *mevt,
+					bool do_sum)
+{
+	struct mon_data *priv;
+
+	lockdep_assert_held(&rdtgroup_mutex);
+
+	list_for_each_entry(priv, &mon_data_kn_priv_list, list) {
+		if (priv->rid == rid && priv->domid == domid &&
+		    priv->sum == do_sum && priv->evt == mevt)
+			return priv;
+	}
+
+	priv = kzalloc_obj(*priv);
+	if (!priv)
+		return NULL;
+
+	priv->rid = rid;
+	priv->domid = domid;
+	priv->sum = do_sum;
+	priv->evt = mevt;
+	list_add_tail(&priv->list, &mon_data_kn_priv_list);
+
+	return priv;
+}
+
+/**
+ * mon_put_kn_priv() - Free all allocated mon_data structures.
+ *
+ * Called when resctrl file system is unmounted.
+ */
+static void mon_put_kn_priv(void)
+{
+	struct mon_data *priv, *tmp;
+
+	lockdep_assert_held(&rdtgroup_mutex);
+
+	list_for_each_entry_safe(priv, tmp, &mon_data_kn_priv_list, list) {
+		list_del(&priv->list);
+		kfree(priv);
+	}
+}
+
+static void resctrl_fs_teardown(void)
+{
+	lockdep_assert_held(&rdtgroup_mutex);
+
+	/* Cleared by rdtgroup_destroy_root() */
+	if (!rdtgroup_default.kn)
+		return;
+
+	rmdir_all_sub();
+	rdtgroup_unassign_cntrs(&rdtgroup_default);
+	mon_put_kn_priv();
+	rdt_pseudo_lock_release();
+	rdtgroup_default.mode = RDT_MODE_SHAREABLE;
+	closid_exit();
+	schemata_list_destroy();
+	rdtgroup_destroy_root();
+}
+
 static int rdt_get_tree(struct fs_context *fc)
 {
 	struct rdt_fs_context *ctx = rdt_fc2context(fc);
@@ -2981,194 +3169,6 @@ static int rdt_init_fs_context(struct fs_context *fc)
 	return 0;
 }
 
-/*
- * Move tasks from one to the other group. If @from is NULL, then all tasks
- * in the systems are moved unconditionally (used for teardown).
- *
- * If @mask is not NULL the cpus on which moved tasks are running are set
- * in that mask so the update smp function call is restricted to affected
- * cpus.
- */
-static void rdt_move_group_tasks(struct rdtgroup *from, struct rdtgroup *to,
-				 struct cpumask *mask)
-{
-	struct task_struct *p, *t;
-
-	read_lock(&tasklist_lock);
-	for_each_process_thread(p, t) {
-		if (!from || is_closid_match(t, from) ||
-		    is_rmid_match(t, from)) {
-			resctrl_arch_set_closid_rmid(t, to->closid,
-						     to->mon.rmid);
-
-			/*
-			 * Order the closid/rmid stores above before the loads
-			 * in task_curr(). This pairs with the full barrier
-			 * between the rq->curr update and
-			 * resctrl_arch_sched_in() during context switch.
-			 */
-			smp_mb();
-
-			/*
-			 * If the task is on a CPU, set the CPU in the mask.
-			 * The detection is inaccurate as tasks might move or
-			 * schedule before the smp function call takes place.
-			 * In such a case the function call is pointless, but
-			 * there is no other side effect.
-			 */
-			if (IS_ENABLED(CONFIG_SMP) && mask && task_curr(t))
-				cpumask_set_cpu(task_cpu(t), mask);
-		}
-	}
-	read_unlock(&tasklist_lock);
-}
-
-static void free_all_child_rdtgrp(struct rdtgroup *rdtgrp)
-{
-	struct rdtgroup *sentry, *stmp;
-	struct list_head *head;
-
-	head = &rdtgrp->mon.crdtgrp_list;
-	list_for_each_entry_safe(sentry, stmp, head, mon.crdtgrp_list) {
-		rdtgroup_unassign_cntrs(sentry);
-		free_rmid(sentry->closid, sentry->mon.rmid);
-		list_del(&sentry->mon.crdtgrp_list);
-
-		if (atomic_read(&sentry->waitcount) != 0)
-			sentry->flags = RDT_DELETED;
-		else
-			rdtgroup_remove(sentry);
-	}
-}
-
-/*
- * Forcibly remove all of subdirectories under root.
- */
-static void rmdir_all_sub(void)
-{
-	struct rdtgroup *rdtgrp, *tmp;
-
-	/* Move all tasks to the default resource group */
-	rdt_move_group_tasks(NULL, &rdtgroup_default, NULL);
-
-	list_for_each_entry_safe(rdtgrp, tmp, &rdt_all_groups, rdtgroup_list) {
-		/* Free any child rmids */
-		free_all_child_rdtgrp(rdtgrp);
-
-		/* Remove each rdtgroup other than root */
-		if (rdtgrp == &rdtgroup_default)
-			continue;
-
-		if (rdtgrp->mode == RDT_MODE_PSEUDO_LOCKSETUP ||
-		    rdtgrp->mode == RDT_MODE_PSEUDO_LOCKED)
-			rdtgroup_pseudo_lock_remove(rdtgrp);
-
-		/*
-		 * Give any CPUs back to the default group. We cannot copy
-		 * cpu_online_mask because a CPU might have executed the
-		 * offline callback already, but is still marked online.
-		 */
-		cpumask_or(&rdtgroup_default.cpu_mask,
-			   &rdtgroup_default.cpu_mask, &rdtgrp->cpu_mask);
-
-		rdtgroup_unassign_cntrs(rdtgrp);
-
-		free_rmid(rdtgrp->closid, rdtgrp->mon.rmid);
-
-		kernfs_remove(rdtgrp->kn);
-		list_del(&rdtgrp->rdtgroup_list);
-
-		if (atomic_read(&rdtgrp->waitcount) != 0)
-			rdtgrp->flags = RDT_DELETED;
-		else
-			rdtgroup_remove(rdtgrp);
-	}
-	/* Notify online CPUs to update per cpu storage and PQR_ASSOC MSR */
-	update_closid_rmid(cpu_online_mask, &rdtgroup_default);
-
-	kernfs_remove(kn_info);
-	kernfs_remove(kn_mongrp);
-	kernfs_remove(kn_mondata);
-}
-
-/**
- * mon_get_kn_priv() - Get the mon_data priv data for this event.
- *
- * The same values are used across the mon_data directories of all control and
- * monitor groups for the same event in the same domain. Keep a list of
- * allocated structures and re-use an existing one with the same values for
- * @rid, @domid, etc.
- *
- * @rid:    The resource id for the event file being created.
- * @domid:  The domain id for the event file being created.
- * @mevt:   The type of event file being created.
- * @do_sum: Whether SNC summing monitors are being created. Only set
- *	    when @rid == RDT_RESOURCE_L3.
- *
- * Return: Pointer to mon_data private data of the event, NULL on failure.
- */
-static struct mon_data *mon_get_kn_priv(enum resctrl_res_level rid, int domid,
-					struct mon_evt *mevt,
-					bool do_sum)
-{
-	struct mon_data *priv;
-
-	lockdep_assert_held(&rdtgroup_mutex);
-
-	list_for_each_entry(priv, &mon_data_kn_priv_list, list) {
-		if (priv->rid == rid && priv->domid == domid &&
-		    priv->sum == do_sum && priv->evt == mevt)
-			return priv;
-	}
-
-	priv = kzalloc_obj(*priv);
-	if (!priv)
-		return NULL;
-
-	priv->rid = rid;
-	priv->domid = domid;
-	priv->sum = do_sum;
-	priv->evt = mevt;
-	list_add_tail(&priv->list, &mon_data_kn_priv_list);
-
-	return priv;
-}
-
-/**
- * mon_put_kn_priv() - Free all allocated mon_data structures.
- *
- * Called when resctrl file system is unmounted.
- */
-static void mon_put_kn_priv(void)
-{
-	struct mon_data *priv, *tmp;
-
-	lockdep_assert_held(&rdtgroup_mutex);
-
-	list_for_each_entry_safe(priv, tmp, &mon_data_kn_priv_list, list) {
-		list_del(&priv->list);
-		kfree(priv);
-	}
-}
-
-static void resctrl_fs_teardown(void)
-{
-	lockdep_assert_held(&rdtgroup_mutex);
-
-	/* Cleared by rdtgroup_destroy_root() */
-	if (!rdtgroup_default.kn)
-		return;
-
-	rmdir_all_sub();
-	rdtgroup_unassign_cntrs(&rdtgroup_default);
-	mon_put_kn_priv();
-	rdt_pseudo_lock_release();
-	rdtgroup_default.mode = RDT_MODE_SHAREABLE;
-	closid_exit();
-	schemata_list_destroy();
-	rdtgroup_destroy_root();
-}
-
 static void rdt_kill_sb(struct super_block *sb)
 {
 	struct rdt_resource *r;
-- 
2.54.0