From: Tony Luck <tony.luck@intel.com>
To: Fenghua Yu, Reinette Chatre, Maciej Wieczor-Retman, Peter Newman,
	James Morse, Babu Moger, Drew Fustini, Dave Martin
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev,
	Tony Luck <tony.luck@intel.com>
Subject: [PATCH v20 01/18] x86/resctrl: Prepare for new domain scope
Date: Mon, 10 Jun 2024 11:35:11 -0700
Message-ID: <20240610183528.349198-2-tony.luck@intel.com>
X-Mailer: git-send-email 2.45.0
In-Reply-To: <20240610183528.349198-1-tony.luck@intel.com>
References: <20240610183528.349198-1-tony.luck@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Resctrl resources operate on subsets of CPUs in the system, with the
defining attribute of each subset being an instance of a particular
level of cache. E.g. all CPUs sharing an L3 cache would be part of the
same domain.

In preparation for features that are scoped at the NUMA node level,
change the code from explicit references to "cache_level" to a more
generic scope. At this point the only options for this scope are groups
of CPUs that share an L2 cache or L3 cache.

Clean up the error handling when looking up domains. Report invalid ids
before calling rdt_find_domain() in preparation for better messages
when scope can be other than cache scope. This means that
rdt_find_domain() will never return an error, so remove the error
checks from its call sites.

Signed-off-by: Tony Luck <tony.luck@intel.com>
---
 include/linux/resctrl.h                   |  9 ++++-
 arch/x86/kernel/cpu/resctrl/core.c        | 46 ++++++++++++++++-------
 arch/x86/kernel/cpu/resctrl/ctrlmondata.c |  2 +-
 arch/x86/kernel/cpu/resctrl/pseudo_lock.c |  6 ++-
 arch/x86/kernel/cpu/resctrl/rdtgroup.c    |  5 ++-
 5 files changed, 49 insertions(+), 19 deletions(-)

diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h
index a365f67131ec..ed693bfe474d 100644
--- a/include/linux/resctrl.h
+++ b/include/linux/resctrl.h
@@ -150,13 +150,18 @@ struct resctrl_membw {
 struct rdt_parse_data;
 struct resctrl_schema;
 
+enum resctrl_scope {
+	RESCTRL_L2_CACHE = 2,
+	RESCTRL_L3_CACHE = 3,
+};
+
 /**
  * struct rdt_resource - attributes of a resctrl resource
  * @rid:		The index of the resource
  * @alloc_capable:	Is allocation available on this machine
  * @mon_capable:	Is monitor feature available on this machine
  * @num_rmid:		Number of RMIDs available
- * @cache_level:	Which cache level defines scope of this resource
+ * @scope:		Scope of this resource
  * @cache:		Cache allocation related data
  * @membw:		If the component has bandwidth controls, their properties.
 * @domains:		RCU list of all domains for this resource
@@ -174,7 +179,7 @@ struct rdt_resource {
 	bool			alloc_capable;
 	bool			mon_capable;
 	int			num_rmid;
-	int			cache_level;
+	enum resctrl_scope	scope;
 	struct resctrl_cache	cache;
 	struct resctrl_membw	membw;
 	struct list_head	domains;
diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resctrl/core.c
index a113d9aba553..f85b2ff40eef 100644
--- a/arch/x86/kernel/cpu/resctrl/core.c
+++ b/arch/x86/kernel/cpu/resctrl/core.c
@@ -68,7 +68,7 @@ struct rdt_hw_resource rdt_resources_all[] = {
 	.r_resctrl = {
 			.rid			= RDT_RESOURCE_L3,
 			.name			= "L3",
-			.cache_level		= 3,
+			.scope			= RESCTRL_L3_CACHE,
 			.domains		= domain_init(RDT_RESOURCE_L3),
 			.parse_ctrlval		= parse_cbm,
 			.format_str		= "%d=%0*x",
@@ -82,7 +82,7 @@ struct rdt_hw_resource rdt_resources_all[] = {
 	.r_resctrl = {
 			.rid			= RDT_RESOURCE_L2,
 			.name			= "L2",
-			.cache_level		= 2,
+			.scope			= RESCTRL_L2_CACHE,
 			.domains		= domain_init(RDT_RESOURCE_L2),
 			.parse_ctrlval		= parse_cbm,
 			.format_str		= "%d=%0*x",
@@ -96,7 +96,7 @@ struct rdt_hw_resource rdt_resources_all[] = {
 	.r_resctrl = {
 			.rid			= RDT_RESOURCE_MBA,
 			.name			= "MB",
-			.cache_level		= 3,
+			.scope			= RESCTRL_L3_CACHE,
 			.domains		= domain_init(RDT_RESOURCE_MBA),
 			.parse_ctrlval		= parse_bw,
 			.format_str		= "%d=%*u",
@@ -108,7 +108,7 @@ struct rdt_hw_resource rdt_resources_all[] = {
 	.r_resctrl = {
 			.rid			= RDT_RESOURCE_SMBA,
 			.name			= "SMBA",
-			.cache_level		= 3,
+			.scope			= RESCTRL_L3_CACHE,
 			.domains		= domain_init(RDT_RESOURCE_SMBA),
 			.parse_ctrlval		= parse_bw,
 			.format_str		= "%d=%*u",
@@ -392,9 +392,6 @@ struct rdt_domain *rdt_find_domain(struct rdt_resource *r, int id,
 	struct rdt_domain *d;
 	struct list_head *l;
 
-	if (id < 0)
-		return ERR_PTR(-ENODEV);
-
 	list_for_each(l, &r->domains) {
 		d = list_entry(l, struct rdt_domain, list);
 		/* When id is found, return its domain. */
@@ -484,6 +481,19 @@ static int arch_domain_mbm_alloc(u32 num_rmid, struct rdt_hw_domain *hw_dom)
 	return 0;
 }
 
+static int get_domain_id_from_scope(int cpu, enum resctrl_scope scope)
+{
+	switch (scope) {
+	case RESCTRL_L2_CACHE:
+	case RESCTRL_L3_CACHE:
+		return get_cpu_cacheinfo_id(cpu, scope);
+	default:
+		break;
+	}
+
+	return -EINVAL;
+}
+
 /*
  * domain_add_cpu - Add a cpu to a resource's domain list.
 *
@@ -499,7 +509,7 @@ static int arch_domain_mbm_alloc(u32 num_rmid, struct rdt_hw_domain *hw_dom)
  */
 static void domain_add_cpu(int cpu, struct rdt_resource *r)
 {
-	int id = get_cpu_cacheinfo_id(cpu, r->cache_level);
+	int id = get_domain_id_from_scope(cpu, r->scope);
 	struct list_head *add_pos = NULL;
 	struct rdt_hw_domain *hw_dom;
 	struct rdt_domain *d;
@@ -507,12 +517,14 @@ static void domain_add_cpu(int cpu, struct rdt_resource *r)
 
 	lockdep_assert_held(&domain_list_lock);
 
-	d = rdt_find_domain(r, id, &add_pos);
-	if (IS_ERR(d)) {
-		pr_warn("Couldn't find cache id for CPU %d\n", cpu);
+	if (id < 0) {
+		pr_warn_once("Can't find domain id for CPU:%d scope:%d for resource %s\n",
+			     cpu, r->scope, r->name);
 		return;
 	}
 
+	d = rdt_find_domain(r, id, &add_pos);
+
 	if (d) {
 		cpumask_set_cpu(cpu, &d->cpu_mask);
 		if (r->cache.arch_has_per_cpu_cfg)
@@ -552,15 +564,21 @@ static void domain_add_cpu(int cpu, struct rdt_resource *r)
 
 static void domain_remove_cpu(int cpu, struct rdt_resource *r)
 {
-	int id = get_cpu_cacheinfo_id(cpu, r->cache_level);
+	int id = get_domain_id_from_scope(cpu, r->scope);
 	struct rdt_hw_domain *hw_dom;
 	struct rdt_domain *d;
 
 	lockdep_assert_held(&domain_list_lock);
 
+	if (id < 0) {
+		pr_warn_once("Can't find domain id for CPU:%d scope:%d for resource %s\n",
+			     cpu, r->scope, r->name);
+		return;
+	}
+
 	d = rdt_find_domain(r, id, NULL);
-	if (IS_ERR_OR_NULL(d)) {
-		pr_warn("Couldn't find cache id for CPU %d\n", cpu);
+	if (!d) {
+		pr_warn("Couldn't find domain with id=%d for CPU %d\n", id, cpu);
 		return;
 	}
 	hw_dom = resctrl_to_arch_dom(d);
diff --git a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
index b7291f60399c..2bf021d42500 100644
--- a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
+++ b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
@@ -577,7 +577,7 @@ int rdtgroup_mondata_show(struct seq_file *m, void *arg)
 	r = &rdt_resources_all[resid].r_resctrl;
 
 	d = rdt_find_domain(r, domid, NULL);
-	if (IS_ERR_OR_NULL(d)) {
+	if (!d) {
 		ret = -ENOENT;
 		goto out;
 	}
diff --git a/arch/x86/kernel/cpu/resctrl/pseudo_lock.c b/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
index 1bbfd3c1e300..201011f0ed0b 100644
--- a/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
+++ b/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
@@ -292,9 +292,13 @@ static void pseudo_lock_region_clear(struct pseudo_lock_region *plr)
  */
 static int pseudo_lock_region_init(struct pseudo_lock_region *plr)
 {
+	enum resctrl_scope scope = plr->s->res->scope;
 	struct cacheinfo *ci;
 	int ret;
 
+	if (WARN_ON_ONCE(scope != RESCTRL_L2_CACHE && scope != RESCTRL_L3_CACHE))
+		return -ENODEV;
+
 	/* Pick the first cpu we find that is associated with the cache.
 	 */
 	plr->cpu = cpumask_first(&plr->d->cpu_mask);
@@ -305,7 +309,7 @@ static int pseudo_lock_region_init(struct pseudo_lock_region *plr)
 		goto out_region;
 	}
 
-	ci = get_cpu_cacheinfo_level(plr->cpu, plr->s->res->cache_level);
+	ci = get_cpu_cacheinfo_level(plr->cpu, scope);
 	if (ci) {
 		plr->line_size = ci->coherency_line_size;
 		plr->size = rdtgroup_cbm_to_size(plr->s->res, plr->d, plr->cbm);
diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index cb68a121dabb..50f5876a3020 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -1454,8 +1454,11 @@ unsigned int rdtgroup_cbm_to_size(struct rdt_resource *r,
 	struct cacheinfo *ci;
 	int num_b;
 
+	if (WARN_ON_ONCE(r->scope != RESCTRL_L2_CACHE && r->scope != RESCTRL_L3_CACHE))
+		return size;
+
 	num_b = bitmap_weight(&cbm, r->cache.cbm_len);
-	ci = get_cpu_cacheinfo_level(cpumask_any(&d->cpu_mask), r->cache_level);
+	ci = get_cpu_cacheinfo_level(cpumask_any(&d->cpu_mask), r->scope);
 	if (ci)
 		size = ci->size / r->cache.cbm_len * num_b;
 
-- 
2.45.0
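
The RESCTRL_L2_CACHE and RESCTRL_L3_CACHE values are deliberately equal to the
cache levels they name, which is why get_domain_id_from_scope() can pass the
scope straight through to get_cpu_cacheinfo_id() for now. As a rough sketch of
where this abstraction is headed (not part of the patch), the snippet below
shows how the same switch could serve a node-scoped resource; the RESCTRL_NODE
enumerator and its cpu_to_node() mapping are assumptions for illustration only.

/*
 * Illustrative sketch only -- not from this patch. It mirrors the
 * get_domain_id_from_scope() helper added above and assumes a
 * hypothetical RESCTRL_NODE scope to show how a NUMA-node-scoped
 * resource could reuse the same domain-id lookup path.
 */
#include <linux/cacheinfo.h>
#include <linux/errno.h>
#include <linux/topology.h>

enum resctrl_scope {
	RESCTRL_L2_CACHE = 2,	/* value doubles as the cache level */
	RESCTRL_L3_CACHE = 3,
	RESCTRL_NODE,		/* hypothetical: NUMA node scope */
};

static int get_domain_id_from_scope(int cpu, enum resctrl_scope scope)
{
	switch (scope) {
	case RESCTRL_L2_CACHE:
	case RESCTRL_L3_CACHE:
		/* Cache scopes map to the id of the shared cache instance. */
		return get_cpu_cacheinfo_id(cpu, scope);
	case RESCTRL_NODE:
		/* A node-scoped resource would use the NUMA node id. */
		return cpu_to_node(cpu);
	default:
		break;
	}

	return -EINVAL;
}

Keeping the lookup behind this switch means callers such as domain_add_cpu()
only ever see an integer domain id, regardless of whether it came from
cacheinfo or, in some future series, from the NUMA topology.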