Date: Thu, 18 Dec 2025 13:36:11 +0000
From: Jonathan Cameron
To: James Morse
CC: D Scott Phillips OS, Jamie Iles, Xin Hao, David Hildenbrand, Dave Martin,
 Koba Ko, Shanker Donthineni, Gavin Shan, Ben Horgan, Punit Agrawal
Subject: Re: [RFC PATCH 19/38] arm_mpam: resctrl: pick classes for use as mbm counters
Message-ID: <20251218133611.00000a3c@huawei.com>
In-Reply-To: <20251205215901.17772-20-james.morse@arm.com>
References: <20251205215901.17772-1-james.morse@arm.com>
 <20251205215901.17772-20-james.morse@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, 5 Dec 2025 21:58:42 +0000
James Morse wrote:

> resctrl has two types of counters, NUMA-local and global. MPAM has only
> bandwidth counters, but the position of the MSC may mean it counts
> NUMA-local, or global traffic.
>
> But the topology information is not available.
>
> Apply a heuristic: if the L2 or L3 supports bandwidth monitors, these
> are probably NUMA-local. If the memory controller supports bandwidth
> monitors, they are probably global.
>
> This also allows us to assert that we don't have the same class
> backing two different resctrl events.
>
> Because the class or component backing the event may not be 'the L3',
> it is necessary for mpam_resctrl_get_domain_from_cpu() to search
> the monitor domains too. This matters the most for 'monitor only'
> systems, where 'the L3' control domains may be empty, and the
> ctrl_comp pointer NULL.
>
> resctrl expects there to be enough monitors for every possible control
> and monitor group to have one. Such a system gets called 'free running'
> as the monitors can be programmed once and left running.
> Any other platform will need to emulate ABMC.
>
> Signed-off-by: James Morse
> ---
> diff --git a/drivers/resctrl/mpam_resctrl.c b/drivers/resctrl/mpam_resctrl.c
> index fc1f054f187e..9978eb48c1f4 100644
> --- a/drivers/resctrl/mpam_resctrl.c
> +++ b/drivers/resctrl/mpam_resctrl.c
> @@ -586,7 +614,37 @@ static void mpam_resctrl_pick_counters(void)
>  			return;
>  		}
>  	}
> +
> +	has_mbwu = class_has_usable_mbwu(class);
> +	if (has_mbwu && topology_matches_l3(class)) {

Might get reused in later patches. If not:

	if (class_has_usable_mbwu(class) && topology_matches_l3(class))

> +		pr_debug("class %u has usable MBWU, and matches L3 topology",
> +			 class->level);
> +
> +		/*
> +		 * MBWU counters may be 'local' or 'total' depending on
> +		 * where they are in the topology. Counters on caches
> +		 * are assumed to be local. If it's on the memory
> +		 * controller, it's assumed to be global.
> +		 */
> +		switch (class->type) {
> +		case MPAM_CLASS_CACHE:
> +			counter_update_class(QOS_L3_MBM_LOCAL_EVENT_ID,
> +					     class);
> +			break;
> +		case MPAM_CLASS_MEMORY:
> +			counter_update_class(QOS_L3_MBM_TOTAL_EVENT_ID,
> +					     class);
> +			break;
> +		default:
> +			break;
> +		}
> +	}
> }
> +
> +	/* Allocation of MBWU monitors assumes that the class is unique... */
> +	if (mpam_resctrl_counters[QOS_L3_MBM_LOCAL_EVENT_ID].class)
> +		WARN_ON_ONCE(mpam_resctrl_counters[QOS_L3_MBM_LOCAL_EVENT_ID].class ==
> +			     mpam_resctrl_counters[QOS_L3_MBM_TOTAL_EVENT_ID].class);
> }
>
> +/*
> + * We know all the monitors are associated with the L3, even if there are no
> + * controls and therefore no control component. Find the cache-id for the CPU
> + * and use that to search for existing resctrl domains.
> + * This relies on mpam_resctrl_pick_domain_id() using the L3 cache-id
> + * for anything that is not a cache.
> + */
> +static struct mpam_resctrl_dom *mpam_resctrl_get_mon_domain_from_cpu(int cpu)
> +{
> +	u32 cache_id;
> +	struct rdt_mon_domain *mon_d;
> +	struct mpam_resctrl_dom *dom;
> +	struct mpam_resctrl_res *l3 = &mpam_resctrl_controls[RDT_RESOURCE_L3];
> +
> +	if (!l3->class)
> +		return NULL;
> +	/* TODO: how does this order with cacheinfo updates under cpuhp? */
> +	cache_id = get_cpu_cacheinfo_id(cpu, 3);
> +	if (cache_id == ~0)
> +		return NULL;
> +
> +	list_for_each_entry(mon_d, &l3->resctrl_res.mon_domains, hdr.list) {
> +		dom = container_of(mon_d, struct mpam_resctrl_dom, resctrl_mon_dom);

Similar comment to one on earlier patch. Can make the list iterator
directly provide dom as that's what it's actually a list of, not
rdt_mon_domain structures.

> +
> +		if (mon_d->hdr.id == cache_id)
> +			return dom;
> +	}
> +
> +	return NULL;
> +}
> +
>  static struct mpam_resctrl_dom *
>  mpam_resctrl_get_domain_from_cpu(int cpu, struct mpam_resctrl_res *res)
>  {
>  	struct mpam_resctrl_dom *dom;
>  	struct rdt_ctrl_domain *ctrl_d;
> +	struct rdt_resource *r = &res->resctrl_res;

Push back to original patch.

>
>  	lockdep_assert_cpus_held();
>
> -	list_for_each_entry_rcu(ctrl_d, &res->resctrl_res.ctrl_domains,
> -				hdr.list) {
> +	list_for_each_entry_rcu(ctrl_d, &r->ctrl_domains, hdr.list) {
>  		dom = container_of(ctrl_d, struct mpam_resctrl_dom,
>  				   resctrl_ctrl_dom);
>