From: Tony Luck <tony.luck@intel.com>
To: Fenghua Yu <fenghua.yu@intel.com>,
Reinette Chatre <reinette.chatre@intel.com>,
Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>,
Peter Newman <peternewman@google.com>,
James Morse <james.morse@arm.com>,
Babu Moger <babu.moger@amd.com>,
Drew Fustini <dfustini@baylibre.com>,
Dave Martin <Dave.Martin@arm.com>
Cc: x86@kernel.org, linux-kernel@vger.kernel.org,
patches@lists.linux.dev, Tony Luck <tony.luck@intel.com>
Subject: [PATCH v23 16/19] x86/resctrl: Make __mon_event_count() handle sum domains
Date: Fri, 28 Jun 2024 14:56:16 -0700 [thread overview]
Message-ID: <20240628215619.76401-17-tony.luck@intel.com> (raw)
In-Reply-To: <20240628215619.76401-1-tony.luck@intel.com>

Legacy resctrl monitor files must provide the sum of event values across
all Sub-NUMA Cluster (SNC) domains that share an L3 cache instance.
There are now two cases:
1) A specific domain is provided in struct rmid_read
This is either a non-SNC system, or the request is to read data
from just one SNC node.
2) Domain pointer is NULL. In this case the cacheinfo field in struct
rmid_read indicates that all SNC nodes that share that L3 cache
instance should have the event read and return the sum of all
values.
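The two cases above can be sketched in plain C. This is a toy model, not
the kernel code: struct rmid_read_model, read_one() and
model_event_count() are hypothetical stand-ins for struct rmid_read,
resctrl_arch_rmid_read() and __mon_event_count(), with the locking,
per-CPU and error-translation details omitted.

```c
#include <stddef.h>
#include <assert.h>

/* Illustrative stand-ins for the kernel's structures. */
struct domain { int id; int l3_cache_id; };

struct rmid_read_model {
	struct domain *d;	/* non-NULL: read just this one domain */
	int ci_id;		/* L3 cache id to sum over when d == NULL */
};

/* Hypothetical per-domain counter read: returns 0 and fills *val. */
static int read_one(struct domain *d, unsigned long long *val)
{
	*val = 100ULL + d->id;	/* fake event value */
	return 0;
}

/*
 * Case 1: rr->d set, return that one domain's value.
 * Case 2: rr->d NULL, sum every domain sharing the L3 cache rr->ci_id.
 */
static int model_event_count(struct rmid_read_model *rr,
			     struct domain *doms, size_t n,
			     unsigned long long *out)
{
	unsigned long long tval, sum = 0;
	int err, ret = -1;
	size_t i;

	if (rr->d) {
		err = read_one(rr->d, &tval);
		if (err)
			return err;
		*out = tval;
		return 0;
	}

	for (i = 0; i < n; i++) {
		if (doms[i].l3_cache_id != rr->ci_id)
			continue;
		err = read_one(&doms[i], &tval);
		if (!err) {
			sum += tval;
			ret = 0;	/* success if any read succeeds */
		}
	}
	*out = sum;
	return ret;
}
```

As in the patch, the summing path succeeds if any one of the per-domain
reads succeeds, and only reports failure when every domain fails.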
Update the CPU sanity check. The existing check that an event is read
from a CPU in the requested domain still applies when reading a single
domain. But when summing across domains, a more relaxed check that the
current CPU is within the scope of the L3 cache instance is sufficient,
since the MSRs used to read events are scoped at the L3 cache level.
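The relaxed check can be illustrated with a toy cpumask model (a plain
bitmask here; check_cpu() and cpu_in_mask() are hypothetical helpers,
not kernel APIs):

```c
#include <assert.h>

/* Toy cpumask as a 64-bit word; kernel cpumasks behave analogously. */
typedef unsigned long long cpumask_model_t;

static int cpu_in_mask(int cpu, cpumask_model_t m)
{
	return (int)((m >> cpu) & 1ULL);
}

/*
 * Single-domain read: the CPU must be inside that SNC domain's mask.
 * Summed read: the CPU only needs to share the L3 cache instance,
 * so the wider shared-CPU mask of the cache is checked instead.
 */
static int check_cpu(int cpu, int summing,
		     cpumask_model_t domain_mask,
		     cpumask_model_t l3_shared_mask)
{
	return cpu_in_mask(cpu, summing ? l3_shared_mask : domain_mask);
}
```

With an SNC domain covering CPUs 0-3 inside an L3 cache covering CPUs
0-7, CPU 5 fails the single-domain check but passes the summing check.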
Signed-off-by: Tony Luck <tony.luck@intel.com>
Reviewed-by: Reinette Chatre <reinette.chatre@intel.com>
---
arch/x86/kernel/cpu/resctrl/monitor.c | 51 ++++++++++++++++++++++-----
1 file changed, 42 insertions(+), 9 deletions(-)
diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
index ca309c93a56b..ca486d00541e 100644
--- a/arch/x86/kernel/cpu/resctrl/monitor.c
+++ b/arch/x86/kernel/cpu/resctrl/monitor.c
@@ -324,9 +324,6 @@ int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_mon_domain *d,
resctrl_arch_rmid_read_context_check();
- if (!cpumask_test_cpu(smp_processor_id(), &d->hdr.cpu_mask))
- return -EINVAL;
-
prmid = logical_rmid_to_physical_rmid(cpu, rmid);
ret = __rmid_read_phys(prmid, eventid, &msr_val);
if (ret)
@@ -592,7 +589,10 @@ static struct mbm_state *get_mbm_state(struct rdt_mon_domain *d, u32 closid,
static int __mon_event_count(u32 closid, u32 rmid, struct rmid_read *rr)
{
+ int cpu = smp_processor_id();
+ struct rdt_mon_domain *d;
struct mbm_state *m;
+ int err, ret;
u64 tval = 0;
if (rr->first) {
@@ -603,14 +603,47 @@ static int __mon_event_count(u32 closid, u32 rmid, struct rmid_read *rr)
return 0;
}
- rr->err = resctrl_arch_rmid_read(rr->r, rr->d, closid, rmid, rr->evtid,
- &tval, rr->arch_mon_ctx);
- if (rr->err)
- return rr->err;
+ if (rr->d) {
+ /* Reading a single domain, must be on a CPU in that domain. */
+ if (!cpumask_test_cpu(cpu, &rr->d->hdr.cpu_mask))
+ return -EINVAL;
+ rr->err = resctrl_arch_rmid_read(rr->r, rr->d, closid, rmid,
+ rr->evtid, &tval, rr->arch_mon_ctx);
+ if (rr->err)
+ return rr->err;
- rr->val += tval;
+ rr->val += tval;
- return 0;
+ return 0;
+ }
+
+ /* Summing domains that share a cache, must be on a CPU for that cache. */
+ if (!cpumask_test_cpu(cpu, &rr->ci->shared_cpu_map))
+ return -EINVAL;
+
+ /*
+ * Legacy files must report the sum of an event across all
+ * domains that share the same L3 cache instance.
+ * Report success if a read from any domain succeeds, -EINVAL
+ * (translated to "Unavailable" for user space) if reading from
+ * all domains fails for any reason.
+ */
+ ret = -EINVAL;
+ list_for_each_entry(d, &rr->r->mon_domains, hdr.list) {
+ if (d->ci->id != rr->ci->id)
+ continue;
+ err = resctrl_arch_rmid_read(rr->r, d, closid, rmid,
+ rr->evtid, &tval, rr->arch_mon_ctx);
+ if (!err) {
+ rr->val += tval;
+ ret = 0;
+ }
+ }
+
+ if (ret)
+ rr->err = ret;
+
+ return ret;
}
/*
--
2.45.2