public inbox for linux-kernel@vger.kernel.org
* [PATCH v3 00/10] selftests/resctrl: Fixes and improvements focused on Intel platforms
@ 2026-03-13 20:32 Reinette Chatre
  2026-03-13 20:32 ` [PATCH v3 01/10] selftests/resctrl: Improve accuracy of cache occupancy test Reinette Chatre
                   ` (10 more replies)
  0 siblings, 11 replies; 28+ messages in thread
From: Reinette Chatre @ 2026-03-13 20:32 UTC (permalink / raw)
  To: shuah, Dave.Martin, james.morse, tony.luck, babu.moger,
	ilpo.jarvinen
  Cc: fenghuay, peternewman, zide.chen, dapeng1.mi, ben.horgan,
	yu.c.chen, jason.zeng, reinette.chatre, linux-kselftest,
	linux-kernel, patches

Changes since v2:
- v2: https://lore.kernel.org/linux-patches/cover.1772582958.git.reinette.chatre@intel.com/
- Rebased on top of v7.0-rc3.
- Split "selftests/resctrl: Improve accuracy of cache occupancy test" into
  changes impacting L3 and L2 respectively. (Ilpo)
- "long_mask" -> "full_mask", "return_value" -> "measurement", "org_count"
  -> "orig_count". (Ilpo)
- Use PATH_MAX where appropriate. (Ilpo)
- Handle errors first to reduce indentation. (Ilpo)
- Detailed changes in changelogs.
- No functional changes since v2. Series tested by running 20 iterations of all
  tests on Emerald Rapids, Granite Rapids, Sapphire Rapids, Ice Lake, Sierra
  Forest, and Broadwell.

Changes since v1:
- v1: https://lore.kernel.org/lkml/cover.1770406608.git.reinette.chatre@intel.com/
- The new perf interface that resctrl selftests can utilize has been accepted and
  merged into v7.0-rc2. This series can thus now be considered for inclusion.
  For reference,
  commit 6a8a48644c4b ("perf/x86/intel/uncore: Add per-scheduler IMC CAS count events")
  The resctrl selftest changes making use of the new perf interface are backward
  compatible. The selftests do not require a v7.0-rc2 kernel to run but the
  tests can only pass on recent Intel platforms running v7.0-rc2 or later.
- Combine the two outstanding resctrl selftest submissions into one series
  for easier tracking:
  https://lore.kernel.org/lkml/084e82b5c29d75f16f24af8768d50d39ba0118a5.1769101788.git.reinette.chatre@intel.com/
  https://lore.kernel.org/lkml/cover.1770406608.git.reinette.chatre@intel.com/
- Fix typo in changelog of "selftests/resctrl: Improve accuracy of cache
  occupancy test": "the data my be in L2" -> "the data may be in L2"
- Add Zide Chen's RB tags.

Cover letter updated to be accurate wrt perf changes:

The resctrl selftests fail on recent Intel platforms: intermittent
failures in the CAT test and permanent failures of the MBM and MBA tests
on new platforms like Sierra Forest and Granite Rapids.

The MBM and MBA resctrl selftests both generate memory traffic and compare
the memory bandwidth measurements from the iMC PMUs and MBM to determine
pass or fail. Both tests fail on recent platforms like Sierra Forest and
Granite Rapids, which have two events that need to be read and combined
for a total memory bandwidth count instead of the single event available
on earlier platforms.

The resctrl selftests prefer to obtain event details via sysfs instead of
hardcoding model-specific details about which events to read. Enhancements
to perf that expose the new event details are available since:
 commit 6a8a48644c4b ("perf/x86/intel/uncore: Add per-scheduler IMC CAS count events")
This series demonstrates use of the new perf sysfs interface
to obtain accurate iMC read memory bandwidth measurements.
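For illustration, the combining step described above can be sketched as
follows (a minimal sketch, not the selftest source; the helper name is
hypothetical, while the SCALE constant matches the one used by the tests):

```c
#include <assert.h>

/* Raw CAS counts from each read event are summed before scaling; on newer
 * platforms two events (e.g. cas_count_read_sch0/sch1) contribute to one
 * iMC's read bandwidth. SCALE converts CAS counts (64 bytes each) to MiB. */
#define SCALE 0.00006103515625

static double total_read_bw_mib(const unsigned long long *counts, int n)
{
	unsigned long long sum = 0;
	int i;

	for (i = 0; i < n; i++)
		sum += counts[i];	/* combine all read events of the iMC */

	return (double)sum * SCALE;
}
```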

An additional issue with all the tests is that these selftests are in part
performance tests and determine pass/fail using performance heuristics
selected after running the tests on a variety of platforms. When new
platforms arrive the previous heuristics may cause the tests to fail.
These failures are not caused by an issue with the resctrl subsystem the
tests intend to test, but by architectural changes in the new platforms.

Adapt the resctrl tests to be less sensitive to architectural changes
while adjusting the remaining heuristics to ensure tests pass on a variety
of platforms. More details are in the individual patches.

Tested by running 100 iterations of all tests on Emerald Rapids, Granite
Rapids, Sapphire Rapids, Ice Lake, Sierra Forest, and Broadwell.

Reinette Chatre (10):
  selftests/resctrl: Improve accuracy of cache occupancy test
  selftests/resctrl: Reduce interference from L2 occupancy during cache
    occupancy test
  selftests/resctrl: Do not store iMC counter value in counter config
    structure
  selftests/resctrl: Prepare for parsing multiple events per iMC
  selftests/resctrl: Support multiple events associated with iMC
  selftests/resctrl: Increase size of buffer used in MBM and MBA tests
  selftests/resctrl: Raise threshold at which MBM and PMU values are
    compared
  selftests/resctrl: Remove requirement on cache miss rate
  selftests/resctrl: Simplify perf usage in CAT test
  selftests/resctrl: Reduce L2 impact on CAT test

 tools/testing/selftests/resctrl/cache.c       |  30 ++--
 tools/testing/selftests/resctrl/cat_test.c    |  41 ++----
 tools/testing/selftests/resctrl/cmt_test.c    |  36 ++++-
 tools/testing/selftests/resctrl/fill_buf.c    |   4 +-
 tools/testing/selftests/resctrl/mba_test.c    |   6 +-
 tools/testing/selftests/resctrl/mbm_test.c    |   6 +-
 tools/testing/selftests/resctrl/resctrl.h     |  20 ++-
 tools/testing/selftests/resctrl/resctrl_val.c | 135 +++++++++++++-----
 8 files changed, 179 insertions(+), 99 deletions(-)

-- 
2.50.1


^ permalink raw reply	[flat|nested] 28+ messages in thread

* [PATCH v3 01/10] selftests/resctrl: Improve accuracy of cache occupancy test
  2026-03-13 20:32 [PATCH v3 00/10] selftests/resctrl: Fixes and improvements focused on Intel platforms Reinette Chatre
@ 2026-03-13 20:32 ` Reinette Chatre
  2026-03-26 12:44   ` Ilpo Järvinen
  2026-03-13 20:32 ` [PATCH v3 02/10] selftests/resctrl: Reduce interference from L2 occupancy during " Reinette Chatre
                   ` (9 subsequent siblings)
  10 siblings, 1 reply; 28+ messages in thread
From: Reinette Chatre @ 2026-03-13 20:32 UTC (permalink / raw)
  To: shuah, Dave.Martin, james.morse, tony.luck, babu.moger,
	ilpo.jarvinen
  Cc: fenghuay, peternewman, zide.chen, dapeng1.mi, ben.horgan,
	yu.c.chen, jason.zeng, reinette.chatre, linux-kselftest,
	linux-kernel, patches

Dave Martin reported inconsistent CMT test failures. In one experiment
the first run of the CMT test failed because of a large (24%) difference
between the measured and achievable cache occupancy, while the second run
passed with an acceptable 4% difference.

The CMT test is susceptible to interference from the rest of the system.
This can be demonstrated with a utility like stress-ng by running the CMT
test while introducing cache misses using:

   stress-ng --matrix-3d 0 --matrix-3d-zyx

Below shows an example of the CMT test failing because of a significant
difference between measured and achievable cache occupancy when run with
interference:
    # Starting CMT test ...
    # Mounting resctrl to "/sys/fs/resctrl"
    # Cache size :335544320
    # Writing benchmark parameters to resctrl FS
    # Benchmark PID: 7011
    # Checking for pass/fail
    # Fail: Check cache miss rate within 15%
    # Percent diff=99
    # Number of bits: 5
    # Average LLC val: 235929
    # Cache span (bytes): 83886080
    not ok 1 CMT: test

The CMT test creates a new control group that is also capable of monitoring
and assigns the workload to it. The workload allocates a buffer that by
default fills a portion of the L3 and keeps reading from the buffer,
measuring the L3 occupancy at intervals. The test passes if the workload's
L3 occupancy is within 15% of the buffer size.
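The pass criterion described above can be sketched as follows (an
illustrative helper, not the selftest source, assuming the 15% threshold
shown in the test logs):

```c
#include <assert.h>
#include <stdlib.h>

/* The CMT test passes when the measured LLC occupancy is within
 * MAX_DIFF_PERCENT of the buffer ("cache span") size. */
#define MAX_DIFF_PERCENT 15

static int cmt_within_threshold(long long avg_llc_val, long long cache_span)
{
	long long diff = llabs(avg_llc_val - cache_span);
	int percent = (int)(diff * 100 / cache_span);

	return percent <= MAX_DIFF_PERCENT;
}
```

Plugging in the values from the failing log above (average LLC value
235929 against a cache span of 83886080) fails the check, while the
passing run's 73269248 is within the threshold.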

By not adjusting any capacity bitmasks the workload shares the cache with
the rest of the system. Any other task that may be running could evict
the workload's data from the cache causing it to have low cache occupancy.

Reduce interference from the rest of the system by ensuring that the
workload's control group uses the capacity bitmask found in the user
parameters for L3 and that the rest of the system can only allocate into
the inverse of the workload's L3 cache portion. Other tasks can thus no
longer evict the workload's data from L3.
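The mask arithmetic described above can be sketched as follows (the helper
is hypothetical; the example assumes a 20-bit full CBM, as in the sample
log below):

```c
#include <assert.h>

/* The default group gets the complement of the test group's L3 portion
 * within the full capacity bitmask, so other tasks cannot allocate into
 * (and thus evict from) the workload's portion. */
static unsigned long inverse_cbm(unsigned long mask, unsigned long full_mask)
{
	return ~mask & full_mask;
}
```

With a 5-bit test mask of 0x1f and a 20-bit full CBM of 0xfffff this
yields 0xfffe0, matching the two "Write schema" lines in the log.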

With the above adjustments the CMT test is more consistent. Repeating the
CMT test while generating interference with stress-ng on a sample
system after applying the fixes shows a significant improvement in test
accuracy:

    # Starting CMT test ...
    # Mounting resctrl to "/sys/fs/resctrl"
    # Cache size :335544320
    # Writing benchmark parameters to resctrl FS
    # Write schema "L3:0=fffe0" to resctrl FS
    # Write schema "L3:0=1f" to resctrl FS
    # Benchmark PID: 7089
    # Checking for pass/fail
    # Pass: Check cache miss rate within 15%
    # Percent diff=12
    # Number of bits: 5
    # Average LLC val: 73269248
    # Cache span (bytes): 83886080
    ok 1 CMT: test

Reported-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
Tested-by: Chen Yu <yu.c.chen@intel.com>
Link: https://lore.kernel.org/lkml/aO+7MeSMV29VdbQs@e133380.arm.com/
---
Changes since v1:
- Fix typo in changelog: "data my be in L2" -> "data may be in L2".

Changes since v2:
- Split patch to separate changes impacting L3 and L2 resource. (Ilpo)
- Re-run tests after patch split to ensure test impact matches each patch,
  and update changelog with refreshed data.
- Since fix is now split across two patches: "Closes:" -> "Link:"
- Rename "long_mask" to "full_mask". (Ilpo)
- Add Chen Yu's tag.
---
 tools/testing/selftests/resctrl/cmt_test.c    | 26 +++++++++++++++++--
 tools/testing/selftests/resctrl/mba_test.c    |  4 ++-
 tools/testing/selftests/resctrl/mbm_test.c    |  4 ++-
 tools/testing/selftests/resctrl/resctrl.h     |  4 ++-
 tools/testing/selftests/resctrl/resctrl_val.c |  2 +-
 5 files changed, 34 insertions(+), 6 deletions(-)

diff --git a/tools/testing/selftests/resctrl/cmt_test.c b/tools/testing/selftests/resctrl/cmt_test.c
index d09e693dc739..7bc6cf49c1c5 100644
--- a/tools/testing/selftests/resctrl/cmt_test.c
+++ b/tools/testing/selftests/resctrl/cmt_test.c
@@ -19,12 +19,34 @@
 #define CON_MON_LCC_OCCUP_PATH		\
 	"%s/%s/mon_data/mon_L3_%02d/llc_occupancy"
 
-static int cmt_init(const struct resctrl_val_param *param, int domain_id)
+/*
+ * Initialize capacity bitmasks (CBMs) of:
+ * - control group being tested per test parameters,
+ * - default resource group as inverse of control group being tested to prevent
+ *   other tasks from interfering with test.
+ */
+static int cmt_init(const struct resctrl_test *test,
+		    const struct user_params *uparams,
+		    const struct resctrl_val_param *param, int domain_id)
 {
+	unsigned long full_mask;
+	char schemata[64];
+	int ret;
+
 	sprintf(llc_occup_path, CON_MON_LCC_OCCUP_PATH, RESCTRL_PATH,
 		param->ctrlgrp, domain_id);
 
-	return 0;
+	ret = get_full_cbm(test->resource, &full_mask);
+	if (ret)
+		return ret;
+
+	snprintf(schemata, sizeof(schemata), "%lx", ~param->mask & full_mask);
+	ret = write_schemata("", schemata, uparams->cpu, test->resource);
+	if (ret)
+		return ret;
+
+	snprintf(schemata, sizeof(schemata), "%lx", param->mask);
+	return write_schemata(param->ctrlgrp, schemata, uparams->cpu, test->resource);
 }
 
 static int cmt_setup(const struct resctrl_test *test,
diff --git a/tools/testing/selftests/resctrl/mba_test.c b/tools/testing/selftests/resctrl/mba_test.c
index c7e9adc0368f..cd4c715b7ffd 100644
--- a/tools/testing/selftests/resctrl/mba_test.c
+++ b/tools/testing/selftests/resctrl/mba_test.c
@@ -17,7 +17,9 @@
 #define ALLOCATION_MIN		10
 #define ALLOCATION_STEP		10
 
-static int mba_init(const struct resctrl_val_param *param, int domain_id)
+static int mba_init(const struct resctrl_test *test,
+		    const struct user_params *uparams,
+		    const struct resctrl_val_param *param, int domain_id)
 {
 	int ret;
 
diff --git a/tools/testing/selftests/resctrl/mbm_test.c b/tools/testing/selftests/resctrl/mbm_test.c
index 84d8bc250539..58201f844740 100644
--- a/tools/testing/selftests/resctrl/mbm_test.c
+++ b/tools/testing/selftests/resctrl/mbm_test.c
@@ -83,7 +83,9 @@ static int check_results(size_t span)
 	return ret;
 }
 
-static int mbm_init(const struct resctrl_val_param *param, int domain_id)
+static int mbm_init(const struct resctrl_test *test,
+		    const struct user_params *uparams,
+		    const struct resctrl_val_param *param, int domain_id)
 {
 	int ret;
 
diff --git a/tools/testing/selftests/resctrl/resctrl.h b/tools/testing/selftests/resctrl/resctrl.h
index afe635b6e48d..c72045c74ac4 100644
--- a/tools/testing/selftests/resctrl/resctrl.h
+++ b/tools/testing/selftests/resctrl/resctrl.h
@@ -135,7 +135,9 @@ struct resctrl_val_param {
 	char			filename[64];
 	unsigned long		mask;
 	int			num_of_runs;
-	int			(*init)(const struct resctrl_val_param *param,
+	int			(*init)(const struct resctrl_test *test,
+					const struct user_params *uparams,
+					const struct resctrl_val_param *param,
 					int domain_id);
 	int			(*setup)(const struct resctrl_test *test,
 					 const struct user_params *uparams,
diff --git a/tools/testing/selftests/resctrl/resctrl_val.c b/tools/testing/selftests/resctrl/resctrl_val.c
index 7c08e936572d..a5a8badb83d4 100644
--- a/tools/testing/selftests/resctrl/resctrl_val.c
+++ b/tools/testing/selftests/resctrl/resctrl_val.c
@@ -569,7 +569,7 @@ int resctrl_val(const struct resctrl_test *test,
 		goto reset_affinity;
 
 	if (param->init) {
-		ret = param->init(param, domain_id);
+		ret = param->init(test, uparams, param, domain_id);
 		if (ret)
 			goto reset_affinity;
 	}
-- 
2.50.1



* [PATCH v3 02/10] selftests/resctrl: Reduce interference from L2 occupancy during cache occupancy test
  2026-03-13 20:32 [PATCH v3 00/10] selftests/resctrl: Fixes and improvements focused on Intel platforms Reinette Chatre
  2026-03-13 20:32 ` [PATCH v3 01/10] selftests/resctrl: Improve accuracy of cache occupancy test Reinette Chatre
@ 2026-03-13 20:32 ` Reinette Chatre
  2026-03-26 12:56   ` Ilpo Järvinen
  2026-03-13 20:32 ` [PATCH v3 03/10] selftests/resctrl: Do not store iMC counter value in counter config structure Reinette Chatre
                   ` (8 subsequent siblings)
  10 siblings, 1 reply; 28+ messages in thread
From: Reinette Chatre @ 2026-03-13 20:32 UTC (permalink / raw)
  To: shuah, Dave.Martin, james.morse, tony.luck, babu.moger,
	ilpo.jarvinen
  Cc: fenghuay, peternewman, zide.chen, dapeng1.mi, ben.horgan,
	yu.c.chen, jason.zeng, reinette.chatre, linux-kselftest,
	linux-kernel, patches

The CMT test creates a new control group that is also capable of monitoring
and assigns the workload to it. The workload allocates a buffer that by
default fills a portion of the L3 and keeps reading from the buffer,
measuring the L3 occupancy at intervals. The test passes if the workload's
L3 occupancy is within 15% of the buffer size.

The CMT test does not take into account that some of the workload's data
may land in L2/L1. Matching L3 occupancy to the size of the buffer while
a portion of the buffer can be allocated into L2 is not accurate.

Take the L2 cache into account to improve test accuracy:
 - Reduce the workload's L2 cache allocation to the minimum on systems that
   support L2 cache allocation. Do so with a new utility in preparation for
   all L3 cache allocation tests needing the same capability.
 - Increase the buffer size to accommodate data that may be allocated into
   the L2 cache. Use a buffer size double the L3 portion to keep using the
   L3 portion size as goal for L3 occupancy while taking into account that
   some of the data may be in L2.
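The sizing rule in the second bullet can be sketched as follows (a
hypothetical helper; the factor of two is the one described above):

```c
#include <assert.h>
#include <stddef.h>

/* The buffer is double the L3 portion so that data displaced into L2
 * does not reduce the expected L3 footprint; the L3 portion size itself
 * remains the occupancy goal the test checks against. */
static size_t cmt_buf_size(size_t l3_portion_size)
{
	return l3_portion_size * 2;
}
```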

Running the CMT test on a sample system while introducing significant
cache misses using "stress-ng --matrix-3d 0 --matrix-3d-zyx" shows
significant improvement in L3 cache occupancy:

Before:

    # Starting CMT test ...
    # Mounting resctrl to "/sys/fs/resctrl"
    # Cache size :335544320
    # Writing benchmark parameters to resctrl FS
    # Write schema "L3:0=fffe0" to resctrl FS
    # Write schema "L3:0=1f" to resctrl FS
    # Benchmark PID: 7089
    # Checking for pass/fail
    # Pass: Check cache miss rate within 15%
    # Percent diff=12
    # Number of bits: 5
    # Average LLC val: 73269248
    # Cache span (bytes): 83886080
    ok 1 CMT: test

After:

    # Starting CMT test ...
    # Mounting resctrl to "/sys/fs/resctrl"
    # Cache size :335544320
    # Writing benchmark parameters to resctrl FS
    # Write schema "L3:0=fffe0" to resctrl FS
    # Write schema "L3:0=1f" to resctrl FS
    # Write schema "L2:1=0x1" to resctrl FS
    # Benchmark PID: 7171
    # Checking for pass/fail
    # Pass: Check cache miss rate within 15%
    # Percent diff=0
    # Number of bits: 5
    # Average LLC val: 83755008
    # Cache span (bytes): 83886080
    ok 1 CMT: test

Reported-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
Tested-by: Chen Yu <yu.c.chen@intel.com>
Link: https://lore.kernel.org/lkml/aO+7MeSMV29VdbQs@e133380.arm.com/
---
Changes since v2:
- New patch split from v1's "selftests/resctrl: Improve accuracy of cache
  occupancy test". (Ilpo)
- Reword changelog. (Ilpo)
- Update data used in changelog to match code after patch split.
- Introduce utility to reduce L2 cache allocation. (Ilpo)
- Add Chen Yu's tag.
---
 tools/testing/selftests/resctrl/cache.c    | 13 +++++++++++++
 tools/testing/selftests/resctrl/cmt_test.c | 14 ++++++++++----
 tools/testing/selftests/resctrl/resctrl.h  |  3 +++
 3 files changed, 26 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/resctrl/cache.c b/tools/testing/selftests/resctrl/cache.c
index 1ff1104e6575..bef71b6feacc 100644
--- a/tools/testing/selftests/resctrl/cache.c
+++ b/tools/testing/selftests/resctrl/cache.c
@@ -173,6 +173,19 @@ int measure_llc_resctrl(const char *filename, pid_t bm_pid)
 	return print_results_cache(filename, bm_pid, llc_occu_resc);
 }
 
+/*
+ * Reduce L2 allocation to minimum when testing L3 cache allocation.
+ */
+int minimize_l2_occupancy(const struct resctrl_test *test,
+			  const struct user_params *uparams,
+			  const struct resctrl_val_param *param)
+{
+	if (!strcmp(test->resource, "L3") && resctrl_resource_exists("L2"))
+		return write_schemata(param->ctrlgrp, "0x1", uparams->cpu, "L2");
+
+	return 0;
+}
+
 /*
  * show_cache_info - Show generic cache test information
  * @no_of_bits:		Number of bits
diff --git a/tools/testing/selftests/resctrl/cmt_test.c b/tools/testing/selftests/resctrl/cmt_test.c
index 7bc6cf49c1c5..ccb6fe881a94 100644
--- a/tools/testing/selftests/resctrl/cmt_test.c
+++ b/tools/testing/selftests/resctrl/cmt_test.c
@@ -23,7 +23,9 @@
  * Initialize capacity bitmasks (CBMs) of:
  * - control group being tested per test parameters,
  * - default resource group as inverse of control group being tested to prevent
- *   other tasks from interfering with test.
+ *   other tasks from interfering with test,
+ * - L2 resource of control group being tested to minimize allocations into
+ *   L2 if possible to better predict L3 occupancy.
  */
 static int cmt_init(const struct resctrl_test *test,
 		    const struct user_params *uparams,
@@ -46,7 +48,11 @@ static int cmt_init(const struct resctrl_test *test,
 		return ret;
 
 	snprintf(schemata, sizeof(schemata), "%lx", param->mask);
-	return write_schemata(param->ctrlgrp, schemata, uparams->cpu, test->resource);
+	ret = write_schemata(param->ctrlgrp, schemata, uparams->cpu, test->resource);
+	if (ret)
+		return ret;
+
+	return minimize_l2_occupancy(test, uparams, param);
 }
 
 static int cmt_setup(const struct resctrl_test *test,
@@ -175,11 +181,11 @@ static int cmt_run_test(const struct resctrl_test *test, const struct user_param
 	span = cache_portion_size(cache_total_size, param.mask, long_mask);
 
 	if (uparams->fill_buf) {
-		fill_buf.buf_size = span;
+		fill_buf.buf_size = span * 2;
 		fill_buf.memflush = uparams->fill_buf->memflush;
 		param.fill_buf = &fill_buf;
 	} else if (!uparams->benchmark_cmd[0]) {
-		fill_buf.buf_size = span;
+		fill_buf.buf_size = span * 2;
 		fill_buf.memflush = true;
 		param.fill_buf = &fill_buf;
 	}
diff --git a/tools/testing/selftests/resctrl/resctrl.h b/tools/testing/selftests/resctrl/resctrl.h
index c72045c74ac4..7f2ab28be857 100644
--- a/tools/testing/selftests/resctrl/resctrl.h
+++ b/tools/testing/selftests/resctrl/resctrl.h
@@ -216,6 +216,9 @@ int perf_event_reset_enable(int pe_fd);
 int perf_event_measure(int pe_fd, struct perf_event_read *pe_read,
 		       const char *filename, pid_t bm_pid);
 int measure_llc_resctrl(const char *filename, pid_t bm_pid);
+int minimize_l2_occupancy(const struct resctrl_test *test,
+			  const struct user_params *uparams,
+			  const struct resctrl_val_param *param);
 void show_cache_info(int no_of_bits, __u64 avg_llc_val, size_t cache_span, bool lines);
 
 /*
-- 
2.50.1



* [PATCH v3 03/10] selftests/resctrl: Do not store iMC counter value in counter config structure
  2026-03-13 20:32 [PATCH v3 00/10] selftests/resctrl: Fixes and improvements focused on Intel platforms Reinette Chatre
  2026-03-13 20:32 ` [PATCH v3 01/10] selftests/resctrl: Improve accuracy of cache occupancy test Reinette Chatre
  2026-03-13 20:32 ` [PATCH v3 02/10] selftests/resctrl: Reduce interference from L2 occupancy during " Reinette Chatre
@ 2026-03-13 20:32 ` Reinette Chatre
  2026-03-13 20:32 ` [PATCH v3 04/10] selftests/resctrl: Prepare for parsing multiple events per iMC Reinette Chatre
                   ` (7 subsequent siblings)
  10 siblings, 0 replies; 28+ messages in thread
From: Reinette Chatre @ 2026-03-13 20:32 UTC (permalink / raw)
  To: shuah, Dave.Martin, james.morse, tony.luck, babu.moger,
	ilpo.jarvinen
  Cc: fenghuay, peternewman, zide.chen, dapeng1.mi, ben.horgan,
	yu.c.chen, jason.zeng, reinette.chatre, linux-kselftest,
	linux-kernel, patches

The MBM and MBA tests compare MBM memory bandwidth measurements against
the memory bandwidth event values obtained from each memory controller's
PMU. The memory bandwidth event settings are discovered from the memory
controller details found in /sys/bus/event_source/devices/uncore_imc_N and
stored in struct imc_counter_config.

In addition to event settings struct imc_counter_config contains
imc_counter_config::return_value in which the associated event value is
stored on every read.

The event value is read and immediately recorded at regular intervals.
The stored value is never consumed afterwards, making its storage as part
of the event configuration unnecessary.

Remove the return_value member from struct imc_counter_config. Instead
just use a more aptly named "measurement" local variable for use during
event reading.

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
Tested-by: Chen Yu <yu.c.chen@intel.com>
Reviewed-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
---
Changes since v2:
- Rename "return_value" -> "measurement". (Ilpo)
- Add Ilpo's Rb tag.
- Add Chen Yu's tag.
---
 tools/testing/selftests/resctrl/resctrl_val.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/tools/testing/selftests/resctrl/resctrl_val.c b/tools/testing/selftests/resctrl/resctrl_val.c
index a5a8badb83d4..71d6f88cc1f7 100644
--- a/tools/testing/selftests/resctrl/resctrl_val.c
+++ b/tools/testing/selftests/resctrl/resctrl_val.c
@@ -32,7 +32,6 @@ struct imc_counter_config {
 	__u64 event;
 	__u64 umask;
 	struct perf_event_attr pe;
-	struct membw_read_format return_value;
 	int fd;
 };
 
@@ -312,23 +311,23 @@ static int get_read_mem_bw_imc(float *bw_imc)
 	 * Take overflow into consideration before calculating total bandwidth.
 	 */
 	for (imc = 0; imc < imcs; imc++) {
+		struct membw_read_format measurement;
 		struct imc_counter_config *r =
 			&imc_counters_config[imc];
 
-		if (read(r->fd, &r->return_value,
-			 sizeof(struct membw_read_format)) == -1) {
+		if (read(r->fd, &measurement, sizeof(measurement)) == -1) {
 			ksft_perror("Couldn't get read bandwidth through iMC");
 			return -1;
 		}
 
-		__u64 r_time_enabled = r->return_value.time_enabled;
-		__u64 r_time_running = r->return_value.time_running;
+		__u64 r_time_enabled = measurement.time_enabled;
+		__u64 r_time_running = measurement.time_running;
 
 		if (r_time_enabled != r_time_running)
 			of_mul_read = (float)r_time_enabled /
 					(float)r_time_running;
 
-		reads += r->return_value.value * of_mul_read * SCALE;
+		reads += measurement.value * of_mul_read * SCALE;
 	}
 
 	*bw_imc = reads;
-- 
2.50.1



* [PATCH v3 04/10] selftests/resctrl: Prepare for parsing multiple events per iMC
  2026-03-13 20:32 [PATCH v3 00/10] selftests/resctrl: Fixes and improvements focused on Intel platforms Reinette Chatre
                   ` (2 preceding siblings ...)
  2026-03-13 20:32 ` [PATCH v3 03/10] selftests/resctrl: Do not store iMC counter value in counter config structure Reinette Chatre
@ 2026-03-13 20:32 ` Reinette Chatre
  2026-03-26 13:03   ` Ilpo Järvinen
  2026-03-13 20:32 ` [PATCH v3 05/10] selftests/resctrl: Support multiple events associated with iMC Reinette Chatre
                   ` (6 subsequent siblings)
  10 siblings, 1 reply; 28+ messages in thread
From: Reinette Chatre @ 2026-03-13 20:32 UTC (permalink / raw)
  To: shuah, Dave.Martin, james.morse, tony.luck, babu.moger,
	ilpo.jarvinen
  Cc: fenghuay, peternewman, zide.chen, dapeng1.mi, ben.horgan,
	yu.c.chen, jason.zeng, reinette.chatre, linux-kselftest,
	linux-kernel, patches

The events needed to read memory bandwidth are discovered by iterating
over every memory controller (iMC) within /sys/bus/event_source/devices.
Each iMC's PMU is assumed to have one event for measuring read memory
bandwidth, represented by the sysfs cas_count_read file. The event's
configuration is read from "cas_count_read" and stored as an element of
imc_counters_config[] by read_from_imc_dir(), which receives as argument
the array index at which to store the configuration.

It is possible that an iMC's PMU may have more than one event that should
be used to measure memory bandwidth.

Change semantics to not provide the index of the array to
read_from_imc_dir() but instead a pointer to the index. This enables
read_from_imc_dir() to store configurations for more than one event by
incrementing the index to imc_counters_config[] itself.

Ensure that the same type is consistently used for the index as it is
passed around during counter configuration.

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
Tested-by: Chen Yu <yu.c.chen@intel.com>
Reviewed-by: Zide Chen <zide.chen@intel.com>
---
Changes since v1:
- Add Zide Chen's RB tag.

Changes since v2:
- Add Chen Yu's tag.
---
 tools/testing/selftests/resctrl/resctrl_val.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/tools/testing/selftests/resctrl/resctrl_val.c b/tools/testing/selftests/resctrl/resctrl_val.c
index 71d6f88cc1f7..6d766347e3fc 100644
--- a/tools/testing/selftests/resctrl/resctrl_val.c
+++ b/tools/testing/selftests/resctrl/resctrl_val.c
@@ -73,7 +73,7 @@ static void read_mem_bw_ioctl_perf_event_ioc_disable(int i)
  * @cas_count_cfg:	Config
  * @count:		iMC number
  */
-static void get_read_event_and_umask(char *cas_count_cfg, int count)
+static void get_read_event_and_umask(char *cas_count_cfg, unsigned int count)
 {
 	char *token[MAX_TOKENS];
 	int i = 0;
@@ -110,7 +110,7 @@ static int open_perf_read_event(int i, int cpu_no)
 }
 
 /* Get type and config of an iMC counter's read event. */
-static int read_from_imc_dir(char *imc_dir, int count)
+static int read_from_imc_dir(char *imc_dir, unsigned int *count)
 {
 	char cas_count_cfg[1024], imc_counter_cfg[1024], imc_counter_type[1024];
 	FILE *fp;
@@ -123,7 +123,7 @@ static int read_from_imc_dir(char *imc_dir, int count)
 
 		return -1;
 	}
-	if (fscanf(fp, "%u", &imc_counters_config[count].type) <= 0) {
+	if (fscanf(fp, "%u", &imc_counters_config[*count].type) <= 0) {
 		ksft_perror("Could not get iMC type");
 		fclose(fp);
 
@@ -147,7 +147,8 @@ static int read_from_imc_dir(char *imc_dir, int count)
 	}
 	fclose(fp);
 
-	get_read_event_and_umask(cas_count_cfg, count);
+	get_read_event_and_umask(cas_count_cfg, *count);
+	*count += 1;
 
 	return 0;
 }
@@ -196,13 +197,12 @@ static int num_of_imcs(void)
 			if (temp[0] >= '0' && temp[0] <= '9') {
 				sprintf(imc_dir, "%s/%s/", DYN_PMU_PATH,
 					ep->d_name);
-				ret = read_from_imc_dir(imc_dir, count);
+				ret = read_from_imc_dir(imc_dir, &count);
 				if (ret) {
 					closedir(dp);
 
 					return ret;
 				}
-				count++;
 			}
 		}
 		closedir(dp);
-- 
2.50.1



* [PATCH v3 05/10] selftests/resctrl: Support multiple events associated with iMC
  2026-03-13 20:32 [PATCH v3 00/10] selftests/resctrl: Fixes and improvements focused on Intel platforms Reinette Chatre
                   ` (3 preceding siblings ...)
  2026-03-13 20:32 ` [PATCH v3 04/10] selftests/resctrl: Prepare for parsing multiple events per iMC Reinette Chatre
@ 2026-03-13 20:32 ` Reinette Chatre
  2026-03-27 17:28   ` Ilpo Järvinen
  2026-03-13 20:32 ` [PATCH v3 06/10] selftests/resctrl: Increase size of buffer used in MBM and MBA tests Reinette Chatre
                   ` (5 subsequent siblings)
  10 siblings, 1 reply; 28+ messages in thread
From: Reinette Chatre @ 2026-03-13 20:32 UTC (permalink / raw)
  To: shuah, Dave.Martin, james.morse, tony.luck, babu.moger,
	ilpo.jarvinen
  Cc: fenghuay, peternewman, zide.chen, dapeng1.mi, ben.horgan,
	yu.c.chen, jason.zeng, reinette.chatre, linux-kselftest,
	linux-kernel, patches

The resctrl selftests discover needed parameters to perf_event_open() via
sysfs. The PMU associated with every memory controller (iMC) is discovered
via the /sys/bus/event_source/devices/uncore_imc_N/type file while
the read memory bandwidth event type and umask is discovered via
/sys/bus/event_source/devices/uncore_imc_N/events/cas_count_read.

Newer systems may have multiple events that expose read memory bandwidth.
Running a recent kernel that includes
commit 6a8a48644c4b ("perf/x86/intel/uncore: Add per-scheduler IMC CAS count events")
on these systems exposes these events. For example:
 /sys/bus/event_source/devices/uncore_imc_N/events/cas_count_read_sch0
 /sys/bus/event_source/devices/uncore_imc_N/events/cas_count_read_sch1

Support parsing of iMC PMU properties when the PMU may have multiple events
to measure read memory bandwidth. The PMU only needs to be discovered once.
Split the parsing of event details from actual PMU discovery in order to
loop over all events associated with the PMU. Match all events with the
cas_count_read prefix instead of requiring there to be one file with that
name.
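The matching rule described above can be sketched as follows (not the
selftest source; the helper name is hypothetical, and the skip-on-'.'
behavior is the one the patch's diff comment describes):

```c
#include <assert.h>
#include <string.h>

/* Accept event files whose name contains the "cas_count_read" prefix
 * (e.g. "cas_count_read", "cas_count_read_sch0") but skip the companion
 * property files such as "cas_count_read.scale" and "cas_count_read.unit",
 * which contain a '.' in their name. */
static int is_read_bw_event(const char *name)
{
	return strstr(name, "cas_count_read") && !strchr(name, '.');
}
```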

Make the parsing code more robust. Since strings are passed around to
build the needed paths, use snprintf() instead of sprintf() so a path can
never overflow its buffer, and use the standard PATH_MAX for path
lengths. Also ensure there is enough room in imc_counters_config[]
before attempting to add an entry.
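The truncation check used in the patch relies on snprintf() returning the
length the full string would have had; a return value greater than or
equal to the buffer size therefore means the path was truncated. A
minimal sketch of this pattern (the helper name is hypothetical):

```c
#include <assert.h>
#include <stdio.h>

/* Build "<imc_dir>events" into buf, reporting truncation as an error
 * instead of silently using a cut-off path. */
static int build_events_path(char *buf, size_t bufsz, const char *imc_dir)
{
	int len = snprintf(buf, bufsz, "%sevents", imc_dir);

	if (len < 0 || (size_t)len >= bufsz)
		return -1;	/* encoding error or truncated */
	return 0;
}
```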

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
Tested-by: Chen Yu <yu.c.chen@intel.com>
Reviewed-by: Zide Chen <zide.chen@intel.com>
---
Changes since v1:
- Add Zide Chen's RB tag.

Changes since v2:
- Update changelog to note merged perf change that supports this change.
- Use PATH_MAX instead of magic number for path lengths. (Ilpo)
- Rename "org_count" -> "orig_count". (Ilpo)
- Rework flow surrounding fscanf() used in both parse_imc_read_bw_events()
  and read_from_imc_dir(). (Ilpo)
- Handle error first to reduce indentation. (Ilpo)
- Add Chen Yu's tag.
---
 tools/testing/selftests/resctrl/resctrl_val.c | 118 ++++++++++++++----
 1 file changed, 93 insertions(+), 25 deletions(-)

diff --git a/tools/testing/selftests/resctrl/resctrl_val.c b/tools/testing/selftests/resctrl/resctrl_val.c
index 6d766347e3fc..f20d2194c35f 100644
--- a/tools/testing/selftests/resctrl/resctrl_val.c
+++ b/tools/testing/selftests/resctrl/resctrl_val.c
@@ -11,10 +11,10 @@
 #include "resctrl.h"
 
 #define UNCORE_IMC		"uncore_imc"
-#define READ_FILE_NAME		"events/cas_count_read"
+#define READ_FILE_NAME		"cas_count_read"
 #define DYN_PMU_PATH		"/sys/bus/event_source/devices"
 #define SCALE			0.00006103515625
-#define MAX_IMCS		20
+#define MAX_IMCS		40
 #define MAX_TOKENS		5
 
 #define CON_MBM_LOCAL_BYTES_PATH		\
@@ -109,46 +109,114 @@ static int open_perf_read_event(int i, int cpu_no)
 	return 0;
 }
 
-/* Get type and config of an iMC counter's read event. */
-static int read_from_imc_dir(char *imc_dir, unsigned int *count)
+static int parse_imc_read_bw_events(char *imc_dir, unsigned int type,
+				    unsigned int *count)
 {
-	char cas_count_cfg[1024], imc_counter_cfg[1024], imc_counter_type[1024];
+	char imc_events_dir[PATH_MAX], imc_counter_cfg[PATH_MAX];
+	unsigned int orig_count = *count;
+	char cas_count_cfg[1024];
+	struct dirent *ep;
+	int path_len;
+	int ret = -1;
+	int num_cfg;
 	FILE *fp;
+	DIR *dp;
 
-	/* Get type of iMC counter */
-	sprintf(imc_counter_type, "%s%s", imc_dir, "type");
-	fp = fopen(imc_counter_type, "r");
-	if (!fp) {
-		ksft_perror("Failed to open iMC counter type file");
+	path_len = snprintf(imc_events_dir, sizeof(imc_events_dir), "%sevents",
+			    imc_dir);
+	if (path_len >= sizeof(imc_events_dir)) {
+		ksft_print_msg("Unable to create path to %sevents\n", imc_dir);
+		return -1;
+	}
 
+	dp = opendir(imc_events_dir);
+	if (!dp) {
+		ksft_perror("Unable to open PMU events directory");
 		return -1;
 	}
-	if (fscanf(fp, "%u", &imc_counters_config[*count].type) <= 0) {
-		ksft_perror("Could not get iMC type");
+
+	while ((ep = readdir(dp))) {
+		/*
+		 * Parse all event files with READ_FILE_NAME prefix that
+		 * contain the event number and umask. Skip files containing
+		 * "." that contain unused properties of event.
+		 */
+		if (!strstr(ep->d_name, READ_FILE_NAME) ||
+		    strchr(ep->d_name, '.'))
+			continue;
+
+		path_len = snprintf(imc_counter_cfg, sizeof(imc_counter_cfg),
+				    "%s/%s", imc_events_dir, ep->d_name);
+		if (path_len >= sizeof(imc_counter_cfg)) {
+			ksft_print_msg("Unable to create path to %s/%s\n",
+				       imc_events_dir, ep->d_name);
+			goto out_close;
+		}
+		fp = fopen(imc_counter_cfg, "r");
+		if (!fp) {
+			ksft_perror("Failed to open iMC config file");
+			goto out_close;
+		}
+		num_cfg = fscanf(fp, "%1023s", cas_count_cfg);
 		fclose(fp);
+		if (num_cfg <= 0) {
+			ksft_perror("Could not get iMC cas count read");
+			goto out_close;
+		}
+		if (*count >= MAX_IMCS) {
+			ksft_print_msg("Maximum iMC count exceeded\n");
+			goto out_close;
+		}
 
-		return -1;
+		imc_counters_config[*count].type = type;
+		get_read_event_and_umask(cas_count_cfg, *count);
+		/* Do not fail after incrementing *count. */
+		*count += 1;
 	}
-	fclose(fp);
+	if (*count == orig_count) {
+		ksft_print_msg("Unable to find events in %s\n", imc_events_dir);
+		goto out_close;
+	}
+	ret = 0;
+out_close:
+	closedir(dp);
+	return ret;
+}
 
-	/* Get read config */
-	sprintf(imc_counter_cfg, "%s%s", imc_dir, READ_FILE_NAME);
-	fp = fopen(imc_counter_cfg, "r");
-	if (!fp) {
-		ksft_perror("Failed to open iMC config file");
+/* Get type and config of an iMC counter's read event. */
+static int read_from_imc_dir(char *imc_dir, unsigned int *count)
+{
+	char imc_counter_type[PATH_MAX];
+	unsigned int type;
+	int path_len;
+	FILE *fp;
+	int ret;
 
+	/* Get type of iMC counter */
+	path_len = snprintf(imc_counter_type, sizeof(imc_counter_type),
+			    "%s%s", imc_dir, "type");
+	if (path_len >= sizeof(imc_counter_type)) {
+		ksft_print_msg("Unable to create path to %s%s\n",
+			       imc_dir, "type");
 		return -1;
 	}
-	if (fscanf(fp, "%1023s", cas_count_cfg) <= 0) {
-		ksft_perror("Could not get iMC cas count read");
-		fclose(fp);
+	fp = fopen(imc_counter_type, "r");
+	if (!fp) {
+		ksft_perror("Failed to open iMC counter type file");
 
 		return -1;
 	}
+	ret = fscanf(fp, "%u", &type);
 	fclose(fp);
-
-	get_read_event_and_umask(cas_count_cfg, *count);
-	*count += 1;
+	if (ret <= 0) {
+		ksft_perror("Could not get iMC type");
+		return -1;
+	}
+	ret = parse_imc_read_bw_events(imc_dir, type, count);
+	if (ret) {
+		ksft_print_msg("Unable to parse bandwidth event and umask\n");
+		return ret;
+	}
 
 	return 0;
 }
-- 
2.50.1


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v3 06/10] selftests/resctrl: Increase size of buffer used in MBM and MBA tests
  2026-03-13 20:32 [PATCH v3 00/10] selftests/resctrl: Fixes and improvements focused on Intel platforms Reinette Chatre
                   ` (4 preceding siblings ...)
  2026-03-13 20:32 ` [PATCH v3 05/10] selftests/resctrl: Support multiple events associated with iMC Reinette Chatre
@ 2026-03-13 20:32 ` Reinette Chatre
  2026-03-27 17:30   ` Ilpo Järvinen
  2026-03-13 20:32 ` [PATCH v3 07/10] selftests/resctrl: Raise threshold at which MBM and PMU values are compared Reinette Chatre
                   ` (4 subsequent siblings)
  10 siblings, 1 reply; 28+ messages in thread
From: Reinette Chatre @ 2026-03-13 20:32 UTC (permalink / raw)
  To: shuah, Dave.Martin, james.morse, tony.luck, babu.moger,
	ilpo.jarvinen
  Cc: fenghuay, peternewman, zide.chen, dapeng1.mi, ben.horgan,
	yu.c.chen, jason.zeng, reinette.chatre, linux-kselftest,
	linux-kernel, patches

Errata for Sierra Forest [1] (SRF42) and Granite Rapids [2] (GNR12)
describe a problem where MBM on Intel RDT may overcount memory bandwidth
measurements. The resctrl tests compare memory bandwidth reported by the
iMC PMU to that reported by MBM, causing the tests to fail on these
systems depending on the platform settings related to the errata.

Since the resctrl tests need to run under various conditions, it is not
possible to ensure that system settings are such that MBM will not overcount.
It has been observed that the overcounting can be controlled via the
buffer size used in the MBM and MBA tests that rely on comparisons
between iMC PMU and MBM measurements.

When running the MBM test on affected platforms with different buffer sizes,
it can be observed that the difference between iMC PMU and MBM counts
reduces as the buffer size increases. After increasing the buffer size to
more than 4X the L3 size, the differences between iMC PMU and MBM counts
become insignificant.

Increase the buffer size used in the MBM and MBA tests to 4X the L3 size to
reduce the possibility of tests failing due to differences between the
counts reported by the iMC PMU and MBM.

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
Tested-by: Chen Yu <yu.c.chen@intel.com>
Link: https://edc.intel.com/content/www/us/en/design/products-and-solutions/processors-and-chipsets/sierra-forest/xeon-6700-series-processor-with-e-cores-specification-update/errata-details/ # [1]
Link: https://edc.intel.com/content/www/us/en/design/products-and-solutions/processors-and-chipsets/birch-stream/xeon-6900-6700-6500-series-processors-with-p-cores-specification-update/011US/errata-details/ # [2]
---
Changes since v2:
- Add Chen Yu's tag.
---
 tools/testing/selftests/resctrl/fill_buf.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/resctrl/fill_buf.c b/tools/testing/selftests/resctrl/fill_buf.c
index 19a01a52dc1a..b9fa7968cd6e 100644
--- a/tools/testing/selftests/resctrl/fill_buf.c
+++ b/tools/testing/selftests/resctrl/fill_buf.c
@@ -139,6 +139,6 @@ ssize_t get_fill_buf_size(int cpu_no, const char *cache_type)
 	if (ret)
 		return ret;
 
-	return cache_total_size * 2 > MINIMUM_SPAN ?
-			cache_total_size * 2 : MINIMUM_SPAN;
+	return cache_total_size * 4 > MINIMUM_SPAN ?
+			cache_total_size * 4 : MINIMUM_SPAN;
 }
-- 
2.50.1


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v3 07/10] selftests/resctrl: Raise threshold at which MBM and PMU values are compared
  2026-03-13 20:32 [PATCH v3 00/10] selftests/resctrl: Fixes and improvements focused on Intel platforms Reinette Chatre
                   ` (5 preceding siblings ...)
  2026-03-13 20:32 ` [PATCH v3 06/10] selftests/resctrl: Increase size of buffer used in MBM and MBA tests Reinette Chatre
@ 2026-03-13 20:32 ` Reinette Chatre
  2026-03-27 17:34   ` Ilpo Järvinen
  2026-03-13 20:32 ` [PATCH v3 08/10] selftests/resctrl: Remove requirement on cache miss rate Reinette Chatre
                   ` (3 subsequent siblings)
  10 siblings, 1 reply; 28+ messages in thread
From: Reinette Chatre @ 2026-03-13 20:32 UTC (permalink / raw)
  To: shuah, Dave.Martin, james.morse, tony.luck, babu.moger,
	ilpo.jarvinen
  Cc: fenghuay, peternewman, zide.chen, dapeng1.mi, ben.horgan,
	yu.c.chen, jason.zeng, reinette.chatre, linux-kselftest,
	linux-kernel, patches

Commit 501cfdba0a40 ("selftests/resctrl: Do not compare performance
counters and resctrl at low bandwidth") introduced a threshold under which
memory bandwidth values from MBM and performance counters are not compared.
This is needed because MBM and the PMUs do not have an identical view of
memory bandwidth: PMUs can count all memory traffic while MBM does not
count "overhead" (for example RAS) traffic that cannot be attributed to an
RMID. As a ratio, this difference in view of memory bandwidth is pronounced
at low memory bandwidths.

The 750MiB threshold was chosen arbitrarily after comparisons on different
platforms. Exposed to more platforms since its introduction, this threshold
has proven to be inadequate.

Accurate comparison between performance counters and MBM requires
careful management of system load as well as control of features that
introduce extra memory traffic, for example patrol scrub. This is not
practical for the resctrl selftests, which are intended to run on a
variety of systems with various configurations.

Increase the memory bandwidth threshold under which no comparison is made
between performance counters and MBM. Add additional leniency by increasing
the percentage of difference that will be tolerated between these counts.

There is no impact on the validity of the resctrl selftest results as a
measure of resctrl subsystem health.

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
Tested-by: Chen Yu <yu.c.chen@intel.com>
---
Changes since v2:
- Add Chen Yu's tag.
---
 tools/testing/selftests/resctrl/mba_test.c | 2 +-
 tools/testing/selftests/resctrl/mbm_test.c | 2 +-
 tools/testing/selftests/resctrl/resctrl.h  | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/resctrl/mba_test.c b/tools/testing/selftests/resctrl/mba_test.c
index cd4c715b7ffd..39cee9898359 100644
--- a/tools/testing/selftests/resctrl/mba_test.c
+++ b/tools/testing/selftests/resctrl/mba_test.c
@@ -12,7 +12,7 @@
 
 #define RESULT_FILE_NAME	"result_mba"
 #define NUM_OF_RUNS		5
-#define MAX_DIFF_PERCENT	8
+#define MAX_DIFF_PERCENT	15
 #define ALLOCATION_MAX		100
 #define ALLOCATION_MIN		10
 #define ALLOCATION_STEP		10
diff --git a/tools/testing/selftests/resctrl/mbm_test.c b/tools/testing/selftests/resctrl/mbm_test.c
index 58201f844740..6dbbc3b76003 100644
--- a/tools/testing/selftests/resctrl/mbm_test.c
+++ b/tools/testing/selftests/resctrl/mbm_test.c
@@ -11,7 +11,7 @@
 #include "resctrl.h"
 
 #define RESULT_FILE_NAME	"result_mbm"
-#define MAX_DIFF_PERCENT	8
+#define MAX_DIFF_PERCENT	15
 #define NUM_OF_RUNS		5
 
 static int
diff --git a/tools/testing/selftests/resctrl/resctrl.h b/tools/testing/selftests/resctrl/resctrl.h
index 7f2ab28be857..3bad2d80c09b 100644
--- a/tools/testing/selftests/resctrl/resctrl.h
+++ b/tools/testing/selftests/resctrl/resctrl.h
@@ -55,7 +55,7 @@
  * and MBM respectively, for instance generating "overhead" traffic which
  * is not counted against any specific RMID.
  */
-#define THROTTLE_THRESHOLD	750
+#define THROTTLE_THRESHOLD	2500
 
 /*
  * fill_buf_param:	"fill_buf" benchmark parameters
-- 
2.50.1


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v3 08/10] selftests/resctrl: Remove requirement on cache miss rate
  2026-03-13 20:32 [PATCH v3 00/10] selftests/resctrl: Fixes and improvements focused on Intel platforms Reinette Chatre
                   ` (6 preceding siblings ...)
  2026-03-13 20:32 ` [PATCH v3 07/10] selftests/resctrl: Raise threshold at which MBM and PMU values are compared Reinette Chatre
@ 2026-03-13 20:32 ` Reinette Chatre
  2026-03-27 17:45   ` Ilpo Järvinen
  2026-03-13 20:32 ` [PATCH v3 09/10] selftests/resctrl: Simplify perf usage in CAT test Reinette Chatre
                   ` (2 subsequent siblings)
  10 siblings, 1 reply; 28+ messages in thread
From: Reinette Chatre @ 2026-03-13 20:32 UTC (permalink / raw)
  To: shuah, Dave.Martin, james.morse, tony.luck, babu.moger,
	ilpo.jarvinen
  Cc: fenghuay, peternewman, zide.chen, dapeng1.mi, ben.horgan,
	yu.c.chen, jason.zeng, reinette.chatre, linux-kselftest,
	linux-kernel, patches

As the CAT test reads the same buffer into different sized cache portions
it compares the number of cache misses against an expected percentage
based on the size of the cache portion.

Systems and test conditions vary. The CAT test is a test of resctrl
subsystem health, not of the hardware architecture, so it need not place
requirements on the size of the difference in cache misses; it only needs
to verify that the number of cache misses when reading a buffer increases
as the cache portion used for the buffer decreases.

Remove the additional constraint on how big the difference between cache
misses should be as the cache portion size changes. Only test that the
cache misses increase as the cache portion size decreases. This remains
a good sanity check of resctrl subsystem health while reducing the impact
of hardware architectural differences and the various conditions under
which the test may run.

Additionally, increase the size difference between cache portions to avoid
any consequences resulting from smaller increments.

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
Tested-by: Chen Yu <yu.c.chen@intel.com>
---
Changes since v2:
- Add Chen Yu's tag.
---
 tools/testing/selftests/resctrl/cat_test.c | 33 ++++------------------
 1 file changed, 5 insertions(+), 28 deletions(-)

diff --git a/tools/testing/selftests/resctrl/cat_test.c b/tools/testing/selftests/resctrl/cat_test.c
index f00b622c1460..8bc47f06679a 100644
--- a/tools/testing/selftests/resctrl/cat_test.c
+++ b/tools/testing/selftests/resctrl/cat_test.c
@@ -14,42 +14,20 @@
 #define RESULT_FILE_NAME	"result_cat"
 #define NUM_OF_RUNS		5
 
-/*
- * Minimum difference in LLC misses between a test with n+1 bits CBM to the
- * test with n bits is MIN_DIFF_PERCENT_PER_BIT * (n - 1). With e.g. 5 vs 4
- * bits in the CBM mask, the minimum difference must be at least
- * MIN_DIFF_PERCENT_PER_BIT * (4 - 1) = 3 percent.
- *
- * The relationship between number of used CBM bits and difference in LLC
- * misses is not expected to be linear. With a small number of bits, the
- * margin is smaller than with larger number of bits. For selftest purposes,
- * however, linear approach is enough because ultimately only pass/fail
- * decision has to be made and distinction between strong and stronger
- * signal is irrelevant.
- */
-#define MIN_DIFF_PERCENT_PER_BIT	1UL
-
 static int show_results_info(__u64 sum_llc_val, int no_of_bits,
 			     unsigned long cache_span,
-			     unsigned long min_diff_percent,
 			     unsigned long num_of_runs, bool platform,
 			     __s64 *prev_avg_llc_val)
 {
 	__u64 avg_llc_val = 0;
-	float avg_diff;
 	int ret = 0;
 
 	avg_llc_val = sum_llc_val / num_of_runs;
 	if (*prev_avg_llc_val) {
-		float delta = (__s64)(avg_llc_val - *prev_avg_llc_val);
-
-		avg_diff = delta / *prev_avg_llc_val;
-		ret = platform && (avg_diff * 100) < (float)min_diff_percent;
-
-		ksft_print_msg("%s Check cache miss rate changed more than %.1f%%\n",
-			       ret ? "Fail:" : "Pass:", (float)min_diff_percent);
+		ret = platform && (avg_llc_val < *prev_avg_llc_val);
 
-		ksft_print_msg("Percent diff=%.1f\n", avg_diff * 100);
+		ksft_print_msg("%s Check cache miss rate increased\n",
+			       ret ? "Fail:" : "Pass:");
 	}
 	*prev_avg_llc_val = avg_llc_val;
 
@@ -58,10 +36,10 @@ static int show_results_info(__u64 sum_llc_val, int no_of_bits,
 	return ret;
 }
 
-/* Remove the highest bit from CBM */
+/* Remove the highest bits from CBM */
 static unsigned long next_mask(unsigned long current_mask)
 {
-	return current_mask & (current_mask >> 1);
+	return current_mask & (current_mask >> 2);
 }
 
 static int check_results(struct resctrl_val_param *param, const char *cache_type,
@@ -112,7 +90,6 @@ static int check_results(struct resctrl_val_param *param, const char *cache_type
 
 		ret = show_results_info(sum_llc_perf_miss, bits,
 					alloc_size / 64,
-					MIN_DIFF_PERCENT_PER_BIT * (bits - 1),
 					runs, get_vendor() == ARCH_INTEL,
 					&prev_avg_llc_val);
 		if (ret)
-- 
2.50.1


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v3 09/10] selftests/resctrl: Simplify perf usage in CAT test
  2026-03-13 20:32 [PATCH v3 00/10] selftests/resctrl: Fixes and improvements focused on Intel platforms Reinette Chatre
                   ` (7 preceding siblings ...)
  2026-03-13 20:32 ` [PATCH v3 08/10] selftests/resctrl: Remove requirement on cache miss rate Reinette Chatre
@ 2026-03-13 20:32 ` Reinette Chatre
  2026-03-27 17:47   ` Ilpo Järvinen
  2026-03-13 20:32 ` [PATCH v3 10/10] selftests/resctrl: Reduce L2 impact on " Reinette Chatre
  2026-03-31 19:13 ` [PATCH v3 00/10] selftests/resctrl: Fixes and improvements focused on Intel platforms Shuah Khan
  10 siblings, 1 reply; 28+ messages in thread
From: Reinette Chatre @ 2026-03-13 20:32 UTC (permalink / raw)
  To: shuah, Dave.Martin, james.morse, tony.luck, babu.moger,
	ilpo.jarvinen
  Cc: fenghuay, peternewman, zide.chen, dapeng1.mi, ben.horgan,
	yu.c.chen, jason.zeng, reinette.chatre, linux-kselftest,
	linux-kernel, patches

The CAT test relies on the PERF_COUNT_HW_CACHE_MISSES event to determine if
modifying a cache portion size is successful. This event is configured to
report the data as part of an event group, but no other events are added to
the group.

Remove the unnecessary PERF_FORMAT_GROUP read format. This eliminates
the need for struct perf_event_read and makes read() of the associated
file descriptor return just the one value associated with the
PERF_COUNT_HW_CACHE_MISSES event of interest.

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
Tested-by: Chen Yu <yu.c.chen@intel.com>
---
Changes since v2:
- Add Chen Yu's tag.
---
 tools/testing/selftests/resctrl/cache.c    | 17 +++++------------
 tools/testing/selftests/resctrl/cat_test.c |  4 +---
 tools/testing/selftests/resctrl/resctrl.h  | 11 +----------
 3 files changed, 7 insertions(+), 25 deletions(-)

diff --git a/tools/testing/selftests/resctrl/cache.c b/tools/testing/selftests/resctrl/cache.c
index bef71b6feacc..df9bea584a2d 100644
--- a/tools/testing/selftests/resctrl/cache.c
+++ b/tools/testing/selftests/resctrl/cache.c
@@ -10,7 +10,6 @@ void perf_event_attr_initialize(struct perf_event_attr *pea, __u64 config)
 	memset(pea, 0, sizeof(*pea));
 	pea->type = PERF_TYPE_HARDWARE;
 	pea->size = sizeof(*pea);
-	pea->read_format = PERF_FORMAT_GROUP;
 	pea->exclude_kernel = 1;
 	pea->exclude_hv = 1;
 	pea->exclude_idle = 1;
@@ -37,19 +36,13 @@ int perf_event_reset_enable(int pe_fd)
 	return 0;
 }
 
-void perf_event_initialize_read_format(struct perf_event_read *pe_read)
-{
-	memset(pe_read, 0, sizeof(*pe_read));
-	pe_read->nr = 1;
-}
-
 int perf_open(struct perf_event_attr *pea, pid_t pid, int cpu_no)
 {
 	int pe_fd;
 
 	pe_fd = perf_event_open(pea, pid, cpu_no, -1, PERF_FLAG_FD_CLOEXEC);
 	if (pe_fd == -1) {
-		ksft_perror("Error opening leader");
+		ksft_perror("Unable to set up performance monitoring");
 		return -1;
 	}
 
@@ -132,9 +125,9 @@ static int print_results_cache(const char *filename, pid_t bm_pid, __u64 llc_val
  *
  * Return: =0 on success. <0 on failure.
  */
-int perf_event_measure(int pe_fd, struct perf_event_read *pe_read,
-		       const char *filename, pid_t bm_pid)
+int perf_event_measure(int pe_fd, const char *filename, pid_t bm_pid)
 {
+	__u64 value;
 	int ret;
 
 	/* Stop counters after one span to get miss rate */
@@ -142,13 +135,13 @@ int perf_event_measure(int pe_fd, struct perf_event_read *pe_read,
 	if (ret < 0)
 		return ret;
 
-	ret = read(pe_fd, pe_read, sizeof(*pe_read));
+	ret = read(pe_fd, &value, sizeof(value));
 	if (ret == -1) {
 		ksft_perror("Could not get perf value");
 		return -1;
 	}
 
-	return print_results_cache(filename, bm_pid, pe_read->values[0].value);
+	return print_results_cache(filename, bm_pid, value);
 }
 
 /*
diff --git a/tools/testing/selftests/resctrl/cat_test.c b/tools/testing/selftests/resctrl/cat_test.c
index 8bc47f06679a..6aac03147d41 100644
--- a/tools/testing/selftests/resctrl/cat_test.c
+++ b/tools/testing/selftests/resctrl/cat_test.c
@@ -135,7 +135,6 @@ static int cat_test(const struct resctrl_test *test,
 		    struct resctrl_val_param *param,
 		    size_t span, unsigned long current_mask)
 {
-	struct perf_event_read pe_read;
 	struct perf_event_attr pea;
 	cpu_set_t old_affinity;
 	unsigned char *buf;
@@ -159,7 +158,6 @@ static int cat_test(const struct resctrl_test *test,
 		goto reset_affinity;
 
 	perf_event_attr_initialize(&pea, PERF_COUNT_HW_CACHE_MISSES);
-	perf_event_initialize_read_format(&pe_read);
 	pe_fd = perf_open(&pea, bm_pid, uparams->cpu);
 	if (pe_fd < 0) {
 		ret = -1;
@@ -192,7 +190,7 @@ static int cat_test(const struct resctrl_test *test,
 
 			fill_cache_read(buf, span, true);
 
-			ret = perf_event_measure(pe_fd, &pe_read, param->filename, bm_pid);
+			ret = perf_event_measure(pe_fd, param->filename, bm_pid);
 			if (ret)
 				goto free_buf;
 		}
diff --git a/tools/testing/selftests/resctrl/resctrl.h b/tools/testing/selftests/resctrl/resctrl.h
index 3bad2d80c09b..175101022bf3 100644
--- a/tools/testing/selftests/resctrl/resctrl.h
+++ b/tools/testing/selftests/resctrl/resctrl.h
@@ -148,13 +148,6 @@ struct resctrl_val_param {
 	struct fill_buf_param	*fill_buf;
 };
 
-struct perf_event_read {
-	__u64 nr;			/* The number of events */
-	struct {
-		__u64 value;		/* The value of the event */
-	} values[2];
-};
-
 /*
  * Memory location that consumes values compiler must not optimize away.
  * Volatile ensures writes to this location cannot be optimized away by
@@ -210,11 +203,9 @@ unsigned int count_bits(unsigned long n);
 int snc_kernel_support(void);
 
 void perf_event_attr_initialize(struct perf_event_attr *pea, __u64 config);
-void perf_event_initialize_read_format(struct perf_event_read *pe_read);
 int perf_open(struct perf_event_attr *pea, pid_t pid, int cpu_no);
 int perf_event_reset_enable(int pe_fd);
-int perf_event_measure(int pe_fd, struct perf_event_read *pe_read,
-		       const char *filename, pid_t bm_pid);
+int perf_event_measure(int pe_fd, const char *filename, pid_t bm_pid);
 int measure_llc_resctrl(const char *filename, pid_t bm_pid);
 int minimize_l2_occupancy(const struct resctrl_test *test,
 			  const struct user_params *uparams,
-- 
2.50.1


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v3 10/10] selftests/resctrl: Reduce L2 impact on CAT test
  2026-03-13 20:32 [PATCH v3 00/10] selftests/resctrl: Fixes and improvements focused on Intel platforms Reinette Chatre
                   ` (8 preceding siblings ...)
  2026-03-13 20:32 ` [PATCH v3 09/10] selftests/resctrl: Simplify perf usage in CAT test Reinette Chatre
@ 2026-03-13 20:32 ` Reinette Chatre
  2026-03-27 17:49   ` Ilpo Järvinen
  2026-03-31 19:13 ` [PATCH v3 00/10] selftests/resctrl: Fixes and improvements focused on Intel platforms Shuah Khan
  10 siblings, 1 reply; 28+ messages in thread
From: Reinette Chatre @ 2026-03-13 20:32 UTC (permalink / raw)
  To: shuah, Dave.Martin, james.morse, tony.luck, babu.moger,
	ilpo.jarvinen
  Cc: fenghuay, peternewman, zide.chen, dapeng1.mi, ben.horgan,
	yu.c.chen, jason.zeng, reinette.chatre, linux-kselftest,
	linux-kernel, patches

The L3 CAT test loads a buffer into cache that is proportional to the L3
size allocated for the workload and measures cache misses when accessing
the buffer as a test of L3 occupancy. When loading the buffer it can be
assumed that a portion of the buffer will be loaded into the L2 cache and,
depending on cache design, may not be present in L3. It is thus possible
for data to be absent from L3 yet not trigger an L3 cache miss when
accessed.

Reduce the impact of L2 on the L3 CAT test by minimizing, when L2
allocation is supported, the portion of L2 that the workload can allocate
into. This encourages most of the buffer to be loaded into L3 and supports
a better comparison between buffer size, cache portion, and cache misses
when accessing the buffer.

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
Tested-by: Chen Yu <yu.c.chen@intel.com>
---
Changes since v2:
- Add Chen Yu's tag.
---
 tools/testing/selftests/resctrl/cat_test.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/tools/testing/selftests/resctrl/cat_test.c b/tools/testing/selftests/resctrl/cat_test.c
index 6aac03147d41..371a2f26dc47 100644
--- a/tools/testing/selftests/resctrl/cat_test.c
+++ b/tools/testing/selftests/resctrl/cat_test.c
@@ -157,6 +157,10 @@ static int cat_test(const struct resctrl_test *test,
 	if (ret)
 		goto reset_affinity;
 
+	ret = minimize_l2_occupancy(test, uparams, param);
+	if (ret)
+		goto reset_affinity;
+
 	perf_event_attr_initialize(&pea, PERF_COUNT_HW_CACHE_MISSES);
 	pe_fd = perf_open(&pea, bm_pid, uparams->cpu);
 	if (pe_fd < 0) {
-- 
2.50.1


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* Re: [PATCH v3 01/10] selftests/resctrl: Improve accuracy of cache occupancy test
  2026-03-13 20:32 ` [PATCH v3 01/10] selftests/resctrl: Improve accuracy of cache occupancy test Reinette Chatre
@ 2026-03-26 12:44   ` Ilpo Järvinen
  0 siblings, 0 replies; 28+ messages in thread
From: Ilpo Järvinen @ 2026-03-26 12:44 UTC (permalink / raw)
  To: Reinette Chatre
  Cc: shuah, Dave.Martin, james.morse, tony.luck, babu.moger, fenghuay,
	peternewman, zide.chen, dapeng1.mi, ben.horgan, yu.c.chen,
	jason.zeng, linux-kselftest, LKML, patches

[-- Attachment #1: Type: text/plain, Size: 7992 bytes --]

On Fri, 13 Mar 2026, Reinette Chatre wrote:

> Dave Martin reported inconsistent CMT test failures. In one experiment
> the first run of the CMT test failed because of too large (24%) difference
> between measured and achievable cache occupancy while the second run passed
> with an acceptable 4% difference.
> 
> The CMT test is susceptible to interference from the rest of the system.
> This can be demonstrated with a utility like stress-ng by running the CMT
> test while introducing cache misses using:
> 
>    stress-ng --matrix-3d 0 --matrix-3d-zyx
> 
> Below shows an example of the CMT test failing because of a significant
> difference between measured and achievable cache occupancy when run with
> interference:
>     # Starting CMT test ...
>     # Mounting resctrl to "/sys/fs/resctrl"
>     # Cache size :335544320
>     # Writing benchmark parameters to resctrl FS
>     # Benchmark PID: 7011
>     # Checking for pass/fail
>     # Fail: Check cache miss rate within 15%
>     # Percent diff=99
>     # Number of bits: 5
>     # Average LLC val: 235929
>     # Cache span (bytes): 83886080
>     not ok 1 CMT: test
> 
> The CMT test creates a new control group that is also capable of monitoring
> and assigns the workload to it. The workload allocates a buffer that by
> default fills a portion of the L3 and keeps reading from the buffer,
> measuring the L3 occupancy at intervals. The test passes if the workload's
> L3 occupancy is within 15% of the buffer size.
> 
> By not adjusting any capacity bitmasks the workload shares the cache with
> the rest of the system. Any other task that may be running could evict
> the workload's data from the cache causing it to have low cache occupancy.
> 
> Reduce interference from the rest of the system by ensuring that the
> workload's control group uses the capacity bitmask found in the user
> parameters for L3 and that the rest of the system can only allocate into
> the inverse of the workload's L3 cache portion. Other tasks can thus no
> longer evict the workload's data from L3.
> 
> With the above adjustments the CMT test is more consistent. Repeating the
> CMT test while generating interference with stress-ng on a sample
> system after applying the fixes show significant improvement in test
> accuracy:
> 
>     # Starting CMT test ...
>     # Mounting resctrl to "/sys/fs/resctrl"
>     # Cache size :335544320
>     # Writing benchmark parameters to resctrl FS
>     # Write schema "L3:0=fffe0" to resctrl FS
>     # Write schema "L3:0=1f" to resctrl FS
>     # Benchmark PID: 7089
>     # Checking for pass/fail
>     # Pass: Check cache miss rate within 15%
>     # Percent diff=12
>     # Number of bits: 5
>     # Average LLC val: 73269248
>     # Cache span (bytes): 83886080
>     ok 1 CMT: test
> 
> Reported-by: Dave Martin <Dave.Martin@arm.com>
> Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
> Tested-by: Chen Yu <yu.c.chen@intel.com>
> Link: https://lore.kernel.org/lkml/aO+7MeSMV29VdbQs@e133380.arm.com/
> ---
> Changes since v1:
> - Fix typo in changelog: "data my be in L2" -> "data may be in L2".
> 
> Changes since v2:
> - Split patch to separate changes impacting L3 and L2 resource. (Ilpo)
> - Re-run tests after patch split to ensure test impact match patch
>   and update changelog with refreshed data.
> - Since fix is now split across two patches: "Closes:" -> "Link:"
> - Rename "long_mask" to "full_mask". (Ilpo)
> - Add Chen Yu's tag.
> ---
>  tools/testing/selftests/resctrl/cmt_test.c    | 26 +++++++++++++++++--
>  tools/testing/selftests/resctrl/mba_test.c    |  4 ++-
>  tools/testing/selftests/resctrl/mbm_test.c    |  4 ++-
>  tools/testing/selftests/resctrl/resctrl.h     |  4 ++-
>  tools/testing/selftests/resctrl/resctrl_val.c |  2 +-
>  5 files changed, 34 insertions(+), 6 deletions(-)
> 
> diff --git a/tools/testing/selftests/resctrl/cmt_test.c b/tools/testing/selftests/resctrl/cmt_test.c
> index d09e693dc739..7bc6cf49c1c5 100644
> --- a/tools/testing/selftests/resctrl/cmt_test.c
> +++ b/tools/testing/selftests/resctrl/cmt_test.c
> @@ -19,12 +19,34 @@
>  #define CON_MON_LCC_OCCUP_PATH		\
>  	"%s/%s/mon_data/mon_L3_%02d/llc_occupancy"
>  
> -static int cmt_init(const struct resctrl_val_param *param, int domain_id)
> +/*
> + * Initialize capacity bitmasks (CBMs) of:
> + * - control group being tested per test parameters,
> + * - default resource group as inverse of control group being tested to prevent
> + *   other tasks from interfering with test.
> + */
> +static int cmt_init(const struct resctrl_test *test,
> +		    const struct user_params *uparams,
> +		    const struct resctrl_val_param *param, int domain_id)
>  {
> +	unsigned long full_mask;
> +	char schemata[64];
> +	int ret;
> +
>  	sprintf(llc_occup_path, CON_MON_LCC_OCCUP_PATH, RESCTRL_PATH,
>  		param->ctrlgrp, domain_id);
>  
> -	return 0;
> +	ret = get_full_cbm(test->resource, &full_mask);
> +	if (ret)
> +		return ret;
> +
> +	snprintf(schemata, sizeof(schemata), "%lx", ~param->mask & full_mask);
> +	ret = write_schemata("", schemata, uparams->cpu, test->resource);
> +	if (ret)
> +		return ret;
> +
> +	snprintf(schemata, sizeof(schemata), "%lx", param->mask);
> +	return write_schemata(param->ctrlgrp, schemata, uparams->cpu, test->resource);
>  }
>  
>  static int cmt_setup(const struct resctrl_test *test,
> diff --git a/tools/testing/selftests/resctrl/mba_test.c b/tools/testing/selftests/resctrl/mba_test.c
> index c7e9adc0368f..cd4c715b7ffd 100644
> --- a/tools/testing/selftests/resctrl/mba_test.c
> +++ b/tools/testing/selftests/resctrl/mba_test.c
> @@ -17,7 +17,9 @@
>  #define ALLOCATION_MIN		10
>  #define ALLOCATION_STEP		10
>  
> -static int mba_init(const struct resctrl_val_param *param, int domain_id)
> +static int mba_init(const struct resctrl_test *test,
> +		    const struct user_params *uparams,
> +		    const struct resctrl_val_param *param, int domain_id)
>  {
>  	int ret;
>  
> diff --git a/tools/testing/selftests/resctrl/mbm_test.c b/tools/testing/selftests/resctrl/mbm_test.c
> index 84d8bc250539..58201f844740 100644
> --- a/tools/testing/selftests/resctrl/mbm_test.c
> +++ b/tools/testing/selftests/resctrl/mbm_test.c
> @@ -83,7 +83,9 @@ static int check_results(size_t span)
>  	return ret;
>  }
>  
> -static int mbm_init(const struct resctrl_val_param *param, int domain_id)
> +static int mbm_init(const struct resctrl_test *test,
> +		    const struct user_params *uparams,
> +		    const struct resctrl_val_param *param, int domain_id)
>  {
>  	int ret;
>  
> diff --git a/tools/testing/selftests/resctrl/resctrl.h b/tools/testing/selftests/resctrl/resctrl.h
> index afe635b6e48d..c72045c74ac4 100644
> --- a/tools/testing/selftests/resctrl/resctrl.h
> +++ b/tools/testing/selftests/resctrl/resctrl.h
> @@ -135,7 +135,9 @@ struct resctrl_val_param {
>  	char			filename[64];
>  	unsigned long		mask;
>  	int			num_of_runs;
> -	int			(*init)(const struct resctrl_val_param *param,
> +	int			(*init)(const struct resctrl_test *test,
> +					const struct user_params *uparams,
> +					const struct resctrl_val_param *param,
>  					int domain_id);
>  	int			(*setup)(const struct resctrl_test *test,
>  					 const struct user_params *uparams,
> diff --git a/tools/testing/selftests/resctrl/resctrl_val.c b/tools/testing/selftests/resctrl/resctrl_val.c
> index 7c08e936572d..a5a8badb83d4 100644
> --- a/tools/testing/selftests/resctrl/resctrl_val.c
> +++ b/tools/testing/selftests/resctrl/resctrl_val.c
> @@ -569,7 +569,7 @@ int resctrl_val(const struct resctrl_test *test,
>  		goto reset_affinity;
>  
>  	if (param->init) {
> -		ret = param->init(param, domain_id);
> +		ret = param->init(test, uparams, param, domain_id);
>  		if (ret)
>  			goto reset_affinity;
>  	}
> 

Reviewed-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>

-- 
 i.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v3 02/10] selftests/resctrl: Reduce interference from L2 occupancy during cache occupancy test
  2026-03-13 20:32 ` [PATCH v3 02/10] selftests/resctrl: Reduce interference from L2 occupancy during " Reinette Chatre
@ 2026-03-26 12:56   ` Ilpo Järvinen
  0 siblings, 0 replies; 28+ messages in thread
From: Ilpo Järvinen @ 2026-03-26 12:56 UTC (permalink / raw)
  To: Reinette Chatre
  Cc: shuah, Dave.Martin, james.morse, tony.luck, babu.moger, fenghuay,
	peternewman, zide.chen, dapeng1.mi, ben.horgan, yu.c.chen,
	jason.zeng, linux-kselftest, LKML, patches

On Fri, 13 Mar 2026, Reinette Chatre wrote:

> The CMT test creates a new control group that is also capable of monitoring
> and assigns the workload to it. The workload allocates a buffer that by
> default fills a portion of the L3 and keeps reading from the buffer,
> measuring the L3 occupancy at intervals. The test passes if the workload's
> L3 occupancy is within 15% of the buffer size.
> 
> The CMT test does not take into account that some of the workload's data
> may land in L2/L1. Matching L3 occupancy to the size of the buffer while
> a portion of the buffer can be allocated into L2 is not accurate.
> 
> Take the L2 cache into account to improve test accuracy:
>  - Reduce the workload's L2 cache allocation to the minimum on systems that
>    support L2 cache allocation. Do so with a new utility in preparation for
>    all L3 cache allocation tests needing the same capability.
>  - Increase the buffer size to accommodate data that may be allocated into
>    the L2 cache. Use a buffer size double the L3 portion to keep using the
>    L3 portion size as goal for L3 occupancy while taking into account that
>    some of the data may be in L2.
> 
> Running the CMT test on a sample system while introducing significant
> cache misses using "stress-ng --matrix-3d 0 --matrix-3d-zyx" shows
> significant improvement in L3 cache occupancy:
> 
> Before:
> 
>     # Starting CMT test ...
>     # Mounting resctrl to "/sys/fs/resctrl"
>     # Cache size :335544320
>     # Writing benchmark parameters to resctrl FS
>     # Write schema "L3:0=fffe0" to resctrl FS
>     # Write schema "L3:0=1f" to resctrl FS
>     # Benchmark PID: 7089
>     # Checking for pass/fail
>     # Pass: Check cache miss rate within 15%
>     # Percent diff=12
>     # Number of bits: 5
>     # Average LLC val: 73269248
>     # Cache span (bytes): 83886080
>     ok 1 CMT: test
> 
> After:
>     # Starting CMT test ...
>     # Mounting resctrl to "/sys/fs/resctrl"
>     # Cache size :335544320
>     # Writing benchmark parameters to resctrl FS
>     # Write schema "L3:0=fffe0" to resctrl FS
>     # Write schema "L3:0=1f" to resctrl FS
>     # Write schema "L2:1=0x1" to resctrl FS
>     # Benchmark PID: 7171
>     # Checking for pass/fail
>     # Pass: Check cache miss rate within 15%
>     # Percent diff=0
>     # Number of bits: 5
>     # Average LLC val: 83755008
>     # Cache span (bytes): 83886080
>     ok 1 CMT: test
> 
> Reported-by: Dave Martin <Dave.Martin@arm.com>
> Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
> Tested-by: Chen Yu <yu.c.chen@intel.com>
> Link: https://lore.kernel.org/lkml/aO+7MeSMV29VdbQs@e133380.arm.com/
> ---
> Changes since v2:
> - New patch split from v1's "selftests/resctrl: Improve accuracy of cache
>   occupancy test". (Ilpo)
> - Reword changelog. (Ilpo)
> - Update data used in changelog to match code after patch split.
> - Introduce utility to reduce L2 cache allocation. (Ilpo)
> - Add Chen Yu's tag.
> ---
>  tools/testing/selftests/resctrl/cache.c    | 13 +++++++++++++
>  tools/testing/selftests/resctrl/cmt_test.c | 14 ++++++++++----
>  tools/testing/selftests/resctrl/resctrl.h  |  3 +++
>  3 files changed, 26 insertions(+), 4 deletions(-)
> 
> diff --git a/tools/testing/selftests/resctrl/cache.c b/tools/testing/selftests/resctrl/cache.c
> index 1ff1104e6575..bef71b6feacc 100644
> --- a/tools/testing/selftests/resctrl/cache.c
> +++ b/tools/testing/selftests/resctrl/cache.c
> @@ -173,6 +173,19 @@ int measure_llc_resctrl(const char *filename, pid_t bm_pid)
>  	return print_results_cache(filename, bm_pid, llc_occu_resc);
>  }
>  
> +/*
> + * Reduce L2 allocation to minimum when testing L3 cache allocation.
> + */
> +int minimize_l2_occupancy(const struct resctrl_test *test,
> +			  const struct user_params *uparams,
> +			  const struct resctrl_val_param *param)
> +{
> +	if (!strcmp(test->resource, "L3") && resctrl_resource_exists("L2"))
> +		return write_schemata(param->ctrlgrp, "0x1", uparams->cpu, "L2");
> +
> +	return 0;
> +}
> +
>  /*
>   * show_cache_info - Show generic cache test information
>   * @no_of_bits:		Number of bits
> diff --git a/tools/testing/selftests/resctrl/cmt_test.c b/tools/testing/selftests/resctrl/cmt_test.c
> index 7bc6cf49c1c5..ccb6fe881a94 100644
> --- a/tools/testing/selftests/resctrl/cmt_test.c
> +++ b/tools/testing/selftests/resctrl/cmt_test.c
> @@ -23,7 +23,9 @@
>   * Initialize capacity bitmasks (CBMs) of:
>   * - control group being tested per test parameters,
>   * - default resource group as inverse of control group being tested to prevent
> - *   other tasks from interfering with test.
> + *   other tasks from interfering with test,
> + * - L2 resource of control group being tested to minimize allocations into
> + *   L2 if possible to better predict L3 occupancy.
>   */
>  static int cmt_init(const struct resctrl_test *test,
>  		    const struct user_params *uparams,
> @@ -46,7 +48,11 @@ static int cmt_init(const struct resctrl_test *test,
>  		return ret;
>  
>  	snprintf(schemata, sizeof(schemata), "%lx", param->mask);
> -	return write_schemata(param->ctrlgrp, schemata, uparams->cpu, test->resource);
> +	ret = write_schemata(param->ctrlgrp, schemata, uparams->cpu, test->resource);
> +	if (ret)
> +		return ret;
> +
> +	return minimize_l2_occupancy(test, uparams, param);
>  }
>  
>  static int cmt_setup(const struct resctrl_test *test,
> @@ -175,11 +181,11 @@ static int cmt_run_test(const struct resctrl_test *test, const struct user_param
>  	span = cache_portion_size(cache_total_size, param.mask, long_mask);
>  
>  	if (uparams->fill_buf) {
> -		fill_buf.buf_size = span;
> +		fill_buf.buf_size = span * 2;
>  		fill_buf.memflush = uparams->fill_buf->memflush;
>  		param.fill_buf = &fill_buf;
>  	} else if (!uparams->benchmark_cmd[0]) {
> -		fill_buf.buf_size = span;
> +		fill_buf.buf_size = span * 2;
>  		fill_buf.memflush = true;
>  		param.fill_buf = &fill_buf;
>  	}
> diff --git a/tools/testing/selftests/resctrl/resctrl.h b/tools/testing/selftests/resctrl/resctrl.h
> index c72045c74ac4..7f2ab28be857 100644
> --- a/tools/testing/selftests/resctrl/resctrl.h
> +++ b/tools/testing/selftests/resctrl/resctrl.h
> @@ -216,6 +216,9 @@ int perf_event_reset_enable(int pe_fd);
>  int perf_event_measure(int pe_fd, struct perf_event_read *pe_read,
>  		       const char *filename, pid_t bm_pid);
>  int measure_llc_resctrl(const char *filename, pid_t bm_pid);
> +int minimize_l2_occupancy(const struct resctrl_test *test,
> +			  const struct user_params *uparams,
> +			  const struct resctrl_val_param *param);
>  void show_cache_info(int no_of_bits, __u64 avg_llc_val, size_t cache_span, bool lines);
>  
>  /*
> 

Reviewed-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>

-- 
 i.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v3 04/10] selftests/resctrl: Prepare for parsing multiple events per iMC
  2026-03-13 20:32 ` [PATCH v3 04/10] selftests/resctrl: Prepare for parsing multiple events per iMC Reinette Chatre
@ 2026-03-26 13:03   ` Ilpo Järvinen
  2026-03-26 14:34     ` Reinette Chatre
  0 siblings, 1 reply; 28+ messages in thread
From: Ilpo Järvinen @ 2026-03-26 13:03 UTC (permalink / raw)
  To: Reinette Chatre
  Cc: shuah, Dave.Martin, james.morse, tony.luck, babu.moger, fenghuay,
	peternewman, zide.chen, dapeng1.mi, ben.horgan, yu.c.chen,
	jason.zeng, linux-kselftest, LKML, patches

On Fri, 13 Mar 2026, Reinette Chatre wrote:

> The events needed to read memory bandwidth are discovered by iterating
> over every memory controller (iMC) within /sys/bus/event_source/devices.
> Each iMC's PMU is assumed to have one event to measure read memory
> bandwidth that is represented by the sysfs cas_count_read file. The event's
> configuration is read from "cas_count_read" and stored as an element of
> imc_counters_config[] by read_from_imc_dir(), which receives as an argument
> the index of the array element where the configuration should be stored.
> 
> It is possible that an iMC's PMU may have more than one event that should
> be used to measure memory bandwidth.
> 
> Change semantics to not provide the index of the array to
> read_from_imc_dir() but instead a pointer to the index. This enables
> read_from_imc_dir() to store configurations for more than one event by
> incrementing the index to imc_counters_config[] itself.
> 
> Ensure that the same type is consistently used for the index as it is
> passed around during counter configuration.
> 
> Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
> Tested-by: Chen Yu <yu.c.chen@intel.com>
> Reviewed-by: Zide Chen <zide.chen@intel.com>
> ---
> Changes since v1:
> - Add Zide Chen's RB tag.
> 
> Changes since v2:
> - Add Chen Yu's tag.
> ---
>  tools/testing/selftests/resctrl/resctrl_val.c | 12 ++++++------
>  1 file changed, 6 insertions(+), 6 deletions(-)
> 
> diff --git a/tools/testing/selftests/resctrl/resctrl_val.c b/tools/testing/selftests/resctrl/resctrl_val.c
> index 71d6f88cc1f7..6d766347e3fc 100644
> --- a/tools/testing/selftests/resctrl/resctrl_val.c
> +++ b/tools/testing/selftests/resctrl/resctrl_val.c
> @@ -73,7 +73,7 @@ static void read_mem_bw_ioctl_perf_event_ioc_disable(int i)
>   * @cas_count_cfg:	Config
>   * @count:		iMC number
>   */
> -static void get_read_event_and_umask(char *cas_count_cfg, int count)
> +static void get_read_event_and_umask(char *cas_count_cfg, unsigned int count)
>  {
>  	char *token[MAX_TOKENS];
>  	int i = 0;
> @@ -110,7 +110,7 @@ static int open_perf_read_event(int i, int cpu_no)
>  }
>  
>  /* Get type and config of an iMC counter's read event. */
> -static int read_from_imc_dir(char *imc_dir, int count)
> +static int read_from_imc_dir(char *imc_dir, unsigned int *count)
>  {
>  	char cas_count_cfg[1024], imc_counter_cfg[1024], imc_counter_type[1024];
>  	FILE *fp;
> @@ -123,7 +123,7 @@ static int read_from_imc_dir(char *imc_dir, int count)
>  
>  		return -1;
>  	}
> -	if (fscanf(fp, "%u", &imc_counters_config[count].type) <= 0) {
> +	if (fscanf(fp, "%u", &imc_counters_config[*count].type) <= 0) {
>  		ksft_perror("Could not get iMC type");
>  		fclose(fp);
>  
> @@ -147,7 +147,8 @@ static int read_from_imc_dir(char *imc_dir, int count)
>  	}
>  	fclose(fp);
>  
> -	get_read_event_and_umask(cas_count_cfg, count);
> +	get_read_event_and_umask(cas_count_cfg, *count);
> +	*count += 1;
>  
>  	return 0;
>  }
> @@ -196,13 +197,12 @@ static int num_of_imcs(void)
>  			if (temp[0] >= '0' && temp[0] <= '9') {
>  				sprintf(imc_dir, "%s/%s/", DYN_PMU_PATH,
>  					ep->d_name);
> -				ret = read_from_imc_dir(imc_dir, count);
> +				ret = read_from_imc_dir(imc_dir, &count);
>  				if (ret) {
>  					closedir(dp);
>  
>  					return ret;
>  				}
> -				count++;
>  			}
>  		}
>  		closedir(dp);
> 

Reviewed-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>

-- 
 i.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v3 04/10] selftests/resctrl: Prepare for parsing multiple events per iMC
  2026-03-26 13:03   ` Ilpo Järvinen
@ 2026-03-26 14:34     ` Reinette Chatre
  0 siblings, 0 replies; 28+ messages in thread
From: Reinette Chatre @ 2026-03-26 14:34 UTC (permalink / raw)
  To: Ilpo Järvinen
  Cc: shuah, Dave.Martin, james.morse, tony.luck, babu.moger, fenghuay,
	peternewman, zide.chen, dapeng1.mi, ben.horgan, yu.c.chen,
	jason.zeng, linux-kselftest, LKML, patches

Hi Ilpo,

On 3/26/26 6:03 AM, Ilpo Järvinen wrote:
> On Fri, 13 Mar 2026, Reinette Chatre wrote:
> 
>> The events needed to read memory bandwidth are discovered by iterating
>> over every memory controller (iMC) within /sys/bus/event_source/devices.
>> Each iMC's PMU is assumed to have one event to measure read memory
>> bandwidth that is represented by the sysfs cas_count_read file. The event's
>> configuration is read from "cas_count_read" and stored as an element of
>> imc_counters_config[] by read_from_imc_dir(), which receives as an argument
>> the index of the array element where the configuration should be stored.
>>
>> It is possible that an iMC's PMU may have more than one event that should
>> be used to measure memory bandwidth.
>>
>> Change semantics to not provide the index of the array to
>> read_from_imc_dir() but instead a pointer to the index. This enables
>> read_from_imc_dir() to store configurations for more than one event by
>> incrementing the index to imc_counters_config[] itself.
>>
>> Ensure that the same type is consistently used for the index as it is
>> passed around during counter configuration.
>>
>> Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
>> Tested-by: Chen Yu <yu.c.chen@intel.com>
>> Reviewed-by: Zide Chen <zide.chen@intel.com>
>> ---

...

> 
> Reviewed-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
> 

Thank you very much for all your reviews. Much appreciated.

Reinette

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v3 05/10] selftests/resctrl: Support multiple events associated with iMC
  2026-03-13 20:32 ` [PATCH v3 05/10] selftests/resctrl: Support multiple events associated with iMC Reinette Chatre
@ 2026-03-27 17:28   ` Ilpo Järvinen
  0 siblings, 0 replies; 28+ messages in thread
From: Ilpo Järvinen @ 2026-03-27 17:28 UTC (permalink / raw)
  To: Reinette Chatre
  Cc: shuah, Dave.Martin, james.morse, tony.luck, babu.moger, fenghuay,
	peternewman, zide.chen, dapeng1.mi, ben.horgan, yu.c.chen,
	jason.zeng, linux-kselftest, LKML, patches

On Fri, 13 Mar 2026, Reinette Chatre wrote:

> The resctrl selftests discover needed parameters to perf_event_open() via
> sysfs. The PMU associated with every memory controller (iMC) is discovered
> via the /sys/bus/event_source/devices/uncore_imc_N/type file while
> the read memory bandwidth event type and umask is discovered via
> /sys/bus/event_source/devices/uncore_imc_N/events/cas_count_read.
> 
> Newer systems may have multiple events that expose read memory bandwidth.
> Running a recent kernel that includes
> commit 6a8a48644c4b ("perf/x86/intel/uncore: Add per-scheduler IMC CAS count events")
> on these systems exposes these multiple events. For example,
>  /sys/bus/event_source/devices/uncore_imc_N/events/cas_count_read_sch0
>  /sys/bus/event_source/devices/uncore_imc_N/events/cas_count_read_sch1
> 
> Support parsing of iMC PMU properties when the PMU may have multiple events
> to measure read memory bandwidth. The PMU only needs to be discovered once.
> Split the parsing of event details from actual PMU discovery in order to
> loop over all events associated with the PMU. Match all events with the
> cas_count_read prefix instead of requiring there to be one file with that
> name.
> 
> Make the parsing code more robust. With strings passed around to create
> needed paths, use snprintf() instead of sprintf() to ensure there is
> always enough space to create the path while using the standard PATH_MAX
> for path lengths. Ensure there is enough room in imc_counters_config[]
> before attempting to add an entry.
> 
> Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
> Tested-by: Chen Yu <yu.c.chen@intel.com>
> Reviewed-by: Zide Chen <zide.chen@intel.com>

Reviewed-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>

-- 
 i.

> ---
> Changes since v1:
> - Add Zide Chen's RB tag.
> 
> Changes since v2:
> - Update changelog to note merged perf change that supports this change.
> - Use PATH_MAX instead of magic number for path lengths. (Ilpo)
> - Rename "org_count" -> "orig_count". (Ilpo)
> - Rework flow surrounding fscanf() used in both parse_imc_read_bw_events()
>   and read_from_imc_dir(). (Ilpo)
> - Handle error first to reduce indentation. (Ilpo)
> - Add Chen Yu's tag.
> ---
>  tools/testing/selftests/resctrl/resctrl_val.c | 118 ++++++++++++++----
>  1 file changed, 93 insertions(+), 25 deletions(-)
> 
> diff --git a/tools/testing/selftests/resctrl/resctrl_val.c b/tools/testing/selftests/resctrl/resctrl_val.c
> index 6d766347e3fc..f20d2194c35f 100644
> --- a/tools/testing/selftests/resctrl/resctrl_val.c
> +++ b/tools/testing/selftests/resctrl/resctrl_val.c
> @@ -11,10 +11,10 @@
>  #include "resctrl.h"
>  
>  #define UNCORE_IMC		"uncore_imc"
> -#define READ_FILE_NAME		"events/cas_count_read"
> +#define READ_FILE_NAME		"cas_count_read"
>  #define DYN_PMU_PATH		"/sys/bus/event_source/devices"
>  #define SCALE			0.00006103515625
> -#define MAX_IMCS		20
> +#define MAX_IMCS		40
>  #define MAX_TOKENS		5
>  
>  #define CON_MBM_LOCAL_BYTES_PATH		\
> @@ -109,46 +109,114 @@ static int open_perf_read_event(int i, int cpu_no)
>  	return 0;
>  }
>  
> -/* Get type and config of an iMC counter's read event. */
> -static int read_from_imc_dir(char *imc_dir, unsigned int *count)
> +static int parse_imc_read_bw_events(char *imc_dir, unsigned int type,
> +				    unsigned int *count)
>  {
> -	char cas_count_cfg[1024], imc_counter_cfg[1024], imc_counter_type[1024];
> +	char imc_events_dir[PATH_MAX], imc_counter_cfg[PATH_MAX];
> +	unsigned int orig_count = *count;
> +	char cas_count_cfg[1024];
> +	struct dirent *ep;
> +	int path_len;
> +	int ret = -1;
> +	int num_cfg;
>  	FILE *fp;
> +	DIR *dp;
>  
> -	/* Get type of iMC counter */
> -	sprintf(imc_counter_type, "%s%s", imc_dir, "type");
> -	fp = fopen(imc_counter_type, "r");
> -	if (!fp) {
> -		ksft_perror("Failed to open iMC counter type file");
> +	path_len = snprintf(imc_events_dir, sizeof(imc_events_dir), "%sevents",
> +			    imc_dir);
> +	if (path_len >= sizeof(imc_events_dir)) {
> +		ksft_print_msg("Unable to create path to %sevents\n", imc_dir);
> +		return -1;
> +	}
>  
> +	dp = opendir(imc_events_dir);
> +	if (!dp) {
> +		ksft_perror("Unable to open PMU events directory");
>  		return -1;
>  	}
> -	if (fscanf(fp, "%u", &imc_counters_config[*count].type) <= 0) {
> -		ksft_perror("Could not get iMC type");
> +
> +	while ((ep = readdir(dp))) {
> +		/*
> +		 * Parse all event files with READ_FILE_NAME prefix that
> +		 * contain the event number and umask. Skip files containing
> +		 * "." that contain unused properties of event.
> +		 */
> +		if (!strstr(ep->d_name, READ_FILE_NAME) ||
> +		    strchr(ep->d_name, '.'))
> +			continue;
> +
> +		path_len = snprintf(imc_counter_cfg, sizeof(imc_counter_cfg),
> +				    "%s/%s", imc_events_dir, ep->d_name);
> +		if (path_len >= sizeof(imc_counter_cfg)) {
> +			ksft_print_msg("Unable to create path to %s/%s\n",
> +				       imc_events_dir, ep->d_name);
> +			goto out_close;
> +		}
> +		fp = fopen(imc_counter_cfg, "r");
> +		if (!fp) {
> +			ksft_perror("Failed to open iMC config file");
> +			goto out_close;
> +		}
> +		num_cfg = fscanf(fp, "%1023s", cas_count_cfg);
>  		fclose(fp);
> +		if (num_cfg <= 0) {
> +			ksft_perror("Could not get iMC cas count read");
> +			goto out_close;
> +		}
> +		if (*count >= MAX_IMCS) {
> +			ksft_print_msg("Maximum iMC count exceeded\n");
> +			goto out_close;
> +		}
>  
> -		return -1;
> +		imc_counters_config[*count].type = type;
> +		get_read_event_and_umask(cas_count_cfg, *count);
> +		/* Do not fail after incrementing *count. */
> +		*count += 1;
>  	}
> -	fclose(fp);
> +	if (*count == orig_count) {
> +		ksft_print_msg("Unable to find events in %s\n", imc_events_dir);
> +		goto out_close;
> +	}
> +	ret = 0;
> +out_close:
> +	closedir(dp);
> +	return ret;
> +}
>  
> -	/* Get read config */
> -	sprintf(imc_counter_cfg, "%s%s", imc_dir, READ_FILE_NAME);
> -	fp = fopen(imc_counter_cfg, "r");
> -	if (!fp) {
> -		ksft_perror("Failed to open iMC config file");
> +/* Get type and config of an iMC counter's read event. */
> +static int read_from_imc_dir(char *imc_dir, unsigned int *count)
> +{
> +	char imc_counter_type[PATH_MAX];
> +	unsigned int type;
> +	int path_len;
> +	FILE *fp;
> +	int ret;
>  
> +	/* Get type of iMC counter */
> +	path_len = snprintf(imc_counter_type, sizeof(imc_counter_type),
> +			    "%s%s", imc_dir, "type");
> +	if (path_len >= sizeof(imc_counter_type)) {
> +		ksft_print_msg("Unable to create path to %s%s\n",
> +			       imc_dir, "type");
>  		return -1;
>  	}
> -	if (fscanf(fp, "%1023s", cas_count_cfg) <= 0) {
> -		ksft_perror("Could not get iMC cas count read");
> -		fclose(fp);
> +	fp = fopen(imc_counter_type, "r");
> +	if (!fp) {
> +		ksft_perror("Failed to open iMC counter type file");
>  
>  		return -1;
>  	}
> +	ret = fscanf(fp, "%u", &type);
>  	fclose(fp);
> -
> -	get_read_event_and_umask(cas_count_cfg, *count);
> -	*count += 1;
> +	if (ret <= 0) {
> +		ksft_perror("Could not get iMC type");
> +		return -1;
> +	}
> +	ret = parse_imc_read_bw_events(imc_dir, type, count);
> +	if (ret) {
> +		ksft_print_msg("Unable to parse bandwidth event and umask\n");
> +		return ret;
> +	}
>  
>  	return 0;
>  }
> 

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v3 06/10] selftests/resctrl: Increase size of buffer used in MBM and MBA tests
  2026-03-13 20:32 ` [PATCH v3 06/10] selftests/resctrl: Increase size of buffer used in MBM and MBA tests Reinette Chatre
@ 2026-03-27 17:30   ` Ilpo Järvinen
  0 siblings, 0 replies; 28+ messages in thread
From: Ilpo Järvinen @ 2026-03-27 17:30 UTC (permalink / raw)
  To: Reinette Chatre
  Cc: shuah, Dave.Martin, james.morse, tony.luck, babu.moger, fenghuay,
	peternewman, zide.chen, dapeng1.mi, ben.horgan, yu.c.chen,
	jason.zeng, linux-kselftest, LKML, patches

On Fri, 13 Mar 2026, Reinette Chatre wrote:

> Errata for Sierra Forest [1] (SRF42) and Granite Rapids [2] (GNR12)
> describe the problem that MBM on Intel RDT may overcount memory bandwidth
> measurements. The resctrl tests compare memory bandwidth reported by iMC
> PMU to that reported by MBM causing the tests to fail on these systems
> depending on the settings of the platform related to the errata.
> 
> Since the resctrl tests need to run under various conditions, it is not
> possible to ensure system settings are such that MBM will not overcount.
> It has been observed that the overcounting can be controlled via the
> buffer size used in the MBM and MBA tests that rely on comparisons
> between iMC PMU and MBM measurements.
> 
> Running the MBM test on affected platforms with different buffer sizes
> shows that the difference between iMC PMU and MBM counts decreases as the
> buffer size increases. After increasing the buffer size to more than 4X,
> the differences between iMC PMU and MBM become insignificant.
> 
> Increase the buffer size used in MBM and MBA tests to 4X the L3 size to
> reduce the possibility of tests failing due to differences in the counts
> reported by the iMC PMU and MBM.
> 
> Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
> Tested-by: Chen Yu <yu.c.chen@intel.com>
> Link: https://edc.intel.com/content/www/us/en/design/products-and-solutions/processors-and-chipsets/sierra-forest/xeon-6700-series-processor-with-e-cores-specification-update/errata-details/ # [1]
> Link: https://edc.intel.com/content/www/us/en/design/products-and-solutions/processors-and-chipsets/birch-stream/xeon-6900-6700-6500-series-processors-with-p-cores-specification-update/011US/errata-details/ # [2]
> ---
> Changes since v2:
> - Add Chen Yu's tag.
> ---
>  tools/testing/selftests/resctrl/fill_buf.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/testing/selftests/resctrl/fill_buf.c b/tools/testing/selftests/resctrl/fill_buf.c
> index 19a01a52dc1a..b9fa7968cd6e 100644
> --- a/tools/testing/selftests/resctrl/fill_buf.c
> +++ b/tools/testing/selftests/resctrl/fill_buf.c
> @@ -139,6 +139,6 @@ ssize_t get_fill_buf_size(int cpu_no, const char *cache_type)
>  	if (ret)
>  		return ret;
>  
> -	return cache_total_size * 2 > MINIMUM_SPAN ?
> -			cache_total_size * 2 : MINIMUM_SPAN;
> +	return cache_total_size * 4 > MINIMUM_SPAN ?
> +			cache_total_size * 4 : MINIMUM_SPAN;
>  }
> 

Reviewed-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>

-- 
 i.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v3 07/10] selftests/resctrl: Raise threshold at which MBM and PMU values are compared
  2026-03-13 20:32 ` [PATCH v3 07/10] selftests/resctrl: Raise threshold at which MBM and PMU values are compared Reinette Chatre
@ 2026-03-27 17:34   ` Ilpo Järvinen
  2026-03-27 23:19     ` Reinette Chatre
  0 siblings, 1 reply; 28+ messages in thread
From: Ilpo Järvinen @ 2026-03-27 17:34 UTC (permalink / raw)
  To: Reinette Chatre
  Cc: shuah, Dave.Martin, james.morse, tony.luck, babu.moger,
	ilpo.jarvinen, fenghuay, peternewman, zide.chen, dapeng1.mi,
	ben.horgan, yu.c.chen, jason.zeng, linux-kselftest, linux-kernel,
	patches

On Fri, 13 Mar 2026, Reinette Chatre wrote:

> commit 501cfdba0a40 ("selftests/resctrl: Do not compare performance

Should start with a capital letter.

> counters and resctrl at low bandwidth") introduced a threshold under which
> memory bandwidth values from MBM and performance counters are not compared.
> This is needed because MBM and the PMUs do not have an identical view of
> memory bandwidth since PMUs can count all memory traffic while MBM does not
> count "overhead" (for example RAS) traffic that cannot be attributed to an
> RMID. As a ratio, this difference in view of memory bandwidth is most
> pronounced at low memory bandwidths.
> 
> The 750MiB threshold was chosen arbitrarily after comparisons on different
> platforms. Exposed to more platforms after introduction this threshold has
> proven to be inadequate.
> 
> Having accurate comparison between performance counters and MBM requires
> careful management of system load as well as control of features that
> introduce extra memory traffic, for example, patrol scrub. This is not
> appropriate for the resctrl selftests that are intended to run on a
> variety of systems with various configurations.
> 
> Increase the memory bandwidth threshold under which no comparison is made
> between performance counters and MBM. Add additional leniency by increasing
> the percentage of difference that will be tolerated between these counts.
> 
> There is no impact to the validity of the resctrl selftests results as a
> measure of resctrl subsystem health.
> 
> Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
> Tested-by: Chen Yu <yu.c.chen@intel.com>
> ---
> Changes since v2:
> - Add Chen Yu's tag.
> ---
>  tools/testing/selftests/resctrl/mba_test.c | 2 +-
>  tools/testing/selftests/resctrl/mbm_test.c | 2 +-
>  tools/testing/selftests/resctrl/resctrl.h  | 2 +-
>  3 files changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/tools/testing/selftests/resctrl/mba_test.c b/tools/testing/selftests/resctrl/mba_test.c
> index cd4c715b7ffd..39cee9898359 100644
> --- a/tools/testing/selftests/resctrl/mba_test.c
> +++ b/tools/testing/selftests/resctrl/mba_test.c
> @@ -12,7 +12,7 @@
>  
>  #define RESULT_FILE_NAME	"result_mba"
>  #define NUM_OF_RUNS		5
> -#define MAX_DIFF_PERCENT	8
> +#define MAX_DIFF_PERCENT	15
>  #define ALLOCATION_MAX		100
>  #define ALLOCATION_MIN		10
>  #define ALLOCATION_STEP		10
> diff --git a/tools/testing/selftests/resctrl/mbm_test.c b/tools/testing/selftests/resctrl/mbm_test.c
> index 58201f844740..6dbbc3b76003 100644
> --- a/tools/testing/selftests/resctrl/mbm_test.c
> +++ b/tools/testing/selftests/resctrl/mbm_test.c
> @@ -11,7 +11,7 @@
>  #include "resctrl.h"
>  
>  #define RESULT_FILE_NAME	"result_mbm"
> -#define MAX_DIFF_PERCENT	8
> +#define MAX_DIFF_PERCENT	15
>  #define NUM_OF_RUNS		5
>  
>  static int
> diff --git a/tools/testing/selftests/resctrl/resctrl.h b/tools/testing/selftests/resctrl/resctrl.h
> index 7f2ab28be857..3bad2d80c09b 100644
> --- a/tools/testing/selftests/resctrl/resctrl.h
> +++ b/tools/testing/selftests/resctrl/resctrl.h
> @@ -55,7 +55,7 @@
>   * and MBM respectively, for instance generating "overhead" traffic which
>   * is not counted against any specific RMID.
>   */
> -#define THROTTLE_THRESHOLD	750
> +#define THROTTLE_THRESHOLD	2500
>  
>  /*
>   * fill_buf_param:	"fill_buf" benchmark parameters
> 

Reviewed-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>

-- 
 i.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v3 08/10] selftests/resctrl: Remove requirement on cache miss rate
  2026-03-13 20:32 ` [PATCH v3 08/10] selftests/resctrl: Remove requirement on cache miss rate Reinette Chatre
@ 2026-03-27 17:45   ` Ilpo Järvinen
  2026-03-27 23:21     ` Reinette Chatre
  0 siblings, 1 reply; 28+ messages in thread
From: Ilpo Järvinen @ 2026-03-27 17:45 UTC (permalink / raw)
  To: Reinette Chatre
  Cc: shuah, Dave.Martin, james.morse, tony.luck, babu.moger, fenghuay,
	peternewman, zide.chen, dapeng1.mi, ben.horgan, yu.c.chen,
	jason.zeng, linux-kselftest, LKML, patches

On Fri, 13 Mar 2026, Reinette Chatre wrote:

> As the CAT test reads the same buffer into different-sized cache portions,
> it compares the number of cache misses against an expected percentage
> based on the size of the cache portion.
> 
> Systems and test conditions vary. The CAT test is a test of resctrl
> subsystem health and not a test of the hardware architecture, so it need
> not place requirements on the size of the difference in cache misses,
> just that the number of cache misses when reading a buffer increases as
> the cache portion used for the buffer decreases.
> 
> Remove the additional constraint on how big the difference between cache
> misses should be as the cache portion size changes. Only test that the
> cache misses increase as the cache portion size decreases. This remains
> a good sanity check of resctrl subsystem health while reducing the impact
> of hardware architectural differences and the various conditions under
> which the test may run.
> 
> Increase the size difference between cache portions to additionally avoid
> any consequences resulting from smaller increments.
> 
> Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
> Tested-by: Chen Yu <yu.c.chen@intel.com>
> ---
> Changes since v2:
> - Add Chen Yu's tag.
> ---
>  tools/testing/selftests/resctrl/cat_test.c | 33 ++++------------------
>  1 file changed, 5 insertions(+), 28 deletions(-)
> 
> diff --git a/tools/testing/selftests/resctrl/cat_test.c b/tools/testing/selftests/resctrl/cat_test.c
> index f00b622c1460..8bc47f06679a 100644
> --- a/tools/testing/selftests/resctrl/cat_test.c
> +++ b/tools/testing/selftests/resctrl/cat_test.c
> @@ -14,42 +14,20 @@
>  #define RESULT_FILE_NAME	"result_cat"
>  #define NUM_OF_RUNS		5
>  
> -/*
> - * Minimum difference in LLC misses between a test with n+1 bits CBM to the
> - * test with n bits is MIN_DIFF_PERCENT_PER_BIT * (n - 1). With e.g. 5 vs 4
> - * bits in the CBM mask, the minimum difference must be at least
> - * MIN_DIFF_PERCENT_PER_BIT * (4 - 1) = 3 percent.
> - *
> - * The relationship between number of used CBM bits and difference in LLC
> - * misses is not expected to be linear. With a small number of bits, the
> - * margin is smaller than with larger number of bits. For selftest purposes,
> - * however, linear approach is enough because ultimately only pass/fail
> - * decision has to be made and distinction between strong and stronger
> - * signal is irrelevant.
> - */
> -#define MIN_DIFF_PERCENT_PER_BIT	1UL
> -
>  static int show_results_info(__u64 sum_llc_val, int no_of_bits,
>  			     unsigned long cache_span,
> -			     unsigned long min_diff_percent,
>  			     unsigned long num_of_runs, bool platform,
>  			     __s64 *prev_avg_llc_val)
>  {
>  	__u64 avg_llc_val = 0;
> -	float avg_diff;
>  	int ret = 0;
>  
>  	avg_llc_val = sum_llc_val / num_of_runs;
>  	if (*prev_avg_llc_val) {
> -		float delta = (__s64)(avg_llc_val - *prev_avg_llc_val);
> -
> -		avg_diff = delta / *prev_avg_llc_val;
> -		ret = platform && (avg_diff * 100) < (float)min_diff_percent;
> -
> -		ksft_print_msg("%s Check cache miss rate changed more than %.1f%%\n",
> -			       ret ? "Fail:" : "Pass:", (float)min_diff_percent);
> +		ret = platform && (avg_llc_val < *prev_avg_llc_val);
>  
> -		ksft_print_msg("Percent diff=%.1f\n", avg_diff * 100);
> +		ksft_print_msg("%s Check cache miss rate increased\n",
> +			       ret ? "Fail:" : "Pass:");

While I'm fine with removing the amount of change check, this no longer 
shows any numbers which would be a bit annoying if/when there's a failure.

-- 
 i.

>  	}
>  	*prev_avg_llc_val = avg_llc_val;
>  
> @@ -58,10 +36,10 @@ static int show_results_info(__u64 sum_llc_val, int no_of_bits,
>  	return ret;
>  }
>  
> -/* Remove the highest bit from CBM */
> +/* Remove the highest bits from CBM */
>  static unsigned long next_mask(unsigned long current_mask)
>  {
> -	return current_mask & (current_mask >> 1);
> +	return current_mask & (current_mask >> 2);
>  }
>  
>  static int check_results(struct resctrl_val_param *param, const char *cache_type,
> @@ -112,7 +90,6 @@ static int check_results(struct resctrl_val_param *param, const char *cache_type
>  
>  		ret = show_results_info(sum_llc_perf_miss, bits,
>  					alloc_size / 64,
> -					MIN_DIFF_PERCENT_PER_BIT * (bits - 1),
>  					runs, get_vendor() == ARCH_INTEL,
>  					&prev_avg_llc_val);
>  		if (ret)
> 



* Re: [PATCH v3 09/10] selftests/resctrl: Simplify perf usage in CAT test
  2026-03-13 20:32 ` [PATCH v3 09/10] selftests/resctrl: Simplify perf usage in CAT test Reinette Chatre
@ 2026-03-27 17:47   ` Ilpo Järvinen
  0 siblings, 0 replies; 28+ messages in thread
From: Ilpo Järvinen @ 2026-03-27 17:47 UTC (permalink / raw)
  To: Reinette Chatre
  Cc: shuah, Dave.Martin, james.morse, tony.luck, babu.moger, fenghuay,
	peternewman, zide.chen, dapeng1.mi, ben.horgan, yu.c.chen,
	jason.zeng, linux-kselftest, LKML, patches


On Fri, 13 Mar 2026, Reinette Chatre wrote:

> The CAT test relies on the PERF_COUNT_HW_CACHE_MISSES event to determine if
> modifying a cache portion size is successful. This event is configured to
> report the data as part of an event group, but no other events are added to
> the group.
> 
> Remove the unnecessary PERF_FORMAT_GROUP format setting. This eliminates
> the need for struct perf_event_read and results in read() of the associated
> file descriptor returning just one value: that of the
> PERF_COUNT_HW_CACHE_MISSES event of interest.
> 
> Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
> Tested-by: Chen Yu <yu.c.chen@intel.com>
> ---
> Changes since v2:
> - Add Chen Yu's tag.
> ---
>  tools/testing/selftests/resctrl/cache.c    | 17 +++++------------
>  tools/testing/selftests/resctrl/cat_test.c |  4 +---
>  tools/testing/selftests/resctrl/resctrl.h  | 11 +----------
>  3 files changed, 7 insertions(+), 25 deletions(-)
> 
> diff --git a/tools/testing/selftests/resctrl/cache.c b/tools/testing/selftests/resctrl/cache.c
> index bef71b6feacc..df9bea584a2d 100644
> --- a/tools/testing/selftests/resctrl/cache.c
> +++ b/tools/testing/selftests/resctrl/cache.c
> @@ -10,7 +10,6 @@ void perf_event_attr_initialize(struct perf_event_attr *pea, __u64 config)
>  	memset(pea, 0, sizeof(*pea));
>  	pea->type = PERF_TYPE_HARDWARE;
>  	pea->size = sizeof(*pea);
> -	pea->read_format = PERF_FORMAT_GROUP;
>  	pea->exclude_kernel = 1;
>  	pea->exclude_hv = 1;
>  	pea->exclude_idle = 1;
> @@ -37,19 +36,13 @@ int perf_event_reset_enable(int pe_fd)
>  	return 0;
>  }
>  
> -void perf_event_initialize_read_format(struct perf_event_read *pe_read)
> -{
> -	memset(pe_read, 0, sizeof(*pe_read));
> -	pe_read->nr = 1;
> -}
> -
>  int perf_open(struct perf_event_attr *pea, pid_t pid, int cpu_no)
>  {
>  	int pe_fd;
>  
>  	pe_fd = perf_event_open(pea, pid, cpu_no, -1, PERF_FLAG_FD_CLOEXEC);
>  	if (pe_fd == -1) {
> -		ksft_perror("Error opening leader");
> +		ksft_perror("Unable to set up performance monitoring");
>  		return -1;
>  	}
>  
> @@ -132,9 +125,9 @@ static int print_results_cache(const char *filename, pid_t bm_pid, __u64 llc_val
>   *
>   * Return: =0 on success. <0 on failure.
>   */
> -int perf_event_measure(int pe_fd, struct perf_event_read *pe_read,
> -		       const char *filename, pid_t bm_pid)
> +int perf_event_measure(int pe_fd, const char *filename, pid_t bm_pid)
>  {
> +	__u64 value;
>  	int ret;
>  
>  	/* Stop counters after one span to get miss rate */
> @@ -142,13 +135,13 @@ int perf_event_measure(int pe_fd, struct perf_event_read *pe_read,
>  	if (ret < 0)
>  		return ret;
>  
> -	ret = read(pe_fd, pe_read, sizeof(*pe_read));
> +	ret = read(pe_fd, &value, sizeof(value));
>  	if (ret == -1) {
>  		ksft_perror("Could not get perf value");
>  		return -1;
>  	}
>  
> -	return print_results_cache(filename, bm_pid, pe_read->values[0].value);
> +	return print_results_cache(filename, bm_pid, value);
>  }
>  
>  /*
> diff --git a/tools/testing/selftests/resctrl/cat_test.c b/tools/testing/selftests/resctrl/cat_test.c
> index 8bc47f06679a..6aac03147d41 100644
> --- a/tools/testing/selftests/resctrl/cat_test.c
> +++ b/tools/testing/selftests/resctrl/cat_test.c
> @@ -135,7 +135,6 @@ static int cat_test(const struct resctrl_test *test,
>  		    struct resctrl_val_param *param,
>  		    size_t span, unsigned long current_mask)
>  {
> -	struct perf_event_read pe_read;
>  	struct perf_event_attr pea;
>  	cpu_set_t old_affinity;
>  	unsigned char *buf;
> @@ -159,7 +158,6 @@ static int cat_test(const struct resctrl_test *test,
>  		goto reset_affinity;
>  
>  	perf_event_attr_initialize(&pea, PERF_COUNT_HW_CACHE_MISSES);
> -	perf_event_initialize_read_format(&pe_read);
>  	pe_fd = perf_open(&pea, bm_pid, uparams->cpu);
>  	if (pe_fd < 0) {
>  		ret = -1;
> @@ -192,7 +190,7 @@ static int cat_test(const struct resctrl_test *test,
>  
>  			fill_cache_read(buf, span, true);
>  
> -			ret = perf_event_measure(pe_fd, &pe_read, param->filename, bm_pid);
> +			ret = perf_event_measure(pe_fd, param->filename, bm_pid);
>  			if (ret)
>  				goto free_buf;
>  		}
> diff --git a/tools/testing/selftests/resctrl/resctrl.h b/tools/testing/selftests/resctrl/resctrl.h
> index 3bad2d80c09b..175101022bf3 100644
> --- a/tools/testing/selftests/resctrl/resctrl.h
> +++ b/tools/testing/selftests/resctrl/resctrl.h
> @@ -148,13 +148,6 @@ struct resctrl_val_param {
>  	struct fill_buf_param	*fill_buf;
>  };
>  
> -struct perf_event_read {
> -	__u64 nr;			/* The number of events */
> -	struct {
> -		__u64 value;		/* The value of the event */
> -	} values[2];
> -};
> -
>  /*
>   * Memory location that consumes values compiler must not optimize away.
>   * Volatile ensures writes to this location cannot be optimized away by
> @@ -210,11 +203,9 @@ unsigned int count_bits(unsigned long n);
>  int snc_kernel_support(void);
>  
>  void perf_event_attr_initialize(struct perf_event_attr *pea, __u64 config);
> -void perf_event_initialize_read_format(struct perf_event_read *pe_read);
>  int perf_open(struct perf_event_attr *pea, pid_t pid, int cpu_no);
>  int perf_event_reset_enable(int pe_fd);
> -int perf_event_measure(int pe_fd, struct perf_event_read *pe_read,
> -		       const char *filename, pid_t bm_pid);
> +int perf_event_measure(int pe_fd, const char *filename, pid_t bm_pid);
>  int measure_llc_resctrl(const char *filename, pid_t bm_pid);
>  int minimize_l2_occupancy(const struct resctrl_test *test,
>  			  const struct user_params *uparams,
> 

Reviewed-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>


-- 
 i.


* Re: [PATCH v3 10/10] selftests/resctrl: Reduce L2 impact on CAT test
  2026-03-13 20:32 ` [PATCH v3 10/10] selftests/resctrl: Reduce L2 impact on " Reinette Chatre
@ 2026-03-27 17:49   ` Ilpo Järvinen
  2026-03-27 23:22     ` Reinette Chatre
  0 siblings, 1 reply; 28+ messages in thread
From: Ilpo Järvinen @ 2026-03-27 17:49 UTC (permalink / raw)
  To: Reinette Chatre
  Cc: shuah, Dave.Martin, james.morse, tony.luck, babu.moger, fenghuay,
	peternewman, zide.chen, dapeng1.mi, ben.horgan, yu.c.chen,
	jason.zeng, linux-kselftest, LKML, patches


On Fri, 13 Mar 2026, Reinette Chatre wrote:

> The L3 CAT test loads a buffer into cache that is proportional to the L3
> size allocated for the workload and measures cache misses when accessing
> the buffer as a test of L3 occupancy. When loading the buffer it can be
> assumed that a portion of the buffer will be loaded into the L2 cache and
> depending on cache design may not be present in L3. It is thus possible
> for data to not be in L3 but also not trigger an L3 cache miss when
> accessed.
> 
> Reduce the impact of L2 on the L3 CAT test by, if L2 allocation is
> supported, minimizing the portion of L2 that the workload can allocate
> into. This encourages most of the buffer to be loaded into L3 and supports
> better comparison between buffer size, cache portion, and cache misses
> when accessing the buffer.
> 
> Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
> Tested-by: Chen Yu <yu.c.chen@intel.com>
> ---
> Changes since v2:
> - Add Chen Yu's tag.
> ---
>  tools/testing/selftests/resctrl/cat_test.c | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/tools/testing/selftests/resctrl/cat_test.c b/tools/testing/selftests/resctrl/cat_test.c
> index 6aac03147d41..371a2f26dc47 100644
> --- a/tools/testing/selftests/resctrl/cat_test.c
> +++ b/tools/testing/selftests/resctrl/cat_test.c
> @@ -157,6 +157,10 @@ static int cat_test(const struct resctrl_test *test,
>  	if (ret)
>  		goto reset_affinity;
>  
> +	ret = minimize_l2_occupancy(test, uparams, param);
> +	if (ret)
> +		goto reset_affinity;
> +
>  	perf_event_attr_initialize(&pea, PERF_COUNT_HW_CACHE_MISSES);
>  	pe_fd = perf_open(&pea, bm_pid, uparams->cpu);
>  	if (pe_fd < 0) {
> 

Reviewed-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>

-- 
 i.


* Re: [PATCH v3 07/10] selftests/resctrl: Raise threshold at which MBM and PMU values are compared
  2026-03-27 17:34   ` Ilpo Järvinen
@ 2026-03-27 23:19     ` Reinette Chatre
  0 siblings, 0 replies; 28+ messages in thread
From: Reinette Chatre @ 2026-03-27 23:19 UTC (permalink / raw)
  To: Ilpo Järvinen
  Cc: shuah, Dave.Martin, james.morse, tony.luck, babu.moger, fenghuay,
	peternewman, zide.chen, dapeng1.mi, ben.horgan, yu.c.chen,
	jason.zeng, linux-kselftest, linux-kernel, patches

Hi Ilpo,

On 3/27/26 10:34 AM, Ilpo Järvinen wrote:
> On Fri, 13 Mar 2026, Reinette Chatre wrote:
> 
>> commit 501cfdba0a40 ("selftests/resctrl: Do not compare performance
> 
> Should start with a capital letter.

Will do.

>>
> 
> Reviewed-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
> 

Thank you very much.

Reinette


* Re: [PATCH v3 08/10] selftests/resctrl: Remove requirement on cache miss rate
  2026-03-27 17:45   ` Ilpo Järvinen
@ 2026-03-27 23:21     ` Reinette Chatre
  2026-03-31  8:07       ` Ilpo Järvinen
  0 siblings, 1 reply; 28+ messages in thread
From: Reinette Chatre @ 2026-03-27 23:21 UTC (permalink / raw)
  To: Ilpo Järvinen
  Cc: shuah, Dave.Martin, james.morse, tony.luck, babu.moger, fenghuay,
	peternewman, zide.chen, dapeng1.mi, ben.horgan, yu.c.chen,
	jason.zeng, linux-kselftest, LKML, patches

Hi Ilpo,

On 3/27/26 10:45 AM, Ilpo Järvinen wrote:
> On Fri, 13 Mar 2026, Reinette Chatre wrote:
>> -/*
>> - * Minimum difference in LLC misses between a test with n+1 bits CBM to the
>> - * test with n bits is MIN_DIFF_PERCENT_PER_BIT * (n - 1). With e.g. 5 vs 4
>> - * bits in the CBM mask, the minimum difference must be at least
>> - * MIN_DIFF_PERCENT_PER_BIT * (4 - 1) = 3 percent.
>> - *
>> - * The relationship between number of used CBM bits and difference in LLC
>> - * misses is not expected to be linear. With a small number of bits, the
>> - * margin is smaller than with larger number of bits. For selftest purposes,
>> - * however, linear approach is enough because ultimately only pass/fail
>> - * decision has to be made and distinction between strong and stronger
>> - * signal is irrelevant.
>> - */
>> -#define MIN_DIFF_PERCENT_PER_BIT	1UL
>> -
>>  static int show_results_info(__u64 sum_llc_val, int no_of_bits,
>>  			     unsigned long cache_span,
>> -			     unsigned long min_diff_percent,
>>  			     unsigned long num_of_runs, bool platform,
>>  			     __s64 *prev_avg_llc_val)
>>  {
>>  	__u64 avg_llc_val = 0;
>> -	float avg_diff;
>>  	int ret = 0;
>>  
>>  	avg_llc_val = sum_llc_val / num_of_runs;
>>  	if (*prev_avg_llc_val) {
>> -		float delta = (__s64)(avg_llc_val - *prev_avg_llc_val);
>> -
>> -		avg_diff = delta / *prev_avg_llc_val;
>> -		ret = platform && (avg_diff * 100) < (float)min_diff_percent;
>> -
>> -		ksft_print_msg("%s Check cache miss rate changed more than %.1f%%\n",
>> -			       ret ? "Fail:" : "Pass:", (float)min_diff_percent);
>> +		ret = platform && (avg_llc_val < *prev_avg_llc_val);
>>  
>> -		ksft_print_msg("Percent diff=%.1f\n", avg_diff * 100);
>> +		ksft_print_msg("%s Check cache miss rate increased\n",
>> +			       ret ? "Fail:" : "Pass:");
> 
> While I'm fine with removing the amount of change check, this no longer 
> shows any numbers which would be a bit annoying if/when there's a failure.
> 

This snippet only removes the display of the number that is no longer computed ("avg_diff").
The values that are compared now, avg_llc_val and its previous value, are printed
in the call to show_cache_info() that follows this snippet but is not visible in the diff.
 
Below is an example of what a user running the CAT test will see after these changes.
Since show_cache_info() always prints avg_llc_val, the user can gain insight into a
failure by comparing it with its previous measurement.

# Starting L3_CAT test ...
# Mounting resctrl to "/sys/fs/resctrl"
# Cache size :117964800
# Writing benchmark parameters to resctrl FS
# Write schema "L2:1=0x1" to resctrl FS
# Write schema "L3:0=1fc0" to resctrl FS
# Write schema "L3:0=3f" to resctrl FS
# Write schema "L3:0=1ff0" to resctrl FS
# Write schema "L3:0=f" to resctrl FS
# Write schema "L3:0=1ffc" to resctrl FS
# Write schema "L3:0=3" to resctrl FS
# Checking for pass/fail
# Number of bits: 6
# Average LLC val: 445092
# Cache span (lines): 737280
# Pass: Check cache miss rate increased
# Number of bits: 4
# Average LLC val: 724472
# Cache span (lines): 491520
# Pass: Check cache miss rate increased
# Number of bits: 2
# Average LLC val: 1085470
# Cache span (lines): 245760
ok 4 L3_CAT: test

Reinette



* Re: [PATCH v3 10/10] selftests/resctrl: Reduce L2 impact on CAT test
  2026-03-27 17:49   ` Ilpo Järvinen
@ 2026-03-27 23:22     ` Reinette Chatre
  0 siblings, 0 replies; 28+ messages in thread
From: Reinette Chatre @ 2026-03-27 23:22 UTC (permalink / raw)
  To: Ilpo Järvinen
  Cc: shuah, Dave.Martin, james.morse, tony.luck, babu.moger, fenghuay,
	peternewman, zide.chen, dapeng1.mi, ben.horgan, yu.c.chen,
	jason.zeng, linux-kselftest, LKML, patches

Hi Ilpo,

On 3/27/26 10:49 AM, Ilpo Järvinen wrote:
> On Fri, 13 Mar 2026, Reinette Chatre wrote:
> 
>> The L3 CAT test loads a buffer into cache that is proportional to the L3
>> size allocated for the workload and measures cache misses when accessing
>> the buffer as a test of L3 occupancy. When loading the buffer it can be
>> assumed that a portion of the buffer will be loaded into the L2 cache and
>> depending on cache design may not be present in L3. It is thus possible
>> for data to not be in L3 but also not trigger an L3 cache miss when
>> accessed.
>>
>> Reduce the impact of L2 on the L3 CAT test by, if L2 allocation is
>> supported, minimizing the portion of L2 that the workload can allocate
>> into. This encourages most of the buffer to be loaded into L3 and supports
>> better comparison between buffer size, cache portion, and cache misses
>> when accessing the buffer.
>>
>> Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
>> Tested-by: Chen Yu <yu.c.chen@intel.com>
>> ---
>> Changes since v2:
>> - Add Chen Yu's tag.
>> ---
>>  tools/testing/selftests/resctrl/cat_test.c | 4 ++++
>>  1 file changed, 4 insertions(+)
>>
>> diff --git a/tools/testing/selftests/resctrl/cat_test.c b/tools/testing/selftests/resctrl/cat_test.c
>> index 6aac03147d41..371a2f26dc47 100644
>> --- a/tools/testing/selftests/resctrl/cat_test.c
>> +++ b/tools/testing/selftests/resctrl/cat_test.c
>> @@ -157,6 +157,10 @@ static int cat_test(const struct resctrl_test *test,
>>  	if (ret)
>>  		goto reset_affinity;
>>  
>> +	ret = minimize_l2_occupancy(test, uparams, param);
>> +	if (ret)
>> +		goto reset_affinity;
>> +
>>  	perf_event_attr_initialize(&pea, PERF_COUNT_HW_CACHE_MISSES);
>>  	pe_fd = perf_open(&pea, bm_pid, uparams->cpu);
>>  	if (pe_fd < 0) {
>>
> 
> Reviewed-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
> 

Thank you very much for this and all the other reviews. Much appreciated.

Reinette



* Re: [PATCH v3 08/10] selftests/resctrl: Remove requirement on cache miss rate
  2026-03-27 23:21     ` Reinette Chatre
@ 2026-03-31  8:07       ` Ilpo Järvinen
  2026-03-31 17:39         ` Reinette Chatre
  0 siblings, 1 reply; 28+ messages in thread
From: Ilpo Järvinen @ 2026-03-31  8:07 UTC (permalink / raw)
  To: Reinette Chatre
  Cc: shuah, Dave.Martin, james.morse, tony.luck, babu.moger, fenghuay,
	peternewman, zide.chen, dapeng1.mi, ben.horgan, yu.c.chen,
	jason.zeng, linux-kselftest, LKML, patches


On Fri, 27 Mar 2026, Reinette Chatre wrote:

> Hi Ilpo,
> 
> On 3/27/26 10:45 AM, Ilpo Järvinen wrote:
> > On Fri, 13 Mar 2026, Reinette Chatre wrote:
> >> -/*
> >> - * Minimum difference in LLC misses between a test with n+1 bits CBM to the
> >> - * test with n bits is MIN_DIFF_PERCENT_PER_BIT * (n - 1). With e.g. 5 vs 4
> >> - * bits in the CBM mask, the minimum difference must be at least
> >> - * MIN_DIFF_PERCENT_PER_BIT * (4 - 1) = 3 percent.
> >> - *
> >> - * The relationship between number of used CBM bits and difference in LLC
> >> - * misses is not expected to be linear. With a small number of bits, the
> >> - * margin is smaller than with larger number of bits. For selftest purposes,
> >> - * however, linear approach is enough because ultimately only pass/fail
> >> - * decision has to be made and distinction between strong and stronger
> >> - * signal is irrelevant.
> >> - */
> >> -#define MIN_DIFF_PERCENT_PER_BIT	1UL
> >> -
> >>  static int show_results_info(__u64 sum_llc_val, int no_of_bits,
> >>  			     unsigned long cache_span,
> >> -			     unsigned long min_diff_percent,
> >>  			     unsigned long num_of_runs, bool platform,
> >>  			     __s64 *prev_avg_llc_val)
> >>  {
> >>  	__u64 avg_llc_val = 0;
> >> -	float avg_diff;
> >>  	int ret = 0;
> >>  
> >>  	avg_llc_val = sum_llc_val / num_of_runs;
> >>  	if (*prev_avg_llc_val) {
> >> -		float delta = (__s64)(avg_llc_val - *prev_avg_llc_val);
> >> -
> >> -		avg_diff = delta / *prev_avg_llc_val;
> >> -		ret = platform && (avg_diff * 100) < (float)min_diff_percent;
> >> -
> >> -		ksft_print_msg("%s Check cache miss rate changed more than %.1f%%\n",
> >> -			       ret ? "Fail:" : "Pass:", (float)min_diff_percent);
> >> +		ret = platform && (avg_llc_val < *prev_avg_llc_val);
> >>  
> >> -		ksft_print_msg("Percent diff=%.1f\n", avg_diff * 100);
> >> +		ksft_print_msg("%s Check cache miss rate increased\n",
> >> +			       ret ? "Fail:" : "Pass:");
> > 
> > While I'm fine with removing the amount of change check, this no longer 
> > shows any numbers which would be a bit annoying if/when there's a failure.
> > 
> 
> This snippet only removes display of the number that is no longer computed ("avg_diff").
> The values that are compared now, avg_llc_val and it previous value, are printed
> in the call to show_cache_info() that follows this snippet but is not visible in the diff.
>  
> Below is an example of what a user running the CAT test will see after these changes.
> Since show_cache_info() always prints avg_llc_val the user can obtain insight into failure
> by considering it and its previous measurement.
> 
> # Starting L3_CAT test ...
> # Mounting resctrl to "/sys/fs/resctrl"
> # Cache size :117964800
> # Writing benchmark parameters to resctrl FS
> # Write schema "L2:1=0x1" to resctrl FS
> # Write schema "L3:0=1fc0" to resctrl FS
> # Write schema "L3:0=3f" to resctrl FS
> # Write schema "L3:0=1ff0" to resctrl FS
> # Write schema "L3:0=f" to resctrl FS
> # Write schema "L3:0=1ffc" to resctrl FS
> # Write schema "L3:0=3" to resctrl FS
> # Checking for pass/fail
> # Number of bits: 6
> # Average LLC val: 445092
> # Cache span (lines): 737280
> # Pass: Check cache miss rate increased
> # Number of bits: 4
> # Average LLC val: 724472
> # Cache span (lines): 491520
> # Pass: Check cache miss rate increased
> # Number of bits: 2
> # Average LLC val: 1085470
> # Cache span (lines): 245760
> ok 4 L3_CAT: test

Okay, I didn't remember there was another place printing the numbers.
No problem with this then,

Reviewed-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>

-- 
 i.


* Re: [PATCH v3 08/10] selftests/resctrl: Remove requirement on cache miss rate
  2026-03-31  8:07       ` Ilpo Järvinen
@ 2026-03-31 17:39         ` Reinette Chatre
  0 siblings, 0 replies; 28+ messages in thread
From: Reinette Chatre @ 2026-03-31 17:39 UTC (permalink / raw)
  To: Ilpo Järvinen
  Cc: shuah, Dave.Martin, james.morse, tony.luck, babu.moger, fenghuay,
	peternewman, zide.chen, dapeng1.mi, ben.horgan, yu.c.chen,
	jason.zeng, linux-kselftest, LKML, patches



On 3/31/26 1:07 AM, Ilpo Järvinen wrote:
> On Fri, 27 Mar 2026, Reinette Chatre wrote:

>> Below is an example of what a user running the CAT test will see after these changes.
>> Since show_cache_info() always prints avg_llc_val the user can obtain insight into failure
>> by considering it and its previous measurement.
>>
>> # Starting L3_CAT test ...
>> # Mounting resctrl to "/sys/fs/resctrl"
>> # Cache size :117964800
>> # Writing benchmark parameters to resctrl FS
>> # Write schema "L2:1=0x1" to resctrl FS
>> # Write schema "L3:0=1fc0" to resctrl FS
>> # Write schema "L3:0=3f" to resctrl FS
>> # Write schema "L3:0=1ff0" to resctrl FS
>> # Write schema "L3:0=f" to resctrl FS
>> # Write schema "L3:0=1ffc" to resctrl FS
>> # Write schema "L3:0=3" to resctrl FS
>> # Checking for pass/fail
>> # Number of bits: 6
>> # Average LLC val: 445092
>> # Cache span (lines): 737280
>> # Pass: Check cache miss rate increased
>> # Number of bits: 4
>> # Average LLC val: 724472
>> # Cache span (lines): 491520
>> # Pass: Check cache miss rate increased
>> # Number of bits: 2
>> # Average LLC val: 1085470
>> # Cache span (lines): 245760
>> ok 4 L3_CAT: test
> 
> Okay, I didn't remember there was another place printing the numbers.
> No problem with this then,
> 
> Reviewed-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
> 

Thank you very much Ilpo.

Reinette


* Re: [PATCH v3 00/10] selftests/resctrl: Fixes and improvements focused on Intel platforms
  2026-03-13 20:32 [PATCH v3 00/10] selftests/resctrl: Fixes and improvements focused on Intel platforms Reinette Chatre
                   ` (9 preceding siblings ...)
  2026-03-13 20:32 ` [PATCH v3 10/10] selftests/resctrl: Reduce L2 impact on " Reinette Chatre
@ 2026-03-31 19:13 ` Shuah Khan
  2026-03-31 20:22   ` Reinette Chatre
  10 siblings, 1 reply; 28+ messages in thread
From: Shuah Khan @ 2026-03-31 19:13 UTC (permalink / raw)
  To: Reinette Chatre, shuah, Dave.Martin, james.morse, tony.luck,
	babu.moger, ilpo.jarvinen
  Cc: fenghuay, peternewman, zide.chen, dapeng1.mi, ben.horgan,
	yu.c.chen, jason.zeng, linux-kselftest, linux-kernel, patches,
	Shuah Khan

On 3/13/26 14:32, Reinette Chatre wrote:
> Changes since v2:
> - v2: https://lore.kernel.org/linux-patches/cover.1772582958.git.reinette.chatre@intel.com/
> - Rebased on top of v7.0-rc3.
> - Split "selftests/resctrl: Improve accuracy of cache occupancy test" into
>    changes impacting L3 and L2 respectively. (Ilpo)
> - "long_mask" -> "full_mask", "return_value" -> "measurement", "org_count"
>    -> "orig_count". (Ilpo)
> - Use PATH_MAX where appropriate. (Ilpo)
> - Handle errors first to reduce indentation. (Ilpo)
> - Detailed changes in changelogs.
> - No functional changes since v2. Series tested by running 20 iterations of all
>    tests on Emerald Rapids, Granite Rapids, Sapphire Rapids, Ice Lake, Sierra
>    Forest, and Broadwell.
> 
> Changes since v1:
> - v1: https://lore.kernel.org/lkml/cover.1770406608.git.reinette.chatre@intel.com/
> - The new perf interface that resctrl selftests can utilize has been accepted and
>    merged into v7.0-rc2. This series can thus now be considered for inclusion.
>    For reference,
>    commit 6a8a48644c4b ("perf/x86/intel/uncore: Add per-scheduler IMC CAS count events")
>    The resctrl selftest changes making use of the new perf interface are backward
>    compatible. The selftests do not require a v7.0-rc2 kernel to run but the
>    tests can only pass on recent Intel platforms running v7.0-rc2 or later.
> - Combine the two outstanding resctrl selftest submissions into one series
>    for easier tracking:
>    https://lore.kernel.org/lkml/084e82b5c29d75f16f24af8768d50d39ba0118a5.1769101788.git.reinette.chatre@intel.com/
>    https://lore.kernel.org/lkml/cover.1770406608.git.reinette.chatre@intel.com/
> - Fix typo in changelog of "selftests/resctrl: Improve accuracy of cache
>    occupancy test": "the data my be in L2" -> "the data may be in L2"
> - Add Zide Chen's RB tags.
> 
> Cover letter updated to be accurate wrt perf changes:
> 
> The resctrl selftests fail on recent Intel platforms. Intermittent failures
> in the CAT test and permanent failures of MBM and MBA tests on new platforms
> like Sierra Forest and Granite Rapids.
> 
> The MBM and MBA resctrl selftests both generate memory traffic and compare the
> memory bandwidth measurements between the iMC PMUs and MBM to determine pass or
> fail. Both these tests are failing on recent platforms like Sierra Forest and
> Granite Rapids that have two events that need to be read and combined
> for a total memory bandwidth count instead of the single event available on
> earlier platforms.
> 
> The resctrl selftests prefer to obtain event details via sysfs instead of adding
> model-specific details on which events to read. Enhancements to perf to expose
> the new event details are available since:
>   commit 6a8a48644c4b ("perf/x86/intel/uncore: Add per-scheduler IMC CAS count events")
> This series demonstrates use of the new sysfs interface to perf
> to obtain accurate iMC read memory bandwidth measurements.
> 
> An additional issue with all the tests is that these selftests are in part
> performance tests and determine pass/fail based on performance heuristics
> selected after running the tests on a variety of platforms. When new platforms
> arrive, the previous heuristics may cause the tests to fail. These failures are
> not because of an issue with the resctrl subsystem the tests intend to exercise
> but because of architectural changes in the new platforms.
> 
> Adapt the resctrl tests to not be as sensitive to architectural changes
> while adjusting the remaining heuristics to ensure tests pass on a variety
> of platforms. More details in individual patches.
> 
> Tested by running 100 iterations of all tests on Emerald Rapids, Granite
> Rapids, Sapphire Rapids, Ice Lake, Sierra Forest, and Broadwell.
> 
> Reinette Chatre (10):
>    selftests/resctrl: Improve accuracy of cache occupancy test
>    selftests/resctrl: Reduce interference from L2 occupancy during cache
>      occupancy test
>    selftests/resctrl: Do not store iMC counter value in counter config
>      structure
>    selftests/resctrl: Prepare for parsing multiple events per iMC
>    selftests/resctrl: Support multiple events associated with iMC
>    selftests/resctrl: Increase size of buffer used in MBM and MBA tests
>    selftests/resctrl: Raise threshold at which MBM and PMU values are
>      compared
>    selftests/resctrl: Remove requirement on cache miss rate
>    selftests/resctrl: Simplify perf usage in CAT test
>    selftests/resctrl: Reduce L2 impact on CAT test
> 

Let me know if this series is ready to go into Linux 7.1

thanks,
-- Shuah


* Re: [PATCH v3 00/10] selftests/resctrl: Fixes and improvements focused on Intel platforms
  2026-03-31 19:13 ` [PATCH v3 00/10] selftests/resctrl: Fixes and improvements focused on Intel platforms Shuah Khan
@ 2026-03-31 20:22   ` Reinette Chatre
  0 siblings, 0 replies; 28+ messages in thread
From: Reinette Chatre @ 2026-03-31 20:22 UTC (permalink / raw)
  To: Shuah Khan, shuah, Dave.Martin, james.morse, tony.luck,
	babu.moger, ilpo.jarvinen
  Cc: fenghuay, peternewman, zide.chen, dapeng1.mi, ben.horgan,
	yu.c.chen, jason.zeng, linux-kselftest, linux-kernel, patches

Hi Shuah,

On 3/31/26 12:13 PM, Shuah Khan wrote:
> Let me know if this series is ready to go into Linux 7.1

Thank you very much for checking in on this series. Yes, this is ready to go in at your convenience.

Reinette


end of thread, other threads:[~2026-03-31 20:23 UTC | newest]

Thread overview: 28+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-03-13 20:32 [PATCH v3 00/10] selftests/resctrl: Fixes and improvements focused on Intel platforms Reinette Chatre
2026-03-13 20:32 ` [PATCH v3 01/10] selftests/resctrl: Improve accuracy of cache occupancy test Reinette Chatre
2026-03-26 12:44   ` Ilpo Järvinen
2026-03-13 20:32 ` [PATCH v3 02/10] selftests/resctrl: Reduce interference from L2 occupancy during " Reinette Chatre
2026-03-26 12:56   ` Ilpo Järvinen
2026-03-13 20:32 ` [PATCH v3 03/10] selftests/resctrl: Do not store iMC counter value in counter config structure Reinette Chatre
2026-03-13 20:32 ` [PATCH v3 04/10] selftests/resctrl: Prepare for parsing multiple events per iMC Reinette Chatre
2026-03-26 13:03   ` Ilpo Järvinen
2026-03-26 14:34     ` Reinette Chatre
2026-03-13 20:32 ` [PATCH v3 05/10] selftests/resctrl: Support multiple events associated with iMC Reinette Chatre
2026-03-27 17:28   ` Ilpo Järvinen
2026-03-13 20:32 ` [PATCH v3 06/10] selftests/resctrl: Increase size of buffer used in MBM and MBA tests Reinette Chatre
2026-03-27 17:30   ` Ilpo Järvinen
2026-03-13 20:32 ` [PATCH v3 07/10] selftests/resctrl: Raise threshold at which MBM and PMU values are compared Reinette Chatre
2026-03-27 17:34   ` Ilpo Järvinen
2026-03-27 23:19     ` Reinette Chatre
2026-03-13 20:32 ` [PATCH v3 08/10] selftests/resctrl: Remove requirement on cache miss rate Reinette Chatre
2026-03-27 17:45   ` Ilpo Järvinen
2026-03-27 23:21     ` Reinette Chatre
2026-03-31  8:07       ` Ilpo Järvinen
2026-03-31 17:39         ` Reinette Chatre
2026-03-13 20:32 ` [PATCH v3 09/10] selftests/resctrl: Simplify perf usage in CAT test Reinette Chatre
2026-03-27 17:47   ` Ilpo Järvinen
2026-03-13 20:32 ` [PATCH v3 10/10] selftests/resctrl: Reduce L2 impact on " Reinette Chatre
2026-03-27 17:49   ` Ilpo Järvinen
2026-03-27 23:22     ` Reinette Chatre
2026-03-31 19:13 ` [PATCH v3 00/10] selftests/resctrl: Fixes and improvements focused on Intel platforms Shuah Khan
2026-03-31 20:22   ` Reinette Chatre

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox