From: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
To: igt-dev@lists.freedesktop.org
Cc: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>,
"Matthew Brost" <matthew.brost@intel.com>,
"Maarten Lankhorst" <maarten.lankhorst@linux.intel.com>
Subject: [PATCH i-g-t] tests/intel/xe_evict: Reduce allocations to maximum working set
Date: Fri, 14 Jun 2024 17:30:00 +0200
Message-ID: <20240614153001.9387-1-thomas.hellstrom@linux.intel.com>
The current xe KMD allows a maximum working set of VRAM plus half
of system memory; if the working set is restricted to VRAM only,
it is limited to the VRAM size.
Some subtests attempt to exceed that limit. Detect when that
happens and reduce the working set accordingly.
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
tests/intel/xe_evict.c | 72 ++++++++++++++++++++++++++++++++++--------
1 file changed, 59 insertions(+), 13 deletions(-)
diff --git a/tests/intel/xe_evict.c b/tests/intel/xe_evict.c
index eebdbc84b..af5e5e5b6 100644
--- a/tests/intel/xe_evict.c
+++ b/tests/intel/xe_evict.c
@@ -458,6 +458,33 @@ static uint64_t calc_bo_size(uint64_t vram_size, int mul, int div)
return (ALIGN(vram_size, SZ_256M) * mul) / div; /* small-bar */
}
+static unsigned int working_set(uint64_t vram_size, uint64_t system_size,
+ uint64_t bo_size, unsigned int num_threads,
+ unsigned int flags)
+{
+ uint64_t set_size;
+ uint64_t total_size;
+
+ set_size = (vram_size - 1) / bo_size;
+
+ /*
+ * The working set may also reside in system memory.
+ * Currently system graphics memory is limited to 50% of the total.
+ */
+ if (!(flags & (THREADED | MULTI_VM)))
+ set_size += (system_size / 2) / bo_size;
+
+ /* All bos must fit in memory, assuming no swapping */
+ total_size = ((vram_size - 1) / bo_size + system_size / bo_size) /
+ num_threads;
+
+ if (set_size > total_size)
+ set_size = total_size;
+
+ /* bos are only created on half of the execs. */
+ return set_size * 2;
+}
+
/**
* SUBTEST: evict-%s
* Description: %arg[1] evict test.
@@ -748,6 +775,7 @@ igt_main
{ NULL },
};
uint64_t vram_size;
+ uint64_t system_size;
int fd;
igt_fixture {
@@ -755,14 +783,16 @@ igt_main
igt_require(xe_has_vram(fd));
vram_size = xe_visible_vram_size(fd, 0);
igt_assert(vram_size);
+ system_size = igt_get_avail_ram_mb() << 20;
/* Test requires SRAM to be about as big as VRAM. For example, small-cm creates
* (448 / 2) BOs with a size (1 / 128) of the total VRAM size. For
* simplicity ensure the SRAM size >= VRAM before running this test.
*/
- igt_skip_on_f(igt_get_avail_ram_mb() < (vram_size >> 20),
- "System memory %lu MiB is less than local memory %lu MiB\n",
- igt_get_avail_ram_mb(), vram_size >> 20);
+ igt_skip_on_f(system_size < vram_size,
+ "System memory %llu MiB is less than local memory %llu MiB\n",
+ (unsigned long long)system_size >> 20,
+ (unsigned long long)vram_size >> 20);
xe_for_each_engine(fd, hwe)
if (hwe->engine_class != DRM_XE_ENGINE_CLASS_COPY)
@@ -770,25 +800,41 @@ igt_main
}
for (const struct section *s = sections; s->name; s++) {
- igt_subtest_f("evict-%s", s->name)
- test_evict(fd, hwe, s->n_exec_queues, s->n_execs,
- calc_bo_size(vram_size, s->mul, s->div),
+ igt_subtest_f("evict-%s", s->name) {
+ uint64_t bo_size = calc_bo_size(vram_size, s->mul, s->div);
+ int ws = working_set(vram_size, system_size, bo_size,
+ 1, s->flags);
+
+ igt_debug("Max working set %d n_execs %d\n", ws, s->n_execs);
+ test_evict(fd, hwe, s->n_exec_queues,
+ min(ws, s->n_execs), bo_size,
s->flags, NULL);
+ }
}
for (const struct section_cm *s = sections_cm; s->name; s++) {
- igt_subtest_f("evict-%s", s->name)
- test_evict_cm(fd, hwe, s->n_exec_queues, s->n_execs,
- calc_bo_size(vram_size, s->mul, s->div),
+ igt_subtest_f("evict-%s", s->name) {
+ uint64_t bo_size = calc_bo_size(vram_size, s->mul, s->div);
+ int ws = working_set(vram_size, system_size, bo_size,
+ 1, s->flags);
+
+ igt_debug("Max working set %d n_execs %d\n", ws, s->n_execs);
+ test_evict_cm(fd, hwe, s->n_exec_queues,
+ min(ws, s->n_execs), bo_size,
s->flags, NULL);
+ }
}
for (const struct section_threads *s = sections_threads; s->name; s++) {
- igt_subtest_f("evict-%s", s->name)
+ igt_subtest_f("evict-%s", s->name) {
+ uint64_t bo_size = calc_bo_size(vram_size, s->mul, s->div);
+ int ws = working_set(vram_size, system_size, bo_size,
+ s->n_threads, s->flags);
+
+ igt_debug("Max working set %d n_execs %d\n", ws, s->n_execs);
threads(fd, hwe, s->n_threads, s->n_exec_queues,
- s->n_execs,
- calc_bo_size(vram_size, s->mul, s->div),
- s->flags);
+ min(ws, s->n_execs), bo_size, s->flags);
+ }
}
igt_fixture
--
2.44.0
Thread overview: 8+ messages
2024-06-14 15:30 Thomas Hellström [this message]
2024-06-14 16:45 ` ✓ CI.xeBAT: success for tests/intel/xe_evict: Reduce allocations to maximum working set Patchwork
2024-06-14 16:55 ` ✓ Fi.CI.BAT: " Patchwork
2024-06-15 3:29 ` ✓ CI.xeFULL: " Patchwork
2024-06-17 6:42 ` ✓ Fi.CI.IGT: " Patchwork
2024-06-17 7:14 ` [PATCH i-g-t] " Zbigniew Kempczyński
2024-06-17 7:17 ` Zbigniew Kempczyński
2024-06-17 9:50 ` Thomas Hellström