From: Matthew Auld <matthew.auld@intel.com>
To: Karthik Poosa <karthik.poosa@intel.com>, igt-dev@lists.freedesktop.org
Cc: anshuman.gupta@intel.com, badal.nilawar@intel.com,
riana.tauro@intel.com, rodrigo.vivi@intel.com,
raag.jadav@intel.com
Subject: Re: [PATCH i-g-t v1 1/1] drm/xe/xe_pm: Have high VRAM usage during system suspend
Date: Wed, 1 Apr 2026 11:29:36 +0100
Message-ID: <ea87318d-3405-4083-a5d9-b4a04ce91343@intel.com>
In-Reply-To: <20260331181356.4133309-2-karthik.poosa@intel.com>
On 31/03/2026 19:13, Karthik Poosa wrote:
> Create high VRAM usage by allocating a large BO prior to system suspend.
> This increases eviction time, helping to expose any unknown issues in the
> suspend‑resume flow.
>
> Signed-off-by: Karthik Poosa <karthik.poosa@intel.com>
> ---
> tests/intel/xe_pm.c | 86 +++++++++++++++++++++++++++++++++++++++++++--
> 1 file changed, 84 insertions(+), 2 deletions(-)
>
> diff --git a/tests/intel/xe_pm.c b/tests/intel/xe_pm.c
> index 54f2e9d18..bff3b1cac 100644
> --- a/tests/intel/xe_pm.c
> +++ b/tests/intel/xe_pm.c
> @@ -69,6 +69,8 @@ static pthread_cond_t suspend_cond = PTHREAD_COND_INITIALIZER;
> static pthread_mutex_t child_ready_lock = PTHREAD_MUTEX_INITIALIZER;
> static pthread_cond_t child_ready_cond = PTHREAD_COND_INITIALIZER;
> static bool child_ready = false;
> +uint32_t *map_large_buf;
> +uint64_t buf_size = 0;
>
> typedef struct {
> device_t device;
> @@ -871,6 +873,75 @@ static void i2c_test(device_t device, int sysfs_fd, enum igt_acpi_d_state d_stat
> close(i2c_fd);
> }
>
> +static void alloc_large_buf(int fd_xe)
> +{
> + struct drm_xe_query_mem_regions *mem_regions;
> + uint64_t vram_used_mb = 0, vram_total_mb = 0;
> + struct drm_xe_device_query query = {
> + .extensions = 0,
> + .query = DRM_XE_DEVICE_QUERY_MEM_REGIONS,
> + .size = 0,
> + .data = 0,
> + };
> + uint32_t bo, placement;
> + int i = 0;
> +
> + igt_require(xe_has_vram(fd_xe));
IIUC this is now going to skip the entire subtest on igpu?
> + placement = vram_memory(fd_xe, 0);
> + igt_require_f(placement, "Device doesn't support vram memory region\n");
> +
> + igt_assert_eq(igt_ioctl(fd_xe, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
> + igt_assert_neq(query.size, 0);
> +
> + mem_regions = malloc(query.size);
> + igt_assert(mem_regions);
> +
> + query.data = to_user_pointer(mem_regions);
> + igt_assert_eq(igt_ioctl(fd_xe, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
> +
> + for (i = 0; i < mem_regions->num_mem_regions; i++) {
> + if (mem_regions->mem_regions[i].mem_class == DRM_XE_MEM_REGION_CLASS_VRAM) {
> + vram_used_mb += (mem_regions->mem_regions[i].used / (1024 * 1024));
> + vram_total_mb += (mem_regions->mem_regions[i].total_size / (1024 * 1024));
> + }
Will this be well behaved on multi-tile? Maybe add a break on the first
instance?

Also, will this potentially run into issues with RAM sizing? When we move
stuff out of VRAM we kick it out to RAM, so it all needs to fit there.
> + }
> +
> + igt_debug("Before large_buf alloc vram total %lu MB, used vram_used %lu MB\n", vram_total_mb, vram_used_mb);
> +
> + // Allocate a BO of the size of available free VRAM
> + buf_size = (vram_total_mb-vram_used_mb-1)*1024*1024;
> + buf_size = ALIGN(buf_size, xe_get_default_alignment(fd_xe));
> + igt_debug("Creating large_buf of size %lu MB\n", (buf_size/(1024*1024)));
> + bo = xe_bo_create(fd_xe, 0, buf_size , placement, DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
> + igt_require(bo);
> + map_large_buf = xe_bo_map(fd_xe, bo, buf_size);
> + igt_assert(map_large_buf != MAP_FAILED);
> + memset(map_large_buf, 0, buf_size);
> +
> + for (i = 0; i < buf_size / sizeof(*map_large_buf); i++) {
> + map_large_buf[i] = 0xDEADBEAF;
Is this not going to be too slow with such a massive BO? Also, do we need
non-zero pages for this scenario?

Another option is maybe creating a few hundred small VRAM BOs, and then
triggering suspend. I think that was roughly my original repro. The main
thing is to somehow get a good number of GPU jobs from the suspend, with
the hope that at least one is signalled but not yet freed. There should
be at least one job per BO. A big BO also works though, with roughly one
GPU job per ~8M. If we do go with one big BO, maybe we can make the size
something like ~80%, or perhaps even way smaller? RAM sizing is one
concern, but so is some small allocation triggering eviction before the
suspend kicks in. It might be that going really big doesn't actually
help much with hitting the race.
> + }
> +
> + query.data = to_user_pointer(mem_regions);
> + igt_assert_eq(igt_ioctl(fd_xe, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
> + for (i = 0; i < mem_regions->num_mem_regions; i++) {
> + if (mem_regions->mem_regions[i].mem_class == DRM_XE_MEM_REGION_CLASS_VRAM) {
> + vram_used_mb += (mem_regions->mem_regions[i].used / (1024 * 1024));
> + vram_total_mb += (mem_regions->mem_regions[i].total_size / (1024 * 1024));
> + }
> + }
> + igt_info("After alloc vram total %lu MB, used vram_used %lu MB\n", vram_total_mb, vram_used_mb);
> +
> + free(mem_regions);
> +}
> +
> +static void free_large_buf(int fd_xe)
> +{
> + igt_info("Freeing large_buf\n");
> + if (map_large_buf)
> + munmap(map_large_buf, buf_size);
> +}
> +
> int igt_main()
> {
> device_t device;
> @@ -925,26 +996,34 @@ int igt_main()
> }
>
> for (const struct s_state *s = s_states; s->name; s++) {
> +
> igt_subtest_f("%s-basic", s->name) {
> enum igt_suspend_test test = s->state == SUSPEND_STATE_DISK ?
> SUSPEND_TEST_DEVICES : SUSPEND_TEST_NONE;
> + alloc_large_buf(device.fd_xe);
> igt_system_suspend_autoresume(s->state, test);
> + free_large_buf(device.fd_xe);
> }
>
> igt_subtest_f("%s-basic-exec", s->name) {
> + alloc_large_buf(device.fd_xe);
> test_exec(device, 1, 2, s->state, NO_RPM, 0);
> + free_large_buf(device.fd_xe);
> }
>
> igt_subtest_f("%s-exec-after", s->name) {
> enum igt_suspend_test test = s->state == SUSPEND_STATE_DISK ?
> SUSPEND_TEST_DEVICES : SUSPEND_TEST_NONE;
> -
> + alloc_large_buf(device.fd_xe);
> igt_system_suspend_autoresume(s->state, test);
> test_exec(device, 1, 2, NO_SUSPEND, NO_RPM, 0);
> + free_large_buf(device.fd_xe);
> }
>
> igt_subtest_f("%s-multiple-execs", s->name) {
> + alloc_large_buf(device.fd_xe);
> test_exec(device, 16, 32, s->state, NO_RPM, 0);
> + free_large_buf(device.fd_xe);
> }
>
> for (const struct vm_op *op = vm_op; op->name; op++) {
> @@ -962,8 +1041,11 @@ int igt_main()
> }
> }
>
> - igt_subtest_f("%s-mocs", s->name)
> + igt_subtest_f("%s-mocs", s->name) {
> + alloc_large_buf(device.fd_xe);
> test_mocs_suspend_resume(device, s->state, NO_RPM);
> + free_large_buf(device.fd_xe);
> + }
> }
>
> igt_fixture() {
Thread overview: 9+ messages
2026-03-31 18:13 [PATCH i-g-t v1 0/1] Update xe_pm test with high VRAM usage Karthik Poosa
2026-03-31 18:13 ` [PATCH i-g-t v1 1/1] drm/xe/xe_pm: Have high VRAM usage during system suspend Karthik Poosa
2026-04-01 10:29 ` Matthew Auld [this message]
2026-04-10 13:56 ` Poosa, Karthik
2026-04-03 6:43 ` Zbigniew Kempczyński
2026-03-31 23:31 ` ✓ i915.CI.BAT: success for Update xe_pm test with high VRAM usage Patchwork
2026-03-31 23:56 ` ✓ Xe.CI.BAT: " Patchwork
2026-04-01 7:08 ` ✗ Xe.CI.FULL: failure " Patchwork
2026-04-01 16:46 ` ✓ i915.CI.Full: success " Patchwork