Intel-XE Archive on lore.kernel.org
 help / color / mirror / Atom feed
* [PATCH v5 0/2] TTM shrinker fragmentation / partial restore fixes
@ 2026-05-05 20:04 Matthew Brost
  2026-05-05 20:04 ` [PATCH v5 1/2] drm/ttm: Drop tt->restore after successful restore Matthew Brost
                   ` (3 more replies)
  0 siblings, 4 replies; 10+ messages in thread
From: Matthew Brost @ 2026-05-05 20:04 UTC (permalink / raw)
  To: intel-xe, dri-devel

Fixes for shrinker fragmentation and partial restore. Related to [1]
and a continuation of [2]; see the individual patches for details.

The series was tested by triggering the shrinker in a loop together
with the error injection below:

cd /sys/kernel/debug/ttm/backup_fault_inject
echo 100 > probability
echo 3000 > interval
echo -1 > times
echo 0 > space
echo 0 > verbose
echo M > task-filter
cd -

Matt

[1] https://patchwork.freedesktop.org/series/165330/
[2] https://patchwork.freedesktop.org/series/165877/

v3:
 - Address sashiko's concerns
v4:
 - Actually apply 'Save alloc in snapshot on restore failure (sashiko)'
   (was missing from the local change)
v5:
 - Reorder ttm_pool_apply_caching before restore free (sashiko)
 - Add ttm_pool_backup_folio helper (Thomas)

Matthew Brost (2):
  drm/ttm: Drop tt->restore after successful restore
  drm/ttm/pool: back up at native page order

 drivers/gpu/drm/ttm/ttm_pool.c | 125 ++++++++++++++++++++++++++++-----
 1 file changed, 106 insertions(+), 19 deletions(-)

-- 
2.34.1


^ permalink raw reply	[flat|nested] 10+ messages in thread

* [PATCH v5 1/2] drm/ttm: Drop tt->restore after successful restore
  2026-05-05 20:04 [PATCH v5 0/2] TTM shrinker fragmentation / partial restore fixes Matthew Brost
@ 2026-05-05 20:04 ` Matthew Brost
  2026-05-05 20:04 ` [PATCH v5 2/2] drm/ttm/pool: back up at native page order Matthew Brost
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 10+ messages in thread
From: Matthew Brost @ 2026-05-05 20:04 UTC (permalink / raw)
  To: intel-xe, dri-devel
  Cc: Thomas Hellström, Christian Koenig, Huang Rui, Matthew Auld,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, linux-kernel, stable

ttm_pool_restore_and_alloc() can successfully complete the restore
process via ttm_pool_restore_commit(), but tt->restore is not dropped
afterward. As a result, subsequent backup/restore flows observe what
appears to be a completed restore, while in reality shmem handles are
still installed in tt->pages, leading to the stack trace below.

Fix this by freeing and dropping tt->restore in
ttm_pool_restore_and_alloc() upon successful completion of the restore.
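The crash is ultimately an invariant violation: consumers such as
xe_tt_map_sg() assume every entry of tt->pages[] is a real page
pointer, while backup handles are encoded differently and a completed
restore must leave none of them behind. A standalone sketch of that
invariant and of the fix's effect (the tagged-pointer encoding and all
names here are hypothetical stand-ins, not the kernel's actual
representation):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define NPAGES 4

/* Hypothetical encoding: backup handles carry a tag bit that real,
 * properly aligned page pointers never have. */
static uintptr_t handle_to_page_ptr(uintptr_t handle)
{
	return (handle << 1) | 1;
}

static bool ptr_is_handle(uintptr_t p)
{
	return p & 1;
}

/* Stand-in for the restore commit: convert handles back to (fake)
 * page pointers. The fix corresponds to drop_state == true: on a
 * successful restore, the pending restore state is dropped so later
 * flows do not mistake handles for a completed restore. */
static void restore_commit(uintptr_t pages[], size_t n, bool drop_state,
			   bool *restore_pending)
{
	for (size_t i = 0; i < n; i++)
		if (ptr_is_handle(pages[i]))
			pages[i] = 0x1000 * (i + 1);	/* fake page pointer */
	if (drop_state)
		*restore_pending = false;	/* the fix: drop tt->restore */
}

/* Stand-in for xe_tt_map_sg(): must only ever see real page pointers. */
static bool map_ok(const uintptr_t pages[], size_t n)
{
	for (size_t i = 0; i < n; i++)
		if (ptr_is_handle(pages[i]))
			return false;
	return true;
}
```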

20545 [  309.784531] RIP: 0010:sg_alloc_append_table_from_pages+0x38c/0x490
20547 [  309.809570] RSP: 0018:ffffc9000623b838 EFLAGS: 00010206
20548 [  309.814827] RAX: 0000000000001000 RBX: ffff88816e42a160 RCX: 0000000000000000
20549 [  309.821986] RDX: 0000000000002000 RSI: 0000000000000003 RDI: 0000000000001000
20550 [  309.829147] RBP: ffff88816e42a168 R08: 0000000000000002 R09: 000000007ffff000
20551 [  309.836310] R10: ffffc9000623b928 R11: 0000000000000000 R12: 000000007ffff000
20552 [  309.843471] R13: ffff88815ba5a100 R14: 0000000000000000 R15: 0000000000000001
20553 [  309.850634] FS:  00007f9ff305e700(0000) GS:ffff888276c94000(0000) knlGS:0000000000000000
20554 [  309.858749] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
20555 [  309.864519] CR2: 00007f9fca701000 CR3: 00000001565e2005 CR4: 0000000008f70ef0
20556 [  309.871678] PKRU: 55555558
20557 [  309.874403] Call Trace:
20558 [  309.876866]  <TASK>
20559 [  309.878988]  sg_alloc_table_from_pages_segment+0x60/0x100
20560 [  309.884415]  ? ttm_resource_manager_usage+0x36/0x60 [ttm]
20561 [  309.889845]  ? xe_tt_map_sg+0x7d/0xd0 [xe]
20562 [  309.894045]  xe_tt_map_sg+0x7d/0xd0 [xe]
20563 [  309.898037]  xe_bo_move+0x927/0xaa0 [xe]
20564 [  309.902029]  ttm_bo_handle_move_mem+0xba/0x170 [ttm]
20565 [  309.907022]  ttm_bo_validate+0xbe/0x190 [ttm]
20566 [  309.911405]  xe_bo_validate+0x9a/0x120 [xe]
20567 [  309.915663]  xe_gpuvm_validate+0xd9/0x140 [xe]
20568 [  309.920206]  drm_gpuvm_validate+0x2f0/0x5b0 [drm_gpuvm]
20569 [  309.925459]  ? drm_exec_lock_obj+0x63/0x210 [drm_exec]
20570 [  309.930627]  xe_vm_validate_rebind+0x46/0xb0 [xe]
20571 [  309.935428]  xe_exec_fn+0x20/0x40 [xe]
20572 [  309.939249]  drm_gpuvm_exec_lock+0x78/0xc0 [drm_gpuvm]
20573 [  309.944410]  xe_validation_exec_lock+0x5a/0xa0 [xe]
20574 [  309.949385]  xe_exec_ioctl+0x806/0xc30 [xe]
20575 [  309.953639]  ? ttwu_queue_wakelist+0xd9/0xf0
20576 [  309.957935]  ? __pfx_xe_exec_fn+0x10/0x10 [xe]
20577 [  309.962449]  ? __wake_up_common+0x73/0xa0
20578 [  309.966482]  ? __pfx_xe_exec_ioctl+0x10/0x10 [xe]
20579 [  309.971263]  drm_ioctl_kernel+0xa3/0x100
20580 [  309.975209]  drm_ioctl+0x213/0x440
20581 [  309.978637]  ? __pfx_xe_exec_ioctl+0x10/0x10 [xe]
20582 [  309.983415]  xe_drm_ioctl+0x67/0xd0 [xe]
20583 [  309.987408]  __x64_sys_ioctl+0x7f/0xd0

Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Christian Koenig <christian.koenig@amd.com>
Cc: Huang Rui <ray.huang@amd.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Maxime Ripard <mripard@kernel.org>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: David Airlie <airlied@gmail.com>
Cc: Simona Vetter <simona@ffwll.ch>
Cc: dri-devel@lists.freedesktop.org
Cc: linux-kernel@vger.kernel.org
Cc: stable@vger.kernel.org
Fixes: b63d715b8090 ("drm/ttm/pool, drm/ttm/tt: Provide a helper to shrink pages")
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>

---

v3:
 - Call ttm_pool_apply_caching after freeing local restore (sashiko)
 - Save alloc in snapshot on restore failure (sashiko)
v4:
 - Actual 'Save alloc in snapshot on restore failure (sashiko)'
v5:
 - kfree restore after ttm_pool_apply_caching (sashiko)
---
 drivers/gpu/drm/ttm/ttm_pool.c | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
index 278bbe7a11ad..d380a3c7fe40 100644
--- a/drivers/gpu/drm/ttm/ttm_pool.c
+++ b/drivers/gpu/drm/ttm/ttm_pool.c
@@ -902,6 +902,7 @@ int ttm_pool_restore_and_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
 {
 	struct ttm_pool_tt_restore *restore = tt->restore;
 	struct ttm_pool_alloc_state alloc;
+	int ret;
 
 	if (WARN_ON(!ttm_tt_is_backed_up(tt)))
 		return -EINVAL;
@@ -925,14 +926,22 @@ int ttm_pool_restore_and_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
 	} else {
 		alloc = restore->snapshot_alloc;
 		if (ttm_pool_restore_valid(restore)) {
-			int ret = ttm_pool_restore_commit(restore, tt->backup,
-							  ctx, &alloc);
+			ret = ttm_pool_restore_commit(restore, tt->backup,
+						      ctx, &alloc);
 
 			if (ret)
 				return ret;
 		}
-		if (!alloc.remaining_pages)
+		if (!alloc.remaining_pages) {
+			ret = ttm_pool_apply_caching(&alloc);
+			if (ret)
+				return ret;
+
+			kfree(tt->restore);
+			tt->restore = NULL;
+
 			return 0;
+		}
 	}
 
 	return __ttm_pool_alloc(pool, tt, ctx, &alloc, restore);
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH v5 2/2] drm/ttm/pool: back up at native page order
  2026-05-05 20:04 [PATCH v5 0/2] TTM shrinker fragmentation / partial restore fixes Matthew Brost
  2026-05-05 20:04 ` [PATCH v5 1/2] drm/ttm: Drop tt->restore after successful restore Matthew Brost
@ 2026-05-05 20:04 ` Matthew Brost
  2026-05-06 14:23   ` Thomas Hellström
  2026-05-05 20:19 ` ✗ CI.checkpatch: warning for TTM shrinker fragmentation / partial restore fixes Patchwork
  2026-05-05 20:20 ` ✓ CI.KUnit: success " Patchwork
  3 siblings, 1 reply; 10+ messages in thread
From: Matthew Brost @ 2026-05-05 20:04 UTC (permalink / raw)
  To: intel-xe, dri-devel
  Cc: Christian Koenig, Huang Rui, Matthew Auld, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
	linux-kernel, stable, Thomas Hellström

ttm_pool_split_for_swap() splits high-order pool pages into order-0
pages during backup so each 4K page can be released to the system as
soon as it has been written to shmem. While this minimizes the
allocator's working set during reclaim, it actively fragments memory:
every TTM-backed compound page that the shrinker touches is shattered
into order-0 pages, even when the rest of the system would prefer that
the high-order block stay intact. Under sustained kswapd pressure this
is enough to drive other parts of MM into recovery loops from which
they cannot easily escape, because the memory TTM just freed is no
longer contiguous.

Stop unconditionally splitting on the backup path and back up each
compound at its native order in ttm_pool_backup():

  - For each non-handle slot, read the order from the head page and
    back up all 1<<order subpages to consecutive shmem indices,
    writing the resulting handles into tt->pages[] as we go.
  - On success, the compound is freed once at its native order. No
    split_page(), no per-4K refcount juggling, no fragmentation
    introduced from this path.
  - Slots that already hold a backup handle from a previous partial
    attempt are skipped. A compound that would extend past a
    fault-injection-truncated num_pages is skipped rather than split.

A per-subpage backup failure cannot be made fully atomic: backing up a
subpage allocates a shmem folio before the source page can be released,
so under true OOM any subpage in a compound (not just the first) may
fail to be backed up with the rest of the source compound still live
and contiguous. To make forward progress in that case, fall back to
splitting the source compound and backing up its remaining subpages
individually:

  - On the first per-subpage failure for a compound (and only if
    order > 0), call ttm_pool_split_for_swap() to split the source
    compound, release the subpages whose contents already live in
    shmem (their handles in tt->pages stay valid), and retry the
    failing subpage at order 0.
  - Subsequent successful subpage backups in the now-split compound
    free their source page individually as soon as the handle is
    written.
  - A second failure after splitting terminates the loop with partial
    progress; the remaining order-0 subpages stay in tt->pages as
    plain page pointers and are cleaned up by the normal
    ttm_pool_drop_backed_up() / ttm_pool_free_range() paths.

This restores the original split-on-OOM fallback behavior while
keeping the common, non-OOM case fragmentation-free. It also
preserves the "partial backup is allowed" contract: shrunken is
incremented per backed-up subpage so the caller still sees forward
progress when a compound only partially succeeds.

The restore-side leftover-page branch in ttm_pool_restore_commit() is
left as-is for now: that path can still split a previously-retained
compound, but in practice it is unreachable under realistic workloads
(per profiling we have not been able to trigger it), so it is not
worth complicating the restore state machine to avoid the split there.
If it ever becomes a problem in practice it can be addressed
independently.

ttm_pool_split_for_swap() itself is retained both for the OOM
fallback above and for the restore path's remaining caller. The
DMA-mapped pre-backup unmap loop, the purge path, ttm_pool_free_*,
and ttm_pool_unmap_and_free() already operate at native order and
are unchanged.
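The backup loop and its split-on-OOM fallback described above can be
condensed into a standalone model. All names here are hypothetical
stand-ins (backup_page() for ttm_backup_backup_page(), the split
bookkeeping for ttm_pool_split_for_swap()), and the single-shot failure
models the assumption that splitting frees enough memory for the
order-0 retry to succeed; this is a sketch, not the kernel code:

```c
#include <assert.h>
#include <stdbool.h>

#define ORDER 2
#define NPAGES (1 << ORDER)	/* subpages per compound */

static int fail_at = -1;	/* subpage index at which backup fails once */
static bool failed_once;
static int freed_native;	/* compounds freed whole, at native order */
static int freed_single;	/* subpages freed individually after a split */

/* Stand-in for ttm_backup_backup_page(): returns a fake shmem handle,
 * or -ENOMEM exactly once when fail_at is hit (modeling that the split
 * released memory, so the order-0 retry succeeds). */
static long backup_page(int idx)
{
	if (idx == fail_at && !failed_once) {
		failed_once = true;
		return -12;	/* -ENOMEM */
	}
	return 1000 + idx;	/* fake shmem handle */
}

/* Returns the number of subpages backed up ("shrunken"). */
static int backup_folio(long handles[NPAGES])
{
	bool split = false;	/* has the source compound been split? */
	int shrunken = 0;

	for (int i = 0; i < NPAGES; i++) {
		long h = backup_page(i);

		if (h < 0 && !split && ORDER > 0) {
			/* First failure: split the compound, release the
			 * subpages whose contents already live in shmem,
			 * and retry this subpage at order 0. */
			split = true;
			freed_single += i;
			shrunken += i;
			h = backup_page(i);
		}
		if (h < 0)
			return shrunken;	/* partial progress is allowed */
		if (split) {
			freed_single++;		/* free this subpage now */
			shrunken++;
		}
		handles[i] = h;
	}
	if (!split) {
		freed_native++;			/* free once, at native order */
		shrunken += NPAGES;
	}
	return shrunken;
}
```

In the common, non-OOM case the compound is freed exactly once at its
native order and no fragmentation is introduced; only the failure path
falls back to per-subpage frees.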

Cc: Christian Koenig <christian.koenig@amd.com>
Cc: Huang Rui <ray.huang@amd.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Maxime Ripard <mripard@kernel.org>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: David Airlie <airlied@gmail.com>
Cc: Simona Vetter <simona@ffwll.ch>
Cc: dri-devel@lists.freedesktop.org
Cc: linux-kernel@vger.kernel.org
Cc: stable@vger.kernel.org
Fixes: b63d715b8090 ("drm/ttm/pool, drm/ttm/tt: Provide a helper to shrink pages")
Suggested-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Assisted-by: Claude:claude-opus-4.6
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

---

A follow-up should attempt writeback to shmem at folio order as well,
but the API for doing so is unclear and may be incomplete.

This patch is related to the pending series [1] and significantly
reduces the likelihood of Xe entering a kswapd loop under fragmentation.
The kswapd → shrinker → Xe shrinker → TTM backup path is still
exercised; however, with this change the backup path no longer worsens
fragmentation, which previously amplified reclaim pressure and
reinforced the kswapd loop.

Nonetheless, the pathological case that [1] aims to address still exists
and requires a proper solution. Even with this patch, a kswapd loop due
to severe fragmentation can still be triggered, although it is now
substantially harder to reproduce.

v2:
 - Split pages and free immediately if backup of a higher-order page
   fails (Thomas)
v3:
 - Skip handles in purge path (sashiko)
v5:
 - Refactor into ttm_pool_backup_folio (Thomas)

[1] https://patchwork.freedesktop.org/series/165330/
---
 drivers/gpu/drm/ttm/ttm_pool.c | 110 ++++++++++++++++++++++++++++-----
 1 file changed, 94 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
index d380a3c7fe40..78efc8524133 100644
--- a/drivers/gpu/drm/ttm/ttm_pool.c
+++ b/drivers/gpu/drm/ttm/ttm_pool.c
@@ -1019,6 +1019,70 @@ void ttm_pool_drop_backed_up(struct ttm_tt *tt)
 	ttm_pool_free_range(NULL, tt, ttm_cached, start_page, tt->num_pages);
 }
 
+static int ttm_pool_backup_folio(struct ttm_pool *pool, struct ttm_tt *tt,
+				 struct file *backup, struct folio *folio,
+				 unsigned int order, bool writeback,
+				 pgoff_t idx, gfp_t page_gfp, gfp_t alloc_gfp)
+{
+	struct page *page = folio_page(folio, 0);
+	int shrunken = 0, npages = 1UL << order, ret = 0, i;
+	bool folio_has_been_split = false;
+
+	for (i = 0; i < npages; ++i) {
+		s64 shandle;
+
+try_again_after_split:
+		if (IS_ENABLED(CONFIG_FAULT_INJECTION) &&
+		    should_fail(&backup_fault_inject, 1))
+			shandle = -ENOMEM;
+		else
+			shandle = ttm_backup_backup_page(backup, page + i,
+							 writeback, idx + i,
+							 page_gfp, alloc_gfp);
+
+		if (shandle < 0 && !folio_has_been_split && order) {
+			pgoff_t j;
+
+			/*
+			 * True OOM: could not allocate a shmem folio
+			 * for the next subpage. Fall back to splitting
+			 * the source compound and backing up subpages
+			 * individually. Release the already-backed-up
+			 * subpages whose contents now live in shmem;
+			 * any further failure terminates the loop with
+			 * partial progress (handled by the caller).
+			 */
+			folio_has_been_split = true;
+			ttm_pool_split_for_swap(pool, page);
+
+			for (j = 0; j < i; ++j) {
+				__free_pages_gpu_account(page + j, 0, false);
+				shrunken++;
+			}
+
+			goto try_again_after_split;
+		} else if (shandle < 0) {
+			ret = shandle;
+			goto out;
+		} else if (folio_has_been_split) {
+			__free_pages_gpu_account(page + i, 0, false);
+			shrunken++;
+		}
+
+		tt->pages[idx + i] = ttm_backup_handle_to_page_ptr(shandle);
+	}
+
+	if (!folio_has_been_split) {
+		/* Compound fully backed up; free at native order. */
+		page->private = 0;
+		__free_pages_gpu_account(page, order, false);
+		shrunken += npages;
+	}
+
+out:
+	return shrunken ? shrunken : ret;
+}
+
 /**
  * ttm_pool_backup() - Back up or purge a struct ttm_tt
  * @pool: The pool used when allocating the struct ttm_tt.
@@ -1045,12 +1109,11 @@ long ttm_pool_backup(struct ttm_pool *pool, struct ttm_tt *tt,
 {
 	struct file *backup = tt->backup;
 	struct page *page;
-	unsigned long handle;
 	gfp_t alloc_gfp;
 	gfp_t gfp;
 	int ret = 0;
 	pgoff_t shrunken = 0;
-	pgoff_t i, num_pages;
+	pgoff_t i, num_pages, npages;
 
 	if (WARN_ON(ttm_tt_is_backed_up(tt)))
 		return -EINVAL;
@@ -1070,7 +1133,8 @@ long ttm_pool_backup(struct ttm_pool *pool, struct ttm_tt *tt,
 			unsigned int order;
 
 			page = tt->pages[i];
-			if (unlikely(!page)) {
+			if (unlikely(!page ||
+				     ttm_backup_page_ptr_is_handle(page))) {
 				num_pages = 1;
 				continue;
 			}
@@ -1106,26 +1170,40 @@ long ttm_pool_backup(struct ttm_pool *pool, struct ttm_tt *tt,
 	if (IS_ENABLED(CONFIG_FAULT_INJECTION) && should_fail(&backup_fault_inject, 1))
 		num_pages = DIV_ROUND_UP(num_pages, 2);
 
-	for (i = 0; i < num_pages; ++i) {
-		s64 shandle;
+	for (i = 0; i < num_pages; i += npages) {
+		unsigned int order;
 
+		npages = 1;
 		page = tt->pages[i];
 		if (unlikely(!page))
 			continue;
 
-		ttm_pool_split_for_swap(pool, page);
+		/* Already-handled entry from a previous attempt. */
+		if (unlikely(ttm_backup_page_ptr_is_handle(page)))
+			continue;
 
-		shandle = ttm_backup_backup_page(backup, page, flags->writeback, i,
-						 gfp, alloc_gfp);
-		if (shandle < 0) {
-			/* We allow partially shrunken tts */
-			ret = shandle;
+		order = ttm_pool_page_order(pool, page);
+		npages = 1UL << order;
+
+		/*
+		 * Back up the compound atomically at its native order. If
+		 * fault injection truncated num_pages mid-compound, skip
+		 * the partial tail rather than splitting.
+		 */
+		if (unlikely(i + npages > num_pages))
+			break;
+
+		ret = ttm_pool_backup_folio(pool, tt, backup, page_folio(page),
+					    order, flags->writeback, i, gfp,
+					    alloc_gfp);
+		if (unlikely(ret < 0))
+			break;
+
+		shrunken += ret;
+
+		/* partial backup */
+		if (unlikely(ret != npages))
 			break;
-		}
-		handle = shandle;
-		tt->pages[i] = ttm_backup_handle_to_page_ptr(handle);
-		__free_pages_gpu_account(page, 0, false);
-		shrunken++;
 	}
 
 	return shrunken ? shrunken : ret;
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* ✗ CI.checkpatch: warning for TTM shrinker fragmentation / partial restore fixes
  2026-05-05 20:04 [PATCH v5 0/2] TTM shrinker fragmentation / partial restore fixes Matthew Brost
  2026-05-05 20:04 ` [PATCH v5 1/2] drm/ttm: Drop tt->restore after successful restore Matthew Brost
  2026-05-05 20:04 ` [PATCH v5 2/2] drm/ttm/pool: back up at native page order Matthew Brost
@ 2026-05-05 20:19 ` Patchwork
  2026-05-05 20:20 ` ✓ CI.KUnit: success " Patchwork
  3 siblings, 0 replies; 10+ messages in thread
From: Patchwork @ 2026-05-05 20:19 UTC (permalink / raw)
  To: Matthew Brost; +Cc: intel-xe

== Series Details ==

Series: TTM shrinker fragmentation / partial restore fixes
URL   : https://patchwork.freedesktop.org/series/166020/
State : warning

== Summary ==

+ KERNEL=/kernel
+ git clone https://gitlab.freedesktop.org/drm/maintainer-tools mt
Cloning into 'mt'...
warning: redirecting to https://gitlab.freedesktop.org/drm/maintainer-tools.git/
+ git -C mt rev-list -n1 origin/master
c8c12e558adaef7a4d125d83b6e1f8824bc13b82
+ cd /kernel
+ git config --global --add safe.directory /kernel
+ git log -n1
commit 99b83e38cf79a39f5b07c6d2a426047826e7c518
Author: Matthew Brost <matthew.brost@intel.com>
Date:   Tue May 5 13:04:43 2026 -0700

    drm/ttm/pool: back up at native page order
    
    ttm_pool_split_for_swap() splits high-order pool pages into order-0
    pages during backup so each 4K page can be released to the system as
    soon as it has been written to shmem. While this minimizes the
    allocator's working set during reclaim, it actively fragments memory:
    every TTM-backed compound page that the shrinker touches is shattered
    into order-0 pages, even when the rest of the system would prefer that
    the high-order block stay intact. Under sustained kswapd pressure this
    is enough to drive other parts of MM into recovery loops from which
    they cannot easily escape, because the memory TTM just freed is no
    longer contiguous.
    
    Stop unconditionally splitting on the backup path and back up each
    compound at its native order in ttm_pool_backup():
    
      - For each non-handle slot, read the order from the head page and
        back up all 1<<order subpages to consecutive shmem indices,
        writing the resulting handles into tt->pages[] as we go.
      - On success, the compound is freed once at its native order. No
        split_page(), no per-4K refcount juggling, no fragmentation
        introduced from this path.
      - Slots that already hold a backup handle from a previous partial
        attempt are skipped. A compound that would extend past a
        fault-injection-truncated num_pages is skipped rather than split.
    
    A per-subpage backup failure cannot be made fully atomic: backing up a
    subpage allocates a shmem folio before the source page can be released,
    so under true OOM any subpage in a compound (not just the first) may
    fail to be backed up with the rest of the source compound still live
    and contiguous. To make forward progress in that case, fall back to
    splitting the source compound and backing up its remaining subpages
    individually:
    
      - On the first per-subpage failure for a compound (and only if
        order > 0), call ttm_pool_split_for_swap() to split the source
        compound, release the subpages whose contents already live in
        shmem (their handles in tt->pages stay valid), and retry the
        failing subpage at order 0.
      - Subsequent successful subpage backups in the now-split compound
        free their source page individually as soon as the handle is
        written.
      - A second failure after splitting terminates the loop with partial
        progress; the remaining order-0 subpages stay in tt->pages as
        plain page pointers and are cleaned up by the normal
        ttm_pool_drop_backed_up() / ttm_pool_free_range() paths.
    
    This restores the original split-on-OOM fallback behavior while
    keeping the common, non-OOM case fragmentation-free. It also
    preserves the "partial backup is allowed" contract: shrunken is
    incremented per backed-up subpage so the caller still sees forward
    progress when a compound only partially succeeds.
    
    The restore-side leftover-page branch in ttm_pool_restore_commit() is
    left as-is for now: that path can still split a previously-retained
    compound, but in practice it is unreachable under realistic workloads
    (per profiling we have not been able to trigger it), so it is not
    worth complicating the restore state machine to avoid the split there.
    If it ever becomes a problem in practice it can be addressed
    independently.
    
    ttm_pool_split_for_swap() itself is retained both for the OOM
    fallback above and for the restore path's remaining caller. The
    DMA-mapped pre-backup unmap loop, the purge path, ttm_pool_free_*,
    and ttm_pool_unmap_and_free() already operate at native order and
    are unchanged.
    
    Cc: Christian Koenig <christian.koenig@amd.com>
    Cc: Huang Rui <ray.huang@amd.com>
    Cc: Matthew Auld <matthew.auld@intel.com>
    Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
    Cc: Maxime Ripard <mripard@kernel.org>
    Cc: Thomas Zimmermann <tzimmermann@suse.de>
    Cc: David Airlie <airlied@gmail.com>
    Cc: Simona Vetter <simona@ffwll.ch>
    Cc: dri-devel@lists.freedesktop.org
    Cc: linux-kernel@vger.kernel.org
    Cc: stable@vger.kernel.org
    Fixes: b63d715b8090 ("drm/ttm/pool, drm/ttm/tt: Provide a helper to shrink pages")
    Suggested-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
    Assisted-by: Claude:claude-opus-4.6
    Signed-off-by: Matthew Brost <matthew.brost@intel.com>
+ /mt/dim checkpatch 91fb48e802df6b1799b8c6a2a4d8fa67a718989e drm-intel
40a5967b4834 drm/ttm: Drop tt->restore after successful restore
-:18: WARNING:COMMIT_LOG_LONG_LINE: Prefer a maximum 75 chars per line (possible unwrapped commit description?)
#18: 
20545 [  309.784531] RIP: 0010:sg_alloc_append_table_from_pages+0x38c/0x490

total: 0 errors, 1 warnings, 0 checks, 32 lines checked
99b83e38cf79 drm/ttm/pool: back up at native page order



^ permalink raw reply	[flat|nested] 10+ messages in thread

* ✓ CI.KUnit: success for TTM shrinker fragmentation / partial restore fixes
  2026-05-05 20:04 [PATCH v5 0/2] TTM shrinker fragmentation / partial restore fixes Matthew Brost
                   ` (2 preceding siblings ...)
  2026-05-05 20:19 ` ✗ CI.checkpatch: warning for TTM shrinker fragmentation / partial restore fixes Patchwork
@ 2026-05-05 20:20 ` Patchwork
  3 siblings, 0 replies; 10+ messages in thread
From: Patchwork @ 2026-05-05 20:20 UTC (permalink / raw)
  To: Matthew Brost; +Cc: intel-xe

== Series Details ==

Series: TTM shrinker fragmentation / partial restore fixes
URL   : https://patchwork.freedesktop.org/series/166020/
State : success

== Summary ==

+ trap cleanup EXIT
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/xe/.kunitconfig
[20:19:36] Configuring KUnit Kernel ...
Generating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[20:19:40] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[20:20:11] Starting KUnit Kernel (1/1)...
[20:20:11] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[20:20:11] ================== guc_buf (11 subtests) ===================
[20:20:11] [PASSED] test_smallest
[20:20:11] [PASSED] test_largest
[20:20:11] [PASSED] test_granular
[20:20:11] [PASSED] test_unique
[20:20:11] [PASSED] test_overlap
[20:20:11] [PASSED] test_reusable
[20:20:11] [PASSED] test_too_big
[20:20:11] [PASSED] test_flush
[20:20:11] [PASSED] test_lookup
[20:20:11] [PASSED] test_data
[20:20:11] [PASSED] test_class
[20:20:11] ===================== [PASSED] guc_buf =====================
[20:20:11] =================== guc_dbm (7 subtests) ===================
[20:20:11] [PASSED] test_empty
[20:20:11] [PASSED] test_default
[20:20:11] ======================== test_size  ========================
[20:20:11] [PASSED] 4
[20:20:11] [PASSED] 8
[20:20:11] [PASSED] 32
[20:20:11] [PASSED] 256
[20:20:11] ==================== [PASSED] test_size ====================
[20:20:11] ======================= test_reuse  ========================
[20:20:11] [PASSED] 4
[20:20:11] [PASSED] 8
[20:20:11] [PASSED] 32
[20:20:11] [PASSED] 256
[20:20:11] =================== [PASSED] test_reuse ====================
[20:20:11] =================== test_range_overlap  ====================
[20:20:11] [PASSED] 4
[20:20:11] [PASSED] 8
[20:20:11] [PASSED] 32
[20:20:11] [PASSED] 256
[20:20:11] =============== [PASSED] test_range_overlap ================
[20:20:11] =================== test_range_compact  ====================
[20:20:11] [PASSED] 4
[20:20:11] [PASSED] 8
[20:20:11] [PASSED] 32
[20:20:11] [PASSED] 256
[20:20:11] =============== [PASSED] test_range_compact ================
[20:20:11] ==================== test_range_spare  =====================
[20:20:11] [PASSED] 4
[20:20:11] [PASSED] 8
[20:20:11] [PASSED] 32
[20:20:11] [PASSED] 256
[20:20:11] ================ [PASSED] test_range_spare =================
[20:20:11] ===================== [PASSED] guc_dbm =====================
[20:20:11] =================== guc_idm (6 subtests) ===================
[20:20:11] [PASSED] bad_init
[20:20:11] [PASSED] no_init
[20:20:11] [PASSED] init_fini
[20:20:11] [PASSED] check_used
[20:20:11] [PASSED] check_quota
[20:20:11] [PASSED] check_all
[20:20:11] ===================== [PASSED] guc_idm =====================
[20:20:11] ================== no_relay (3 subtests) ===================
[20:20:11] [PASSED] xe_drops_guc2pf_if_not_ready
[20:20:11] [PASSED] xe_drops_guc2vf_if_not_ready
[20:20:11] [PASSED] xe_rejects_send_if_not_ready
[20:20:11] ==================== [PASSED] no_relay =====================
[20:20:11] ================== pf_relay (14 subtests) ==================
[20:20:11] [PASSED] pf_rejects_guc2pf_too_short
[20:20:11] [PASSED] pf_rejects_guc2pf_too_long
[20:20:11] [PASSED] pf_rejects_guc2pf_no_payload
[20:20:11] [PASSED] pf_fails_no_payload
[20:20:11] [PASSED] pf_fails_bad_origin
[20:20:11] [PASSED] pf_fails_bad_type
[20:20:11] [PASSED] pf_txn_reports_error
[20:20:11] [PASSED] pf_txn_sends_pf2guc
[20:20:11] [PASSED] pf_sends_pf2guc
[20:20:11] [SKIPPED] pf_loopback_nop
[20:20:11] [SKIPPED] pf_loopback_echo
[20:20:11] [SKIPPED] pf_loopback_fail
[20:20:11] [SKIPPED] pf_loopback_busy
[20:20:11] [SKIPPED] pf_loopback_retry
[20:20:11] ==================== [PASSED] pf_relay =====================
[20:20:11] ================== vf_relay (3 subtests) ===================
[20:20:11] [PASSED] vf_rejects_guc2vf_too_short
[20:20:11] [PASSED] vf_rejects_guc2vf_too_long
[20:20:11] [PASSED] vf_rejects_guc2vf_no_payload
[20:20:11] ==================== [PASSED] vf_relay =====================
[20:20:11] ================ pf_gt_config (9 subtests) =================
[20:20:11] [PASSED] fair_contexts_1vf
[20:20:11] [PASSED] fair_doorbells_1vf
[20:20:11] [PASSED] fair_ggtt_1vf
[20:20:11] ====================== fair_vram_1vf  ======================
[20:20:11] [PASSED] 3.50 GiB
[20:20:11] [PASSED] 11.5 GiB
[20:20:11] [PASSED] 15.5 GiB
[20:20:11] [PASSED] 31.5 GiB
[20:20:11] [PASSED] 63.5 GiB
[20:20:11] [PASSED] 1.91 GiB
[20:20:11] ================== [PASSED] fair_vram_1vf ==================
[20:20:11] ================ fair_vram_1vf_admin_only  =================
[20:20:11] [PASSED] 3.50 GiB
[20:20:11] [PASSED] 11.5 GiB
[20:20:11] [PASSED] 15.5 GiB
[20:20:11] [PASSED] 31.5 GiB
[20:20:11] [PASSED] 63.5 GiB
[20:20:11] [PASSED] 1.91 GiB
[20:20:11] ============ [PASSED] fair_vram_1vf_admin_only =============
[20:20:11] ====================== fair_contexts  ======================
[20:20:11] [PASSED] 1 VF
[20:20:11] [PASSED] 2 VFs
[20:20:11] [PASSED] 3 VFs
[20:20:11] [PASSED] 4 VFs
[20:20:11] [PASSED] 5 VFs
[20:20:11] [PASSED] 6 VFs
[20:20:11] [PASSED] 7 VFs
[20:20:11] [PASSED] 8 VFs
[20:20:11] [PASSED] 9 VFs
[20:20:11] [PASSED] 10 VFs
[20:20:11] [PASSED] 11 VFs
[20:20:11] [PASSED] 12 VFs
[20:20:11] [PASSED] 13 VFs
[20:20:11] [PASSED] 14 VFs
[20:20:11] [PASSED] 15 VFs
[20:20:11] [PASSED] 16 VFs
[20:20:11] [PASSED] 17 VFs
[20:20:11] [PASSED] 18 VFs
[20:20:11] [PASSED] 19 VFs
[20:20:11] [PASSED] 20 VFs
[20:20:11] [PASSED] 21 VFs
[20:20:11] [PASSED] 22 VFs
[20:20:11] [PASSED] 23 VFs
[20:20:11] [PASSED] 24 VFs
[20:20:11] [PASSED] 25 VFs
[20:20:11] [PASSED] 26 VFs
[20:20:11] [PASSED] 27 VFs
[20:20:11] [PASSED] 28 VFs
[20:20:11] [PASSED] 29 VFs
[20:20:11] [PASSED] 30 VFs
[20:20:11] [PASSED] 31 VFs
[20:20:11] [PASSED] 32 VFs
[20:20:11] [PASSED] 33 VFs
[20:20:11] [PASSED] 34 VFs
[20:20:11] [PASSED] 35 VFs
[20:20:11] [PASSED] 36 VFs
[20:20:11] [PASSED] 37 VFs
[20:20:11] [PASSED] 38 VFs
[20:20:11] [PASSED] 39 VFs
[20:20:11] [PASSED] 40 VFs
[20:20:11] [PASSED] 41 VFs
[20:20:11] [PASSED] 42 VFs
[20:20:11] [PASSED] 43 VFs
[20:20:11] [PASSED] 44 VFs
[20:20:11] [PASSED] 45 VFs
[20:20:11] [PASSED] 46 VFs
[20:20:11] [PASSED] 47 VFs
[20:20:11] [PASSED] 48 VFs
[20:20:11] [PASSED] 49 VFs
[20:20:11] [PASSED] 50 VFs
[20:20:11] [PASSED] 51 VFs
[20:20:11] [PASSED] 52 VFs
[20:20:11] [PASSED] 53 VFs
[20:20:11] [PASSED] 54 VFs
[20:20:11] [PASSED] 55 VFs
[20:20:11] [PASSED] 56 VFs
[20:20:11] [PASSED] 57 VFs
[20:20:11] [PASSED] 58 VFs
[20:20:11] [PASSED] 59 VFs
[20:20:11] [PASSED] 60 VFs
[20:20:11] [PASSED] 61 VFs
[20:20:11] [PASSED] 62 VFs
[20:20:11] [PASSED] 63 VFs
[20:20:11] ================== [PASSED] fair_contexts ==================
[20:20:11] ===================== fair_doorbells  ======================
[20:20:11] [PASSED] 1 VF
[20:20:11] [PASSED] 2 VFs
[20:20:11] [PASSED] 3 VFs
[20:20:11] [PASSED] 4 VFs
[20:20:11] [PASSED] 5 VFs
[20:20:11] [PASSED] 6 VFs
[20:20:11] [PASSED] 7 VFs
[20:20:11] [PASSED] 8 VFs
[20:20:11] [PASSED] 9 VFs
[20:20:11] [PASSED] 10 VFs
[20:20:11] [PASSED] 11 VFs
[20:20:11] [PASSED] 12 VFs
[20:20:11] [PASSED] 13 VFs
[20:20:11] [PASSED] 14 VFs
[20:20:11] [PASSED] 15 VFs
[20:20:11] [PASSED] 16 VFs
[20:20:11] [PASSED] 17 VFs
[20:20:11] [PASSED] 18 VFs
[20:20:11] [PASSED] 19 VFs
[20:20:11] [PASSED] 20 VFs
[20:20:11] [PASSED] 21 VFs
[20:20:11] [PASSED] 22 VFs
[20:20:11] [PASSED] 23 VFs
[20:20:11] [PASSED] 24 VFs
[20:20:11] [PASSED] 25 VFs
[20:20:11] [PASSED] 26 VFs
[20:20:11] [PASSED] 27 VFs
[20:20:11] [PASSED] 28 VFs
[20:20:11] [PASSED] 29 VFs
[20:20:11] [PASSED] 30 VFs
[20:20:11] [PASSED] 31 VFs
[20:20:11] [PASSED] 32 VFs
[20:20:11] [PASSED] 33 VFs
[20:20:11] [PASSED] 34 VFs
[20:20:11] [PASSED] 35 VFs
[20:20:11] [PASSED] 36 VFs
[20:20:11] [PASSED] 37 VFs
[20:20:11] [PASSED] 38 VFs
[20:20:11] [PASSED] 39 VFs
[20:20:11] [PASSED] 40 VFs
[20:20:11] [PASSED] 41 VFs
[20:20:11] [PASSED] 42 VFs
[20:20:11] [PASSED] 43 VFs
[20:20:11] [PASSED] 44 VFs
[20:20:11] [PASSED] 45 VFs
[20:20:11] [PASSED] 46 VFs
[20:20:11] [PASSED] 47 VFs
[20:20:11] [PASSED] 48 VFs
[20:20:11] [PASSED] 49 VFs
[20:20:11] [PASSED] 50 VFs
[20:20:11] [PASSED] 51 VFs
[20:20:11] [PASSED] 52 VFs
[20:20:11] [PASSED] 53 VFs
[20:20:11] [PASSED] 54 VFs
[20:20:11] [PASSED] 55 VFs
[20:20:11] [PASSED] 56 VFs
[20:20:11] [PASSED] 57 VFs
[20:20:11] [PASSED] 58 VFs
[20:20:11] [PASSED] 59 VFs
[20:20:11] [PASSED] 60 VFs
[20:20:11] [PASSED] 61 VFs
[20:20:11] [PASSED] 62 VFs
[20:20:11] [PASSED] 63 VFs
[20:20:11] ================= [PASSED] fair_doorbells ==================
[20:20:11] ======================== fair_ggtt  ========================
[20:20:11] [PASSED] 1 VF
[20:20:11] [PASSED] 2 VFs
[20:20:11] [PASSED] 3 VFs
[20:20:11] [PASSED] 4 VFs
[20:20:11] [PASSED] 5 VFs
[20:20:11] [PASSED] 6 VFs
[20:20:11] [PASSED] 7 VFs
[20:20:11] [PASSED] 8 VFs
[20:20:11] [PASSED] 9 VFs
[20:20:11] [PASSED] 10 VFs
[20:20:11] [PASSED] 11 VFs
[20:20:11] [PASSED] 12 VFs
[20:20:11] [PASSED] 13 VFs
[20:20:12] [PASSED] 14 VFs
[20:20:12] [PASSED] 15 VFs
[20:20:12] [PASSED] 16 VFs
[20:20:12] [PASSED] 17 VFs
[20:20:12] [PASSED] 18 VFs
[20:20:12] [PASSED] 19 VFs
[20:20:12] [PASSED] 20 VFs
[20:20:12] [PASSED] 21 VFs
[20:20:12] [PASSED] 22 VFs
[20:20:12] [PASSED] 23 VFs
[20:20:12] [PASSED] 24 VFs
[20:20:12] [PASSED] 25 VFs
[20:20:12] [PASSED] 26 VFs
[20:20:12] [PASSED] 27 VFs
[20:20:12] [PASSED] 28 VFs
[20:20:12] [PASSED] 29 VFs
[20:20:12] [PASSED] 30 VFs
[20:20:12] [PASSED] 31 VFs
[20:20:12] [PASSED] 32 VFs
[20:20:12] [PASSED] 33 VFs
[20:20:12] [PASSED] 34 VFs
[20:20:12] [PASSED] 35 VFs
[20:20:12] [PASSED] 36 VFs
[20:20:12] [PASSED] 37 VFs
[20:20:12] [PASSED] 38 VFs
[20:20:12] [PASSED] 39 VFs
[20:20:12] [PASSED] 40 VFs
[20:20:12] [PASSED] 41 VFs
[20:20:12] [PASSED] 42 VFs
[20:20:12] [PASSED] 43 VFs
[20:20:12] [PASSED] 44 VFs
[20:20:12] [PASSED] 45 VFs
[20:20:12] [PASSED] 46 VFs
[20:20:12] [PASSED] 47 VFs
[20:20:12] [PASSED] 48 VFs
[20:20:12] [PASSED] 49 VFs
[20:20:12] [PASSED] 50 VFs
[20:20:12] [PASSED] 51 VFs
[20:20:12] [PASSED] 52 VFs
[20:20:12] [PASSED] 53 VFs
[20:20:12] [PASSED] 54 VFs
[20:20:12] [PASSED] 55 VFs
[20:20:12] [PASSED] 56 VFs
[20:20:12] [PASSED] 57 VFs
[20:20:12] [PASSED] 58 VFs
[20:20:12] [PASSED] 59 VFs
[20:20:12] [PASSED] 60 VFs
[20:20:12] [PASSED] 61 VFs
[20:20:12] [PASSED] 62 VFs
[20:20:12] [PASSED] 63 VFs
[20:20:12] ==================== [PASSED] fair_ggtt ====================
[20:20:12] ======================== fair_vram  ========================
[20:20:12] [PASSED] 1 VF
[20:20:12] [PASSED] 2 VFs
[20:20:12] [PASSED] 3 VFs
[20:20:12] [PASSED] 4 VFs
[20:20:12] [PASSED] 5 VFs
[20:20:12] [PASSED] 6 VFs
[20:20:12] [PASSED] 7 VFs
[20:20:12] [PASSED] 8 VFs
[20:20:12] [PASSED] 9 VFs
[20:20:12] [PASSED] 10 VFs
[20:20:12] [PASSED] 11 VFs
[20:20:12] [PASSED] 12 VFs
[20:20:12] [PASSED] 13 VFs
[20:20:12] [PASSED] 14 VFs
[20:20:12] [PASSED] 15 VFs
[20:20:12] [PASSED] 16 VFs
[20:20:12] [PASSED] 17 VFs
[20:20:12] [PASSED] 18 VFs
[20:20:12] [PASSED] 19 VFs
[20:20:12] [PASSED] 20 VFs
[20:20:12] [PASSED] 21 VFs
[20:20:12] [PASSED] 22 VFs
[20:20:12] [PASSED] 23 VFs
[20:20:12] [PASSED] 24 VFs
[20:20:12] [PASSED] 25 VFs
[20:20:12] [PASSED] 26 VFs
[20:20:12] [PASSED] 27 VFs
[20:20:12] [PASSED] 28 VFs
[20:20:12] [PASSED] 29 VFs
[20:20:12] [PASSED] 30 VFs
[20:20:12] [PASSED] 31 VFs
[20:20:12] [PASSED] 32 VFs
[20:20:12] [PASSED] 33 VFs
[20:20:12] [PASSED] 34 VFs
[20:20:12] [PASSED] 35 VFs
[20:20:12] [PASSED] 36 VFs
[20:20:12] [PASSED] 37 VFs
[20:20:12] [PASSED] 38 VFs
[20:20:12] [PASSED] 39 VFs
[20:20:12] [PASSED] 40 VFs
[20:20:12] [PASSED] 41 VFs
[20:20:12] [PASSED] 42 VFs
[20:20:12] [PASSED] 43 VFs
[20:20:12] [PASSED] 44 VFs
[20:20:12] [PASSED] 45 VFs
[20:20:12] [PASSED] 46 VFs
[20:20:12] [PASSED] 47 VFs
[20:20:12] [PASSED] 48 VFs
[20:20:12] [PASSED] 49 VFs
[20:20:12] [PASSED] 50 VFs
[20:20:12] [PASSED] 51 VFs
[20:20:12] [PASSED] 52 VFs
[20:20:12] [PASSED] 53 VFs
[20:20:12] [PASSED] 54 VFs
[20:20:12] [PASSED] 55 VFs
[20:20:12] [PASSED] 56 VFs
[20:20:12] [PASSED] 57 VFs
[20:20:12] [PASSED] 58 VFs
[20:20:12] [PASSED] 59 VFs
[20:20:12] [PASSED] 60 VFs
[20:20:12] [PASSED] 61 VFs
[20:20:12] [PASSED] 62 VFs
[20:20:12] [PASSED] 63 VFs
[20:20:12] ==================== [PASSED] fair_vram ====================
[20:20:12] ================== [PASSED] pf_gt_config ===================
[20:20:12] ===================== lmtt (1 subtest) =====================
[20:20:12] ======================== test_ops  =========================
[20:20:12] [PASSED] 2-level
[20:20:12] [PASSED] multi-level
[20:20:12] ==================== [PASSED] test_ops =====================
[20:20:12] ====================== [PASSED] lmtt =======================
[20:20:12] ================= pf_service (11 subtests) =================
[20:20:12] [PASSED] pf_negotiate_any
[20:20:12] [PASSED] pf_negotiate_base_match
[20:20:12] [PASSED] pf_negotiate_base_newer
[20:20:12] [PASSED] pf_negotiate_base_next
[20:20:12] [SKIPPED] pf_negotiate_base_older
[20:20:12] [PASSED] pf_negotiate_base_prev
[20:20:12] [PASSED] pf_negotiate_latest_match
[20:20:12] [PASSED] pf_negotiate_latest_newer
[20:20:12] [PASSED] pf_negotiate_latest_next
[20:20:12] [SKIPPED] pf_negotiate_latest_older
[20:20:12] [SKIPPED] pf_negotiate_latest_prev
[20:20:12] =================== [PASSED] pf_service ====================
[20:20:12] ================= xe_guc_g2g (2 subtests) ==================
[20:20:12] ============== xe_live_guc_g2g_kunit_default  ==============
[20:20:12] ========= [SKIPPED] xe_live_guc_g2g_kunit_default ==========
[20:20:12] ============== xe_live_guc_g2g_kunit_allmem  ===============
[20:20:12] ========== [SKIPPED] xe_live_guc_g2g_kunit_allmem ==========
[20:20:12] =================== [SKIPPED] xe_guc_g2g ===================
[20:20:12] =================== xe_mocs (2 subtests) ===================
[20:20:12] ================ xe_live_mocs_kernel_kunit  ================
[20:20:12] =========== [SKIPPED] xe_live_mocs_kernel_kunit ============
[20:20:12] ================ xe_live_mocs_reset_kunit  =================
[20:20:12] ============ [SKIPPED] xe_live_mocs_reset_kunit ============
[20:20:12] ==================== [SKIPPED] xe_mocs =====================
[20:20:12] ================= xe_migrate (2 subtests) ==================
[20:20:12] ================= xe_migrate_sanity_kunit  =================
[20:20:12] ============ [SKIPPED] xe_migrate_sanity_kunit =============
[20:20:12] ================== xe_validate_ccs_kunit  ==================
[20:20:12] ============= [SKIPPED] xe_validate_ccs_kunit ==============
[20:20:12] =================== [SKIPPED] xe_migrate ===================
[20:20:12] ================== xe_dma_buf (1 subtest) ==================
[20:20:12] ==================== xe_dma_buf_kunit  =====================
[20:20:12] ================ [SKIPPED] xe_dma_buf_kunit ================
[20:20:12] =================== [SKIPPED] xe_dma_buf ===================
[20:20:12] ================= xe_bo_shrink (1 subtest) =================
[20:20:12] =================== xe_bo_shrink_kunit  ====================
[20:20:12] =============== [SKIPPED] xe_bo_shrink_kunit ===============
[20:20:12] ================== [SKIPPED] xe_bo_shrink ==================
[20:20:12] ==================== xe_bo (2 subtests) ====================
[20:20:12] ================== xe_ccs_migrate_kunit  ===================
[20:20:12] ============== [SKIPPED] xe_ccs_migrate_kunit ==============
[20:20:12] ==================== xe_bo_evict_kunit  ====================
[20:20:12] =============== [SKIPPED] xe_bo_evict_kunit ================
[20:20:12] ===================== [SKIPPED] xe_bo ======================
[20:20:12] ==================== args (13 subtests) ====================
[20:20:12] [PASSED] count_args_test
[20:20:12] [PASSED] call_args_example
[20:20:12] [PASSED] call_args_test
[20:20:12] [PASSED] drop_first_arg_example
[20:20:12] [PASSED] drop_first_arg_test
[20:20:12] [PASSED] first_arg_example
[20:20:12] [PASSED] first_arg_test
[20:20:12] [PASSED] last_arg_example
[20:20:12] [PASSED] last_arg_test
[20:20:12] [PASSED] pick_arg_example
[20:20:12] [PASSED] if_args_example
[20:20:12] [PASSED] if_args_test
[20:20:12] [PASSED] sep_comma_example
[20:20:12] ====================== [PASSED] args =======================
[20:20:12] =================== xe_pci (3 subtests) ====================
[20:20:12] ==================== check_graphics_ip  ====================
[20:20:12] [PASSED] 12.00 Xe_LP
[20:20:12] [PASSED] 12.10 Xe_LP+
[20:20:12] [PASSED] 12.55 Xe_HPG
[20:20:12] [PASSED] 12.60 Xe_HPC
[20:20:12] [PASSED] 12.70 Xe_LPG
[20:20:12] [PASSED] 12.71 Xe_LPG
[20:20:12] [PASSED] 12.74 Xe_LPG+
[20:20:12] [PASSED] 20.01 Xe2_HPG
[20:20:12] [PASSED] 20.02 Xe2_HPG
[20:20:12] [PASSED] 20.04 Xe2_LPG
[20:20:12] [PASSED] 30.00 Xe3_LPG
[20:20:12] [PASSED] 30.01 Xe3_LPG
[20:20:12] [PASSED] 30.03 Xe3_LPG
[20:20:12] [PASSED] 30.04 Xe3_LPG
[20:20:12] [PASSED] 30.05 Xe3_LPG
[20:20:12] [PASSED] 35.10 Xe3p_LPG
[20:20:12] [PASSED] 35.11 Xe3p_XPC
[20:20:12] ================ [PASSED] check_graphics_ip ================
[20:20:12] ===================== check_media_ip  ======================
[20:20:12] [PASSED] 12.00 Xe_M
[20:20:12] [PASSED] 12.55 Xe_HPM
[20:20:12] [PASSED] 13.00 Xe_LPM+
[20:20:12] [PASSED] 13.01 Xe2_HPM
[20:20:12] [PASSED] 20.00 Xe2_LPM
[20:20:12] [PASSED] 30.00 Xe3_LPM
[20:20:12] [PASSED] 30.02 Xe3_LPM
[20:20:12] [PASSED] 35.00 Xe3p_LPM
[20:20:12] [PASSED] 35.03 Xe3p_HPM
[20:20:12] ================= [PASSED] check_media_ip ==================
[20:20:12] =================== check_platform_desc  ===================
[20:20:12] [PASSED] 0x9A60 (TIGERLAKE)
[20:20:12] [PASSED] 0x9A68 (TIGERLAKE)
[20:20:12] [PASSED] 0x9A70 (TIGERLAKE)
[20:20:12] [PASSED] 0x9A40 (TIGERLAKE)
[20:20:12] [PASSED] 0x9A49 (TIGERLAKE)
[20:20:12] [PASSED] 0x9A59 (TIGERLAKE)
[20:20:12] [PASSED] 0x9A78 (TIGERLAKE)
[20:20:12] [PASSED] 0x9AC0 (TIGERLAKE)
[20:20:12] [PASSED] 0x9AC9 (TIGERLAKE)
[20:20:12] [PASSED] 0x9AD9 (TIGERLAKE)
[20:20:12] [PASSED] 0x9AF8 (TIGERLAKE)
[20:20:12] [PASSED] 0x4C80 (ROCKETLAKE)
[20:20:12] [PASSED] 0x4C8A (ROCKETLAKE)
[20:20:12] [PASSED] 0x4C8B (ROCKETLAKE)
[20:20:12] [PASSED] 0x4C8C (ROCKETLAKE)
[20:20:12] [PASSED] 0x4C90 (ROCKETLAKE)
[20:20:12] [PASSED] 0x4C9A (ROCKETLAKE)
[20:20:12] [PASSED] 0x4680 (ALDERLAKE_S)
[20:20:12] [PASSED] 0x4682 (ALDERLAKE_S)
[20:20:12] [PASSED] 0x4688 (ALDERLAKE_S)
[20:20:12] [PASSED] 0x468A (ALDERLAKE_S)
[20:20:12] [PASSED] 0x468B (ALDERLAKE_S)
[20:20:12] [PASSED] 0x4690 (ALDERLAKE_S)
[20:20:12] [PASSED] 0x4692 (ALDERLAKE_S)
[20:20:12] [PASSED] 0x4693 (ALDERLAKE_S)
[20:20:12] [PASSED] 0x46A0 (ALDERLAKE_P)
[20:20:12] [PASSED] 0x46A1 (ALDERLAKE_P)
[20:20:12] [PASSED] 0x46A2 (ALDERLAKE_P)
[20:20:12] [PASSED] 0x46A3 (ALDERLAKE_P)
[20:20:12] [PASSED] 0x46A6 (ALDERLAKE_P)
[20:20:12] [PASSED] 0x46A8 (ALDERLAKE_P)
[20:20:12] [PASSED] 0x46AA (ALDERLAKE_P)
[20:20:12] [PASSED] 0x462A (ALDERLAKE_P)
[20:20:12] [PASSED] 0x4626 (ALDERLAKE_P)
[20:20:12] [PASSED] 0x4628 (ALDERLAKE_P)
[20:20:12] [PASSED] 0x46B0 (ALDERLAKE_P)
[20:20:12] [PASSED] 0x46B1 (ALDERLAKE_P)
[20:20:12] [PASSED] 0x46B2 (ALDERLAKE_P)
[20:20:12] [PASSED] 0x46B3 (ALDERLAKE_P)
[20:20:12] [PASSED] 0x46C0 (ALDERLAKE_P)
[20:20:12] [PASSED] 0x46C1 (ALDERLAKE_P)
[20:20:12] [PASSED] 0x46C2 (ALDERLAKE_P)
[20:20:12] [PASSED] 0x46C3 (ALDERLAKE_P)
[20:20:12] [PASSED] 0x46D0 (ALDERLAKE_N)
[20:20:12] [PASSED] 0x46D1 (ALDERLAKE_N)
[20:20:12] [PASSED] 0x46D2 (ALDERLAKE_N)
[20:20:12] [PASSED] 0x46D3 (ALDERLAKE_N)
[20:20:12] [PASSED] 0x46D4 (ALDERLAKE_N)
[20:20:12] [PASSED] 0xA721 (ALDERLAKE_P)
[20:20:12] [PASSED] 0xA7A1 (ALDERLAKE_P)
[20:20:12] [PASSED] 0xA7A9 (ALDERLAKE_P)
[20:20:12] [PASSED] 0xA7AC (ALDERLAKE_P)
[20:20:12] [PASSED] 0xA7AD (ALDERLAKE_P)
[20:20:12] [PASSED] 0xA720 (ALDERLAKE_P)
[20:20:12] [PASSED] 0xA7A0 (ALDERLAKE_P)
[20:20:12] [PASSED] 0xA7A8 (ALDERLAKE_P)
[20:20:12] [PASSED] 0xA7AA (ALDERLAKE_P)
[20:20:12] [PASSED] 0xA7AB (ALDERLAKE_P)
[20:20:12] [PASSED] 0xA780 (ALDERLAKE_S)
[20:20:12] [PASSED] 0xA781 (ALDERLAKE_S)
[20:20:12] [PASSED] 0xA782 (ALDERLAKE_S)
[20:20:12] [PASSED] 0xA783 (ALDERLAKE_S)
[20:20:12] [PASSED] 0xA788 (ALDERLAKE_S)
[20:20:12] [PASSED] 0xA789 (ALDERLAKE_S)
[20:20:12] [PASSED] 0xA78A (ALDERLAKE_S)
[20:20:12] [PASSED] 0xA78B (ALDERLAKE_S)
[20:20:12] [PASSED] 0x4905 (DG1)
[20:20:12] [PASSED] 0x4906 (DG1)
[20:20:12] [PASSED] 0x4907 (DG1)
[20:20:12] [PASSED] 0x4908 (DG1)
[20:20:12] [PASSED] 0x4909 (DG1)
[20:20:12] [PASSED] 0x56C0 (DG2)
[20:20:12] [PASSED] 0x56C2 (DG2)
[20:20:12] [PASSED] 0x56C1 (DG2)
[20:20:12] [PASSED] 0x7D51 (METEORLAKE)
[20:20:12] [PASSED] 0x7DD1 (METEORLAKE)
[20:20:12] [PASSED] 0x7D41 (METEORLAKE)
[20:20:12] [PASSED] 0x7D67 (METEORLAKE)
[20:20:12] [PASSED] 0xB640 (METEORLAKE)
[20:20:12] [PASSED] 0x56A0 (DG2)
[20:20:12] [PASSED] 0x56A1 (DG2)
[20:20:12] [PASSED] 0x56A2 (DG2)
[20:20:12] [PASSED] 0x56BE (DG2)
[20:20:12] [PASSED] 0x56BF (DG2)
[20:20:12] [PASSED] 0x5690 (DG2)
[20:20:12] [PASSED] 0x5691 (DG2)
[20:20:12] [PASSED] 0x5692 (DG2)
[20:20:12] [PASSED] 0x56A5 (DG2)
[20:20:12] [PASSED] 0x56A6 (DG2)
[20:20:12] [PASSED] 0x56B0 (DG2)
[20:20:12] [PASSED] 0x56B1 (DG2)
[20:20:12] [PASSED] 0x56BA (DG2)
[20:20:12] [PASSED] 0x56BB (DG2)
[20:20:12] [PASSED] 0x56BC (DG2)
[20:20:12] [PASSED] 0x56BD (DG2)
[20:20:12] [PASSED] 0x5693 (DG2)
[20:20:12] [PASSED] 0x5694 (DG2)
[20:20:12] [PASSED] 0x5695 (DG2)
[20:20:12] [PASSED] 0x56A3 (DG2)
[20:20:12] [PASSED] 0x56A4 (DG2)
[20:20:12] [PASSED] 0x56B2 (DG2)
[20:20:12] [PASSED] 0x56B3 (DG2)
[20:20:12] [PASSED] 0x5696 (DG2)
[20:20:12] [PASSED] 0x5697 (DG2)
[20:20:12] [PASSED] 0xB69 (PVC)
[20:20:12] [PASSED] 0xB6E (PVC)
[20:20:12] [PASSED] 0xBD4 (PVC)
[20:20:12] [PASSED] 0xBD5 (PVC)
[20:20:12] [PASSED] 0xBD6 (PVC)
[20:20:12] [PASSED] 0xBD7 (PVC)
[20:20:12] [PASSED] 0xBD8 (PVC)
[20:20:12] [PASSED] 0xBD9 (PVC)
[20:20:12] [PASSED] 0xBDA (PVC)
[20:20:12] [PASSED] 0xBDB (PVC)
[20:20:12] [PASSED] 0xBE0 (PVC)
[20:20:12] [PASSED] 0xBE1 (PVC)
[20:20:12] [PASSED] 0xBE5 (PVC)
[20:20:12] [PASSED] 0x7D40 (METEORLAKE)
[20:20:12] [PASSED] 0x7D45 (METEORLAKE)
[20:20:12] [PASSED] 0x7D55 (METEORLAKE)
[20:20:12] [PASSED] 0x7D60 (METEORLAKE)
[20:20:12] [PASSED] 0x7DD5 (METEORLAKE)
[20:20:12] [PASSED] 0x6420 (LUNARLAKE)
[20:20:12] [PASSED] 0x64A0 (LUNARLAKE)
[20:20:12] [PASSED] 0x64B0 (LUNARLAKE)
[20:20:12] [PASSED] 0xE202 (BATTLEMAGE)
[20:20:12] [PASSED] 0xE209 (BATTLEMAGE)
[20:20:12] [PASSED] 0xE20B (BATTLEMAGE)
[20:20:12] [PASSED] 0xE20C (BATTLEMAGE)
[20:20:12] [PASSED] 0xE20D (BATTLEMAGE)
[20:20:12] [PASSED] 0xE210 (BATTLEMAGE)
[20:20:12] [PASSED] 0xE211 (BATTLEMAGE)
[20:20:12] [PASSED] 0xE212 (BATTLEMAGE)
[20:20:12] [PASSED] 0xE216 (BATTLEMAGE)
[20:20:12] [PASSED] 0xE220 (BATTLEMAGE)
[20:20:12] [PASSED] 0xE221 (BATTLEMAGE)
[20:20:12] [PASSED] 0xE222 (BATTLEMAGE)
[20:20:12] [PASSED] 0xE223 (BATTLEMAGE)
[20:20:12] [PASSED] 0xB080 (PANTHERLAKE)
[20:20:12] [PASSED] 0xB081 (PANTHERLAKE)
[20:20:12] [PASSED] 0xB082 (PANTHERLAKE)
[20:20:12] [PASSED] 0xB083 (PANTHERLAKE)
[20:20:12] [PASSED] 0xB084 (PANTHERLAKE)
[20:20:12] [PASSED] 0xB085 (PANTHERLAKE)
[20:20:12] [PASSED] 0xB086 (PANTHERLAKE)
[20:20:12] [PASSED] 0xB087 (PANTHERLAKE)
[20:20:12] [PASSED] 0xB08F (PANTHERLAKE)
[20:20:12] [PASSED] 0xB090 (PANTHERLAKE)
[20:20:12] [PASSED] 0xB0A0 (PANTHERLAKE)
[20:20:12] [PASSED] 0xB0B0 (PANTHERLAKE)
[20:20:12] [PASSED] 0xFD80 (PANTHERLAKE)
[20:20:12] [PASSED] 0xFD81 (PANTHERLAKE)
[20:20:12] [PASSED] 0xD740 (NOVALAKE_S)
[20:20:12] [PASSED] 0xD741 (NOVALAKE_S)
[20:20:12] [PASSED] 0xD742 (NOVALAKE_S)
[20:20:12] [PASSED] 0xD743 (NOVALAKE_S)
[20:20:12] [PASSED] 0xD744 (NOVALAKE_S)
[20:20:12] [PASSED] 0xD745 (NOVALAKE_S)
[20:20:12] [PASSED] 0x674C (CRESCENTISLAND)
[20:20:12] [PASSED] 0xD750 (NOVALAKE_P)
[20:20:12] [PASSED] 0xD751 (NOVALAKE_P)
[20:20:12] [PASSED] 0xD752 (NOVALAKE_P)
[20:20:12] [PASSED] 0xD753 (NOVALAKE_P)
[20:20:12] [PASSED] 0xD754 (NOVALAKE_P)
[20:20:12] [PASSED] 0xD755 (NOVALAKE_P)
[20:20:12] [PASSED] 0xD756 (NOVALAKE_P)
[20:20:12] [PASSED] 0xD757 (NOVALAKE_P)
[20:20:12] [PASSED] 0xD75F (NOVALAKE_P)
[20:20:12] =============== [PASSED] check_platform_desc ===============
[20:20:12] ===================== [PASSED] xe_pci ======================
[20:20:12] =================== xe_rtp (2 subtests) ====================
[20:20:12] =============== xe_rtp_process_to_sr_tests  ================
[20:20:12] [PASSED] coalesce-same-reg
[20:20:12] [PASSED] no-match-no-add
[20:20:12] [PASSED] match-or
[20:20:12] [PASSED] match-or-xfail
[20:20:12] [PASSED] no-match-no-add-multiple-rules
[20:20:12] [PASSED] two-regs-two-entries
[20:20:12] [PASSED] clr-one-set-other
[20:20:12] [PASSED] set-field
[20:20:12] [PASSED] conflict-duplicate
[20:20:12] [PASSED] conflict-not-disjoint
[20:20:12] [PASSED] conflict-reg-type
[20:20:12] =========== [PASSED] xe_rtp_process_to_sr_tests ============
[20:20:12] ================== xe_rtp_process_tests  ===================
[20:20:12] [PASSED] active1
[20:20:12] [PASSED] active2
[20:20:12] [PASSED] active-inactive
[20:20:12] [PASSED] inactive-active
[20:20:12] [PASSED] inactive-1st_or_active-inactive
[20:20:12] [PASSED] inactive-2nd_or_active-inactive
[20:20:12] [PASSED] inactive-last_or_active-inactive
[20:20:12] [PASSED] inactive-no_or_active-inactive
[20:20:12] ============== [PASSED] xe_rtp_process_tests ===============
[20:20:12] ===================== [PASSED] xe_rtp ======================
[20:20:12] ==================== xe_wa (1 subtest) =====================
[20:20:12] ======================== xe_wa_gt  =========================
[20:20:12] [PASSED] TIGERLAKE B0
[20:20:12] [PASSED] DG1 A0
[20:20:12] [PASSED] DG1 B0
[20:20:12] [PASSED] ALDERLAKE_S A0
[20:20:12] [PASSED] ALDERLAKE_S B0
[20:20:12] [PASSED] ALDERLAKE_S C0
[20:20:12] [PASSED] ALDERLAKE_S D0
[20:20:12] [PASSED] ALDERLAKE_P A0
[20:20:12] [PASSED] ALDERLAKE_P B0
[20:20:12] [PASSED] ALDERLAKE_P C0
[20:20:12] [PASSED] ALDERLAKE_S RPLS D0
[20:20:12] [PASSED] ALDERLAKE_P RPLU E0
[20:20:12] [PASSED] DG2 G10 C0
[20:20:12] [PASSED] DG2 G11 B1
[20:20:12] [PASSED] DG2 G12 A1
[20:20:12] [PASSED] METEORLAKE 12.70(Xe_LPG) A0 13.00(Xe_LPM+) A0
[20:20:12] [PASSED] METEORLAKE 12.71(Xe_LPG) A0 13.00(Xe_LPM+) A0
[20:20:12] [PASSED] METEORLAKE 12.74(Xe_LPG+) A0 13.00(Xe_LPM+) A0
[20:20:12] [PASSED] LUNARLAKE 20.04(Xe2_LPG) A0 20.00(Xe2_LPM) A0
[20:20:12] [PASSED] LUNARLAKE 20.04(Xe2_LPG) B0 20.00(Xe2_LPM) A0
[20:20:12] [PASSED] BATTLEMAGE 20.01(Xe2_HPG) A0 13.01(Xe2_HPM) A1
[20:20:12] [PASSED] PANTHERLAKE 30.00(Xe3_LPG) A0 30.00(Xe3_LPM) A0
[20:20:12] ==================== [PASSED] xe_wa_gt =====================
[20:20:12] ====================== [PASSED] xe_wa ======================
[20:20:12] ============================================================
[20:20:12] Testing complete. Ran 597 tests: passed: 579, skipped: 18
[20:20:12] Elapsed time: 36.083s total, 4.365s configuring, 31.052s building, 0.610s running

+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/tests/.kunitconfig
[20:20:12] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[20:20:14] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[20:20:38] Starting KUnit Kernel (1/1)...
[20:20:38] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[20:20:38] ============ drm_test_pick_cmdline (2 subtests) ============
[20:20:38] [PASSED] drm_test_pick_cmdline_res_1920_1080_60
[20:20:38] =============== drm_test_pick_cmdline_named  ===============
[20:20:38] [PASSED] NTSC
[20:20:38] [PASSED] NTSC-J
[20:20:38] [PASSED] PAL
[20:20:38] [PASSED] PAL-M
[20:20:38] =========== [PASSED] drm_test_pick_cmdline_named ===========
[20:20:38] ============== [PASSED] drm_test_pick_cmdline ==============
[20:20:38] == drm_test_atomic_get_connector_for_encoder (1 subtest) ===
[20:20:38] [PASSED] drm_test_drm_atomic_get_connector_for_encoder
[20:20:38] ==== [PASSED] drm_test_atomic_get_connector_for_encoder ====
[20:20:38] =========== drm_validate_clone_mode (2 subtests) ===========
[20:20:38] ============== drm_test_check_in_clone_mode  ===============
[20:20:38] [PASSED] in_clone_mode
[20:20:38] [PASSED] not_in_clone_mode
[20:20:38] ========== [PASSED] drm_test_check_in_clone_mode ===========
[20:20:38] =============== drm_test_check_valid_clones  ===============
[20:20:38] [PASSED] not_in_clone_mode
[20:20:38] [PASSED] valid_clone
[20:20:38] [PASSED] invalid_clone
[20:20:38] =========== [PASSED] drm_test_check_valid_clones ===========
[20:20:38] ============= [PASSED] drm_validate_clone_mode =============
[20:20:38] ============= drm_validate_modeset (1 subtest) =============
[20:20:38] [PASSED] drm_test_check_connector_changed_modeset
[20:20:38] ============== [PASSED] drm_validate_modeset ===============
[20:20:38] ====== drm_test_bridge_get_current_state (2 subtests) ======
[20:20:38] [PASSED] drm_test_drm_bridge_get_current_state_atomic
[20:20:38] [PASSED] drm_test_drm_bridge_get_current_state_legacy
[20:20:38] ======== [PASSED] drm_test_bridge_get_current_state ========
[20:20:38] ====== drm_test_bridge_helper_reset_crtc (3 subtests) ======
[20:20:38] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic
[20:20:38] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic_disabled
[20:20:38] [PASSED] drm_test_drm_bridge_helper_reset_crtc_legacy
[20:20:38] ======== [PASSED] drm_test_bridge_helper_reset_crtc ========
[20:20:38] ============== drm_bridge_alloc (2 subtests) ===============
[20:20:38] [PASSED] drm_test_drm_bridge_alloc_basic
[20:20:38] [PASSED] drm_test_drm_bridge_alloc_get_put
[20:20:38] ================ [PASSED] drm_bridge_alloc =================
[20:20:38] ============= drm_cmdline_parser (40 subtests) =============
[20:20:38] [PASSED] drm_test_cmdline_force_d_only
[20:20:38] [PASSED] drm_test_cmdline_force_D_only_dvi
[20:20:38] [PASSED] drm_test_cmdline_force_D_only_hdmi
[20:20:38] [PASSED] drm_test_cmdline_force_D_only_not_digital
[20:20:38] [PASSED] drm_test_cmdline_force_e_only
[20:20:38] [PASSED] drm_test_cmdline_res
[20:20:38] [PASSED] drm_test_cmdline_res_vesa
[20:20:38] [PASSED] drm_test_cmdline_res_vesa_rblank
[20:20:38] [PASSED] drm_test_cmdline_res_rblank
[20:20:38] [PASSED] drm_test_cmdline_res_bpp
[20:20:38] [PASSED] drm_test_cmdline_res_refresh
[20:20:38] [PASSED] drm_test_cmdline_res_bpp_refresh
[20:20:38] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced
[20:20:38] [PASSED] drm_test_cmdline_res_bpp_refresh_margins
[20:20:38] [PASSED] drm_test_cmdline_res_bpp_refresh_force_off
[20:20:38] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on
[20:20:38] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_analog
[20:20:38] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_digital
[20:20:38] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced_margins_force_on
[20:20:38] [PASSED] drm_test_cmdline_res_margins_force_on
[20:20:38] [PASSED] drm_test_cmdline_res_vesa_margins
[20:20:38] [PASSED] drm_test_cmdline_name
[20:20:38] [PASSED] drm_test_cmdline_name_bpp
[20:20:38] [PASSED] drm_test_cmdline_name_option
[20:20:38] [PASSED] drm_test_cmdline_name_bpp_option
[20:20:38] [PASSED] drm_test_cmdline_rotate_0
[20:20:38] [PASSED] drm_test_cmdline_rotate_90
[20:20:38] [PASSED] drm_test_cmdline_rotate_180
[20:20:38] [PASSED] drm_test_cmdline_rotate_270
[20:20:38] [PASSED] drm_test_cmdline_hmirror
[20:20:38] [PASSED] drm_test_cmdline_vmirror
[20:20:38] [PASSED] drm_test_cmdline_margin_options
[20:20:38] [PASSED] drm_test_cmdline_multiple_options
[20:20:38] [PASSED] drm_test_cmdline_bpp_extra_and_option
[20:20:38] [PASSED] drm_test_cmdline_extra_and_option
[20:20:38] [PASSED] drm_test_cmdline_freestanding_options
[20:20:38] [PASSED] drm_test_cmdline_freestanding_force_e_and_options
[20:20:38] [PASSED] drm_test_cmdline_panel_orientation
[20:20:38] ================ drm_test_cmdline_invalid  =================
[20:20:38] [PASSED] margin_only
[20:20:38] [PASSED] interlace_only
[20:20:38] [PASSED] res_missing_x
[20:20:38] [PASSED] res_missing_y
[20:20:38] [PASSED] res_bad_y
[20:20:38] [PASSED] res_missing_y_bpp
[20:20:38] [PASSED] res_bad_bpp
[20:20:38] [PASSED] res_bad_refresh
[20:20:38] [PASSED] res_bpp_refresh_force_on_off
[20:20:38] [PASSED] res_invalid_mode
[20:20:38] [PASSED] res_bpp_wrong_place_mode
[20:20:38] [PASSED] name_bpp_refresh
[20:20:38] [PASSED] name_refresh
[20:20:38] [PASSED] name_refresh_wrong_mode
[20:20:38] [PASSED] name_refresh_invalid_mode
[20:20:38] [PASSED] rotate_multiple
[20:20:38] [PASSED] rotate_invalid_val
[20:20:38] [PASSED] rotate_truncated
[20:20:38] [PASSED] invalid_option
[20:20:38] [PASSED] invalid_tv_option
[20:20:38] [PASSED] truncated_tv_option
[20:20:38] ============ [PASSED] drm_test_cmdline_invalid =============
[20:20:38] =============== drm_test_cmdline_tv_options  ===============
[20:20:38] [PASSED] NTSC
[20:20:38] [PASSED] NTSC_443
[20:20:38] [PASSED] NTSC_J
[20:20:38] [PASSED] PAL
[20:20:38] [PASSED] PAL_M
[20:20:38] [PASSED] PAL_N
[20:20:38] [PASSED] SECAM
[20:20:38] [PASSED] MONO_525
[20:20:38] [PASSED] MONO_625
[20:20:38] =========== [PASSED] drm_test_cmdline_tv_options ===========
[20:20:38] =============== [PASSED] drm_cmdline_parser ================
[20:20:38] ========== drmm_connector_hdmi_init (20 subtests) ==========
[20:20:38] [PASSED] drm_test_connector_hdmi_init_valid
[20:20:38] [PASSED] drm_test_connector_hdmi_init_bpc_8
[20:20:38] [PASSED] drm_test_connector_hdmi_init_bpc_10
[20:20:38] [PASSED] drm_test_connector_hdmi_init_bpc_12
[20:20:38] [PASSED] drm_test_connector_hdmi_init_bpc_invalid
[20:20:38] [PASSED] drm_test_connector_hdmi_init_bpc_null
[20:20:38] [PASSED] drm_test_connector_hdmi_init_formats_empty
[20:20:38] [PASSED] drm_test_connector_hdmi_init_formats_no_rgb
[20:20:38] === drm_test_connector_hdmi_init_formats_yuv420_allowed  ===
[20:20:38] [PASSED] supported_formats=0x9 yuv420_allowed=1
[20:20:38] [PASSED] supported_formats=0x9 yuv420_allowed=0
[20:20:38] [PASSED] supported_formats=0x5 yuv420_allowed=1
[20:20:38] [PASSED] supported_formats=0x5 yuv420_allowed=0
[20:20:38] === [PASSED] drm_test_connector_hdmi_init_formats_yuv420_allowed ===
[20:20:38] [PASSED] drm_test_connector_hdmi_init_null_ddc
[20:20:38] [PASSED] drm_test_connector_hdmi_init_null_product
[20:20:38] [PASSED] drm_test_connector_hdmi_init_null_vendor
[20:20:38] [PASSED] drm_test_connector_hdmi_init_product_length_exact
[20:20:38] [PASSED] drm_test_connector_hdmi_init_product_length_too_long
[20:20:38] [PASSED] drm_test_connector_hdmi_init_product_valid
[20:20:38] [PASSED] drm_test_connector_hdmi_init_vendor_length_exact
[20:20:38] [PASSED] drm_test_connector_hdmi_init_vendor_length_too_long
[20:20:38] [PASSED] drm_test_connector_hdmi_init_vendor_valid
[20:20:38] ========= drm_test_connector_hdmi_init_type_valid  =========
[20:20:38] [PASSED] HDMI-A
[20:20:38] [PASSED] HDMI-B
[20:20:38] ===== [PASSED] drm_test_connector_hdmi_init_type_valid =====
[20:20:38] ======== drm_test_connector_hdmi_init_type_invalid  ========
[20:20:38] [PASSED] Unknown
[20:20:38] [PASSED] VGA
[20:20:38] [PASSED] DVI-I
[20:20:38] [PASSED] DVI-D
[20:20:38] [PASSED] DVI-A
[20:20:38] [PASSED] Composite
[20:20:38] [PASSED] SVIDEO
[20:20:38] [PASSED] LVDS
[20:20:38] [PASSED] Component
[20:20:38] [PASSED] DIN
[20:20:38] [PASSED] DP
[20:20:38] [PASSED] TV
[20:20:38] [PASSED] eDP
[20:20:38] [PASSED] Virtual
[20:20:38] [PASSED] DSI
[20:20:38] [PASSED] DPI
[20:20:38] [PASSED] Writeback
[20:20:38] [PASSED] SPI
[20:20:38] [PASSED] USB
[20:20:38] ==== [PASSED] drm_test_connector_hdmi_init_type_invalid ====
[20:20:38] ============ [PASSED] drmm_connector_hdmi_init =============
[20:20:38] ============= drmm_connector_init (3 subtests) =============
[20:20:38] [PASSED] drm_test_drmm_connector_init
[20:20:38] [PASSED] drm_test_drmm_connector_init_null_ddc
[20:20:38] ========= drm_test_drmm_connector_init_type_valid  =========
[20:20:38] [PASSED] Unknown
[20:20:38] [PASSED] VGA
[20:20:38] [PASSED] DVI-I
[20:20:38] [PASSED] DVI-D
[20:20:38] [PASSED] DVI-A
[20:20:38] [PASSED] Composite
[20:20:38] [PASSED] SVIDEO
[20:20:38] [PASSED] LVDS
[20:20:38] [PASSED] Component
[20:20:38] [PASSED] DIN
[20:20:38] [PASSED] DP
[20:20:38] [PASSED] HDMI-A
[20:20:38] [PASSED] HDMI-B
[20:20:38] [PASSED] TV
[20:20:38] [PASSED] eDP
[20:20:38] [PASSED] Virtual
[20:20:38] [PASSED] DSI
[20:20:38] [PASSED] DPI
[20:20:38] [PASSED] Writeback
[20:20:38] [PASSED] SPI
[20:20:38] [PASSED] USB
[20:20:38] ===== [PASSED] drm_test_drmm_connector_init_type_valid =====
[20:20:38] =============== [PASSED] drmm_connector_init ===============
[20:20:38] ========= drm_connector_dynamic_init (6 subtests) ==========
[20:20:38] [PASSED] drm_test_drm_connector_dynamic_init
[20:20:38] [PASSED] drm_test_drm_connector_dynamic_init_null_ddc
[20:20:38] [PASSED] drm_test_drm_connector_dynamic_init_not_added
[20:20:38] [PASSED] drm_test_drm_connector_dynamic_init_properties
[20:20:38] ===== drm_test_drm_connector_dynamic_init_type_valid  ======
[20:20:38] [PASSED] Unknown
[20:20:38] [PASSED] VGA
[20:20:38] [PASSED] DVI-I
[20:20:38] [PASSED] DVI-D
[20:20:38] [PASSED] DVI-A
[20:20:38] [PASSED] Composite
[20:20:38] [PASSED] SVIDEO
[20:20:38] [PASSED] LVDS
[20:20:38] [PASSED] Component
[20:20:38] [PASSED] DIN
[20:20:38] [PASSED] DP
[20:20:38] [PASSED] HDMI-A
[20:20:38] [PASSED] HDMI-B
[20:20:38] [PASSED] TV
[20:20:38] [PASSED] eDP
[20:20:38] [PASSED] Virtual
[20:20:38] [PASSED] DSI
[20:20:38] [PASSED] DPI
[20:20:38] [PASSED] Writeback
[20:20:38] [PASSED] SPI
[20:20:38] [PASSED] USB
[20:20:38] = [PASSED] drm_test_drm_connector_dynamic_init_type_valid ==
[20:20:38] ======== drm_test_drm_connector_dynamic_init_name  =========
[20:20:38] [PASSED] Unknown
[20:20:38] [PASSED] VGA
[20:20:38] [PASSED] DVI-I
[20:20:38] [PASSED] DVI-D
[20:20:38] [PASSED] DVI-A
[20:20:38] [PASSED] Composite
[20:20:38] [PASSED] SVIDEO
[20:20:38] [PASSED] LVDS
[20:20:38] [PASSED] Component
[20:20:38] [PASSED] DIN
[20:20:38] [PASSED] DP
[20:20:38] [PASSED] HDMI-A
[20:20:38] [PASSED] HDMI-B
[20:20:38] [PASSED] TV
[20:20:38] [PASSED] eDP
[20:20:38] [PASSED] Virtual
[20:20:38] [PASSED] DSI
[20:20:38] [PASSED] DPI
[20:20:38] [PASSED] Writeback
[20:20:38] [PASSED] SPI
[20:20:38] [PASSED] USB
[20:20:38] ==== [PASSED] drm_test_drm_connector_dynamic_init_name =====
[20:20:38] =========== [PASSED] drm_connector_dynamic_init ============
[20:20:38] ==== drm_connector_dynamic_register_early (4 subtests) =====
[20:20:38] [PASSED] drm_test_drm_connector_dynamic_register_early_on_list
[20:20:38] [PASSED] drm_test_drm_connector_dynamic_register_early_defer
[20:20:38] [PASSED] drm_test_drm_connector_dynamic_register_early_no_init
[20:20:38] [PASSED] drm_test_drm_connector_dynamic_register_early_no_mode_object
[20:20:38] ====== [PASSED] drm_connector_dynamic_register_early =======
[20:20:38] ======= drm_connector_dynamic_register (7 subtests) ========
[20:20:38] [PASSED] drm_test_drm_connector_dynamic_register_on_list
[20:20:38] [PASSED] drm_test_drm_connector_dynamic_register_no_defer
[20:20:38] [PASSED] drm_test_drm_connector_dynamic_register_no_init
[20:20:38] [PASSED] drm_test_drm_connector_dynamic_register_mode_object
[20:20:38] [PASSED] drm_test_drm_connector_dynamic_register_sysfs
[20:20:38] [PASSED] drm_test_drm_connector_dynamic_register_sysfs_name
[20:20:38] [PASSED] drm_test_drm_connector_dynamic_register_debugfs
[20:20:38] ========= [PASSED] drm_connector_dynamic_register ==========
[20:20:38] = drm_connector_attach_broadcast_rgb_property (2 subtests) =
[20:20:38] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property
[20:20:38] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property_hdmi_connector
[20:20:38] === [PASSED] drm_connector_attach_broadcast_rgb_property ===
[20:20:38] ========== drm_get_tv_mode_from_name (2 subtests) ==========
[20:20:38] ========== drm_test_get_tv_mode_from_name_valid  ===========
[20:20:38] [PASSED] NTSC
[20:20:38] [PASSED] NTSC-443
[20:20:38] [PASSED] NTSC-J
[20:20:38] [PASSED] PAL
[20:20:38] [PASSED] PAL-M
[20:20:38] [PASSED] PAL-N
[20:20:38] [PASSED] SECAM
[20:20:38] [PASSED] Mono
[20:20:38] ====== [PASSED] drm_test_get_tv_mode_from_name_valid =======
[20:20:38] [PASSED] drm_test_get_tv_mode_from_name_truncated
[20:20:38] ============ [PASSED] drm_get_tv_mode_from_name ============
[20:20:38] = drm_test_connector_hdmi_compute_mode_clock (12 subtests) =
[20:20:38] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb
[20:20:38] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc
[20:20:38] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc_vic_1
[20:20:38] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc
[20:20:38] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc_vic_1
[20:20:38] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_double
[20:20:38] = drm_test_connector_hdmi_compute_mode_clock_yuv420_valid  =
[20:20:38] [PASSED] VIC 96
[20:20:38] [PASSED] VIC 97
[20:20:38] [PASSED] VIC 101
[20:20:38] [PASSED] VIC 102
[20:20:38] [PASSED] VIC 106
[20:20:38] [PASSED] VIC 107
[20:20:38] === [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_valid ===
[20:20:38] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_10_bpc
[20:20:38] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_12_bpc
[20:20:38] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_8_bpc
[20:20:38] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_10_bpc
[20:20:38] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_12_bpc
[20:20:38] === [PASSED] drm_test_connector_hdmi_compute_mode_clock ====
[20:20:38] == drm_hdmi_connector_get_broadcast_rgb_name (2 subtests) ==
[20:20:38] === drm_test_drm_hdmi_connector_get_broadcast_rgb_name  ====
[20:20:38] [PASSED] Automatic
[20:20:38] [PASSED] Full
[20:20:38] [PASSED] Limited 16:235
[20:20:38] === [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name ===
[20:20:38] [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name_invalid
[20:20:38] ==== [PASSED] drm_hdmi_connector_get_broadcast_rgb_name ====
[20:20:38] == drm_hdmi_connector_get_output_format_name (2 subtests) ==
[20:20:38] === drm_test_drm_hdmi_connector_get_output_format_name  ====
[20:20:38] [PASSED] RGB
[20:20:38] [PASSED] YUV 4:2:0
[20:20:38] [PASSED] YUV 4:2:2
[20:20:38] [PASSED] YUV 4:4:4
[20:20:38] === [PASSED] drm_test_drm_hdmi_connector_get_output_format_name ===
[20:20:38] [PASSED] drm_test_drm_hdmi_connector_get_output_format_name_invalid
[20:20:38] ==== [PASSED] drm_hdmi_connector_get_output_format_name ====
[20:20:38] ============= drm_damage_helper (21 subtests) ==============
[20:20:38] [PASSED] drm_test_damage_iter_no_damage
[20:20:38] [PASSED] drm_test_damage_iter_no_damage_fractional_src
[20:20:38] [PASSED] drm_test_damage_iter_no_damage_src_moved
[20:20:38] [PASSED] drm_test_damage_iter_no_damage_fractional_src_moved
[20:20:38] [PASSED] drm_test_damage_iter_no_damage_not_visible
[20:20:38] [PASSED] drm_test_damage_iter_no_damage_no_crtc
[20:20:38] [PASSED] drm_test_damage_iter_no_damage_no_fb
[20:20:38] [PASSED] drm_test_damage_iter_simple_damage
[20:20:38] [PASSED] drm_test_damage_iter_single_damage
[20:20:38] [PASSED] drm_test_damage_iter_single_damage_intersect_src
[20:20:38] [PASSED] drm_test_damage_iter_single_damage_outside_src
[20:20:38] [PASSED] drm_test_damage_iter_single_damage_fractional_src
[20:20:38] [PASSED] drm_test_damage_iter_single_damage_intersect_fractional_src
[20:20:38] [PASSED] drm_test_damage_iter_single_damage_outside_fractional_src
[20:20:38] [PASSED] drm_test_damage_iter_single_damage_src_moved
[20:20:38] [PASSED] drm_test_damage_iter_single_damage_fractional_src_moved
[20:20:38] [PASSED] drm_test_damage_iter_damage
[20:20:38] [PASSED] drm_test_damage_iter_damage_one_intersect
[20:20:38] [PASSED] drm_test_damage_iter_damage_one_outside
[20:20:38] [PASSED] drm_test_damage_iter_damage_src_moved
[20:20:38] [PASSED] drm_test_damage_iter_damage_not_visible
[20:20:38] ================ [PASSED] drm_damage_helper ================
[20:20:38] ============== drm_dp_mst_helper (3 subtests) ==============
[20:20:38] ============== drm_test_dp_mst_calc_pbn_mode  ==============
[20:20:38] [PASSED] Clock 154000 BPP 30 DSC disabled
[20:20:38] [PASSED] Clock 234000 BPP 30 DSC disabled
[20:20:38] [PASSED] Clock 297000 BPP 24 DSC disabled
[20:20:38] [PASSED] Clock 332880 BPP 24 DSC enabled
[20:20:38] [PASSED] Clock 324540 BPP 24 DSC enabled
[20:20:38] ========== [PASSED] drm_test_dp_mst_calc_pbn_mode ==========
[20:20:38] ============== drm_test_dp_mst_calc_pbn_div  ===============
[20:20:38] [PASSED] Link rate 2000000 lane count 4
[20:20:38] [PASSED] Link rate 2000000 lane count 2
[20:20:38] [PASSED] Link rate 2000000 lane count 1
[20:20:38] [PASSED] Link rate 1350000 lane count 4
[20:20:38] [PASSED] Link rate 1350000 lane count 2
[20:20:38] [PASSED] Link rate 1350000 lane count 1
[20:20:38] [PASSED] Link rate 1000000 lane count 4
[20:20:38] [PASSED] Link rate 1000000 lane count 2
[20:20:38] [PASSED] Link rate 1000000 lane count 1
[20:20:38] [PASSED] Link rate 810000 lane count 4
[20:20:38] [PASSED] Link rate 810000 lane count 2
[20:20:38] [PASSED] Link rate 810000 lane count 1
[20:20:38] [PASSED] Link rate 540000 lane count 4
[20:20:38] [PASSED] Link rate 540000 lane count 2
[20:20:38] [PASSED] Link rate 540000 lane count 1
[20:20:38] [PASSED] Link rate 270000 lane count 4
[20:20:38] [PASSED] Link rate 270000 lane count 2
[20:20:38] [PASSED] Link rate 270000 lane count 1
[20:20:38] [PASSED] Link rate 162000 lane count 4
[20:20:38] [PASSED] Link rate 162000 lane count 2
[20:20:38] [PASSED] Link rate 162000 lane count 1
[20:20:38] ========== [PASSED] drm_test_dp_mst_calc_pbn_div ===========
[20:20:38] ========= drm_test_dp_mst_sideband_msg_req_decode  =========
[20:20:38] [PASSED] DP_ENUM_PATH_RESOURCES with port number
[20:20:38] [PASSED] DP_POWER_UP_PHY with port number
[20:20:38] [PASSED] DP_POWER_DOWN_PHY with port number
[20:20:38] [PASSED] DP_ALLOCATE_PAYLOAD with SDP stream sinks
[20:20:38] [PASSED] DP_ALLOCATE_PAYLOAD with port number
[20:20:38] [PASSED] DP_ALLOCATE_PAYLOAD with VCPI
[20:20:38] [PASSED] DP_ALLOCATE_PAYLOAD with PBN
[20:20:38] [PASSED] DP_QUERY_PAYLOAD with port number
[20:20:38] [PASSED] DP_QUERY_PAYLOAD with VCPI
[20:20:38] [PASSED] DP_REMOTE_DPCD_READ with port number
[20:20:38] [PASSED] DP_REMOTE_DPCD_READ with DPCD address
[20:20:38] [PASSED] DP_REMOTE_DPCD_READ with max number of bytes
[20:20:38] [PASSED] DP_REMOTE_DPCD_WRITE with port number
[20:20:38] [PASSED] DP_REMOTE_DPCD_WRITE with DPCD address
[20:20:38] [PASSED] DP_REMOTE_DPCD_WRITE with data array
[20:20:38] [PASSED] DP_REMOTE_I2C_READ with port number
[20:20:38] [PASSED] DP_REMOTE_I2C_READ with I2C device ID
[20:20:38] [PASSED] DP_REMOTE_I2C_READ with transactions array
[20:20:38] [PASSED] DP_REMOTE_I2C_WRITE with port number
[20:20:38] [PASSED] DP_REMOTE_I2C_WRITE with I2C device ID
[20:20:38] [PASSED] DP_REMOTE_I2C_WRITE with data array
[20:20:38] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream ID
[20:20:38] [PASSED] DP_QUERY_STREAM_ENC_STATUS with client ID
[20:20:38] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream event
[20:20:38] [PASSED] DP_QUERY_STREAM_ENC_STATUS with valid stream event
[20:20:38] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream behavior
[20:20:38] [PASSED] DP_QUERY_STREAM_ENC_STATUS with a valid stream behavior
[20:20:38] ===== [PASSED] drm_test_dp_mst_sideband_msg_req_decode =====
[20:20:38] ================ [PASSED] drm_dp_mst_helper ================
[20:20:38] ================== drm_exec (7 subtests) ===================
[20:20:38] [PASSED] sanitycheck
[20:20:38] [PASSED] test_lock
[20:20:38] [PASSED] test_lock_unlock
[20:20:38] [PASSED] test_duplicates
[20:20:38] [PASSED] test_prepare
[20:20:38] [PASSED] test_prepare_array
[20:20:38] [PASSED] test_multiple_loops
[20:20:38] ==================== [PASSED] drm_exec =====================
[20:20:38] =========== drm_format_helper_test (17 subtests) ===========
[20:20:38] ============== drm_test_fb_xrgb8888_to_gray8  ==============
[20:20:38] [PASSED] single_pixel_source_buffer
[20:20:38] [PASSED] single_pixel_clip_rectangle
[20:20:38] [PASSED] well_known_colors
[20:20:38] [PASSED] destination_pitch
[20:20:38] ========== [PASSED] drm_test_fb_xrgb8888_to_gray8 ==========
[20:20:38] ============= drm_test_fb_xrgb8888_to_rgb332  ==============
[20:20:38] [PASSED] single_pixel_source_buffer
[20:20:38] [PASSED] single_pixel_clip_rectangle
[20:20:38] [PASSED] well_known_colors
[20:20:38] [PASSED] destination_pitch
[20:20:38] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb332 ==========
[20:20:38] ============= drm_test_fb_xrgb8888_to_rgb565  ==============
[20:20:38] [PASSED] single_pixel_source_buffer
[20:20:38] [PASSED] single_pixel_clip_rectangle
[20:20:38] [PASSED] well_known_colors
[20:20:38] [PASSED] destination_pitch
[20:20:38] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb565 ==========
[20:20:38] ============ drm_test_fb_xrgb8888_to_xrgb1555  =============
[20:20:38] [PASSED] single_pixel_source_buffer
[20:20:38] [PASSED] single_pixel_clip_rectangle
[20:20:38] [PASSED] well_known_colors
[20:20:38] [PASSED] destination_pitch
[20:20:38] ======== [PASSED] drm_test_fb_xrgb8888_to_xrgb1555 =========
[20:20:38] ============ drm_test_fb_xrgb8888_to_argb1555  =============
[20:20:38] [PASSED] single_pixel_source_buffer
[20:20:38] [PASSED] single_pixel_clip_rectangle
[20:20:38] [PASSED] well_known_colors
[20:20:38] [PASSED] destination_pitch
[20:20:38] ======== [PASSED] drm_test_fb_xrgb8888_to_argb1555 =========
[20:20:38] ============ drm_test_fb_xrgb8888_to_rgba5551  =============
[20:20:38] [PASSED] single_pixel_source_buffer
[20:20:38] [PASSED] single_pixel_clip_rectangle
[20:20:38] [PASSED] well_known_colors
[20:20:38] [PASSED] destination_pitch
[20:20:38] ======== [PASSED] drm_test_fb_xrgb8888_to_rgba5551 =========
[20:20:38] ============= drm_test_fb_xrgb8888_to_rgb888  ==============
[20:20:38] [PASSED] single_pixel_source_buffer
[20:20:38] [PASSED] single_pixel_clip_rectangle
[20:20:38] [PASSED] well_known_colors
[20:20:38] [PASSED] destination_pitch
[20:20:38] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb888 ==========
[20:20:38] ============= drm_test_fb_xrgb8888_to_bgr888  ==============
[20:20:38] [PASSED] single_pixel_source_buffer
[20:20:38] [PASSED] single_pixel_clip_rectangle
[20:20:38] [PASSED] well_known_colors
[20:20:38] [PASSED] destination_pitch
[20:20:38] ========= [PASSED] drm_test_fb_xrgb8888_to_bgr888 ==========
[20:20:38] ============ drm_test_fb_xrgb8888_to_argb8888  =============
[20:20:38] [PASSED] single_pixel_source_buffer
[20:20:38] [PASSED] single_pixel_clip_rectangle
[20:20:38] [PASSED] well_known_colors
[20:20:38] [PASSED] destination_pitch
[20:20:38] ======== [PASSED] drm_test_fb_xrgb8888_to_argb8888 =========
[20:20:38] =========== drm_test_fb_xrgb8888_to_xrgb2101010  ===========
[20:20:38] [PASSED] single_pixel_source_buffer
[20:20:38] [PASSED] single_pixel_clip_rectangle
[20:20:38] [PASSED] well_known_colors
[20:20:38] [PASSED] destination_pitch
[20:20:38] ======= [PASSED] drm_test_fb_xrgb8888_to_xrgb2101010 =======
[20:20:38] =========== drm_test_fb_xrgb8888_to_argb2101010  ===========
[20:20:38] [PASSED] single_pixel_source_buffer
[20:20:38] [PASSED] single_pixel_clip_rectangle
[20:20:38] [PASSED] well_known_colors
[20:20:38] [PASSED] destination_pitch
[20:20:38] ======= [PASSED] drm_test_fb_xrgb8888_to_argb2101010 =======
[20:20:38] ============== drm_test_fb_xrgb8888_to_mono  ===============
[20:20:38] [PASSED] single_pixel_source_buffer
[20:20:38] [PASSED] single_pixel_clip_rectangle
[20:20:38] [PASSED] well_known_colors
[20:20:38] [PASSED] destination_pitch
[20:20:38] ========== [PASSED] drm_test_fb_xrgb8888_to_mono ===========
[20:20:38] ==================== drm_test_fb_swab  =====================
[20:20:38] [PASSED] single_pixel_source_buffer
[20:20:38] [PASSED] single_pixel_clip_rectangle
[20:20:38] [PASSED] well_known_colors
[20:20:38] [PASSED] destination_pitch
[20:20:38] ================ [PASSED] drm_test_fb_swab =================
[20:20:38] ============ drm_test_fb_xrgb8888_to_xbgr8888  =============
[20:20:38] [PASSED] single_pixel_source_buffer
[20:20:38] [PASSED] single_pixel_clip_rectangle
[20:20:38] [PASSED] well_known_colors
[20:20:38] [PASSED] destination_pitch
[20:20:38] ======== [PASSED] drm_test_fb_xrgb8888_to_xbgr8888 =========
[20:20:38] ============ drm_test_fb_xrgb8888_to_abgr8888  =============
[20:20:38] [PASSED] single_pixel_source_buffer
[20:20:38] [PASSED] single_pixel_clip_rectangle
[20:20:38] [PASSED] well_known_colors
[20:20:38] [PASSED] destination_pitch
[20:20:38] ======== [PASSED] drm_test_fb_xrgb8888_to_abgr8888 =========
[20:20:38] ================= drm_test_fb_clip_offset  =================
[20:20:38] [PASSED] pass through
[20:20:38] [PASSED] horizontal offset
[20:20:38] [PASSED] vertical offset
[20:20:38] [PASSED] horizontal and vertical offset
[20:20:38] [PASSED] horizontal offset (custom pitch)
[20:20:38] [PASSED] vertical offset (custom pitch)
[20:20:38] [PASSED] horizontal and vertical offset (custom pitch)
[20:20:38] ============= [PASSED] drm_test_fb_clip_offset =============
[20:20:38] =================== drm_test_fb_memcpy  ====================
[20:20:38] [PASSED] single_pixel_source_buffer: XR24 little-endian (0x34325258)
[20:20:38] [PASSED] single_pixel_source_buffer: XRA8 little-endian (0x38415258)
[20:20:38] [PASSED] single_pixel_source_buffer: YU24 little-endian (0x34325559)
[20:20:38] [PASSED] single_pixel_clip_rectangle: XB24 little-endian (0x34324258)
[20:20:38] [PASSED] single_pixel_clip_rectangle: XRA8 little-endian (0x38415258)
[20:20:38] [PASSED] single_pixel_clip_rectangle: YU24 little-endian (0x34325559)
[20:20:38] [PASSED] well_known_colors: XB24 little-endian (0x34324258)
[20:20:38] [PASSED] well_known_colors: XRA8 little-endian (0x38415258)
[20:20:38] [PASSED] well_known_colors: YU24 little-endian (0x34325559)
[20:20:38] [PASSED] destination_pitch: XB24 little-endian (0x34324258)
[20:20:38] [PASSED] destination_pitch: XRA8 little-endian (0x38415258)
[20:20:38] [PASSED] destination_pitch: YU24 little-endian (0x34325559)
[20:20:38] =============== [PASSED] drm_test_fb_memcpy ================
[20:20:38] ============= [PASSED] drm_format_helper_test ==============
[20:20:38] ================= drm_format (18 subtests) =================
[20:20:38] [PASSED] drm_test_format_block_width_invalid
[20:20:38] [PASSED] drm_test_format_block_width_one_plane
[20:20:38] [PASSED] drm_test_format_block_width_two_plane
[20:20:38] [PASSED] drm_test_format_block_width_three_plane
[20:20:38] [PASSED] drm_test_format_block_width_tiled
[20:20:38] [PASSED] drm_test_format_block_height_invalid
[20:20:38] [PASSED] drm_test_format_block_height_one_plane
[20:20:38] [PASSED] drm_test_format_block_height_two_plane
[20:20:38] [PASSED] drm_test_format_block_height_three_plane
[20:20:38] [PASSED] drm_test_format_block_height_tiled
[20:20:38] [PASSED] drm_test_format_min_pitch_invalid
[20:20:38] [PASSED] drm_test_format_min_pitch_one_plane_8bpp
[20:20:38] [PASSED] drm_test_format_min_pitch_one_plane_16bpp
[20:20:38] [PASSED] drm_test_format_min_pitch_one_plane_24bpp
[20:20:38] [PASSED] drm_test_format_min_pitch_one_plane_32bpp
[20:20:38] [PASSED] drm_test_format_min_pitch_two_plane
[20:20:38] [PASSED] drm_test_format_min_pitch_three_plane_8bpp
[20:20:38] [PASSED] drm_test_format_min_pitch_tiled
[20:20:38] =================== [PASSED] drm_format ====================
[20:20:38] ============== drm_framebuffer (10 subtests) ===============
[20:20:38] ========== drm_test_framebuffer_check_src_coords  ==========
[20:20:38] [PASSED] Success: source fits into fb
[20:20:38] [PASSED] Fail: overflowing fb with x-axis coordinate
[20:20:38] [PASSED] Fail: overflowing fb with y-axis coordinate
[20:20:38] [PASSED] Fail: overflowing fb with source width
[20:20:38] [PASSED] Fail: overflowing fb with source height
[20:20:38] ====== [PASSED] drm_test_framebuffer_check_src_coords ======
[20:20:38] [PASSED] drm_test_framebuffer_cleanup
[20:20:38] =============== drm_test_framebuffer_create  ===============
[20:20:38] [PASSED] ABGR8888 normal sizes
[20:20:38] [PASSED] ABGR8888 max sizes
[20:20:38] [PASSED] ABGR8888 pitch greater than min required
[20:20:38] [PASSED] ABGR8888 pitch less than min required
[20:20:38] [PASSED] ABGR8888 Invalid width
[20:20:38] [PASSED] ABGR8888 Invalid buffer handle
[20:20:38] [PASSED] No pixel format
[20:20:38] [PASSED] ABGR8888 Width 0
[20:20:38] [PASSED] ABGR8888 Height 0
[20:20:38] [PASSED] ABGR8888 Out of bound height * pitch combination
[20:20:38] [PASSED] ABGR8888 Large buffer offset
[20:20:38] [PASSED] ABGR8888 Buffer offset for inexistent plane
[20:20:38] [PASSED] ABGR8888 Invalid flag
[20:20:38] [PASSED] ABGR8888 Set DRM_MODE_FB_MODIFIERS without modifiers
[20:20:38] [PASSED] ABGR8888 Valid buffer modifier
[20:20:38] [PASSED] ABGR8888 Invalid buffer modifier(DRM_FORMAT_MOD_SAMSUNG_64_32_TILE)
[20:20:38] [PASSED] ABGR8888 Extra pitches without DRM_MODE_FB_MODIFIERS
[20:20:38] [PASSED] ABGR8888 Extra pitches with DRM_MODE_FB_MODIFIERS
[20:20:38] [PASSED] NV12 Normal sizes
[20:20:38] [PASSED] NV12 Max sizes
[20:20:38] [PASSED] NV12 Invalid pitch
[20:20:38] [PASSED] NV12 Invalid modifier/missing DRM_MODE_FB_MODIFIERS flag
[20:20:38] [PASSED] NV12 different  modifier per-plane
[20:20:38] [PASSED] NV12 with DRM_FORMAT_MOD_SAMSUNG_64_32_TILE
[20:20:38] [PASSED] NV12 Valid modifiers without DRM_MODE_FB_MODIFIERS
[20:20:38] [PASSED] NV12 Modifier for inexistent plane
[20:20:38] [PASSED] NV12 Handle for inexistent plane
[20:20:38] [PASSED] NV12 Handle for inexistent plane without DRM_MODE_FB_MODIFIERS
[20:20:38] [PASSED] YVU420 DRM_MODE_FB_MODIFIERS set without modifier
[20:20:38] [PASSED] YVU420 Normal sizes
[20:20:38] [PASSED] YVU420 Max sizes
[20:20:38] [PASSED] YVU420 Invalid pitch
[20:20:38] [PASSED] YVU420 Different pitches
[20:20:38] [PASSED] YVU420 Different buffer offsets/pitches
[20:20:38] [PASSED] YVU420 Modifier set just for plane 0, without DRM_MODE_FB_MODIFIERS
[20:20:38] [PASSED] YVU420 Modifier set just for planes 0, 1, without DRM_MODE_FB_MODIFIERS
[20:20:38] [PASSED] YVU420 Modifier set just for plane 0, 1, with DRM_MODE_FB_MODIFIERS
[20:20:38] [PASSED] YVU420 Valid modifier
[20:20:38] [PASSED] YVU420 Different modifiers per plane
[20:20:38] [PASSED] YVU420 Modifier for inexistent plane
[20:20:38] [PASSED] YUV420_10BIT Invalid modifier(DRM_FORMAT_MOD_LINEAR)
[20:20:38] [PASSED] X0L2 Normal sizes
[20:20:38] [PASSED] X0L2 Max sizes
[20:20:38] [PASSED] X0L2 Invalid pitch
[20:20:38] [PASSED] X0L2 Pitch greater than minimum required
[20:20:38] [PASSED] X0L2 Handle for inexistent plane
[20:20:38] [PASSED] X0L2 Offset for inexistent plane, without DRM_MODE_FB_MODIFIERS set
[20:20:38] [PASSED] X0L2 Modifier without DRM_MODE_FB_MODIFIERS set
[20:20:38] [PASSED] X0L2 Valid modifier
[20:20:38] [PASSED] X0L2 Modifier for inexistent plane
[20:20:38] =========== [PASSED] drm_test_framebuffer_create ===========
[20:20:38] [PASSED] drm_test_framebuffer_free
[20:20:38] [PASSED] drm_test_framebuffer_init
[20:20:38] [PASSED] drm_test_framebuffer_init_bad_format
[20:20:38] [PASSED] drm_test_framebuffer_init_dev_mismatch
[20:20:38] [PASSED] drm_test_framebuffer_lookup
[20:20:38] [PASSED] drm_test_framebuffer_lookup_inexistent
[20:20:38] [PASSED] drm_test_framebuffer_modifiers_not_supported
[20:20:38] ================= [PASSED] drm_framebuffer =================
[20:20:38] ================ drm_gem_shmem (8 subtests) ================
[20:20:38] [PASSED] drm_gem_shmem_test_obj_create
[20:20:38] [PASSED] drm_gem_shmem_test_obj_create_private
[20:20:38] [PASSED] drm_gem_shmem_test_pin_pages
[20:20:38] [PASSED] drm_gem_shmem_test_vmap
[20:20:38] [PASSED] drm_gem_shmem_test_get_sg_table
[20:20:38] [PASSED] drm_gem_shmem_test_get_pages_sgt
[20:20:38] [PASSED] drm_gem_shmem_test_madvise
[20:20:38] [PASSED] drm_gem_shmem_test_purge
[20:20:38] ================== [PASSED] drm_gem_shmem ==================
[20:20:38] === drm_atomic_helper_connector_hdmi_check (27 subtests) ===
[20:20:38] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode
[20:20:38] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode_vic_1
[20:20:38] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode
[20:20:38] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode_vic_1
[20:20:38] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode
[20:20:38] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode_vic_1
[20:20:38] ====== drm_test_check_broadcast_rgb_cea_mode_yuv420  =======
[20:20:38] [PASSED] Automatic
[20:20:38] [PASSED] Full
[20:20:38] [PASSED] Limited 16:235
[20:20:38] == [PASSED] drm_test_check_broadcast_rgb_cea_mode_yuv420 ===
[20:20:38] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_changed
[20:20:38] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_not_changed
[20:20:38] [PASSED] drm_test_check_disable_connector
[20:20:38] [PASSED] drm_test_check_hdmi_funcs_reject_rate
[20:20:38] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_rgb
[20:20:38] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_yuv420
[20:20:38] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv422
[20:20:38] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv420
[20:20:38] [PASSED] drm_test_check_driver_unsupported_fallback_yuv420
[20:20:38] [PASSED] drm_test_check_output_bpc_crtc_mode_changed
[20:20:38] [PASSED] drm_test_check_output_bpc_crtc_mode_not_changed
[20:20:38] [PASSED] drm_test_check_output_bpc_dvi
[20:20:38] [PASSED] drm_test_check_output_bpc_format_vic_1
[20:20:38] [PASSED] drm_test_check_output_bpc_format_display_8bpc_only
[20:20:38] [PASSED] drm_test_check_output_bpc_format_display_rgb_only
[20:20:38] [PASSED] drm_test_check_output_bpc_format_driver_8bpc_only
[20:20:38] [PASSED] drm_test_check_output_bpc_format_driver_rgb_only
[20:20:38] [PASSED] drm_test_check_tmds_char_rate_rgb_8bpc
[20:20:38] [PASSED] drm_test_check_tmds_char_rate_rgb_10bpc
[20:20:38] [PASSED] drm_test_check_tmds_char_rate_rgb_12bpc
[20:20:38] ===== [PASSED] drm_atomic_helper_connector_hdmi_check ======
[20:20:38] === drm_atomic_helper_connector_hdmi_reset (6 subtests) ====
[20:20:38] [PASSED] drm_test_check_broadcast_rgb_value
[20:20:38] [PASSED] drm_test_check_bpc_8_value
[20:20:38] [PASSED] drm_test_check_bpc_10_value
[20:20:38] [PASSED] drm_test_check_bpc_12_value
[20:20:38] [PASSED] drm_test_check_format_value
[20:20:38] [PASSED] drm_test_check_tmds_char_value
[20:20:38] ===== [PASSED] drm_atomic_helper_connector_hdmi_reset ======
[20:20:38] = drm_atomic_helper_connector_hdmi_mode_valid (4 subtests) =
[20:20:38] [PASSED] drm_test_check_mode_valid
[20:20:38] [PASSED] drm_test_check_mode_valid_reject
[20:20:38] [PASSED] drm_test_check_mode_valid_reject_rate
[20:20:38] [PASSED] drm_test_check_mode_valid_reject_max_clock
[20:20:38] === [PASSED] drm_atomic_helper_connector_hdmi_mode_valid ===
[20:20:38] = drm_atomic_helper_connector_hdmi_infoframes (5 subtests) =
[20:20:38] [PASSED] drm_test_check_infoframes
[20:20:38] [PASSED] drm_test_check_reject_avi_infoframe
[20:20:38] [PASSED] drm_test_check_reject_hdr_infoframe_bpc_8
[20:20:38] [PASSED] drm_test_check_reject_hdr_infoframe_bpc_10
[20:20:38] [PASSED] drm_test_check_reject_audio_infoframe
[20:20:38] === [PASSED] drm_atomic_helper_connector_hdmi_infoframes ===
[20:20:38] ================= drm_managed (2 subtests) =================
[20:20:38] [PASSED] drm_test_managed_release_action
[20:20:38] [PASSED] drm_test_managed_run_action
[20:20:38] =================== [PASSED] drm_managed ===================
[20:20:38] =================== drm_mm (6 subtests) ====================
[20:20:38] [PASSED] drm_test_mm_init
[20:20:38] [PASSED] drm_test_mm_debug
[20:20:38] [PASSED] drm_test_mm_align32
[20:20:38] [PASSED] drm_test_mm_align64
[20:20:38] [PASSED] drm_test_mm_lowest
[20:20:38] [PASSED] drm_test_mm_highest
[20:20:38] ===================== [PASSED] drm_mm ======================
[20:20:38] ============= drm_modes_analog_tv (5 subtests) =============
[20:20:38] [PASSED] drm_test_modes_analog_tv_mono_576i
[20:20:38] [PASSED] drm_test_modes_analog_tv_ntsc_480i
[20:20:38] [PASSED] drm_test_modes_analog_tv_ntsc_480i_inlined
[20:20:38] [PASSED] drm_test_modes_analog_tv_pal_576i
[20:20:38] [PASSED] drm_test_modes_analog_tv_pal_576i_inlined
[20:20:38] =============== [PASSED] drm_modes_analog_tv ===============
[20:20:38] ============== drm_plane_helper (2 subtests) ===============
[20:20:38] =============== drm_test_check_plane_state  ================
[20:20:38] [PASSED] clipping_simple
[20:20:38] [PASSED] clipping_rotate_reflect
[20:20:38] [PASSED] positioning_simple
[20:20:38] [PASSED] upscaling
[20:20:38] [PASSED] downscaling
[20:20:38] [PASSED] rounding1
[20:20:38] [PASSED] rounding2
[20:20:38] [PASSED] rounding3
[20:20:38] [PASSED] rounding4
[20:20:38] =========== [PASSED] drm_test_check_plane_state ============
[20:20:38] =========== drm_test_check_invalid_plane_state  ============
[20:20:38] [PASSED] positioning_invalid
[20:20:38] [PASSED] upscaling_invalid
[20:20:38] [PASSED] downscaling_invalid
[20:20:38] ======= [PASSED] drm_test_check_invalid_plane_state ========
[20:20:38] ================ [PASSED] drm_plane_helper =================
[20:20:38] ====== drm_connector_helper_tv_get_modes (1 subtest) =======
[20:20:38] ====== drm_test_connector_helper_tv_get_modes_check  =======
[20:20:38] [PASSED] None
[20:20:38] [PASSED] PAL
[20:20:38] [PASSED] NTSC
[20:20:38] [PASSED] Both, NTSC Default
[20:20:38] [PASSED] Both, PAL Default
[20:20:38] [PASSED] Both, NTSC Default, with PAL on command-line
[20:20:38] [PASSED] Both, PAL Default, with NTSC on command-line
[20:20:38] == [PASSED] drm_test_connector_helper_tv_get_modes_check ===
[20:20:38] ======== [PASSED] drm_connector_helper_tv_get_modes ========
[20:20:38] ================== drm_rect (9 subtests) ===================
[20:20:38] [PASSED] drm_test_rect_clip_scaled_div_by_zero
[20:20:38] [PASSED] drm_test_rect_clip_scaled_not_clipped
[20:20:38] [PASSED] drm_test_rect_clip_scaled_clipped
[20:20:38] [PASSED] drm_test_rect_clip_scaled_signed_vs_unsigned
[20:20:38] ================= drm_test_rect_intersect  =================
[20:20:38] [PASSED] top-left x bottom-right: 2x2+1+1 x 2x2+0+0
[20:20:38] [PASSED] top-right x bottom-left: 2x2+0+0 x 2x2+1-1
[20:20:38] [PASSED] bottom-left x top-right: 2x2+1-1 x 2x2+0+0
[20:20:38] [PASSED] bottom-right x top-left: 2x2+0+0 x 2x2+1+1
[20:20:38] [PASSED] right x left: 2x1+0+0 x 3x1+1+0
[20:20:38] [PASSED] left x right: 3x1+1+0 x 2x1+0+0
[20:20:38] [PASSED] up x bottom: 1x2+0+0 x 1x3+0-1
[20:20:38] [PASSED] bottom x up: 1x3+0-1 x 1x2+0+0
[20:20:38] [PASSED] touching corner: 1x1+0+0 x 2x2+1+1
[20:20:38] [PASSED] touching side: 1x1+0+0 x 1x1+1+0
[20:20:38] [PASSED] equal rects: 2x2+0+0 x 2x2+0+0
[20:20:38] [PASSED] inside another: 2x2+0+0 x 1x1+1+1
[20:20:38] [PASSED] far away: 1x1+0+0 x 1x1+3+6
[20:20:38] [PASSED] points intersecting: 0x0+5+10 x 0x0+5+10
[20:20:38] [PASSED] points not intersecting: 0x0+0+0 x 0x0+5+10
[20:20:38] ============= [PASSED] drm_test_rect_intersect =============
[20:20:38] ================ drm_test_rect_calc_hscale  ================
[20:20:38] [PASSED] normal use
[20:20:38] [PASSED] out of max range
[20:20:38] [PASSED] out of min range
[20:20:38] [PASSED] zero dst
[20:20:38] [PASSED] negative src
[20:20:38] [PASSED] negative dst
[20:20:38] ============ [PASSED] drm_test_rect_calc_hscale ============
[20:20:38] ================ drm_test_rect_calc_vscale  ================
[20:20:38] [PASSED] normal use
[20:20:38] [PASSED] out of max range
[20:20:38] [PASSED] out of min range
[20:20:38] [PASSED] zero dst
[20:20:38] [PASSED] negative src
[20:20:38] [PASSED] negative dst
[20:20:38] ============ [PASSED] drm_test_rect_calc_vscale ============
[20:20:38] ================== drm_test_rect_rotate  ===================
[20:20:38] [PASSED] reflect-x
[20:20:38] [PASSED] reflect-y
[20:20:38] [PASSED] rotate-0
[20:20:38] [PASSED] rotate-90
[20:20:38] [PASSED] rotate-180
[20:20:38] [PASSED] rotate-270
[20:20:38] ============== [PASSED] drm_test_rect_rotate ===============
[20:20:38] ================ drm_test_rect_rotate_inv  =================
[20:20:38] [PASSED] reflect-x
[20:20:38] [PASSED] reflect-y
[20:20:38] [PASSED] rotate-0
[20:20:38] [PASSED] rotate-90
[20:20:38] [PASSED] rotate-180
[20:20:38] [PASSED] rotate-270
[20:20:38] ============ [PASSED] drm_test_rect_rotate_inv =============
[20:20:38] ==================== [PASSED] drm_rect =====================
[20:20:38] ============ drm_sysfb_modeset_test (1 subtest) ============
[20:20:38] ============ drm_test_sysfb_build_fourcc_list  =============
[20:20:38] [PASSED] no native formats
[20:20:38] [PASSED] XRGB8888 as native format
[20:20:38] [PASSED] remove duplicates
[20:20:38] [PASSED] convert alpha formats
[20:20:38] [PASSED] random formats
[20:20:38] ======== [PASSED] drm_test_sysfb_build_fourcc_list =========
[20:20:38] ============= [PASSED] drm_sysfb_modeset_test ==============
[20:20:38] ================== drm_fixp (2 subtests) ===================
[20:20:38] [PASSED] drm_test_int2fixp
[20:20:38] [PASSED] drm_test_sm2fixp
[20:20:38] ==================== [PASSED] drm_fixp =====================
[20:20:38] ============================================================
[20:20:38] Testing complete. Ran 621 tests: passed: 621
[20:20:38] Elapsed time: 26.567s total, 1.774s configuring, 24.628s building, 0.113s running

+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/ttm/tests/.kunitconfig
[20:20:38] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[20:20:40] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[20:20:50] Starting KUnit Kernel (1/1)...
[20:20:50] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[20:20:50] ================= ttm_device (5 subtests) ==================
[20:20:50] [PASSED] ttm_device_init_basic
[20:20:50] [PASSED] ttm_device_init_multiple
[20:20:50] [PASSED] ttm_device_fini_basic
[20:20:50] [PASSED] ttm_device_init_no_vma_man
[20:20:50] ================== ttm_device_init_pools  ==================
[20:20:50] [PASSED] No DMA allocations, no DMA32 required
[20:20:50] [PASSED] DMA allocations, DMA32 required
[20:20:50] [PASSED] No DMA allocations, DMA32 required
[20:20:50] [PASSED] DMA allocations, no DMA32 required
[20:20:50] ============== [PASSED] ttm_device_init_pools ==============
[20:20:50] =================== [PASSED] ttm_device ====================
[20:20:50] ================== ttm_pool (8 subtests) ===================
[20:20:50] ================== ttm_pool_alloc_basic  ===================
[20:20:50] [PASSED] One page
[20:20:50] [PASSED] More than one page
[20:20:50] [PASSED] Above the allocation limit
[20:20:50] [PASSED] One page, with coherent DMA mappings enabled
[20:20:50] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[20:20:50] ============== [PASSED] ttm_pool_alloc_basic ===============
[20:20:50] ============== ttm_pool_alloc_basic_dma_addr  ==============
[20:20:50] [PASSED] One page
[20:20:50] [PASSED] More than one page
[20:20:50] [PASSED] Above the allocation limit
[20:20:50] [PASSED] One page, with coherent DMA mappings enabled
[20:20:50] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[20:20:50] ========== [PASSED] ttm_pool_alloc_basic_dma_addr ==========
[20:20:50] [PASSED] ttm_pool_alloc_order_caching_match
[20:20:50] [PASSED] ttm_pool_alloc_caching_mismatch
[20:20:50] [PASSED] ttm_pool_alloc_order_mismatch
[20:20:50] [PASSED] ttm_pool_free_dma_alloc
[20:20:50] [PASSED] ttm_pool_free_no_dma_alloc
[20:20:50] [PASSED] ttm_pool_fini_basic
[20:20:50] ==================== [PASSED] ttm_pool =====================
[20:20:50] ================ ttm_resource (8 subtests) =================
[20:20:50] ================= ttm_resource_init_basic  =================
[20:20:50] [PASSED] Init resource in TTM_PL_SYSTEM
[20:20:50] [PASSED] Init resource in TTM_PL_VRAM
[20:20:50] [PASSED] Init resource in a private placement
[20:20:50] [PASSED] Init resource in TTM_PL_SYSTEM, set placement flags
[20:20:50] ============= [PASSED] ttm_resource_init_basic =============
[20:20:50] [PASSED] ttm_resource_init_pinned
[20:20:50] [PASSED] ttm_resource_fini_basic
[20:20:50] [PASSED] ttm_resource_manager_init_basic
[20:20:50] [PASSED] ttm_resource_manager_usage_basic
[20:20:50] [PASSED] ttm_resource_manager_set_used_basic
[20:20:50] [PASSED] ttm_sys_man_alloc_basic
[20:20:50] [PASSED] ttm_sys_man_free_basic
[20:20:50] ================== [PASSED] ttm_resource ===================
[20:20:50] =================== ttm_tt (15 subtests) ===================
[20:20:50] ==================== ttm_tt_init_basic  ====================
[20:20:50] [PASSED] Page-aligned size
[20:20:50] [PASSED] Extra pages requested
[20:20:50] ================ [PASSED] ttm_tt_init_basic ================
[20:20:50] [PASSED] ttm_tt_init_misaligned
[20:20:50] [PASSED] ttm_tt_fini_basic
[20:20:50] [PASSED] ttm_tt_fini_sg
[20:20:50] [PASSED] ttm_tt_fini_shmem
[20:20:50] [PASSED] ttm_tt_create_basic
[20:20:50] [PASSED] ttm_tt_create_invalid_bo_type
[20:20:50] [PASSED] ttm_tt_create_ttm_exists
[20:20:50] [PASSED] ttm_tt_create_failed
[20:20:50] [PASSED] ttm_tt_destroy_basic
[20:20:50] [PASSED] ttm_tt_populate_null_ttm
[20:20:50] [PASSED] ttm_tt_populate_populated_ttm
[20:20:50] [PASSED] ttm_tt_unpopulate_basic
[20:20:50] [PASSED] ttm_tt_unpopulate_empty_ttm
[20:20:50] [PASSED] ttm_tt_swapin_basic
[20:20:50] ===================== [PASSED] ttm_tt ======================
[20:20:50] =================== ttm_bo (14 subtests) ===================
[20:20:50] =========== ttm_bo_reserve_optimistic_no_ticket  ===========
[20:20:50] [PASSED] Cannot be interrupted and sleeps
[20:20:50] [PASSED] Cannot be interrupted, locks straight away
[20:20:50] [PASSED] Can be interrupted, sleeps
[20:20:50] ======= [PASSED] ttm_bo_reserve_optimistic_no_ticket =======
[20:20:50] [PASSED] ttm_bo_reserve_locked_no_sleep
[20:20:50] [PASSED] ttm_bo_reserve_no_wait_ticket
[20:20:50] [PASSED] ttm_bo_reserve_double_resv
[20:20:50] [PASSED] ttm_bo_reserve_interrupted
[20:20:50] [PASSED] ttm_bo_reserve_deadlock
[20:20:50] [PASSED] ttm_bo_unreserve_basic
[20:20:50] [PASSED] ttm_bo_unreserve_pinned
[20:20:50] [PASSED] ttm_bo_unreserve_bulk
[20:20:50] [PASSED] ttm_bo_fini_basic
[20:20:50] [PASSED] ttm_bo_fini_shared_resv
[20:20:50] [PASSED] ttm_bo_pin_basic
[20:20:50] [PASSED] ttm_bo_pin_unpin_resource
[20:20:50] [PASSED] ttm_bo_multiple_pin_one_unpin
[20:20:50] ===================== [PASSED] ttm_bo ======================
[20:20:50] ============== ttm_bo_validate (22 subtests) ===============
[20:20:50] ============== ttm_bo_init_reserved_sys_man  ===============
[20:20:50] [PASSED] Buffer object for userspace
[20:20:50] [PASSED] Kernel buffer object
[20:20:50] [PASSED] Shared buffer object
[20:20:50] ========== [PASSED] ttm_bo_init_reserved_sys_man ===========
[20:20:50] ============== ttm_bo_init_reserved_mock_man  ==============
[20:20:50] [PASSED] Buffer object for userspace
[20:20:50] [PASSED] Kernel buffer object
[20:20:50] [PASSED] Shared buffer object
[20:20:50] ========== [PASSED] ttm_bo_init_reserved_mock_man ==========
[20:20:50] [PASSED] ttm_bo_init_reserved_resv
[20:20:50] ================== ttm_bo_validate_basic  ==================
[20:20:50] [PASSED] Buffer object for userspace
[20:20:50] [PASSED] Kernel buffer object
[20:20:50] [PASSED] Shared buffer object
[20:20:50] ============== [PASSED] ttm_bo_validate_basic ==============
[20:20:50] [PASSED] ttm_bo_validate_invalid_placement
[20:20:50] ============= ttm_bo_validate_same_placement  ==============
[20:20:50] [PASSED] System manager
[20:20:50] [PASSED] VRAM manager
[20:20:50] ========= [PASSED] ttm_bo_validate_same_placement ==========
[20:20:50] [PASSED] ttm_bo_validate_failed_alloc
[20:20:50] [PASSED] ttm_bo_validate_pinned
[20:20:50] [PASSED] ttm_bo_validate_busy_placement
[20:20:50] ================ ttm_bo_validate_multihop  =================
[20:20:50] [PASSED] Buffer object for userspace
[20:20:50] [PASSED] Kernel buffer object
[20:20:50] [PASSED] Shared buffer object
[20:20:50] ============ [PASSED] ttm_bo_validate_multihop =============
[20:20:50] ========== ttm_bo_validate_no_placement_signaled  ==========
[20:20:50] [PASSED] Buffer object in system domain, no page vector
[20:20:50] [PASSED] Buffer object in system domain with an existing page vector
[20:20:50] ====== [PASSED] ttm_bo_validate_no_placement_signaled ======
[20:20:50] ======== ttm_bo_validate_no_placement_not_signaled  ========
[20:20:50] [PASSED] Buffer object for userspace
[20:20:50] [PASSED] Kernel buffer object
[20:20:50] [PASSED] Shared buffer object
[20:20:50] ==== [PASSED] ttm_bo_validate_no_placement_not_signaled ====
[20:20:50] [PASSED] ttm_bo_validate_move_fence_signaled
[20:20:50] ========= ttm_bo_validate_move_fence_not_signaled  =========
[20:20:50] [PASSED] Waits for GPU
[20:20:50] [PASSED] Tries to lock straight away
[20:20:50] ===== [PASSED] ttm_bo_validate_move_fence_not_signaled =====
[20:20:50] [PASSED] ttm_bo_validate_swapout
[20:20:50] [PASSED] ttm_bo_validate_happy_evict
[20:20:50] [PASSED] ttm_bo_validate_all_pinned_evict
[20:20:50] [PASSED] ttm_bo_validate_allowed_only_evict
[20:20:50] [PASSED] ttm_bo_validate_deleted_evict
[20:20:50] [PASSED] ttm_bo_validate_busy_domain_evict
[20:20:50] [PASSED] ttm_bo_validate_evict_gutting
[20:20:50] [PASSED] ttm_bo_validate_recrusive_evict
[20:20:50] ================= [PASSED] ttm_bo_validate =================
[20:20:50] ============================================================
[20:20:50] Testing complete. Ran 102 tests: passed: 102
[20:20:50] Elapsed time: 11.787s total, 1.762s configuring, 9.760s building, 0.230s running
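For reference, the run above is driven by the in-tree kunitconfig passed on the kunit.py command line; its contents likely resemble the fragment below (option names assumed from the TTM tests' Kconfig, verify against the tree):

```
CONFIG_KUNIT=y
CONFIG_DRM=y
CONFIG_DRM_KUNIT_TEST_HELPERS=y
CONFIG_DRM_TTM_KUNIT_TEST=y
```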

+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel



^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v5 2/2] drm/ttm/pool: back up at native page order
  2026-05-05 20:04 ` [PATCH v5 2/2] drm/ttm/pool: back up at native page order Matthew Brost
@ 2026-05-06 14:23   ` Thomas Hellström
  2026-05-06 16:14     ` Matthew Brost
  0 siblings, 1 reply; 10+ messages in thread
From: Thomas Hellström @ 2026-05-06 14:23 UTC (permalink / raw)
  To: Matthew Brost, intel-xe, dri-devel
  Cc: Christian Koenig, Huang Rui, Matthew Auld, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
	linux-kernel, stable

Hi, Matt

On Tue, 2026-05-05 at 13:04 -0700, Matthew Brost wrote:
> ttm_pool_split_for_swap() splits high-order pool pages into order-0
> pages during backup so each 4K page can be released to the system as
> soon as it has been written to shmem. While this minimizes the
> allocator's working set during reclaim, it actively fragments memory:
> every TTM-backed compound page that the shrinker touches is shattered
> into order-0 pages, even when the rest of the system would prefer
> that
> the high-order block stay intact. Under sustained kswapd pressure
> this
> is enough to drive other parts of MM into recovery loops from which
> they cannot easily escape, because the memory TTM just freed is no
> longer contiguous.
> 
> Stop unconditionally splitting on the backup path and back up each
> compound at its native order in ttm_pool_backup():
> 
>   - For each non-handle slot, read the order from the head page and
>     back up all 1<<order subpages to consecutive shmem indices,
>     writing the resulting handles into tt->pages[] as we go.
>   - On success, the compound is freed once at its native order. No
>     split_page(), no per-4K refcount juggling, no fragmentation
>     introduced from this path.
>   - Slots that already hold a backup handle from a previous partial
>     attempt are skipped. A compound that would extend past a
>     fault-injection-truncated num_pages is skipped rather than split.
> 
> A per-subpage backup failure cannot be made fully atomic: backing up
> a
> subpage allocates a shmem folio before the source page can be
> released,
> so under true OOM any subpage in a compound (not just the first) may
> fail to be backed up with the rest of the source compound still live
> and contiguous. To make forward progress in that case, fall back to
> splitting the source compound and backing up its remaining subpages
> individually:
> 
>   - On the first per-subpage failure for a compound (and only if
>     order > 0), call ttm_pool_split_for_swap() to split the source
>     compound, release the subpages whose contents already live in
>     shmem (their handles in tt->pages stay valid), and retry the
>     failing subpage at order 0.
>   - Subsequent successful subpage backups in the now-split compound
>     free their source page individually as soon as the handle is
>     written.
>   - A second failure after splitting terminates the loop with partial
>     progress; the remaining order-0 subpages stay in tt->pages as
>     plain page pointers and are cleaned up by the normal
>     ttm_pool_drop_backed_up() / ttm_pool_free_range() paths.
> 
> This restores the original split-on-OOM fallback behavior while
> keeping the common, non-OOM case fragmentation-free. It also
> preserves the "partial backup is allowed" contract: shrunken is
> incremented per backed-up subpage so the caller still sees forward
> progress when a compound only partially succeeds.
> 
> The restore-side leftover-page branch in ttm_pool_restore_commit() is
> left as-is for now: that path can still split a previously-retained
> compound, but in practice it is unreachable under realistic workloads
> (per profiling we have not been able to trigger it), so it is not
> worth complicating the restore state machine to avoid the split
> there.
> If it ever becomes a problem in practice it can be addressed
> independently.
> 
> ttm_pool_split_for_swap() itself is retained both for the OOM
> fallback above and for the restore path's remaining caller. The
> DMA-mapped pre-backup unmap loop, the purge path, ttm_pool_free_*,
> and ttm_pool_unmap_and_free() already operate at native order and
> are unchanged.
> 
> Cc: Christian Koenig <christian.koenig@amd.com>
> Cc: Huang Rui <ray.huang@amd.com>
> Cc: Matthew Auld <matthew.auld@intel.com>
> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> Cc: Maxime Ripard <mripard@kernel.org>
> Cc: Thomas Zimmermann <tzimmermann@suse.de>
> Cc: David Airlie <airlied@gmail.com>
> Cc: Simona Vetter <simona@ffwll.ch>
> Cc: dri-devel@lists.freedesktop.org
> Cc: linux-kernel@vger.kernel.org
> Cc: stable@vger.kernel.org
> Fixes: b63d715b8090 ("drm/ttm/pool, drm/ttm/tt: Provide a helper to
> shrink pages")
> Suggested-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Assisted-by: Claude:claude-opus-4.6
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> 
> ---
> 
> A follow-up should attempt writeback to shmem at folio order as well,
> but the API for doing so is unclear and may be incomplete.
> 
> This patch is related to the pending series [1] and significantly
> reduces the likelihood of Xe entering a kswapd loop under
> fragmentation.
> The kswapd → shrinker → Xe shrinker → TTM backup path is still
> exercised; however, with this change the backup path no longer
> worsens
> fragmentation, which previously amplified reclaim pressure and
> reinforced the kswapd loop.
> 
> Nonetheless, the pathological case that [1] aims to address still
> exists
> and requires a proper solution. Even with this patch, a kswapd loop
> due
> to severe fragmentation can still be triggered, although it is now
> substantially harder to reproduce.
> 
> v2:
>  - Split pages and free immediately if backup fails are higher order
>    (Thomas)
> v3:
>  - Skip handles in purge path (sashiko)
> v5:
>  - Refactor into ttm_pool_backup_folio (Thomas)
> 
> [1] https://patchwork.freedesktop.org/series/165330/
> ---
>  drivers/gpu/drm/ttm/ttm_pool.c | 110 ++++++++++++++++++++++++++++---
> --
>  1 file changed, 94 insertions(+), 16 deletions(-)
> 
> diff --git a/drivers/gpu/drm/ttm/ttm_pool.c
> b/drivers/gpu/drm/ttm/ttm_pool.c
> index d380a3c7fe40..78efc8524133 100644
> --- a/drivers/gpu/drm/ttm/ttm_pool.c
> +++ b/drivers/gpu/drm/ttm/ttm_pool.c
> @@ -1019,6 +1019,70 @@ void ttm_pool_drop_backed_up(struct ttm_tt
> *tt)
>  	ttm_pool_free_range(NULL, tt, ttm_cached, start_page, tt-
> >num_pages);
>  }
>  
> +static int ttm_pool_backup_folio(struct ttm_pool *pool, struct
> ttm_tt *tt,
> +				 struct file *backup, struct folio
> *folio,
> +				 unsigned int order, bool writeback,
> +				 pgoff_t idx, gfp_t page_gfp, gfp_t
> alloc_gfp)

I don't really understand why we can't end up with a
ttm_backup_backup_folio(), which I believe is the proper layering,
already at this point? Please see a suggestion at 

https://gitlab.freedesktop.org/thomash/xe-vibe/-/commits/ttm_swapout?ref_type=heads

Here the splitting logic is kept in the ttm_pool, but ttm_backup
supports handing large folios to it.

Although the cumulative diffstat becomes larger, the end code becomes
smaller and IMO easier to read, and we don't need to introduce code
that we immediately have to refactor.

But I'm starting to question the general approach: even if the
*shrinker* can recover from a total kernel memory reserve depletion,
that can't really be considered reasonable practice, since if we
frequently deplete the reserves, *other* important allocations in the
system, like GFP_ATOMIC or PF_MEMALLOC ones, may spuriously start to
fail and people will have a hard time finding out why.

So I actually don't think we can avoid the splitting without direct
insertion. FWIW, up until recently, when shmem started supporting huge
page swapping, other GPU drivers basically also split pages at swapout.

Another idea for improving on the compaction loop, perhaps worth
trying, is this change, shamelessly stolen from i915:

https://gitlab.freedesktop.org/thomash/xe-vibe/-/commits/shrinker_batch?ref_type=heads

/Thomas


> +{
> +	struct page *page = folio_page(folio, 0);
> +	int shrunken = 0, npages = 1UL << order, ret = 0, i;
> +	bool folio_has_been_split = false;
> +
> +	for (i = 0; i < npages; ++i) {
> +		s64 shandle;
> +
> +try_again_after_split:
> +		if (IS_ENABLED(CONFIG_FAULT_INJECTION) &&
> +		    should_fail(&backup_fault_inject, 1))
> +			shandle = -ENOMEM;
> +		else
> +			shandle = ttm_backup_backup_page(backup,
> page + i,
> +							 writeback,
> idx + i,
> +							 page_gfp,
> alloc_gfp);
> +
> +		if (shandle < 0 && !folio_has_been_split && order) {
> +			pgoff_t j;
> +
> +			/*
> +			 * True OOM: could not allocate a shmem
> folio
> +			 * for the next subpage. Fall back to
> splitting
> +			 * the source compound and backing up
> subpages
> +			 * individually. Release the already-backed-
> up
> +			 * subpages whose contents now live in
> shmem;
> +			 * any further failure terminates the loop
> with
> +			 * partial progress (handled by the caller).
> +			 */
> +			folio_has_been_split = true;
> +			ttm_pool_split_for_swap(pool, page);
> +
> +			for (j = 0; j < i; ++j) {
> +				__free_pages_gpu_account(page + j,
> 0, false);
> +				shrunken++;
> +			}
> +
> +			goto try_again_after_split;
> +		} else if (shandle < 0) {
> +			ret = shandle;
> +			goto out;
> +		} else if (folio_has_been_split) {
> +			__free_pages_gpu_account(page + i, 0,
> false);
> +			shrunken++;
> +		}
> +
> +		tt->pages[idx + i] =
> ttm_backup_handle_to_page_ptr(shandle);
> +	}
> +
> +	if (!folio_has_been_split) {
> +		/* Compound fully backed up; free at native order.
> */
> +		page->private = 0;
> +		__free_pages_gpu_account(page, order, false);
> +		shrunken += npages;
> +	}
> +
> +out:
> +	return shrunken ? shrunken : ret;
> +}
> +
>  /**
>   * ttm_pool_backup() - Back up or purge a struct ttm_tt
>   * @pool: The pool used when allocating the struct ttm_tt.
> @@ -1045,12 +1109,11 @@ long ttm_pool_backup(struct ttm_pool *pool,
> struct ttm_tt *tt,
>  {
>  	struct file *backup = tt->backup;
>  	struct page *page;
> -	unsigned long handle;
>  	gfp_t alloc_gfp;
>  	gfp_t gfp;
>  	int ret = 0;
>  	pgoff_t shrunken = 0;
> -	pgoff_t i, num_pages;
> +	pgoff_t i, num_pages, npages;
>  
>  	if (WARN_ON(ttm_tt_is_backed_up(tt)))
>  		return -EINVAL;
> @@ -1070,7 +1133,8 @@ long ttm_pool_backup(struct ttm_pool *pool,
> struct ttm_tt *tt,
>  			unsigned int order;
>  
>  			page = tt->pages[i];
> -			if (unlikely(!page)) {
> +			if (unlikely(!page ||
> +				    
> ttm_backup_page_ptr_is_handle(page))) {
>  				num_pages = 1;
>  				continue;
>  			}
> @@ -1106,26 +1170,40 @@ long ttm_pool_backup(struct ttm_pool *pool,
> struct ttm_tt *tt,
>  	if (IS_ENABLED(CONFIG_FAULT_INJECTION) &&
> should_fail(&backup_fault_inject, 1))
>  		num_pages = DIV_ROUND_UP(num_pages, 2);
>  
> -	for (i = 0; i < num_pages; ++i) {
> -		s64 shandle;
> +	for (i = 0; i < num_pages; i += npages) {
> +		unsigned int order;
>  
> +		npages = 1;
>  		page = tt->pages[i];
>  		if (unlikely(!page))
>  			continue;
>  
> -		ttm_pool_split_for_swap(pool, page);
> +		/* Already-handled entry from a previous attempt. */
> +		if (unlikely(ttm_backup_page_ptr_is_handle(page)))
> +			continue;
>  
> -		shandle = ttm_backup_backup_page(backup, page,
> flags->writeback, i,
> -						 gfp, alloc_gfp);
> -		if (shandle < 0) {
> -			/* We allow partially shrunken tts */
> -			ret = shandle;
> +		order = ttm_pool_page_order(pool, page);
> +		npages = 1UL << order;
> +
> +		/*
> +		 * Back up the compound atomically at its native
> order. If
> +		 * fault injection truncated num_pages mid-compound,
> skip
> +		 * the partial tail rather than splitting.
> +		 */
> +		if (unlikely(i + npages > num_pages))
> +			break;
> +
> +		ret = ttm_pool_backup_folio(pool, tt, backup,
> page_folio(page),
> +					    order, flags->writeback,
> i, gfp,
> +					    alloc_gfp);
> +		if (unlikely(ret < 0))
> +			break;
> +
> +		shrunken += ret;
> +
> +		/* partial backup */
> +		if (unlikely(ret != npages))
>  			break;
> -		}
> -		handle = shandle;
> -		tt->pages[i] =
> ttm_backup_handle_to_page_ptr(handle);
> -		__free_pages_gpu_account(page, 0, false);
> -		shrunken++;
>  	}
>  
>  	return shrunken ? shrunken : ret;

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v5 2/2] drm/ttm/pool: back up at native page order
  2026-05-06 14:23   ` Thomas Hellström
@ 2026-05-06 16:14     ` Matthew Brost
  2026-05-06 16:16       ` Matthew Brost
  2026-05-06 16:26       ` Thomas Hellström
  0 siblings, 2 replies; 10+ messages in thread
From: Matthew Brost @ 2026-05-06 16:14 UTC (permalink / raw)
  To: Thomas Hellström
  Cc: intel-xe, dri-devel, Christian Koenig, Huang Rui, Matthew Auld,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, linux-kernel, stable

On Wed, May 06, 2026 at 04:23:29PM +0200, Thomas Hellström wrote:
> Hi, Matt
> 
> On Tue, 2026-05-05 at 13:04 -0700, Matthew Brost wrote:
> > ttm_pool_split_for_swap() splits high-order pool pages into order-0
> > pages during backup so each 4K page can be released to the system as
> > soon as it has been written to shmem. While this minimizes the
> > allocator's working set during reclaim, it actively fragments memory:
> > every TTM-backed compound page that the shrinker touches is shattered
> > into order-0 pages, even when the rest of the system would prefer
> > that
> > the high-order block stay intact. Under sustained kswapd pressure
> > this
> > is enough to drive other parts of MM into recovery loops from which
> > they cannot easily escape, because the memory TTM just freed is no
> > longer contiguous.
> > 
> > Stop unconditionally splitting on the backup path and back up each
> > compound at its native order in ttm_pool_backup():
> > 
> >   - For each non-handle slot, read the order from the head page and
> >     back up all 1<<order subpages to consecutive shmem indices,
> >     writing the resulting handles into tt->pages[] as we go.
> >   - On success, the compound is freed once at its native order. No
> >     split_page(), no per-4K refcount juggling, no fragmentation
> >     introduced from this path.
> >   - Slots that already hold a backup handle from a previous partial
> >     attempt are skipped. A compound that would extend past a
> >     fault-injection-truncated num_pages is skipped rather than split.
> > 
> > A per-subpage backup failure cannot be made fully atomic: backing up
> > a
> > subpage allocates a shmem folio before the source page can be
> > released,
> > so under true OOM any subpage in a compound (not just the first) may
> > fail to be backed up with the rest of the source compound still live
> > and contiguous. To make forward progress in that case, fall back to
> > splitting the source compound and backing up its remaining subpages
> > individually:
> > 
> >   - On the first per-subpage failure for a compound (and only if
> >     order > 0), call ttm_pool_split_for_swap() to split the source
> >     compound, release the subpages whose contents already live in
> >     shmem (their handles in tt->pages stay valid), and retry the
> >     failing subpage at order 0.
> >   - Subsequent successful subpage backups in the now-split compound
> >     free their source page individually as soon as the handle is
> >     written.
> >   - A second failure after splitting terminates the loop with partial
> >     progress; the remaining order-0 subpages stay in tt->pages as
> >     plain page pointers and are cleaned up by the normal
> >     ttm_pool_drop_backed_up() / ttm_pool_free_range() paths.
> > 
> > This restores the original split-on-OOM fallback behavior while
> > keeping the common, non-OOM case fragmentation-free. It also
> > preserves the "partial backup is allowed" contract: shrunken is
> > incremented per backed-up subpage so the caller still sees forward
> > progress when a compound only partially succeeds.
> > 
> > The restore-side leftover-page branch in ttm_pool_restore_commit() is
> > left as-is for now: that path can still split a previously-retained
> > compound, but in practice it is unreachable under realistic workloads
> > (per profiling we have not been able to trigger it), so it is not
> > worth complicating the restore state machine to avoid the split
> > there.
> > If it ever becomes a problem in practice it can be addressed
> > independently.
> > 
> > ttm_pool_split_for_swap() itself is retained both for the OOM
> > fallback above and for the restore path's remaining caller. The
> > DMA-mapped pre-backup unmap loop, the purge path, ttm_pool_free_*,
> > and ttm_pool_unmap_and_free() already operate at native order and
> > are unchanged.
> > 
> > Cc: Christian Koenig <christian.koenig@amd.com>
> > Cc: Huang Rui <ray.huang@amd.com>
> > Cc: Matthew Auld <matthew.auld@intel.com>
> > Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> > Cc: Maxime Ripard <mripard@kernel.org>
> > Cc: Thomas Zimmermann <tzimmermann@suse.de>
> > Cc: David Airlie <airlied@gmail.com>
> > Cc: Simona Vetter <simona@ffwll.ch>
> > Cc: dri-devel@lists.freedesktop.org
> > Cc: linux-kernel@vger.kernel.org
> > Cc: stable@vger.kernel.org
> > Fixes: b63d715b8090 ("drm/ttm/pool, drm/ttm/tt: Provide a helper to
> > shrink pages")
> > Suggested-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> > Assisted-by: Claude:claude-opus-4.6
> > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > 
> > ---
> > 
> > A follow-up should attempt writeback to shmem at folio order as well,
> > but the API for doing so is unclear and may be incomplete.
> > 
> > This patch is related to the pending series [1] and significantly
> > reduces the likelihood of Xe entering a kswapd loop under
> > fragmentation.
> > The kswapd → shrinker → Xe shrinker → TTM backup path is still
> > exercised; however, with this change the backup path no longer
> > worsens
> > fragmentation, which previously amplified reclaim pressure and
> > reinforced the kswapd loop.
> > 
> > Nonetheless, the pathological case that [1] aims to address still
> > exists
> > and requires a proper solution. Even with this patch, a kswapd loop
> > due
> > to severe fragmentation can still be triggered, although it is now
> > substantially harder to reproduce.
> > 
> > v2:
> >  - Split pages and free immediately if backup fails are higher order
> >    (Thomas)
> > v3:
> >  - Skip handles in purge path (sashiko)
> > v5:
> >  - Refactor into ttm_pool_backup_folio (Thomas)
> > 
> > [1] https://patchwork.freedesktop.org/series/165330/
> > ---
> >  drivers/gpu/drm/ttm/ttm_pool.c | 110 ++++++++++++++++++++++++++++---
> > --
> >  1 file changed, 94 insertions(+), 16 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/ttm/ttm_pool.c
> > b/drivers/gpu/drm/ttm/ttm_pool.c
> > index d380a3c7fe40..78efc8524133 100644
> > --- a/drivers/gpu/drm/ttm/ttm_pool.c
> > +++ b/drivers/gpu/drm/ttm/ttm_pool.c
> > @@ -1019,6 +1019,70 @@ void ttm_pool_drop_backed_up(struct ttm_tt
> > *tt)
> >  	ttm_pool_free_range(NULL, tt, ttm_cached, start_page, tt-
> > >num_pages);
> >  }
> >  
> > +static int ttm_pool_backup_folio(struct ttm_pool *pool, struct
> > ttm_tt *tt,
> > +				 struct file *backup, struct folio
> > *folio,
> > +				 unsigned int order, bool writeback,
> > +				 pgoff_t idx, gfp_t page_gfp, gfp_t
> > alloc_gfp)
> 
> I don't really understand why we can't end up with a
> ttm_backup_backup_folio(), which I believe is the proper layering,
> already at this point? Please see a suggestion at 
> 
> https://gitlab.freedesktop.org/thomash/xe-vibe/-/commits/ttm_swapout?ref_type=heads
> 
> Here the splitting logic is kept in the ttm_pool, but ttm_backup
> supports handing large folios to it.
> 
> Although the cumulative diffstat becomes larger, the end code becomes
> smaller and IMO easier to read, and we don't need to introduce code
> that we immediately have to refactor.

That version looks fine too. If that is your preference, no issue.

My goal with this series is to get something that can reasonably be
backported to LTS kernels so the desktop doesn't frequently enter a
kswapd loop because of fragmentation. We now have at least 3 reports of
this being an issue.

There is a larger fix [1] which works in tandem but seems unlikely to
be backportable given it adds new concepts to the core MM.

[1] https://patchwork.freedesktop.org/series/165329/

> 
> But I'm starting to question the general approach: Even if the
> *shrinker* can recover from a total kernel memory reserve depletion, it
> can't really be considered a reasonable practice, since if we
> frequently deplete the reserves, *other* important allocations in the
> system like GFP_ATOMIC, PF_MEMALLOC may spuriously start to fail and
> people will have a hard time finding out why.
> 

Wouldn’t GFP_ATOMIC enter direct reclaim, hit our shrinker, and
eventually make progress—i.e., take the split path if needed? I’m not
100% sure, but my initial reaction is that this concern may not be
valid; however, MM is hard to reason about.

Again, FWIW, I’ve tried a lot of things to trigger OOM—for example,
running WebGL tabs and then kicking off various very memory-intensive
workloads from the CLI—and I still haven’t hit OOM or seen memory
allocation failures or warnings.

> So I actually don't think we can be avoiding the splitting without
> direct insertion. FWIW, up until recently when shmem started supporting

I agree direct insertion is a better solution. Do you think this is
something we could reasonably get working and backport? I haven't done
any research on direct insertion yet, thus why I'm asking.

> huge page swapping, other GPU drivers basically also split pages at
> swapout.

I wonder if other drivers have the same issue? The deadly combo is
allowing GPUs to subscribe to all of system memory, allocating THP (or
other higher-order) pages, and splitting them in the shrinker. Xe might
be the only driver with the right combo to hit this, but I'm not 100%
sure without a deep dive.

> 
> Another idea for improving on the compaction loop, perhaps worth trying
> is this change, shamelessly stolen from i915:
> 
> https://gitlab.freedesktop.org/thomash/xe-vibe/-/commits/shrinker_batch?ref_type=heads
> 

I'd have to give this a try - I'm quickly running out of time before I
leave for a month though.

Matt

> /Thomas
> 
> 
> > +{
> > +	struct page *page = folio_page(folio, 0);
> > +	int shrunken = 0, npages = 1UL << order, ret = 0, i;
> > +	bool folio_has_been_split = false;
> > +
> > +	for (i = 0; i < npages; ++i) {
> > +		s64 shandle;
> > +
> > +try_again_after_split:
> > +		if (IS_ENABLED(CONFIG_FAULT_INJECTION) &&
> > +		    should_fail(&backup_fault_inject, 1))
> > +			shandle = -ENOMEM;
> > +		else
> > +			shandle = ttm_backup_backup_page(backup,
> > page + i,
> > +							 writeback,
> > idx + i,
> > +							 page_gfp,
> > alloc_gfp);
> > +
> > +		if (shandle < 0 && !folio_has_been_split && order) {
> > +			pgoff_t j;
> > +
> > +			/*
> > +			 * True OOM: could not allocate a shmem
> > folio
> > +			 * for the next subpage. Fall back to
> > splitting
> > +			 * the source compound and backing up
> > subpages
> > +			 * individually. Release the already-backed-
> > up
> > +			 * subpages whose contents now live in
> > shmem;
> > +			 * any further failure terminates the loop
> > with
> > +			 * partial progress (handled by the caller).
> > +			 */
> > +			folio_has_been_split = true;
> > +			ttm_pool_split_for_swap(pool, page);
> > +
> > +			for (j = 0; j < i; ++j) {
> > +				__free_pages_gpu_account(page + j, 0, false);
> > +				shrunken++;
> > +			}
> > +
> > +			goto try_again_after_split;
> > +		} else if (shandle < 0) {
> > +			ret = shandle;
> > +			goto out;
> > +		} else if (folio_has_been_split) {
> > +			__free_pages_gpu_account(page + i, 0, false);
> > +			shrunken++;
> > +		}
> > +
> > +		tt->pages[idx + i] = ttm_backup_handle_to_page_ptr(shandle);
> > +	}
> > +
> > +	if (!folio_has_been_split) {
> > +		/* Compound fully backed up; free at native order. */
> > +		page->private = 0;
> > +		__free_pages_gpu_account(page, order, false);
> > +		shrunken += npages;
> > +	}
> > +
> > +out:
> > +	return shrunken ? shrunken : ret;
> > +}
> > +
> >  /**
> >   * ttm_pool_backup() - Back up or purge a struct ttm_tt
> >   * @pool: The pool used when allocating the struct ttm_tt.
> > @@ -1045,12 +1109,11 @@ long ttm_pool_backup(struct ttm_pool *pool, struct ttm_tt *tt,
> >  {
> >  	struct file *backup = tt->backup;
> >  	struct page *page;
> > -	unsigned long handle;
> >  	gfp_t alloc_gfp;
> >  	gfp_t gfp;
> >  	int ret = 0;
> >  	pgoff_t shrunken = 0;
> > -	pgoff_t i, num_pages;
> > +	pgoff_t i, num_pages, npages;
> >  
> >  	if (WARN_ON(ttm_tt_is_backed_up(tt)))
> >  		return -EINVAL;
> > @@ -1070,7 +1133,8 @@ long ttm_pool_backup(struct ttm_pool *pool, struct ttm_tt *tt,
> >  			unsigned int order;
> >  
> >  			page = tt->pages[i];
> > -			if (unlikely(!page)) {
> > +			if (unlikely(!page ||
> > +				     ttm_backup_page_ptr_is_handle(page))) {
> >  				num_pages = 1;
> >  				continue;
> >  			}
> > @@ -1106,26 +1170,40 @@ long ttm_pool_backup(struct ttm_pool *pool, struct ttm_tt *tt,
> >  	if (IS_ENABLED(CONFIG_FAULT_INJECTION) && should_fail(&backup_fault_inject, 1))
> >  		num_pages = DIV_ROUND_UP(num_pages, 2);
> >  
> > -	for (i = 0; i < num_pages; ++i) {
> > -		s64 shandle;
> > +	for (i = 0; i < num_pages; i += npages) {
> > +		unsigned int order;
> >  
> > +		npages = 1;
> >  		page = tt->pages[i];
> >  		if (unlikely(!page))
> >  			continue;
> >  
> > -		ttm_pool_split_for_swap(pool, page);
> > +		/* Already-handled entry from a previous attempt. */
> > +		if (unlikely(ttm_backup_page_ptr_is_handle(page)))
> > +			continue;
> >  
> > -		shandle = ttm_backup_backup_page(backup, page, flags->writeback, i,
> > -						 gfp, alloc_gfp);
> > -		if (shandle < 0) {
> > -			/* We allow partially shrunken tts */
> > -			ret = shandle;
> > +		order = ttm_pool_page_order(pool, page);
> > +		npages = 1UL << order;
> > +
> > +		/*
> > +		 * Back up the compound atomically at its native order. If
> > +		 * fault injection truncated num_pages mid-compound, skip
> > +		 * the partial tail rather than splitting.
> > +		 */
> > +		if (unlikely(i + npages > num_pages))
> > +			break;
> > +
> > +		ret = ttm_pool_backup_folio(pool, tt, backup, page_folio(page),
> > +					    order, flags->writeback, i, gfp,
> > +					    alloc_gfp);
> > +		if (unlikely(ret < 0))
> > +			break;
> > +
> > +		shrunken += ret;
> > +
> > +		/* partial backup */
> > +		if (unlikely(ret != npages))
> >  			break;
> > -		}
> > -		handle = shandle;
> > -		tt->pages[i] = ttm_backup_handle_to_page_ptr(handle);
> > -		__free_pages_gpu_account(page, 0, false);
> > -		shrunken++;
> >  	}
> >  
> >  	return shrunken ? shrunken : ret;

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v5 2/2] drm/ttm/pool: back up at native page order
  2026-05-06 16:14     ` Matthew Brost
@ 2026-05-06 16:16       ` Matthew Brost
  2026-05-06 16:26       ` Thomas Hellström
  1 sibling, 0 replies; 10+ messages in thread
From: Matthew Brost @ 2026-05-06 16:16 UTC (permalink / raw)
  To: Thomas Hellström
  Cc: intel-xe, dri-devel, Christian Koenig, Huang Rui, Matthew Auld,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, linux-kernel, stable

On Wed, May 06, 2026 at 09:14:13AM -0700, Matthew Brost wrote:
> On Wed, May 06, 2026 at 04:23:29PM +0200, Thomas Hellström wrote:
> > Hi, Matt
> > 
> > On Tue, 2026-05-05 at 13:04 -0700, Matthew Brost wrote:
> > > ttm_pool_split_for_swap() splits high-order pool pages into order-0
> > > pages during backup so each 4K page can be released to the system as
> > > soon as it has been written to shmem. While this minimizes the
> > > allocator's working set during reclaim, it actively fragments memory:
> > > every TTM-backed compound page that the shrinker touches is shattered
> > > into order-0 pages, even when the rest of the system would prefer
> > > that
> > > the high-order block stay intact. Under sustained kswapd pressure
> > > this
> > > is enough to drive other parts of MM into recovery loops from which
> > > they cannot easily escape, because the memory TTM just freed is no
> > > longer contiguous.
> > > 
> > > Stop unconditionally splitting on the backup path and back up each
> > > compound at its native order in ttm_pool_backup():
> > > 
> > >   - For each non-handle slot, read the order from the head page and
> > >     back up all 1<<order subpages to consecutive shmem indices,
> > >     writing the resulting handles into tt->pages[] as we go.
> > >   - On success, the compound is freed once at its native order. No
> > >     split_page(), no per-4K refcount juggling, no fragmentation
> > >     introduced from this path.
> > >   - Slots that already hold a backup handle from a previous partial
> > >     attempt are skipped. A compound that would extend past a
> > >     fault-injection-truncated num_pages is skipped rather than split.
> > > 
> > > A per-subpage backup failure cannot be made fully atomic: backing up
> > > a
> > > subpage allocates a shmem folio before the source page can be
> > > released,
> > > so under true OOM any subpage in a compound (not just the first) may
> > > fail to be backed up with the rest of the source compound still live
> > > and contiguous. To make forward progress in that case, fall back to
> > > splitting the source compound and backing up its remaining subpages
> > > individually:
> > > 
> > >   - On the first per-subpage failure for a compound (and only if
> > >     order > 0), call ttm_pool_split_for_swap() to split the source
> > >     compound, release the subpages whose contents already live in
> > >     shmem (their handles in tt->pages stay valid), and retry the
> > >     failing subpage at order 0.
> > >   - Subsequent successful subpage backups in the now-split compound
> > >     free their source page individually as soon as the handle is
> > >     written.
> > >   - A second failure after splitting terminates the loop with partial
> > >     progress; the remaining order-0 subpages stay in tt->pages as
> > >     plain page pointers and are cleaned up by the normal
> > >     ttm_pool_drop_backed_up() / ttm_pool_free_range() paths.
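The native-order backup and split-on-OOM fallback described above can be sketched as a toy model (pure Python, an illustration only, not kernel code; `backup_page` is a hypothetical stand-in for ttm_backup_backup_page() that returns a handle on success or None on failure, and handles are modeled as negative integers so they are distinguishable from live page ids):

```python
# Toy model of backing up a ttm_tt at native page order, with the
# split-on-first-failure fallback. pages: list of non-negative page
# ids, None, or negative handles; orders: order of the compound
# starting at each head index.
def backup_tt(pages, orders, backup_page):
    shrunken = 0
    i = 0
    while i < len(pages):
        # Skip empty slots and handles left by a previous partial attempt.
        if pages[i] is None or pages[i] < 0:
            i += 1
            continue
        npages = 1 << orders[i]     # back up at native order
        split = False
        for j in range(npages):
            handle = backup_page(pages[i + j])
            if handle is None and not split and orders[i] > 0:
                # First per-subpage failure: split the source compound,
                # count the subpages already living in shmem as freed,
                # and retry the failing subpage at order 0.
                split = True
                shrunken += j
                handle = backup_page(pages[i + j])
            if handle is None:
                return shrunken     # partial progress is allowed
            pages[i + j] = handle
            if split:
                shrunken += 1       # freed individually after the split
        if not split:
            shrunken += npages      # freed once at native order
        i += npages
    return shrunken
```

A second failure after the split terminates with partial progress; the remaining slots keep plain page ids, mirroring the cleanup contract described above.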
> > > 
> > > This restores the original split-on-OOM fallback behavior while
> > > keeping the common, non-OOM case fragmentation-free. It also
> > > preserves the "partial backup is allowed" contract: shrunken is
> > > incremented per backed-up subpage so the caller still sees forward
> > > progress when a compound only partially succeeds.
> > > 
> > > The restore-side leftover-page branch in ttm_pool_restore_commit() is
> > > left as-is for now: that path can still split a previously-retained
> > > compound, but in practice it is unreachable under realistic workloads
> > > (per profiling we have not been able to trigger it), so it is not
> > > worth complicating the restore state machine to avoid the split
> > > there.
> > > If it ever becomes a problem in practice it can be addressed
> > > independently.
> > > 
> > > ttm_pool_split_for_swap() itself is retained both for the OOM
> > > fallback above and for the restore path's remaining caller. The
> > > DMA-mapped pre-backup unmap loop, the purge path, ttm_pool_free_*,
> > > and ttm_pool_unmap_and_free() already operate at native order and
> > > are unchanged.
> > > 
> > > Cc: Christian Koenig <christian.koenig@amd.com>
> > > Cc: Huang Rui <ray.huang@amd.com>
> > > Cc: Matthew Auld <matthew.auld@intel.com>
> > > Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> > > Cc: Maxime Ripard <mripard@kernel.org>
> > > Cc: Thomas Zimmermann <tzimmermann@suse.de>
> > > Cc: David Airlie <airlied@gmail.com>
> > > Cc: Simona Vetter <simona@ffwll.ch>
> > > Cc: dri-devel@lists.freedesktop.org
> > > Cc: linux-kernel@vger.kernel.org
> > > Cc: stable@vger.kernel.org
> > > Fixes: b63d715b8090 ("drm/ttm/pool, drm/ttm/tt: Provide a helper to
> > > shrink pages")
> > > Suggested-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> > > Assisted-by: Claude:claude-opus-4.6
> > > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > > 
> > > ---
> > > 
> > > A follow-up should attempt writeback to shmem at folio order as well,
> > > but the API for doing so is unclear and may be incomplete.
> > > 
> > > This patch is related to the pending series [1] and significantly
> > > reduces the likelihood of Xe entering a kswapd loop under
> > > fragmentation.
> > > The kswapd → shrinker → Xe shrinker → TTM backup path is still
> > > exercised; however, with this change the backup path no longer
> > > worsens
> > > fragmentation, which previously amplified reclaim pressure and
> > > reinforced the kswapd loop.
> > > 
> > > Nonetheless, the pathological case that [1] aims to address still
> > > exists
> > > and requires a proper solution. Even with this patch, a kswapd loop
> > > due
> > > to severe fragmentation can still be triggered, although it is now
> > > substantially harder to reproduce.
> > > 
> > > v2:
> > >  - Split pages and free immediately if backup fails at higher order
> > >    (Thomas)
> > > v3:
> > >  - Skip handles in purge path (sashiko)
> > > v5:
> > >  - Refactor into ttm_pool_backup_folio (Thomas)
> > > 
> > > [1] https://patchwork.freedesktop.org/series/165330/
> > > ---
> > >  drivers/gpu/drm/ttm/ttm_pool.c | 110 ++++++++++++++++++++++++++++-----
> > >  1 file changed, 94 insertions(+), 16 deletions(-)
> > > 
> > > diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
> > > index d380a3c7fe40..78efc8524133 100644
> > > --- a/drivers/gpu/drm/ttm/ttm_pool.c
> > > +++ b/drivers/gpu/drm/ttm/ttm_pool.c
> > > @@ -1019,6 +1019,70 @@ void ttm_pool_drop_backed_up(struct ttm_tt *tt)
> > >  	ttm_pool_free_range(NULL, tt, ttm_cached, start_page, tt->num_pages);
> > >  }
> > >  
> > > +static int ttm_pool_backup_folio(struct ttm_pool *pool, struct ttm_tt *tt,
> > > +				 struct file *backup, struct folio *folio,
> > > +				 unsigned int order, bool writeback,
> > > +				 pgoff_t idx, gfp_t page_gfp, gfp_t alloc_gfp)
> > 
> > I don't really understand why we can't end up with a
> > ttm_backup_backup_folio(), which I believe is the proper layering,
> > already at this point? Please see a suggestion at 
> > 
> > https://gitlab.freedesktop.org/thomash/xe-vibe/-/commits/ttm_swapout?ref_type=heads
> > 
> > Here the splitting logic is kept in the ttm_pool, but ttm_backup
> > supports handing large folios to it.
> > 
> > Although the cumulative diffstat becomes larger, the end code becomes
> > smaller and IMO easier to read, and we don't need to introduce code
> > that we immediately have to refactor.
> 
> That version looks fine too. If that is the preference, no issue.
> 
> My goal with this series is to get something that can reasonably be
> backported to LTS kernels so the desktop doesn't frequently enter kswapd
> because of fragmentation. We now have at least 3 reports of this being
> an issue.
> 
> There is a larger fix [1] which works in tandem but seems unlikely to be
> backportable given it adds new concepts to the core MM.
> 
> [1] https://patchwork.freedesktop.org/series/165329/
> 
> > 
> > But I'm starting to question the general approach: Even if the
> > *shrinker* can recover from a total kernel memory reserve depletion, it
> > can't really be considered a reasonable practice, since if we
> > frequently deplete the reserves, *other* important allocations in the
> > system like GFP_ATOMIC, PF_MEMALLOC may spuriously start to fail and
> > people will have a hard time finding out why.
> > 
> 
> Wouldn’t GFP_ATOMIC enter direct reclaim, hit our shrinker, and
> eventually make progress—i.e., take the split path if needed? I’m not
> 100% sure, but my initial reaction is that this concern may not be
> valid; however, MM is hard to reason about.
> 
> Again, FWIW, I’ve tried a lot of things to trigger OOM—for example,
> running WebGL tabs and then kicking off various very memory-intensive
> workloads from the CLI—and I still haven’t hit OOM or seen memory
> allocation failures or warnings.
> 
> > So I actually don't think we can be avoiding the splitting without
> > direct insertion. FWIW, up until recently when shmem started supporting
> 
> I agree direct insertion is the better solution. Do you think this is
> something we could reasonably get working and backport? I haven't done
> any research on direct insertion yet, which is why I'm asking.
> 
> > huge page swapping, other GPU drivers basically also split pages at
> > swapout.
> 
> I wonder if other drivers have the same issue? The deadly combo is
> allowing GPUs to subscribe to all of system memory, allocating THP pages
> (or higher order pages), and splitting them in the shrinker. Xe might be
> the only driver with the right combo to hit this, but I'm not 100% sure
> without a deep dive.
> 

+ For completeness, the THP allocation must have GFP flags that allow entering reclaim.

Matt

> > 
> > Another idea for improving on the compaction loop, perhaps worth trying
> > is this change, shamelessly stolen from i915:
> > 
> > https://gitlab.freedesktop.org/thomash/xe-vibe/-/commits/shrinker_batch?ref_type=heads
> > 
> 
> I'd have to give this a try - I'm quickly running out of time before I
> leave for a month, though.
> 
> Matt
> 
> > /Thomas
> > 
> > 
> > > +{
> > > +	struct page *page = folio_page(folio, 0);
> > > +	int shrunken = 0, npages = 1UL << order, ret = 0, i;
> > > +	bool folio_has_been_split = false;
> > > +
> > > +	for (i = 0; i < npages; ++i) {
> > > +		s64 shandle;
> > > +
> > > +try_again_after_split:
> > > +		if (IS_ENABLED(CONFIG_FAULT_INJECTION) &&
> > > +		    should_fail(&backup_fault_inject, 1))
> > > +			shandle = -ENOMEM;
> > > +		else
> > > +			shandle = ttm_backup_backup_page(backup, page + i,
> > > +							 writeback, idx + i,
> > > +							 page_gfp, alloc_gfp);
> > > +
> > > +		if (shandle < 0 && !folio_has_been_split && order) {
> > > +			pgoff_t j;
> > > +
> > > +			/*
> > > +			 * True OOM: could not allocate a shmem folio
> > > +			 * for the next subpage. Fall back to splitting
> > > +			 * the source compound and backing up subpages
> > > +			 * individually. Release the already-backed-up
> > > +			 * subpages whose contents now live in shmem;
> > > +			 * any further failure terminates the loop with
> > > +			 * partial progress (handled by the caller).
> > > +			 */
> > > +			folio_has_been_split = true;
> > > +			ttm_pool_split_for_swap(pool, page);
> > > +
> > > +			for (j = 0; j < i; ++j) {
> > > +				__free_pages_gpu_account(page + j, 0, false);
> > > +				shrunken++;
> > > +			}
> > > +
> > > +			goto try_again_after_split;
> > > +		} else if (shandle < 0) {
> > > +			ret = shandle;
> > > +			goto out;
> > > +		} else if (folio_has_been_split) {
> > > +			__free_pages_gpu_account(page + i, 0, false);
> > > +			shrunken++;
> > > +		}
> > > +
> > > +		tt->pages[idx + i] = ttm_backup_handle_to_page_ptr(shandle);
> > > +	}
> > > +
> > > +	if (!folio_has_been_split) {
> > > +		/* Compound fully backed up; free at native order. */
> > > +		page->private = 0;
> > > +		__free_pages_gpu_account(page, order, false);
> > > +		shrunken += npages;
> > > +	}
> > > +
> > > +out:
> > > +	return shrunken ? shrunken : ret;
> > > +}
> > > +
> > >  /**
> > >   * ttm_pool_backup() - Back up or purge a struct ttm_tt
> > >   * @pool: The pool used when allocating the struct ttm_tt.
> > > @@ -1045,12 +1109,11 @@ long ttm_pool_backup(struct ttm_pool *pool, struct ttm_tt *tt,
> > >  {
> > >  	struct file *backup = tt->backup;
> > >  	struct page *page;
> > > -	unsigned long handle;
> > >  	gfp_t alloc_gfp;
> > >  	gfp_t gfp;
> > >  	int ret = 0;
> > >  	pgoff_t shrunken = 0;
> > > -	pgoff_t i, num_pages;
> > > +	pgoff_t i, num_pages, npages;
> > >  
> > >  	if (WARN_ON(ttm_tt_is_backed_up(tt)))
> > >  		return -EINVAL;
> > > @@ -1070,7 +1133,8 @@ long ttm_pool_backup(struct ttm_pool *pool, struct ttm_tt *tt,
> > >  			unsigned int order;
> > >  
> > >  			page = tt->pages[i];
> > > -			if (unlikely(!page)) {
> > > +			if (unlikely(!page ||
> > > +				     ttm_backup_page_ptr_is_handle(page))) {
> > >  				num_pages = 1;
> > >  				continue;
> > >  			}
> > > @@ -1106,26 +1170,40 @@ long ttm_pool_backup(struct ttm_pool *pool, struct ttm_tt *tt,
> > >  	if (IS_ENABLED(CONFIG_FAULT_INJECTION) && should_fail(&backup_fault_inject, 1))
> > >  		num_pages = DIV_ROUND_UP(num_pages, 2);
> > >  
> > > -	for (i = 0; i < num_pages; ++i) {
> > > -		s64 shandle;
> > > +	for (i = 0; i < num_pages; i += npages) {
> > > +		unsigned int order;
> > >  
> > > +		npages = 1;
> > >  		page = tt->pages[i];
> > >  		if (unlikely(!page))
> > >  			continue;
> > >  
> > > -		ttm_pool_split_for_swap(pool, page);
> > > +		/* Already-handled entry from a previous attempt. */
> > > +		if (unlikely(ttm_backup_page_ptr_is_handle(page)))
> > > +			continue;
> > >  
> > > -		shandle = ttm_backup_backup_page(backup, page, flags->writeback, i,
> > > -						 gfp, alloc_gfp);
> > > -		if (shandle < 0) {
> > > -			/* We allow partially shrunken tts */
> > > -			ret = shandle;
> > > +		order = ttm_pool_page_order(pool, page);
> > > +		npages = 1UL << order;
> > > +
> > > +		/*
> > > +		 * Back up the compound atomically at its native order. If
> > > +		 * fault injection truncated num_pages mid-compound, skip
> > > +		 * the partial tail rather than splitting.
> > > +		 */
> > > +		if (unlikely(i + npages > num_pages))
> > > +			break;
> > > +
> > > +		ret = ttm_pool_backup_folio(pool, tt, backup, page_folio(page),
> > > +					    order, flags->writeback, i, gfp,
> > > +					    alloc_gfp);
> > > +		if (unlikely(ret < 0))
> > > +			break;
> > > +
> > > +		shrunken += ret;
> > > +
> > > +		/* partial backup */
> > > +		if (unlikely(ret != npages))
> > >  			break;
> > > -		}
> > > -		handle = shandle;
> > > -		tt->pages[i] = ttm_backup_handle_to_page_ptr(handle);
> > > -		__free_pages_gpu_account(page, 0, false);
> > > -		shrunken++;
> > >  	}
> > >  
> > >  	return shrunken ? shrunken : ret;


* Re: [PATCH v5 2/2] drm/ttm/pool: back up at native page order
  2026-05-06 16:14     ` Matthew Brost
  2026-05-06 16:16       ` Matthew Brost
@ 2026-05-06 16:26       ` Thomas Hellström
  2026-05-06 18:05         ` Matthew Brost
  1 sibling, 1 reply; 10+ messages in thread
From: Thomas Hellström @ 2026-05-06 16:26 UTC (permalink / raw)
  To: Matthew Brost
  Cc: intel-xe, dri-devel, Christian Koenig, Huang Rui, Matthew Auld,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, linux-kernel, stable

On Wed, 2026-05-06 at 09:14 -0700, Matthew Brost wrote:
> On Wed, May 06, 2026 at 04:23:29PM +0200, Thomas Hellström wrote:
> > Hi, Matt
> > 
> > On Tue, 2026-05-05 at 13:04 -0700, Matthew Brost wrote:
> > > ttm_pool_split_for_swap() splits high-order pool pages into
> > > order-0
> > > pages during backup so each 4K page can be released to the system
> > > as
> > > soon as it has been written to shmem. While this minimizes the
> > > allocator's working set during reclaim, it actively fragments
> > > memory:
> > > every TTM-backed compound page that the shrinker touches is
> > > shattered
> > > into order-0 pages, even when the rest of the system would prefer
> > > that
> > > the high-order block stay intact. Under sustained kswapd pressure
> > > this
> > > is enough to drive other parts of MM into recovery loops from
> > > which
> > > they cannot easily escape, because the memory TTM just freed is
> > > no
> > > longer contiguous.
> > > 
> > > Stop unconditionally splitting on the backup path and back up
> > > each
> > > compound at its native order in ttm_pool_backup():
> > > 
> > >   - For each non-handle slot, read the order from the head page
> > > and
> > >     back up all 1<<order subpages to consecutive shmem indices,
> > >     writing the resulting handles into tt->pages[] as we go.
> > >   - On success, the compound is freed once at its native order.
> > > No
> > >     split_page(), no per-4K refcount juggling, no fragmentation
> > >     introduced from this path.
> > >   - Slots that already hold a backup handle from a previous
> > > partial
> > >     attempt are skipped. A compound that would extend past a
> > >     fault-injection-truncated num_pages is skipped rather than
> > > split.
> > > 
> > > A per-subpage backup failure cannot be made fully atomic: backing
> > > up
> > > a
> > > subpage allocates a shmem folio before the source page can be
> > > released,
> > > so under true OOM any subpage in a compound (not just the first)
> > > may
> > > fail to be backed up with the rest of the source compound still
> > > live
> > > and contiguous. To make forward progress in that case, fall back
> > > to
> > > splitting the source compound and backing up its remaining
> > > subpages
> > > individually:
> > > 
> > >   - On the first per-subpage failure for a compound (and only if
> > >     order > 0), call ttm_pool_split_for_swap() to split the
> > > source
> > >     compound, release the subpages whose contents already live in
> > >     shmem (their handles in tt->pages stay valid), and retry the
> > >     failing subpage at order 0.
> > >   - Subsequent successful subpage backups in the now-split
> > > compound
> > >     free their source page individually as soon as the handle is
> > >     written.
> > >   - A second failure after splitting terminates the loop with
> > > partial
> > >     progress; the remaining order-0 subpages stay in tt->pages as
> > >     plain page pointers and are cleaned up by the normal
> > >     ttm_pool_drop_backed_up() / ttm_pool_free_range() paths.
> > > 
> > > This restores the original split-on-OOM fallback behavior while
> > > keeping the common, non-OOM case fragmentation-free. It also
> > > preserves the "partial backup is allowed" contract: shrunken is
> > > incremented per backed-up subpage so the caller still sees
> > > forward
> > > progress when a compound only partially succeeds.
> > > 
> > > The restore-side leftover-page branch in
> > > ttm_pool_restore_commit() is
> > > left as-is for now: that path can still split a previously-
> > > retained
> > > compound, but in practice it is unreachable under realistic
> > > workloads
> > > (per profiling we have not been able to trigger it), so it is not
> > > worth complicating the restore state machine to avoid the split
> > > there.
> > > If it ever becomes a problem in practice it can be addressed
> > > independently.
> > > 
> > > ttm_pool_split_for_swap() itself is retained both for the OOM
> > > fallback above and for the restore path's remaining caller. The
> > > DMA-mapped pre-backup unmap loop, the purge path,
> > > ttm_pool_free_*,
> > > and ttm_pool_unmap_and_free() already operate at native order and
> > > are unchanged.
> > > 
> > > Cc: Christian Koenig <christian.koenig@amd.com>
> > > Cc: Huang Rui <ray.huang@amd.com>
> > > Cc: Matthew Auld <matthew.auld@intel.com>
> > > Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> > > Cc: Maxime Ripard <mripard@kernel.org>
> > > Cc: Thomas Zimmermann <tzimmermann@suse.de>
> > > Cc: David Airlie <airlied@gmail.com>
> > > Cc: Simona Vetter <simona@ffwll.ch>
> > > Cc: dri-devel@lists.freedesktop.org
> > > Cc: linux-kernel@vger.kernel.org
> > > Cc: stable@vger.kernel.org
> > > Fixes: b63d715b8090 ("drm/ttm/pool, drm/ttm/tt: Provide a helper
> > > to
> > > shrink pages")
> > > Suggested-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> > > Assisted-by: Claude:claude-opus-4.6
> > > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > > 
> > > ---
> > > 
> > > A follow-up should attempt writeback to shmem at folio order as
> > > well,
> > > but the API for doing so is unclear and may be incomplete.
> > > 
> > > This patch is related to the pending series [1] and significantly
> > > reduces the likelihood of Xe entering a kswapd loop under
> > > fragmentation.
> > > The kswapd → shrinker → Xe shrinker → TTM backup path is still
> > > exercised; however, with this change the backup path no longer
> > > worsens
> > > fragmentation, which previously amplified reclaim pressure and
> > > reinforced the kswapd loop.
> > > 
> > > Nonetheless, the pathological case that [1] aims to address still
> > > exists
> > > and requires a proper solution. Even with this patch, a kswapd
> > > loop
> > > due
> > > to severe fragmentation can still be triggered, although it is
> > > now
> > > substantially harder to reproduce.
> > > 
> > > v2:
> > >  - Split pages and free immediately if backup fails at higher order
> > >    (Thomas)
> > > v3:
> > >  - Skip handles in purge path (sashiko)
> > > v5:
> > >  - Refactor into ttm_pool_backup_folio (Thomas)
> > > 
> > > [1] https://patchwork.freedesktop.org/series/165330/
> > > ---
> > >  drivers/gpu/drm/ttm/ttm_pool.c | 110 ++++++++++++++++++++++++++++-----
> > >  1 file changed, 94 insertions(+), 16 deletions(-)
> > > 
> > > diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
> > > index d380a3c7fe40..78efc8524133 100644
> > > --- a/drivers/gpu/drm/ttm/ttm_pool.c
> > > +++ b/drivers/gpu/drm/ttm/ttm_pool.c
> > > @@ -1019,6 +1019,70 @@ void ttm_pool_drop_backed_up(struct ttm_tt *tt)
> > >  	ttm_pool_free_range(NULL, tt, ttm_cached, start_page, tt->num_pages);
> > >  }
> > >  
> > > +static int ttm_pool_backup_folio(struct ttm_pool *pool, struct ttm_tt *tt,
> > > +				 struct file *backup, struct folio *folio,
> > > +				 unsigned int order, bool writeback,
> > > +				 pgoff_t idx, gfp_t page_gfp, gfp_t alloc_gfp)
> > 
> > I don't really understand why we can't end up with a
> > ttm_backup_backup_folio(), which I believe is the proper layering,
> > already at this point? Please see a suggestion at 
> > 
> > https://gitlab.freedesktop.org/thomash/xe-vibe/-/commits/ttm_swapout?ref_type=heads
> > 
> > Here the splitting logic is kept in the ttm_pool, but ttm_backup
> > supports handing large folios to it.
> > 
> > Although the cumulative diffstat becomes larger, the end code
> > becomes
> > smaller and IMO easier to read, and we don't need to introduce code
> > that we immediately have to refactor.
> 
> That version looks fine too. If that is the preference, no issue.

Cool. Note that there is a bug in that we don't pass the folio order
into ttm_backup_backup_folio(). I'm force-pushing a fix for that.


> 
> My goal with this series is to get something that can reasonably be
> backported to LTS kernels so the desktop doesn't frequently enter kswapd
> because of fragmentation. We now have at least 3 reports of this being
> an issue.
> 
> There is a larger fix [1] which works in tandem but seems unlikely to be
> backportable given it adds new concepts to the core MM.
> 
> [1] https://patchwork.freedesktop.org/series/165329/
> 
> > 
> > But I'm starting to question the general approach: Even if the
> > *shrinker* can recover from a total kernel memory reserve
> > depletion, it
> > can't really be considered a reasonable practice, since if we
> > frequently deplete the reserves, *other* important allocations in
> > the
> > system like GFP_ATOMIC, PF_MEMALLOC may spuriously start to fail
> > and
> > people will have a hard time finding out why.
> > 
> 
> Wouldn’t GFP_ATOMIC enter direct reclaim, hit our shrinker, and
> eventually make progress—i.e., take the split path if needed? I’m not
> 100% sure, but my initial reaction is that this concern may not be
> valid; however, MM is hard to reason about.

No, GFP_ATOMIC just uses what's available without any reclaim at all.
It's more aggressive than GFP_NOWAIT in that it allows dipping into the
kernel reserves.
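
The distinction can be sketched with a toy model (pure Python; the flag names mirror the kernel's, but the bit values here are illustrative assumptions, not the real gfp_types.h definitions): GFP_ATOMIC may wake kswapd and dip into reserves, but it never enters direct reclaim, so it cannot synchronously drive shrinkers to make its own progress.

```python
# Illustrative bit values only; the real ones live in include/linux/gfp_types.h.
GFP_DIRECT_RECLAIM = 1 << 0  # caller may block and reclaim synchronously
GFP_KSWAPD_RECLAIM = 1 << 1  # wake kswapd for background reclaim
GFP_HIGH           = 1 << 2  # may dip into kernel memory reserves

GFP_KERNEL = GFP_DIRECT_RECLAIM | GFP_KSWAPD_RECLAIM
GFP_ATOMIC = GFP_HIGH | GFP_KSWAPD_RECLAIM  # note: no direct-reclaim bit

def can_invoke_shrinkers(gfp):
    # Shrinkers run from direct reclaim (or from kswapd in the
    # background), so an allocation only drives them synchronously
    # if its flags permit direct reclaim.
    return bool(gfp & GFP_DIRECT_RECLAIM)
```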

> 
> Again, FWIW, I’ve tried a lot of things to trigger OOM—for example,
> running WebGL tabs and then kicking off various very memory-intensive
> workloads from the CLI—and I still haven’t hit OOM or seen memory
> allocation failures or warnings.
> 
> > So I actually don't think we can be avoiding the splitting without
> > direct insertion. FWIW, up until recently when shmem started
> > supporting
> 
> I agree direct insertion is the better solution. Do you think this is
> something we could reasonably get working and backport? I haven't done
> any research on direct insertion yet, which is why I'm asking.

Yes I think so. The problem would be to get it accepted. Looking into
that now, but hitting various kinds of subtle issues.

Thanks,
Thomas


> 
> > huge page swapping, other GPU drivers basically also split pages at
> > swapout.
> 
> I wonder if other drivers have the same issue? The deadly combo is
> allowing GPUs to subscribe to all of system memory, allocating THP pages
> (or higher order pages), and splitting them in the shrinker. Xe might be
> the only driver with the right combo to hit this, but I'm not 100% sure
> without a deep dive.
> 
> > 
> > Another idea for improving on the compaction loop, perhaps worth
> > trying
> > is this change, shamelessly stolen from i915:
> > 
> > https://gitlab.freedesktop.org/thomash/xe-vibe/-/commits/shrinker_batch?ref_type=heads
> > 
> 
> I'd have to give this a try - I'm quickly running out of time before I
> leave for a month, though.
> 
> Matt
> 
> > /Thomas
> > 
> > 
> > > +{
> > > +	struct page *page = folio_page(folio, 0);
> > > +	int shrunken = 0, npages = 1UL << order, ret = 0, i;
> > > +	bool folio_has_been_split = false;
> > > +
> > > +	for (i = 0; i < npages; ++i) {
> > > +		s64 shandle;
> > > +
> > > +try_again_after_split:
> > > +		if (IS_ENABLED(CONFIG_FAULT_INJECTION) &&
> > > +		    should_fail(&backup_fault_inject, 1))
> > > +			shandle = -ENOMEM;
> > > +		else
> > > +			shandle = ttm_backup_backup_page(backup,
> > > page + i,
> > > +							
> > > writeback,
> > > idx + i,
> > > +							
> > > page_gfp,
> > > alloc_gfp);
> > > +
> > > +		if (shandle < 0 && !folio_has_been_split &&
> > > order) {
> > > +			pgoff_t j;
> > > +
> > > +			/*
> > > +			 * True OOM: could not allocate a shmem
> > > folio
> > > +			 * for the next subpage. Fall back to
> > > splitting
> > > +			 * the source compound and backing up
> > > subpages
> > > +			 * individually. Release the already-
> > > backed-
> > > up
> > > +			 * subpages whose contents now live in
> > > shmem;
> > > +			 * any further failure terminates the
> > > loop
> > > with
> > > +			 * partial progress (handled by the
> > > caller).
> > > +			 */
> > > +			folio_has_been_split = true;
> > > +			ttm_pool_split_for_swap(pool, page);
> > > +
> > > +			for (j = 0; j < i; ++j) {
> > > +				__free_pages_gpu_account(page +
> > > j,
> > > 0, false);
> > > +				shrunken++;
> > > +			}
> > > +
> > > +			goto try_again_after_split;
> > > +		} else if (shandle < 0) {
> > > +			ret = shandle;
> > > +			goto out;
> > > +		} else if (folio_has_been_split) {
> > > +			__free_pages_gpu_account(page + i, 0,
> > > false);
> > > +			shrunken++;
> > > +		}
> > > +
> > > +		tt->pages[idx + i] =
> > > ttm_backup_handle_to_page_ptr(shandle);
> > > +	}
> > > +
> > > +	if (!folio_has_been_split) {
> > > +		/* Compound fully backed up; free at native
> > > order.
> > > */
> > > +		page->private = 0;
> > > +		__free_pages_gpu_account(page, order, false);
> > > +		shrunken += npages;
> > > +	}
> > > +
> > > +out:
> > > +	return shrunken ? shrunken : ret;
> > > +}
> > > +
> > >  /**
> > >   * ttm_pool_backup() - Back up or purge a struct ttm_tt
> > >   * @pool: The pool used when allocating the struct ttm_tt.
> > > @@ -1045,12 +1109,11 @@ long ttm_pool_backup(struct ttm_pool
> > > *pool,
> > > struct ttm_tt *tt,
> > >  {
> > >  	struct file *backup = tt->backup;
> > >  	struct page *page;
> > > -	unsigned long handle;
> > >  	gfp_t alloc_gfp;
> > >  	gfp_t gfp;
> > >  	int ret = 0;
> > >  	pgoff_t shrunken = 0;
> > > -	pgoff_t i, num_pages;
> > > +	pgoff_t i, num_pages, npages;
> > >  
> > >  	if (WARN_ON(ttm_tt_is_backed_up(tt)))
> > >  		return -EINVAL;
> > > @@ -1070,7 +1133,8 @@ long ttm_pool_backup(struct ttm_pool *pool,
> > > struct ttm_tt *tt,
> > >  			unsigned int order;
> > >  
> > >  			page = tt->pages[i];
> > > -			if (unlikely(!page)) {
> > > +			if (unlikely(!page ||
> > > +				    
> > > ttm_backup_page_ptr_is_handle(page))) {
> > >  				num_pages = 1;
> > >  				continue;
> > >  			}
> > > @@ -1106,26 +1170,40 @@ long ttm_pool_backup(struct ttm_pool
> > > *pool,
> > > struct ttm_tt *tt,
> > >  	if (IS_ENABLED(CONFIG_FAULT_INJECTION) &&
> > > should_fail(&backup_fault_inject, 1))
> > >  		num_pages = DIV_ROUND_UP(num_pages, 2);
> > >  
> > > -	for (i = 0; i < num_pages; ++i) {
> > > -		s64 shandle;
> > > +	for (i = 0; i < num_pages; i += npages) {
> > > +		unsigned int order;
> > >  
> > > +		npages = 1;
> > >  		page = tt->pages[i];
> > >  		if (unlikely(!page))
> > >  			continue;
> > >  
> > > -		ttm_pool_split_for_swap(pool, page);
> > > +		/* Already-handled entry from a previous
> > > attempt. */
> > > +		if
> > > (unlikely(ttm_backup_page_ptr_is_handle(page)))
> > > +			continue;
> > >  
> > > -		shandle = ttm_backup_backup_page(backup, page,
> > > flags->writeback, i,
> > > -						 gfp,
> > > alloc_gfp);
> > > -		if (shandle < 0) {
> > > -			/* We allow partially shrunken tts */
> > > -			ret = shandle;
> > > +		order = ttm_pool_page_order(pool, page);
> > > +		npages = 1UL << order;
> > > +
> > > +		/*
> > > +		 * Back up the compound atomically at its native order. If
> > > +		 * fault injection truncated num_pages mid-compound, skip
> > > +		 * the partial tail rather than splitting.
> > > +		 */
> > > +		if (unlikely(i + npages > num_pages))
> > > +			break;
> > > +
> > > +		ret = ttm_pool_backup_folio(pool, tt, backup, page_folio(page),
> > > +					    order, flags->writeback, i, gfp,
> > > +					    alloc_gfp);
> > > +		if (unlikely(ret < 0))
> > > +			break;
> > > +
> > > +		shrunken += ret;
> > > +
> > > +		/* partial backup */
> > > +		if (unlikely(ret != npages))
> > >  			break;
> > > -		}
> > > -		handle = shandle;
> > > -		tt->pages[i] = ttm_backup_handle_to_page_ptr(handle);
> > > -		__free_pages_gpu_account(page, 0, false);
> > > -		shrunken++;
> > >  	}
> > >  
> > >  	return shrunken ? shrunken : ret;

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v5 2/2] drm/ttm/pool: back up at native page order
  2026-05-06 16:26       ` Thomas Hellström
@ 2026-05-06 18:05         ` Matthew Brost
  0 siblings, 0 replies; 10+ messages in thread
From: Matthew Brost @ 2026-05-06 18:05 UTC (permalink / raw)
  To: Thomas Hellström
  Cc: intel-xe, dri-devel, Christian Koenig, Huang Rui, Matthew Auld,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, linux-kernel, stable

On Wed, May 06, 2026 at 06:26:43PM +0200, Thomas Hellström wrote:
> On Wed, 2026-05-06 at 09:14 -0700, Matthew Brost wrote:
> > On Wed, May 06, 2026 at 04:23:29PM +0200, Thomas Hellström wrote:
> > > Hi, Matt
> > > 
> > > On Tue, 2026-05-05 at 13:04 -0700, Matthew Brost wrote:
> > > > ttm_pool_split_for_swap() splits high-order pool pages into
> > > > order-0
> > > > pages during backup so each 4K page can be released to the system
> > > > as
> > > > soon as it has been written to shmem. While this minimizes the
> > > > allocator's working set during reclaim, it actively fragments
> > > > memory:
> > > > every TTM-backed compound page that the shrinker touches is
> > > > shattered
> > > > into order-0 pages, even when the rest of the system would prefer
> > > > that
> > > > the high-order block stay intact. Under sustained kswapd pressure
> > > > this
> > > > is enough to drive other parts of MM into recovery loops from
> > > > which
> > > > they cannot easily escape, because the memory TTM just freed is
> > > > no
> > > > longer contiguous.
> > > > 
> > > > Stop unconditionally splitting on the backup path and back up
> > > > each
> > > > compound at its native order in ttm_pool_backup():
> > > > 
> > > >   - For each non-handle slot, read the order from the head page and
> > > >     back up all 1<<order subpages to consecutive shmem indices,
> > > >     writing the resulting handles into tt->pages[] as we go.
> > > >   - On success, the compound is freed once at its native order. No
> > > >     split_page(), no per-4K refcount juggling, no fragmentation
> > > >     introduced from this path.
> > > >   - Slots that already hold a backup handle from a previous partial
> > > >     attempt are skipped. A compound that would extend past a
> > > >     fault-injection-truncated num_pages is skipped rather than split.
> > > > 
> > > > A per-subpage backup failure cannot be made fully atomic: backing up
> > > > a subpage allocates a shmem folio before the source page can be
> > > > released, so under true OOM any subpage in a compound (not just the
> > > > first) may fail to be backed up with the rest of the source compound
> > > > still live and contiguous. To make forward progress in that case,
> > > > fall back to splitting the source compound and backing up its
> > > > remaining subpages individually:
> > > > 
> > > >   - On the first per-subpage failure for a compound (and only if
> > > >     order > 0), call ttm_pool_split_for_swap() to split the source
> > > >     compound, release the subpages whose contents already live in
> > > >     shmem (their handles in tt->pages stay valid), and retry the
> > > >     failing subpage at order 0.
> > > >   - Subsequent successful subpage backups in the now-split compound
> > > >     free their source page individually as soon as the handle is
> > > >     written.
> > > >   - A second failure after splitting terminates the loop with partial
> > > >     progress; the remaining order-0 subpages stay in tt->pages as
> > > >     plain page pointers and are cleaned up by the normal
> > > >     ttm_pool_drop_backed_up() / ttm_pool_free_range() paths.
> > > > 
> > > > This restores the original split-on-OOM fallback behavior while
> > > > keeping the common, non-OOM case fragmentation-free. It also
> > > > preserves the "partial backup is allowed" contract: shrunken is
> > > > incremented per backed-up subpage so the caller still sees forward
> > > > progress when a compound only partially succeeds.
> > > > 
> > > > The restore-side leftover-page branch in ttm_pool_restore_commit()
> > > > is left as-is for now: that path can still split a
> > > > previously-retained compound, but in practice it is unreachable
> > > > under realistic workloads (per profiling we have not been able to
> > > > trigger it), so it is not worth complicating the restore state
> > > > machine to avoid the split there. If it ever becomes a problem in
> > > > practice it can be addressed independently.
> > > > 
> > > > ttm_pool_split_for_swap() itself is retained both for the OOM
> > > > fallback above and for the restore path's remaining caller. The
> > > > DMA-mapped pre-backup unmap loop, the purge path, ttm_pool_free_*,
> > > > and ttm_pool_unmap_and_free() already operate at native order and
> > > > are unchanged.
> > > > 
> > > > Cc: Christian Koenig <christian.koenig@amd.com>
> > > > Cc: Huang Rui <ray.huang@amd.com>
> > > > Cc: Matthew Auld <matthew.auld@intel.com>
> > > > Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> > > > Cc: Maxime Ripard <mripard@kernel.org>
> > > > Cc: Thomas Zimmermann <tzimmermann@suse.de>
> > > > Cc: David Airlie <airlied@gmail.com>
> > > > Cc: Simona Vetter <simona@ffwll.ch>
> > > > Cc: dri-devel@lists.freedesktop.org
> > > > Cc: linux-kernel@vger.kernel.org
> > > > Cc: stable@vger.kernel.org
> > > > Fixes: b63d715b8090 ("drm/ttm/pool, drm/ttm/tt: Provide a helper
> > > > to
> > > > shrink pages")
> > > > Suggested-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> > > > Assisted-by: Claude:claude-opus-4.6
> > > > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > > > 
> > > > ---
> > > > 
> > > > A follow-up should attempt writeback to shmem at folio order as
> > > > well, but the API for doing so is unclear and may be incomplete.
> > > > 
> > > > This patch is related to the pending series [1] and significantly
> > > > reduces the likelihood of Xe entering a kswapd loop under
> > > > fragmentation. The kswapd → shrinker → Xe shrinker → TTM backup
> > > > path is still exercised; however, with this change the backup path
> > > > no longer worsens fragmentation, which previously amplified reclaim
> > > > pressure and reinforced the kswapd loop.
> > > > 
> > > > Nonetheless, the pathological case that [1] aims to address still
> > > > exists and requires a proper solution. Even with this patch, a
> > > > kswapd loop due to severe fragmentation can still be triggered,
> > > > although it is now substantially harder to reproduce.
> > > > 
> > > > v2:
> > > >  - Split pages and free immediately if backup fails at higher order
> > > >    (Thomas)
> > > > v3:
> > > >  - Skip handles in purge path (sashiko)
> > > > v5:
> > > >  - Refactor into ttm_pool_backup_folio (Thomas)
> > > > 
> > > > [1] https://patchwork.freedesktop.org/series/165330/
> > > > ---
> > > >  drivers/gpu/drm/ttm/ttm_pool.c | 110 ++++++++++++++++++++++++++++-----
> > > >  1 file changed, 94 insertions(+), 16 deletions(-)
> > > > 
> > > > diff --git a/drivers/gpu/drm/ttm/ttm_pool.c
> > > > b/drivers/gpu/drm/ttm/ttm_pool.c
> > > > index d380a3c7fe40..78efc8524133 100644
> > > > --- a/drivers/gpu/drm/ttm/ttm_pool.c
> > > > +++ b/drivers/gpu/drm/ttm/ttm_pool.c
> > > > @@ -1019,6 +1019,70 @@ void ttm_pool_drop_backed_up(struct ttm_tt
> > > > *tt)
> > > >  	ttm_pool_free_range(NULL, tt, ttm_cached, start_page, tt->num_pages);
> > > >  }
> > > >  
> > > > +static int ttm_pool_backup_folio(struct ttm_pool *pool, struct ttm_tt *tt,
> > > > +				 struct file *backup, struct folio *folio,
> > > > +				 unsigned int order, bool writeback,
> > > > +				 pgoff_t idx, gfp_t page_gfp, gfp_t alloc_gfp)
> > > 
> > > I don't really understand why we can't end up with a
> > > ttm_backup_backup_folio(), which I believe is the proper layering,
> > > already at this point? Please see a suggestion at 
> > > 
> > > https://gitlab.freedesktop.org/thomash/xe-vibe/-/commits/ttm_swapout?ref_type=heads
> > > 
> > > Here the splitting logic is kept in the ttm_pool, but ttm_backup
> > > supports handing large folios to it.
> > > 
> > > Although the cumulative diffstat becomes larger, the end code becomes
> > > smaller and IMO easier to read, and we don't need to introduce code
> > > that we immediately have to refactor.
> > 
> > That version looks fine too. If that is your preference, no issue.
> 
> Cool. Note that there is a bug in that we don't pass the folio order
> into ttm_backup_backup_folio(). I'm force-pushing a fix for that.
> 
> 
> > 
> > My goal with this series is to get something that can reasonably be
> > backported to LTS kernels so the desktop doesn't frequently enter
> > kswapd loops because of fragmentation. We now have at least 3 reports
> > of this being an issue.
> > 
> > There is a larger fix [1] which works in tandem, but it seems unlikely
> > to be backportable given it adds new concepts to the core MM.
> > 
> > [1] https://patchwork.freedesktop.org/series/165329/
> > 
> > > 
> > > But I'm starting to question the general approach: Even if the
> > > *shrinker* can recover from a total kernel memory reserve depletion,
> > > it can't really be considered a reasonable practice, since if we
> > > frequently deplete the reserves, *other* important allocations in the
> > > system like GFP_ATOMIC, PF_MEMALLOC may spuriously start to fail and
> > > people will have a hard time finding out why.
> > > 
> > 
> > Wouldn’t GFP_ATOMIC enter direct reclaim, hit our shrinker, and
> > eventually make progress—i.e., take the split path if needed? I’m not
> > 100% sure, but my initial reaction is that this concern may not be
> > valid; however, MM is hard to reason about.
> 
> No, GFP_ATOMIC just uses what's available without any reclaim at all.
> It's more aggressive than GFP_NOWAIT in that it allows dipping into the
> kernel reserves.
> 

Right - wrote this before I had my coffee.

> > 
> > Again, FWIW, I’ve tried a lot of things to trigger OOM—for example,
> > running WebGL tabs and then kicking off various very memory-intensive
> > workloads from the CLI—and I still haven’t hit OOM or seen memory
> > allocation failures or warnings.
> > 
> > > So I actually don't think we can avoid the splitting without
> > > direct insertion. FWIW, up until recently when shmem started
> > > supporting
> > 
> > I agree direct insertion is a better solution. Do you think this is
> > something we could reasonably get working and backport? I haven't done
> > any research on direct insertion yet, which is why I'm asking.
> 
> Yes I think so. The problem would be to get it accepted. Looking into
> that now, but hitting various kinds of subtle issues.
> 

Ok, I'm pretty unlikely to get the shrinker work to the finish line
before I go, so I'm fine with whatever lands for either part:

- Shrinking THPs should not make fragmentation worse (this patch). A
  version of this should get Xe reasonably stable, and hopefully the fix
  can be backported.

- Avoid evicting working sets under fragmentation ([1] above)
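
For reference, the split-on-OOM fallback from patch 2 can be modeled in a
few lines of userspace C. This is only a sketch of the control flow:
backup_subpage(), backup_folio(), the MODEL_* macros, and the assumption
that the order-0 retry succeeds once the split has freed memory are all
illustrative stand-ins, not the kernel API:

```c
/* Illustrative userspace model of the order-aware backup flow in patch 2.
 * backup_subpage() stands in for ttm_backup_backup_page(); fail_at[]
 * mimics a shmem allocation failure (OOM) for a given subpage index.
 * All names here are hypothetical, not the kernel API. */

#define MODEL_ORDER  2
#define MODEL_NPAGES (1u << MODEL_ORDER)

/* After a split, the freed subpages are assumed to relieve enough memory
 * pressure that the order-0 retry succeeds. */
static int backup_subpage(unsigned int idx, const int *fail_at, int split)
{
	return (!split && fail_at[idx]) ? -1 : 0;
}

/* Back up a compound at its native order; on the first per-subpage
 * failure, "split" it (ttm_pool_split_for_swap() in the real code) and
 * back up the remaining subpages individually. Returns the number of
 * source subpages freed, i.e. the per-subpage "shrunken" credit. */
static int backup_folio(const int *fail_at)
{
	int split = 0, shrunken = 0;
	unsigned int i;

	for (i = 0; i < MODEL_NPAGES; ++i) {
		int ret = backup_subpage(i, fail_at, split);

		if (ret < 0 && !split) {
			split = 1;	/* split the source compound */
			shrunken += i;	/* already-backed-up subpages freed */
			ret = backup_subpage(i, fail_at, split); /* retry */
		}
		if (ret < 0)
			return shrunken;	/* partial progress */
		if (split)
			shrunken++;	/* free the order-0 source page now */
	}
	if (!split)
		shrunken += MODEL_NPAGES; /* free compound at native order */
	return shrunken;
}
```

The point of the model is why shrunken is credited per subpage: the
caller sees the same forward progress whether the compound was freed
once at native order or split and freed page by page.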

Matt

> Thanks,
> Thomas
> 
> 
> > 
> > > huge page swapping, other GPU drivers basically also split pages at
> > > swapout.
> > 
> > I wonder if other drivers have the same issue? The deadly combo is
> > allowing GPUs to subscribe all of system memory, allocating THPs (or
> > other higher-order pages), and splitting them in the shrinker. Xe
> > might be the only driver with the right combo to hit this, but I'm not
> > 100% sure without a deep dive.
> > 
> > > 
> > > Another idea for improving on the compaction loop, perhaps worth
> > > trying is this change, shamelessly stolen from i915:
> > > 
> > > https://gitlab.freedesktop.org/thomash/xe-vibe/-/commits/shrinker_batch?ref_type=heads
> > > 
> > 
> > I'd have to give this a try - I'm quickly running out of time before
> > I leave for a month, though.
> > 
> > Matt
> > 
> > > /Thomas
> > > 
> > > 
> > > > +{
> > > > +	struct page *page = folio_page(folio, 0);
> > > > +	int shrunken = 0, npages = 1UL << order, ret = 0, i;
> > > > +	bool folio_has_been_split = false;
> > > > +
> > > > +	for (i = 0; i < npages; ++i) {
> > > > +		s64 shandle;
> > > > +
> > > > +try_again_after_split:
> > > > +		if (IS_ENABLED(CONFIG_FAULT_INJECTION) &&
> > > > +		    should_fail(&backup_fault_inject, 1))
> > > > +			shandle = -ENOMEM;
> > > > +		else
> > > > +			shandle = ttm_backup_backup_page(backup, page + i,
> > > > +							 writeback, idx + i,
> > > > +							 page_gfp, alloc_gfp);
> > > > +
> > > > +		if (shandle < 0 && !folio_has_been_split && order) {
> > > > +			pgoff_t j;
> > > > +
> > > > +			/*
> > > > +			 * True OOM: could not allocate a shmem folio
> > > > +			 * for the next subpage. Fall back to splitting
> > > > +			 * the source compound and backing up subpages
> > > > +			 * individually. Release the already-backed-up
> > > > +			 * subpages whose contents now live in shmem;
> > > > +			 * any further failure terminates the loop with
> > > > +			 * partial progress (handled by the caller).
> > > > +			 */
> > > > +			folio_has_been_split = true;
> > > > +			ttm_pool_split_for_swap(pool, page);
> > > > +
> > > > +			for (j = 0; j < i; ++j) {
> > > > +				__free_pages_gpu_account(page + j, 0, false);
> > > > +				shrunken++;
> > > > +			}
> > > > +
> > > > +			goto try_again_after_split;
> > > > +		} else if (shandle < 0) {
> > > > +			ret = shandle;
> > > > +			goto out;
> > > > +		} else if (folio_has_been_split) {
> > > > +			__free_pages_gpu_account(page + i, 0, false);
> > > > +			shrunken++;
> > > > +		}
> > > > +
> > > > +		tt->pages[idx + i] = ttm_backup_handle_to_page_ptr(shandle);
> > > > +	}
> > > > +
> > > > +	if (!folio_has_been_split) {
> > > > +		/* Compound fully backed up; free at native
> > > > order.
> > > > */
> > > > +		page->private = 0;
> > > > +		__free_pages_gpu_account(page, order, false);
> > > > +		shrunken += npages;
> > > > +	}
> > > > +
> > > > +out:
> > > > +	return shrunken ? shrunken : ret;
> > > > +}
> > > > +
> > > >  /**
> > > >   * ttm_pool_backup() - Back up or purge a struct ttm_tt
> > > >   * @pool: The pool used when allocating the struct ttm_tt.
> > > > @@ -1045,12 +1109,11 @@ long ttm_pool_backup(struct ttm_pool *pool, struct ttm_tt *tt,
> > > >  {
> > > >  	struct file *backup = tt->backup;
> > > >  	struct page *page;
> > > > -	unsigned long handle;
> > > >  	gfp_t alloc_gfp;
> > > >  	gfp_t gfp;
> > > >  	int ret = 0;
> > > >  	pgoff_t shrunken = 0;
> > > > -	pgoff_t i, num_pages;
> > > > +	pgoff_t i, num_pages, npages;
> > > >  
> > > >  	if (WARN_ON(ttm_tt_is_backed_up(tt)))
> > > >  		return -EINVAL;
> > > > @@ -1070,7 +1133,8 @@ long ttm_pool_backup(struct ttm_pool *pool, struct ttm_tt *tt,
> > > >  			unsigned int order;
> > > >  
> > > >  			page = tt->pages[i];
> > > > -			if (unlikely(!page)) {
> > > > +			if (unlikely(!page ||
> > > > +				     ttm_backup_page_ptr_is_handle(page))) {
> > > >  				num_pages = 1;
> > > >  				continue;
> > > >  			}
> > > > @@ -1106,26 +1170,40 @@ long ttm_pool_backup(struct ttm_pool *pool, struct ttm_tt *tt,
> > > >  	if (IS_ENABLED(CONFIG_FAULT_INJECTION) && should_fail(&backup_fault_inject, 1))
> > > >  		num_pages = DIV_ROUND_UP(num_pages, 2);
> > > >  
> > > > -	for (i = 0; i < num_pages; ++i) {
> > > > -		s64 shandle;
> > > > +	for (i = 0; i < num_pages; i += npages) {
> > > > +		unsigned int order;
> > > >  
> > > > +		npages = 1;
> > > >  		page = tt->pages[i];
> > > >  		if (unlikely(!page))
> > > >  			continue;
> > > >  
> > > > -		ttm_pool_split_for_swap(pool, page);
> > > > +		/* Already-handled entry from a previous attempt. */
> > > > +		if (unlikely(ttm_backup_page_ptr_is_handle(page)))
> > > > +			continue;
> > > >  
> > > > -		shandle = ttm_backup_backup_page(backup, page, flags->writeback, i,
> > > > -						 gfp, alloc_gfp);
> > > > -		if (shandle < 0) {
> > > > -			/* We allow partially shrunken tts */
> > > > -			ret = shandle;
> > > > +		order = ttm_pool_page_order(pool, page);
> > > > +		npages = 1UL << order;
> > > > +
> > > > +		/*
> > > > +		 * Back up the compound atomically at its native order. If
> > > > +		 * fault injection truncated num_pages mid-compound, skip
> > > > +		 * the partial tail rather than splitting.
> > > > +		 */
> > > > +		if (unlikely(i + npages > num_pages))
> > > > +			break;
> > > > +
> > > > +		ret = ttm_pool_backup_folio(pool, tt, backup, page_folio(page),
> > > > +					    order, flags->writeback, i, gfp,
> > > > +					    alloc_gfp);
> > > > +		if (unlikely(ret < 0))
> > > > +			break;
> > > > +
> > > > +		shrunken += ret;
> > > > +
> > > > +		/* partial backup */
> > > > +		if (unlikely(ret != npages))
> > > >  			break;
> > > > -		}
> > > > -		handle = shandle;
> > > > -		tt->pages[i] = ttm_backup_handle_to_page_ptr(handle);
> > > > -		__free_pages_gpu_account(page, 0, false);
> > > > -		shrunken++;
> > > >  	}
> > > >  
> > > >  	return shrunken ? shrunken : ret;

^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2026-05-06 18:05 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2026-05-05 20:04 [PATCH v5 0/2] TTM shrinker fragmentation / partial restore fixes Matthew Brost
2026-05-05 20:04 ` [PATCH v5 1/2] drm/ttm: Drop tt->restore after successful restore Matthew Brost
2026-05-05 20:04 ` [PATCH v5 2/2] drm/ttm/pool: back up at native page order Matthew Brost
2026-05-06 14:23   ` Thomas Hellström
2026-05-06 16:14     ` Matthew Brost
2026-05-06 16:16       ` Matthew Brost
2026-05-06 16:26       ` Thomas Hellström
2026-05-06 18:05         ` Matthew Brost
2026-05-05 20:19 ` ✗ CI.checkpatch: warning for TTM shrinker fragmentation / partial restore fixes Patchwork
2026-05-05 20:20 ` ✓ CI.KUnit: success " Patchwork

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox