intel-xe.lists.freedesktop.org archive mirror
* [PATCH v14 0/8] TTM shrinker helpers and xe buffer object shrinker
@ 2024-11-15 15:01 Thomas Hellström
  2024-11-15 15:01 ` [PATCH v14 1/8] drm/ttm: Balance ttm_resource_cursor_init() and ttm_resource_cursor_fini() Thomas Hellström
                   ` (25 more replies)
  0 siblings, 26 replies; 54+ messages in thread
From: Thomas Hellström @ 2024-11-15 15:01 UTC (permalink / raw)
  To: intel-xe
  Cc: Thomas Hellström, Somalapuram Amaranath,
	Christian König, Matthew Brost, Paulo Zanoni, dri-devel,
	Simona Vetter

This series implements TTM shrinker / eviction helpers and an xe bo
shrinker. It builds on a previous series, *and obsoletes that one*.

https://lore.kernel.org/linux-mm/b7491378-defd-4f1c-31e2-29e4c77e2d67@amd.com/T/

The comment about layering,
https://lore.kernel.org/linux-mm/b7491378-defd-4f1c-31e2-29e4c77e2d67@amd.com/T/#ma918844aa8a6efe8768fdcda0c6590d5c93850c9
is now addressed, and this version also implements shmem objects for backup
rather than the direct swap-cache insertions used in the previous series.
It turns out that with per-page backup / shrinking, shmem objects appear
to work just as well as direct swap-cache insertions, with the added
benefit that the machinery introduced in the previous TTM shrinker series
to avoid running out of swap entries isn't really needed.

The series earlier consisted of an LRU traversal part and the current part.
The LRU traversal part has already been merged, but is still mentioned in
the revision history below.

Patch 1 balances ttm_resource_cursor_fini() with an init function. It
makes patch 5 more straightforward.

Patch 2 introduces a shmem-based backup implementation.

Patch 3 introduces functionality in the ttm_pool code for page-by-page shrinking
and recovery. It avoids having to temporarily allocate a huge amount of
memory to be able to shrink a buffer object. It also introduces the
possibility to immediately write back pages if needed.

Patch 4 adds a simple error injection to the above code to help increase
test coverage.

Patch 5 implements a macro for LRU iteration (a rough usage sketch
combining the patch 3 and patch 5 helpers follows after the patch list).

Patch 6 introduces driver-facing helpers for shrinking.

Patch 7 implements the xe bo shrinker.

Patch 8 increases (removes) the XE_PL_TT watermark.
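
To give a rough idea of how the pieces fit together, here is a
hypothetical sketch (not taken from the series) of a driver shrinker
scan callback combining the LRU cursor from patch 5 with the ttm_tt
backup helper from patch 3. Idle- and pin checks, CPU mapping teardown
and the accounting a real implementation (such as the xe shrinker in
patch 7) needs are left out, and my_shrinker_scan, bdev, man and
nr_to_scan are made-up placeholders:

static unsigned long my_shrinker_scan(struct ttm_device *bdev,
				      struct ttm_resource_manager *man,
				      unsigned long nr_to_scan)
{
	struct ttm_operation_ctx ctx = { .no_wait_gpu = true };
	struct ttm_backup_flags flags = { .writeback = true };
	struct ttm_bo_lru_cursor curs;
	struct ttm_buffer_object *bo;
	unsigned long freed = 0;

	ttm_bo_lru_cursor_init(&curs, man, &ctx);
	for (bo = ttm_bo_lru_cursor_first(&curs); bo;
	     bo = ttm_bo_lru_cursor_next(&curs)) {
		long ret;

		/* bo is trylocked and refcounted at this point. */
		if (!bo->ttm)
			continue;

		ret = ttm_tt_backup(bdev, bo->ttm, flags);
		if (ret > 0)
			freed += ret;
		if (freed >= nr_to_scan)
			break;
	}
	ttm_bo_lru_cursor_fini(&curs);

	return freed;
}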

v2:
- Squash obsolete revision history in the patch commit messages.
- Address a couple of review comments from Christian.
- Don't store the mem_type in the TTM managers but in the
  resource cursor.
- Rename introduced TTM *back_up* function names to *backup*
- Add ttm pool recovery fault injection.
- Shrinker xe kunit test
- Various bugfixes

v3:
- Address some review comments from Matthew Brost and Christian König.
- Use the restartable LRU walk for TTM swapping and eviction.
- Provide a POC drm_exec locking implementation for exhaustive
  eviction. (Christian König).

v4:
- Remove the RFC exhaustive eviction part. While the path to exhaustive
  eviction is pretty clear and demonstrated in v3, there is still some
  drm_exec work that needs to be agreed and implemented.
- Add shrinker power management. On some hw we need to wake up the device
  when shrinking.
- Fix the lru walker helper for -EALREADY errors.
- Add drm/xe: Increase the XE_PL_TT watermark.

v5:
- Update also TTM kunit tests
- Handle ghost- and zombie objects in the shrinker.
- A couple of compile- and UAF fixes reported by Kernel Build Robot and
  Dan Carpenter.

v6:
- Address review comments from Matthew Brost on the
  restartable LRU traversal path.

v7:
- Split out TTM restartable LRU traversal path and merge that.
- Adapt the review comments on that series.

v8:
- Address review comments from Matthew Brost as detailed in the
  respective patches.

v9:
- Rebase and fix compilation errors

v10:
- Use an LRU iteration macro rather than a function with a callback.
- Rebasing and cleanups
- Address some additional review comments from Matt Brost.
- Drop the shrinker selftest. It was already merged as a swapout
  self-test.

v11:
- Move more core interaction to additional TTM helpers.
- Don't back up without __GFP_FS, and don't start writeback without __GFP_IO.
- Rebase.

v12:
- Fix an indentation flaw.
- Rebase

v13:
- Remove the backup base-class, and use direct calls for ttm_backup
  (Christian König).
- Rebase on the ttm_backup changes.
- Move shrunken bos from the LRU list to the unevictable list.
- Provide an accessor function with sanity checks to set the
  ttm_tt::backup field.
- Update documentation.

v14:
- Update documentation of ttm_backup_bytes_avail().
- Work around converting between struct file * and struct ttm_backup *.
- Don't set up backup for imported buffers.

Cc: Somalapuram Amaranath <Amaranath.Somalapuram@amd.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Cc: <dri-devel@lists.freedesktop.org>

Thomas Hellström (8):
  drm/ttm: Balance ttm_resource_cursor_init() and
    ttm_resource_cursor_fini()
  drm/ttm: Provide a shmem backup implementation
  drm/ttm/pool: Provide a helper to shrink pages
  drm/ttm: Use fault-injection to test error paths
  drm/ttm: Add a macro to perform LRU iteration
  drm/ttm: Add helpers for shrinking
  drm/xe: Add a shrinker for xe bos
  drm/xe: Increase the XE_PL_TT watermark

 drivers/gpu/drm/ttm/Makefile         |   2 +-
 drivers/gpu/drm/ttm/ttm_backup.c     | 204 +++++++++++++
 drivers/gpu/drm/ttm/ttm_bo.c         |   3 +-
 drivers/gpu/drm/ttm/ttm_bo_util.c    | 250 +++++++++++++++-
 drivers/gpu/drm/ttm/ttm_pool.c       | 421 ++++++++++++++++++++++++++-
 drivers/gpu/drm/ttm/ttm_resource.c   |  35 ++-
 drivers/gpu/drm/ttm/ttm_tt.c         |  66 +++++
 drivers/gpu/drm/xe/Makefile          |   1 +
 drivers/gpu/drm/xe/tests/xe_bo.c     |   6 +-
 drivers/gpu/drm/xe/xe_bo.c           | 195 ++++++++++++-
 drivers/gpu/drm/xe/xe_bo.h           |  36 +++
 drivers/gpu/drm/xe/xe_device.c       |   8 +
 drivers/gpu/drm/xe/xe_device_types.h |   2 +
 drivers/gpu/drm/xe/xe_shrinker.c     | 258 ++++++++++++++++
 drivers/gpu/drm/xe/xe_shrinker.h     |  18 ++
 drivers/gpu/drm/xe/xe_ttm_sys_mgr.c  |   3 +-
 include/drm/ttm/ttm_backup.h         |  74 +++++
 include/drm/ttm/ttm_bo.h             |  92 ++++++
 include/drm/ttm/ttm_pool.h           |   6 +
 include/drm/ttm/ttm_resource.h       |  11 +-
 include/drm/ttm/ttm_tt.h             |  34 ++-
 21 files changed, 1668 insertions(+), 57 deletions(-)
 create mode 100644 drivers/gpu/drm/ttm/ttm_backup.c
 create mode 100644 drivers/gpu/drm/xe/xe_shrinker.c
 create mode 100644 drivers/gpu/drm/xe/xe_shrinker.h
 create mode 100644 include/drm/ttm/ttm_backup.h

-- 
2.46.2


^ permalink raw reply	[flat|nested] 54+ messages in thread

* [PATCH v14 1/8] drm/ttm: Balance ttm_resource_cursor_init() and ttm_resource_cursor_fini()
  2024-11-15 15:01 [PATCH v14 0/8] TTM shrinker helpers and xe buffer object shrinker Thomas Hellström
@ 2024-11-15 15:01 ` Thomas Hellström
  2024-11-20 10:51   ` Christian König
  2024-11-15 15:01 ` [PATCH v14 2/8] drm/ttm: Provide a shmem backup implementation Thomas Hellström
                   ` (24 subsequent siblings)
  25 siblings, 1 reply; 54+ messages in thread
From: Thomas Hellström @ 2024-11-15 15:01 UTC (permalink / raw)
  To: intel-xe
  Cc: Thomas Hellström, Matthew Brost, Christian König,
	Somalapuram Amaranath, Paulo Zanoni, Simona Vetter, dri-devel

Make the interface more symmetric by providing and using a
ttm_resource_cursor_init().
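
For illustration only, a minimal sketch of the resulting calling
convention, mirroring the call sites converted below, with bdev and man
as placeholders:

	struct ttm_resource_cursor cursor;
	struct ttm_resource *res;

	spin_lock(&bdev->lru_lock);
	ttm_resource_cursor_init(&cursor, man);
	ttm_resource_manager_for_each_res(&cursor, res) {
		/* Inspect or evict res->bo here. */
	}
	ttm_resource_cursor_fini(&cursor);
	spin_unlock(&bdev->lru_lock);

The cursor is now initialized and finalized explicitly by the caller
rather than implicitly by ttm_resource_manager_first() and
ttm_resource_manager_next().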

v10:
- Fix a stray newline (Matthew Brost)
- Update kerneldoc (Matthew Brost)

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/ttm/ttm_bo.c       |  3 ++-
 drivers/gpu/drm/ttm/ttm_bo_util.c  |  3 ++-
 drivers/gpu/drm/ttm/ttm_resource.c | 35 ++++++++++++++++++++----------
 include/drm/ttm/ttm_resource.h     | 11 +++++-----
 4 files changed, 34 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index 48c5365efca1..06d6a452c4f4 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -450,7 +450,8 @@ int ttm_bo_evict_first(struct ttm_device *bdev, struct ttm_resource_manager *man
 	int ret = 0;
 
 	spin_lock(&bdev->lru_lock);
-	res = ttm_resource_manager_first(man, &cursor);
+	ttm_resource_cursor_init(&cursor, man);
+	res = ttm_resource_manager_first(&cursor);
 	ttm_resource_cursor_fini(&cursor);
 	if (!res) {
 		ret = -ENOENT;
diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
index d939925efa81..917096bd5f68 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_util.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
@@ -865,7 +865,8 @@ s64 ttm_lru_walk_for_evict(struct ttm_lru_walk *walk, struct ttm_device *bdev,
 	s64 lret;
 
 	spin_lock(&bdev->lru_lock);
-	ttm_resource_manager_for_each_res(man, &cursor, res) {
+	ttm_resource_cursor_init(&cursor, man);
+	ttm_resource_manager_for_each_res(&cursor, res) {
 		struct ttm_buffer_object *bo = res->bo;
 		bool bo_needs_unlock = false;
 		bool bo_locked = false;
diff --git a/drivers/gpu/drm/ttm/ttm_resource.c b/drivers/gpu/drm/ttm/ttm_resource.c
index a87665eb28a6..e19360cc7930 100644
--- a/drivers/gpu/drm/ttm/ttm_resource.c
+++ b/drivers/gpu/drm/ttm/ttm_resource.c
@@ -81,6 +81,23 @@ static void ttm_bulk_move_drop_cursors(struct ttm_lru_bulk_move *bulk)
 		ttm_resource_cursor_clear_bulk(cursor);
 }
 
+/**
+ * ttm_resource_cursor_init() - Initialize a struct ttm_resource_cursor
+ * @cursor: The cursor to initialize.
+ * @man: The resource manager.
+ *
+ * Initialize the cursor before using it for iteration.
+ */
+void ttm_resource_cursor_init(struct ttm_resource_cursor *cursor,
+			      struct ttm_resource_manager *man)
+{
+	cursor->priority = 0;
+	cursor->man = man;
+	ttm_lru_item_init(&cursor->hitch, TTM_LRU_HITCH);
+	INIT_LIST_HEAD(&cursor->bulk_link);
+	INIT_LIST_HEAD(&cursor->hitch.link);
+}
+
 /**
  * ttm_resource_cursor_fini() - Finalize the LRU list cursor usage
  * @cursor: The struct ttm_resource_cursor to finalize.
@@ -593,7 +610,6 @@ ttm_resource_cursor_check_bulk(struct ttm_resource_cursor *cursor,
 /**
  * ttm_resource_manager_first() - Start iterating over the resources
  * of a resource manager
- * @man: resource manager to iterate over
  * @cursor: cursor to record the position
  *
  * Initializes the cursor and starts iterating. When done iterating,
@@ -602,17 +618,16 @@ ttm_resource_cursor_check_bulk(struct ttm_resource_cursor *cursor,
  * Return: The first resource from the resource manager.
  */
 struct ttm_resource *
-ttm_resource_manager_first(struct ttm_resource_manager *man,
-			   struct ttm_resource_cursor *cursor)
+ttm_resource_manager_first(struct ttm_resource_cursor *cursor)
 {
-	lockdep_assert_held(&man->bdev->lru_lock);
+	struct ttm_resource_manager *man = cursor->man;
 
-	cursor->priority = 0;
-	cursor->man = man;
-	ttm_lru_item_init(&cursor->hitch, TTM_LRU_HITCH);
-	INIT_LIST_HEAD(&cursor->bulk_link);
-	list_add(&cursor->hitch.link, &man->lru[cursor->priority]);
+	if (WARN_ON_ONCE(!man))
+		return NULL;
+
+	lockdep_assert_held(&man->bdev->lru_lock);
 
+	list_move(&cursor->hitch.link, &man->lru[cursor->priority]);
 	return ttm_resource_manager_next(cursor);
 }
 
@@ -648,8 +663,6 @@ ttm_resource_manager_next(struct ttm_resource_cursor *cursor)
 		ttm_resource_cursor_clear_bulk(cursor);
 	}
 
-	ttm_resource_cursor_fini(cursor);
-
 	return NULL;
 }
 
diff --git a/include/drm/ttm/ttm_resource.h b/include/drm/ttm/ttm_resource.h
index be034be56ba1..e1f3b95d73b6 100644
--- a/include/drm/ttm/ttm_resource.h
+++ b/include/drm/ttm/ttm_resource.h
@@ -325,6 +325,9 @@ struct ttm_resource_cursor {
 	unsigned int priority;
 };
 
+void ttm_resource_cursor_init(struct ttm_resource_cursor *cursor,
+			      struct ttm_resource_manager *man);
+
 void ttm_resource_cursor_fini(struct ttm_resource_cursor *cursor);
 
 /**
@@ -456,8 +459,7 @@ void ttm_resource_manager_debug(struct ttm_resource_manager *man,
 				struct drm_printer *p);
 
 struct ttm_resource *
-ttm_resource_manager_first(struct ttm_resource_manager *man,
-			   struct ttm_resource_cursor *cursor);
+ttm_resource_manager_first(struct ttm_resource_cursor *cursor);
 struct ttm_resource *
 ttm_resource_manager_next(struct ttm_resource_cursor *cursor);
 
@@ -466,14 +468,13 @@ ttm_lru_first_res_or_null(struct list_head *head);
 
 /**
  * ttm_resource_manager_for_each_res - iterate over all resources
- * @man: the resource manager
  * @cursor: struct ttm_resource_cursor for the current position
  * @res: the current resource
  *
  * Iterate over all the evictable resources in a resource manager.
  */
-#define ttm_resource_manager_for_each_res(man, cursor, res)		\
-	for (res = ttm_resource_manager_first(man, cursor); res;	\
+#define ttm_resource_manager_for_each_res(cursor, res)	\
+	for (res = ttm_resource_manager_first(cursor); res;	\
 	     res = ttm_resource_manager_next(cursor))
 
 struct ttm_kmap_iter *
-- 
2.46.2


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH v14 2/8] drm/ttm: Provide a shmem backup implementation
  2024-11-15 15:01 [PATCH v14 0/8] TTM shrinker helpers and xe buffer object shrinker Thomas Hellström
  2024-11-15 15:01 ` [PATCH v14 1/8] drm/ttm: Balance ttm_resource_cursor_init() and ttm_resource_cursor_fini() Thomas Hellström
@ 2024-11-15 15:01 ` Thomas Hellström
  2024-11-19 13:40   ` Christian König
  2024-11-15 15:01 ` [PATCH v14 3/8] drm/ttm/pool: Provide a helper to shrink pages Thomas Hellström
                   ` (23 subsequent siblings)
  25 siblings, 1 reply; 54+ messages in thread
From: Thomas Hellström @ 2024-11-15 15:01 UTC (permalink / raw)
  To: intel-xe
  Cc: Thomas Hellström, Christian König,
	Somalapuram Amaranath, Matthew Brost, dri-devel, Paulo Zanoni,
	Simona Vetter

Provide a standalone shmem backup implementation.
Given the ttm_backup interface, this could later be
extended to provide backup implementations other
than shmem, with one use-case being GPU swapout to
a user-provided fd.
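
For reference, a minimal hypothetical round trip through the new
interface could look like the fragment below; page and idx are assumed
to be provided by the caller and the 1 GiB backup size is arbitrary:

	struct ttm_backup *backup = ttm_backup_shmem_create(SZ_1G);
	unsigned long handle;
	int ret;

	if (IS_ERR(backup))
		return PTR_ERR(backup);

	/* Back up one page at index idx, optionally starting writeback. */
	handle = ttm_backup_backup_page(backup, page, true, idx,
					GFP_HIGHUSER, GFP_KERNEL);
	if (!handle)
		return -ENOMEM;	/* A zero handle means the backup failed. */

	/* Later: copy the contents back and release the backup copy. */
	ret = ttm_backup_copy_page(backup, page, handle, true);
	if (!ret)
		ttm_backup_drop(backup, handle);

	ttm_backup_fini(backup);

In this series the per-page interface is driven by the ttm_pool helpers
added in patch 3 rather than called directly by drivers.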

v5:
- Fix a UAF. (kernel test robot, Dan Carpenter)
v6:
- Rename ttm_backup_shmem_copy_page() function argument
  (Matthew Brost)
- Add some missing documentation
v8:
- Use folio_file_page to get to the page we want to writeback
  instead of using the first page of the folio.
v13:
- Remove the base class abstraction (Christian König)
- Include ttm_backup_bytes_avail().
v14:
- Fix kerneldoc for ttm_backup_bytes_avail() (0-day)
- Work around casting of __randomize_layout struct pointer (0-day)

Cc: Christian König <christian.koenig@amd.com>
Cc: Somalapuram Amaranath <Amaranath.Somalapuram@amd.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: <dri-devel@lists.freedesktop.org>
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com> #v13
---
 drivers/gpu/drm/ttm/Makefile     |   2 +-
 drivers/gpu/drm/ttm/ttm_backup.c | 204 +++++++++++++++++++++++++++++++
 include/drm/ttm/ttm_backup.h     |  74 +++++++++++
 3 files changed, 279 insertions(+), 1 deletion(-)
 create mode 100644 drivers/gpu/drm/ttm/ttm_backup.c
 create mode 100644 include/drm/ttm/ttm_backup.h

diff --git a/drivers/gpu/drm/ttm/Makefile b/drivers/gpu/drm/ttm/Makefile
index dad298127226..40d07a35293a 100644
--- a/drivers/gpu/drm/ttm/Makefile
+++ b/drivers/gpu/drm/ttm/Makefile
@@ -4,7 +4,7 @@
 
 ttm-y := ttm_tt.o ttm_bo.o ttm_bo_util.o ttm_bo_vm.o ttm_module.o \
 	ttm_execbuf_util.o ttm_range_manager.o ttm_resource.o ttm_pool.o \
-	ttm_device.o ttm_sys_manager.o
+	ttm_device.o ttm_sys_manager.o ttm_backup.o
 ttm-$(CONFIG_AGP) += ttm_agp_backend.o
 
 obj-$(CONFIG_DRM_TTM) += ttm.o
diff --git a/drivers/gpu/drm/ttm/ttm_backup.c b/drivers/gpu/drm/ttm/ttm_backup.c
new file mode 100644
index 000000000000..bf16bb0c594e
--- /dev/null
+++ b/drivers/gpu/drm/ttm/ttm_backup.c
@@ -0,0 +1,204 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2024 Intel Corporation
+ */
+
+#include <drm/ttm/ttm_backup.h>
+#include <linux/page-flags.h>
+#include <linux/swap.h>
+
+/*
+ * Casting from randomized struct file * to struct ttm_backup * is fine since
+ * struct ttm_backup is never defined nor dereferenced.
+ */
+static struct file *ttm_backup_to_file(struct ttm_backup *backup)
+{
+	return (void *)backup;
+}
+
+static struct ttm_backup *ttm_file_to_backup(struct file *file)
+{
+	return (void *)file;
+}
+
+/*
+ * Need to map shmem indices to handle since a handle value
+ * of 0 means error, following the swp_entry_t convention.
+ */
+static unsigned long ttm_backup_shmem_idx_to_handle(pgoff_t idx)
+{
+	return (unsigned long)idx + 1;
+}
+
+static pgoff_t ttm_backup_handle_to_shmem_idx(pgoff_t handle)
+{
+	return handle - 1;
+}
+
+/**
+ * ttm_backup_drop() - release memory associated with a handle
+ * @backup: The struct backup pointer used to obtain the handle
+ * @handle: The handle obtained from the @backup_page function.
+ */
+void ttm_backup_drop(struct ttm_backup *backup, pgoff_t handle)
+{
+	loff_t start = ttm_backup_handle_to_shmem_idx(handle);
+
+	start <<= PAGE_SHIFT;
+	shmem_truncate_range(file_inode(ttm_backup_to_file(backup)), start,
+			     start + PAGE_SIZE - 1);
+}
+
+/**
+ * ttm_backup_copy_page() - Copy the contents of a previously backed
+ * up page
+ * @backup: The struct backup pointer used to back up the page.
+ * @dst: The struct page to copy into.
+ * @handle: The handle returned when the page was backed up.
+ * @intr: Try to perform waits interruptible or at least killable.
+ *
+ * Return: 0 on success, Negative error code on failure, notably
+ * -EINTR if @intr was set to true and a signal is pending.
+ */
+int ttm_backup_copy_page(struct ttm_backup *backup, struct page *dst,
+			 pgoff_t handle, bool intr)
+{
+	struct file *filp = ttm_backup_to_file(backup);
+	struct address_space *mapping = filp->f_mapping;
+	struct folio *from_folio;
+	pgoff_t idx = ttm_backup_handle_to_shmem_idx(handle);
+
+	from_folio = shmem_read_folio(mapping, idx);
+	if (IS_ERR(from_folio))
+		return PTR_ERR(from_folio);
+
+	copy_highpage(dst, folio_file_page(from_folio, idx));
+	folio_put(from_folio);
+
+	return 0;
+}
+
+/**
+ * ttm_backup_backup_page() - Backup a page
+ * @backup: The struct backup pointer to use.
+ * @page: The page to back up.
+ * @writeback: Whether to perform immediate writeback of the page.
+ * This may have performance implications.
+ * @idx: A unique integer for each page and each struct backup.
+ * This allows the backup implementation to avoid managing
+ * its address space separately.
+ * @page_gfp: The gfp value used when the page was allocated.
+ * This is used for accounting purposes.
+ * @alloc_gfp: The gfp to be used when allocating memory.
+ *
+ * Context: If called from reclaim context, the caller needs to
+ * assert that the shrinker gfp has __GFP_FS set, to avoid
+ * deadlocking on lock_page(). If @writeback is set to true and
+ * called from reclaim context, the caller also needs to assert
+ * that the shrinker gfp has __GFP_IO set, since without it,
+ * we're not allowed to start backup IO.
+ *
+ * Return: A handle on success. 0 on failure.
+ * (This is following the swp_entry_t convention).
+ *
+ * Note: This function could be extended to back up a folio and
+ * implementations would then split the folio internally if needed.
+ * Drawback is that the caller would then have to keep track of
+ * the folio size- and usage.
+ */
+unsigned long
+ttm_backup_backup_page(struct ttm_backup *backup, struct page *page,
+		       bool writeback, pgoff_t idx, gfp_t page_gfp,
+		       gfp_t alloc_gfp)
+{
+	struct file *filp = ttm_backup_to_file(backup);
+	struct address_space *mapping = filp->f_mapping;
+	unsigned long handle = 0;
+	struct folio *to_folio;
+	int ret;
+
+	to_folio = shmem_read_folio_gfp(mapping, idx, alloc_gfp);
+	if (IS_ERR(to_folio))
+		return handle;
+
+	folio_mark_accessed(to_folio);
+	folio_lock(to_folio);
+	folio_mark_dirty(to_folio);
+	copy_highpage(folio_file_page(to_folio, idx), page);
+	handle = ttm_backup_shmem_idx_to_handle(idx);
+
+	if (writeback && !folio_mapped(to_folio) &&
+	    folio_clear_dirty_for_io(to_folio)) {
+		struct writeback_control wbc = {
+			.sync_mode = WB_SYNC_NONE,
+			.nr_to_write = SWAP_CLUSTER_MAX,
+			.range_start = 0,
+			.range_end = LLONG_MAX,
+			.for_reclaim = 1,
+		};
+		folio_set_reclaim(to_folio);
+		ret = mapping->a_ops->writepage(folio_file_page(to_folio, idx), &wbc);
+		if (!folio_test_writeback(to_folio))
+			folio_clear_reclaim(to_folio);
+		/* If writepage succeeds, it unlocks the folio */
+		if (ret)
+			folio_unlock(to_folio);
+	} else {
+		folio_unlock(to_folio);
+	}
+
+	folio_put(to_folio);
+
+	return handle;
+}
+
+/**
+ * ttm_backup_fini() - Free the struct backup resources after last use.
+ * @backup: Pointer to the struct backup whose resources to free.
+ *
+ * After a call to this function, it's illegal to use the @backup pointer.
+ */
+void ttm_backup_fini(struct ttm_backup *backup)
+{
+	fput(ttm_backup_to_file(backup));
+}
+
+/**
+ * ttm_backup_bytes_avail() - Report the approximate number of bytes of backup space
+ * left for backup.
+ *
+ * This function is intended also for driver use to indicate whether a
+ * backup attempt is meaningful.
+ *
+ * Return: An approximate size of backup space available.
+ */
+u64 ttm_backup_bytes_avail(void)
+{
+	/*
+	 * The idea behind backing up to shmem is that shmem objects may
+	 * eventually be swapped out. So no point swapping out if there
+	 * is no or low swap-space available. But the accuracy of this
+	 * number also depends on shmem actually swapping out backed-up
+	 * shmem objects without too much buffering.
+	 */
+	return (u64)get_nr_swap_pages() << PAGE_SHIFT;
+}
+EXPORT_SYMBOL_GPL(ttm_backup_bytes_avail);
+
+/**
+ * ttm_backup_shmem_create() - Create a shmem-based struct backup.
+ * @size: The maximum size (in bytes) to back up.
+ *
+ * Create a backup utilizing shmem objects.
+ *
+ * Return: A pointer to a struct ttm_backup on success,
+ * an error pointer on error.
+ */
+struct ttm_backup *ttm_backup_shmem_create(loff_t size)
+{
+	struct file *filp;
+
+	filp = shmem_file_setup("ttm shmem backup", size, 0);
+
+	return ttm_file_to_backup(filp);
+}
diff --git a/include/drm/ttm/ttm_backup.h b/include/drm/ttm/ttm_backup.h
new file mode 100644
index 000000000000..20609da7e281
--- /dev/null
+++ b/include/drm/ttm/ttm_backup.h
@@ -0,0 +1,74 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2024 Intel Corporation
+ */
+
+#ifndef _TTM_BACKUP_H_
+#define _TTM_BACKUP_H_
+
+#include <linux/mm_types.h>
+#include <linux/shmem_fs.h>
+
+struct ttm_backup;
+
+/**
+ * ttm_backup_handle_to_page_ptr() - Convert handle to struct page pointer
+ * @handle: The handle to convert.
+ *
+ * Converts an opaque handle received from the
+ * ttm_backup_backup_page() function to an (invalid)
+ * struct page pointer suitable for a struct page array.
+ *
+ * Return: An (invalid) struct page pointer.
+ */
+static inline struct page *
+ttm_backup_handle_to_page_ptr(unsigned long handle)
+{
+	return (struct page *)(handle << 1 | 1);
+}
+
+/**
+ * ttm_backup_page_ptr_is_handle() - Whether a struct page pointer is a handle
+ * @page: The struct page pointer to check.
+ *
+ * Return: true if the struct page pointer is a handle returned from
+ * ttm_backup_handle_to_page_ptr(). False otherwise.
+ */
+static inline bool ttm_backup_page_ptr_is_handle(const struct page *page)
+{
+	return (unsigned long)page & 1;
+}
+
+/**
+ * ttm_backup_page_ptr_to_handle() - Convert a struct page pointer to a handle
+ * @page: The struct page pointer to convert
+ *
+ * Return: The handle that was previously used in
+ * ttm_backup_handle_to_page_ptr() to obtain a struct page pointer, suitable
+ * for use as argument in the ttm_backup_drop() or
+ * ttm_backup_copy_page() functions.
+ */
+static inline unsigned long
+ttm_backup_page_ptr_to_handle(const struct page *page)
+{
+	WARN_ON(!ttm_backup_page_ptr_is_handle(page));
+	return (unsigned long)page >> 1;
+}
+
+void ttm_backup_drop(struct ttm_backup *backup, pgoff_t handle);
+
+int ttm_backup_copy_page(struct ttm_backup *backup, struct page *dst,
+			 pgoff_t handle, bool intr);
+
+unsigned long
+ttm_backup_backup_page(struct ttm_backup *backup, struct page *page,
+		       bool writeback, pgoff_t idx, gfp_t page_gfp,
+		       gfp_t alloc_gfp);
+
+void ttm_backup_fini(struct ttm_backup *backup);
+
+u64 ttm_backup_bytes_avail(void);
+
+struct ttm_backup *ttm_backup_shmem_create(loff_t size);
+
+#endif
-- 
2.46.2


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH v14 3/8] drm/ttm/pool: Provide a helper to shrink pages
  2024-11-15 15:01 [PATCH v14 0/8] TTM shrinker helpers and xe buffer object shrinker Thomas Hellström
  2024-11-15 15:01 ` [PATCH v14 1/8] drm/ttm: Balance ttm_resource_cursor_init() and ttm_resource_cursor_fini() Thomas Hellström
  2024-11-15 15:01 ` [PATCH v14 2/8] drm/ttm: Provide a shmem backup implementation Thomas Hellström
@ 2024-11-15 15:01 ` Thomas Hellström
  2024-12-03 13:12   ` Christian König
  2024-11-15 15:01 ` [PATCH v14 4/8] drm/ttm: Use fault-injection to test error paths Thomas Hellström
                   ` (22 subsequent siblings)
  25 siblings, 1 reply; 54+ messages in thread
From: Thomas Hellström @ 2024-11-15 15:01 UTC (permalink / raw)
  To: intel-xe
  Cc: Thomas Hellström, Christian König,
	Somalapuram Amaranath, Matthew Brost, dri-devel, Paulo Zanoni,
	Simona Vetter

Provide a helper to shrink ttm_tt page-vectors on a per-page
basis. A ttm_backup backend could then in theory get away with
allocating a single temporary page for each struct ttm_tt.

This is accomplished by splitting larger pages before trying to
back them up.

In the future we could allow ttm_backup to handle backing up
large pages as well, but currently there's no benefit in
doing that, since the shmem backup backend would have to
split those anyway to avoid allocating too much temporary
memory, and if the backend instead inserts pages into the
swap-cache, those are split on reclaim by the core.

Due to potential backup- and recovery errors, allow partially swapped-out
struct ttm_tt's, but mark them as swapped out to stop them from being
swapped out a second time. More details in the ttm_pool.c DOC section.
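
As a rough usage sketch, assuming the driver's ttm_tt_create() callback
has assigned tt->backup and CPU mappings are already torn down (bdev and
tt are placeholders), a backup call through the new pool helper could
look like:

	struct ttm_backup_flags flags = { .writeback = true };
	long shrunk;

	shrunk = ttm_pool_backup_tt(&bdev->pool, tt, &flags);
	if (shrunk < 0)
		return shrunk;	/* For example -EBUSY for dma_alloc pools. */

	/*
	 * 'shrunk' pages were backed up or freed. The tt may be only
	 * partially backed up; remaining pages are retained and the
	 * backed-up flag prevents a second backup attempt. A later
	 * ttm_pool_alloc() restores the contents transparently.
	 */

Setting flags.purge instead frees the pages directly to the system
without backing them up.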

v2:
- A couple of cleanups and error fixes in ttm_pool_back_up_tt.
- s/back_up/backup/
- Add a writeback parameter to the exported interface.
v8:
- Use a struct for flags for readability (Matt Brost)
- Address misc other review comments (Matt Brost)
v9:
- Update the kerneldoc for the ttm_tt::backup field.
v10:
- Rebase.
v13:
- Rebase on ttm_backup interface change. Update kerneldoc.
- Rebase and adjust ttm_tt_is_swapped().

Cc: Christian König <christian.koenig@amd.com>
Cc: Somalapuram Amaranath <Amaranath.Somalapuram@amd.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: <dri-devel@lists.freedesktop.org>
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/ttm/ttm_pool.c | 396 +++++++++++++++++++++++++++++++--
 drivers/gpu/drm/ttm/ttm_tt.c   |  37 +++
 include/drm/ttm/ttm_pool.h     |   6 +
 include/drm/ttm/ttm_tt.h       |  32 ++-
 4 files changed, 457 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
index 8504dbe19c1a..f58864439edb 100644
--- a/drivers/gpu/drm/ttm/ttm_pool.c
+++ b/drivers/gpu/drm/ttm/ttm_pool.c
@@ -41,6 +41,7 @@
 #include <asm/set_memory.h>
 #endif
 
+#include <drm/ttm/ttm_backup.h>
 #include <drm/ttm/ttm_pool.h>
 #include <drm/ttm/ttm_tt.h>
 #include <drm/ttm/ttm_bo.h>
@@ -58,6 +59,32 @@ struct ttm_pool_dma {
 	unsigned long vaddr;
 };
 
+/**
+ * struct ttm_pool_tt_restore - State representing restore from backup
+ * @alloced_pages: Total number of already allocated pages for the ttm_tt.
+ * @restored_pages: Number of (sub) pages restored from swap for this
+ *		     chunk of 1 << @order pages.
+ * @first_page: The ttm page ptr representing @old_pages[0].
+ * @caching_divide: Page pointer where subsequent pages are cached.
+ * @old_pages: Backup copy of page pointers that were replaced by the new
+ *	       page allocation.
+ * @pool: The pool used for page allocation while restoring.
+ * @order: The order of the last page allocated while restoring.
+ *
+ * Recovery from backup might fail when we've recovered less than the
+ * full ttm_tt. In order not to lose any data (yet), keep information
+ * around that allows us to restart a failed ttm backup recovery.
+ */
+struct ttm_pool_tt_restore {
+	pgoff_t alloced_pages;
+	pgoff_t restored_pages;
+	struct page **first_page;
+	struct page **caching_divide;
+	struct ttm_pool *pool;
+	unsigned int order;
+	struct page *old_pages[];
+};
+
 static unsigned long page_pool_size;
 
 MODULE_PARM_DESC(page_pool_size, "Number of pages in the WC/UC/DMA pool");
@@ -354,11 +381,105 @@ static unsigned int ttm_pool_page_order(struct ttm_pool *pool, struct page *p)
 	return p->private;
 }
 
+/*
+ * To be able to insert single pages into backup directly,
+ * we need to split multi-order page allocations and make them look
+ * like single-page allocations.
+ */
+static void ttm_pool_split_for_swap(struct ttm_pool *pool, struct page *p)
+{
+	unsigned int order = ttm_pool_page_order(pool, p);
+	pgoff_t nr;
+
+	if (!order)
+		return;
+
+	split_page(p, order);
+	nr = 1UL << order;
+	while (nr--)
+		(p++)->private = 0;
+}
+
+/**
+ * DOC: Partial backup and restoration of a struct ttm_tt.
+ *
+ * Swapout using ttm_backup_backup_page() and swapin using
+ * ttm_backup_copy_page() may fail.
+ * The former most likely due to lack of swap-space or memory, the latter due
+ * to lack of memory or because of signal interruption during waits.
+ *
+ * Backup failure is easily handled by using a ttm_tt pages vector that holds
+ * both swap entries and page pointers. This has to be taken into account when
+ * restoring such a ttm_tt from backup, and when freeing it while backed up.
+ * When restoring, for simplicity, new pages are actually allocated from the
+ * pool and the contents of any old pages are copied in and then the old pages
+ * are released.
+ *
+ * For restoration failures, the struct ttm_pool_tt_restore holds sufficient state
+ * to be able to resume an interrupted restore, and that structure is freed once
+ * the restoration is complete. If the struct ttm_tt is destroyed while there
+ * is a valid struct ttm_pool_tt_restore attached, that is also properly taken
+ * care of.
+ */
+
+static bool ttm_pool_restore_valid(const struct ttm_pool_tt_restore *restore)
+{
+	return restore && restore->restored_pages < (1 << restore->order);
+}
+
+static int ttm_pool_restore_tt(struct ttm_pool_tt_restore *restore,
+			       struct ttm_backup *backup,
+			       struct ttm_operation_ctx *ctx)
+{
+	unsigned int i, nr = 1 << restore->order;
+	int ret = 0;
+
+	if (!ttm_pool_restore_valid(restore))
+		return 0;
+
+	for (i = restore->restored_pages; i < nr; ++i) {
+		struct page *p = restore->old_pages[i];
+
+		if (ttm_backup_page_ptr_is_handle(p)) {
+			unsigned long handle = ttm_backup_page_ptr_to_handle(p);
+
+			if (handle == 0)
+				continue;
+
+			ret = ttm_backup_copy_page
+				(backup, restore->first_page[i],
+				 handle, ctx->interruptible);
+			if (ret)
+				break;
+
+			ttm_backup_drop(backup, handle);
+		} else if (p) {
+			/*
+			 * We could probably avoid splitting the old page
+			 * using clever logic, but ATM we don't care, as
+			 * we prioritize releasing memory ASAP. Note that
+			 * here, the old retained page is always write-back
+			 * cached.
+			 */
+			ttm_pool_split_for_swap(restore->pool, p);
+			copy_highpage(restore->first_page[i], p);
+			__free_pages(p, 0);
+		}
+
+		restore->restored_pages++;
+		restore->old_pages[i] = NULL;
+		cond_resched();
+	}
+
+	return ret;
+}
+
 /* Called when we got a page, either from a pool or newly allocated */
 static int ttm_pool_page_allocated(struct ttm_pool *pool, unsigned int order,
 				   struct page *p, dma_addr_t **dma_addr,
 				   unsigned long *num_pages,
-				   struct page ***pages)
+				   struct page ***pages,
+				   struct ttm_pool_tt_restore *restore)
 {
 	unsigned int i;
 	int r;
@@ -369,6 +490,16 @@ static int ttm_pool_page_allocated(struct ttm_pool *pool, unsigned int order,
 			return r;
 	}
 
+	if (restore) {
+		memcpy(restore->old_pages, *pages,
+		       (1 << order) * sizeof(*restore->old_pages));
+		memset(*pages, 0, (1 << order) * sizeof(**pages));
+		restore->order = order;
+		restore->restored_pages = 0;
+		restore->first_page = *pages;
+		restore->alloced_pages += 1UL << order;
+	}
+
 	*num_pages -= 1 << order;
 	for (i = 1 << order; i; --i, ++(*pages), ++p)
 		**pages = p;
@@ -394,22 +525,39 @@ static void ttm_pool_free_range(struct ttm_pool *pool, struct ttm_tt *tt,
 				pgoff_t start_page, pgoff_t end_page)
 {
 	struct page **pages = &tt->pages[start_page];
+	struct ttm_backup *backup = tt->backup;
 	unsigned int order;
 	pgoff_t i, nr;
 
 	for (i = start_page; i < end_page; i += nr, pages += nr) {
 		struct ttm_pool_type *pt = NULL;
+		struct page *p = *pages;
+
+		if (ttm_backup_page_ptr_is_handle(p)) {
+			unsigned long handle = ttm_backup_page_ptr_to_handle(p);
+
+			nr = 1;
+			if (handle != 0)
+				ttm_backup_drop(backup, handle);
+			continue;
+		}
+
+		if (pool) {
+			order = ttm_pool_page_order(pool, p);
+			nr = (1UL << order);
+			if (tt->dma_address)
+				ttm_pool_unmap(pool, tt->dma_address[i], nr);
 
-		order = ttm_pool_page_order(pool, *pages);
-		nr = (1UL << order);
-		if (tt->dma_address)
-			ttm_pool_unmap(pool, tt->dma_address[i], nr);
+			pt = ttm_pool_select_type(pool, caching, order);
+		} else {
+			order = p->private;
+			nr = (1UL << order);
+		}
 
-		pt = ttm_pool_select_type(pool, caching, order);
 		if (pt)
-			ttm_pool_type_give(pt, *pages);
+			ttm_pool_type_give(pt, p);
 		else
-			ttm_pool_free_page(pool, caching, order, *pages);
+			ttm_pool_free_page(pool, caching, order, p);
 	}
 }
 
@@ -453,9 +601,36 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
 	else
 		gfp_flags |= GFP_HIGHUSER;
 
-	for (order = min_t(unsigned int, MAX_PAGE_ORDER, __fls(num_pages));
-	     num_pages;
-	     order = min_t(unsigned int, order, __fls(num_pages))) {
+	order = min_t(unsigned int, MAX_PAGE_ORDER, __fls(num_pages));
+
+	if (tt->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP) {
+		if (!tt->restore) {
+			gfp_t gfp = GFP_KERNEL | __GFP_NOWARN;
+
+			if (ctx->gfp_retry_mayfail)
+				gfp |= __GFP_RETRY_MAYFAIL;
+
+			tt->restore =
+				kvzalloc(struct_size(tt->restore, old_pages,
+						     (size_t)1 << order), gfp);
+			if (!tt->restore)
+				return -ENOMEM;
+		} else if (ttm_pool_restore_valid(tt->restore)) {
+			struct ttm_pool_tt_restore *restore = tt->restore;
+
+			num_pages -= restore->alloced_pages;
+			order = min_t(unsigned int, order, __fls(num_pages));
+			pages += restore->alloced_pages;
+			r = ttm_pool_restore_tt(restore, tt->backup, ctx);
+			if (r)
+				return r;
+			caching = restore->caching_divide;
+		}
+
+		tt->restore->pool = pool;
+	}
+
+	for (; num_pages; order = min_t(unsigned int, order, __fls(num_pages))) {
 		struct ttm_pool_type *pt;
 
 		page_caching = tt->caching;
@@ -472,11 +647,19 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
 				r = ttm_pool_page_allocated(pool, order, p,
 							    &dma_addr,
 							    &num_pages,
-							    &pages);
+							    &pages,
+							    tt->restore);
 				if (r)
 					goto error_free_page;
 
 				caching = pages;
+				if (ttm_pool_restore_valid(tt->restore)) {
+					r = ttm_pool_restore_tt(tt->restore, tt->backup,
+								ctx);
+					if (r)
+						goto error_free_all;
+				}
+
 				if (num_pages < (1 << order))
 					break;
 
@@ -496,9 +679,17 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
 				caching = pages;
 			}
 			r = ttm_pool_page_allocated(pool, order, p, &dma_addr,
-						    &num_pages, &pages);
+						    &num_pages, &pages,
+						    tt->restore);
 			if (r)
 				goto error_free_page;
+
+			if (ttm_pool_restore_valid(tt->restore)) {
+				r = ttm_pool_restore_tt(tt->restore, tt->backup, ctx);
+				if (r)
+					goto error_free_all;
+			}
+
 			if (PageHighMem(p))
 				caching = pages;
 		}
@@ -517,12 +708,26 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
 	if (r)
 		goto error_free_all;
 
+	if (tt->restore) {
+		kvfree(tt->restore);
+		tt->restore = NULL;
+	}
+
+	if (tt->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP)
+		tt->page_flags &= ~(TTM_TT_FLAG_PRIV_BACKED_UP |
+				    TTM_TT_FLAG_SWAPPED);
+
 	return 0;
 
 error_free_page:
 	ttm_pool_free_page(pool, page_caching, order, p);
 
 error_free_all:
+	if (tt->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP) {
+		tt->restore->caching_divide = caching;
+		return r;
+	}
+
 	num_pages = tt->num_pages - num_pages;
 	caching_divide = caching - tt->pages;
 	ttm_pool_free_range(pool, tt, tt->caching, 0, caching_divide);
@@ -549,6 +754,171 @@ void ttm_pool_free(struct ttm_pool *pool, struct ttm_tt *tt)
 }
 EXPORT_SYMBOL(ttm_pool_free);
 
+/**
+ * ttm_pool_release_backed_up() - Release content of a swapped-out struct ttm_tt
+ * @tt: The struct ttm_tt.
+ *
+ * Release handles with associated content or any remaining pages of
+ * a backed-up struct ttm_tt.
+ */
+void ttm_pool_release_backed_up(struct ttm_tt *tt)
+{
+	struct ttm_backup *backup = tt->backup;
+	struct ttm_pool_tt_restore *restore;
+	pgoff_t i, start_page = 0;
+	unsigned long handle;
+
+	if (!(tt->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP))
+		return;
+
+	restore = tt->restore;
+
+	if (ttm_pool_restore_valid(restore)) {
+		pgoff_t nr = 1UL << restore->order;
+
+		for (i = restore->restored_pages; i < nr; ++i) {
+			struct page *p = restore->old_pages[i];
+
+			if (ttm_backup_page_ptr_is_handle(p)) {
+				handle = ttm_backup_page_ptr_to_handle(p);
+				if (handle == 0)
+					continue;
+
+				ttm_backup_drop(backup, handle);
+			} else if (p) {
+				ttm_pool_split_for_swap(restore->pool, p);
+				__free_pages(p, 0);
+			}
+		}
+	}
+
+	if (restore) {
+		pgoff_t mid = restore->caching_divide - tt->pages;
+
+		start_page = restore->alloced_pages;
+		/* Pages that might be dma-mapped and non-cached */
+		ttm_pool_free_range(restore->pool, tt, tt->caching,
+				    0, mid);
+		/* Pages that might be dma-mapped but cached */
+		ttm_pool_free_range(restore->pool, tt, ttm_cached,
+				    mid, restore->alloced_pages);
+	}
+
+	/* Shrunken pages. Cached and not dma-mapped. */
+	ttm_pool_free_range(NULL, tt, ttm_cached, start_page, tt->num_pages);
+
+	if (restore) {
+		kvfree(restore);
+		tt->restore = NULL;
+	}
+
+	tt->page_flags &= ~(TTM_TT_FLAG_PRIV_BACKED_UP | TTM_TT_FLAG_SWAPPED);
+}
+
+/**
+ * ttm_pool_backup_tt() - Back up or purge a struct ttm_tt
+ * @pool: The pool used when allocating the struct ttm_tt.
+ * @ttm: The struct ttm_tt.
+ * @flags: Flags to govern the backup behaviour.
+ *
+ * Back up or purge a struct ttm_tt. If @flags->purge is true, then
+ * all pages will be freed directly to the system rather than to the pool
+ * they were allocated from, making the function behave similarly to
+ * ttm_pool_free(). If @flags->purge is false the pages will be backed up
+ * instead, exchanged for handles.
+ * A subsequent call to ttm_pool_alloc() will then read back the content and
+ * a subsequent call to ttm_pool_release_backed_up() will drop it.
+ * If backup of a page fails for whatever reason, @ttm will still be
+ * partially backed up, retaining those pages for which backup fails.
+ *
+ * Return: Number of pages actually backed up or freed, or negative
+ * error code on error.
+ */
+long ttm_pool_backup_tt(struct ttm_pool *pool, struct ttm_tt *ttm,
+			const struct ttm_backup_flags *flags)
+{
+	struct ttm_backup *backup = ttm->backup;
+	struct page *page;
+	unsigned long handle;
+	gfp_t alloc_gfp;
+	gfp_t gfp;
+	int ret = 0;
+	pgoff_t shrunken = 0;
+	pgoff_t i, num_pages;
+
+	if ((!ttm_backup_bytes_avail() && !flags->purge) ||
+	    pool->use_dma_alloc ||
+	    (ttm->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP))
+		return -EBUSY;
+
+#ifdef CONFIG_X86
+	/* Anything returned to the system needs to be cached. */
+	if (ttm->caching != ttm_cached)
+		set_pages_array_wb(ttm->pages, ttm->num_pages);
+#endif
+
+	if (ttm->dma_address || flags->purge) {
+		for (i = 0; i < ttm->num_pages; i += num_pages) {
+			unsigned int order;
+
+			page = ttm->pages[i];
+			if (unlikely(!page)) {
+				num_pages = 1;
+				continue;
+			}
+
+			order = ttm_pool_page_order(pool, page);
+			num_pages = 1UL << order;
+			if (ttm->dma_address)
+				ttm_pool_unmap(pool, ttm->dma_address[i],
+					       num_pages);
+			if (flags->purge) {
+				shrunken += num_pages;
+				page->private = 0;
+				__free_pages(page, order);
+				memset(ttm->pages + i, 0,
+				       num_pages * sizeof(*ttm->pages));
+			}
+		}
+	}
+
+	if (flags->purge)
+		return shrunken;
+
+	if (pool->use_dma32)
+		gfp = GFP_DMA32;
+	else
+		gfp = GFP_HIGHUSER;
+
+	alloc_gfp = GFP_KERNEL | __GFP_HIGH | __GFP_NOWARN | __GFP_RETRY_MAYFAIL;
+
+	for (i = 0; i < ttm->num_pages; ++i) {
+		page = ttm->pages[i];
+		if (unlikely(!page))
+			continue;
+
+		ttm_pool_split_for_swap(pool, page);
+
+		handle = ttm_backup_backup_page(backup, page, flags->writeback, i,
+						gfp, alloc_gfp);
+		if (handle) {
+			ttm->pages[i] = ttm_backup_handle_to_page_ptr(handle);
+			put_page(page);
+			shrunken++;
+		} else {
+			/* We allow partially shrunken tts */
+			ret = -ENOMEM;
+			break;
+		}
+	}
+
+	if (shrunken)
+		ttm->page_flags |= (TTM_TT_FLAG_PRIV_BACKED_UP |
+				    TTM_TT_FLAG_SWAPPED);
+
+	return shrunken ? shrunken : ret;
+}
+
 /**
  * ttm_pool_init - Initialize a pool
  *
diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c
index 3baf215eca23..dd4eabe4ad79 100644
--- a/drivers/gpu/drm/ttm/ttm_tt.c
+++ b/drivers/gpu/drm/ttm/ttm_tt.c
@@ -40,6 +40,7 @@
 #include <drm/drm_cache.h>
 #include <drm/drm_device.h>
 #include <drm/drm_util.h>
+#include <drm/ttm/ttm_backup.h>
 #include <drm/ttm/ttm_bo.h>
 #include <drm/ttm/ttm_tt.h>
 
@@ -158,6 +159,8 @@ static void ttm_tt_init_fields(struct ttm_tt *ttm,
 	ttm->swap_storage = NULL;
 	ttm->sg = bo->sg;
 	ttm->caching = caching;
+	ttm->restore = NULL;
+	ttm->backup = NULL;
 }
 
 int ttm_tt_init(struct ttm_tt *ttm, struct ttm_buffer_object *bo,
@@ -182,6 +185,12 @@ void ttm_tt_fini(struct ttm_tt *ttm)
 		fput(ttm->swap_storage);
 	ttm->swap_storage = NULL;
 
+	ttm_pool_release_backed_up(ttm);
+	if (ttm->backup) {
+		ttm_backup_fini(ttm->backup);
+		ttm->backup = NULL;
+	}
+
 	if (ttm->pages)
 		kvfree(ttm->pages);
 	else
@@ -253,6 +262,34 @@ int ttm_tt_swapin(struct ttm_tt *ttm)
 }
 EXPORT_SYMBOL_FOR_TESTS_ONLY(ttm_tt_swapin);
 
+/**
+ * ttm_tt_backup() - Helper to back up a struct ttm_tt.
+ * @bdev: The TTM device.
+ * @tt: The struct ttm_tt.
+ * @flags: Flags that govern the backup behaviour.
+ *
+ * Update the page accounting and call ttm_pool_backup_tt() to free pages
+ * or back them up.
+ *
+ * Return: Number of pages freed or swapped out, or negative error code on
+ * error.
+ */
+long ttm_tt_backup(struct ttm_device *bdev, struct ttm_tt *tt,
+		   const struct ttm_backup_flags flags)
+{
+	long ret;
+
+	if (WARN_ON(IS_ERR_OR_NULL(tt->backup)))
+		return 0;
+
+	ret = ttm_pool_backup_tt(&bdev->pool, tt, &flags);
+
+	if (ret > 0)
+		tt->page_flags &= ~TTM_TT_FLAG_PRIV_POPULATED;
+
+	return ret;
+}
+
 /**
  * ttm_tt_swapout - swap out tt object
  *
diff --git a/include/drm/ttm/ttm_pool.h b/include/drm/ttm/ttm_pool.h
index 160d954a261e..3112a4be835c 100644
--- a/include/drm/ttm/ttm_pool.h
+++ b/include/drm/ttm/ttm_pool.h
@@ -33,6 +33,7 @@
 
 struct device;
 struct seq_file;
+struct ttm_backup_flags;
 struct ttm_operation_ctx;
 struct ttm_pool;
 struct ttm_tt;
@@ -89,6 +90,11 @@ void ttm_pool_fini(struct ttm_pool *pool);
 
 int ttm_pool_debugfs(struct ttm_pool *pool, struct seq_file *m);
 
+void ttm_pool_release_backed_up(struct ttm_tt *tt);
+
+long ttm_pool_backup_tt(struct ttm_pool *pool, struct ttm_tt *ttm,
+			const struct ttm_backup_flags *flags);
+
 int ttm_pool_mgr_init(unsigned long num_pages);
 void ttm_pool_mgr_fini(void);
 
diff --git a/include/drm/ttm/ttm_tt.h b/include/drm/ttm/ttm_tt.h
index 991edafdb2dd..6ca2fc7b2a26 100644
--- a/include/drm/ttm/ttm_tt.h
+++ b/include/drm/ttm/ttm_tt.h
@@ -32,11 +32,13 @@
 #include <drm/ttm/ttm_caching.h>
 #include <drm/ttm/ttm_kmap_iter.h>
 
+struct ttm_backup;
 struct ttm_device;
 struct ttm_tt;
 struct ttm_resource;
 struct ttm_buffer_object;
 struct ttm_operation_ctx;
+struct ttm_pool_tt_restore;
 
 /**
  * struct ttm_tt - This is a structure holding the pages, caching- and aperture
@@ -88,6 +90,9 @@ struct ttm_tt {
 	 * TTM_TT_FLAG_PRIV_POPULATED: TTM internal only. DO NOT USE. This is
 	 * set by TTM after ttm_tt_populate() has successfully returned, and is
 	 * then unset when TTM calls ttm_tt_unpopulate().
+	 *
+	 * TTM_TT_FLAG_PRIV_BACKED_UP: TTM internal only. This is set if the
+	 * struct ttm_tt has been (possibly partially) backed up.
 	 */
 #define TTM_TT_FLAG_SWAPPED		BIT(0)
 #define TTM_TT_FLAG_ZERO_ALLOC		BIT(1)
@@ -96,6 +101,7 @@ struct ttm_tt {
 #define TTM_TT_FLAG_DECRYPTED		BIT(4)
 
 #define TTM_TT_FLAG_PRIV_POPULATED	BIT(5)
+#define TTM_TT_FLAG_PRIV_BACKED_UP	BIT(6)
 	uint32_t page_flags;
 	/** @num_pages: Number of pages in the page array. */
 	uint32_t num_pages;
@@ -105,11 +111,20 @@ struct ttm_tt {
 	dma_addr_t *dma_address;
 	/** @swap_storage: Pointer to shmem struct file for swap storage. */
 	struct file *swap_storage;
+	/**
+	 * @backup: Pointer to backup struct for backed up tts.
+	 * Could be unified with @swap_storage. Meanwhile, the driver's
+	 * ttm_tt_create() callback is responsible for assigning
+	 * this field.
+	 */
+	struct ttm_backup *backup;
 	/**
 	 * @caching: The current caching state of the pages, see enum
 	 * ttm_caching.
 	 */
 	enum ttm_caching caching;
+	/** @restore: Partial restoration from backup state. TTM private */
+	struct ttm_pool_tt_restore *restore;
 };
 
 /**
@@ -131,7 +146,7 @@ static inline bool ttm_tt_is_populated(struct ttm_tt *tt)
 
 static inline bool ttm_tt_is_swapped(const struct ttm_tt *tt)
 {
-	return tt->page_flags & TTM_TT_FLAG_SWAPPED;
+	return tt->page_flags & (TTM_TT_FLAG_SWAPPED | TTM_TT_FLAG_PRIV_BACKED_UP);
 }
 
 /**
@@ -235,6 +250,21 @@ void ttm_tt_mgr_init(unsigned long num_pages, unsigned long num_dma32_pages);
 struct ttm_kmap_iter *ttm_kmap_iter_tt_init(struct ttm_kmap_iter_tt *iter_tt,
 					    struct ttm_tt *tt);
 unsigned long ttm_tt_pages_limit(void);
+
+/**
+ * struct ttm_backup_flags - Flags to govern backup behaviour.
+ * @purge: Free pages without backing up. Bypass pools.
+ * @writeback: Attempt to copy contents directly to swap space, even
+ * if that means blocking on writes to external memory.
+ */
+struct ttm_backup_flags {
+	u32 purge : 1;
+	u32 writeback : 1;
+};
+
+long ttm_tt_backup(struct ttm_device *bdev, struct ttm_tt *tt,
+		   const struct ttm_backup_flags flags);
+
 #if IS_ENABLED(CONFIG_AGP)
 #include <linux/agp_backend.h>
 
-- 
2.46.2


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH v14 4/8] drm/ttm: Use fault-injection to test error paths
  2024-11-15 15:01 [PATCH v14 0/8] TTM shrinker helpers and xe buffer object shrinker Thomas Hellström
                   ` (2 preceding siblings ...)
  2024-11-15 15:01 ` [PATCH v14 3/8] drm/ttm/pool: Provide a helper to shrink pages Thomas Hellström
@ 2024-11-15 15:01 ` Thomas Hellström
  2024-11-15 15:01 ` [PATCH v14 5/8] drm/ttm: Add a macro to perform LRU iteration Thomas Hellström
                   ` (21 subsequent siblings)
  25 siblings, 0 replies; 54+ messages in thread
From: Thomas Hellström @ 2024-11-15 15:01 UTC (permalink / raw)
  To: intel-xe
  Cc: Thomas Hellström, Christian König,
	Somalapuram Amaranath, Matthew Brost, dri-devel, Paulo Zanoni,
	Simona Vetter

Use fault-injection to test partial TTM swapout and interrupted swapin.
Return -EINTR for swapin to test the caller's ability to handle and
restart the swapin, and on swapout perform a partial swapout to test the
swapin and release_shrunken functionality.

v8:
- Use the core fault-injection system.
v9:
- Fix compilation failure for !CONFIG_FAULT_INJECTION

Cc: Christian König <christian.koenig@amd.com>
Cc: Somalapuram Amaranath <Amaranath.Somalapuram@amd.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: <dri-devel@lists.freedesktop.org>
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com> #v7
---
 drivers/gpu/drm/ttm/ttm_pool.c | 27 ++++++++++++++++++++++++++-
 1 file changed, 26 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
index f58864439edb..32c3ee255eb2 100644
--- a/drivers/gpu/drm/ttm/ttm_pool.c
+++ b/drivers/gpu/drm/ttm/ttm_pool.c
@@ -48,6 +48,13 @@
 
 #include "ttm_module.h"
 
+#ifdef CONFIG_FAULT_INJECTION
+#include <linux/fault-inject.h>
+static DECLARE_FAULT_ATTR(backup_fault_inject);
+#else
+#define should_fail(...) false
+#endif
+
 /**
  * struct ttm_pool_dma - Helper object for coherent DMA mappings
  *
@@ -431,6 +438,7 @@ static int ttm_pool_restore_tt(struct ttm_pool_tt_restore *restore,
 			       struct ttm_backup *backup,
 			       struct ttm_operation_ctx *ctx)
 {
+	static unsigned long __maybe_unused swappedin;
 	unsigned int i, nr = 1 << restore->order;
 	int ret = 0;
 
@@ -446,6 +454,12 @@ static int ttm_pool_restore_tt(struct ttm_pool_tt_restore *restore,
 			if (handle == 0)
 				continue;
 
+			if (IS_ENABLED(CONFIG_FAULT_INJECTION) && ctx->interruptible &&
+			    should_fail(&backup_fault_inject, 1)) {
+				ret = -EINTR;
+				break;
+			}
+
 			ret = ttm_backup_copy_page
 				(backup, restore->first_page[i],
 				 handle, ctx->interruptible);
@@ -892,7 +906,14 @@ long ttm_pool_backup_tt(struct ttm_pool *pool, struct ttm_tt *ttm,
 
 	alloc_gfp = GFP_KERNEL | __GFP_HIGH | __GFP_NOWARN | __GFP_RETRY_MAYFAIL;
 
-	for (i = 0; i < ttm->num_pages; ++i) {
+	num_pages = ttm->num_pages;
+
+	/* Pretend doing fault injection by shrinking only half of the pages. */
+
+	if (IS_ENABLED(CONFIG_FAULT_INJECTION) && should_fail(&backup_fault_inject, 1))
+		num_pages = DIV_ROUND_UP(num_pages, 2);
+
+	for (i = 0; i < num_pages; ++i) {
 		page = ttm->pages[i];
 		if (unlikely(!page))
 			continue;
@@ -1180,6 +1201,10 @@ int ttm_pool_mgr_init(unsigned long num_pages)
 			    &ttm_pool_debugfs_globals_fops);
 	debugfs_create_file("page_pool_shrink", 0400, ttm_debugfs_root, NULL,
 			    &ttm_pool_debugfs_shrink_fops);
+#ifdef CONFIG_FAULT_INJECTION
+	fault_create_debugfs_attr("backup_fault_inject", ttm_debugfs_root,
+				  &backup_fault_inject);
+#endif
 #endif
 
 	mm_shrinker = shrinker_alloc(0, "drm-ttm_pool");
-- 
2.46.2


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH v14 5/8] drm/ttm: Add a macro to perform LRU iteration
  2024-11-15 15:01 [PATCH v14 0/8] TTM shrinker helpers and xe buffer object shrinker Thomas Hellström
                   ` (3 preceding siblings ...)
  2024-11-15 15:01 ` [PATCH v14 4/8] drm/ttm: Use fault-injection to test error paths Thomas Hellström
@ 2024-11-15 15:01 ` Thomas Hellström
  2024-11-15 15:01 ` [PATCH v14 6/8] drm/ttm: Add helpers for shrinking Thomas Hellström
                   ` (20 subsequent siblings)
  25 siblings, 0 replies; 54+ messages in thread
From: Thomas Hellström @ 2024-11-15 15:01 UTC (permalink / raw)
  To: intel-xe
  Cc: Thomas Hellström, Matthew Brost, Somalapuram Amaranath,
	Christian König, Paulo Zanoni, Simona Vetter, dri-devel

Following the design direction communicated here:

https://lore.kernel.org/linux-mm/b7491378-defd-4f1c-31e2-29e4c77e2d67@amd.com/T/#ma918844aa8a6efe8768fdcda0c6590d5c93850c9

Export an LRU walker for driver shrinker use. The walker
initially supports only trylocking, since that's the
method used by shrinkers. The walker makes use of
scoped_guard() to allow exiting from the LRU walk loop
without performing any explicit unlocking or
cleanup.
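
A minimal sketch of the intended driver-side use, with man and ctx as
placeholders and the argument order taken from the kerneldoc of the
macro below:

	struct ttm_operation_ctx ctx = { .no_wait_gpu = true };
	struct ttm_bo_lru_cursor curs;
	struct ttm_buffer_object *bo;

	ttm_bo_lru_for_each_reserved_guarded(&curs, man, &ctx, bo) {
		/*
		 * bo is trylocked and refcounted here. Breaking out of
		 * the loop is fine; the scoped_guard()-based cleanup
		 * releases the cursor and any held reservation
		 * automatically.
		 */
	}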

v8:
- Split out from another patch.
- Use a struct for bool arguments to increase readability (Matt Brost).
- Unmap user-space cpu-mappings before shrinking pages.
- Explain non-fatal error codes (Matt Brost)

v10:
- Instead of using the existing helper, wrap the interface inside out and
  provide a loop to de-midlayer the LRU iteration (Christian König).
- Remove the R-B from Matt Brost since the patch was significantly changed.

v11:
- Split the patch up to include just the LRU walk helper.

v12:
- Indent after scoped_guard() (Matt Brost)

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/ttm/ttm_bo_util.c | 140 +++++++++++++++++++++++++++++-
 include/drm/ttm/ttm_bo.h          |  71 +++++++++++++++
 2 files changed, 207 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
index 917096bd5f68..0cac02a9764c 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_util.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
@@ -769,12 +769,10 @@ int ttm_bo_pipeline_gutting(struct ttm_buffer_object *bo)
 	return ret;
 }
 
-static bool ttm_lru_walk_trylock(struct ttm_lru_walk *walk,
+static bool ttm_lru_walk_trylock(struct ttm_operation_ctx *ctx,
 				 struct ttm_buffer_object *bo,
 				 bool *needs_unlock)
 {
-	struct ttm_operation_ctx *ctx = walk->ctx;
-
 	*needs_unlock = false;
 
 	if (dma_resv_trylock(bo->base.resv)) {
@@ -877,7 +875,7 @@ s64 ttm_lru_walk_for_evict(struct ttm_lru_walk *walk, struct ttm_device *bdev,
 		 * since if we do it the other way around, and the trylock fails,
 		 * we need to drop the lru lock to put the bo.
 		 */
-		if (ttm_lru_walk_trylock(walk, bo, &bo_needs_unlock))
+		if (ttm_lru_walk_trylock(walk->ctx, bo, &bo_needs_unlock))
 			bo_locked = true;
 		else if (!walk->ticket || walk->ctx->no_wait_gpu ||
 			 walk->trylock_only)
@@ -920,3 +918,137 @@ s64 ttm_lru_walk_for_evict(struct ttm_lru_walk *walk, struct ttm_device *bdev,
 
 	return progress;
 }
+EXPORT_SYMBOL(ttm_lru_walk_for_evict);
+
+static void ttm_bo_lru_cursor_cleanup_bo(struct ttm_bo_lru_cursor *curs)
+{
+	struct ttm_buffer_object *bo = curs->bo;
+
+	if (bo) {
+		if (curs->needs_unlock)
+			dma_resv_unlock(bo->base.resv);
+		ttm_bo_put(bo);
+		curs->bo = NULL;
+	}
+}
+
+/**
+ * ttm_bo_lru_cursor_fini() - Stop using a struct ttm_bo_lru_cursor
+ * and clean up any iteration it was used for.
+ * @curs: The cursor.
+ */
+void ttm_bo_lru_cursor_fini(struct ttm_bo_lru_cursor *curs)
+{
+	spinlock_t *lru_lock = &curs->res_curs.man->bdev->lru_lock;
+
+	ttm_bo_lru_cursor_cleanup_bo(curs);
+	spin_lock(lru_lock);
+	ttm_resource_cursor_fini(&curs->res_curs);
+	spin_unlock(lru_lock);
+}
+EXPORT_SYMBOL(ttm_bo_lru_cursor_fini);
+
+/**
+ * ttm_bo_lru_cursor_init() - Initialize a struct ttm_bo_lru_cursor
+ * @curs: The ttm_bo_lru_cursor to initialize.
+ * @man: The ttm resource_manager whose LRU lists to iterate over.
+ * @ctx: The ttm_operation_ctx to govern the locking.
+ *
+ * Initialize a struct ttm_bo_lru_cursor. Currently only trylocking
+ * or prelocked buffer objects are available as detailed by
+ * @ctx::resv and @ctx::allow_res_evict. Ticketlocking is not
+ * supported.
+ *
+ * Return: Pointer to @curs. The function does not fail.
+ */
+struct ttm_bo_lru_cursor *
+ttm_bo_lru_cursor_init(struct ttm_bo_lru_cursor *curs,
+		       struct ttm_resource_manager *man,
+		       struct ttm_operation_ctx *ctx)
+{
+	memset(curs, 0, sizeof(*curs));
+	ttm_resource_cursor_init(&curs->res_curs, man);
+	curs->ctx = ctx;
+
+	return curs;
+}
+EXPORT_SYMBOL(ttm_bo_lru_cursor_init);
+
+static struct ttm_buffer_object *
+ttm_bo_from_res_reserved(struct ttm_resource *res, struct ttm_bo_lru_cursor *curs)
+{
+	struct ttm_buffer_object *bo = res->bo;
+
+	if (!ttm_lru_walk_trylock(curs->ctx, bo, &curs->needs_unlock))
+		return NULL;
+
+	if (!ttm_bo_get_unless_zero(bo)) {
+		if (curs->needs_unlock)
+			dma_resv_unlock(bo->base.resv);
+		return NULL;
+	}
+
+	curs->bo = bo;
+	return bo;
+}
+
+/**
+ * ttm_bo_lru_cursor_next() - Continue iterating a manager's LRU lists
+ * to find and lock a buffer object.
+ * @curs: The cursor initialized using ttm_bo_lru_cursor_init() and
+ * ttm_bo_lru_cursor_first().
+ *
+ * Return: A pointer to a locked and reference-counted buffer object,
+ * or NULL if none could be found and looping should be terminated.
+ */
+struct ttm_buffer_object *ttm_bo_lru_cursor_next(struct ttm_bo_lru_cursor *curs)
+{
+	spinlock_t *lru_lock = &curs->res_curs.man->bdev->lru_lock;
+	struct ttm_resource *res = NULL;
+	struct ttm_buffer_object *bo;
+
+	ttm_bo_lru_cursor_cleanup_bo(curs);
+
+	spin_lock(lru_lock);
+	for (;;) {
+		res = ttm_resource_manager_next(&curs->res_curs);
+		if (!res)
+			break;
+
+		bo = ttm_bo_from_res_reserved(res, curs);
+		if (bo)
+			break;
+	}
+
+	spin_unlock(lru_lock);
+	return res ? bo : NULL;
+}
+EXPORT_SYMBOL(ttm_bo_lru_cursor_next);
+
+/**
+ * ttm_bo_lru_cursor_first() - Start iterating a manager's LRU lists
+ * to find and lock a buffer object.
+ * @curs: The cursor initialized using ttm_bo_lru_cursor_init().
+ *
+ * Return: A pointer to a locked and reference-counted buffer object,
+ * or NULL if none could be found and looping should be terminated.
+ */
+struct ttm_buffer_object *ttm_bo_lru_cursor_first(struct ttm_bo_lru_cursor *curs)
+{
+	spinlock_t *lru_lock = &curs->res_curs.man->bdev->lru_lock;
+	struct ttm_buffer_object *bo;
+	struct ttm_resource *res;
+
+	spin_lock(lru_lock);
+	res = ttm_resource_manager_first(&curs->res_curs);
+	if (!res) {
+		spin_unlock(lru_lock);
+		return NULL;
+	}
+
+	bo = ttm_bo_from_res_reserved(res, curs);
+	spin_unlock(lru_lock);
+
+	return bo ? bo : ttm_bo_lru_cursor_next(curs);
+}
+EXPORT_SYMBOL(ttm_bo_lru_cursor_first);
diff --git a/include/drm/ttm/ttm_bo.h b/include/drm/ttm/ttm_bo.h
index 5804408815be..17d5ee049a8e 100644
--- a/include/drm/ttm/ttm_bo.h
+++ b/include/drm/ttm/ttm_bo.h
@@ -465,4 +465,75 @@ void ttm_bo_tt_destroy(struct ttm_buffer_object *bo);
 int ttm_bo_populate(struct ttm_buffer_object *bo,
 		    struct ttm_operation_ctx *ctx);
 
+/* Driver LRU walk helpers initially targeted for shrinking. */
+
+/**
+ * struct ttm_bo_lru_cursor - Iterator cursor for TTM LRU list looping
+ */
+struct ttm_bo_lru_cursor {
+	/** @res_curs: Embedded struct ttm_resource_cursor. */
+	struct ttm_resource_cursor res_curs;
+	/**
+	 * @ctx: The struct ttm_operation_ctx used while looping;
+	 * governs the locking mode.
+	 */
+	struct ttm_operation_ctx *ctx;
+	/**
+	 * @bo: Buffer object pointer if a buffer object is refcounted,
+	 * NULL otherwise.
+	 */
+	struct ttm_buffer_object *bo;
+	/**
+	 * @needs_unlock: Valid iff @bo != NULL. The bo resv needs
+	 * unlock before the next iteration or after loop exit.
+	 */
+	bool needs_unlock;
+};
+
+void ttm_bo_lru_cursor_fini(struct ttm_bo_lru_cursor *curs);
+
+struct ttm_bo_lru_cursor *
+ttm_bo_lru_cursor_init(struct ttm_bo_lru_cursor *curs,
+		       struct ttm_resource_manager *man,
+		       struct ttm_operation_ctx *ctx);
+
+struct ttm_buffer_object *ttm_bo_lru_cursor_first(struct ttm_bo_lru_cursor *curs);
+
+struct ttm_buffer_object *ttm_bo_lru_cursor_next(struct ttm_bo_lru_cursor *curs);
+
+/*
+ * Defines needed to use autocleanup (linux/cleanup.h) with struct ttm_bo_lru_cursor.
+ */
+DEFINE_CLASS(ttm_bo_lru_cursor, struct ttm_bo_lru_cursor *,
+	     if (_T) {ttm_bo_lru_cursor_fini(_T); },
+	     ttm_bo_lru_cursor_init(curs, man, ctx),
+	     struct ttm_bo_lru_cursor *curs, struct ttm_resource_manager *man,
+	     struct ttm_operation_ctx *ctx);
+static inline void *
+class_ttm_bo_lru_cursor_lock_ptr(class_ttm_bo_lru_cursor_t *_T)
+{ return *_T; }
+
+/**
+ * ttm_bo_lru_for_each_reserved_guarded() - Iterate over buffer objects owning
+ * resources on LRU lists.
+ * @_cursor: struct ttm_bo_lru_cursor to use for the iteration.
+ * @_man: The resource manager whose LRU lists to iterate over.
+ * @_ctx: The struct ttm_operation_ctx to govern the @_bo locking.
+ * @_bo: The struct ttm_buffer_object pointer pointing to the buffer object
+ * for the current iteration.
+ *
+ * Iterate over all resources of @_man and for each resource, attempt to
+ * reference and lock (using the locking mode detailed in @_ctx) the buffer
+ * object it points to. If successful, assign @_bo to the address of the
+ * buffer object and update @_cursor. The iteration is guarded in the
+ * sense that @_cursor will be initialized before the loop starts and cleaned
+ * up when the loop terminates, even if the loop is exited prematurely by,
+ * for example, a return or break statement. Exiting the loop will also unlock
+ * (if needed) and unreference @_bo.
+ */
+#define ttm_bo_lru_for_each_reserved_guarded(_cursor, _man, _ctx, _bo)	\
+	scoped_guard(ttm_bo_lru_cursor, _cursor, _man, _ctx)		\
+		for ((_bo) = ttm_bo_lru_cursor_first(_cursor); (_bo);	\
+		     (_bo) = ttm_bo_lru_cursor_next(_cursor))
+
 #endif
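
[Editorial illustration, not part of the patch: a minimal sketch of how a
driver might use the guarded iterator above, assuming the usual
<drm/ttm/ttm_bo.h> and <drm/ttm/ttm_device.h> includes. The helper name
my_count_lru_bos() and the choice of the TTM_PL_TT manager are hypothetical;
the cursor, macro and ttm_manager_type() are existing/introduced TTM API.]

/* Hypothetical usage sketch only -- not part of this patch. */
static unsigned long my_count_lru_bos(struct ttm_device *bdev)
{
	struct ttm_operation_ctx ctx = {
		.interruptible = false,
		.no_wait_gpu = true,
	};
	struct ttm_resource_manager *man = ttm_manager_type(bdev, TTM_PL_TT);
	struct ttm_bo_lru_cursor curs;
	struct ttm_buffer_object *bo;
	unsigned long count = 0;

	/*
	 * Each @bo yielded here is trylocked and refcounted; the cursor is
	 * cleaned up and the last @bo unlocked and unreferenced
	 * automatically, even on break or return.
	 */
	ttm_bo_lru_for_each_reserved_guarded(&curs, man, &ctx, bo)
		count++;

	return count;
}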
-- 
2.46.2


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH v14 6/8] drm/ttm: Add helpers for shrinking
  2024-11-15 15:01 [PATCH v14 0/8] TTM shrinker helpers and xe buffer object shrinker Thomas Hellström
                   ` (4 preceding siblings ...)
  2024-11-15 15:01 ` [PATCH v14 5/8] drm/ttm: Add a macro to perform LRU iteration Thomas Hellström
@ 2024-11-15 15:01 ` Thomas Hellström
  2024-11-15 15:01 ` [PATCH v14 7/8] drm/xe: Add a shrinker for xe bos Thomas Hellström
                   ` (19 subsequent siblings)
  25 siblings, 0 replies; 54+ messages in thread
From: Thomas Hellström @ 2024-11-15 15:01 UTC (permalink / raw)
  To: intel-xe
  Cc: Thomas Hellström, Matthew Brost, Somalapuram Amaranath,
	Christian König, Paulo Zanoni, Simona Vetter, dri-devel

Add a number of helpers for shrinking that access core TTM and
core MM functionality in a way that makes them unsuitable for
driver open-coding.
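
To show how these helpers are intended to compose, here is an illustrative
sketch only (not part of the patch): my_scan_one_manager() is a hypothetical
driver function with a simplified writeback policy; the real user is the xe
shrinker added in the next patch (xe_shrinker_walk()).

/* Illustrative sketch only; see xe_shrinker_walk() in the next patch. */
static s64 my_scan_one_manager(struct ttm_resource_manager *man,
			       unsigned long to_scan)
{
	struct ttm_operation_ctx ctx = {
		.interruptible = false,
		/* In direct reclaim, avoid stalling on gpu waits. */
		.no_wait_gpu = ttm_bo_shrink_avoid_wait(),
	};
	struct ttm_bo_shrink_flags flags = {
		.purge = false,
		/* Simplified: the real user also checks __GFP_IO. */
		.writeback = !ctx.no_wait_gpu,
		.allow_move = true,
	};
	struct ttm_bo_lru_cursor curs;
	struct ttm_buffer_object *bo;
	unsigned long scanned = 0;
	s64 freed = 0;

	ttm_bo_lru_for_each_reserved_guarded(&curs, man, &ctx, bo) {
		long lret;

		if (!ttm_bo_shrink_suitable(bo, &ctx))
			continue;

		scanned += bo->ttm->num_pages;
		lret = ttm_bo_shrink(&ctx, bo, flags);
		if (lret < 0)
			return lret;

		freed += lret;
		if (scanned >= to_scan)
			break;
	}

	return freed;
}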

v11:
- New patch (split off from previous) and additional helpers.
v13:
- Adapt to ttm_backup interface change.
- Take resource off LRU when backed up.

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com> #v11
---
 drivers/gpu/drm/ttm/ttm_bo_util.c | 107 +++++++++++++++++++++++++++++-
 drivers/gpu/drm/ttm/ttm_tt.c      |  29 ++++++++
 include/drm/ttm/ttm_bo.h          |  21 ++++++
 include/drm/ttm/ttm_tt.h          |   2 +
 4 files changed, 158 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
index 0cac02a9764c..15cab9bda17f 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_util.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
@@ -28,7 +28,7 @@
 /*
  * Authors: Thomas Hellstrom <thellstrom-at-vmware-dot-com>
  */
-
+#include <linux/swap.h>
 #include <linux/vmalloc.h>
 
 #include <drm/ttm/ttm_bo.h>
@@ -1052,3 +1052,108 @@ struct ttm_buffer_object *ttm_bo_lru_cursor_first(struct ttm_bo_lru_cursor *curs
 	return bo ? bo : ttm_bo_lru_cursor_next(curs);
 }
 EXPORT_SYMBOL(ttm_bo_lru_cursor_first);
+
+/**
+ * ttm_bo_shrink() - Helper to shrink a ttm buffer object.
+ * @ctx: The struct ttm_operation_ctx used for the shrinking operation.
+ * @bo: The buffer object.
+ * @flags: Flags governing the shrinking behaviour.
+ *
+ * The function uses the ttm_tt_backup() functionality to back up or
+ * purge a struct ttm_tt. If the bo is not in system memory, it's first
+ * moved there.
+ *
+ * Return: The number of pages shrunken or purged, or
+ * negative error code on failure.
+ */
+long ttm_bo_shrink(struct ttm_operation_ctx *ctx, struct ttm_buffer_object *bo,
+		   const struct ttm_bo_shrink_flags flags)
+{
+	static const struct ttm_place sys_placement_flags = {
+		.fpfn = 0,
+		.lpfn = 0,
+		.mem_type = TTM_PL_SYSTEM,
+		.flags = 0,
+	};
+	static struct ttm_placement sys_placement = {
+		.num_placement = 1,
+		.placement = &sys_placement_flags,
+	};
+	struct ttm_tt *tt = bo->ttm;
+	long lret;
+
+	dma_resv_assert_held(bo->base.resv);
+
+	if (flags.allow_move && bo->resource->mem_type != TTM_PL_SYSTEM) {
+		int ret = ttm_bo_validate(bo, &sys_placement, ctx);
+
+		/* Consider -ENOMEM and -ENOSPC non-fatal. */
+		if (ret) {
+			if (ret == -ENOMEM || ret == -ENOSPC)
+				ret = -EBUSY;
+			return ret;
+		}
+	}
+
+	ttm_bo_unmap_virtual(bo);
+	lret = ttm_bo_wait_ctx(bo, ctx);
+	if (lret < 0)
+		return lret;
+
+	if (bo->bulk_move) {
+		spin_lock(&bo->bdev->lru_lock);
+		ttm_resource_del_bulk_move(bo->resource, bo);
+		spin_unlock(&bo->bdev->lru_lock);
+	}
+
+	lret = ttm_tt_backup(bo->bdev, tt, (struct ttm_backup_flags)
+			     {.purge = flags.purge,
+			      .writeback = flags.writeback});
+
+	if (lret <= 0 && bo->bulk_move) {
+		spin_lock(&bo->bdev->lru_lock);
+		ttm_resource_add_bulk_move(bo->resource, bo);
+		spin_unlock(&bo->bdev->lru_lock);
+	}
+
+	if (lret < 0 && lret != -EINTR)
+		return -EBUSY;
+
+	return lret;
+}
+EXPORT_SYMBOL(ttm_bo_shrink);
+
+/**
+ * ttm_bo_shrink_suitable() - Whether a bo is suitable for shrinking
+ * @ctx: The struct ttm_operation_ctx governing the shrinking.
+ * @bo: The candidate for shrinking.
+ *
+ * Check whether the object, given the information available to TTM,
+ * is suitable for shrinking. This function can and should be used
+ * before attempting to shrink an object.
+ *
+ * Return: true if suitable. false if not.
+ */
+bool ttm_bo_shrink_suitable(struct ttm_buffer_object *bo, struct ttm_operation_ctx *ctx)
+{
+	return bo->ttm && ttm_tt_is_populated(bo->ttm) && !bo->pin_count &&
+		(!ctx->no_wait_gpu ||
+		 dma_resv_test_signaled(bo->base.resv, DMA_RESV_USAGE_BOOKKEEP));
+}
+EXPORT_SYMBOL(ttm_bo_shrink_suitable);
+
+/**
+ * ttm_bo_shrink_avoid_wait() - Whether to avoid waiting for GPU
+ * during shrinking
+ *
+ * In some situations, like direct reclaim, waiting (in particular gpu waiting)
+ * should be avoided since it may stall a system that could otherwise make
+ * progress by shrinking something else that is less time-consuming.
+ *
+ * Return: true if gpu waiting should be avoided, false if not.
+ */
+bool ttm_bo_shrink_avoid_wait(void)
+{
+	return !current_is_kswapd();
+}
+EXPORT_SYMBOL(ttm_bo_shrink_avoid_wait);
diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c
index dd4eabe4ad79..85057380480b 100644
--- a/drivers/gpu/drm/ttm/ttm_tt.c
+++ b/drivers/gpu/drm/ttm/ttm_tt.c
@@ -514,3 +514,32 @@ unsigned long ttm_tt_pages_limit(void)
 	return ttm_pages_limit;
 }
 EXPORT_SYMBOL(ttm_tt_pages_limit);
+
+/**
+ * ttm_tt_setup_backup() - Allocate and assign a backup structure for a ttm_tt
+ * @tt: The ttm_tt for which to allocate and assign a backup structure.
+ *
+ * Assign a backup structure to be used for tt backup. This should
+ * typically be done at bo creation, to avoid allocations at shrinking
+ * time.
+ *
+ * Return: 0 on success, negative error code on failure.
+ */
+int ttm_tt_setup_backup(struct ttm_tt *tt)
+{
+	struct ttm_backup *backup =
+		ttm_backup_shmem_create(((loff_t)tt->num_pages) << PAGE_SHIFT);
+
+	if (WARN_ON_ONCE(!(tt->page_flags & TTM_TT_FLAG_EXTERNAL_MAPPABLE)))
+		return -EINVAL;
+
+	if (IS_ERR(backup))
+		return PTR_ERR(backup);
+
+	if (tt->backup)
+		ttm_backup_fini(tt->backup);
+
+	tt->backup = backup;
+	return 0;
+}
+EXPORT_SYMBOL(ttm_tt_setup_backup);
diff --git a/include/drm/ttm/ttm_bo.h b/include/drm/ttm/ttm_bo.h
index 17d5ee049a8e..1abf2d8eb72c 100644
--- a/include/drm/ttm/ttm_bo.h
+++ b/include/drm/ttm/ttm_bo.h
@@ -225,6 +225,27 @@ struct ttm_lru_walk {
 s64 ttm_lru_walk_for_evict(struct ttm_lru_walk *walk, struct ttm_device *bdev,
 			   struct ttm_resource_manager *man, s64 target);
 
+/**
+ * struct ttm_bo_shrink_flags - flags to govern the bo shrinking behaviour
+ * @purge: Purge the content rather than backing it up.
+ * @writeback: Attempt to immediately write content to swap space.
+ * @allow_move: Allow moving to system before shrinking. This is typically
+ * not desired for zombie or ghost objects (a zombie object here meaning
+ * an object with a zero gem object refcount).
+ */
+struct ttm_bo_shrink_flags {
+	u32 purge : 1;
+	u32 writeback : 1;
+	u32 allow_move : 1;
+};
+
+long ttm_bo_shrink(struct ttm_operation_ctx *ctx, struct ttm_buffer_object *bo,
+		   const struct ttm_bo_shrink_flags flags);
+
+bool ttm_bo_shrink_suitable(struct ttm_buffer_object *bo, struct ttm_operation_ctx *ctx);
+
+bool ttm_bo_shrink_avoid_wait(void);
+
 /**
  * ttm_bo_get - reference a struct ttm_buffer_object
  *
diff --git a/include/drm/ttm/ttm_tt.h b/include/drm/ttm/ttm_tt.h
index 6ca2fc7b2a26..01752806cfbd 100644
--- a/include/drm/ttm/ttm_tt.h
+++ b/include/drm/ttm/ttm_tt.h
@@ -265,6 +265,8 @@ struct ttm_backup_flags {
 long ttm_tt_backup(struct ttm_device *bdev, struct ttm_tt *tt,
 		   const struct ttm_backup_flags flags);
 
+int ttm_tt_setup_backup(struct ttm_tt *tt);
+
 #if IS_ENABLED(CONFIG_AGP)
 #include <linux/agp_backend.h>
 
-- 
2.46.2


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH v14 7/8] drm/xe: Add a shrinker for xe bos
  2024-11-15 15:01 [PATCH v14 0/8] TTM shrinker helpers and xe buffer object shrinker Thomas Hellström
                   ` (5 preceding siblings ...)
  2024-11-15 15:01 ` [PATCH v14 6/8] drm/ttm: Add helpers for shrinking Thomas Hellström
@ 2024-11-15 15:01 ` Thomas Hellström
  2024-11-15 15:01 ` [PATCH v14 8/8] drm/xe: Increase the XE_PL_TT watermark Thomas Hellström
                   ` (18 subsequent siblings)
  25 siblings, 0 replies; 54+ messages in thread
From: Thomas Hellström @ 2024-11-15 15:01 UTC (permalink / raw)
  To: intel-xe
  Cc: Thomas Hellström, Christian König,
	Somalapuram Amaranath, Matthew Brost, dri-devel, Paulo Zanoni,
	Simona Vetter

Rather than relying on the TTM watermark accounting, add a shrinker
for xe_bos in TT or system memory.

Leverage the newly added TTM per-page shrinking and shmem backup
support.

Although xe doesn't fully support WONTNEED (purgeable) bos yet,
introduce purgeable ttm_tts and add shrinker support for them.

v2:
- Cleanups bugfixes and a KUNIT shrinker test.
- Add writeback support, and activate if kswapd.
v3:
- Move the try_shrink() helper to core TTM.
- Minor cleanups.
v4:
- Add runtime pm for the shrinker. Shrinking may require an active
  device for CCS metadata copying.
v5:
- Separately purge ghost- and zombie objects in the shrinker.
- Fix a format specifier - type inconsistency. (Kernel test robot).
v7:
- s/long/s64/ (Christian König)
- s/sofar/progress/ (Matt Brost)
v8:
- Rebase on Xe KUNIT update.
- Add content verifying to the shrinker kunit test.
- Split out TTM changes to a separate patch.
- Get rid of multiple bool arguments for clarity (Matt Brost)
- Avoid an error pointer dereference (Matt Brost)
- Avoid an integer overflow (Matt Auld)
- Address misc review comments by Matt Brost.
v9:
- Fix a compilation error.
- Rebase.
v10:
- Update to new LRU walk interface.
- Rework ghost-, zombie and purged object shrinking.
- Rebase.
v11:
- Use additional TTM helpers.
- Honor __GFP_FS and __GFP_IO
- Rebase.
v13:
- Use ttm_tt_setup_backup().
v14:
- Don't set up backup on imported bos.

Cc: Christian König <christian.koenig@amd.com>
Cc: Somalapuram Amaranath <Amaranath.Somalapuram@amd.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: <dri-devel@lists.freedesktop.org>
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/xe/Makefile          |   1 +
 drivers/gpu/drm/xe/tests/xe_bo.c     |   6 +-
 drivers/gpu/drm/xe/xe_bo.c           | 195 ++++++++++++++++++--
 drivers/gpu/drm/xe/xe_bo.h           |  36 ++++
 drivers/gpu/drm/xe/xe_device.c       |   8 +
 drivers/gpu/drm/xe/xe_device_types.h |   2 +
 drivers/gpu/drm/xe/xe_shrinker.c     | 258 +++++++++++++++++++++++++++
 drivers/gpu/drm/xe/xe_shrinker.h     |  18 ++
 8 files changed, 507 insertions(+), 17 deletions(-)
 create mode 100644 drivers/gpu/drm/xe/xe_shrinker.c
 create mode 100644 drivers/gpu/drm/xe/xe_shrinker.h

diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
index a93e6fcc0ad9..275f87389fff 100644
--- a/drivers/gpu/drm/xe/Makefile
+++ b/drivers/gpu/drm/xe/Makefile
@@ -94,6 +94,7 @@ xe-y += xe_bb.o \
 	xe_ring_ops.o \
 	xe_sa.o \
 	xe_sched_job.o \
+	xe_shrinker.o \
 	xe_step.o \
 	xe_sync.o \
 	xe_tile.o \
diff --git a/drivers/gpu/drm/xe/tests/xe_bo.c b/drivers/gpu/drm/xe/tests/xe_bo.c
index cd811aa2b227..606559b7353f 100644
--- a/drivers/gpu/drm/xe/tests/xe_bo.c
+++ b/drivers/gpu/drm/xe/tests/xe_bo.c
@@ -508,8 +508,13 @@ static int shrink_test_run_device(struct xe_device *xe)
 		 * other way around, they may not be subject to swapping...
 		 */
 		if (alloced < purgeable) {
+			xe_ttm_tt_account_subtract(&xe_tt->ttm);
 			xe_tt->purgeable = true;
+			xe_ttm_tt_account_add(&xe_tt->ttm);
 			bo->ttm.priority = 0;
+			spin_lock(&bo->ttm.bdev->lru_lock);
+			ttm_bo_move_to_lru_tail(&bo->ttm);
+			spin_unlock(&bo->ttm.bdev->lru_lock);
 		} else {
 			int ret = shrink_test_fill_random(bo, &prng, link);
 
@@ -564,7 +569,6 @@ static int shrink_test_run_device(struct xe_device *xe)
 				if (ret == -EINTR)
 					intr = true;
 			} while (ret == -EINTR && !signal_pending(current));
-
 			if (!ret && !purgeable)
 				failed = shrink_test_verify(test, bo, count, &prng, link);
 
diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
index 549866da5cd1..f02404337f04 100644
--- a/drivers/gpu/drm/xe/xe_bo.c
+++ b/drivers/gpu/drm/xe/xe_bo.c
@@ -10,6 +10,7 @@
 #include <drm/drm_drv.h>
 #include <drm/drm_gem_ttm_helper.h>
 #include <drm/drm_managed.h>
+#include <drm/ttm/ttm_backup.h>
 #include <drm/ttm/ttm_device.h>
 #include <drm/ttm/ttm_placement.h>
 #include <drm/ttm/ttm_tt.h>
@@ -25,6 +26,7 @@
 #include "xe_pm.h"
 #include "xe_preempt_fence.h"
 #include "xe_res_cursor.h"
+#include "xe_shrinker.h"
 #include "xe_trace_bo.h"
 #include "xe_ttm_stolen_mgr.h"
 #include "xe_vm.h"
@@ -278,9 +280,11 @@ static void xe_evict_flags(struct ttm_buffer_object *tbo,
 	}
 }
 
+/* struct xe_ttm_tt - Subclassed ttm_tt for xe */
 struct xe_ttm_tt {
 	struct ttm_tt ttm;
-	struct device *dev;
+	/** @xe: The xe device */
+	struct xe_device *xe;
 	struct sg_table sgt;
 	struct sg_table *sg;
 	/** @purgeable: Whether the content of the pages of @ttm is purgeable. */
@@ -293,7 +297,8 @@ static int xe_tt_map_sg(struct ttm_tt *tt)
 	unsigned long num_pages = tt->num_pages;
 	int ret;
 
-	XE_WARN_ON(tt->page_flags & TTM_TT_FLAG_EXTERNAL);
+	XE_WARN_ON((tt->page_flags & TTM_TT_FLAG_EXTERNAL) &&
+		   !(tt->page_flags & TTM_TT_FLAG_EXTERNAL_MAPPABLE));
 
 	if (xe_tt->sg)
 		return 0;
@@ -301,13 +306,13 @@ static int xe_tt_map_sg(struct ttm_tt *tt)
 	ret = sg_alloc_table_from_pages_segment(&xe_tt->sgt, tt->pages,
 						num_pages, 0,
 						(u64)num_pages << PAGE_SHIFT,
-						xe_sg_segment_size(xe_tt->dev),
+						xe_sg_segment_size(xe_tt->xe->drm.dev),
 						GFP_KERNEL);
 	if (ret)
 		return ret;
 
 	xe_tt->sg = &xe_tt->sgt;
-	ret = dma_map_sgtable(xe_tt->dev, xe_tt->sg, DMA_BIDIRECTIONAL,
+	ret = dma_map_sgtable(xe_tt->xe->drm.dev, xe_tt->sg, DMA_BIDIRECTIONAL,
 			      DMA_ATTR_SKIP_CPU_SYNC);
 	if (ret) {
 		sg_free_table(xe_tt->sg);
@@ -323,7 +328,7 @@ static void xe_tt_unmap_sg(struct ttm_tt *tt)
 	struct xe_ttm_tt *xe_tt = container_of(tt, struct xe_ttm_tt, ttm);
 
 	if (xe_tt->sg) {
-		dma_unmap_sgtable(xe_tt->dev, xe_tt->sg,
+		dma_unmap_sgtable(xe_tt->xe->drm.dev, xe_tt->sg,
 				  DMA_BIDIRECTIONAL, 0);
 		sg_free_table(xe_tt->sg);
 		xe_tt->sg = NULL;
@@ -338,21 +343,47 @@ struct sg_table *xe_bo_sg(struct xe_bo *bo)
 	return xe_tt->sg;
 }
 
+/*
+ * Account ttm pages against the device shrinker's shrinkable and
+ * purgeable counts.
+ */
+static void xe_ttm_tt_account_add(struct ttm_tt *tt)
+{
+	struct xe_ttm_tt *xe_tt = container_of(tt, struct xe_ttm_tt, ttm);
+
+	if (xe_tt->purgeable)
+		xe_shrinker_mod_pages(xe_tt->xe->mem.shrinker, 0, tt->num_pages);
+	else
+		xe_shrinker_mod_pages(xe_tt->xe->mem.shrinker, tt->num_pages, 0);
+}
+
+static void xe_ttm_tt_account_subtract(struct ttm_tt *tt)
+{
+	struct xe_ttm_tt *xe_tt = container_of(tt, struct xe_ttm_tt, ttm);
+
+	if (xe_tt->purgeable)
+		xe_shrinker_mod_pages(xe_tt->xe->mem.shrinker, 0, -(long)tt->num_pages);
+	else
+		xe_shrinker_mod_pages(xe_tt->xe->mem.shrinker, -(long)tt->num_pages, 0);
+}
+
 static struct ttm_tt *xe_ttm_tt_create(struct ttm_buffer_object *ttm_bo,
 				       u32 page_flags)
 {
 	struct xe_bo *bo = ttm_to_xe_bo(ttm_bo);
 	struct xe_device *xe = xe_bo_device(bo);
-	struct xe_ttm_tt *tt;
+	struct xe_ttm_tt *xe_tt;
+	struct ttm_tt *tt;
 	unsigned long extra_pages;
 	enum ttm_caching caching = ttm_cached;
 	int err;
 
-	tt = kzalloc(sizeof(*tt), GFP_KERNEL);
-	if (!tt)
+	xe_tt = kzalloc(sizeof(*xe_tt), GFP_KERNEL);
+	if (!xe_tt)
 		return NULL;
 
-	tt->dev = xe->drm.dev;
+	tt = &xe_tt->ttm;
+	xe_tt->xe = xe;
 
 	extra_pages = 0;
 	if (xe_bo_needs_ccs_pages(bo))
@@ -398,42 +429,61 @@ static struct ttm_tt *xe_ttm_tt_create(struct ttm_buffer_object *ttm_bo,
 		caching = ttm_uncached;
 	}
 
-	err = ttm_tt_init(&tt->ttm, &bo->ttm, page_flags, caching, extra_pages);
+	if (ttm_bo->type != ttm_bo_type_sg)
+		page_flags |= TTM_TT_FLAG_EXTERNAL | TTM_TT_FLAG_EXTERNAL_MAPPABLE;
+
+	err = ttm_tt_init(tt, &bo->ttm, page_flags, caching, extra_pages);
 	if (err) {
-		kfree(tt);
+		kfree(xe_tt);
 		return NULL;
 	}
 
-	return &tt->ttm;
+	if (ttm_bo->type != ttm_bo_type_sg) {
+		err = ttm_tt_setup_backup(tt);
+		if (err) {
+			ttm_tt_fini(tt);
+			kfree(xe_tt);
+			return NULL;
+		}
+	}
+
+	return tt;
 }
 
 static int xe_ttm_tt_populate(struct ttm_device *ttm_dev, struct ttm_tt *tt,
 			      struct ttm_operation_ctx *ctx)
 {
+	struct xe_ttm_tt *xe_tt = container_of(tt, struct xe_ttm_tt, ttm);
 	int err;
 
 	/*
 	 * dma-bufs are not populated with pages, and the dma-
 	 * addresses are set up when moved to XE_PL_TT.
 	 */
-	if (tt->page_flags & TTM_TT_FLAG_EXTERNAL)
+	if ((tt->page_flags & TTM_TT_FLAG_EXTERNAL) &&
+	    !(tt->page_flags & TTM_TT_FLAG_EXTERNAL_MAPPABLE))
 		return 0;
 
 	err = ttm_pool_alloc(&ttm_dev->pool, tt, ctx);
 	if (err)
 		return err;
 
-	return err;
+	xe_tt->purgeable = false;
+	xe_ttm_tt_account_add(tt);
+
+	return 0;
 }
 
 static void xe_ttm_tt_unpopulate(struct ttm_device *ttm_dev, struct ttm_tt *tt)
 {
-	if (tt->page_flags & TTM_TT_FLAG_EXTERNAL)
+	if ((tt->page_flags & TTM_TT_FLAG_EXTERNAL) &&
+	    !(tt->page_flags & TTM_TT_FLAG_EXTERNAL_MAPPABLE))
 		return;
 
 	xe_tt_unmap_sg(tt);
 
-	return ttm_pool_free(&ttm_dev->pool, tt);
+	ttm_pool_free(&ttm_dev->pool, tt);
+	xe_ttm_tt_account_subtract(tt);
 }
 
 static void xe_ttm_tt_destroy(struct ttm_device *ttm_dev, struct ttm_tt *tt)
@@ -854,6 +904,111 @@ static int xe_bo_move(struct ttm_buffer_object *ttm_bo, bool evict,
 	return ret;
 }
 
+static long xe_bo_shrink_purge(struct ttm_operation_ctx *ctx,
+			       struct ttm_buffer_object *bo,
+			       unsigned long *scanned)
+{
+	long lret;
+
+	/* Fake move to system, without copying data. */
+	if (bo->resource->mem_type != XE_PL_SYSTEM) {
+		struct ttm_resource *new_resource;
+
+		lret = ttm_bo_wait_ctx(bo, ctx);
+		if (lret)
+			return lret;
+
+		lret = ttm_bo_mem_space(bo, &sys_placement, &new_resource, ctx);
+		if (lret)
+			return lret;
+
+		xe_tt_unmap_sg(bo->ttm);
+		ttm_bo_move_null(bo, new_resource);
+	}
+
+	*scanned += bo->ttm->num_pages;
+	lret = ttm_bo_shrink(ctx, bo, (struct ttm_bo_shrink_flags)
+			     {.purge = true,
+			      .writeback = false,
+			      .allow_move = false});
+
+	if (lret > 0)
+		xe_ttm_tt_account_subtract(bo->ttm);
+
+	return lret;
+}
+
+/**
+ * xe_bo_shrink() - Try to shrink an xe bo.
+ * @ctx: The struct ttm_operation_ctx used for shrinking.
+ * @bo: The TTM buffer object whose pages to shrink.
+ * @flags: Flags governing the shrink behaviour.
+ * @scanned: Pointer to a counter of the number of pages
+ * attempted to shrink.
+ *
+ * Try to shrink or purge a bo, and if it succeeds, unmap dma.
+ * Note that we also need to be able to handle non-xe bos
+ * (ghost bos), but only if the struct ttm_tt is embedded in
+ * a struct xe_ttm_tt. When the function attempts to shrink
+ * the pages of a buffer object, the value pointed to by @scanned
+ * is updated.
+ *
+ * Return: The number of pages shrunken or purged, or negative error
+ * code on failure.
+ */
+long xe_bo_shrink(struct ttm_operation_ctx *ctx, struct ttm_buffer_object *bo,
+		  const struct xe_bo_shrink_flags flags,
+		  unsigned long *scanned)
+{
+	struct ttm_tt *tt = bo->ttm;
+	struct xe_ttm_tt *xe_tt = container_of(tt, struct xe_ttm_tt, ttm);
+	struct ttm_place place = {.mem_type = bo->resource->mem_type};
+	struct xe_bo *xe_bo = ttm_to_xe_bo(bo);
+	struct xe_device *xe = xe_tt->xe;
+	bool needs_rpm;
+	long lret = 0L;
+
+	if (!(tt->page_flags & TTM_TT_FLAG_EXTERNAL_MAPPABLE) ||
+	    (flags.purge && !xe_tt->purgeable))
+		return -EBUSY;
+
+	if (!ttm_bo_eviction_valuable(bo, &place))
+		return -EBUSY;
+
+	if (!xe_bo_is_xe_bo(bo) || !xe_bo_get_unless_zero(xe_bo))
+		return xe_bo_shrink_purge(ctx, bo, scanned);
+
+	if (xe_tt->purgeable) {
+		if (bo->resource->mem_type != XE_PL_SYSTEM)
+			lret = xe_bo_move_notify(xe_bo, ctx);
+		if (!lret)
+			lret = xe_bo_shrink_purge(ctx, bo, scanned);
+		goto out_unref;
+	}
+
+	/* System CCS needs gpu copy when moving PL_TT -> PL_SYSTEM */
+	needs_rpm = (!IS_DGFX(xe) && bo->resource->mem_type != XE_PL_SYSTEM &&
+		     xe_bo_needs_ccs_pages(xe_bo));
+	if (needs_rpm && !xe_pm_runtime_get_if_active(xe))
+		goto out_unref;
+
+	*scanned += tt->num_pages;
+	lret = ttm_bo_shrink(ctx, bo, (struct ttm_bo_shrink_flags)
+			     {.purge = false,
+			      .writeback = flags.writeback,
+			      .allow_move = true});
+	if (needs_rpm)
+		xe_pm_runtime_put(xe);
+
+	if (lret > 0)
+		xe_ttm_tt_account_subtract(tt);
+
+out_unref:
+	xe_bo_put(xe_bo);
+
+	return lret;
+}
+
 /**
  * xe_bo_evict_pinned() - Evict a pinned VRAM object to system memory
  * @bo: The buffer object to move.
@@ -1765,6 +1920,8 @@ int xe_bo_pin_external(struct xe_bo *bo)
 	}
 
 	ttm_bo_pin(&bo->ttm);
+	if (bo->ttm.ttm && ttm_tt_is_populated(bo->ttm.ttm))
+		xe_ttm_tt_account_subtract(bo->ttm.ttm);
 
 	/*
 	 * FIXME: If we always use the reserve / unreserve functions for locking
@@ -1824,6 +1981,8 @@ int xe_bo_pin(struct xe_bo *bo)
 	}
 
 	ttm_bo_pin(&bo->ttm);
+	if (bo->ttm.ttm && ttm_tt_is_populated(bo->ttm.ttm))
+		xe_ttm_tt_account_subtract(bo->ttm.ttm);
 
 	/*
 	 * FIXME: If we always use the reserve / unreserve functions for locking
@@ -1858,6 +2017,8 @@ void xe_bo_unpin_external(struct xe_bo *bo)
 	spin_unlock(&xe->pinned.lock);
 
 	ttm_bo_unpin(&bo->ttm);
+	if (bo->ttm.ttm && ttm_tt_is_populated(bo->ttm.ttm))
+		xe_ttm_tt_account_add(bo->ttm.ttm);
 
 	/*
 	 * FIXME: If we always use the reserve / unreserve functions for locking
@@ -1881,6 +2042,8 @@ void xe_bo_unpin(struct xe_bo *bo)
 		spin_unlock(&xe->pinned.lock);
 	}
 	ttm_bo_unpin(&bo->ttm);
+	if (bo->ttm.ttm && ttm_tt_is_populated(bo->ttm.ttm))
+		xe_ttm_tt_account_add(bo->ttm.ttm);
 }
 
 /**
diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h
index 7fa44a0138b0..33f546bfb4e3 100644
--- a/drivers/gpu/drm/xe/xe_bo.h
+++ b/drivers/gpu/drm/xe/xe_bo.h
@@ -134,6 +134,28 @@ static inline struct xe_bo *xe_bo_get(struct xe_bo *bo)
 
 void xe_bo_put(struct xe_bo *bo);
 
+/*
+ * xe_bo_get_unless_zero() - Conditionally obtain a GEM object refcount on an
+ * xe bo
+ * @bo: The bo for which we want to obtain a refcount.
+ *
+ * There is a short window between where the bo's GEM object refcount reaches
+ * zero and where we put the final ttm_bo reference. Code in the eviction and
+ * shrinking paths should therefore attempt to grab a gem object reference before
+ * trying to use members outside of the base class ttm object. This function is
+ * intended for that purpose. On successful return, this function must be paired
+ * with an xe_bo_put().
+ *
+ * Return: @bo on success, NULL on failure.
+ */
+static inline __must_check struct xe_bo *xe_bo_get_unless_zero(struct xe_bo *bo)
+{
+	if (!bo || !kref_get_unless_zero(&bo->ttm.base.refcount))
+		return NULL;
+
+	return bo;
+}
+
 static inline void __xe_bo_unset_bulk_move(struct xe_bo *bo)
 {
 	if (bo)
@@ -318,6 +340,20 @@ static inline unsigned int xe_sg_segment_size(struct device *dev)
 	return round_down(max / 2, PAGE_SIZE);
 }
 
+/**
+ * struct xe_bo_shrink_flags - flags governing the shrink behaviour.
+ * @purge: Only purging allowed. Don't shrink if bo not purgeable.
+ * @writeback: Attempt to immediately move content to swap.
+ */
+struct xe_bo_shrink_flags {
+	u32 purge : 1;
+	u32 writeback : 1;
+};
+
+long xe_bo_shrink(struct ttm_operation_ctx *ctx, struct ttm_buffer_object *bo,
+		  const struct xe_bo_shrink_flags flags,
+		  unsigned long *scanned);
+
 #if IS_ENABLED(CONFIG_DRM_XE_KUNIT_TEST)
 /**
  * xe_bo_is_mem_type - Whether the bo currently resides in the given
diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
index 0e2dd691bdae..824af8c39032 100644
--- a/drivers/gpu/drm/xe/xe_device.c
+++ b/drivers/gpu/drm/xe/xe_device.c
@@ -49,6 +49,7 @@
 #include "xe_pcode.h"
 #include "xe_pm.h"
 #include "xe_query.h"
+#include "xe_shrinker.h"
 #include "xe_sriov.h"
 #include "xe_tile.h"
 #include "xe_ttm_stolen_mgr.h"
@@ -288,6 +289,9 @@ static void xe_device_destroy(struct drm_device *dev, void *dummy)
 	if (xe->unordered_wq)
 		destroy_workqueue(xe->unordered_wq);
 
+	if (!IS_ERR_OR_NULL(xe->mem.shrinker))
+		xe_shrinker_destroy(xe->mem.shrinker);
+
 	if (xe->destroy_wq)
 		destroy_workqueue(xe->destroy_wq);
 
@@ -320,6 +324,10 @@ struct xe_device *xe_device_create(struct pci_dev *pdev,
 	if (err)
 		goto err;
 
+	xe->mem.shrinker = xe_shrinker_create(xe);
+	if (IS_ERR(xe->mem.shrinker))
+		return ERR_CAST(xe->mem.shrinker);
+
 	xe->info.devid = pdev->device;
 	xe->info.revid = pdev->revision;
 	xe->info.force_execlist = xe_modparam.force_execlist;
diff --git a/drivers/gpu/drm/xe/xe_device_types.h b/drivers/gpu/drm/xe/xe_device_types.h
index fffbb7d1c40b..2965391dc2af 100644
--- a/drivers/gpu/drm/xe/xe_device_types.h
+++ b/drivers/gpu/drm/xe/xe_device_types.h
@@ -365,6 +365,8 @@ struct xe_device {
 		struct xe_mem_region vram;
 		/** @mem.sys_mgr: system TTM manager */
 		struct ttm_resource_manager sys_mgr;
+		/** @mem.shrinker: system memory shrinker. */
+		struct xe_shrinker *shrinker;
 	} mem;
 
 	/** @sriov: device level virtualization data */
diff --git a/drivers/gpu/drm/xe/xe_shrinker.c b/drivers/gpu/drm/xe/xe_shrinker.c
new file mode 100644
index 000000000000..8184390f9c7b
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_shrinker.c
@@ -0,0 +1,258 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2024 Intel Corporation
+ */
+
+#include <linux/shrinker.h>
+
+#include <drm/ttm/ttm_backup.h>
+#include <drm/ttm/ttm_bo.h>
+#include <drm/ttm/ttm_tt.h>
+
+#include "xe_bo.h"
+#include "xe_pm.h"
+#include "xe_shrinker.h"
+
+/**
+ * struct xe_shrinker - per-device shrinker
+ * @xe: Back pointer to the device.
+ * @lock: Lock protecting accounting.
+ * @shrinkable_pages: Number of pages that are currently shrinkable.
+ * @purgeable_pages: Number of pages that are currently purgeable.
+ * @shrink: Pointer to the mm shrinker.
+ * @pm_worker: Worker to wake up the device if required.
+ */
+struct xe_shrinker {
+	struct xe_device *xe;
+	rwlock_t lock;
+	long shrinkable_pages;
+	long purgeable_pages;
+	struct shrinker *shrink;
+	struct work_struct pm_worker;
+};
+
+static struct xe_shrinker *to_xe_shrinker(struct shrinker *shrink)
+{
+	return shrink->private_data;
+}
+
+/**
+ * xe_shrinker_mod_pages() - Modify shrinker page accounting
+ * @shrinker: Pointer to the struct xe_shrinker.
+ * @shrinkable: Shrinkable pages delta. May be negative.
+ * @purgeable: Purgeable page delta. May be negative.
+ *
+ * Modifies the shrinkable and purgeable pages accounting.
+ */
+void
+xe_shrinker_mod_pages(struct xe_shrinker *shrinker, long shrinkable, long purgeable)
+{
+	write_lock(&shrinker->lock);
+	shrinker->shrinkable_pages += shrinkable;
+	shrinker->purgeable_pages += purgeable;
+	write_unlock(&shrinker->lock);
+}
+
+static s64 xe_shrinker_walk(struct xe_device *xe,
+			    struct ttm_operation_ctx *ctx,
+			    const struct xe_bo_shrink_flags flags,
+			    unsigned long to_scan, unsigned long *scanned)
+{
+	unsigned int mem_type;
+	s64 freed = 0, lret;
+
+	for (mem_type = XE_PL_SYSTEM; mem_type <= XE_PL_TT; ++mem_type) {
+		struct ttm_resource_manager *man = ttm_manager_type(&xe->ttm, mem_type);
+		struct ttm_bo_lru_cursor curs;
+		struct ttm_buffer_object *ttm_bo;
+
+		if (!man || !man->use_tt)
+			continue;
+
+		ttm_bo_lru_for_each_reserved_guarded(&curs, man, ctx, ttm_bo) {
+			if (!ttm_bo_shrink_suitable(ttm_bo, ctx))
+				continue;
+
+			lret = xe_bo_shrink(ctx, ttm_bo, flags, scanned);
+			if (lret < 0)
+				return lret;
+
+			freed += lret;
+			if (*scanned >= to_scan)
+				break;
+		}
+	}
+
+	return freed;
+}
+
+static unsigned long
+xe_shrinker_count(struct shrinker *shrink, struct shrink_control *sc)
+{
+	struct xe_shrinker *shrinker = to_xe_shrinker(shrink);
+	unsigned long num_pages;
+	bool can_backup = !!(sc->gfp_mask & __GFP_FS);
+
+	num_pages = ttm_backup_bytes_avail() >> PAGE_SHIFT;
+	read_lock(&shrinker->lock);
+
+	if (can_backup)
+		num_pages = min_t(unsigned long, num_pages, shrinker->shrinkable_pages);
+	else
+		num_pages = 0;
+
+	num_pages += shrinker->purgeable_pages;
+	read_unlock(&shrinker->lock);
+
+	return num_pages ? num_pages : SHRINK_EMPTY;
+}
+
+/*
+ * Check if we need runtime pm, and if so try to grab a reference if
+ * already active. If grabbing a reference fails, queue a worker that
+ * does it for us outside of reclaim, but don't wait for it to complete.
+ * If bo shrinking needs an rpm reference and we don't have it (yet),
+ * that bo will be skipped anyway.
+ */
+static bool xe_shrinker_runtime_pm_get(struct xe_shrinker *shrinker, bool force,
+				       unsigned long nr_to_scan, bool can_backup)
+{
+	struct xe_device *xe = shrinker->xe;
+
+	if (IS_DGFX(xe) || !xe_device_has_flat_ccs(xe) ||
+	    !ttm_backup_bytes_avail())
+		return false;
+
+	if (!force) {
+		read_lock(&shrinker->lock);
+		force = (nr_to_scan > shrinker->purgeable_pages && can_backup);
+		read_unlock(&shrinker->lock);
+		if (!force)
+			return false;
+	}
+
+	if (!xe_pm_runtime_get_if_active(xe)) {
+		if (xe_rpm_reclaim_safe(xe) && !ttm_bo_shrink_avoid_wait()) {
+			xe_pm_runtime_get(xe);
+			return true;
+		}
+		queue_work(xe->unordered_wq, &shrinker->pm_worker);
+		return false;
+	}
+
+	return true;
+}
+
+static void xe_shrinker_runtime_pm_put(struct xe_shrinker *shrinker, bool runtime_pm)
+{
+	if (runtime_pm)
+		xe_pm_runtime_put(shrinker->xe);
+}
+
+static unsigned long xe_shrinker_scan(struct shrinker *shrink, struct shrink_control *sc)
+{
+	struct xe_shrinker *shrinker = to_xe_shrinker(shrink);
+	struct ttm_operation_ctx ctx = {
+		.interruptible = false,
+		.no_wait_gpu = ttm_bo_shrink_avoid_wait(),
+	};
+	unsigned long nr_to_scan, nr_scanned = 0, freed = 0;
+	struct xe_bo_shrink_flags shrink_flags = {
+		.purge = true,
+		/* Don't request writeback without __GFP_IO. */
+		.writeback = !ctx.no_wait_gpu && (sc->gfp_mask & __GFP_IO),
+	};
+	bool runtime_pm;
+	bool purgeable;
+	bool can_backup = !!(sc->gfp_mask & __GFP_FS);
+	s64 lret;
+
+	nr_to_scan = sc->nr_to_scan;
+
+	read_lock(&shrinker->lock);
+	purgeable = !!shrinker->purgeable_pages;
+	read_unlock(&shrinker->lock);
+
+	/* Might need runtime PM. Try to wake early if it looks like it. */
+	runtime_pm = xe_shrinker_runtime_pm_get(shrinker, false, nr_to_scan, can_backup);
+
+	if (purgeable && nr_scanned < nr_to_scan) {
+		lret = xe_shrinker_walk(shrinker->xe, &ctx, shrink_flags,
+					nr_to_scan, &nr_scanned);
+		if (lret >= 0)
+			freed += lret;
+	}
+
+	sc->nr_scanned = nr_scanned;
+	if (nr_scanned >= nr_to_scan || !can_backup)
+		goto out;
+
+	/* If we didn't wake before, try to do it now if needed. */
+	if (!runtime_pm)
+		runtime_pm = xe_shrinker_runtime_pm_get(shrinker, true, 0, can_backup);
+
+	shrink_flags.purge = false;
+	lret = xe_shrinker_walk(shrinker->xe, &ctx, shrink_flags,
+				nr_to_scan, &nr_scanned);
+	if (lret >= 0)
+		freed += lret;
+
+	sc->nr_scanned = nr_scanned;
+out:
+	xe_shrinker_runtime_pm_put(shrinker, runtime_pm);
+	return nr_scanned ? freed : SHRINK_STOP;
+}
+
+/* Wake up the device for shrinking. */
+static void xe_shrinker_pm(struct work_struct *work)
+{
+	struct xe_shrinker *shrinker =
+		container_of(work, typeof(*shrinker), pm_worker);
+
+	xe_pm_runtime_get(shrinker->xe);
+	xe_pm_runtime_put(shrinker->xe);
+}
+
+/**
+ * xe_shrinker_create() - Create an xe per-device shrinker
+ * @xe: Pointer to the xe device.
+ *
+ * Return: A pointer to the created shrinker on success,
+ * or a negative error code on failure.
+ */
+struct xe_shrinker *xe_shrinker_create(struct xe_device *xe)
+{
+	struct xe_shrinker *shrinker = kzalloc(sizeof(*shrinker), GFP_KERNEL);
+
+	if (!shrinker)
+		return ERR_PTR(-ENOMEM);
+
+	shrinker->shrink = shrinker_alloc(0, "xe system shrinker");
+	if (!shrinker->shrink) {
+		kfree(shrinker);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	INIT_WORK(&shrinker->pm_worker, xe_shrinker_pm);
+	shrinker->xe = xe;
+	rwlock_init(&shrinker->lock);
+	shrinker->shrink->count_objects = xe_shrinker_count;
+	shrinker->shrink->scan_objects = xe_shrinker_scan;
+	shrinker->shrink->private_data = shrinker;
+	shrinker_register(shrinker->shrink);
+
+	return shrinker;
+}
+
+/**
+ * xe_shrinker_destroy() - Destroy an xe per-device shrinker
+ * @shrinker: Pointer to the shrinker to destroy.
+ */
+void xe_shrinker_destroy(struct xe_shrinker *shrinker)
+{
+	xe_assert(shrinker->xe, !shrinker->shrinkable_pages);
+	xe_assert(shrinker->xe, !shrinker->purgeable_pages);
+	shrinker_free(shrinker->shrink);
+	flush_work(&shrinker->pm_worker);
+	kfree(shrinker);
+}
diff --git a/drivers/gpu/drm/xe/xe_shrinker.h b/drivers/gpu/drm/xe/xe_shrinker.h
new file mode 100644
index 000000000000..28a038f4fcbf
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_shrinker.h
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2024 Intel Corporation
+ */
+
+#ifndef _XE_SHRINKER_H_
+#define _XE_SHRINKER_H_
+
+struct xe_shrinker;
+struct xe_device;
+
+void xe_shrinker_mod_pages(struct xe_shrinker *shrinker, long shrinkable, long purgeable);
+
+struct xe_shrinker *xe_shrinker_create(struct xe_device *xe);
+
+void xe_shrinker_destroy(struct xe_shrinker *shrinker);
+
+#endif
-- 
2.46.2


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH v14 8/8] drm/xe: Increase the XE_PL_TT watermark
  2024-11-15 15:01 [PATCH v14 0/8] TTM shrinker helpers and xe buffer object shrinker Thomas Hellström
                   ` (6 preceding siblings ...)
  2024-11-15 15:01 ` [PATCH v14 7/8] drm/xe: Add a shrinker for xe bos Thomas Hellström
@ 2024-11-15 15:01 ` Thomas Hellström
  2024-11-15 15:06 ` ✓ CI.Patch_applied: success for TTM shrinker helpers and xe buffer object shrinker (rev13) Patchwork
                   ` (17 subsequent siblings)
  25 siblings, 0 replies; 54+ messages in thread
From: Thomas Hellström @ 2024-11-15 15:01 UTC (permalink / raw)
  To: intel-xe
  Cc: Thomas Hellström, Matthew Brost, Somalapuram Amaranath,
	Christian König, Paulo Zanoni, Simona Vetter, dri-devel

The XE_PL_TT watermark was set to 50% of system memory.
The idea behind that was unclear since the net effect is that
TT memory will be evicted to TTM_PL_SYSTEM memory if that
watermark is exceeded, requiring PPGTT rebinds and dma
remapping. But there is no similar watermark for TTM_PL_SYSTEM
memory.

The TTM functionality that tries to swap out system memory to
shmem objects if a 50% limit of total system memory is reached
is orthogonal to this, and with the shrinker added, it's no
longer in effect.

Replace the 50% TTM_PL_TT limit with a 100% limit, in effect
allowing all graphics memory to be bound to the device unless it
has been swapped out by the shrinker.

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/xe/xe_ttm_sys_mgr.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_ttm_sys_mgr.c b/drivers/gpu/drm/xe/xe_ttm_sys_mgr.c
index 9844a8edbfe1..d38b91872da3 100644
--- a/drivers/gpu/drm/xe/xe_ttm_sys_mgr.c
+++ b/drivers/gpu/drm/xe/xe_ttm_sys_mgr.c
@@ -108,9 +108,8 @@ int xe_ttm_sys_mgr_init(struct xe_device *xe)
 	u64 gtt_size;
 
 	si_meminfo(&si);
+	/* Potentially restrict amount of TT memory here. */
 	gtt_size = (u64)si.totalram * si.mem_unit;
-	/* TTM limits allocation of all TTM devices by 50% of system memory */
-	gtt_size /= 2;
 
 	man->use_tt = true;
 	man->func = &xe_ttm_sys_mgr_func;
-- 
2.46.2


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* ✓ CI.Patch_applied: success for TTM shrinker helpers and xe buffer object shrinker (rev13)
  2024-11-15 15:01 [PATCH v14 0/8] TTM shrinker helpers and xe buffer object shrinker Thomas Hellström
                   ` (7 preceding siblings ...)
  2024-11-15 15:01 ` [PATCH v14 8/8] drm/xe: Increase the XE_PL_TT watermark Thomas Hellström
@ 2024-11-15 15:06 ` Patchwork
  2024-11-15 15:07 ` ✗ CI.checkpatch: warning " Patchwork
                   ` (16 subsequent siblings)
  25 siblings, 0 replies; 54+ messages in thread
From: Patchwork @ 2024-11-15 15:06 UTC (permalink / raw)
  To: Thomas Hellström; +Cc: intel-xe

== Series Details ==

Series: TTM shrinker helpers and xe buffer object shrinker (rev13)
URL   : https://patchwork.freedesktop.org/series/131815/
State : success

== Summary ==

=== Applying kernel patches on branch 'drm-tip' with base: ===
Base commit: c43ac257e8f2 drm-tip: 2024y-11m-15d-13h-20m-05s UTC integration manifest
=== git am output follows ===
Applying: drm/ttm: Balance ttm_resource_cursor_init() and ttm_resource_cursor_fini()
Applying: drm/ttm: Provide a shmem backup implementation
Applying: drm/ttm/pool: Provide a helper to shrink pages
Applying: drm/ttm: Use fault-injection to test error paths
Applying: drm/ttm: Add a macro to perform LRU iteration
Applying: drm/ttm: Add helpers for shrinking
Applying: drm/xe: Add a shrinker for xe bos
Applying: drm/xe: Increase the XE_PL_TT watermark



^ permalink raw reply	[flat|nested] 54+ messages in thread

* ✗ CI.checkpatch: warning for TTM shrinker helpers and xe buffer object shrinker (rev13)
  2024-11-15 15:01 [PATCH v14 0/8] TTM shrinker helpers and xe buffer object shrinker Thomas Hellström
                   ` (8 preceding siblings ...)
  2024-11-15 15:06 ` ✓ CI.Patch_applied: success for TTM shrinker helpers and xe buffer object shrinker (rev13) Patchwork
@ 2024-11-15 15:07 ` Patchwork
  2024-11-15 15:08 ` ✓ CI.KUnit: success " Patchwork
                   ` (15 subsequent siblings)
  25 siblings, 0 replies; 54+ messages in thread
From: Patchwork @ 2024-11-15 15:07 UTC (permalink / raw)
  To: Thomas Hellström; +Cc: intel-xe

== Series Details ==

Series: TTM shrinker helpers and xe buffer object shrinker (rev13)
URL   : https://patchwork.freedesktop.org/series/131815/
State : warning

== Summary ==

+ KERNEL=/kernel
+ git clone https://gitlab.freedesktop.org/drm/maintainer-tools mt
Cloning into 'mt'...
warning: redirecting to https://gitlab.freedesktop.org/drm/maintainer-tools.git/
+ git -C mt rev-list -n1 origin/master
30ab6715fc09baee6cc14cb3c89ad8858688d474
+ cd /kernel
+ git config --global --add safe.directory /kernel
+ git log -n1
commit bc637d5b43558c7acdd44643cc5670d0b7e1556e
Author: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Date:   Fri Nov 15 16:01:20 2024 +0100

    drm/xe: Increase the XE_PL_TT watermark
    
    The XE_PL_TT watermark was set to 50% of system memory.
    The idea behind that was unclear since the net effect is that
    TT memory will be evicted to TTM_PL_SYSTEM memory if that
    watermark is exceeded, requiring PPGTT rebinds and dma
    remapping. But there is no similar watermark for TTM_PL_SYSTEM
    memory.
    
    The TTM functionality that tries to swap out system memory to
    shmem objects if a 50% limit of total system memory is reached
    is orthogonal to this, and with the shrinker added, it's no
    longer in effect.
    
    Replace the 50% TTM_PL_TT limit with a 100% limit, in effect
    allowing all graphics memory to be bound to the device unless it
    has been swapped out by the shrinker.
    
    Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
    Reviewed-by: Matthew Brost <matthew.brost@intel.com>
+ /mt/dim checkpatch c43ac257e8f2dfe3a5f56d3565472cb8051ca32d drm-intel
d884349da0cd drm/ttm: Balance ttm_resource_cursor_init() and ttm_resource_cursor_fini()
-:155: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'cursor' - possible side-effects?
#155: FILE: include/drm/ttm/ttm_resource.h:476:
+#define ttm_resource_manager_for_each_res(cursor, res)	\
+	for (res = ttm_resource_manager_first(cursor); res;	\
 	     res = ttm_resource_manager_next(cursor))

-:155: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'res' - possible side-effects?
#155: FILE: include/drm/ttm/ttm_resource.h:476:
+#define ttm_resource_manager_for_each_res(cursor, res)	\
+	for (res = ttm_resource_manager_first(cursor); res;	\
 	     res = ttm_resource_manager_next(cursor))

total: 0 errors, 0 warnings, 2 checks, 114 lines checked
6156cf50fb4c drm/ttm: Provide a shmem backup implementation
-:52: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#52: 
new file mode 100644

total: 0 errors, 1 warnings, 0 checks, 286 lines checked
039c2f551881 drm/ttm/pool: Provide a helper to shrink pages
bbd72441a8fb drm/ttm: Use fault-injection to test error paths
4e8cf137180b drm/ttm: Add a macro to perform LRU iteration
-:11: WARNING:COMMIT_LOG_LONG_LINE: Prefer a maximum 75 chars per line (possible unwrapped commit description?)
#11: 
https://lore.kernel.org/linux-mm/b7491378-defd-4f1c-31e2-29e4c77e2d67@amd.com/T/#ma918844aa8a6efe8768fdcda0c6590d5c93850c9

-:253: WARNING:TABSTOP: Statements should start on a tabstop
#253: FILE: include/drm/ttm/ttm_bo.h:508:
+	     if (_T) {ttm_bo_lru_cursor_fini(_T); },

-:253: ERROR:TRAILING_STATEMENTS: trailing statements should be on next line
#253: FILE: include/drm/ttm/ttm_bo.h:508:
+	     if (_T) {ttm_bo_lru_cursor_fini(_T); },

-:253: WARNING:BRACES: braces {} are not necessary for single statement blocks
#253: FILE: include/drm/ttm/ttm_bo.h:508:
+	     if (_T) {ttm_bo_lru_cursor_fini(_T); },

-:279: ERROR:COMPLEX_MACRO: Macros with complex values should be enclosed in parentheses
#279: FILE: include/drm/ttm/ttm_bo.h:534:
+#define ttm_bo_lru_for_each_reserved_guarded(_cursor, _man, _ctx, _bo)	\
+	scoped_guard(ttm_bo_lru_cursor, _cursor, _man, _ctx)		\
+		for ((_bo) = ttm_bo_lru_cursor_first(_cursor); (_bo);	\
+		     (_bo) = ttm_bo_lru_cursor_next(_cursor))

-:279: CHECK:MACRO_ARG_REUSE: Macro argument reuse '_cursor' - possible side-effects?
#279: FILE: include/drm/ttm/ttm_bo.h:534:
+#define ttm_bo_lru_for_each_reserved_guarded(_cursor, _man, _ctx, _bo)	\
+	scoped_guard(ttm_bo_lru_cursor, _cursor, _man, _ctx)		\
+		for ((_bo) = ttm_bo_lru_cursor_first(_cursor); (_bo);	\
+		     (_bo) = ttm_bo_lru_cursor_next(_cursor))

-:279: CHECK:MACRO_ARG_REUSE: Macro argument reuse '_bo' - possible side-effects?
#279: FILE: include/drm/ttm/ttm_bo.h:534:
+#define ttm_bo_lru_for_each_reserved_guarded(_cursor, _man, _ctx, _bo)	\
+	scoped_guard(ttm_bo_lru_cursor, _cursor, _man, _ctx)		\
+		for ((_bo) = ttm_bo_lru_cursor_first(_cursor); (_bo);	\
+		     (_bo) = ttm_bo_lru_cursor_next(_cursor))

total: 2 errors, 3 warnings, 2 checks, 233 lines checked
45f6a822b522 drm/ttm: Add helpers for shrinking
91d8bcee6841 drm/xe: Add a shrinker for xe bos
-:540: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#540: 
new file mode 100644

total: 0 errors, 1 warnings, 0 checks, 705 lines checked
bc637d5b4355 drm/xe: Increase the XE_PL_TT watermark



^ permalink raw reply	[flat|nested] 54+ messages in thread

* ✓ CI.KUnit: success for TTM shrinker helpers and xe buffer object shrinker (rev13)
  2024-11-15 15:01 [PATCH v14 0/8] TTM shrinker helpers and xe buffer object shrinker Thomas Hellström
                   ` (9 preceding siblings ...)
  2024-11-15 15:07 ` ✗ CI.checkpatch: warning " Patchwork
@ 2024-11-15 15:08 ` Patchwork
  2024-11-15 15:17 ` ✗ CI.Build: failure " Patchwork
                   ` (14 subsequent siblings)
  25 siblings, 0 replies; 54+ messages in thread
From: Patchwork @ 2024-11-15 15:08 UTC (permalink / raw)
  To: Thomas Hellström; +Cc: intel-xe

== Series Details ==

Series: TTM shrinker helpers and xe buffer object shrinker (rev13)
URL   : https://patchwork.freedesktop.org/series/131815/
State : success

== Summary ==

+ trap cleanup EXIT
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/xe/.kunitconfig
[15:07:27] Configuring KUnit Kernel ...
Generating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[15:07:31] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json ARCH=um O=.kunit --jobs=48
../lib/iomap.c:156:5: warning: no previous prototype for ‘ioread64_lo_hi’ [-Wmissing-prototypes]
  156 | u64 ioread64_lo_hi(const void __iomem *addr)
      |     ^~~~~~~~~~~~~~
../lib/iomap.c:163:5: warning: no previous prototype for ‘ioread64_hi_lo’ [-Wmissing-prototypes]
  163 | u64 ioread64_hi_lo(const void __iomem *addr)
      |     ^~~~~~~~~~~~~~
../lib/iomap.c:170:5: warning: no previous prototype for ‘ioread64be_lo_hi’ [-Wmissing-prototypes]
  170 | u64 ioread64be_lo_hi(const void __iomem *addr)
      |     ^~~~~~~~~~~~~~~~
../lib/iomap.c:178:5: warning: no previous prototype for ‘ioread64be_hi_lo’ [-Wmissing-prototypes]
  178 | u64 ioread64be_hi_lo(const void __iomem *addr)
      |     ^~~~~~~~~~~~~~~~
../lib/iomap.c:264:6: warning: no previous prototype for ‘iowrite64_lo_hi’ [-Wmissing-prototypes]
  264 | void iowrite64_lo_hi(u64 val, void __iomem *addr)
      |      ^~~~~~~~~~~~~~~
../lib/iomap.c:272:6: warning: no previous prototype for ‘iowrite64_hi_lo’ [-Wmissing-prototypes]
  272 | void iowrite64_hi_lo(u64 val, void __iomem *addr)
      |      ^~~~~~~~~~~~~~~
../lib/iomap.c:280:6: warning: no previous prototype for ‘iowrite64be_lo_hi’ [-Wmissing-prototypes]
  280 | void iowrite64be_lo_hi(u64 val, void __iomem *addr)
      |      ^~~~~~~~~~~~~~~~~
../lib/iomap.c:288:6: warning: no previous prototype for ‘iowrite64be_hi_lo’ [-Wmissing-prototypes]
  288 | void iowrite64be_hi_lo(u64 val, void __iomem *addr)
      |      ^~~~~~~~~~~~~~~~~

[15:07:59] Starting KUnit Kernel (1/1)...
[15:07:59] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[15:08:00] =================== guc_dbm (7 subtests) ===================
[15:08:00] [PASSED] test_empty
[15:08:00] [PASSED] test_default
[15:08:00] ======================== test_size  ========================
[15:08:00] [PASSED] 4
[15:08:00] [PASSED] 8
[15:08:00] [PASSED] 32
[15:08:00] [PASSED] 256
[15:08:00] ==================== [PASSED] test_size ====================
[15:08:00] ======================= test_reuse  ========================
[15:08:00] [PASSED] 4
[15:08:00] [PASSED] 8
[15:08:00] [PASSED] 32
[15:08:00] [PASSED] 256
[15:08:00] =================== [PASSED] test_reuse ====================
[15:08:00] =================== test_range_overlap  ====================
[15:08:00] [PASSED] 4
[15:08:00] [PASSED] 8
[15:08:00] [PASSED] 32
[15:08:00] [PASSED] 256
[15:08:00] =============== [PASSED] test_range_overlap ================
[15:08:00] =================== test_range_compact  ====================
[15:08:00] [PASSED] 4
[15:08:00] [PASSED] 8
[15:08:00] [PASSED] 32
[15:08:00] [PASSED] 256
[15:08:00] =============== [PASSED] test_range_compact ================
[15:08:00] ==================== test_range_spare  =====================
[15:08:00] [PASSED] 4
[15:08:00] [PASSED] 8
[15:08:00] [PASSED] 32
[15:08:00] [PASSED] 256
[15:08:00] ================ [PASSED] test_range_spare =================
[15:08:00] ===================== [PASSED] guc_dbm =====================
[15:08:00] =================== guc_idm (6 subtests) ===================
[15:08:00] [PASSED] bad_init
[15:08:00] [PASSED] no_init
[15:08:00] [PASSED] init_fini
[15:08:00] [PASSED] check_used
[15:08:00] [PASSED] check_quota
[15:08:00] [PASSED] check_all
[15:08:00] ===================== [PASSED] guc_idm =====================
[15:08:00] ================== no_relay (3 subtests) ===================
[15:08:00] [PASSED] xe_drops_guc2pf_if_not_ready
[15:08:00] [PASSED] xe_drops_guc2vf_if_not_ready
[15:08:00] [PASSED] xe_rejects_send_if_not_ready
[15:08:00] ==================== [PASSED] no_relay =====================
[15:08:00] ================== pf_relay (14 subtests) ==================
[15:08:00] [PASSED] pf_rejects_guc2pf_too_short
[15:08:00] [PASSED] pf_rejects_guc2pf_too_long
[15:08:00] [PASSED] pf_rejects_guc2pf_no_payload
[15:08:00] [PASSED] pf_fails_no_payload
[15:08:00] [PASSED] pf_fails_bad_origin
[15:08:00] [PASSED] pf_fails_bad_type
[15:08:00] [PASSED] pf_txn_reports_error
[15:08:00] [PASSED] pf_txn_sends_pf2guc
[15:08:00] [PASSED] pf_sends_pf2guc
[15:08:00] [SKIPPED] pf_loopback_nop
[15:08:00] [SKIPPED] pf_loopback_echo
[15:08:00] [SKIPPED] pf_loopback_fail
[15:08:00] [SKIPPED] pf_loopback_busy
[15:08:00] [SKIPPED] pf_loopback_retry
[15:08:00] ==================== [PASSED] pf_relay =====================
[15:08:00] ================== vf_relay (3 subtests) ===================
[15:08:00] [PASSED] vf_rejects_guc2vf_too_short
[15:08:00] [PASSED] vf_rejects_guc2vf_too_long
[15:08:00] [PASSED] vf_rejects_guc2vf_no_payload
[15:08:00] ==================== [PASSED] vf_relay =====================
[15:08:00] ================= pf_service (11 subtests) =================
[15:08:00] [PASSED] pf_negotiate_any
[15:08:00] [PASSED] pf_negotiate_base_match
[15:08:00] [PASSED] pf_negotiate_base_newer
[15:08:00] [PASSED] pf_negotiate_base_next
[15:08:00] [SKIPPED] pf_negotiate_base_older
[15:08:00] [PASSED] pf_negotiate_base_prev
[15:08:00] [PASSED] pf_negotiate_latest_match
[15:08:00] [PASSED] pf_negotiate_latest_newer
[15:08:00] [PASSED] pf_negotiate_latest_next
[15:08:00] [SKIPPED] pf_negotiate_latest_older
[15:08:00] [SKIPPED] pf_negotiate_latest_prev
[15:08:00] =================== [PASSED] pf_service ====================
[15:08:00] ===================== lmtt (1 subtest) =====================
[15:08:00] ======================== test_ops  =========================
[15:08:00] [PASSED] 2-level
[15:08:00] [PASSED] multi-level
[15:08:00] ==================== [PASSED] test_ops =====================
[15:08:00] ====================== [PASSED] lmtt =======================
[15:08:00] =================== xe_mocs (2 subtests) ===================
[15:08:00] ================ xe_live_mocs_kernel_kunit  ================
[15:08:00] =========== [SKIPPED] xe_live_mocs_kernel_kunit ============
[15:08:00] ================ xe_live_mocs_reset_kunit  =================
[15:08:00] ============ [SKIPPED] xe_live_mocs_reset_kunit ============
[15:08:00] ==================== [SKIPPED] xe_mocs =====================
[15:08:00] ================= xe_migrate (2 subtests) ==================
[15:08:00] ================= xe_migrate_sanity_kunit  =================
[15:08:00] ============ [SKIPPED] xe_migrate_sanity_kunit =============
[15:08:00] ================== xe_validate_ccs_kunit  ==================
[15:08:00] ============= [SKIPPED] xe_validate_ccs_kunit ==============
[15:08:00] =================== [SKIPPED] xe_migrate ===================
[15:08:00] ================== xe_dma_buf (1 subtest) ==================
[15:08:00] ==================== xe_dma_buf_kunit  =====================
[15:08:00] ================ [SKIPPED] xe_dma_buf_kunit ================
[15:08:00] =================== [SKIPPED] xe_dma_buf ===================
[15:08:00] ==================== xe_bo (3 subtests) ====================
[15:08:00] ================== xe_ccs_migrate_kunit  ===================
[15:08:00] ============== [SKIPPED] xe_ccs_migrate_kunit ==============
[15:08:00] ==================== xe_bo_evict_kunit  ====================
[15:08:00] =============== [SKIPPED] xe_bo_evict_kunit ================
[15:08:00] =================== xe_bo_shrink_kunit  ====================
[15:08:00] =============== [SKIPPED] xe_bo_shrink_kunit ===============
[15:08:00] ===================== [SKIPPED] xe_bo ======================
[15:08:00] ==================== args (11 subtests) ====================
[15:08:00] [PASSED] count_args_test
[15:08:00] [PASSED] call_args_example
[15:08:00] [PASSED] call_args_test
[15:08:00] [PASSED] drop_first_arg_example
[15:08:00] [PASSED] drop_first_arg_test
[15:08:00] [PASSED] first_arg_example
[15:08:00] [PASSED] first_arg_test
[15:08:00] [PASSED] last_arg_example
[15:08:00] [PASSED] last_arg_test
[15:08:00] [PASSED] pick_arg_example
[15:08:00] [PASSED] sep_comma_example
stty: 'standard input': Inappropriate ioctl for device

[15:08:00] ====================== [PASSED] args =======================
[15:08:00] =================== xe_pci (2 subtests) ====================
[15:08:00] [PASSED] xe_gmdid_graphics_ip
[15:08:00] [PASSED] xe_gmdid_media_ip
[15:08:00] ===================== [PASSED] xe_pci ======================
[15:08:00] =================== xe_rtp (2 subtests) ====================
[15:08:00] =============== xe_rtp_process_to_sr_tests  ================
[15:08:00] [PASSED] coalesce-same-reg
[15:08:00] [PASSED] no-match-no-add
[15:08:00] [PASSED] match-or
[15:08:00] [PASSED] match-or-xfail
[15:08:00] [PASSED] no-match-no-add-multiple-rules
[15:08:00] [PASSED] two-regs-two-entries
[15:08:00] [PASSED] clr-one-set-other
[15:08:00] [PASSED] set-field
[15:08:00] [PASSED] conflict-duplicate
[15:08:00] [PASSED] conflict-not-disjoint
[15:08:00] [PASSED] conflict-reg-type
[15:08:00] =========== [PASSED] xe_rtp_process_to_sr_tests ============
[15:08:00] ================== xe_rtp_process_tests  ===================
[15:08:00] [PASSED] active1
[15:08:00] [PASSED] active2
[15:08:00] [PASSED] active-inactive
[15:08:00] [PASSED] inactive-active
[15:08:00] [PASSED] inactive-1st_or_active-inactive
[15:08:00] [PASSED] inactive-2nd_or_active-inactive
[15:08:00] [PASSED] inactive-last_or_active-inactive
[15:08:00] [PASSED] inactive-no_or_active-inactive
[15:08:00] ============== [PASSED] xe_rtp_process_tests ===============
[15:08:00] ===================== [PASSED] xe_rtp ======================
[15:08:00] ==================== xe_wa (1 subtest) =====================
[15:08:00] ======================== xe_wa_gt  =========================
[15:08:00] [PASSED] TIGERLAKE (B0)
[15:08:00] [PASSED] DG1 (A0)
[15:08:00] [PASSED] DG1 (B0)
[15:08:00] [PASSED] ALDERLAKE_S (A0)
[15:08:00] [PASSED] ALDERLAKE_S (B0)
[15:08:00] [PASSED] ALDERLAKE_S (C0)
[15:08:00] [PASSED] ALDERLAKE_S (D0)
[15:08:00] [PASSED] ALDERLAKE_P (A0)
[15:08:00] [PASSED] ALDERLAKE_P (B0)
[15:08:00] [PASSED] ALDERLAKE_P (C0)
[15:08:00] [PASSED] ALDERLAKE_S_RPLS (D0)
[15:08:00] [PASSED] ALDERLAKE_P_RPLU (E0)
[15:08:00] [PASSED] DG2_G10 (C0)
[15:08:00] [PASSED] DG2_G11 (B1)
[15:08:00] [PASSED] DG2_G12 (A1)
[15:08:00] [PASSED] METEORLAKE (g:A0, m:A0)
[15:08:00] [PASSED] METEORLAKE (g:A0, m:A0)
[15:08:00] [PASSED] METEORLAKE (g:A0, m:A0)
[15:08:00] [PASSED] LUNARLAKE (g:A0, m:A0)
[15:08:00] [PASSED] LUNARLAKE (g:B0, m:A0)
[15:08:00] [PASSED] BATTLEMAGE (g:A0, m:A1)
[15:08:00] ==================== [PASSED] xe_wa_gt =====================
[15:08:00] ====================== [PASSED] xe_wa ======================
[15:08:00] ============================================================
[15:08:00] Testing complete. Ran 122 tests: passed: 106, skipped: 16
[15:08:00] Elapsed time: 32.722s total, 4.427s configuring, 28.029s building, 0.242s running

+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/tests/.kunitconfig
[15:08:00] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[15:08:02] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json ARCH=um O=.kunit --jobs=48
../lib/iomap.c:156:5: warning: no previous prototype for ‘ioread64_lo_hi’ [-Wmissing-prototypes]
  156 | u64 ioread64_lo_hi(const void __iomem *addr)
      |     ^~~~~~~~~~~~~~
../lib/iomap.c:163:5: warning: no previous prototype for ‘ioread64_hi_lo’ [-Wmissing-prototypes]
  163 | u64 ioread64_hi_lo(const void __iomem *addr)
      |     ^~~~~~~~~~~~~~
../lib/iomap.c:170:5: warning: no previous prototype for ‘ioread64be_lo_hi’ [-Wmissing-prototypes]
  170 | u64 ioread64be_lo_hi(const void __iomem *addr)
      |     ^~~~~~~~~~~~~~~~
../lib/iomap.c:178:5: warning: no previous prototype for ‘ioread64be_hi_lo’ [-Wmissing-prototypes]
  178 | u64 ioread64be_hi_lo(const void __iomem *addr)
      |     ^~~~~~~~~~~~~~~~
../lib/iomap.c:264:6: warning: no previous prototype for ‘iowrite64_lo_hi’ [-Wmissing-prototypes]
  264 | void iowrite64_lo_hi(u64 val, void __iomem *addr)
      |      ^~~~~~~~~~~~~~~
../lib/iomap.c:272:6: warning: no previous prototype for ‘iowrite64_hi_lo’ [-Wmissing-prototypes]
  272 | void iowrite64_hi_lo(u64 val, void __iomem *addr)
      |      ^~~~~~~~~~~~~~~
../lib/iomap.c:280:6: warning: no previous prototype for ‘iowrite64be_lo_hi’ [-Wmissing-prototypes]
  280 | void iowrite64be_lo_hi(u64 val, void __iomem *addr)
      |      ^~~~~~~~~~~~~~~~~
../lib/iomap.c:288:6: warning: no previous prototype for ‘iowrite64be_hi_lo’ [-Wmissing-prototypes]
  288 | void iowrite64be_hi_lo(u64 val, void __iomem *addr)
      |      ^~~~~~~~~~~~~~~~~

[15:08:24] Starting KUnit Kernel (1/1)...
[15:08:24] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[15:08:24] ================== drm_buddy (7 subtests) ==================
[15:08:24] [PASSED] drm_test_buddy_alloc_limit
[15:08:24] [PASSED] drm_test_buddy_alloc_optimistic
[15:08:24] [PASSED] drm_test_buddy_alloc_pessimistic
[15:08:24] [PASSED] drm_test_buddy_alloc_pathological
[15:08:24] [PASSED] drm_test_buddy_alloc_contiguous
[15:08:24] [PASSED] drm_test_buddy_alloc_clear
[15:08:24] [PASSED] drm_test_buddy_alloc_range_bias
[15:08:24] ==================== [PASSED] drm_buddy ====================
[15:08:24] ============= drm_cmdline_parser (40 subtests) =============
[15:08:24] [PASSED] drm_test_cmdline_force_d_only
[15:08:24] [PASSED] drm_test_cmdline_force_D_only_dvi
[15:08:24] [PASSED] drm_test_cmdline_force_D_only_hdmi
[15:08:24] [PASSED] drm_test_cmdline_force_D_only_not_digital
[15:08:24] [PASSED] drm_test_cmdline_force_e_only
[15:08:24] [PASSED] drm_test_cmdline_res
[15:08:24] [PASSED] drm_test_cmdline_res_vesa
[15:08:24] [PASSED] drm_test_cmdline_res_vesa_rblank
[15:08:24] [PASSED] drm_test_cmdline_res_rblank
[15:08:24] [PASSED] drm_test_cmdline_res_bpp
[15:08:24] [PASSED] drm_test_cmdline_res_refresh
[15:08:24] [PASSED] drm_test_cmdline_res_bpp_refresh
[15:08:24] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced
[15:08:24] [PASSED] drm_test_cmdline_res_bpp_refresh_margins
[15:08:24] [PASSED] drm_test_cmdline_res_bpp_refresh_force_off
[15:08:24] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on
[15:08:24] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_analog
[15:08:24] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_digital
[15:08:24] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced_margins_force_on
[15:08:24] [PASSED] drm_test_cmdline_res_margins_force_on
[15:08:24] [PASSED] drm_test_cmdline_res_vesa_margins
[15:08:24] [PASSED] drm_test_cmdline_name
[15:08:24] [PASSED] drm_test_cmdline_name_bpp
[15:08:24] [PASSED] drm_test_cmdline_name_option
[15:08:24] [PASSED] drm_test_cmdline_name_bpp_option
[15:08:24] [PASSED] drm_test_cmdline_rotate_0
[15:08:24] [PASSED] drm_test_cmdline_rotate_90
[15:08:24] [PASSED] drm_test_cmdline_rotate_180
[15:08:24] [PASSED] drm_test_cmdline_rotate_270
[15:08:24] [PASSED] drm_test_cmdline_hmirror
[15:08:24] [PASSED] drm_test_cmdline_vmirror
[15:08:24] [PASSED] drm_test_cmdline_margin_options
[15:08:24] [PASSED] drm_test_cmdline_multiple_options
[15:08:24] [PASSED] drm_test_cmdline_bpp_extra_and_option
[15:08:24] [PASSED] drm_test_cmdline_extra_and_option
[15:08:24] [PASSED] drm_test_cmdline_freestanding_options
[15:08:24] [PASSED] drm_test_cmdline_freestanding_force_e_and_options
[15:08:24] [PASSED] drm_test_cmdline_panel_orientation
[15:08:24] ================ drm_test_cmdline_invalid  =================
[15:08:24] [PASSED] margin_only
[15:08:24] [PASSED] interlace_only
[15:08:24] [PASSED] res_missing_x
[15:08:24] [PASSED] res_missing_y
[15:08:24] [PASSED] res_bad_y
[15:08:24] [PASSED] res_missing_y_bpp
[15:08:24] [PASSED] res_bad_bpp
[15:08:24] [PASSED] res_bad_refresh
[15:08:24] [PASSED] res_bpp_refresh_force_on_off
[15:08:24] [PASSED] res_invalid_mode
[15:08:24] [PASSED] res_bpp_wrong_place_mode
[15:08:24] [PASSED] name_bpp_refresh
[15:08:24] [PASSED] name_refresh
[15:08:24] [PASSED] name_refresh_wrong_mode
[15:08:24] [PASSED] name_refresh_invalid_mode
[15:08:24] [PASSED] rotate_multiple
[15:08:24] [PASSED] rotate_invalid_val
[15:08:24] [PASSED] rotate_truncated
[15:08:24] [PASSED] invalid_option
[15:08:24] [PASSED] invalid_tv_option
[15:08:24] [PASSED] truncated_tv_option
[15:08:24] ============ [PASSED] drm_test_cmdline_invalid =============
[15:08:24] =============== drm_test_cmdline_tv_options  ===============
[15:08:24] [PASSED] NTSC
[15:08:24] [PASSED] NTSC_443
[15:08:24] [PASSED] NTSC_J
[15:08:24] [PASSED] PAL
[15:08:24] [PASSED] PAL_M
[15:08:24] [PASSED] PAL_N
[15:08:24] [PASSED] SECAM
[15:08:24] [PASSED] MONO_525
[15:08:24] [PASSED] MONO_625
[15:08:24] =========== [PASSED] drm_test_cmdline_tv_options ===========
[15:08:24] =============== [PASSED] drm_cmdline_parser ================
[15:08:24] ========== drmm_connector_hdmi_init (19 subtests) ==========
[15:08:24] [PASSED] drm_test_connector_hdmi_init_valid
[15:08:24] [PASSED] drm_test_connector_hdmi_init_bpc_8
[15:08:24] [PASSED] drm_test_connector_hdmi_init_bpc_10
[15:08:24] [PASSED] drm_test_connector_hdmi_init_bpc_12
[15:08:24] [PASSED] drm_test_connector_hdmi_init_bpc_invalid
[15:08:24] [PASSED] drm_test_connector_hdmi_init_bpc_null
[15:08:24] [PASSED] drm_test_connector_hdmi_init_formats_empty
[15:08:24] [PASSED] drm_test_connector_hdmi_init_formats_no_rgb
[15:08:24] [PASSED] drm_test_connector_hdmi_init_null_ddc
[15:08:24] [PASSED] drm_test_connector_hdmi_init_null_product
[15:08:24] [PASSED] drm_test_connector_hdmi_init_null_vendor
[15:08:24] [PASSED] drm_test_connector_hdmi_init_product_length_exact
[15:08:24] [PASSED] drm_test_connector_hdmi_init_product_length_too_long
[15:08:24] [PASSED] drm_test_connector_hdmi_init_product_valid
[15:08:24] [PASSED] drm_test_connector_hdmi_init_vendor_length_exact
[15:08:24] [PASSED] drm_test_connector_hdmi_init_vendor_length_too_long
[15:08:24] [PASSED] drm_test_connector_hdmi_init_vendor_valid
[15:08:24] ========= drm_test_connector_hdmi_init_type_valid  =========
[15:08:24] [PASSED] HDMI-A
[15:08:24] [PASSED] HDMI-B
[15:08:24] ===== [PASSED] drm_test_connector_hdmi_init_type_valid =====
[15:08:24] ======== drm_test_connector_hdmi_init_type_invalid  ========
[15:08:24] [PASSED] Unknown
[15:08:24] [PASSED] VGA
[15:08:24] [PASSED] DVI-I
[15:08:24] [PASSED] DVI-D
[15:08:24] [PASSED] DVI-A
[15:08:24] [PASSED] Composite
[15:08:24] [PASSED] SVIDEO
[15:08:24] [PASSED] LVDS
[15:08:24] [PASSED] Component
[15:08:24] [PASSED] DIN
[15:08:24] [PASSED] DP
[15:08:24] [PASSED] TV
[15:08:24] [PASSED] eDP
[15:08:24] [PASSED] Virtual
[15:08:24] [PASSED] DSI
[15:08:24] [PASSED] DPI
[15:08:24] [PASSED] Writeback
[15:08:24] [PASSED] SPI
[15:08:24] [PASSED] USB
[15:08:24] ==== [PASSED] drm_test_connector_hdmi_init_type_invalid ====
[15:08:24] ============ [PASSED] drmm_connector_hdmi_init =============
[15:08:24] ============= drmm_connector_init (3 subtests) =============
[15:08:24] [PASSED] drm_test_drmm_connector_init
[15:08:24] [PASSED] drm_test_drmm_connector_init_null_ddc
[15:08:24] ========= drm_test_drmm_connector_init_type_valid  =========
[15:08:24] [PASSED] Unknown
[15:08:24] [PASSED] VGA
[15:08:24] [PASSED] DVI-I
[15:08:24] [PASSED] DVI-D
[15:08:24] [PASSED] DVI-A
[15:08:24] [PASSED] Composite
[15:08:24] [PASSED] SVIDEO
[15:08:24] [PASSED] LVDS
[15:08:24] [PASSED] Component
[15:08:24] [PASSED] DIN
[15:08:24] [PASSED] DP
[15:08:24] [PASSED] HDMI-A
[15:08:24] [PASSED] HDMI-B
[15:08:24] [PASSED] TV
[15:08:24] [PASSED] eDP
[15:08:24] [PASSED] Virtual
[15:08:24] [PASSED] DSI
[15:08:24] [PASSED] DPI
[15:08:24] [PASSED] Writeback
[15:08:24] [PASSED] SPI
[15:08:24] [PASSED] USB
[15:08:24] ===== [PASSED] drm_test_drmm_connector_init_type_valid =====
[15:08:24] =============== [PASSED] drmm_connector_init ===============
[15:08:24] = drm_connector_attach_broadcast_rgb_property (2 subtests) =
[15:08:24] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property
[15:08:24] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property_hdmi_connector
[15:08:24] === [PASSED] drm_connector_attach_broadcast_rgb_property ===
[15:08:24] ========== drm_get_tv_mode_from_name (2 subtests) ==========
[15:08:24] ========== drm_test_get_tv_mode_from_name_valid  ===========
[15:08:24] [PASSED] NTSC
[15:08:24] [PASSED] NTSC-443
[15:08:24] [PASSED] NTSC-J
[15:08:24] [PASSED] PAL
[15:08:24] [PASSED] PAL-M
[15:08:24] [PASSED] PAL-N
[15:08:24] [PASSED] SECAM
[15:08:24] [PASSED] Mono
[15:08:24] ====== [PASSED] drm_test_get_tv_mode_from_name_valid =======
[15:08:24] [PASSED] drm_test_get_tv_mode_from_name_truncated
[15:08:24] ============ [PASSED] drm_get_tv_mode_from_name ============
[15:08:24] = drm_test_connector_hdmi_compute_mode_clock (12 subtests) =
[15:08:24] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb
[15:08:24] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc
[15:08:24] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc_vic_1
[15:08:24] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc
[15:08:24] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc_vic_1
[15:08:24] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_double
[15:08:24] = drm_test_connector_hdmi_compute_mode_clock_yuv420_valid  =
[15:08:24] [PASSED] VIC 96
[15:08:24] [PASSED] VIC 97
[15:08:24] [PASSED] VIC 101
[15:08:24] [PASSED] VIC 102
[15:08:24] [PASSED] VIC 106
[15:08:24] [PASSED] VIC 107
[15:08:24] === [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_valid ===
[15:08:24] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_10_bpc
[15:08:24] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_12_bpc
[15:08:24] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_8_bpc
[15:08:24] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_10_bpc
[15:08:24] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_12_bpc
[15:08:24] === [PASSED] drm_test_connector_hdmi_compute_mode_clock ====
[15:08:24] == drm_hdmi_connector_get_broadcast_rgb_name (2 subtests) ==
[15:08:24] === drm_test_drm_hdmi_connector_get_broadcast_rgb_name  ====
[15:08:24] [PASSED] Automatic
[15:08:24] [PASSED] Full
[15:08:24] [PASSED] Limited 16:235
[15:08:24] === [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name ===
[15:08:24] [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name_invalid
[15:08:24] ==== [PASSED] drm_hdmi_connector_get_broadcast_rgb_name ====
[15:08:24] == drm_hdmi_connector_get_output_format_name (2 subtests) ==
[15:08:24] === drm_test_drm_hdmi_connector_get_output_format_name  ====
[15:08:24] [PASSED] RGB
[15:08:24] [PASSED] YUV 4:2:0
[15:08:24] [PASSED] YUV 4:2:2
[15:08:24] [PASSED] YUV 4:4:4
[15:08:24] === [PASSED] drm_test_drm_hdmi_connector_get_output_format_name ===
[15:08:24] [PASSED] drm_test_drm_hdmi_connector_get_output_format_name_invalid
[15:08:24] ==== [PASSED] drm_hdmi_connector_get_output_format_name ====
[15:08:24] ============= drm_damage_helper (21 subtests) ==============
[15:08:24] [PASSED] drm_test_damage_iter_no_damage
[15:08:24] [PASSED] drm_test_damage_iter_no_damage_fractional_src
[15:08:24] [PASSED] drm_test_damage_iter_no_damage_src_moved
[15:08:24] [PASSED] drm_test_damage_iter_no_damage_fractional_src_moved
[15:08:24] [PASSED] drm_test_damage_iter_no_damage_not_visible
[15:08:24] [PASSED] drm_test_damage_iter_no_damage_no_crtc
[15:08:24] [PASSED] drm_test_damage_iter_no_damage_no_fb
[15:08:24] [PASSED] drm_test_damage_iter_simple_damage
[15:08:24] [PASSED] drm_test_damage_iter_single_damage
[15:08:24] [PASSED] drm_test_damage_iter_single_damage_intersect_src
[15:08:24] [PASSED] drm_test_damage_iter_single_damage_outside_src
[15:08:24] [PASSED] drm_test_damage_iter_single_damage_fractional_src
[15:08:24] [PASSED] drm_test_damage_iter_single_damage_intersect_fractional_src
[15:08:24] [PASSED] drm_test_damage_iter_single_damage_outside_fractional_src
[15:08:24] [PASSED] drm_test_damage_iter_single_damage_src_moved
[15:08:24] [PASSED] drm_test_damage_iter_single_damage_fractional_src_moved
[15:08:24] [PASSED] drm_test_damage_iter_damage
[15:08:24] [PASSED] drm_test_damage_iter_damage_one_intersect
[15:08:24] [PASSED] drm_test_damage_iter_damage_one_outside
[15:08:24] [PASSED] drm_test_damage_iter_damage_src_moved
[15:08:24] [PASSED] drm_test_damage_iter_damage_not_visible
[15:08:24] ================ [PASSED] drm_damage_helper ================
[15:08:24] ============== drm_dp_mst_helper (3 subtests) ==============
[15:08:24] ============== drm_test_dp_mst_calc_pbn_mode  ==============
[15:08:24] [PASSED] Clock 154000 BPP 30 DSC disabled
[15:08:24] [PASSED] Clock 234000 BPP 30 DSC disabled
[15:08:24] [PASSED] Clock 297000 BPP 24 DSC disabled
[15:08:24] [PASSED] Clock 332880 BPP 24 DSC enabled
[15:08:24] [PASSED] Clock 324540 BPP 24 DSC enabled
[15:08:24] ========== [PASSED] drm_test_dp_mst_calc_pbn_mode ==========
[15:08:24] ============== drm_test_dp_mst_calc_pbn_div  ===============
[15:08:24] [PASSED] Link rate 2000000 lane count 4
[15:08:24] [PASSED] Link rate 2000000 lane count 2
[15:08:24] [PASSED] Link rate 2000000 lane count 1
[15:08:24] [PASSED] Link rate 1350000 lane count 4
[15:08:24] [PASSED] Link rate 1350000 lane count 2
[15:08:24] [PASSED] Link rate 1350000 lane count 1
[15:08:24] [PASSED] Link rate 1000000 lane count 4
[15:08:24] [PASSED] Link rate 1000000 lane count 2
[15:08:24] [PASSED] Link rate 1000000 lane count 1
[15:08:24] [PASSED] Link rate 810000 lane count 4
[15:08:24] [PASSED] Link rate 810000 lane count 2
[15:08:24] [PASSED] Link rate 810000 lane count 1
[15:08:24] [PASSED] Link rate 540000 lane count 4
[15:08:24] [PASSED] Link rate 540000 lane count 2
[15:08:24] [PASSED] Link rate 540000 lane count 1
[15:08:24] [PASSED] Link rate 270000 lane count 4
[15:08:24] [PASSED] Link rate 270000 lane count 2
[15:08:24] [PASSED] Link rate 270000 lane count 1
[15:08:24] [PASSED] Link rate 162000 lane count 4
[15:08:24] [PASSED] Link rate 162000 lane count 2
[15:08:24] [PASSED] Link rate 162000 lane count 1
[15:08:24] ========== [PASSED] drm_test_dp_mst_calc_pbn_div ===========
[15:08:24] ========= drm_test_dp_mst_sideband_msg_req_decode  =========
[15:08:24] [PASSED] DP_ENUM_PATH_RESOURCES with port number
[15:08:24] [PASSED] DP_POWER_UP_PHY with port number
[15:08:24] [PASSED] DP_POWER_DOWN_PHY with port number
[15:08:24] [PASSED] DP_ALLOCATE_PAYLOAD with SDP stream sinks
[15:08:24] [PASSED] DP_ALLOCATE_PAYLOAD with port number
[15:08:24] [PASSED] DP_ALLOCATE_PAYLOAD with VCPI
[15:08:24] [PASSED] DP_ALLOCATE_PAYLOAD with PBN
[15:08:24] [PASSED] DP_QUERY_PAYLOAD with port number
[15:08:24] [PASSED] DP_QUERY_PAYLOAD with VCPI
[15:08:24] [PASSED] DP_REMOTE_DPCD_READ with port number
[15:08:24] [PASSED] DP_REMOTE_DPCD_READ with DPCD address
[15:08:24] [PASSED] DP_REMOTE_DPCD_READ with max number of bytes
[15:08:24] [PASSED] DP_REMOTE_DPCD_WRITE with port number
[15:08:24] [PASSED] DP_REMOTE_DPCD_WRITE with DPCD address
[15:08:24] [PASSED] DP_REMOTE_DPCD_WRITE with data array
[15:08:24] [PASSED] DP_REMOTE_I2C_READ with port number
[15:08:24] [PASSED] DP_REMOTE_I2C_READ with I2C device ID
[15:08:24] [PASSED] DP_REMOTE_I2C_READ with transactions array
[15:08:24] [PASSED] DP_REMOTE_I2C_WRITE with port number
[15:08:24] [PASSED] DP_REMOTE_I2C_WRITE with I2C device ID
[15:08:24] [PASSED] DP_REMOTE_I2C_WRITE with data array
[15:08:24] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream ID
[15:08:24] [PASSED] DP_QUERY_STREAM_ENC_STATUS with client ID
[15:08:24] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream event
[15:08:24] [PASSED] DP_QUERY_STREAM_ENC_STATUS with valid stream event
[15:08:24] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream behavior
[15:08:24] [PASSED] DP_QUERY_STREAM_ENC_STATUS with a valid stream behavior
[15:08:24] ===== [PASSED] drm_test_dp_mst_sideband_msg_req_decode =====
[15:08:24] ================ [PASSED] drm_dp_mst_helper ================
[15:08:24] ================== drm_exec (7 subtests) ===================
[15:08:24] [PASSED] sanitycheck
[15:08:24] [PASSED] test_lock
[15:08:24] [PASSED] test_lock_unlock
[15:08:24] [PASSED] test_duplicates
[15:08:24] [PASSED] test_prepare
[15:08:24] [PASSED] test_prepare_array
[15:08:24] [PASSED] test_multiple_loops
[15:08:24] ==================== [PASSED] drm_exec =====================
[15:08:24] =========== drm_format_helper_test (17 subtests) ===========
[15:08:24] ============== drm_test_fb_xrgb8888_to_gray8  ==============
[15:08:24] [PASSED] single_pixel_source_buffer
[15:08:24] [PASSED] single_pixel_clip_rectangle
[15:08:24] [PASSED] well_known_colors
[15:08:24] [PASSED] destination_pitch
[15:08:24] ========== [PASSED] drm_test_fb_xrgb8888_to_gray8 ==========
[15:08:24] ============= drm_test_fb_xrgb8888_to_rgb332  ==============
[15:08:24] [PASSED] single_pixel_source_buffer
[15:08:24] [PASSED] single_pixel_clip_rectangle
[15:08:24] [PASSED] well_known_colors
[15:08:24] [PASSED] destination_pitch
[15:08:24] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb332 ==========
[15:08:24] ============= drm_test_fb_xrgb8888_to_rgb565  ==============
[15:08:24] [PASSED] single_pixel_source_buffer
[15:08:24] [PASSED] single_pixel_clip_rectangle
[15:08:24] [PASSED] well_known_colors
[15:08:24] [PASSED] destination_pitch
[15:08:24] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb565 ==========
[15:08:24] ============ drm_test_fb_xrgb8888_to_xrgb1555  =============
[15:08:24] [PASSED] single_pixel_source_buffer
[15:08:24] [PASSED] single_pixel_clip_rectangle
[15:08:24] [PASSED] well_known_colors
[15:08:24] [PASSED] destination_pitch
[15:08:24] ======== [PASSED] drm_test_fb_xrgb8888_to_xrgb1555 =========
[15:08:24] ============ drm_test_fb_xrgb8888_to_argb1555  =============
[15:08:24] [PASSED] single_pixel_source_buffer
[15:08:24] [PASSED] single_pixel_clip_rectangle
[15:08:24] [PASSED] well_known_colors
[15:08:24] [PASSED] destination_pitch
[15:08:24] ======== [PASSED] drm_test_fb_xrgb8888_to_argb1555 =========
[15:08:24] ============ drm_test_fb_xrgb8888_to_rgba5551  =============
[15:08:24] [PASSED] single_pixel_source_buffer
[15:08:24] [PASSED] single_pixel_clip_rectangle
[15:08:24] [PASSED] well_known_colors
[15:08:24] [PASSED] destination_pitch
[15:08:24] ======== [PASSED] drm_test_fb_xrgb8888_to_rgba5551 =========
[15:08:24] ============= drm_test_fb_xrgb8888_to_rgb888  ==============
[15:08:24] [PASSED] single_pixel_source_buffer
[15:08:24] [PASSED] single_pixel_clip_rectangle
[15:08:24] [PASSED] well_known_colors
[15:08:24] [PASSED] destination_pitch
[15:08:24] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb888 ==========
[15:08:24] ============ drm_test_fb_xrgb8888_to_argb8888  =============
[15:08:24] [PASSED] single_pixel_source_buffer
[15:08:24] [PASSED] single_pixel_clip_rectangle
[15:08:24] [PASSED] well_known_colors
[15:08:24] [PASSED] destination_pitch
[15:08:24] ======== [PASSED] drm_test_fb_xrgb8888_to_argb8888 =========
[15:08:24] =========== drm_test_fb_xrgb8888_to_xrgb2101010  ===========
[15:08:24] [PASSED] single_pixel_source_buffer
[15:08:24] [PASSED] single_pixel_clip_rectangle
[15:08:24] [PASSED] well_known_colors
[15:08:24] [PASSED] destination_pitch
[15:08:24] ======= [PASSED] drm_test_fb_xrgb8888_to_xrgb2101010 =======
[15:08:24] =========== drm_test_fb_xrgb8888_to_argb2101010  ===========
[15:08:24] [PASSED] single_pixel_source_buffer
[15:08:24] [PASSED] single_pixel_clip_rectangle
[15:08:24] [PASSED] well_known_colors
[15:08:24] [PASSED] destination_pitch
[15:08:24] ======= [PASSED] drm_test_fb_xrgb8888_to_argb2101010 =======
[15:08:24] ============== drm_test_fb_xrgb8888_to_mono  ===============
[15:08:24] [PASSED] single_pixel_source_buffer
[15:08:24] [PASSED] single_pixel_clip_rectangle
[15:08:24] [PASSED] well_known_colors
[15:08:24] [PASSED] destination_pitch
[15:08:24] ========== [PASSED] drm_test_fb_xrgb8888_to_mono ===========
[15:08:24] ==================== drm_test_fb_swab  =====================
[15:08:24] [PASSED] single_pixel_source_buffer
[15:08:24] [PASSED] single_pixel_clip_rectangle
[15:08:24] [PASSED] well_known_colors
[15:08:24] [PASSED] destination_pitch
[15:08:24] ================ [PASSED] drm_test_fb_swab =================
[15:08:24] ============ drm_test_fb_xrgb8888_to_xbgr8888  =============
[15:08:24] [PASSED] single_pixel_source_buffer
[15:08:24] [PASSED] single_pixel_clip_rectangle
[15:08:24] [PASSED] well_known_colors
[15:08:24] [PASSED] destination_pitch
[15:08:24] ======== [PASSED] drm_test_fb_xrgb8888_to_xbgr8888 =========
[15:08:24] ============ drm_test_fb_xrgb8888_to_abgr8888  =============
[15:08:24] [PASSED] single_pixel_source_buffer
[15:08:24] [PASSED] single_pixel_clip_rectangle
[15:08:24] [PASSED] well_known_colors
[15:08:24] [PASSED] destination_pitch
[15:08:24] ======== [PASSED] drm_test_fb_xrgb8888_to_abgr8888 =========
[15:08:24] ================= drm_test_fb_clip_offset  =================
[15:08:24] [PASSED] pass through
[15:08:24] [PASSED] horizontal offset
[15:08:24] [PASSED] vertical offset
[15:08:24] [PASSED] horizontal and vertical offset
[15:08:24] [PASSED] horizontal offset (custom pitch)
[15:08:24] [PASSED] vertical offset (custom pitch)
[15:08:24] [PASSED] horizontal and vertical offset (custom pitch)
[15:08:24] ============= [PASSED] drm_test_fb_clip_offset =============
[15:08:24] ============== drm_test_fb_build_fourcc_list  ==============
[15:08:24] [PASSED] no native formats
[15:08:24] [PASSED] XRGB8888 as native format
[15:08:24] [PASSED] remove duplicates
[15:08:24] [PASSED] convert alpha formats
[15:08:24] [PASSED] random formats
[15:08:24] ========== [PASSED] drm_test_fb_build_fourcc_list ==========
[15:08:24] =================== drm_test_fb_memcpy  ====================
[15:08:24] [PASSED] single_pixel_source_buffer: XR24 little-endian (0x34325258)
[15:08:24] [PASSED] single_pixel_source_buffer: XRA8 little-endian (0x38415258)
[15:08:24] [PASSED] single_pixel_source_buffer: YU24 little-endian (0x34325559)
[15:08:24] [PASSED] single_pixel_clip_rectangle: XB24 little-endian (0x34324258)
[15:08:24] [PASSED] single_pixel_clip_rectangle: XRA8 little-endian (0x38415258)
[15:08:24] [PASSED] single_pixel_clip_rectangle: YU24 little-endian (0x34325559)
[15:08:24] [PASSED] well_known_colors: XB24 little-endian (0x34324258)
[15:08:24] [PASSED] well_known_colors: XRA8 little-endian (0x38415258)
[15:08:24] [PASSED] well_known_colors: YU24 little-endian (0x34325559)
[15:08:24] [PASSED] destination_pitch: XB24 little-endian (0x34324258)
[15:08:24] [PASSED] destination_pitch: XRA8 little-endian (0x38415258)
[15:08:24] [PASSED] destination_pitch: YU24 little-endian (0x34325559)
[15:08:24] =============== [PASSED] drm_test_fb_memcpy ================
[15:08:24] ============= [PASSED] drm_format_helper_test ==============
[15:08:24] ================= drm_format (18 subtests) =================
[15:08:24] [PASSED] drm_test_format_block_width_invalid
[15:08:24] [PASSED] drm_test_format_block_width_one_plane
[15:08:24] [PASSED] drm_test_format_block_width_two_plane
[15:08:24] [PASSED] drm_test_format_block_width_three_plane
[15:08:24] [PASSED] drm_test_format_block_width_tiled
[15:08:24] [PASSED] drm_test_format_block_height_invalid
[15:08:24] [PASSED] drm_test_format_block_height_one_plane
[15:08:24] [PASSED] drm_test_format_block_height_two_plane
[15:08:24] [PASSED] drm_test_format_block_height_three_plane
[15:08:24] [PASSED] drm_test_format_block_height_tiled
[15:08:24] [PASSED] drm_test_format_min_pitch_invalid
[15:08:24] [PASSED] drm_test_format_min_pitch_one_plane_8bpp
[15:08:24] [PASSED] drm_test_format_min_pitch_one_plane_16bpp
[15:08:24] [PASSED] drm_test_format_min_pitch_one_plane_24bpp
[15:08:24] [PASSED] drm_test_format_min_pitch_one_plane_32bpp
[15:08:24] [PASSED] drm_test_format_min_pitch_two_plane
[15:08:24] [PASSED] drm_test_format_min_pitch_three_plane_8bpp
[15:08:24] [PASSED] drm_test_format_min_pitch_tiled
[15:08:24] =================== [PASSED] drm_format ====================
[15:08:24] ============== drm_framebuffer (10 subtests) ===============
[15:08:24] ========== drm_test_framebuffer_check_src_coords  ==========
[15:08:24] [PASSED] Success: source fits into fb
[15:08:24] [PASSED] Fail: overflowing fb with x-axis coordinate
[15:08:24] [PASSED] Fail: overflowing fb with y-axis coordinate
[15:08:24] [PASSED] Fail: overflowing fb with source width
[15:08:24] [PASSED] Fail: overflowing fb with source height
[15:08:24] ====== [PASSED] drm_test_framebuffer_check_src_coords ======
[15:08:24] [PASSED] drm_test_framebuffer_cleanup
[15:08:24] =============== drm_test_framebuffer_create  ===============
[15:08:24] [PASSED] ABGR8888 normal sizes
[15:08:24] [PASSED] ABGR8888 max sizes
[15:08:24] [PASSED] ABGR8888 pitch greater than min required
[15:08:24] [PASSED] ABGR8888 pitch less than min required
[15:08:24] [PASSED] ABGR8888 Invalid width
[15:08:24] [PASSED] ABGR8888 Invalid buffer handle
[15:08:24] [PASSED] No pixel format
[15:08:24] [PASSED] ABGR8888 Width 0
[15:08:24] [PASSED] ABGR8888 Height 0
[15:08:24] [PASSED] ABGR8888 Out of bound height * pitch combination
[15:08:24] [PASSED] ABGR8888 Large buffer offset
[15:08:24] [PASSED] ABGR8888 Buffer offset for inexistent plane
[15:08:24] [PASSED] ABGR8888 Invalid flag
[15:08:24] [PASSED] ABGR8888 Set DRM_MODE_FB_MODIFIERS without modifiers
[15:08:24] [PASSED] ABGR8888 Valid buffer modifier
[15:08:24] [PASSED] ABGR8888 Invalid buffer modifier(DRM_FORMAT_MOD_SAMSUNG_64_32_TILE)
[15:08:24] [PASSED] ABGR8888 Extra pitches without DRM_MODE_FB_MODIFIERS
[15:08:24] [PASSED] ABGR8888 Extra pitches with DRM_MODE_FB_MODIFIERS
[15:08:24] [PASSED] NV12 Normal sizes
[15:08:24] [PASSED] NV12 Max sizes
[15:08:24] [PASSED] NV12 Invalid pitch
[15:08:24] [PASSED] NV12 Invalid modifier/missing DRM_MODE_FB_MODIFIERS flag
[15:08:24] [PASSED] NV12 different  modifier per-plane
[15:08:24] [PASSED] NV12 with DRM_FORMAT_MOD_SAMSUNG_64_32_TILE
[15:08:24] [PASSED] NV12 Valid modifiers without DRM_MODE_FB_MODIFIERS
[15:08:24] [PASSED] NV12 Modifier for inexistent plane
[15:08:24] [PASSED] NV12 Handle for inexistent plane
[15:08:24] [PASSED] NV12 Handle for inexistent plane without DRM_MODE_FB_MODIFIERS
[15:08:24] [PASSED] YVU420 DRM_MODE_FB_MODIFIERS set without modifier
[15:08:24] [PASSED] YVU420 Normal sizes
[15:08:24] [PASSED] YVU420 Max sizes
[15:08:24] [PASSED] YVU420 Invalid pitch
[15:08:24] [PASSED] YVU420 Different pitches
[15:08:24] [PASSED] YVU420 Different buffer offsets/pitches
[15:08:24] [PASSED] YVU420 Modifier set just for plane 0, without DRM_MODE_FB_MODIFIERS
[15:08:24] [PASSED] YVU420 Modifier set just for planes 0, 1, without DRM_MODE_FB_MODIFIERS
[15:08:24] [PASSED] YVU420 Modifier set just for plane 0, 1, with DRM_MODE_FB_MODIFIERS
[15:08:24] [PASSED] YVU420 Valid modifier
[15:08:24] [PASSED] YVU420 Different modifiers per plane
[15:08:24] [PASSED] YVU420 Modifier for inexistent plane
[15:08:24] [PASSED] YUV420_10BIT Invalid modifier(DRM_FORMAT_MOD_LINEAR)
[15:08:24] [PASSED] X0L2 Normal sizes
[15:08:24] [PASSED] X0L2 Max sizes
[15:08:24] [PASSED] X0L2 Invalid pitch
[15:08:24] [PASSED] X0L2 Pitch greater than minimum required
[15:08:24] [PASSED] X0L2 Handle for inexistent plane
[15:08:24] [PASSED] X0L2 Offset for inexistent plane, without DRM_MODE_FB_MODIFIERS set
[15:08:24] [PASSED] X0L2 Modifier without DRM_MODE_FB_MODIFIERS set
[15:08:24] [PASSED] X0L2 Valid modifier
[15:08:24] [PASSED] X0L2 Modifier for inexistent plane
[15:08:24] =========== [PASSED] drm_test_framebuffer_create ===========
[15:08:24] [PASSED] drm_test_framebuffer_free
[15:08:24] [PASSED] drm_test_framebuffer_init
[15:08:24] [PASSED] drm_test_framebuffer_init_bad_format
[15:08:24] [PASSED] drm_test_framebuffer_init_dev_mismatch
[15:08:24] [PASSED] drm_test_framebuffer_lookup
[15:08:24] [PASSED] drm_test_framebuffer_lookup_inexistent
[15:08:24] [PASSED] drm_test_framebuffer_modifiers_not_supported
[15:08:24] ================= [PASSED] drm_framebuffer =================
[15:08:24] ================ drm_gem_shmem (8 subtests) ================
[15:08:24] [PASSED] drm_gem_shmem_test_obj_create
[15:08:24] [PASSED] drm_gem_shmem_test_obj_create_private
[15:08:24] [PASSED] drm_gem_shmem_test_pin_pages
[15:08:24] [PASSED] drm_gem_shmem_test_vmap
[15:08:24] [PASSED] drm_gem_shmem_test_get_pages_sgt
[15:08:24] [PASSED] drm_gem_shmem_test_get_sg_table
[15:08:24] [PASSED] drm_gem_shmem_test_madvise
[15:08:24] [PASSED] drm_gem_shmem_test_purge
[15:08:24] ================== [PASSED] drm_gem_shmem ==================
[15:08:24] === drm_atomic_helper_connector_hdmi_check (22 subtests) ===
[15:08:24] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode
[15:08:24] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode_vic_1
[15:08:24] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode
[15:08:24] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode_vic_1
[15:08:24] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode
[15:08:24] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode_vic_1
[15:08:24] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_changed
[15:08:24] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_not_changed
[15:08:24] [PASSED] drm_test_check_hdmi_funcs_reject_rate
[15:08:24] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback
[15:08:24] [PASSED] drm_test_check_max_tmds_rate_format_fallback
[15:08:24] [PASSED] drm_test_check_output_bpc_crtc_mode_changed
[15:08:24] [PASSED] drm_test_check_output_bpc_crtc_mode_not_changed
[15:08:24] [PASSED] drm_test_check_output_bpc_dvi
[15:08:24] [PASSED] drm_test_check_output_bpc_format_vic_1
[15:08:24] [PASSED] drm_test_check_output_bpc_format_display_8bpc_only
[15:08:24] [PASSED] drm_test_check_output_bpc_format_display_rgb_only
[15:08:24] [PASSED] drm_test_check_output_bpc_format_driver_8bpc_only
[15:08:24] [PASSED] drm_test_check_output_bpc_format_driver_rgb_only
[15:08:24] [PASSED] drm_test_check_tmds_char_rate_rgb_8bpc
[15:08:24] [PASSED] drm_test_check_tmds_char_rate_rgb_10bpc
[15:08:24] [PASSED] drm_test_check_tmds_char_rate_rgb_12bpc
[15:08:24] ===== [PASSED] drm_atomic_helper_connector_hdmi_check ======
[15:08:24] === drm_atomic_helper_connector_hdmi_reset (6 subtests) ====
[15:08:24] [PASSED] drm_test_check_broadcast_rgb_value
[15:08:24] [PASSED] drm_test_check_bpc_8_value
[15:08:24] [PASSED] drm_test_check_bpc_10_value
[15:08:24] [PASSED] drm_test_check_bpc_12_value
[15:08:24] [PASSED] drm_test_check_format_value
[15:08:24] [PASSED] drm_test_check_tmds_char_value
[15:08:24] ===== [PASSED] drm_atomic_helper_connector_hdmi_reset ======
[15:08:24] ================= drm_managed (2 subtests) =================
[15:08:24] [PASSED] drm_test_managed_release_action
[15:08:24] [PASSED] drm_test_managed_run_action
[15:08:24] =================== [PASSED] drm_managed ===================
[15:08:24] =================== drm_mm (6 subtests) ====================
[15:08:24] [PASSED] drm_test_mm_init
[15:08:24] [PASSED] drm_test_mm_debug
[15:08:24] [PASSED] drm_test_mm_align32
[15:08:24] [PASSED] drm_test_mm_align64
[15:08:24] [PASSED] drm_test_mm_lowest
[15:08:24] [PASSED] drm_test_mm_highest
[15:08:24] ===================== [PASSED] drm_mm ======================
[15:08:24] ============= drm_modes_analog_tv (5 subtests) =============
[15:08:24] [PASSED] drm_test_modes_analog_tv_mono_576i
[15:08:24] [PASSED] drm_test_modes_analog_tv_ntsc_480i
[15:08:24] [PASSED] drm_test_modes_analog_tv_ntsc_480i_inlined
[15:08:24] [PASSED] drm_test_modes_analog_tv_pal_576i
[15:08:24] [PASSED] drm_test_modes_analog_tv_pal_576i_inlined
[15:08:24] =============== [PASSED] drm_modes_analog_tv ===============
stty: 'standard input': Inappropriate ioctl for device
[15:08:24] ============== drm_plane_helper (2 subtests) ===============
[15:08:24] =============== drm_test_check_plane_state  ================
[15:08:24] [PASSED] clipping_simple
[15:08:24] [PASSED] clipping_rotate_reflect
[15:08:24] [PASSED] positioning_simple
[15:08:24] [PASSED] upscaling
[15:08:24] [PASSED] downscaling
[15:08:24] [PASSED] rounding1
[15:08:24] [PASSED] rounding2
[15:08:24] [PASSED] rounding3
[15:08:24] [PASSED] rounding4
[15:08:24] =========== [PASSED] drm_test_check_plane_state ============
[15:08:24] =========== drm_test_check_invalid_plane_state  ============
[15:08:24] [PASSED] positioning_invalid
[15:08:24] [PASSED] upscaling_invalid
[15:08:24] [PASSED] downscaling_invalid
[15:08:24] ======= [PASSED] drm_test_check_invalid_plane_state ========
[15:08:24] ================ [PASSED] drm_plane_helper =================
[15:08:24] ====== drm_connector_helper_tv_get_modes (1 subtest) =======
[15:08:24] ====== drm_test_connector_helper_tv_get_modes_check  =======
[15:08:24] [PASSED] None
[15:08:24] [PASSED] PAL
[15:08:24] [PASSED] NTSC
[15:08:24] [PASSED] Both, NTSC Default
[15:08:24] [PASSED] Both, PAL Default
[15:08:24] [PASSED] Both, NTSC Default, with PAL on command-line
[15:08:24] [PASSED] Both, PAL Default, with NTSC on command-line
[15:08:24] == [PASSED] drm_test_connector_helper_tv_get_modes_check ===
[15:08:24] ======== [PASSED] drm_connector_helper_tv_get_modes ========
[15:08:24] ================== drm_rect (9 subtests) ===================
[15:08:24] [PASSED] drm_test_rect_clip_scaled_div_by_zero
[15:08:24] [PASSED] drm_test_rect_clip_scaled_not_clipped
[15:08:24] [PASSED] drm_test_rect_clip_scaled_clipped
[15:08:24] [PASSED] drm_test_rect_clip_scaled_signed_vs_unsigned
[15:08:24] ================= drm_test_rect_intersect  =================
[15:08:24] [PASSED] top-left x bottom-right: 2x2+1+1 x 2x2+0+0
[15:08:24] [PASSED] top-right x bottom-left: 2x2+0+0 x 2x2+1-1
[15:08:24] [PASSED] bottom-left x top-right: 2x2+1-1 x 2x2+0+0
[15:08:24] [PASSED] bottom-right x top-left: 2x2+0+0 x 2x2+1+1
[15:08:24] [PASSED] right x left: 2x1+0+0 x 3x1+1+0
[15:08:24] [PASSED] left x right: 3x1+1+0 x 2x1+0+0
[15:08:24] [PASSED] up x bottom: 1x2+0+0 x 1x3+0-1
[15:08:24] [PASSED] bottom x up: 1x3+0-1 x 1x2+0+0
[15:08:24] [PASSED] touching corner: 1x1+0+0 x 2x2+1+1
[15:08:24] [PASSED] touching side: 1x1+0+0 x 1x1+1+0
[15:08:24] [PASSED] equal rects: 2x2+0+0 x 2x2+0+0
[15:08:24] [PASSED] inside another: 2x2+0+0 x 1x1+1+1
[15:08:24] [PASSED] far away: 1x1+0+0 x 1x1+3+6
[15:08:24] [PASSED] points intersecting: 0x0+5+10 x 0x0+5+10
[15:08:24] [PASSED] points not intersecting: 0x0+0+0 x 0x0+5+10
[15:08:24] ============= [PASSED] drm_test_rect_intersect =============
[15:08:24] ================ drm_test_rect_calc_hscale  ================
[15:08:24] [PASSED] normal use
[15:08:24] [PASSED] out of max range
[15:08:24] [PASSED] out of min range
[15:08:24] [PASSED] zero dst
[15:08:24] [PASSED] negative src
[15:08:24] [PASSED] negative dst
[15:08:24] ============ [PASSED] drm_test_rect_calc_hscale ============
[15:08:24] ================ drm_test_rect_calc_vscale  ================
[15:08:24] [PASSED] normal use
[15:08:24] [PASSED] out of max range
[15:08:24] [PASSED] out of min range
[15:08:24] [PASSED] zero dst
[15:08:24] [PASSED] negative src
[15:08:24] [PASSED] negative dst
[15:08:24] ============ [PASSED] drm_test_rect_calc_vscale ============
[15:08:24] ================== drm_test_rect_rotate  ===================
[15:08:24] [PASSED] reflect-x
[15:08:24] [PASSED] reflect-y
[15:08:24] [PASSED] rotate-0
[15:08:24] [PASSED] rotate-90
[15:08:24] [PASSED] rotate-180
[15:08:24] [PASSED] rotate-270
[15:08:24] ============== [PASSED] drm_test_rect_rotate ===============
[15:08:24] ================ drm_test_rect_rotate_inv  =================
[15:08:24] [PASSED] reflect-x
[15:08:24] [PASSED] reflect-y
[15:08:24] [PASSED] rotate-0
[15:08:24] [PASSED] rotate-90
[15:08:24] [PASSED] rotate-180
[15:08:24] [PASSED] rotate-270
[15:08:24] ============ [PASSED] drm_test_rect_rotate_inv =============
[15:08:24] ==================== [PASSED] drm_rect =====================
[15:08:24] ============================================================
[15:08:24] Testing complete. Ran 526 tests: passed: 526
[15:08:24] Elapsed time: 24.316s total, 1.668s configuring, 22.476s building, 0.170s running

+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/ttm/tests/.kunitconfig
[15:08:24] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[15:08:26] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json ARCH=um O=.kunit --jobs=48
[15:08:33] Starting KUnit Kernel (1/1)...
[15:08:33] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[15:08:34] ================= ttm_device (5 subtests) ==================
[15:08:34] [PASSED] ttm_device_init_basic
[15:08:34] [PASSED] ttm_device_init_multiple
[15:08:34] [PASSED] ttm_device_fini_basic
[15:08:34] [PASSED] ttm_device_init_no_vma_man
[15:08:34] ================== ttm_device_init_pools  ==================
[15:08:34] [PASSED] No DMA allocations, no DMA32 required
[15:08:34] [PASSED] DMA allocations, DMA32 required
[15:08:34] [PASSED] No DMA allocations, DMA32 required
[15:08:34] [PASSED] DMA allocations, no DMA32 required
[15:08:34] ============== [PASSED] ttm_device_init_pools ==============
[15:08:34] =================== [PASSED] ttm_device ====================
[15:08:34] ================== ttm_pool (8 subtests) ===================
[15:08:34] ================== ttm_pool_alloc_basic  ===================
[15:08:34] [PASSED] One page
[15:08:34] [PASSED] More than one page
[15:08:34] [PASSED] Above the allocation limit
[15:08:34] [PASSED] One page, with coherent DMA mappings enabled
[15:08:34] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[15:08:34] ============== [PASSED] ttm_pool_alloc_basic ===============
[15:08:34] ============== ttm_pool_alloc_basic_dma_addr  ==============
[15:08:34] [PASSED] One page
[15:08:34] [PASSED] More than one page
[15:08:34] [PASSED] Above the allocation limit
[15:08:34] [PASSED] One page, with coherent DMA mappings enabled
[15:08:34] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[15:08:34] ========== [PASSED] ttm_pool_alloc_basic_dma_addr ==========
[15:08:34] [PASSED] ttm_pool_alloc_order_caching_match
[15:08:34] [PASSED] ttm_pool_alloc_caching_mismatch
[15:08:34] [PASSED] ttm_pool_alloc_order_mismatch
[15:08:34] [PASSED] ttm_pool_free_dma_alloc
[15:08:34] [PASSED] ttm_pool_free_no_dma_alloc
[15:08:34] [PASSED] ttm_pool_fini_basic
[15:08:34] ==================== [PASSED] ttm_pool =====================
[15:08:34] ================ ttm_resource (8 subtests) =================
[15:08:34] ================= ttm_resource_init_basic  =================
[15:08:34] [PASSED] Init resource in TTM_PL_SYSTEM
[15:08:34] [PASSED] Init resource in TTM_PL_VRAM
[15:08:34] [PASSED] Init resource in a private placement
[15:08:34] [PASSED] Init resource in TTM_PL_SYSTEM, set placement flags
[15:08:34] ============= [PASSED] ttm_resource_init_basic =============
[15:08:34] [PASSED] ttm_resource_init_pinned
[15:08:34] [PASSED] ttm_resource_fini_basic
[15:08:34] [PASSED] ttm_resource_manager_init_basic
[15:08:34] [PASSED] ttm_resource_manager_usage_basic
[15:08:34] [PASSED] ttm_resource_manager_set_used_basic
[15:08:34] [PASSED] ttm_sys_man_alloc_basic
[15:08:34] [PASSED] ttm_sys_man_free_basic
[15:08:34] ================== [PASSED] ttm_resource ===================
[15:08:34] =================== ttm_tt (15 subtests) ===================
[15:08:34] ==================== ttm_tt_init_basic  ====================
[15:08:34] [PASSED] Page-aligned size
[15:08:34] [PASSED] Extra pages requested
[15:08:34] ================ [PASSED] ttm_tt_init_basic ================
[15:08:34] [PASSED] ttm_tt_init_misaligned
[15:08:34] [PASSED] ttm_tt_fini_basic
[15:08:34] [PASSED] ttm_tt_fini_sg
[15:08:34] [PASSED] ttm_tt_fini_shmem
[15:08:34] [PASSED] ttm_tt_create_basic
[15:08:34] [PASSED] ttm_tt_create_invalid_bo_type
[15:08:34] [PASSED] ttm_tt_create_ttm_exists
[15:08:34] [PASSED] ttm_tt_create_failed
[15:08:34] [PASSED] ttm_tt_destroy_basic
[15:08:34] [PASSED] ttm_tt_populate_null_ttm
[15:08:34] [PASSED] ttm_tt_populate_populated_ttm
[15:08:34] [PASSED] ttm_tt_unpopulate_basic
[15:08:34] [PASSED] ttm_tt_unpopulate_empty_ttm
[15:08:34] [PASSED] ttm_tt_swapin_basic
[15:08:34] ===================== [PASSED] ttm_tt ======================
[15:08:34] =================== ttm_bo (14 subtests) ===================
[15:08:34] =========== ttm_bo_reserve_optimistic_no_ticket  ===========
[15:08:34] [PASSED] Cannot be interrupted and sleeps
[15:08:34] [PASSED] Cannot be interrupted, locks straight away
[15:08:34] [PASSED] Can be interrupted, sleeps
[15:08:34] ======= [PASSED] ttm_bo_reserve_optimistic_no_ticket =======
[15:08:34] [PASSED] ttm_bo_reserve_locked_no_sleep
[15:08:34] [PASSED] ttm_bo_reserve_no_wait_ticket
[15:08:34] [PASSED] ttm_bo_reserve_double_resv
[15:08:34] [PASSED] ttm_bo_reserve_interrupted
[15:08:34] [PASSED] ttm_bo_reserve_deadlock
[15:08:34] [PASSED] ttm_bo_unreserve_basic
[15:08:34] [PASSED] ttm_bo_unreserve_pinned
[15:08:34] [PASSED] ttm_bo_unreserve_bulk
[15:08:34] [PASSED] ttm_bo_put_basic
[15:08:34] [PASSED] ttm_bo_put_shared_resv
[15:08:34] [PASSED] ttm_bo_pin_basic
[15:08:34] [PASSED] ttm_bo_pin_unpin_resource
[15:08:34] [PASSED] ttm_bo_multiple_pin_one_unpin
[15:08:34] ===================== [PASSED] ttm_bo ======================
[15:08:34] ============== ttm_bo_validate (22 subtests) ===============
[15:08:34] ============== ttm_bo_init_reserved_sys_man  ===============
[15:08:34] [PASSED] Buffer object for userspace
[15:08:34] [PASSED] Kernel buffer object
[15:08:34] [PASSED] Shared buffer object
[15:08:34] ========== [PASSED] ttm_bo_init_reserved_sys_man ===========
[15:08:34] ============== ttm_bo_init_reserved_mock_man  ==============
[15:08:34] [PASSED] Buffer object for userspace
[15:08:34] [PASSED] Kernel buffer object
[15:08:34] [PASSED] Shared buffer object
[15:08:34] ========== [PASSED] ttm_bo_init_reserved_mock_man ==========
[15:08:34] [PASSED] ttm_bo_init_reserved_resv
[15:08:34] ================== ttm_bo_validate_basic  ==================
[15:08:34] [PASSED] Buffer object for userspace
[15:08:34] [PASSED] Kernel buffer object
[15:08:34] [PASSED] Shared buffer object
[15:08:34] ============== [PASSED] ttm_bo_validate_basic ==============
[15:08:34] [PASSED] ttm_bo_validate_invalid_placement
[15:08:34] ============= ttm_bo_validate_same_placement  ==============
[15:08:34] [PASSED] System manager
[15:08:34] [PASSED] VRAM manager
[15:08:34] ========= [PASSED] ttm_bo_validate_same_placement ==========
[15:08:34] [PASSED] ttm_bo_validate_failed_alloc
[15:08:34] [PASSED] ttm_bo_validate_pinned
[15:08:34] [PASSED] ttm_bo_validate_busy_placement
[15:08:34] ================ ttm_bo_validate_multihop  =================
[15:08:34] [PASSED] Buffer object for userspace
[15:08:34] [PASSED] Kernel buffer object
[15:08:34] [PASSED] Shared buffer object
[15:08:34] ============ [PASSED] ttm_bo_validate_multihop =============
[15:08:34] ========== ttm_bo_validate_no_placement_signaled  ==========
[15:08:34] [PASSED] Buffer object in system domain, no page vector
[15:08:34] [PASSED] Buffer object in system domain with an existing page vector
[15:08:34] ====== [PASSED] ttm_bo_validate_no_placement_signaled ======
[15:08:34] ======== ttm_bo_validate_no_placement_not_signaled  ========
[15:08:34] [PASSED] Buffer object for userspace
[15:08:34] [PASSED] Kernel buffer object
[15:08:34] [PASSED] Shared buffer object
[15:08:34] ==== [PASSED] ttm_bo_validate_no_placement_not_signaled ====
[15:08:34] [PASSED] ttm_bo_validate_move_fence_signaled
[15:08:34] ========= ttm_bo_validate_move_fence_not_signaled  =========
[15:08:34] [PASSED] Waits for GPU
[15:08:34] [PASSED] Tries to lock straight away
[15:08:34] ===== [PASSED] ttm_bo_validate_move_fence_not_signaled =====
[15:08:34] [PASSED] ttm_bo_validate_swapout
[15:08:34] [PASSED] ttm_bo_validate_happy_evict
[15:08:34] [PASSED] ttm_bo_validate_all_pinned_evict
[15:08:34] [PASSED] ttm_bo_validate_allowed_only_evict
[15:08:34] [PASSED] ttm_bo_validate_deleted_evict
[15:08:34] [PASSED] ttm_bo_validate_busy_domain_evict
[15:08:34] [PASSED] ttm_bo_validate_evict_gutting
[15:08:34] [PASSED] ttm_bo_validate_recrusive_evict
stty: 'standard input': Inappropriate ioctl for device
[15:08:34] ================= [PASSED] ttm_bo_validate =================
[15:08:34] ============================================================
[15:08:34] Testing complete. Ran 102 tests: passed: 102
[15:08:34] Elapsed time: 9.870s total, 1.586s configuring, 7.617s building, 0.558s running

+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel



^ permalink raw reply	[flat|nested] 54+ messages in thread

* ✗ CI.Build: failure for TTM shrinker helpers and xe buffer object shrinker (rev13)
  2024-11-15 15:01 [PATCH v14 0/8] TTM shrinker helpers and xe buffer object shrinker Thomas Hellström
                   ` (10 preceding siblings ...)
  2024-11-15 15:08 ` ✓ CI.KUnit: success " Patchwork
@ 2024-11-15 15:17 ` Patchwork
  2024-11-16 11:26 ` ✓ CI.Patch_applied: success for TTM shrinker helpers and xe buffer object shrinker (rev14) Patchwork
                   ` (13 subsequent siblings)
  25 siblings, 0 replies; 54+ messages in thread
From: Patchwork @ 2024-11-15 15:17 UTC (permalink / raw)
  To: Thomas Hellström; +Cc: intel-xe

== Series Details ==

Series: TTM shrinker helpers and xe buffer object shrinker (rev13)
URL   : https://patchwork.freedesktop.org/series/131815/
State : failure

== Summary ==

lib/modules/6.12.0-rc7-xe/kernel/crypto/ecrdsa_generic.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/xcbc.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/serpent_generic.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/aria_generic.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/crypto_simd.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/adiantum.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/tcrypt.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/crypto_engine.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/zstd.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/asymmetric_keys/
lib/modules/6.12.0-rc7-xe/kernel/crypto/asymmetric_keys/pkcs7_test_key.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/asymmetric_keys/pkcs8_key_parser.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/des_generic.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/xctr.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/authenc.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/sm4_generic.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/keywrap.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/camellia_generic.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/sm3.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/pcrypt.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/aegis128.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/af_alg.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/algif_aead.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/cmac.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/sm3_generic.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/aes_ti.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/chacha_generic.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/poly1305_generic.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/nhpoly1305.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/crc32_generic.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/essiv.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/ccm.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/wp512.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/streebog_generic.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/authencesn.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/echainiv.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/lrw.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/cryptd.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/crypto_user.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/algif_hash.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/vmac.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/polyval-generic.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/hctr2.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/842.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/pcbc.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/ansi_cprng.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/cast6_generic.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/twofish_common.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/twofish_generic.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/lz4hc.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/blowfish_generic.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/md4.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/chacha20poly1305.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/curve25519-generic.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/lz4.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/rmd160.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/algif_skcipher.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/cast5_generic.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/fcrypt.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/ecdsa_generic.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/sm4.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/cast_common.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/blowfish_common.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/michael_mic.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/async_tx/
lib/modules/6.12.0-rc7-xe/kernel/crypto/async_tx/async_xor.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/async_tx/async_tx.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/async_tx/async_memcpy.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/async_tx/async_pq.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/async_tx/async_raid6_recov.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/algif_rng.ko
lib/modules/6.12.0-rc7-xe/kernel/block/
lib/modules/6.12.0-rc7-xe/kernel/block/bfq.ko
lib/modules/6.12.0-rc7-xe/kernel/block/kyber-iosched.ko
lib/modules/6.12.0-rc7-xe/build
lib/modules/6.12.0-rc7-xe/modules.alias.bin
lib/modules/6.12.0-rc7-xe/modules.builtin
lib/modules/6.12.0-rc7-xe/modules.softdep
lib/modules/6.12.0-rc7-xe/modules.alias
lib/modules/6.12.0-rc7-xe/modules.order
lib/modules/6.12.0-rc7-xe/modules.symbols
lib/modules/6.12.0-rc7-xe/modules.dep.bin
+ mv kernel.tar.gz ..
+ cd ..
+ rm -rf archive
++ date +%s
+ echo -e '\e[0Ksection_end:1731683857:package_x86_64\r\e[0K'
^[[0Ksection_end:1731683857:package_x86_64
^[[0K
++ date +%s
+ echo -e '\e[0Ksection_start:1731683857:build_x86_64_nodebug[collapsed=true]\r\e[0KBuild x86-64 NoDebug'
+ mkdir -p build64-nodebug
^[[0Ksection_start:1731683857:build_x86_64_nodebug[collapsed=true]
^[[0KBuild x86-64 NoDebug
+ KCONFIG_CONFIG=build64-nodebug/.config
+ ./scripts/kconfig/merge_config.sh -m .ci/kernel/kconfig .ci/kernel/nodebug.fragment
Using .ci/kernel/kconfig as base
Merging .ci/kernel/nodebug.fragment
The merge file '.ci/kernel/nodebug.fragment' does not exist.  Exit.
+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel



^ permalink raw reply	[flat|nested] 54+ messages in thread

* ✓ CI.Patch_applied: success for TTM shrinker helpers and xe buffer object shrinker (rev14)
  2024-11-15 15:01 [PATCH v14 0/8] TTM shrinker helpers and xe buffer object shrinker Thomas Hellström
                   ` (11 preceding siblings ...)
  2024-11-15 15:17 ` ✗ CI.Build: failure " Patchwork
@ 2024-11-16 11:26 ` Patchwork
  2024-11-16 11:26 ` ✗ CI.checkpatch: warning " Patchwork
                   ` (12 subsequent siblings)
  25 siblings, 0 replies; 54+ messages in thread
From: Patchwork @ 2024-11-16 11:26 UTC (permalink / raw)
  To: Thomas Hellström; +Cc: intel-xe

== Series Details ==

Series: TTM shrinker helpers and xe buffer object shrinker (rev14)
URL   : https://patchwork.freedesktop.org/series/131815/
State : success

== Summary ==

=== Applying kernel patches on branch 'drm-tip' with base: ===
Base commit: 9a7388467f79 drm-tip: 2024y-11m-16d-00h-00m-45s UTC integration manifest
=== git am output follows ===
Applying: drm/ttm: Balance ttm_resource_cursor_init() and ttm_resource_cursor_fini()
Applying: drm/ttm: Provide a shmem backup implementation
Applying: drm/ttm/pool: Provide a helper to shrink pages
Applying: drm/ttm: Use fault-injection to test error paths
Applying: drm/ttm: Add a macro to perform LRU iteration
Applying: drm/ttm: Add helpers for shrinking
Applying: drm/xe: Add a shrinker for xe bos
Applying: drm/xe: Increase the XE_PL_TT watermark



^ permalink raw reply	[flat|nested] 54+ messages in thread

* ✗ CI.checkpatch: warning for TTM shrinker helpers and xe buffer object shrinker (rev14)
  2024-11-15 15:01 [PATCH v14 0/8] TTM shrinker helpers and xe buffer object shrinker Thomas Hellström
                   ` (12 preceding siblings ...)
  2024-11-16 11:26 ` ✓ CI.Patch_applied: success for TTM shrinker helpers and xe buffer object shrinker (rev14) Patchwork
@ 2024-11-16 11:26 ` Patchwork
  2024-11-16 11:28 ` ✓ CI.KUnit: success " Patchwork
                   ` (11 subsequent siblings)
  25 siblings, 0 replies; 54+ messages in thread
From: Patchwork @ 2024-11-16 11:26 UTC (permalink / raw)
  To: Thomas Hellström; +Cc: intel-xe

== Series Details ==

Series: TTM shrinker helpers and xe buffer object shrinker (rev14)
URL   : https://patchwork.freedesktop.org/series/131815/
State : warning

== Summary ==

+ KERNEL=/kernel
+ git clone https://gitlab.freedesktop.org/drm/maintainer-tools mt
Cloning into 'mt'...
warning: redirecting to https://gitlab.freedesktop.org/drm/maintainer-tools.git/
+ git -C mt rev-list -n1 origin/master
30ab6715fc09baee6cc14cb3c89ad8858688d474
+ cd /kernel
+ git config --global --add safe.directory /kernel
+ git log -n1
commit 619f216a1a83c6330b481081ca4d1f4c1ac7e9e6
Author: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Date:   Fri Nov 15 16:01:20 2024 +0100

    drm/xe: Increase the XE_PL_TT watermark
    
    The XE_PL_TT watermark was set to 50% of system memory.
    The idea behind that was unclear since the net effect is that
    TT memory will be evicted to TTM_PL_SYSTEM memory if that
    watermark is exceeded, requiring PPGTT rebinds and dma
    remapping. But there is no similar watermark for TTM_PL_SYSTEM
    memory.
    
    The TTM functionality that tries to swap out system memory to
    shmem objects if a 50% limit of total system memory is reached
    is orthogonal to this, and with the shrinker added, it's no
    longer in effect.
    
    Replace the 50% TTM_PL_TT limit with a 100% limit, in effect
    allowing all graphics memory to be bound to the device unless it
    has been swapped out by the shrinker.
    
    Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
    Reviewed-by: Matthew Brost <matthew.brost@intel.com>
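
(Illustration only, not code from this series: a minimal C sketch of the
sizing change the commit message above describes, assuming the TT manager
size is derived from si_meminfo(); the function name below is hypothetical.)

	#include <linux/mm.h>	/* si_meminfo() */

	static u64 example_tt_manager_size(void)
	{
		struct sysinfo si;

		si_meminfo(&si);
		/* New behaviour: let the TT manager span all of system memory. */
		return (u64)si.totalram * si.mem_unit;
		/* The old 50% watermark would have halved this value. */
	}
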
+ /mt/dim checkpatch 9a7388467f79fb74c67a2444c5b1add91652f89e drm-intel
8cab497403b8 drm/ttm: Balance ttm_resource_cursor_init() and ttm_resource_cursor_fini()
-:155: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'cursor' - possible side-effects?
#155: FILE: include/drm/ttm/ttm_resource.h:476:
+#define ttm_resource_manager_for_each_res(cursor, res)	\
+	for (res = ttm_resource_manager_first(cursor); res;	\
 	     res = ttm_resource_manager_next(cursor))

-:155: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'res' - possible side-effects?
#155: FILE: include/drm/ttm/ttm_resource.h:476:
+#define ttm_resource_manager_for_each_res(cursor, res)	\
+	for (res = ttm_resource_manager_first(cursor); res;	\
 	     res = ttm_resource_manager_next(cursor))

total: 0 errors, 0 warnings, 2 checks, 114 lines checked
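
(Generic aside, not part of the series: the MACRO_ARG_REUSE checks above fire
because the macro expands 'cursor' and 'res' more than once, so an argument
with side effects would be evaluated repeatedly. A hypothetical standalone
example of that pitfall:)

	#include <stdio.h>

	/* 'a' and 'b' each appear twice in the expansion. */
	#define BAD_MAX(a, b)	((a) > (b) ? (a) : (b))

	int main(void)
	{
		int i = 10;
		int v = BAD_MAX(i++, 5);	/* i++ evaluated twice: v == 11, i == 12 */

		printf("v=%d i=%d\n", v, i);
		return 0;
	}

Iterator macros like the one flagged above are normally called with plain
variables, so the reuse is benign, which is why these remain CHECK-level
notes in the output.
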
a2c12278edc4 drm/ttm: Provide a shmem backup implementation
-:52: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#52: 
new file mode 100644

total: 0 errors, 1 warnings, 0 checks, 286 lines checked
481b5d3db096 drm/ttm/pool: Provide a helper to shrink pages
353772f23041 drm/ttm: Use fault-injection to test error paths
e87c1d0d723f drm/ttm: Add a macro to perform LRU iteration
-:11: WARNING:COMMIT_LOG_LONG_LINE: Prefer a maximum 75 chars per line (possible unwrapped commit description?)
#11: 
https://lore.kernel.org/linux-mm/b7491378-defd-4f1c-31e2-29e4c77e2d67@amd.com/T/#ma918844aa8a6efe8768fdcda0c6590d5c93850c9

-:253: WARNING:TABSTOP: Statements should start on a tabstop
#253: FILE: include/drm/ttm/ttm_bo.h:508:
+	     if (_T) {ttm_bo_lru_cursor_fini(_T); },

-:253: ERROR:TRAILING_STATEMENTS: trailing statements should be on next line
#253: FILE: include/drm/ttm/ttm_bo.h:508:
+	     if (_T) {ttm_bo_lru_cursor_fini(_T); },

-:253: WARNING:BRACES: braces {} are not necessary for single statement blocks
#253: FILE: include/drm/ttm/ttm_bo.h:508:
+	     if (_T) {ttm_bo_lru_cursor_fini(_T); },

-:279: ERROR:COMPLEX_MACRO: Macros with complex values should be enclosed in parentheses
#279: FILE: include/drm/ttm/ttm_bo.h:534:
+#define ttm_bo_lru_for_each_reserved_guarded(_cursor, _man, _ctx, _bo)	\
+	scoped_guard(ttm_bo_lru_cursor, _cursor, _man, _ctx)		\
+		for ((_bo) = ttm_bo_lru_cursor_first(_cursor); (_bo);	\
+		     (_bo) = ttm_bo_lru_cursor_next(_cursor))

-:279: CHECK:MACRO_ARG_REUSE: Macro argument reuse '_cursor' - possible side-effects?
#279: FILE: include/drm/ttm/ttm_bo.h:534:
+#define ttm_bo_lru_for_each_reserved_guarded(_cursor, _man, _ctx, _bo)	\
+	scoped_guard(ttm_bo_lru_cursor, _cursor, _man, _ctx)		\
+		for ((_bo) = ttm_bo_lru_cursor_first(_cursor); (_bo);	\
+		     (_bo) = ttm_bo_lru_cursor_next(_cursor))

-:279: CHECK:MACRO_ARG_REUSE: Macro argument reuse '_bo' - possible side-effects?
#279: FILE: include/drm/ttm/ttm_bo.h:534:
+#define ttm_bo_lru_for_each_reserved_guarded(_cursor, _man, _ctx, _bo)	\
+	scoped_guard(ttm_bo_lru_cursor, _cursor, _man, _ctx)		\
+		for ((_bo) = ttm_bo_lru_cursor_first(_cursor); (_bo);	\
+		     (_bo) = ttm_bo_lru_cursor_next(_cursor))

total: 2 errors, 3 warnings, 2 checks, 233 lines checked
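
(Generic aside, not part of the series: the COMPLEX_MACRO error above is
checkpatch's rule that value-like macros be wrapped in parentheses so the
expansion cannot interact with surrounding operators. A hypothetical example
of what the rule guards against:)

	#define DOUBLE_BAD(x)	(x) + (x)	/* complex value, not parenthesized */
	#define DOUBLE_OK(x)	((x) + (x))	/* whole expansion parenthesized    */

	int a = DOUBLE_BAD(2) * 3;	/* expands to (2) + (2) * 3  ==  8 */
	int b = DOUBLE_OK(2) * 3;	/* expands to ((2) + (2)) * 3 == 12 */

A for-each style macro such as ttm_bo_lru_for_each_reserved_guarded() expands
to a statement rather than a value and cannot be parenthesized that way, so
reports like this one are typically reviewed rather than mechanically fixed.
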
9db31d86242d drm/ttm: Add helpers for shrinking
9199a0daba49 drm/xe: Add a shrinker for xe bos
-:540: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#540: 
new file mode 100644

total: 0 errors, 1 warnings, 0 checks, 705 lines checked
619f216a1a83 drm/xe: Increase the XE_PL_TT watermark



^ permalink raw reply	[flat|nested] 54+ messages in thread

* ✓ CI.KUnit: success for TTM shrinker helpers and xe buffer object shrinker (rev14)
  2024-11-15 15:01 [PATCH v14 0/8] TTM shrinker helpers and xe buffer object shrinker Thomas Hellström
                   ` (13 preceding siblings ...)
  2024-11-16 11:26 ` ✗ CI.checkpatch: warning " Patchwork
@ 2024-11-16 11:28 ` Patchwork
  2024-11-16 11:46 ` ✓ CI.Build: " Patchwork
                   ` (10 subsequent siblings)
  25 siblings, 0 replies; 54+ messages in thread
From: Patchwork @ 2024-11-16 11:28 UTC (permalink / raw)
  To: Thomas Hellström; +Cc: intel-xe

== Series Details ==

Series: TTM shrinker helpers and xe buffer object shrinker (rev14)
URL   : https://patchwork.freedesktop.org/series/131815/
State : success

== Summary ==

+ trap cleanup EXIT
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/xe/.kunitconfig
[11:26:55] Configuring KUnit Kernel ...
Generating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[11:26:59] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json ARCH=um O=.kunit --jobs=48
../lib/iomap.c:156:5: warning: no previous prototype for ‘ioread64_lo_hi’ [-Wmissing-prototypes]
  156 | u64 ioread64_lo_hi(const void __iomem *addr)
      |     ^~~~~~~~~~~~~~
../lib/iomap.c:163:5: warning: no previous prototype for ‘ioread64_hi_lo’ [-Wmissing-prototypes]
  163 | u64 ioread64_hi_lo(const void __iomem *addr)
      |     ^~~~~~~~~~~~~~
../lib/iomap.c:170:5: warning: no previous prototype for ‘ioread64be_lo_hi’ [-Wmissing-prototypes]
  170 | u64 ioread64be_lo_hi(const void __iomem *addr)
      |     ^~~~~~~~~~~~~~~~
../lib/iomap.c:178:5: warning: no previous prototype for ‘ioread64be_hi_lo’ [-Wmissing-prototypes]
  178 | u64 ioread64be_hi_lo(const void __iomem *addr)
      |     ^~~~~~~~~~~~~~~~
../lib/iomap.c:264:6: warning: no previous prototype for ‘iowrite64_lo_hi’ [-Wmissing-prototypes]
  264 | void iowrite64_lo_hi(u64 val, void __iomem *addr)
      |      ^~~~~~~~~~~~~~~
../lib/iomap.c:272:6: warning: no previous prototype for ‘iowrite64_hi_lo’ [-Wmissing-prototypes]
  272 | void iowrite64_hi_lo(u64 val, void __iomem *addr)
      |      ^~~~~~~~~~~~~~~
../lib/iomap.c:280:6: warning: no previous prototype for ‘iowrite64be_lo_hi’ [-Wmissing-prototypes]
  280 | void iowrite64be_lo_hi(u64 val, void __iomem *addr)
      |      ^~~~~~~~~~~~~~~~~
../lib/iomap.c:288:6: warning: no previous prototype for ‘iowrite64be_hi_lo’ [-Wmissing-prototypes]
  288 | void iowrite64be_hi_lo(u64 val, void __iomem *addr)
      |      ^~~~~~~~~~~~~~~~~

[11:27:27] Starting KUnit Kernel (1/1)...
[11:27:27] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[11:27:28] =================== guc_dbm (7 subtests) ===================
[11:27:28] [PASSED] test_empty
[11:27:28] [PASSED] test_default
[11:27:28] ======================== test_size  ========================
[11:27:28] [PASSED] 4
[11:27:28] [PASSED] 8
[11:27:28] [PASSED] 32
[11:27:28] [PASSED] 256
[11:27:28] ==================== [PASSED] test_size ====================
[11:27:28] ======================= test_reuse  ========================
[11:27:28] [PASSED] 4
[11:27:28] [PASSED] 8
[11:27:28] [PASSED] 32
[11:27:28] [PASSED] 256
[11:27:28] =================== [PASSED] test_reuse ====================
[11:27:28] =================== test_range_overlap  ====================
[11:27:28] [PASSED] 4
[11:27:28] [PASSED] 8
[11:27:28] [PASSED] 32
[11:27:28] [PASSED] 256
[11:27:28] =============== [PASSED] test_range_overlap ================
[11:27:28] =================== test_range_compact  ====================
[11:27:28] [PASSED] 4
[11:27:28] [PASSED] 8
[11:27:28] [PASSED] 32
[11:27:28] [PASSED] 256
[11:27:28] =============== [PASSED] test_range_compact ================
[11:27:28] ==================== test_range_spare  =====================
[11:27:28] [PASSED] 4
[11:27:28] [PASSED] 8
[11:27:28] [PASSED] 32
[11:27:28] [PASSED] 256
[11:27:28] ================ [PASSED] test_range_spare =================
[11:27:28] ===================== [PASSED] guc_dbm =====================
[11:27:28] =================== guc_idm (6 subtests) ===================
[11:27:28] [PASSED] bad_init
[11:27:28] [PASSED] no_init
[11:27:28] [PASSED] init_fini
[11:27:28] [PASSED] check_used
[11:27:28] [PASSED] check_quota
[11:27:28] [PASSED] check_all
[11:27:28] ===================== [PASSED] guc_idm =====================
[11:27:28] ================== no_relay (3 subtests) ===================
[11:27:28] [PASSED] xe_drops_guc2pf_if_not_ready
[11:27:28] [PASSED] xe_drops_guc2vf_if_not_ready
[11:27:28] [PASSED] xe_rejects_send_if_not_ready
[11:27:28] ==================== [PASSED] no_relay =====================
[11:27:28] ================== pf_relay (14 subtests) ==================
[11:27:28] [PASSED] pf_rejects_guc2pf_too_short
[11:27:28] [PASSED] pf_rejects_guc2pf_too_long
[11:27:28] [PASSED] pf_rejects_guc2pf_no_payload
[11:27:28] [PASSED] pf_fails_no_payload
[11:27:28] [PASSED] pf_fails_bad_origin
[11:27:28] [PASSED] pf_fails_bad_type
[11:27:28] [PASSED] pf_txn_reports_error
[11:27:28] [PASSED] pf_txn_sends_pf2guc
[11:27:28] [PASSED] pf_sends_pf2guc
[11:27:28] [SKIPPED] pf_loopback_nop
[11:27:28] [SKIPPED] pf_loopback_echo
[11:27:28] [SKIPPED] pf_loopback_fail
[11:27:28] [SKIPPED] pf_loopback_busy
[11:27:28] [SKIPPED] pf_loopback_retry
[11:27:28] ==================== [PASSED] pf_relay =====================
[11:27:28] ================== vf_relay (3 subtests) ===================
[11:27:28] [PASSED] vf_rejects_guc2vf_too_short
[11:27:28] [PASSED] vf_rejects_guc2vf_too_long
[11:27:28] [PASSED] vf_rejects_guc2vf_no_payload
[11:27:28] ==================== [PASSED] vf_relay =====================
[11:27:28] ================= pf_service (11 subtests) =================
[11:27:28] [PASSED] pf_negotiate_any
[11:27:28] [PASSED] pf_negotiate_base_match
[11:27:28] [PASSED] pf_negotiate_base_newer
[11:27:28] [PASSED] pf_negotiate_base_next
[11:27:28] [SKIPPED] pf_negotiate_base_older
[11:27:28] [PASSED] pf_negotiate_base_prev
[11:27:28] [PASSED] pf_negotiate_latest_match
[11:27:28] [PASSED] pf_negotiate_latest_newer
[11:27:28] [PASSED] pf_negotiate_latest_next
[11:27:28] [SKIPPED] pf_negotiate_latest_older
[11:27:28] [SKIPPED] pf_negotiate_latest_prev
[11:27:28] =================== [PASSED] pf_service ====================
[11:27:28] ===================== lmtt (1 subtest) =====================
[11:27:28] ======================== test_ops  =========================
[11:27:28] [PASSED] 2-level
[11:27:28] [PASSED] multi-level
[11:27:28] ==================== [PASSED] test_ops =====================
[11:27:28] ====================== [PASSED] lmtt =======================
[11:27:28] =================== xe_mocs (2 subtests) ===================
[11:27:28] ================ xe_live_mocs_kernel_kunit  ================
[11:27:28] =========== [SKIPPED] xe_live_mocs_kernel_kunit ============
[11:27:28] ================ xe_live_mocs_reset_kunit  =================
[11:27:28] ============ [SKIPPED] xe_live_mocs_reset_kunit ============
[11:27:28] ==================== [SKIPPED] xe_mocs =====================
[11:27:28] ================= xe_migrate (2 subtests) ==================
[11:27:28] ================= xe_migrate_sanity_kunit  =================
[11:27:28] ============ [SKIPPED] xe_migrate_sanity_kunit =============
[11:27:28] ================== xe_validate_ccs_kunit  ==================
[11:27:28] ============= [SKIPPED] xe_validate_ccs_kunit ==============
[11:27:28] =================== [SKIPPED] xe_migrate ===================
[11:27:28] ================== xe_dma_buf (1 subtest) ==================
[11:27:28] ==================== xe_dma_buf_kunit  =====================
[11:27:28] ================ [SKIPPED] xe_dma_buf_kunit ================
[11:27:28] =================== [SKIPPED] xe_dma_buf ===================
[11:27:28] ==================== xe_bo (3 subtests) ====================
[11:27:28] ================== xe_ccs_migrate_kunit  ===================
[11:27:28] ============== [SKIPPED] xe_ccs_migrate_kunit ==============
[11:27:28] ==================== xe_bo_evict_kunit  ====================
[11:27:28] =============== [SKIPPED] xe_bo_evict_kunit ================
[11:27:28] =================== xe_bo_shrink_kunit  ====================
[11:27:28] =============== [SKIPPED] xe_bo_shrink_kunit ===============
[11:27:28] ===================== [SKIPPED] xe_bo ======================
[11:27:28] ==================== args (11 subtests) ====================
[11:27:28] [PASSED] count_args_test
[11:27:28] [PASSED] call_args_example
[11:27:28] [PASSED] call_args_test
[11:27:28] [PASSED] drop_first_arg_example
[11:27:28] [PASSED] drop_first_arg_test
[11:27:28] [PASSED] first_arg_example
[11:27:28] [PASSED] first_arg_test
[11:27:28] [PASSED] last_arg_example
[11:27:28] [PASSED] last_arg_test
[11:27:28] [PASSED] pick_arg_example
[11:27:28] [PASSED] sep_comma_example
stty: 'standard input': Inappropriate ioctl for device
[11:27:28] ====================== [PASSED] args =======================
[11:27:28] =================== xe_pci (2 subtests) ====================
[11:27:28] [PASSED] xe_gmdid_graphics_ip
[11:27:28] [PASSED] xe_gmdid_media_ip
[11:27:28] ===================== [PASSED] xe_pci ======================
[11:27:28] =================== xe_rtp (2 subtests) ====================
[11:27:28] =============== xe_rtp_process_to_sr_tests  ================
[11:27:28] [PASSED] coalesce-same-reg
[11:27:28] [PASSED] no-match-no-add
[11:27:28] [PASSED] match-or
[11:27:28] [PASSED] match-or-xfail
[11:27:28] [PASSED] no-match-no-add-multiple-rules
[11:27:28] [PASSED] two-regs-two-entries
[11:27:28] [PASSED] clr-one-set-other
[11:27:28] [PASSED] set-field
[11:27:28] [PASSED] conflict-duplicate
[11:27:28] [PASSED] conflict-not-disjoint
[11:27:28] [PASSED] conflict-reg-type
[11:27:28] =========== [PASSED] xe_rtp_process_to_sr_tests ============
[11:27:28] ================== xe_rtp_process_tests  ===================
[11:27:28] [PASSED] active1
[11:27:28] [PASSED] active2
[11:27:28] [PASSED] active-inactive
[11:27:28] [PASSED] inactive-active
[11:27:28] [PASSED] inactive-1st_or_active-inactive
[11:27:28] [PASSED] inactive-2nd_or_active-inactive
[11:27:28] [PASSED] inactive-last_or_active-inactive
[11:27:28] [PASSED] inactive-no_or_active-inactive
[11:27:28] ============== [PASSED] xe_rtp_process_tests ===============
[11:27:28] ===================== [PASSED] xe_rtp ======================
[11:27:28] ==================== xe_wa (1 subtest) =====================
[11:27:28] ======================== xe_wa_gt  =========================
[11:27:28] [PASSED] TIGERLAKE (B0)
[11:27:28] [PASSED] DG1 (A0)
[11:27:28] [PASSED] DG1 (B0)
[11:27:28] [PASSED] ALDERLAKE_S (A0)
[11:27:28] [PASSED] ALDERLAKE_S (B0)
[11:27:28] [PASSED] ALDERLAKE_S (C0)
[11:27:28] [PASSED] ALDERLAKE_S (D0)
[11:27:28] [PASSED] ALDERLAKE_P (A0)
[11:27:28] [PASSED] ALDERLAKE_P (B0)
[11:27:28] [PASSED] ALDERLAKE_P (C0)
[11:27:28] [PASSED] ALDERLAKE_S_RPLS (D0)
[11:27:28] [PASSED] ALDERLAKE_P_RPLU (E0)
[11:27:28] [PASSED] DG2_G10 (C0)
[11:27:28] [PASSED] DG2_G11 (B1)
[11:27:28] [PASSED] DG2_G12 (A1)
[11:27:28] [PASSED] METEORLAKE (g:A0, m:A0)
[11:27:28] [PASSED] METEORLAKE (g:A0, m:A0)
[11:27:28] [PASSED] METEORLAKE (g:A0, m:A0)
[11:27:28] [PASSED] LUNARLAKE (g:A0, m:A0)
[11:27:28] [PASSED] LUNARLAKE (g:B0, m:A0)
[11:27:28] [PASSED] BATTLEMAGE (g:A0, m:A1)
[11:27:28] ==================== [PASSED] xe_wa_gt =====================
[11:27:28] ====================== [PASSED] xe_wa ======================
[11:27:28] ============================================================
[11:27:28] Testing complete. Ran 122 tests: passed: 106, skipped: 16
[11:27:28] Elapsed time: 32.860s total, 4.429s configuring, 28.163s building, 0.223s running

+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/tests/.kunitconfig
[11:27:28] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[11:27:30] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json ARCH=um O=.kunit --jobs=48
../lib/iomap.c:156:5: warning: no previous prototype for ‘ioread64_lo_hi’ [-Wmissing-prototypes]
  156 | u64 ioread64_lo_hi(const void __iomem *addr)
      |     ^~~~~~~~~~~~~~
../lib/iomap.c:163:5: warning: no previous prototype for ‘ioread64_hi_lo’ [-Wmissing-prototypes]
  163 | u64 ioread64_hi_lo(const void __iomem *addr)
      |     ^~~~~~~~~~~~~~
../lib/iomap.c:170:5: warning: no previous prototype for ‘ioread64be_lo_hi’ [-Wmissing-prototypes]
  170 | u64 ioread64be_lo_hi(const void __iomem *addr)
      |     ^~~~~~~~~~~~~~~~
../lib/iomap.c:178:5: warning: no previous prototype for ‘ioread64be_hi_lo’ [-Wmissing-prototypes]
  178 | u64 ioread64be_hi_lo(const void __iomem *addr)
      |     ^~~~~~~~~~~~~~~~
../lib/iomap.c:264:6: warning: no previous prototype for ‘iowrite64_lo_hi’ [-Wmissing-prototypes]
  264 | void iowrite64_lo_hi(u64 val, void __iomem *addr)
      |      ^~~~~~~~~~~~~~~
../lib/iomap.c:272:6: warning: no previous prototype for ‘iowrite64_hi_lo’ [-Wmissing-prototypes]
  272 | void iowrite64_hi_lo(u64 val, void __iomem *addr)
      |      ^~~~~~~~~~~~~~~
../lib/iomap.c:280:6: warning: no previous prototype for ‘iowrite64be_lo_hi’ [-Wmissing-prototypes]
  280 | void iowrite64be_lo_hi(u64 val, void __iomem *addr)
      |      ^~~~~~~~~~~~~~~~~
../lib/iomap.c:288:6: warning: no previous prototype for ‘iowrite64be_hi_lo’ [-Wmissing-prototypes]
  288 | void iowrite64be_hi_lo(u64 val, void __iomem *addr)
      |      ^~~~~~~~~~~~~~~~~

[11:27:52] Starting KUnit Kernel (1/1)...
[11:27:52] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[11:27:52] ================== drm_buddy (7 subtests) ==================
[11:27:52] [PASSED] drm_test_buddy_alloc_limit
[11:27:52] [PASSED] drm_test_buddy_alloc_optimistic
[11:27:52] [PASSED] drm_test_buddy_alloc_pessimistic
[11:27:52] [PASSED] drm_test_buddy_alloc_pathological
[11:27:52] [PASSED] drm_test_buddy_alloc_contiguous
[11:27:52] [PASSED] drm_test_buddy_alloc_clear
[11:27:52] [PASSED] drm_test_buddy_alloc_range_bias
[11:27:52] ==================== [PASSED] drm_buddy ====================
[11:27:52] ============= drm_cmdline_parser (40 subtests) =============
[11:27:52] [PASSED] drm_test_cmdline_force_d_only
[11:27:52] [PASSED] drm_test_cmdline_force_D_only_dvi
[11:27:52] [PASSED] drm_test_cmdline_force_D_only_hdmi
[11:27:52] [PASSED] drm_test_cmdline_force_D_only_not_digital
[11:27:52] [PASSED] drm_test_cmdline_force_e_only
[11:27:52] [PASSED] drm_test_cmdline_res
[11:27:52] [PASSED] drm_test_cmdline_res_vesa
[11:27:52] [PASSED] drm_test_cmdline_res_vesa_rblank
[11:27:52] [PASSED] drm_test_cmdline_res_rblank
[11:27:52] [PASSED] drm_test_cmdline_res_bpp
[11:27:52] [PASSED] drm_test_cmdline_res_refresh
[11:27:52] [PASSED] drm_test_cmdline_res_bpp_refresh
[11:27:52] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced
[11:27:52] [PASSED] drm_test_cmdline_res_bpp_refresh_margins
[11:27:52] [PASSED] drm_test_cmdline_res_bpp_refresh_force_off
[11:27:52] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on
[11:27:52] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_analog
[11:27:52] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_digital
[11:27:52] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced_margins_force_on
[11:27:52] [PASSED] drm_test_cmdline_res_margins_force_on
[11:27:52] [PASSED] drm_test_cmdline_res_vesa_margins
[11:27:52] [PASSED] drm_test_cmdline_name
[11:27:52] [PASSED] drm_test_cmdline_name_bpp
[11:27:52] [PASSED] drm_test_cmdline_name_option
[11:27:52] [PASSED] drm_test_cmdline_name_bpp_option
[11:27:52] [PASSED] drm_test_cmdline_rotate_0
[11:27:52] [PASSED] drm_test_cmdline_rotate_90
[11:27:52] [PASSED] drm_test_cmdline_rotate_180
[11:27:52] [PASSED] drm_test_cmdline_rotate_270
[11:27:52] [PASSED] drm_test_cmdline_hmirror
[11:27:52] [PASSED] drm_test_cmdline_vmirror
[11:27:52] [PASSED] drm_test_cmdline_margin_options
[11:27:52] [PASSED] drm_test_cmdline_multiple_options
[11:27:52] [PASSED] drm_test_cmdline_bpp_extra_and_option
[11:27:52] [PASSED] drm_test_cmdline_extra_and_option
[11:27:52] [PASSED] drm_test_cmdline_freestanding_options
[11:27:52] [PASSED] drm_test_cmdline_freestanding_force_e_and_options
[11:27:52] [PASSED] drm_test_cmdline_panel_orientation
[11:27:52] ================ drm_test_cmdline_invalid  =================
[11:27:52] [PASSED] margin_only
[11:27:52] [PASSED] interlace_only
[11:27:52] [PASSED] res_missing_x
[11:27:52] [PASSED] res_missing_y
[11:27:52] [PASSED] res_bad_y
[11:27:52] [PASSED] res_missing_y_bpp
[11:27:52] [PASSED] res_bad_bpp
[11:27:52] [PASSED] res_bad_refresh
[11:27:52] [PASSED] res_bpp_refresh_force_on_off
[11:27:52] [PASSED] res_invalid_mode
[11:27:52] [PASSED] res_bpp_wrong_place_mode
[11:27:52] [PASSED] name_bpp_refresh
[11:27:52] [PASSED] name_refresh
[11:27:52] [PASSED] name_refresh_wrong_mode
[11:27:52] [PASSED] name_refresh_invalid_mode
[11:27:52] [PASSED] rotate_multiple
[11:27:52] [PASSED] rotate_invalid_val
[11:27:52] [PASSED] rotate_truncated
[11:27:52] [PASSED] invalid_option
[11:27:52] [PASSED] invalid_tv_option
[11:27:52] [PASSED] truncated_tv_option
[11:27:52] ============ [PASSED] drm_test_cmdline_invalid =============
[11:27:52] =============== drm_test_cmdline_tv_options  ===============
[11:27:52] [PASSED] NTSC
[11:27:52] [PASSED] NTSC_443
[11:27:52] [PASSED] NTSC_J
[11:27:52] [PASSED] PAL
[11:27:52] [PASSED] PAL_M
[11:27:52] [PASSED] PAL_N
[11:27:52] [PASSED] SECAM
[11:27:52] [PASSED] MONO_525
[11:27:52] [PASSED] MONO_625
[11:27:52] =========== [PASSED] drm_test_cmdline_tv_options ===========
[11:27:52] =============== [PASSED] drm_cmdline_parser ================
[11:27:52] ========== drmm_connector_hdmi_init (19 subtests) ==========
[11:27:52] [PASSED] drm_test_connector_hdmi_init_valid
[11:27:52] [PASSED] drm_test_connector_hdmi_init_bpc_8
[11:27:52] [PASSED] drm_test_connector_hdmi_init_bpc_10
[11:27:52] [PASSED] drm_test_connector_hdmi_init_bpc_12
[11:27:52] [PASSED] drm_test_connector_hdmi_init_bpc_invalid
[11:27:52] [PASSED] drm_test_connector_hdmi_init_bpc_null
[11:27:52] [PASSED] drm_test_connector_hdmi_init_formats_empty
[11:27:52] [PASSED] drm_test_connector_hdmi_init_formats_no_rgb
[11:27:52] [PASSED] drm_test_connector_hdmi_init_null_ddc
[11:27:52] [PASSED] drm_test_connector_hdmi_init_null_product
[11:27:52] [PASSED] drm_test_connector_hdmi_init_null_vendor
[11:27:52] [PASSED] drm_test_connector_hdmi_init_product_length_exact
[11:27:52] [PASSED] drm_test_connector_hdmi_init_product_length_too_long
[11:27:52] [PASSED] drm_test_connector_hdmi_init_product_valid
[11:27:52] [PASSED] drm_test_connector_hdmi_init_vendor_length_exact
[11:27:52] [PASSED] drm_test_connector_hdmi_init_vendor_length_too_long
[11:27:52] [PASSED] drm_test_connector_hdmi_init_vendor_valid
[11:27:52] ========= drm_test_connector_hdmi_init_type_valid  =========
[11:27:52] [PASSED] HDMI-A
[11:27:52] [PASSED] HDMI-B
[11:27:52] ===== [PASSED] drm_test_connector_hdmi_init_type_valid =====
[11:27:52] ======== drm_test_connector_hdmi_init_type_invalid  ========
[11:27:52] [PASSED] Unknown
[11:27:52] [PASSED] VGA
[11:27:52] [PASSED] DVI-I
[11:27:52] [PASSED] DVI-D
[11:27:52] [PASSED] DVI-A
[11:27:52] [PASSED] Composite
[11:27:52] [PASSED] SVIDEO
[11:27:52] [PASSED] LVDS
[11:27:52] [PASSED] Component
[11:27:52] [PASSED] DIN
[11:27:52] [PASSED] DP
[11:27:52] [PASSED] TV
[11:27:52] [PASSED] eDP
[11:27:52] [PASSED] Virtual
[11:27:52] [PASSED] DSI
[11:27:52] [PASSED] DPI
[11:27:52] [PASSED] Writeback
[11:27:52] [PASSED] SPI
[11:27:52] [PASSED] USB
[11:27:52] ==== [PASSED] drm_test_connector_hdmi_init_type_invalid ====
[11:27:52] ============ [PASSED] drmm_connector_hdmi_init =============
[11:27:52] ============= drmm_connector_init (3 subtests) =============
[11:27:52] [PASSED] drm_test_drmm_connector_init
[11:27:52] [PASSED] drm_test_drmm_connector_init_null_ddc
[11:27:52] ========= drm_test_drmm_connector_init_type_valid  =========
[11:27:52] [PASSED] Unknown
[11:27:52] [PASSED] VGA
[11:27:52] [PASSED] DVI-I
[11:27:52] [PASSED] DVI-D
[11:27:52] [PASSED] DVI-A
[11:27:52] [PASSED] Composite
[11:27:52] [PASSED] SVIDEO
[11:27:52] [PASSED] LVDS
[11:27:52] [PASSED] Component
[11:27:52] [PASSED] DIN
[11:27:52] [PASSED] DP
[11:27:52] [PASSED] HDMI-A
[11:27:52] [PASSED] HDMI-B
[11:27:52] [PASSED] TV
[11:27:52] [PASSED] eDP
[11:27:52] [PASSED] Virtual
[11:27:52] [PASSED] DSI
[11:27:52] [PASSED] DPI
[11:27:52] [PASSED] Writeback
[11:27:52] [PASSED] SPI
[11:27:52] [PASSED] USB
[11:27:52] ===== [PASSED] drm_test_drmm_connector_init_type_valid =====
[11:27:52] =============== [PASSED] drmm_connector_init ===============
[11:27:52] = drm_connector_attach_broadcast_rgb_property (2 subtests) =
[11:27:52] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property
[11:27:52] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property_hdmi_connector
[11:27:52] === [PASSED] drm_connector_attach_broadcast_rgb_property ===
[11:27:52] ========== drm_get_tv_mode_from_name (2 subtests) ==========
[11:27:52] ========== drm_test_get_tv_mode_from_name_valid  ===========
[11:27:52] [PASSED] NTSC
[11:27:52] [PASSED] NTSC-443
[11:27:52] [PASSED] NTSC-J
[11:27:52] [PASSED] PAL
[11:27:52] [PASSED] PAL-M
[11:27:52] [PASSED] PAL-N
[11:27:52] [PASSED] SECAM
[11:27:52] [PASSED] Mono
[11:27:52] ====== [PASSED] drm_test_get_tv_mode_from_name_valid =======
[11:27:52] [PASSED] drm_test_get_tv_mode_from_name_truncated
[11:27:52] ============ [PASSED] drm_get_tv_mode_from_name ============
[11:27:52] = drm_test_connector_hdmi_compute_mode_clock (12 subtests) =
[11:27:52] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb
[11:27:52] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc
[11:27:52] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc_vic_1
[11:27:52] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc
[11:27:52] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc_vic_1
[11:27:52] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_double
[11:27:52] = drm_test_connector_hdmi_compute_mode_clock_yuv420_valid  =
[11:27:52] [PASSED] VIC 96
[11:27:52] [PASSED] VIC 97
[11:27:52] [PASSED] VIC 101
[11:27:52] [PASSED] VIC 102
[11:27:52] [PASSED] VIC 106
[11:27:52] [PASSED] VIC 107
[11:27:52] === [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_valid ===
[11:27:52] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_10_bpc
[11:27:52] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_12_bpc
[11:27:52] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_8_bpc
[11:27:52] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_10_bpc
[11:27:52] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_12_bpc
[11:27:52] === [PASSED] drm_test_connector_hdmi_compute_mode_clock ====
[11:27:52] == drm_hdmi_connector_get_broadcast_rgb_name (2 subtests) ==
[11:27:52] === drm_test_drm_hdmi_connector_get_broadcast_rgb_name  ====
[11:27:52] [PASSED] Automatic
[11:27:52] [PASSED] Full
[11:27:52] [PASSED] Limited 16:235
[11:27:52] === [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name ===
[11:27:52] [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name_invalid
[11:27:52] ==== [PASSED] drm_hdmi_connector_get_broadcast_rgb_name ====
[11:27:52] == drm_hdmi_connector_get_output_format_name (2 subtests) ==
[11:27:52] === drm_test_drm_hdmi_connector_get_output_format_name  ====
[11:27:52] [PASSED] RGB
[11:27:52] [PASSED] YUV 4:2:0
[11:27:52] [PASSED] YUV 4:2:2
[11:27:52] [PASSED] YUV 4:4:4
[11:27:52] === [PASSED] drm_test_drm_hdmi_connector_get_output_format_name ===
[11:27:52] [PASSED] drm_test_drm_hdmi_connector_get_output_format_name_invalid
[11:27:52] ==== [PASSED] drm_hdmi_connector_get_output_format_name ====
[11:27:52] ============= drm_damage_helper (21 subtests) ==============
[11:27:52] [PASSED] drm_test_damage_iter_no_damage
[11:27:52] [PASSED] drm_test_damage_iter_no_damage_fractional_src
[11:27:52] [PASSED] drm_test_damage_iter_no_damage_src_moved
[11:27:52] [PASSED] drm_test_damage_iter_no_damage_fractional_src_moved
[11:27:52] [PASSED] drm_test_damage_iter_no_damage_not_visible
[11:27:52] [PASSED] drm_test_damage_iter_no_damage_no_crtc
[11:27:52] [PASSED] drm_test_damage_iter_no_damage_no_fb
[11:27:52] [PASSED] drm_test_damage_iter_simple_damage
[11:27:52] [PASSED] drm_test_damage_iter_single_damage
[11:27:52] [PASSED] drm_test_damage_iter_single_damage_intersect_src
[11:27:52] [PASSED] drm_test_damage_iter_single_damage_outside_src
[11:27:52] [PASSED] drm_test_damage_iter_single_damage_fractional_src
[11:27:52] [PASSED] drm_test_damage_iter_single_damage_intersect_fractional_src
[11:27:52] [PASSED] drm_test_damage_iter_single_damage_outside_fractional_src
[11:27:52] [PASSED] drm_test_damage_iter_single_damage_src_moved
[11:27:52] [PASSED] drm_test_damage_iter_single_damage_fractional_src_moved
[11:27:52] [PASSED] drm_test_damage_iter_damage
[11:27:52] [PASSED] drm_test_damage_iter_damage_one_intersect
[11:27:52] [PASSED] drm_test_damage_iter_damage_one_outside
[11:27:52] [PASSED] drm_test_damage_iter_damage_src_moved
[11:27:52] [PASSED] drm_test_damage_iter_damage_not_visible
[11:27:52] ================ [PASSED] drm_damage_helper ================
[11:27:52] ============== drm_dp_mst_helper (3 subtests) ==============
[11:27:52] ============== drm_test_dp_mst_calc_pbn_mode  ==============
[11:27:52] [PASSED] Clock 154000 BPP 30 DSC disabled
[11:27:52] [PASSED] Clock 234000 BPP 30 DSC disabled
[11:27:52] [PASSED] Clock 297000 BPP 24 DSC disabled
[11:27:52] [PASSED] Clock 332880 BPP 24 DSC enabled
[11:27:52] [PASSED] Clock 324540 BPP 24 DSC enabled
[11:27:52] ========== [PASSED] drm_test_dp_mst_calc_pbn_mode ==========
[11:27:52] ============== drm_test_dp_mst_calc_pbn_div  ===============
[11:27:52] [PASSED] Link rate 2000000 lane count 4
[11:27:52] [PASSED] Link rate 2000000 lane count 2
[11:27:52] [PASSED] Link rate 2000000 lane count 1
[11:27:52] [PASSED] Link rate 1350000 lane count 4
[11:27:52] [PASSED] Link rate 1350000 lane count 2
[11:27:52] [PASSED] Link rate 1350000 lane count 1
[11:27:52] [PASSED] Link rate 1000000 lane count 4
[11:27:52] [PASSED] Link rate 1000000 lane count 2
[11:27:52] [PASSED] Link rate 1000000 lane count 1
[11:27:52] [PASSED] Link rate 810000 lane count 4
[11:27:52] [PASSED] Link rate 810000 lane count 2
[11:27:52] [PASSED] Link rate 810000 lane count 1
[11:27:52] [PASSED] Link rate 540000 lane count 4
[11:27:52] [PASSED] Link rate 540000 lane count 2
[11:27:52] [PASSED] Link rate 540000 lane count 1
[11:27:52] [PASSED] Link rate 270000 lane count 4
[11:27:52] [PASSED] Link rate 270000 lane count 2
[11:27:52] [PASSED] Link rate 270000 lane count 1
[11:27:52] [PASSED] Link rate 162000 lane count 4
[11:27:52] [PASSED] Link rate 162000 lane count 2
[11:27:52] [PASSED] Link rate 162000 lane count 1
[11:27:52] ========== [PASSED] drm_test_dp_mst_calc_pbn_div ===========
[11:27:52] ========= drm_test_dp_mst_sideband_msg_req_decode  =========
[11:27:52] [PASSED] DP_ENUM_PATH_RESOURCES with port number
[11:27:52] [PASSED] DP_POWER_UP_PHY with port number
[11:27:52] [PASSED] DP_POWER_DOWN_PHY with port number
[11:27:52] [PASSED] DP_ALLOCATE_PAYLOAD with SDP stream sinks
[11:27:52] [PASSED] DP_ALLOCATE_PAYLOAD with port number
[11:27:52] [PASSED] DP_ALLOCATE_PAYLOAD with VCPI
[11:27:52] [PASSED] DP_ALLOCATE_PAYLOAD with PBN
[11:27:52] [PASSED] DP_QUERY_PAYLOAD with port number
[11:27:52] [PASSED] DP_QUERY_PAYLOAD with VCPI
[11:27:52] [PASSED] DP_REMOTE_DPCD_READ with port number
[11:27:52] [PASSED] DP_REMOTE_DPCD_READ with DPCD address
[11:27:52] [PASSED] DP_REMOTE_DPCD_READ with max number of bytes
[11:27:52] [PASSED] DP_REMOTE_DPCD_WRITE with port number
[11:27:52] [PASSED] DP_REMOTE_DPCD_WRITE with DPCD address
[11:27:52] [PASSED] DP_REMOTE_DPCD_WRITE with data array
[11:27:52] [PASSED] DP_REMOTE_I2C_READ with port number
[11:27:52] [PASSED] DP_REMOTE_I2C_READ with I2C device ID
[11:27:52] [PASSED] DP_REMOTE_I2C_READ with transactions array
[11:27:52] [PASSED] DP_REMOTE_I2C_WRITE with port number
[11:27:52] [PASSED] DP_REMOTE_I2C_WRITE with I2C device ID
[11:27:52] [PASSED] DP_REMOTE_I2C_WRITE with data array
[11:27:52] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream ID
[11:27:52] [PASSED] DP_QUERY_STREAM_ENC_STATUS with client ID
[11:27:52] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream event
[11:27:52] [PASSED] DP_QUERY_STREAM_ENC_STATUS with valid stream event
[11:27:52] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream behavior
[11:27:52] [PASSED] DP_QUERY_STREAM_ENC_STATUS with a valid stream behavior
[11:27:52] ===== [PASSED] drm_test_dp_mst_sideband_msg_req_decode =====
[11:27:52] ================ [PASSED] drm_dp_mst_helper ================
[11:27:52] ================== drm_exec (7 subtests) ===================
[11:27:52] [PASSED] sanitycheck
[11:27:52] [PASSED] test_lock
[11:27:52] [PASSED] test_lock_unlock
[11:27:52] [PASSED] test_duplicates
[11:27:52] [PASSED] test_prepare
[11:27:52] [PASSED] test_prepare_array
[11:27:52] [PASSED] test_multiple_loops
[11:27:52] ==================== [PASSED] drm_exec =====================
[11:27:52] =========== drm_format_helper_test (17 subtests) ===========
[11:27:52] ============== drm_test_fb_xrgb8888_to_gray8  ==============
[11:27:52] [PASSED] single_pixel_source_buffer
[11:27:52] [PASSED] single_pixel_clip_rectangle
[11:27:52] [PASSED] well_known_colors
[11:27:52] [PASSED] destination_pitch
[11:27:52] ========== [PASSED] drm_test_fb_xrgb8888_to_gray8 ==========
[11:27:52] ============= drm_test_fb_xrgb8888_to_rgb332  ==============
[11:27:52] [PASSED] single_pixel_source_buffer
[11:27:52] [PASSED] single_pixel_clip_rectangle
[11:27:52] [PASSED] well_known_colors
[11:27:52] [PASSED] destination_pitch
[11:27:52] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb332 ==========
[11:27:52] ============= drm_test_fb_xrgb8888_to_rgb565  ==============
[11:27:52] [PASSED] single_pixel_source_buffer
[11:27:52] [PASSED] single_pixel_clip_rectangle
[11:27:52] [PASSED] well_known_colors
[11:27:52] [PASSED] destination_pitch
[11:27:52] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb565 ==========
[11:27:52] ============ drm_test_fb_xrgb8888_to_xrgb1555  =============
[11:27:52] [PASSED] single_pixel_source_buffer
[11:27:52] [PASSED] single_pixel_clip_rectangle
[11:27:52] [PASSED] well_known_colors
[11:27:52] [PASSED] destination_pitch
[11:27:52] ======== [PASSED] drm_test_fb_xrgb8888_to_xrgb1555 =========
[11:27:52] ============ drm_test_fb_xrgb8888_to_argb1555  =============
[11:27:52] [PASSED] single_pixel_source_buffer
[11:27:52] [PASSED] single_pixel_clip_rectangle
[11:27:52] [PASSED] well_known_colors
[11:27:52] [PASSED] destination_pitch
[11:27:52] ======== [PASSED] drm_test_fb_xrgb8888_to_argb1555 =========
[11:27:52] ============ drm_test_fb_xrgb8888_to_rgba5551  =============
[11:27:52] [PASSED] single_pixel_source_buffer
[11:27:52] [PASSED] single_pixel_clip_rectangle
[11:27:52] [PASSED] well_known_colors
[11:27:52] [PASSED] destination_pitch
[11:27:52] ======== [PASSED] drm_test_fb_xrgb8888_to_rgba5551 =========
[11:27:52] ============= drm_test_fb_xrgb8888_to_rgb888  ==============
[11:27:52] [PASSED] single_pixel_source_buffer
[11:27:52] [PASSED] single_pixel_clip_rectangle
[11:27:52] [PASSED] well_known_colors
[11:27:52] [PASSED] destination_pitch
[11:27:52] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb888 ==========
[11:27:52] ============ drm_test_fb_xrgb8888_to_argb8888  =============
[11:27:52] [PASSED] single_pixel_source_buffer
[11:27:52] [PASSED] single_pixel_clip_rectangle
[11:27:52] [PASSED] well_known_colors
[11:27:52] [PASSED] destination_pitch
[11:27:52] ======== [PASSED] drm_test_fb_xrgb8888_to_argb8888 =========
[11:27:52] =========== drm_test_fb_xrgb8888_to_xrgb2101010  ===========
[11:27:52] [PASSED] single_pixel_source_buffer
[11:27:52] [PASSED] single_pixel_clip_rectangle
[11:27:52] [PASSED] well_known_colors
[11:27:52] [PASSED] destination_pitch
[11:27:52] ======= [PASSED] drm_test_fb_xrgb8888_to_xrgb2101010 =======
[11:27:52] =========== drm_test_fb_xrgb8888_to_argb2101010  ===========
[11:27:52] [PASSED] single_pixel_source_buffer
[11:27:52] [PASSED] single_pixel_clip_rectangle
[11:27:52] [PASSED] well_known_colors
[11:27:52] [PASSED] destination_pitch
[11:27:52] ======= [PASSED] drm_test_fb_xrgb8888_to_argb2101010 =======
[11:27:52] ============== drm_test_fb_xrgb8888_to_mono  ===============
[11:27:52] [PASSED] single_pixel_source_buffer
[11:27:52] [PASSED] single_pixel_clip_rectangle
[11:27:52] [PASSED] well_known_colors
[11:27:52] [PASSED] destination_pitch
[11:27:52] ========== [PASSED] drm_test_fb_xrgb8888_to_mono ===========
[11:27:52] ==================== drm_test_fb_swab  =====================
[11:27:52] [PASSED] single_pixel_source_buffer
[11:27:52] [PASSED] single_pixel_clip_rectangle
[11:27:52] [PASSED] well_known_colors
[11:27:52] [PASSED] destination_pitch
[11:27:52] ================ [PASSED] drm_test_fb_swab =================
[11:27:52] ============ drm_test_fb_xrgb8888_to_xbgr8888  =============
[11:27:52] [PASSED] single_pixel_source_buffer
[11:27:52] [PASSED] single_pixel_clip_rectangle
[11:27:52] [PASSED] well_known_colors
[11:27:52] [PASSED] destination_pitch
[11:27:52] ======== [PASSED] drm_test_fb_xrgb8888_to_xbgr8888 =========
[11:27:52] ============ drm_test_fb_xrgb8888_to_abgr8888  =============
[11:27:52] [PASSED] single_pixel_source_buffer
[11:27:52] [PASSED] single_pixel_clip_rectangle
[11:27:52] [PASSED] well_known_colors
[11:27:52] [PASSED] destination_pitch
[11:27:52] ======== [PASSED] drm_test_fb_xrgb8888_to_abgr8888 =========
[11:27:52] ================= drm_test_fb_clip_offset  =================
[11:27:52] [PASSED] pass through
[11:27:52] [PASSED] horizontal offset
[11:27:52] [PASSED] vertical offset
[11:27:52] [PASSED] horizontal and vertical offset
[11:27:52] [PASSED] horizontal offset (custom pitch)
[11:27:52] [PASSED] vertical offset (custom pitch)
[11:27:52] [PASSED] horizontal and vertical offset (custom pitch)
[11:27:52] ============= [PASSED] drm_test_fb_clip_offset =============
[11:27:52] ============== drm_test_fb_build_fourcc_list  ==============
[11:27:52] [PASSED] no native formats
[11:27:52] [PASSED] XRGB8888 as native format
[11:27:52] [PASSED] remove duplicates
[11:27:52] [PASSED] convert alpha formats
[11:27:52] [PASSED] random formats
[11:27:52] ========== [PASSED] drm_test_fb_build_fourcc_list ==========
[11:27:52] =================== drm_test_fb_memcpy  ====================
[11:27:52] [PASSED] single_pixel_source_buffer: XR24 little-endian (0x34325258)
[11:27:52] [PASSED] single_pixel_source_buffer: XRA8 little-endian (0x38415258)
[11:27:52] [PASSED] single_pixel_source_buffer: YU24 little-endian (0x34325559)
[11:27:52] [PASSED] single_pixel_clip_rectangle: XB24 little-endian (0x34324258)
[11:27:52] [PASSED] single_pixel_clip_rectangle: XRA8 little-endian (0x38415258)
[11:27:52] [PASSED] single_pixel_clip_rectangle: YU24 little-endian (0x34325559)
[11:27:52] [PASSED] well_known_colors: XB24 little-endian (0x34324258)
[11:27:52] [PASSED] well_known_colors: XRA8 little-endian (0x38415258)
[11:27:52] [PASSED] well_known_colors: YU24 little-endian (0x34325559)
[11:27:52] [PASSED] destination_pitch: XB24 little-endian (0x34324258)
[11:27:52] [PASSED] destination_pitch: XRA8 little-endian (0x38415258)
[11:27:52] [PASSED] destination_pitch: YU24 little-endian (0x34325559)
[11:27:52] =============== [PASSED] drm_test_fb_memcpy ================
[11:27:52] ============= [PASSED] drm_format_helper_test ==============
[11:27:52] ================= drm_format (18 subtests) =================
[11:27:52] [PASSED] drm_test_format_block_width_invalid
[11:27:52] [PASSED] drm_test_format_block_width_one_plane
[11:27:52] [PASSED] drm_test_format_block_width_two_plane
[11:27:52] [PASSED] drm_test_format_block_width_three_plane
[11:27:52] [PASSED] drm_test_format_block_width_tiled
[11:27:52] [PASSED] drm_test_format_block_height_invalid
[11:27:52] [PASSED] drm_test_format_block_height_one_plane
[11:27:52] [PASSED] drm_test_format_block_height_two_plane
[11:27:52] [PASSED] drm_test_format_block_height_three_plane
[11:27:52] [PASSED] drm_test_format_block_height_tiled
[11:27:52] [PASSED] drm_test_format_min_pitch_invalid
[11:27:52] [PASSED] drm_test_format_min_pitch_one_plane_8bpp
[11:27:52] [PASSED] drm_test_format_min_pitch_one_plane_16bpp
[11:27:52] [PASSED] drm_test_format_min_pitch_one_plane_24bpp
[11:27:52] [PASSED] drm_test_format_min_pitch_one_plane_32bpp
[11:27:52] [PASSED] drm_test_format_min_pitch_two_plane
[11:27:52] [PASSED] drm_test_format_min_pitch_three_plane_8bpp
[11:27:52] [PASSED] drm_test_format_min_pitch_tiled
[11:27:52] =================== [PASSED] drm_format ====================
[11:27:52] ============== drm_framebuffer (10 subtests) ===============
[11:27:52] ========== drm_test_framebuffer_check_src_coords  ==========
[11:27:52] [PASSED] Success: source fits into fb
[11:27:52] [PASSED] Fail: overflowing fb with x-axis coordinate
[11:27:52] [PASSED] Fail: overflowing fb with y-axis coordinate
[11:27:52] [PASSED] Fail: overflowing fb with source width
[11:27:52] [PASSED] Fail: overflowing fb with source height
[11:27:52] ====== [PASSED] drm_test_framebuffer_check_src_coords ======
[11:27:52] [PASSED] drm_test_framebuffer_cleanup
[11:27:52] =============== drm_test_framebuffer_create  ===============
[11:27:52] [PASSED] ABGR8888 normal sizes
[11:27:52] [PASSED] ABGR8888 max sizes
[11:27:52] [PASSED] ABGR8888 pitch greater than min required
[11:27:52] [PASSED] ABGR8888 pitch less than min required
[11:27:52] [PASSED] ABGR8888 Invalid width
[11:27:52] [PASSED] ABGR8888 Invalid buffer handle
[11:27:52] [PASSED] No pixel format
[11:27:52] [PASSED] ABGR8888 Width 0
[11:27:52] [PASSED] ABGR8888 Height 0
[11:27:52] [PASSED] ABGR8888 Out of bound height * pitch combination
[11:27:52] [PASSED] ABGR8888 Large buffer offset
[11:27:52] [PASSED] ABGR8888 Buffer offset for inexistent plane
[11:27:52] [PASSED] ABGR8888 Invalid flag
[11:27:52] [PASSED] ABGR8888 Set DRM_MODE_FB_MODIFIERS without modifiers
[11:27:52] [PASSED] ABGR8888 Valid buffer modifier
[11:27:52] [PASSED] ABGR8888 Invalid buffer modifier(DRM_FORMAT_MOD_SAMSUNG_64_32_TILE)
[11:27:52] [PASSED] ABGR8888 Extra pitches without DRM_MODE_FB_MODIFIERS
[11:27:52] [PASSED] ABGR8888 Extra pitches with DRM_MODE_FB_MODIFIERS
[11:27:52] [PASSED] NV12 Normal sizes
[11:27:52] [PASSED] NV12 Max sizes
[11:27:52] [PASSED] NV12 Invalid pitch
[11:27:52] [PASSED] NV12 Invalid modifier/missing DRM_MODE_FB_MODIFIERS flag
[11:27:52] [PASSED] NV12 different  modifier per-plane
[11:27:52] [PASSED] NV12 with DRM_FORMAT_MOD_SAMSUNG_64_32_TILE
[11:27:52] [PASSED] NV12 Valid modifiers without DRM_MODE_FB_MODIFIERS
[11:27:52] [PASSED] NV12 Modifier for inexistent plane
[11:27:52] [PASSED] NV12 Handle for inexistent plane
[11:27:52] [PASSED] NV12 Handle for inexistent plane without DRM_MODE_FB_MODIFIERS
[11:27:52] [PASSED] YVU420 DRM_MODE_FB_MODIFIERS set without modifier
[11:27:52] [PASSED] YVU420 Normal sizes
[11:27:52] [PASSED] YVU420 Max sizes
[11:27:52] [PASSED] YVU420 Invalid pitch
[11:27:52] [PASSED] YVU420 Different pitches
[11:27:52] [PASSED] YVU420 Different buffer offsets/pitches
[11:27:52] [PASSED] YVU420 Modifier set just for plane 0, without DRM_MODE_FB_MODIFIERS
[11:27:52] [PASSED] YVU420 Modifier set just for planes 0, 1, without DRM_MODE_FB_MODIFIERS
[11:27:52] [PASSED] YVU420 Modifier set just for plane 0, 1, with DRM_MODE_FB_MODIFIERS
[11:27:52] [PASSED] YVU420 Valid modifier
[11:27:52] [PASSED] YVU420 Different modifiers per plane
[11:27:52] [PASSED] YVU420 Modifier for inexistent plane
[11:27:52] [PASSED] YUV420_10BIT Invalid modifier(DRM_FORMAT_MOD_LINEAR)
[11:27:52] [PASSED] X0L2 Normal sizes
[11:27:52] [PASSED] X0L2 Max sizes
[11:27:52] [PASSED] X0L2 Invalid pitch
[11:27:52] [PASSED] X0L2 Pitch greater than minimum required
[11:27:52] [PASSED] X0L2 Handle for inexistent plane
[11:27:52] [PASSED] X0L2 Offset for inexistent plane, without DRM_MODE_FB_MODIFIERS set
[11:27:52] [PASSED] X0L2 Modifier without DRM_MODE_FB_MODIFIERS set
[11:27:52] [PASSED] X0L2 Valid modifier
[11:27:52] [PASSED] X0L2 Modifier for inexistent plane
[11:27:52] =========== [PASSED] drm_test_framebuffer_create ===========
[11:27:52] [PASSED] drm_test_framebuffer_free
[11:27:52] [PASSED] drm_test_framebuffer_init
[11:27:52] [PASSED] drm_test_framebuffer_init_bad_format
[11:27:52] [PASSED] drm_test_framebuffer_init_dev_mismatch
[11:27:52] [PASSED] drm_test_framebuffer_lookup
[11:27:52] [PASSED] drm_test_framebuffer_lookup_inexistent
[11:27:52] [PASSED] drm_test_framebuffer_modifiers_not_supported
[11:27:52] ================= [PASSED] drm_framebuffer =================
[11:27:52] ================ drm_gem_shmem (8 subtests) ================
[11:27:52] [PASSED] drm_gem_shmem_test_obj_create
[11:27:52] [PASSED] drm_gem_shmem_test_obj_create_private
[11:27:52] [PASSED] drm_gem_shmem_test_pin_pages
[11:27:52] [PASSED] drm_gem_shmem_test_vmap
[11:27:52] [PASSED] drm_gem_shmem_test_get_pages_sgt
[11:27:52] [PASSED] drm_gem_shmem_test_get_sg_table
[11:27:52] [PASSED] drm_gem_shmem_test_madvise
[11:27:52] [PASSED] drm_gem_shmem_test_purge
[11:27:52] ================== [PASSED] drm_gem_shmem ==================
[11:27:52] === drm_atomic_helper_connector_hdmi_check (22 subtests) ===
[11:27:52] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode
[11:27:52] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode_vic_1
[11:27:52] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode
[11:27:52] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode_vic_1
[11:27:52] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode
[11:27:52] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode_vic_1
[11:27:52] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_changed
[11:27:52] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_not_changed
[11:27:52] [PASSED] drm_test_check_hdmi_funcs_reject_rate
[11:27:52] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback
[11:27:52] [PASSED] drm_test_check_max_tmds_rate_format_fallback
[11:27:52] [PASSED] drm_test_check_output_bpc_crtc_mode_changed
[11:27:52] [PASSED] drm_test_check_output_bpc_crtc_mode_not_changed
[11:27:52] [PASSED] drm_test_check_output_bpc_dvi
[11:27:52] [PASSED] drm_test_check_output_bpc_format_vic_1
[11:27:52] [PASSED] drm_test_check_output_bpc_format_display_8bpc_only
[11:27:52] [PASSED] drm_test_check_output_bpc_format_display_rgb_only
[11:27:52] [PASSED] drm_test_check_output_bpc_format_driver_8bpc_only
[11:27:52] [PASSED] drm_test_check_output_bpc_format_driver_rgb_only
[11:27:52] [PASSED] drm_test_check_tmds_char_rate_rgb_8bpc
[11:27:52] [PASSED] drm_test_check_tmds_char_rate_rgb_10bpc
[11:27:52] [PASSED] drm_test_check_tmds_char_rate_rgb_12bpc
[11:27:52] ===== [PASSED] drm_atomic_helper_connector_hdmi_check ======
[11:27:52] === drm_atomic_helper_connector_hdmi_reset (6 subtests) ====
[11:27:52] [PASSED] drm_test_check_broadcast_rgb_value
[11:27:52] [PASSED] drm_test_check_bpc_8_value
[11:27:52] [PASSED] drm_test_check_bpc_10_value
[11:27:52] [PASSED] drm_test_check_bpc_12_value
[11:27:52] [PASSED] drm_test_check_format_value
[11:27:52] [PASSED] drm_test_check_tmds_char_value
[11:27:52] ===== [PASSED] drm_atomic_helper_connector_hdmi_reset ======
[11:27:52] ================= drm_managed (2 subtests) =================
[11:27:52] [PASSED] drm_test_managed_release_action
[11:27:52] [PASSED] drm_test_managed_run_action
[11:27:52] =================== [PASSED] drm_managed ===================
[11:27:52] =================== drm_mm (6 subtests) ====================
[11:27:52] [PASSED] drm_test_mm_init
[11:27:52] [PASSED] drm_test_mm_debug
[11:27:52] [PASSED] drm_test_mm_align32
[11:27:52] [PASSED] drm_test_mm_align64
[11:27:52] [PASSED] drm_test_mm_lowest
[11:27:52] [PASSED] drm_test_mm_highest
[11:27:52] ===================== [PASSED] drm_mm ======================
[11:27:52] ============= drm_modes_analog_tv (5 subtests) =============
[11:27:52] [PASSED] drm_test_modes_analog_tv_mono_576i
[11:27:52] [PASSED] drm_test_modes_analog_tv_ntsc_480i
[11:27:52] [PASSED] drm_test_modes_analog_tv_ntsc_480i_inlined
[11:27:52] [PASSED] drm_test_modes_analog_tv_pal_576i
[11:27:52] [PASSED] drm_test_modes_analog_tv_pal_576i_inlined
[11:27:52] =============== [PASSED] drm_modes_analog_tv ===============
stty: 'standard input': Inappropriate ioctl for device
[11:27:52] ============== drm_plane_helper (2 subtests) ===============
[11:27:52] =============== drm_test_check_plane_state  ================
[11:27:52] [PASSED] clipping_simple
[11:27:52] [PASSED] clipping_rotate_reflect
[11:27:52] [PASSED] positioning_simple
[11:27:52] [PASSED] upscaling
[11:27:52] [PASSED] downscaling
[11:27:52] [PASSED] rounding1
[11:27:52] [PASSED] rounding2
[11:27:52] [PASSED] rounding3
[11:27:52] [PASSED] rounding4
[11:27:52] =========== [PASSED] drm_test_check_plane_state ============
[11:27:52] =========== drm_test_check_invalid_plane_state  ============
[11:27:52] [PASSED] positioning_invalid
[11:27:52] [PASSED] upscaling_invalid
[11:27:52] [PASSED] downscaling_invalid
[11:27:52] ======= [PASSED] drm_test_check_invalid_plane_state ========
[11:27:52] ================ [PASSED] drm_plane_helper =================
[11:27:52] ====== drm_connector_helper_tv_get_modes (1 subtest) =======
[11:27:52] ====== drm_test_connector_helper_tv_get_modes_check  =======
[11:27:52] [PASSED] None
[11:27:52] [PASSED] PAL
[11:27:52] [PASSED] NTSC
[11:27:52] [PASSED] Both, NTSC Default
[11:27:52] [PASSED] Both, PAL Default
[11:27:52] [PASSED] Both, NTSC Default, with PAL on command-line
[11:27:52] [PASSED] Both, PAL Default, with NTSC on command-line
[11:27:52] == [PASSED] drm_test_connector_helper_tv_get_modes_check ===
[11:27:52] ======== [PASSED] drm_connector_helper_tv_get_modes ========
[11:27:52] ================== drm_rect (9 subtests) ===================
[11:27:52] [PASSED] drm_test_rect_clip_scaled_div_by_zero
[11:27:52] [PASSED] drm_test_rect_clip_scaled_not_clipped
[11:27:52] [PASSED] drm_test_rect_clip_scaled_clipped
[11:27:52] [PASSED] drm_test_rect_clip_scaled_signed_vs_unsigned
[11:27:52] ================= drm_test_rect_intersect  =================
[11:27:52] [PASSED] top-left x bottom-right: 2x2+1+1 x 2x2+0+0
[11:27:52] [PASSED] top-right x bottom-left: 2x2+0+0 x 2x2+1-1
[11:27:52] [PASSED] bottom-left x top-right: 2x2+1-1 x 2x2+0+0
[11:27:52] [PASSED] bottom-right x top-left: 2x2+0+0 x 2x2+1+1
[11:27:52] [PASSED] right x left: 2x1+0+0 x 3x1+1+0
[11:27:52] [PASSED] left x right: 3x1+1+0 x 2x1+0+0
[11:27:52] [PASSED] up x bottom: 1x2+0+0 x 1x3+0-1
[11:27:52] [PASSED] bottom x up: 1x3+0-1 x 1x2+0+0
[11:27:52] [PASSED] touching corner: 1x1+0+0 x 2x2+1+1
[11:27:52] [PASSED] touching side: 1x1+0+0 x 1x1+1+0
[11:27:52] [PASSED] equal rects: 2x2+0+0 x 2x2+0+0
[11:27:52] [PASSED] inside another: 2x2+0+0 x 1x1+1+1
[11:27:52] [PASSED] far away: 1x1+0+0 x 1x1+3+6
[11:27:52] [PASSED] points intersecting: 0x0+5+10 x 0x0+5+10
[11:27:52] [PASSED] points not intersecting: 0x0+0+0 x 0x0+5+10
[11:27:52] ============= [PASSED] drm_test_rect_intersect =============
[11:27:52] ================ drm_test_rect_calc_hscale  ================
[11:27:52] [PASSED] normal use
[11:27:52] [PASSED] out of max range
[11:27:52] [PASSED] out of min range
[11:27:52] [PASSED] zero dst
[11:27:52] [PASSED] negative src
[11:27:52] [PASSED] negative dst
[11:27:52] ============ [PASSED] drm_test_rect_calc_hscale ============
[11:27:52] ================ drm_test_rect_calc_vscale  ================
[11:27:52] [PASSED] normal use
[11:27:52] [PASSED] out of max range
[11:27:52] [PASSED] out of min range
[11:27:52] [PASSED] zero dst
[11:27:52] [PASSED] negative src
[11:27:52] [PASSED] negative dst
[11:27:52] ============ [PASSED] drm_test_rect_calc_vscale ============
[11:27:52] ================== drm_test_rect_rotate  ===================
[11:27:52] [PASSED] reflect-x
[11:27:52] [PASSED] reflect-y
[11:27:52] [PASSED] rotate-0
[11:27:52] [PASSED] rotate-90
[11:27:52] [PASSED] rotate-180
[11:27:52] [PASSED] rotate-270
[11:27:52] ============== [PASSED] drm_test_rect_rotate ===============
[11:27:52] ================ drm_test_rect_rotate_inv  =================
[11:27:52] [PASSED] reflect-x
[11:27:52] [PASSED] reflect-y
[11:27:52] [PASSED] rotate-0
[11:27:52] [PASSED] rotate-90
[11:27:52] [PASSED] rotate-180
[11:27:52] [PASSED] rotate-270
[11:27:52] ============ [PASSED] drm_test_rect_rotate_inv =============
[11:27:52] ==================== [PASSED] drm_rect =====================
[11:27:52] ============================================================
[11:27:52] Testing complete. Ran 526 tests: passed: 526
[11:27:52] Elapsed time: 24.212s total, 1.653s configuring, 22.388s building, 0.169s running

+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/ttm/tests/.kunitconfig
[11:27:52] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[11:27:54] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json ARCH=um O=.kunit --jobs=48
[11:28:01] Starting KUnit Kernel (1/1)...
[11:28:01] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[11:28:02] ================= ttm_device (5 subtests) ==================
[11:28:02] [PASSED] ttm_device_init_basic
[11:28:02] [PASSED] ttm_device_init_multiple
[11:28:02] [PASSED] ttm_device_fini_basic
[11:28:02] [PASSED] ttm_device_init_no_vma_man
[11:28:02] ================== ttm_device_init_pools  ==================
[11:28:02] [PASSED] No DMA allocations, no DMA32 required
[11:28:02] [PASSED] DMA allocations, DMA32 required
[11:28:02] [PASSED] No DMA allocations, DMA32 required
[11:28:02] [PASSED] DMA allocations, no DMA32 required
[11:28:02] ============== [PASSED] ttm_device_init_pools ==============
[11:28:02] =================== [PASSED] ttm_device ====================
[11:28:02] ================== ttm_pool (8 subtests) ===================
[11:28:02] ================== ttm_pool_alloc_basic  ===================
[11:28:02] [PASSED] One page
[11:28:02] [PASSED] More than one page
[11:28:02] [PASSED] Above the allocation limit
[11:28:02] [PASSED] One page, with coherent DMA mappings enabled
[11:28:02] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[11:28:02] ============== [PASSED] ttm_pool_alloc_basic ===============
[11:28:02] ============== ttm_pool_alloc_basic_dma_addr  ==============
[11:28:02] [PASSED] One page
[11:28:02] [PASSED] More than one page
[11:28:02] [PASSED] Above the allocation limit
[11:28:02] [PASSED] One page, with coherent DMA mappings enabled
[11:28:02] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[11:28:02] ========== [PASSED] ttm_pool_alloc_basic_dma_addr ==========
[11:28:02] [PASSED] ttm_pool_alloc_order_caching_match
[11:28:02] [PASSED] ttm_pool_alloc_caching_mismatch
[11:28:02] [PASSED] ttm_pool_alloc_order_mismatch
[11:28:02] [PASSED] ttm_pool_free_dma_alloc
[11:28:02] [PASSED] ttm_pool_free_no_dma_alloc
[11:28:02] [PASSED] ttm_pool_fini_basic
[11:28:02] ==================== [PASSED] ttm_pool =====================
[11:28:02] ================ ttm_resource (8 subtests) =================
[11:28:02] ================= ttm_resource_init_basic  =================
[11:28:02] [PASSED] Init resource in TTM_PL_SYSTEM
[11:28:02] [PASSED] Init resource in TTM_PL_VRAM
[11:28:02] [PASSED] Init resource in a private placement
[11:28:02] [PASSED] Init resource in TTM_PL_SYSTEM, set placement flags
[11:28:02] ============= [PASSED] ttm_resource_init_basic =============
[11:28:02] [PASSED] ttm_resource_init_pinned
[11:28:02] [PASSED] ttm_resource_fini_basic
[11:28:02] [PASSED] ttm_resource_manager_init_basic
[11:28:02] [PASSED] ttm_resource_manager_usage_basic
[11:28:02] [PASSED] ttm_resource_manager_set_used_basic
[11:28:02] [PASSED] ttm_sys_man_alloc_basic
[11:28:02] [PASSED] ttm_sys_man_free_basic
[11:28:02] ================== [PASSED] ttm_resource ===================
[11:28:02] =================== ttm_tt (15 subtests) ===================
[11:28:02] ==================== ttm_tt_init_basic  ====================
[11:28:02] [PASSED] Page-aligned size
[11:28:02] [PASSED] Extra pages requested
[11:28:02] ================ [PASSED] ttm_tt_init_basic ================
[11:28:02] [PASSED] ttm_tt_init_misaligned
[11:28:02] [PASSED] ttm_tt_fini_basic
[11:28:02] [PASSED] ttm_tt_fini_sg
[11:28:02] [PASSED] ttm_tt_fini_shmem
[11:28:02] [PASSED] ttm_tt_create_basic
[11:28:02] [PASSED] ttm_tt_create_invalid_bo_type
[11:28:02] [PASSED] ttm_tt_create_ttm_exists
[11:28:02] [PASSED] ttm_tt_create_failed
[11:28:02] [PASSED] ttm_tt_destroy_basic
[11:28:02] [PASSED] ttm_tt_populate_null_ttm
[11:28:02] [PASSED] ttm_tt_populate_populated_ttm
[11:28:02] [PASSED] ttm_tt_unpopulate_basic
[11:28:02] [PASSED] ttm_tt_unpopulate_empty_ttm
[11:28:02] [PASSED] ttm_tt_swapin_basic
[11:28:02] ===================== [PASSED] ttm_tt ======================
[11:28:02] =================== ttm_bo (14 subtests) ===================
[11:28:02] =========== ttm_bo_reserve_optimistic_no_ticket  ===========
[11:28:02] [PASSED] Cannot be interrupted and sleeps
[11:28:02] [PASSED] Cannot be interrupted, locks straight away
[11:28:02] [PASSED] Can be interrupted, sleeps
[11:28:02] ======= [PASSED] ttm_bo_reserve_optimistic_no_ticket =======
[11:28:02] [PASSED] ttm_bo_reserve_locked_no_sleep
[11:28:02] [PASSED] ttm_bo_reserve_no_wait_ticket
[11:28:02] [PASSED] ttm_bo_reserve_double_resv
[11:28:02] [PASSED] ttm_bo_reserve_interrupted
[11:28:02] [PASSED] ttm_bo_reserve_deadlock
[11:28:02] [PASSED] ttm_bo_unreserve_basic
[11:28:02] [PASSED] ttm_bo_unreserve_pinned
[11:28:02] [PASSED] ttm_bo_unreserve_bulk
[11:28:02] [PASSED] ttm_bo_put_basic
[11:28:02] [PASSED] ttm_bo_put_shared_resv
[11:28:02] [PASSED] ttm_bo_pin_basic
[11:28:02] [PASSED] ttm_bo_pin_unpin_resource
[11:28:02] [PASSED] ttm_bo_multiple_pin_one_unpin
[11:28:02] ===================== [PASSED] ttm_bo ======================
[11:28:02] ============== ttm_bo_validate (22 subtests) ===============
[11:28:02] ============== ttm_bo_init_reserved_sys_man  ===============
[11:28:02] [PASSED] Buffer object for userspace
[11:28:02] [PASSED] Kernel buffer object
[11:28:02] [PASSED] Shared buffer object
[11:28:02] ========== [PASSED] ttm_bo_init_reserved_sys_man ===========
[11:28:02] ============== ttm_bo_init_reserved_mock_man  ==============
[11:28:02] [PASSED] Buffer object for userspace
[11:28:02] [PASSED] Kernel buffer object
[11:28:02] [PASSED] Shared buffer object
[11:28:02] ========== [PASSED] ttm_bo_init_reserved_mock_man ==========
[11:28:02] [PASSED] ttm_bo_init_reserved_resv
[11:28:02] ================== ttm_bo_validate_basic  ==================
[11:28:02] [PASSED] Buffer object for userspace
[11:28:02] [PASSED] Kernel buffer object
[11:28:02] [PASSED] Shared buffer object
[11:28:02] ============== [PASSED] ttm_bo_validate_basic ==============
[11:28:02] [PASSED] ttm_bo_validate_invalid_placement
[11:28:02] ============= ttm_bo_validate_same_placement  ==============
[11:28:02] [PASSED] System manager
[11:28:02] [PASSED] VRAM manager
[11:28:02] ========= [PASSED] ttm_bo_validate_same_placement ==========
[11:28:02] [PASSED] ttm_bo_validate_failed_alloc
[11:28:02] [PASSED] ttm_bo_validate_pinned
[11:28:02] [PASSED] ttm_bo_validate_busy_placement
[11:28:02] ================ ttm_bo_validate_multihop  =================
[11:28:02] [PASSED] Buffer object for userspace
[11:28:02] [PASSED] Kernel buffer object
[11:28:02] [PASSED] Shared buffer object
[11:28:02] ============ [PASSED] ttm_bo_validate_multihop =============
[11:28:02] ========== ttm_bo_validate_no_placement_signaled  ==========
[11:28:02] [PASSED] Buffer object in system domain, no page vector
[11:28:02] [PASSED] Buffer object in system domain with an existing page vector
[11:28:02] ====== [PASSED] ttm_bo_validate_no_placement_signaled ======
[11:28:02] ======== ttm_bo_validate_no_placement_not_signaled  ========
[11:28:02] [PASSED] Buffer object for userspace
[11:28:02] [PASSED] Kernel buffer object
[11:28:02] [PASSED] Shared buffer object
[11:28:02] ==== [PASSED] ttm_bo_validate_no_placement_not_signaled ====
[11:28:02] [PASSED] ttm_bo_validate_move_fence_signaled
[11:28:02] ========= ttm_bo_validate_move_fence_not_signaled  =========
[11:28:02] [PASSED] Waits for GPU
[11:28:02] [PASSED] Tries to lock straight away
[11:28:02] ===== [PASSED] ttm_bo_validate_move_fence_not_signaled =====
[11:28:02] [PASSED] ttm_bo_validate_swapout
[11:28:02] [PASSED] ttm_bo_validate_happy_evict
[11:28:02] [PASSED] ttm_bo_validate_all_pinned_evict
[11:28:02] [PASSED] ttm_bo_validate_allowed_only_evict
[11:28:02] [PASSED] ttm_bo_validate_deleted_evict
[11:28:02] [PASSED] ttm_bo_validate_busy_domain_evict
[11:28:02] [PASSED] ttm_bo_validate_evict_gutting
[11:28:02] [PASSED] ttm_bo_validate_recrusive_evict
[11:28:02] ================= [PASSED] ttm_bo_validate =================
[11:28:02] ============================================================
[11:28:02] Testing complete. Ran 102 tests: passed: 102
[11:28:02] Elapsed time: 9.934s total, 1.637s configuring, 7.631s building, 0.555s running

+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel



^ permalink raw reply	[flat|nested] 54+ messages in thread

* ✓ CI.Build: success for TTM shrinker helpers and xe buffer object shrinker (rev14)
  2024-11-15 15:01 [PATCH v14 0/8] TTM shrinker helpers and xe buffer object shrinker Thomas Hellström
                   ` (14 preceding siblings ...)
  2024-11-16 11:28 ` ✓ CI.KUnit: success " Patchwork
@ 2024-11-16 11:46 ` Patchwork
  2024-11-16 11:46 ` ✗ CI.Hooks: failure " Patchwork
                   ` (9 subsequent siblings)
  25 siblings, 0 replies; 54+ messages in thread
From: Patchwork @ 2024-11-16 11:46 UTC (permalink / raw)
  To: Thomas Hellström; +Cc: intel-xe

== Series Details ==

Series: TTM shrinker helpers and xe buffer object shrinker (rev14)
URL   : https://patchwork.freedesktop.org/series/131815/
State : success

== Summary ==

lib/modules/6.12.0-rc7-xe/kernel/arch/x86/events/rapl.ko
lib/modules/6.12.0-rc7-xe/kernel/arch/x86/kvm/
lib/modules/6.12.0-rc7-xe/kernel/arch/x86/kvm/kvm.ko
lib/modules/6.12.0-rc7-xe/kernel/arch/x86/kvm/kvm-intel.ko
lib/modules/6.12.0-rc7-xe/kernel/arch/x86/kvm/kvm-amd.ko
lib/modules/6.12.0-rc7-xe/kernel/kernel/
lib/modules/6.12.0-rc7-xe/kernel/kernel/kheaders.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/
lib/modules/6.12.0-rc7-xe/kernel/crypto/ecrdsa_generic.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/xcbc.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/serpent_generic.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/aria_generic.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/crypto_simd.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/adiantum.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/tcrypt.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/crypto_engine.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/zstd.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/asymmetric_keys/
lib/modules/6.12.0-rc7-xe/kernel/crypto/asymmetric_keys/pkcs7_test_key.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/asymmetric_keys/pkcs8_key_parser.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/des_generic.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/xctr.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/authenc.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/sm4_generic.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/keywrap.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/camellia_generic.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/sm3.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/pcrypt.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/aegis128.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/af_alg.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/algif_aead.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/cmac.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/sm3_generic.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/aes_ti.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/chacha_generic.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/poly1305_generic.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/nhpoly1305.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/crc32_generic.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/essiv.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/ccm.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/wp512.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/streebog_generic.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/authencesn.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/echainiv.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/lrw.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/cryptd.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/crypto_user.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/algif_hash.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/vmac.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/polyval-generic.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/hctr2.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/842.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/pcbc.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/ansi_cprng.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/cast6_generic.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/twofish_common.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/twofish_generic.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/lz4hc.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/blowfish_generic.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/md4.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/chacha20poly1305.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/curve25519-generic.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/lz4.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/rmd160.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/algif_skcipher.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/cast5_generic.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/fcrypt.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/ecdsa_generic.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/sm4.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/cast_common.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/blowfish_common.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/michael_mic.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/async_tx/
lib/modules/6.12.0-rc7-xe/kernel/crypto/async_tx/async_xor.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/async_tx/async_tx.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/async_tx/async_memcpy.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/async_tx/async_pq.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/async_tx/async_raid6_recov.ko
lib/modules/6.12.0-rc7-xe/kernel/crypto/algif_rng.ko
lib/modules/6.12.0-rc7-xe/kernel/block/
lib/modules/6.12.0-rc7-xe/kernel/block/bfq.ko
lib/modules/6.12.0-rc7-xe/kernel/block/kyber-iosched.ko
lib/modules/6.12.0-rc7-xe/build
lib/modules/6.12.0-rc7-xe/modules.alias.bin
lib/modules/6.12.0-rc7-xe/modules.builtin
lib/modules/6.12.0-rc7-xe/modules.softdep
lib/modules/6.12.0-rc7-xe/modules.alias
lib/modules/6.12.0-rc7-xe/modules.order
lib/modules/6.12.0-rc7-xe/modules.symbols
lib/modules/6.12.0-rc7-xe/modules.dep.bin
+ mv kernel-nodebug.tar.gz ..
+ cd ..
+ rm -rf archive
++ date +%s
+ echo -e '\e[0Ksection_end:1731757551:package_x86_64_nodebug\r\e[0K'
+ sync
+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel



^ permalink raw reply	[flat|nested] 54+ messages in thread

* ✗ CI.Hooks: failure for TTM shrinker helpers and xe buffer object shrinker (rev14)
  2024-11-15 15:01 [PATCH v14 0/8] TTM shrinker helpers and xe buffer object shrinker Thomas Hellström
                   ` (15 preceding siblings ...)
  2024-11-16 11:46 ` ✓ CI.Build: " Patchwork
@ 2024-11-16 11:46 ` Patchwork
  2024-11-16 11:47 ` ✗ CI.checksparse: warning " Patchwork
                   ` (8 subsequent siblings)
  25 siblings, 0 replies; 54+ messages in thread
From: Patchwork @ 2024-11-16 11:46 UTC (permalink / raw)
  To: Thomas Hellström; +Cc: intel-xe

== Series Details ==

Series: TTM shrinker helpers and xe buffer object shrinker (rev14)
URL   : https://patchwork.freedesktop.org/series/131815/
State : failure

== Summary ==

run-parts: executing /workspace/ci/hooks/00-showenv
+ export
+ grep -Ei '(^|\W)CI_'
declare -x CI_KERNEL_BUILD_DIR="/workspace/kernel/build64-default"
declare -x CI_KERNEL_SRC_DIR="/workspace/kernel"
declare -x CI_TOOLS_SRC_DIR="/workspace/ci"
declare -x CI_WORKSPACE_DIR="/workspace"
run-parts: executing /workspace/ci/hooks/10-build-W1
+ SRC_DIR=/workspace/kernel
+ RESTORE_DISPLAY_CONFIG=0
+ '[' -n /workspace/kernel/build64-default ']'
+ BUILD_DIR=/workspace/kernel/build64-default
+ cd /workspace/kernel
++ nproc
+ make -j48 O=/workspace/kernel/build64-default modules_prepare
make[1]: Entering directory '/workspace/kernel/build64-default'
  GEN     Makefile
  UPD     include/config/kernel.release
mkdir -p /workspace/kernel/build64-default/tools/objtool && make O=/workspace/kernel/build64-default subdir=tools/objtool --no-print-directory -C objtool 
  UPD     include/generated/utsrelease.h
  CALL    ../scripts/checksyscalls.sh
  INSTALL libsubcmd_headers
  CC      /workspace/kernel/build64-default/tools/objtool/libsubcmd/exec-cmd.o
  CC      /workspace/kernel/build64-default/tools/objtool/libsubcmd/help.o
  CC      /workspace/kernel/build64-default/tools/objtool/libsubcmd/pager.o
  CC      /workspace/kernel/build64-default/tools/objtool/libsubcmd/parse-options.o
  CC      /workspace/kernel/build64-default/tools/objtool/libsubcmd/run-command.o
  CC      /workspace/kernel/build64-default/tools/objtool/libsubcmd/sigchain.o
  CC      /workspace/kernel/build64-default/tools/objtool/libsubcmd/subcmd-config.o
  LD      /workspace/kernel/build64-default/tools/objtool/libsubcmd/libsubcmd-in.o
  AR      /workspace/kernel/build64-default/tools/objtool/libsubcmd/libsubcmd.a
  CC      /workspace/kernel/build64-default/tools/objtool/weak.o
  CC      /workspace/kernel/build64-default/tools/objtool/check.o
  CC      /workspace/kernel/build64-default/tools/objtool/builtin-check.o
  CC      /workspace/kernel/build64-default/tools/objtool/special.o
  CC      /workspace/kernel/build64-default/tools/objtool/elf.o
  CC      /workspace/kernel/build64-default/tools/objtool/objtool.o
  CC      /workspace/kernel/build64-default/tools/objtool/orc_gen.o
  CC      /workspace/kernel/build64-default/tools/objtool/orc_dump.o
  CC      /workspace/kernel/build64-default/tools/objtool/libstring.o
  CC      /workspace/kernel/build64-default/tools/objtool/libctype.o
  CC      /workspace/kernel/build64-default/tools/objtool/arch/x86/special.o
  CC      /workspace/kernel/build64-default/tools/objtool/str_error_r.o
  CC      /workspace/kernel/build64-default/tools/objtool/librbtree.o
  CC      /workspace/kernel/build64-default/tools/objtool/arch/x86/decode.o
  CC      /workspace/kernel/build64-default/tools/objtool/arch/x86/orc.o
  LD      /workspace/kernel/build64-default/tools/objtool/arch/x86/objtool-in.o
  LD      /workspace/kernel/build64-default/tools/objtool/objtool-in.o
  LINK    /workspace/kernel/build64-default/tools/objtool/objtool
make[1]: Leaving directory '/workspace/kernel/build64-default'
++ nproc
+ make -j48 O=/workspace/kernel/build64-default W=1 drivers/gpu/drm/xe
make[1]: Entering directory '/workspace/kernel/build64-default'
make[2]: Nothing to be done for 'drivers/gpu/drm/xe'.
make[1]: Leaving directory '/workspace/kernel/build64-default'
run-parts: executing /workspace/ci/hooks/11-build-32b
+++ realpath /workspace/ci/hooks/11-build-32b
++ dirname /workspace/ci/hooks/11-build-32b
+ THIS_SCRIPT_DIR=/workspace/ci/hooks
+ SRC_DIR=/workspace/kernel
+ TOOLS_SRC_DIR=/workspace/ci
+ '[' -n /workspace/kernel/build64-default ']'
+ BUILD_DIR=/workspace/kernel/build64-default
+ BUILD_DIR=/workspace/kernel/build64-default/build32
+ cd /workspace/kernel
+ mkdir -p /workspace/kernel/build64-default/build32
++ nproc
+ make -j48 ARCH=i386 O=/workspace/kernel/build64-default/build32 defconfig
make[1]: Entering directory '/workspace/kernel/build64-default/build32'
  GEN     Makefile
  HOSTCC  scripts/basic/fixdep
  HOSTCC  scripts/kconfig/conf.o
  HOSTCC  scripts/kconfig/confdata.o
  HOSTCC  scripts/kconfig/expr.o
  LEX     scripts/kconfig/lexer.lex.c
  YACC    scripts/kconfig/parser.tab.[ch]
  HOSTCC  scripts/kconfig/menu.o
  HOSTCC  scripts/kconfig/preprocess.o
  HOSTCC  scripts/kconfig/symbol.o
  HOSTCC  scripts/kconfig/util.o
  HOSTCC  scripts/kconfig/lexer.lex.o
  HOSTCC  scripts/kconfig/parser.tab.o
  HOSTLD  scripts/kconfig/conf
*** Default configuration is based on 'i386_defconfig'
#
# configuration written to .config
#
make[1]: Leaving directory '/workspace/kernel/build64-default/build32'
+ cd /workspace/kernel/build64-default/build32
+ /workspace/kernel/scripts/kconfig/merge_config.sh .config /workspace/ci/kernel/10-xe.fragment
Using .config as base
Merging /workspace/ci/kernel/10-xe.fragment
The merge file '/workspace/ci/kernel/10-xe.fragment' does not exist.  Exit.
run-parts: /workspace/ci/hooks/11-build-32b exited with return code 1



^ permalink raw reply	[flat|nested] 54+ messages in thread

* ✗ CI.checksparse: warning for TTM shrinker helpers and xe buffer object shrinker (rev14)
  2024-11-15 15:01 [PATCH v14 0/8] TTM shrinker helpers and xe buffer object shrinker Thomas Hellström
                   ` (16 preceding siblings ...)
  2024-11-16 11:46 ` ✗ CI.Hooks: failure " Patchwork
@ 2024-11-16 11:47 ` Patchwork
  2024-11-18 12:37 ` ✓ CI.Patch_applied: success for TTM shrinker helpers and xe buffer object shrinker (rev15) Patchwork
                   ` (7 subsequent siblings)
  25 siblings, 0 replies; 54+ messages in thread
From: Patchwork @ 2024-11-16 11:47 UTC (permalink / raw)
  To: Thomas Hellström; +Cc: intel-xe

== Series Details ==

Series: TTM shrinker helpers and xe buffer object shrinker (rev14)
URL   : https://patchwork.freedesktop.org/series/131815/
State : warning

== Summary ==

+ trap cleanup EXIT
+ KERNEL=/kernel
+ MT=/root/linux/maintainer-tools
+ git clone https://gitlab.freedesktop.org/drm/maintainer-tools /root/linux/maintainer-tools
Cloning into '/root/linux/maintainer-tools'...
warning: redirecting to https://gitlab.freedesktop.org/drm/maintainer-tools.git/
+ make -C /root/linux/maintainer-tools
make: Entering directory '/root/linux/maintainer-tools'
cc -O2 -g -Wextra -o remap-log remap-log.c
make: Leaving directory '/root/linux/maintainer-tools'
+ cd /kernel
+ git config --global --add safe.directory /kernel
+ /root/linux/maintainer-tools/dim sparse --fast 9a7388467f79fb74c67a2444c5b1add91652f89e
/root/linux/maintainer-tools/dim: line 2068: sparse: command not found
Sparse version: 
Fast mode used, each commit won't be checked separately.
+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel



^ permalink raw reply	[flat|nested] 54+ messages in thread

* ✓ CI.Patch_applied: success for TTM shrinker helpers and xe buffer object shrinker (rev15)
  2024-11-15 15:01 [PATCH v14 0/8] TTM shrinker helpers and xe buffer object shrinker Thomas Hellström
                   ` (17 preceding siblings ...)
  2024-11-16 11:47 ` ✗ CI.checksparse: warning " Patchwork
@ 2024-11-18 12:37 ` Patchwork
  2024-11-18 12:37 ` ✗ CI.checkpatch: warning " Patchwork
                   ` (6 subsequent siblings)
  25 siblings, 0 replies; 54+ messages in thread
From: Patchwork @ 2024-11-18 12:37 UTC (permalink / raw)
  To: Thomas Hellström; +Cc: intel-xe

== Series Details ==

Series: TTM shrinker helpers and xe buffer object shrinker (rev15)
URL   : https://patchwork.freedesktop.org/series/131815/
State : success

== Summary ==

=== Applying kernel patches on branch 'drm-tip' with base: ===
Base commit: 90014f8026e3 drm-tip: 2024y-11m-18d-09h-06m-48s UTC integration manifest
=== git am output follows ===
Applying: drm/ttm: Balance ttm_resource_cursor_init() and ttm_resource_cursor_fini()
Applying: drm/ttm: Provide a shmem backup implementation
Applying: drm/ttm/pool: Provide a helper to shrink pages
Applying: drm/ttm: Use fault-injection to test error paths
Applying: drm/ttm: Add a macro to perform LRU iteration
Applying: drm/ttm: Add helpers for shrinking
Applying: drm/xe: Add a shrinker for xe bos
Applying: drm/xe: Increase the XE_PL_TT watermark



^ permalink raw reply	[flat|nested] 54+ messages in thread

* ✗ CI.checkpatch: warning for TTM shrinker helpers and xe buffer object shrinker (rev15)
  2024-11-15 15:01 [PATCH v14 0/8] TTM shrinker helpers and xe buffer object shrinker Thomas Hellström
                   ` (18 preceding siblings ...)
  2024-11-18 12:37 ` ✓ CI.Patch_applied: success for TTM shrinker helpers and xe buffer object shrinker (rev15) Patchwork
@ 2024-11-18 12:37 ` Patchwork
  2024-11-18 12:38 ` ✓ CI.KUnit: success " Patchwork
                   ` (5 subsequent siblings)
  25 siblings, 0 replies; 54+ messages in thread
From: Patchwork @ 2024-11-18 12:37 UTC (permalink / raw)
  To: Thomas Hellström; +Cc: intel-xe

== Series Details ==

Series: TTM shrinker helpers and xe buffer object shrinker (rev15)
URL   : https://patchwork.freedesktop.org/series/131815/
State : warning

== Summary ==

+ KERNEL=/kernel
+ git clone https://gitlab.freedesktop.org/drm/maintainer-tools mt
Cloning into 'mt'...
warning: redirecting to https://gitlab.freedesktop.org/drm/maintainer-tools.git/
+ git -C mt rev-list -n1 origin/master
30ab6715fc09baee6cc14cb3c89ad8858688d474
+ cd /kernel
+ git config --global --add safe.directory /kernel
+ git log -n1
commit 94402ffe4781f2531492fc8f64c86333b4a859f1
Author: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Date:   Fri Nov 15 16:01:20 2024 +0100

    drm/xe: Increase the XE_PL_TT watermark
    
    The XE_PL_TT watermark was set to 50% of system memory.
    The idea behind that was unclear since the net effect is that
    TT memory will be evicted to TTM_PL_SYSTEM memory if that
    watermark is exceeded, requiring PPGTT rebinds and dma
    remapping. But there is no similar watermark for TTM_PL_SYSTEM
    memory.
    
    The TTM functionality that tries to swap out system memory to
    shmem objects if a 50% limit of total system memory is reached
    is orthogonal to this, and with the shrinker added, it's no
    longer in effect.
    
    Replace the 50% TTM_PL_TT limit with a 100% limit, in effect
    allowing all graphics memory to be bound to the device unless it
    has been swapped out by the shrinker.
    
    Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
    Reviewed-by: Matthew Brost <matthew.brost@intel.com>
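
As a rough, self-contained sketch (not the driver's code; the helper and its
name below are hypothetical), the watermark described above amounts to scaling
a page budget by a percentage of total system memory, so moving from 50% to
100% simply lets the budget cover all of RAM and leaves reclaim to the
shrinker:

#include <stdio.h>
#include <unistd.h>

/* Hypothetical helper: derive a TT page budget from total system RAM. */
static unsigned long tt_budget_pages(unsigned int watermark_percent)
{
	unsigned long long total = sysconf(_SC_PHYS_PAGES);

	return (unsigned long)(total * watermark_percent / 100);
}

int main(void)
{
	printf("50%%: %lu pages, 100%%: %lu pages\n",
	       tt_budget_pages(50), tt_budget_pages(100));
	return 0;
}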
+ /mt/dim checkpatch 90014f8026e31874d368834834253debd131268b drm-intel
56b3726cff66 drm/ttm: Balance ttm_resource_cursor_init() and ttm_resource_cursor_fini()
-:155: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'cursor' - possible side-effects?
#155: FILE: include/drm/ttm/ttm_resource.h:476:
+#define ttm_resource_manager_for_each_res(cursor, res)	\
+	for (res = ttm_resource_manager_first(cursor); res;	\
 	     res = ttm_resource_manager_next(cursor))

-:155: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'res' - possible side-effects?
#155: FILE: include/drm/ttm/ttm_resource.h:476:
+#define ttm_resource_manager_for_each_res(cursor, res)	\
+	for (res = ttm_resource_manager_first(cursor); res;	\
 	     res = ttm_resource_manager_next(cursor))

total: 0 errors, 0 warnings, 2 checks, 114 lines checked
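
For context, the MACRO_ARG_REUSE check fires because 'cursor' and 'res' each
appear more than once in the expansion, so an argument with side effects would
be evaluated more than once. A minimal, generic C illustration of that hazard
(unrelated to the TTM macro itself):

#include <stdio.h>

#define MAX_BAD(a, b) ((a) > (b) ? (a) : (b))	/* each argument may expand twice */

static int calls;

static int next_value(void)
{
	return ++calls;			/* side effect: counts invocations */
}

int main(void)
{
	int m = MAX_BAD(next_value(), 0);	/* next_value() runs twice here */

	printf("m=%d calls=%d\n", m, calls);	/* prints "m=2 calls=2" */
	return 0;
}

For iterator-style macros such as the one flagged above, callers are expected
to pass plain variables, so this class of report is commonly reviewed and
accepted rather than reworked.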
523beb564bf7 drm/ttm: Provide a shmem backup implementation
-:52: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#52: 
new file mode 100644

total: 0 errors, 1 warnings, 0 checks, 286 lines checked
e9a2740a5b6f drm/ttm/pool: Provide a helper to shrink pages
1c07e34363c1 drm/ttm: Use fault-injection to test error paths
8afa93e85483 drm/ttm: Add a macro to perform LRU iteration
-:11: WARNING:COMMIT_LOG_LONG_LINE: Prefer a maximum 75 chars per line (possible unwrapped commit description?)
#11: 
https://lore.kernel.org/linux-mm/b7491378-defd-4f1c-31e2-29e4c77e2d67@amd.com/T/#ma918844aa8a6efe8768fdcda0c6590d5c93850c9

-:253: WARNING:TABSTOP: Statements should start on a tabstop
#253: FILE: include/drm/ttm/ttm_bo.h:508:
+	     if (_T) {ttm_bo_lru_cursor_fini(_T); },

-:253: ERROR:TRAILING_STATEMENTS: trailing statements should be on next line
#253: FILE: include/drm/ttm/ttm_bo.h:508:
+	     if (_T) {ttm_bo_lru_cursor_fini(_T); },

-:253: WARNING:BRACES: braces {} are not necessary for single statement blocks
#253: FILE: include/drm/ttm/ttm_bo.h:508:
+	     if (_T) {ttm_bo_lru_cursor_fini(_T); },

-:279: ERROR:COMPLEX_MACRO: Macros with complex values should be enclosed in parentheses
#279: FILE: include/drm/ttm/ttm_bo.h:534:
+#define ttm_bo_lru_for_each_reserved_guarded(_cursor, _man, _ctx, _bo)	\
+	scoped_guard(ttm_bo_lru_cursor, _cursor, _man, _ctx)		\
+		for ((_bo) = ttm_bo_lru_cursor_first(_cursor); (_bo);	\
+		     (_bo) = ttm_bo_lru_cursor_next(_cursor))

-:279: CHECK:MACRO_ARG_REUSE: Macro argument reuse '_cursor' - possible side-effects?
#279: FILE: include/drm/ttm/ttm_bo.h:534:
+#define ttm_bo_lru_for_each_reserved_guarded(_cursor, _man, _ctx, _bo)	\
+	scoped_guard(ttm_bo_lru_cursor, _cursor, _man, _ctx)		\
+		for ((_bo) = ttm_bo_lru_cursor_first(_cursor); (_bo);	\
+		     (_bo) = ttm_bo_lru_cursor_next(_cursor))

-:279: CHECK:MACRO_ARG_REUSE: Macro argument reuse '_bo' - possible side-effects?
#279: FILE: include/drm/ttm/ttm_bo.h:534:
+#define ttm_bo_lru_for_each_reserved_guarded(_cursor, _man, _ctx, _bo)	\
+	scoped_guard(ttm_bo_lru_cursor, _cursor, _man, _ctx)		\
+		for ((_bo) = ttm_bo_lru_cursor_first(_cursor); (_bo);	\
+		     (_bo) = ttm_bo_lru_cursor_next(_cursor))

total: 2 errors, 3 warnings, 2 checks, 233 lines checked
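
The COMPLEX_MACRO report above is of a similar nature: a macro that expands to
the head of a 'for' (or scoped-guard) statement cannot be wrapped in
parentheses the way an expression macro can, which is why such reports are
typically justified in review rather than silenced. A small, generic sketch of
the pattern (not the series' macro):

#include <stdio.h>

struct node {
	int val;
	struct node *next;
};

/*
 * Expands to a statement head, so it cannot be parenthesized as a whole;
 * checkpatch flags this even though the construct is idiomatic.
 */
#define for_each_node(pos, head) \
	for ((pos) = (head); (pos); (pos) = (pos)->next)

int main(void)
{
	struct node c = { 3, NULL }, b = { 2, &c }, a = { 1, &b };
	struct node *pos;

	for_each_node(pos, &a)
		printf("%d\n", pos->val);	/* prints 1, 2, 3 */

	return 0;
}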
fdbb91847401 drm/ttm: Add helpers for shrinking
ab156969a36c drm/xe: Add a shrinker for xe bos
-:540: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#540: 
new file mode 100644

total: 0 errors, 1 warnings, 0 checks, 705 lines checked
94402ffe4781 drm/xe: Increase the XE_PL_TT watermark



^ permalink raw reply	[flat|nested] 54+ messages in thread

* ✓ CI.KUnit: success for TTM shrinker helpers and xe buffer object shrinker (rev15)
  2024-11-15 15:01 [PATCH v14 0/8] TTM shrinker helpers and xe buffer object shrinker Thomas Hellström
                   ` (19 preceding siblings ...)
  2024-11-18 12:37 ` ✗ CI.checkpatch: warning " Patchwork
@ 2024-11-18 12:38 ` Patchwork
  2024-11-18 12:56 ` ✓ CI.Build: " Patchwork
                   ` (4 subsequent siblings)
  25 siblings, 0 replies; 54+ messages in thread
From: Patchwork @ 2024-11-18 12:38 UTC (permalink / raw)
  To: Thomas Hellström; +Cc: intel-xe

== Series Details ==

Series: TTM shrinker helpers and xe buffer object shrinker (rev15)
URL   : https://patchwork.freedesktop.org/series/131815/
State : success

== Summary ==

+ trap cleanup EXIT
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/xe/.kunitconfig
[12:37:25] Configuring KUnit Kernel ...
Generating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[12:37:29] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json ARCH=um O=.kunit --jobs=48
../lib/iomap.c:156:5: warning: no previous prototype for ‘ioread64_lo_hi’ [-Wmissing-prototypes]
  156 | u64 ioread64_lo_hi(const void __iomem *addr)
      |     ^~~~~~~~~~~~~~
../lib/iomap.c:163:5: warning: no previous prototype for ‘ioread64_hi_lo’ [-Wmissing-prototypes]
  163 | u64 ioread64_hi_lo(const void __iomem *addr)
      |     ^~~~~~~~~~~~~~
../lib/iomap.c:170:5: warning: no previous prototype for ‘ioread64be_lo_hi’ [-Wmissing-prototypes]
  170 | u64 ioread64be_lo_hi(const void __iomem *addr)
      |     ^~~~~~~~~~~~~~~~
../lib/iomap.c:178:5: warning: no previous prototype for ‘ioread64be_hi_lo’ [-Wmissing-prototypes]
  178 | u64 ioread64be_hi_lo(const void __iomem *addr)
      |     ^~~~~~~~~~~~~~~~
../lib/iomap.c:264:6: warning: no previous prototype for ‘iowrite64_lo_hi’ [-Wmissing-prototypes]
  264 | void iowrite64_lo_hi(u64 val, void __iomem *addr)
      |      ^~~~~~~~~~~~~~~
../lib/iomap.c:272:6: warning: no previous prototype for ‘iowrite64_hi_lo’ [-Wmissing-prototypes]
  272 | void iowrite64_hi_lo(u64 val, void __iomem *addr)
      |      ^~~~~~~~~~~~~~~
../lib/iomap.c:280:6: warning: no previous prototype for ‘iowrite64be_lo_hi’ [-Wmissing-prototypes]
  280 | void iowrite64be_lo_hi(u64 val, void __iomem *addr)
      |      ^~~~~~~~~~~~~~~~~
../lib/iomap.c:288:6: warning: no previous prototype for ‘iowrite64be_hi_lo’ [-Wmissing-prototypes]
  288 | void iowrite64be_hi_lo(u64 val, void __iomem *addr)
      |      ^~~~~~~~~~~~~~~~~

[12:37:57] Starting KUnit Kernel (1/1)...
[12:37:57] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[12:37:57] =================== guc_dbm (7 subtests) ===================
[12:37:57] [PASSED] test_empty
[12:37:57] [PASSED] test_default
[12:37:57] ======================== test_size  ========================
[12:37:57] [PASSED] 4
[12:37:57] [PASSED] 8
[12:37:57] [PASSED] 32
[12:37:57] [PASSED] 256
[12:37:57] ==================== [PASSED] test_size ====================
[12:37:57] ======================= test_reuse  ========================
[12:37:57] [PASSED] 4
[12:37:57] [PASSED] 8
[12:37:57] [PASSED] 32
[12:37:57] [PASSED] 256
[12:37:57] =================== [PASSED] test_reuse ====================
[12:37:57] =================== test_range_overlap  ====================
[12:37:57] [PASSED] 4
[12:37:57] [PASSED] 8
[12:37:57] [PASSED] 32
[12:37:57] [PASSED] 256
[12:37:57] =============== [PASSED] test_range_overlap ================
[12:37:57] =================== test_range_compact  ====================
[12:37:57] [PASSED] 4
[12:37:57] [PASSED] 8
[12:37:57] [PASSED] 32
[12:37:57] [PASSED] 256
[12:37:57] =============== [PASSED] test_range_compact ================
[12:37:57] ==================== test_range_spare  =====================
[12:37:57] [PASSED] 4
[12:37:57] [PASSED] 8
[12:37:57] [PASSED] 32
[12:37:57] [PASSED] 256
[12:37:57] ================ [PASSED] test_range_spare =================
[12:37:57] ===================== [PASSED] guc_dbm =====================
[12:37:57] =================== guc_idm (6 subtests) ===================
[12:37:57] [PASSED] bad_init
[12:37:57] [PASSED] no_init
[12:37:57] [PASSED] init_fini
[12:37:57] [PASSED] check_used
[12:37:57] [PASSED] check_quota
[12:37:57] [PASSED] check_all
[12:37:57] ===================== [PASSED] guc_idm =====================
[12:37:57] ================== no_relay (3 subtests) ===================
[12:37:57] [PASSED] xe_drops_guc2pf_if_not_ready
[12:37:57] [PASSED] xe_drops_guc2vf_if_not_ready
[12:37:57] [PASSED] xe_rejects_send_if_not_ready
[12:37:57] ==================== [PASSED] no_relay =====================
[12:37:57] ================== pf_relay (14 subtests) ==================
[12:37:57] [PASSED] pf_rejects_guc2pf_too_short
[12:37:57] [PASSED] pf_rejects_guc2pf_too_long
[12:37:57] [PASSED] pf_rejects_guc2pf_no_payload
[12:37:57] [PASSED] pf_fails_no_payload
[12:37:57] [PASSED] pf_fails_bad_origin
[12:37:57] [PASSED] pf_fails_bad_type
[12:37:57] [PASSED] pf_txn_reports_error
[12:37:57] [PASSED] pf_txn_sends_pf2guc
[12:37:57] [PASSED] pf_sends_pf2guc
[12:37:57] [SKIPPED] pf_loopback_nop
[12:37:57] [SKIPPED] pf_loopback_echo
[12:37:57] [SKIPPED] pf_loopback_fail
[12:37:57] [SKIPPED] pf_loopback_busy
[12:37:57] [SKIPPED] pf_loopback_retry
[12:37:57] ==================== [PASSED] pf_relay =====================
[12:37:57] ================== vf_relay (3 subtests) ===================
[12:37:57] [PASSED] vf_rejects_guc2vf_too_short
[12:37:57] [PASSED] vf_rejects_guc2vf_too_long
[12:37:57] [PASSED] vf_rejects_guc2vf_no_payload
[12:37:57] ==================== [PASSED] vf_relay =====================
[12:37:57] ================= pf_service (11 subtests) =================
[12:37:57] [PASSED] pf_negotiate_any
[12:37:57] [PASSED] pf_negotiate_base_match
[12:37:57] [PASSED] pf_negotiate_base_newer
[12:37:57] [PASSED] pf_negotiate_base_next
[12:37:57] [SKIPPED] pf_negotiate_base_older
[12:37:57] [PASSED] pf_negotiate_base_prev
[12:37:57] [PASSED] pf_negotiate_latest_match
[12:37:57] [PASSED] pf_negotiate_latest_newer
[12:37:57] [PASSED] pf_negotiate_latest_next
[12:37:57] [SKIPPED] pf_negotiate_latest_older
[12:37:57] [SKIPPED] pf_negotiate_latest_prev
[12:37:57] =================== [PASSED] pf_service ====================
[12:37:57] ===================== lmtt (1 subtest) =====================
[12:37:57] ======================== test_ops  =========================
[12:37:57] [PASSED] 2-level
[12:37:57] [PASSED] multi-level
[12:37:57] ==================== [PASSED] test_ops =====================
[12:37:57] ====================== [PASSED] lmtt =======================
[12:37:57] =================== xe_mocs (2 subtests) ===================
[12:37:57] ================ xe_live_mocs_kernel_kunit  ================
[12:37:57] =========== [SKIPPED] xe_live_mocs_kernel_kunit ============
[12:37:57] ================ xe_live_mocs_reset_kunit  =================
[12:37:57] ============ [SKIPPED] xe_live_mocs_reset_kunit ============
[12:37:57] ==================== [SKIPPED] xe_mocs =====================
[12:37:57] ================= xe_migrate (2 subtests) ==================
[12:37:57] ================= xe_migrate_sanity_kunit  =================
[12:37:57] ============ [SKIPPED] xe_migrate_sanity_kunit =============
[12:37:57] ================== xe_validate_ccs_kunit  ==================
[12:37:57] ============= [SKIPPED] xe_validate_ccs_kunit ==============
[12:37:57] =================== [SKIPPED] xe_migrate ===================
[12:37:57] ================== xe_dma_buf (1 subtest) ==================
[12:37:57] ==================== xe_dma_buf_kunit  =====================
[12:37:57] ================ [SKIPPED] xe_dma_buf_kunit ================
[12:37:57] =================== [SKIPPED] xe_dma_buf ===================
[12:37:57] ==================== xe_bo (3 subtests) ====================
[12:37:57] ================== xe_ccs_migrate_kunit  ===================
[12:37:57] ============== [SKIPPED] xe_ccs_migrate_kunit ==============
[12:37:57] ==================== xe_bo_evict_kunit  ====================
[12:37:57] =============== [SKIPPED] xe_bo_evict_kunit ================
[12:37:57] =================== xe_bo_shrink_kunit  ====================
[12:37:57] =============== [SKIPPED] xe_bo_shrink_kunit ===============
[12:37:57] ===================== [SKIPPED] xe_bo ======================
[12:37:57] ==================== args (11 subtests) ====================
[12:37:57] [PASSED] count_args_test
[12:37:57] [PASSED] call_args_example
[12:37:57] [PASSED] call_args_test
[12:37:57] [PASSED] drop_first_arg_example
[12:37:57] [PASSED] drop_first_arg_test
[12:37:57] [PASSED] first_arg_example
[12:37:57] [PASSED] first_arg_test
[12:37:57] [PASSED] last_arg_example
[12:37:57] [PASSED] last_arg_test
[12:37:57] [PASSED] pick_arg_example
[12:37:57] [PASSED] sep_comma_example

[12:37:57] ====================== [PASSED] args =======================
[12:37:57] =================== xe_pci (2 subtests) ====================
[12:37:57] [PASSED] xe_gmdid_graphics_ip
[12:37:57] [PASSED] xe_gmdid_media_ip
[12:37:57] ===================== [PASSED] xe_pci ======================
[12:37:57] =================== xe_rtp (2 subtests) ====================
[12:37:57] =============== xe_rtp_process_to_sr_tests  ================
[12:37:57] [PASSED] coalesce-same-reg
[12:37:57] [PASSED] no-match-no-add
[12:37:57] [PASSED] match-or
[12:37:57] [PASSED] match-or-xfail
[12:37:57] [PASSED] no-match-no-add-multiple-rules
[12:37:57] [PASSED] two-regs-two-entries
[12:37:57] [PASSED] clr-one-set-other
[12:37:57] [PASSED] set-field
[12:37:57] [PASSED] conflict-duplicate
[12:37:57] [PASSED] conflict-not-disjoint
[12:37:57] [PASSED] conflict-reg-type
[12:37:57] =========== [PASSED] xe_rtp_process_to_sr_tests ============
[12:37:57] ================== xe_rtp_process_tests  ===================
[12:37:57] [PASSED] active1
[12:37:57] [PASSED] active2
[12:37:57] [PASSED] active-inactive
[12:37:57] [PASSED] inactive-active
[12:37:57] [PASSED] inactive-1st_or_active-inactive
[12:37:57] [PASSED] inactive-2nd_or_active-inactive
[12:37:57] [PASSED] inactive-last_or_active-inactive
[12:37:57] [PASSED] inactive-no_or_active-inactive
[12:37:57] ============== [PASSED] xe_rtp_process_tests ===============
[12:37:57] ===================== [PASSED] xe_rtp ======================
[12:37:57] ==================== xe_wa (1 subtest) =====================
[12:37:57] ======================== xe_wa_gt  =========================
[12:37:57] [PASSED] TIGERLAKE (B0)
[12:37:57] [PASSED] DG1 (A0)
[12:37:57] [PASSED] DG1 (B0)
[12:37:57] [PASSED] ALDERLAKE_S (A0)
[12:37:57] [PASSED] ALDERLAKE_S (B0)
[12:37:57] [PASSED] ALDERLAKE_S (C0)
[12:37:57] [PASSED] ALDERLAKE_S (D0)
[12:37:57] [PASSED] ALDERLAKE_P (A0)
[12:37:57] [PASSED] ALDERLAKE_P (B0)
[12:37:57] [PASSED] ALDERLAKE_P (C0)
[12:37:57] [PASSED] ALDERLAKE_S_RPLS (D0)
[12:37:57] [PASSED] ALDERLAKE_P_RPLU (E0)
[12:37:57] [PASSED] DG2_G10 (C0)
[12:37:57] [PASSED] DG2_G11 (B1)
[12:37:57] [PASSED] DG2_G12 (A1)
[12:37:57] [PASSED] METEORLAKE (g:A0, m:A0)
[12:37:57] [PASSED] METEORLAKE (g:A0, m:A0)
[12:37:57] [PASSED] METEORLAKE (g:A0, m:A0)
[12:37:57] [PASSED] LUNARLAKE (g:A0, m:A0)
[12:37:57] [PASSED] LUNARLAKE (g:B0, m:A0)
[12:37:57] [PASSED] BATTLEMAGE (g:A0, m:A1)
[12:37:57] ==================== [PASSED] xe_wa_gt =====================
[12:37:57] ====================== [PASSED] xe_wa ======================
[12:37:57] ============================================================
[12:37:57] Testing complete. Ran 122 tests: passed: 106, skipped: 16
[12:37:57] Elapsed time: 32.963s total, 4.426s configuring, 28.271s building, 0.221s running
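
Each suite and subtest listed above corresponds to a struct kunit_suite and
its kunit_case entries built into the UML test kernel. A minimal example of
the shape of such a test (generic KUnit usage, not one of the suites above):

#include <kunit/test.h>

static void example_math_test(struct kunit *test)
{
	/* KUNIT_EXPECT_* records a failure but lets the case continue. */
	KUNIT_EXPECT_EQ(test, 2 + 2, 4);
}

static struct kunit_case example_cases[] = {
	KUNIT_CASE(example_math_test),
	{}
};

static struct kunit_suite example_suite = {
	.name = "example",
	.test_cases = example_cases,
};
kunit_test_suite(example_suite);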

+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/tests/.kunitconfig
[12:37:58] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[12:37:59] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json ARCH=um O=.kunit --jobs=48
../lib/iomap.c:156:5: warning: no previous prototype for ‘ioread64_lo_hi’ [-Wmissing-prototypes]
  156 | u64 ioread64_lo_hi(const void __iomem *addr)
      |     ^~~~~~~~~~~~~~
../lib/iomap.c:163:5: warning: no previous prototype for ‘ioread64_hi_lo’ [-Wmissing-prototypes]
  163 | u64 ioread64_hi_lo(const void __iomem *addr)
      |     ^~~~~~~~~~~~~~
../lib/iomap.c:170:5: warning: no previous prototype for ‘ioread64be_lo_hi’ [-Wmissing-prototypes]
  170 | u64 ioread64be_lo_hi(const void __iomem *addr)
      |     ^~~~~~~~~~~~~~~~
../lib/iomap.c:178:5: warning: no previous prototype for ‘ioread64be_hi_lo’ [-Wmissing-prototypes]
  178 | u64 ioread64be_hi_lo(const void __iomem *addr)
      |     ^~~~~~~~~~~~~~~~
../lib/iomap.c:264:6: warning: no previous prototype for ‘iowrite64_lo_hi’ [-Wmissing-prototypes]
  264 | void iowrite64_lo_hi(u64 val, void __iomem *addr)
      |      ^~~~~~~~~~~~~~~
../lib/iomap.c:272:6: warning: no previous prototype for ‘iowrite64_hi_lo’ [-Wmissing-prototypes]
  272 | void iowrite64_hi_lo(u64 val, void __iomem *addr)
      |      ^~~~~~~~~~~~~~~
../lib/iomap.c:280:6: warning: no previous prototype for ‘iowrite64be_lo_hi’ [-Wmissing-prototypes]
  280 | void iowrite64be_lo_hi(u64 val, void __iomem *addr)
      |      ^~~~~~~~~~~~~~~~~
../lib/iomap.c:288:6: warning: no previous prototype for ‘iowrite64be_hi_lo’ [-Wmissing-prototypes]
  288 | void iowrite64be_hi_lo(u64 val, void __iomem *addr)
      |      ^~~~~~~~~~~~~~~~~

[12:38:22] Starting KUnit Kernel (1/1)...
[12:38:22] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[12:38:22] ================== drm_buddy (7 subtests) ==================
[12:38:22] [PASSED] drm_test_buddy_alloc_limit
[12:38:22] [PASSED] drm_test_buddy_alloc_optimistic
[12:38:22] [PASSED] drm_test_buddy_alloc_pessimistic
[12:38:22] [PASSED] drm_test_buddy_alloc_pathological
[12:38:22] [PASSED] drm_test_buddy_alloc_contiguous
[12:38:22] [PASSED] drm_test_buddy_alloc_clear
[12:38:22] [PASSED] drm_test_buddy_alloc_range_bias
[12:38:22] ==================== [PASSED] drm_buddy ====================
[12:38:22] ============= drm_cmdline_parser (40 subtests) =============
[12:38:22] [PASSED] drm_test_cmdline_force_d_only
[12:38:22] [PASSED] drm_test_cmdline_force_D_only_dvi
[12:38:22] [PASSED] drm_test_cmdline_force_D_only_hdmi
[12:38:22] [PASSED] drm_test_cmdline_force_D_only_not_digital
[12:38:22] [PASSED] drm_test_cmdline_force_e_only
[12:38:22] [PASSED] drm_test_cmdline_res
[12:38:22] [PASSED] drm_test_cmdline_res_vesa
[12:38:22] [PASSED] drm_test_cmdline_res_vesa_rblank
[12:38:22] [PASSED] drm_test_cmdline_res_rblank
[12:38:22] [PASSED] drm_test_cmdline_res_bpp
[12:38:22] [PASSED] drm_test_cmdline_res_refresh
[12:38:22] [PASSED] drm_test_cmdline_res_bpp_refresh
[12:38:22] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced
[12:38:22] [PASSED] drm_test_cmdline_res_bpp_refresh_margins
[12:38:22] [PASSED] drm_test_cmdline_res_bpp_refresh_force_off
[12:38:22] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on
[12:38:22] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_analog
[12:38:22] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_digital
[12:38:22] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced_margins_force_on
[12:38:22] [PASSED] drm_test_cmdline_res_margins_force_on
[12:38:22] [PASSED] drm_test_cmdline_res_vesa_margins
[12:38:22] [PASSED] drm_test_cmdline_name
[12:38:22] [PASSED] drm_test_cmdline_name_bpp
[12:38:22] [PASSED] drm_test_cmdline_name_option
[12:38:22] [PASSED] drm_test_cmdline_name_bpp_option
[12:38:22] [PASSED] drm_test_cmdline_rotate_0
[12:38:22] [PASSED] drm_test_cmdline_rotate_90
[12:38:22] [PASSED] drm_test_cmdline_rotate_180
[12:38:22] [PASSED] drm_test_cmdline_rotate_270
[12:38:22] [PASSED] drm_test_cmdline_hmirror
[12:38:22] [PASSED] drm_test_cmdline_vmirror
[12:38:22] [PASSED] drm_test_cmdline_margin_options
[12:38:22] [PASSED] drm_test_cmdline_multiple_options
[12:38:22] [PASSED] drm_test_cmdline_bpp_extra_and_option
[12:38:22] [PASSED] drm_test_cmdline_extra_and_option
[12:38:22] [PASSED] drm_test_cmdline_freestanding_options
[12:38:22] [PASSED] drm_test_cmdline_freestanding_force_e_and_options
[12:38:22] [PASSED] drm_test_cmdline_panel_orientation
[12:38:22] ================ drm_test_cmdline_invalid  =================
[12:38:22] [PASSED] margin_only
[12:38:22] [PASSED] interlace_only
[12:38:22] [PASSED] res_missing_x
[12:38:22] [PASSED] res_missing_y
[12:38:22] [PASSED] res_bad_y
[12:38:22] [PASSED] res_missing_y_bpp
[12:38:22] [PASSED] res_bad_bpp
[12:38:22] [PASSED] res_bad_refresh
[12:38:22] [PASSED] res_bpp_refresh_force_on_off
[12:38:22] [PASSED] res_invalid_mode
[12:38:22] [PASSED] res_bpp_wrong_place_mode
[12:38:22] [PASSED] name_bpp_refresh
[12:38:22] [PASSED] name_refresh
[12:38:22] [PASSED] name_refresh_wrong_mode
[12:38:22] [PASSED] name_refresh_invalid_mode
[12:38:22] [PASSED] rotate_multiple
[12:38:22] [PASSED] rotate_invalid_val
[12:38:22] [PASSED] rotate_truncated
[12:38:22] [PASSED] invalid_option
[12:38:22] [PASSED] invalid_tv_option
[12:38:22] [PASSED] truncated_tv_option
[12:38:22] ============ [PASSED] drm_test_cmdline_invalid =============
[12:38:22] =============== drm_test_cmdline_tv_options  ===============
[12:38:22] [PASSED] NTSC
[12:38:22] [PASSED] NTSC_443
[12:38:22] [PASSED] NTSC_J
[12:38:22] [PASSED] PAL
[12:38:22] [PASSED] PAL_M
[12:38:22] [PASSED] PAL_N
[12:38:22] [PASSED] SECAM
[12:38:22] [PASSED] MONO_525
[12:38:22] [PASSED] MONO_625
[12:38:22] =========== [PASSED] drm_test_cmdline_tv_options ===========
[12:38:22] =============== [PASSED] drm_cmdline_parser ================
[12:38:22] ========== drmm_connector_hdmi_init (19 subtests) ==========
[12:38:22] [PASSED] drm_test_connector_hdmi_init_valid
[12:38:22] [PASSED] drm_test_connector_hdmi_init_bpc_8
[12:38:22] [PASSED] drm_test_connector_hdmi_init_bpc_10
[12:38:22] [PASSED] drm_test_connector_hdmi_init_bpc_12
[12:38:22] [PASSED] drm_test_connector_hdmi_init_bpc_invalid
[12:38:22] [PASSED] drm_test_connector_hdmi_init_bpc_null
[12:38:22] [PASSED] drm_test_connector_hdmi_init_formats_empty
[12:38:22] [PASSED] drm_test_connector_hdmi_init_formats_no_rgb
[12:38:22] [PASSED] drm_test_connector_hdmi_init_null_ddc
[12:38:22] [PASSED] drm_test_connector_hdmi_init_null_product
[12:38:22] [PASSED] drm_test_connector_hdmi_init_null_vendor
[12:38:22] [PASSED] drm_test_connector_hdmi_init_product_length_exact
[12:38:22] [PASSED] drm_test_connector_hdmi_init_product_length_too_long
[12:38:22] [PASSED] drm_test_connector_hdmi_init_product_valid
[12:38:22] [PASSED] drm_test_connector_hdmi_init_vendor_length_exact
[12:38:22] [PASSED] drm_test_connector_hdmi_init_vendor_length_too_long
[12:38:22] [PASSED] drm_test_connector_hdmi_init_vendor_valid
[12:38:22] ========= drm_test_connector_hdmi_init_type_valid  =========
[12:38:22] [PASSED] HDMI-A
[12:38:22] [PASSED] HDMI-B
[12:38:22] ===== [PASSED] drm_test_connector_hdmi_init_type_valid =====
[12:38:22] ======== drm_test_connector_hdmi_init_type_invalid  ========
[12:38:22] [PASSED] Unknown
[12:38:22] [PASSED] VGA
[12:38:22] [PASSED] DVI-I
[12:38:22] [PASSED] DVI-D
[12:38:22] [PASSED] DVI-A
[12:38:22] [PASSED] Composite
[12:38:22] [PASSED] SVIDEO
[12:38:22] [PASSED] LVDS
[12:38:22] [PASSED] Component
[12:38:22] [PASSED] DIN
[12:38:22] [PASSED] DP
[12:38:22] [PASSED] TV
[12:38:22] [PASSED] eDP
[12:38:22] [PASSED] Virtual
[12:38:22] [PASSED] DSI
[12:38:22] [PASSED] DPI
[12:38:22] [PASSED] Writeback
[12:38:22] [PASSED] SPI
[12:38:22] [PASSED] USB
[12:38:22] ==== [PASSED] drm_test_connector_hdmi_init_type_invalid ====
[12:38:22] ============ [PASSED] drmm_connector_hdmi_init =============
[12:38:22] ============= drmm_connector_init (3 subtests) =============
[12:38:22] [PASSED] drm_test_drmm_connector_init
[12:38:22] [PASSED] drm_test_drmm_connector_init_null_ddc
[12:38:22] ========= drm_test_drmm_connector_init_type_valid  =========
[12:38:22] [PASSED] Unknown
[12:38:22] [PASSED] VGA
[12:38:22] [PASSED] DVI-I
[12:38:22] [PASSED] DVI-D
[12:38:22] [PASSED] DVI-A
[12:38:22] [PASSED] Composite
[12:38:22] [PASSED] SVIDEO
[12:38:22] [PASSED] LVDS
[12:38:22] [PASSED] Component
[12:38:22] [PASSED] DIN
[12:38:22] [PASSED] DP
[12:38:22] [PASSED] HDMI-A
[12:38:22] [PASSED] HDMI-B
[12:38:22] [PASSED] TV
[12:38:22] [PASSED] eDP
[12:38:22] [PASSED] Virtual
[12:38:22] [PASSED] DSI
[12:38:22] [PASSED] DPI
[12:38:22] [PASSED] Writeback
[12:38:22] [PASSED] SPI
[12:38:22] [PASSED] USB
[12:38:22] ===== [PASSED] drm_test_drmm_connector_init_type_valid =====
[12:38:22] =============== [PASSED] drmm_connector_init ===============
[12:38:22] = drm_connector_attach_broadcast_rgb_property (2 subtests) =
[12:38:22] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property
[12:38:22] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property_hdmi_connector
[12:38:22] === [PASSED] drm_connector_attach_broadcast_rgb_property ===
[12:38:22] ========== drm_get_tv_mode_from_name (2 subtests) ==========
[12:38:22] ========== drm_test_get_tv_mode_from_name_valid  ===========
[12:38:22] [PASSED] NTSC
[12:38:22] [PASSED] NTSC-443
[12:38:22] [PASSED] NTSC-J
[12:38:22] [PASSED] PAL
[12:38:22] [PASSED] PAL-M
[12:38:22] [PASSED] PAL-N
[12:38:22] [PASSED] SECAM
[12:38:22] [PASSED] Mono
[12:38:22] ====== [PASSED] drm_test_get_tv_mode_from_name_valid =======
[12:38:22] [PASSED] drm_test_get_tv_mode_from_name_truncated
[12:38:22] ============ [PASSED] drm_get_tv_mode_from_name ============
[12:38:22] = drm_test_connector_hdmi_compute_mode_clock (12 subtests) =
[12:38:22] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb
[12:38:22] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc
[12:38:22] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc_vic_1
[12:38:22] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc
[12:38:22] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc_vic_1
[12:38:22] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_double
[12:38:22] = drm_test_connector_hdmi_compute_mode_clock_yuv420_valid  =
[12:38:22] [PASSED] VIC 96
[12:38:22] [PASSED] VIC 97
[12:38:22] [PASSED] VIC 101
[12:38:22] [PASSED] VIC 102
[12:38:22] [PASSED] VIC 106
[12:38:22] [PASSED] VIC 107
[12:38:22] === [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_valid ===
[12:38:22] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_10_bpc
[12:38:22] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_12_bpc
[12:38:22] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_8_bpc
[12:38:22] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_10_bpc
[12:38:22] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_12_bpc
[12:38:22] === [PASSED] drm_test_connector_hdmi_compute_mode_clock ====
[12:38:22] == drm_hdmi_connector_get_broadcast_rgb_name (2 subtests) ==
[12:38:22] === drm_test_drm_hdmi_connector_get_broadcast_rgb_name  ====
[12:38:22] [PASSED] Automatic
[12:38:22] [PASSED] Full
[12:38:22] [PASSED] Limited 16:235
[12:38:22] === [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name ===
[12:38:22] [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name_invalid
[12:38:22] ==== [PASSED] drm_hdmi_connector_get_broadcast_rgb_name ====
[12:38:22] == drm_hdmi_connector_get_output_format_name (2 subtests) ==
[12:38:22] === drm_test_drm_hdmi_connector_get_output_format_name  ====
[12:38:22] [PASSED] RGB
[12:38:22] [PASSED] YUV 4:2:0
[12:38:22] [PASSED] YUV 4:2:2
[12:38:22] [PASSED] YUV 4:4:4
[12:38:22] === [PASSED] drm_test_drm_hdmi_connector_get_output_format_name ===
[12:38:22] [PASSED] drm_test_drm_hdmi_connector_get_output_format_name_invalid
[12:38:22] ==== [PASSED] drm_hdmi_connector_get_output_format_name ====
[12:38:22] ============= drm_damage_helper (21 subtests) ==============
[12:38:22] [PASSED] drm_test_damage_iter_no_damage
[12:38:22] [PASSED] drm_test_damage_iter_no_damage_fractional_src
[12:38:22] [PASSED] drm_test_damage_iter_no_damage_src_moved
[12:38:22] [PASSED] drm_test_damage_iter_no_damage_fractional_src_moved
[12:38:22] [PASSED] drm_test_damage_iter_no_damage_not_visible
[12:38:22] [PASSED] drm_test_damage_iter_no_damage_no_crtc
[12:38:22] [PASSED] drm_test_damage_iter_no_damage_no_fb
[12:38:22] [PASSED] drm_test_damage_iter_simple_damage
[12:38:22] [PASSED] drm_test_damage_iter_single_damage
[12:38:22] [PASSED] drm_test_damage_iter_single_damage_intersect_src
[12:38:22] [PASSED] drm_test_damage_iter_single_damage_outside_src
[12:38:22] [PASSED] drm_test_damage_iter_single_damage_fractional_src
[12:38:22] [PASSED] drm_test_damage_iter_single_damage_intersect_fractional_src
[12:38:22] [PASSED] drm_test_damage_iter_single_damage_outside_fractional_src
[12:38:22] [PASSED] drm_test_damage_iter_single_damage_src_moved
[12:38:22] [PASSED] drm_test_damage_iter_single_damage_fractional_src_moved
[12:38:22] [PASSED] drm_test_damage_iter_damage
[12:38:22] [PASSED] drm_test_damage_iter_damage_one_intersect
[12:38:22] [PASSED] drm_test_damage_iter_damage_one_outside
[12:38:22] [PASSED] drm_test_damage_iter_damage_src_moved
[12:38:22] [PASSED] drm_test_damage_iter_damage_not_visible
[12:38:22] ================ [PASSED] drm_damage_helper ================
[12:38:22] ============== drm_dp_mst_helper (3 subtests) ==============
[12:38:22] ============== drm_test_dp_mst_calc_pbn_mode  ==============
[12:38:22] [PASSED] Clock 154000 BPP 30 DSC disabled
[12:38:22] [PASSED] Clock 234000 BPP 30 DSC disabled
[12:38:22] [PASSED] Clock 297000 BPP 24 DSC disabled
[12:38:22] [PASSED] Clock 332880 BPP 24 DSC enabled
[12:38:22] [PASSED] Clock 324540 BPP 24 DSC enabled
[12:38:22] ========== [PASSED] drm_test_dp_mst_calc_pbn_mode ==========
[12:38:22] ============== drm_test_dp_mst_calc_pbn_div  ===============
[12:38:22] [PASSED] Link rate 2000000 lane count 4
[12:38:22] [PASSED] Link rate 2000000 lane count 2
[12:38:22] [PASSED] Link rate 2000000 lane count 1
[12:38:22] [PASSED] Link rate 1350000 lane count 4
[12:38:22] [PASSED] Link rate 1350000 lane count 2
[12:38:22] [PASSED] Link rate 1350000 lane count 1
[12:38:22] [PASSED] Link rate 1000000 lane count 4
[12:38:22] [PASSED] Link rate 1000000 lane count 2
[12:38:22] [PASSED] Link rate 1000000 lane count 1
[12:38:22] [PASSED] Link rate 810000 lane count 4
[12:38:22] [PASSED] Link rate 810000 lane count 2
[12:38:22] [PASSED] Link rate 810000 lane count 1
[12:38:22] [PASSED] Link rate 540000 lane count 4
[12:38:22] [PASSED] Link rate 540000 lane count 2
[12:38:22] [PASSED] Link rate 540000 lane count 1
[12:38:22] [PASSED] Link rate 270000 lane count 4
[12:38:22] [PASSED] Link rate 270000 lane count 2
[12:38:22] [PASSED] Link rate 270000 lane count 1
[12:38:22] [PASSED] Link rate 162000 lane count 4
[12:38:22] [PASSED] Link rate 162000 lane count 2
[12:38:22] [PASSED] Link rate 162000 lane count 1
[12:38:22] ========== [PASSED] drm_test_dp_mst_calc_pbn_div ===========
[12:38:22] ========= drm_test_dp_mst_sideband_msg_req_decode  =========
[12:38:22] [PASSED] DP_ENUM_PATH_RESOURCES with port number
[12:38:22] [PASSED] DP_POWER_UP_PHY with port number
[12:38:22] [PASSED] DP_POWER_DOWN_PHY with port number
[12:38:22] [PASSED] DP_ALLOCATE_PAYLOAD with SDP stream sinks
[12:38:22] [PASSED] DP_ALLOCATE_PAYLOAD with port number
[12:38:22] [PASSED] DP_ALLOCATE_PAYLOAD with VCPI
[12:38:22] [PASSED] DP_ALLOCATE_PAYLOAD with PBN
[12:38:22] [PASSED] DP_QUERY_PAYLOAD with port number
[12:38:22] [PASSED] DP_QUERY_PAYLOAD with VCPI
[12:38:22] [PASSED] DP_REMOTE_DPCD_READ with port number
[12:38:22] [PASSED] DP_REMOTE_DPCD_READ with DPCD address
[12:38:22] [PASSED] DP_REMOTE_DPCD_READ with max number of bytes
[12:38:22] [PASSED] DP_REMOTE_DPCD_WRITE with port number
[12:38:22] [PASSED] DP_REMOTE_DPCD_WRITE with DPCD address
[12:38:22] [PASSED] DP_REMOTE_DPCD_WRITE with data array
[12:38:22] [PASSED] DP_REMOTE_I2C_READ with port number
[12:38:22] [PASSED] DP_REMOTE_I2C_READ with I2C device ID
[12:38:22] [PASSED] DP_REMOTE_I2C_READ with transactions array
[12:38:22] [PASSED] DP_REMOTE_I2C_WRITE with port number
[12:38:22] [PASSED] DP_REMOTE_I2C_WRITE with I2C device ID
[12:38:22] [PASSED] DP_REMOTE_I2C_WRITE with data array
[12:38:22] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream ID
[12:38:22] [PASSED] DP_QUERY_STREAM_ENC_STATUS with client ID
[12:38:22] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream event
[12:38:22] [PASSED] DP_QUERY_STREAM_ENC_STATUS with valid stream event
[12:38:22] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream behavior
[12:38:22] [PASSED] DP_QUERY_STREAM_ENC_STATUS with a valid stream behavior
[12:38:22] ===== [PASSED] drm_test_dp_mst_sideband_msg_req_decode =====
[12:38:22] ================ [PASSED] drm_dp_mst_helper ================
[12:38:22] ================== drm_exec (7 subtests) ===================
[12:38:22] [PASSED] sanitycheck
[12:38:22] [PASSED] test_lock
[12:38:22] [PASSED] test_lock_unlock
[12:38:22] [PASSED] test_duplicates
[12:38:22] [PASSED] test_prepare
[12:38:22] [PASSED] test_prepare_array
[12:38:22] [PASSED] test_multiple_loops
[12:38:22] ==================== [PASSED] drm_exec =====================
[12:38:22] =========== drm_format_helper_test (17 subtests) ===========
[12:38:22] ============== drm_test_fb_xrgb8888_to_gray8  ==============
[12:38:22] [PASSED] single_pixel_source_buffer
[12:38:22] [PASSED] single_pixel_clip_rectangle
[12:38:22] [PASSED] well_known_colors
[12:38:22] [PASSED] destination_pitch
[12:38:22] ========== [PASSED] drm_test_fb_xrgb8888_to_gray8 ==========
[12:38:22] ============= drm_test_fb_xrgb8888_to_rgb332  ==============
[12:38:22] [PASSED] single_pixel_source_buffer
[12:38:22] [PASSED] single_pixel_clip_rectangle
[12:38:22] [PASSED] well_known_colors
[12:38:22] [PASSED] destination_pitch
[12:38:22] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb332 ==========
[12:38:22] ============= drm_test_fb_xrgb8888_to_rgb565  ==============
[12:38:22] [PASSED] single_pixel_source_buffer
[12:38:22] [PASSED] single_pixel_clip_rectangle
[12:38:22] [PASSED] well_known_colors
[12:38:22] [PASSED] destination_pitch
[12:38:22] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb565 ==========
[12:38:22] ============ drm_test_fb_xrgb8888_to_xrgb1555  =============
[12:38:22] [PASSED] single_pixel_source_buffer
[12:38:22] [PASSED] single_pixel_clip_rectangle
[12:38:22] [PASSED] well_known_colors
[12:38:22] [PASSED] destination_pitch
[12:38:22] ======== [PASSED] drm_test_fb_xrgb8888_to_xrgb1555 =========
[12:38:22] ============ drm_test_fb_xrgb8888_to_argb1555  =============
[12:38:22] [PASSED] single_pixel_source_buffer
[12:38:22] [PASSED] single_pixel_clip_rectangle
[12:38:22] [PASSED] well_known_colors
[12:38:22] [PASSED] destination_pitch
[12:38:22] ======== [PASSED] drm_test_fb_xrgb8888_to_argb1555 =========
[12:38:22] ============ drm_test_fb_xrgb8888_to_rgba5551  =============
[12:38:22] [PASSED] single_pixel_source_buffer
[12:38:22] [PASSED] single_pixel_clip_rectangle
[12:38:22] [PASSED] well_known_colors
[12:38:22] [PASSED] destination_pitch
[12:38:22] ======== [PASSED] drm_test_fb_xrgb8888_to_rgba5551 =========
[12:38:22] ============= drm_test_fb_xrgb8888_to_rgb888  ==============
[12:38:22] [PASSED] single_pixel_source_buffer
[12:38:22] [PASSED] single_pixel_clip_rectangle
[12:38:22] [PASSED] well_known_colors
[12:38:22] [PASSED] destination_pitch
[12:38:22] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb888 ==========
[12:38:22] ============ drm_test_fb_xrgb8888_to_argb8888  =============
[12:38:22] [PASSED] single_pixel_source_buffer
[12:38:22] [PASSED] single_pixel_clip_rectangle
[12:38:22] [PASSED] well_known_colors
[12:38:22] [PASSED] destination_pitch
[12:38:22] ======== [PASSED] drm_test_fb_xrgb8888_to_argb8888 =========
[12:38:22] =========== drm_test_fb_xrgb8888_to_xrgb2101010  ===========
[12:38:22] [PASSED] single_pixel_source_buffer
[12:38:22] [PASSED] single_pixel_clip_rectangle
[12:38:22] [PASSED] well_known_colors
[12:38:22] [PASSED] destination_pitch
[12:38:22] ======= [PASSED] drm_test_fb_xrgb8888_to_xrgb2101010 =======
[12:38:22] =========== drm_test_fb_xrgb8888_to_argb2101010  ===========
[12:38:22] [PASSED] single_pixel_source_buffer
[12:38:22] [PASSED] single_pixel_clip_rectangle
[12:38:22] [PASSED] well_known_colors
[12:38:22] [PASSED] destination_pitch
[12:38:22] ======= [PASSED] drm_test_fb_xrgb8888_to_argb2101010 =======
[12:38:22] ============== drm_test_fb_xrgb8888_to_mono  ===============
[12:38:22] [PASSED] single_pixel_source_buffer
[12:38:22] [PASSED] single_pixel_clip_rectangle
[12:38:22] [PASSED] well_known_colors
[12:38:22] [PASSED] destination_pitch
[12:38:22] ========== [PASSED] drm_test_fb_xrgb8888_to_mono ===========
[12:38:22] ==================== drm_test_fb_swab  =====================
[12:38:22] [PASSED] single_pixel_source_buffer
[12:38:22] [PASSED] single_pixel_clip_rectangle
[12:38:22] [PASSED] well_known_colors
[12:38:22] [PASSED] destination_pitch
[12:38:22] ================ [PASSED] drm_test_fb_swab =================
[12:38:22] ============ drm_test_fb_xrgb8888_to_xbgr8888  =============
[12:38:22] [PASSED] single_pixel_source_buffer
[12:38:22] [PASSED] single_pixel_clip_rectangle
[12:38:22] [PASSED] well_known_colors
[12:38:22] [PASSED] destination_pitch
[12:38:22] ======== [PASSED] drm_test_fb_xrgb8888_to_xbgr8888 =========
[12:38:22] ============ drm_test_fb_xrgb8888_to_abgr8888  =============
[12:38:22] [PASSED] single_pixel_source_buffer
[12:38:22] [PASSED] single_pixel_clip_rectangle
[12:38:22] [PASSED] well_known_colors
[12:38:22] [PASSED] destination_pitch
[12:38:22] ======== [PASSED] drm_test_fb_xrgb8888_to_abgr8888 =========
[12:38:22] ================= drm_test_fb_clip_offset  =================
[12:38:22] [PASSED] pass through
[12:38:22] [PASSED] horizontal offset
[12:38:22] [PASSED] vertical offset
[12:38:22] [PASSED] horizontal and vertical offset
[12:38:22] [PASSED] horizontal offset (custom pitch)
[12:38:22] [PASSED] vertical offset (custom pitch)
[12:38:22] [PASSED] horizontal and vertical offset (custom pitch)
[12:38:22] ============= [PASSED] drm_test_fb_clip_offset =============
[12:38:22] ============== drm_test_fb_build_fourcc_list  ==============
[12:38:22] [PASSED] no native formats
[12:38:22] [PASSED] XRGB8888 as native format
[12:38:22] [PASSED] remove duplicates
[12:38:22] [PASSED] convert alpha formats
[12:38:22] [PASSED] random formats
[12:38:22] ========== [PASSED] drm_test_fb_build_fourcc_list ==========
[12:38:22] =================== drm_test_fb_memcpy  ====================
[12:38:22] [PASSED] single_pixel_source_buffer: XR24 little-endian (0x34325258)
[12:38:22] [PASSED] single_pixel_source_buffer: XRA8 little-endian (0x38415258)
[12:38:22] [PASSED] single_pixel_source_buffer: YU24 little-endian (0x34325559)
[12:38:22] [PASSED] single_pixel_clip_rectangle: XB24 little-endian (0x34324258)
[12:38:22] [PASSED] single_pixel_clip_rectangle: XRA8 little-endian (0x38415258)
[12:38:22] [PASSED] single_pixel_clip_rectangle: YU24 little-endian (0x34325559)
[12:38:22] [PASSED] well_known_colors: XB24 little-endian (0x34324258)
[12:38:22] [PASSED] well_known_colors: XRA8 little-endian (0x38415258)
[12:38:22] [PASSED] well_known_colors: YU24 little-endian (0x34325559)
[12:38:22] [PASSED] destination_pitch: XB24 little-endian (0x34324258)
[12:38:22] [PASSED] destination_pitch: XRA8 little-endian (0x38415258)
[12:38:22] [PASSED] destination_pitch: YU24 little-endian (0x34325559)
[12:38:22] =============== [PASSED] drm_test_fb_memcpy ================
[12:38:22] ============= [PASSED] drm_format_helper_test ==============
[12:38:22] ================= drm_format (18 subtests) =================
[12:38:22] [PASSED] drm_test_format_block_width_invalid
[12:38:22] [PASSED] drm_test_format_block_width_one_plane
[12:38:22] [PASSED] drm_test_format_block_width_two_plane
[12:38:22] [PASSED] drm_test_format_block_width_three_plane
[12:38:22] [PASSED] drm_test_format_block_width_tiled
[12:38:22] [PASSED] drm_test_format_block_height_invalid
[12:38:22] [PASSED] drm_test_format_block_height_one_plane
[12:38:22] [PASSED] drm_test_format_block_height_two_plane
[12:38:22] [PASSED] drm_test_format_block_height_three_plane
[12:38:22] [PASSED] drm_test_format_block_height_tiled
[12:38:22] [PASSED] drm_test_format_min_pitch_invalid
[12:38:22] [PASSED] drm_test_format_min_pitch_one_plane_8bpp
[12:38:22] [PASSED] drm_test_format_min_pitch_one_plane_16bpp
[12:38:22] [PASSED] drm_test_format_min_pitch_one_plane_24bpp
[12:38:22] [PASSED] drm_test_format_min_pitch_one_plane_32bpp
[12:38:22] [PASSED] drm_test_format_min_pitch_two_plane
[12:38:22] [PASSED] drm_test_format_min_pitch_three_plane_8bpp
[12:38:22] [PASSED] drm_test_format_min_pitch_tiled
[12:38:22] =================== [PASSED] drm_format ====================
[12:38:22] ============== drm_framebuffer (10 subtests) ===============
[12:38:22] ========== drm_test_framebuffer_check_src_coords  ==========
[12:38:22] [PASSED] Success: source fits into fb
[12:38:22] [PASSED] Fail: overflowing fb with x-axis coordinate
[12:38:22] [PASSED] Fail: overflowing fb with y-axis coordinate
[12:38:22] [PASSED] Fail: overflowing fb with source width
[12:38:22] [PASSED] Fail: overflowing fb with source height
[12:38:22] ====== [PASSED] drm_test_framebuffer_check_src_coords ======
[12:38:22] [PASSED] drm_test_framebuffer_cleanup
[12:38:22] =============== drm_test_framebuffer_create  ===============
[12:38:22] [PASSED] ABGR8888 normal sizes
[12:38:22] [PASSED] ABGR8888 max sizes
[12:38:22] [PASSED] ABGR8888 pitch greater than min required
[12:38:22] [PASSED] ABGR8888 pitch less than min required
[12:38:22] [PASSED] ABGR8888 Invalid width
[12:38:22] [PASSED] ABGR8888 Invalid buffer handle
[12:38:22] [PASSED] No pixel format
[12:38:22] [PASSED] ABGR8888 Width 0
[12:38:22] [PASSED] ABGR8888 Height 0
[12:38:22] [PASSED] ABGR8888 Out of bound height * pitch combination
[12:38:22] [PASSED] ABGR8888 Large buffer offset
[12:38:22] [PASSED] ABGR8888 Buffer offset for inexistent plane
[12:38:22] [PASSED] ABGR8888 Invalid flag
[12:38:22] [PASSED] ABGR8888 Set DRM_MODE_FB_MODIFIERS without modifiers
[12:38:22] [PASSED] ABGR8888 Valid buffer modifier
[12:38:22] [PASSED] ABGR8888 Invalid buffer modifier(DRM_FORMAT_MOD_SAMSUNG_64_32_TILE)
[12:38:22] [PASSED] ABGR8888 Extra pitches without DRM_MODE_FB_MODIFIERS
[12:38:22] [PASSED] ABGR8888 Extra pitches with DRM_MODE_FB_MODIFIERS
[12:38:22] [PASSED] NV12 Normal sizes
[12:38:22] [PASSED] NV12 Max sizes
[12:38:22] [PASSED] NV12 Invalid pitch
[12:38:22] [PASSED] NV12 Invalid modifier/missing DRM_MODE_FB_MODIFIERS flag
[12:38:22] [PASSED] NV12 different  modifier per-plane
[12:38:22] [PASSED] NV12 with DRM_FORMAT_MOD_SAMSUNG_64_32_TILE
[12:38:22] [PASSED] NV12 Valid modifiers without DRM_MODE_FB_MODIFIERS
[12:38:22] [PASSED] NV12 Modifier for inexistent plane
[12:38:22] [PASSED] NV12 Handle for inexistent plane
[12:38:22] [PASSED] NV12 Handle for inexistent plane without DRM_MODE_FB_MODIFIERS
[12:38:22] [PASSED] YVU420 DRM_MODE_FB_MODIFIERS set without modifier
[12:38:22] [PASSED] YVU420 Normal sizes
[12:38:22] [PASSED] YVU420 Max sizes
[12:38:22] [PASSED] YVU420 Invalid pitch
[12:38:22] [PASSED] YVU420 Different pitches
[12:38:22] [PASSED] YVU420 Different buffer offsets/pitches
[12:38:22] [PASSED] YVU420 Modifier set just for plane 0, without DRM_MODE_FB_MODIFIERS
[12:38:22] [PASSED] YVU420 Modifier set just for planes 0, 1, without DRM_MODE_FB_MODIFIERS
[12:38:22] [PASSED] YVU420 Modifier set just for plane 0, 1, with DRM_MODE_FB_MODIFIERS
[12:38:22] [PASSED] YVU420 Valid modifier
[12:38:22] [PASSED] YVU420 Different modifiers per plane
[12:38:22] [PASSED] YVU420 Modifier for inexistent plane
[12:38:22] [PASSED] YUV420_10BIT Invalid modifier(DRM_FORMAT_MOD_LINEAR)
[12:38:22] [PASSED] X0L2 Normal sizes
[12:38:22] [PASSED] X0L2 Max sizes
[12:38:22] [PASSED] X0L2 Invalid pitch
[12:38:22] [PASSED] X0L2 Pitch greater than minimum required
[12:38:22] [PASSED] X0L2 Handle for inexistent plane
[12:38:22] [PASSED] X0L2 Offset for inexistent plane, without DRM_MODE_FB_MODIFIERS set
[12:38:22] [PASSED] X0L2 Modifier without DRM_MODE_FB_MODIFIERS set
[12:38:22] [PASSED] X0L2 Valid modifier
[12:38:22] [PASSED] X0L2 Modifier for inexistent plane
[12:38:22] =========== [PASSED] drm_test_framebuffer_create ===========
[12:38:22] [PASSED] drm_test_framebuffer_free
[12:38:22] [PASSED] drm_test_framebuffer_init
[12:38:22] [PASSED] drm_test_framebuffer_init_bad_format
[12:38:22] [PASSED] drm_test_framebuffer_init_dev_mismatch
[12:38:22] [PASSED] drm_test_framebuffer_lookup
[12:38:22] [PASSED] drm_test_framebuffer_lookup_inexistent
[12:38:22] [PASSED] drm_test_framebuffer_modifiers_not_supported
[12:38:22] ================= [PASSED] drm_framebuffer =================
[12:38:22] ================ drm_gem_shmem (8 subtests) ================
[12:38:22] [PASSED] drm_gem_shmem_test_obj_create
[12:38:22] [PASSED] drm_gem_shmem_test_obj_create_private
[12:38:22] [PASSED] drm_gem_shmem_test_pin_pages
[12:38:22] [PASSED] drm_gem_shmem_test_vmap
[12:38:22] [PASSED] drm_gem_shmem_test_get_pages_sgt
[12:38:22] [PASSED] drm_gem_shmem_test_get_sg_table
[12:38:22] [PASSED] drm_gem_shmem_test_madvise
[12:38:22] [PASSED] drm_gem_shmem_test_purge
[12:38:22] ================== [PASSED] drm_gem_shmem ==================
[12:38:22] === drm_atomic_helper_connector_hdmi_check (22 subtests) ===
[12:38:22] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode
[12:38:22] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode_vic_1
[12:38:22] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode
[12:38:22] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode_vic_1
[12:38:22] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode
[12:38:22] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode_vic_1
[12:38:22] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_changed
[12:38:22] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_not_changed
[12:38:22] [PASSED] drm_test_check_hdmi_funcs_reject_rate
[12:38:22] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback
[12:38:22] [PASSED] drm_test_check_max_tmds_rate_format_fallback
[12:38:22] [PASSED] drm_test_check_output_bpc_crtc_mode_changed
[12:38:22] [PASSED] drm_test_check_output_bpc_crtc_mode_not_changed
[12:38:22] [PASSED] drm_test_check_output_bpc_dvi
[12:38:22] [PASSED] drm_test_check_output_bpc_format_vic_1
[12:38:22] [PASSED] drm_test_check_output_bpc_format_display_8bpc_only
[12:38:22] [PASSED] drm_test_check_output_bpc_format_display_rgb_only
[12:38:22] [PASSED] drm_test_check_output_bpc_format_driver_8bpc_only
[12:38:22] [PASSED] drm_test_check_output_bpc_format_driver_rgb_only
[12:38:22] [PASSED] drm_test_check_tmds_char_rate_rgb_8bpc
[12:38:22] [PASSED] drm_test_check_tmds_char_rate_rgb_10bpc
[12:38:22] [PASSED] drm_test_check_tmds_char_rate_rgb_12bpc
[12:38:22] ===== [PASSED] drm_atomic_helper_connector_hdmi_check ======
[12:38:22] === drm_atomic_helper_connector_hdmi_reset (6 subtests) ====
[12:38:22] [PASSED] drm_test_check_broadcast_rgb_value
[12:38:22] [PASSED] drm_test_check_bpc_8_value
[12:38:22] [PASSED] drm_test_check_bpc_10_value
[12:38:22] [PASSED] drm_test_check_bpc_12_value
[12:38:22] [PASSED] drm_test_check_format_value
[12:38:22] [PASSED] drm_test_check_tmds_char_value
[12:38:22] ===== [PASSED] drm_atomic_helper_connector_hdmi_reset ======
[12:38:22] ================= drm_managed (2 subtests) =================
[12:38:22] [PASSED] drm_test_managed_release_action
[12:38:22] [PASSED] drm_test_managed_run_action
[12:38:22] =================== [PASSED] drm_managed ===================
[12:38:22] =================== drm_mm (6 subtests) ====================
[12:38:22] [PASSED] drm_test_mm_init
[12:38:22] [PASSED] drm_test_mm_debug
[12:38:22] [PASSED] drm_test_mm_align32
[12:38:22] [PASSED] drm_test_mm_align64
[12:38:22] [PASSED] drm_test_mm_lowest
[12:38:22] [PASSED] drm_test_mm_highest
[12:38:22] ===================== [PASSED] drm_mm ======================
[12:38:22] ============= drm_modes_analog_tv (5 subtests) =============
[12:38:22] [PASSED] drm_test_modes_analog_tv_mono_576i
[12:38:22] [PASSED] drm_test_modes_analog_tv_ntsc_480i
[12:38:22] [PASSED] drm_test_modes_analog_tv_ntsc_480i_inlined
[12:38:22] [PASSED] drm_test_modes_analog_tv_pal_576i
[12:38:22] [PASSED] drm_test_modes_analog_tv_pal_576i_inlined
[12:38:22] =============== [PASSED] drm_modes_analog_tv ===============
[12:38:22] ============== drm_plane_helper (2 subtests) ===============
[12:38:22] =============== drm_test_check_plane_state  ================
[12:38:22] [PASSED] clipping_simple
[12:38:22] [PASSED] clipping_rotate_reflect
[12:38:22] [PASSED] positioning_simple
[12:38:22] [PASSED] upscaling
[12:38:22] [PASSED] downscaling
[12:38:22] [PASSED] rounding1
[12:38:22] [PASSED] rounding2
[12:38:22] [PASSED] rounding3
[12:38:22] [PASSED] rounding4
[12:38:22] =========== [PASSED] drm_test_check_plane_state ============
[12:38:22] =========== drm_test_check_invalid_plane_state  ============
[12:38:22] [PASSED] positioning_invalid
[12:38:22] [PASSED] upscaling_invalid
[12:38:22] [PASSED] downscaling_invalid
[12:38:22] ======= [PASSED] drm_test_check_invalid_plane_state ========
[12:38:22] ================ [PASSED] drm_plane_helper =================
[12:38:22] ====== drm_connector_helper_tv_get_modes (1 subtest) =======
[12:38:22] ====== drm_test_connector_helper_tv_get_modes_check  =======
[12:38:22] [PASSED] None
[12:38:22] [PASSED] PAL
[12:38:22] [PASSED] NTSC
[12:38:22] [PASSED] Both, NTSC Default
[12:38:22] [PASSED] Both, PAL Default
[12:38:22] [PASSED] Both, NTSC Default, with PAL on command-line
[12:38:22] [PASSED] Both, PAL Default, with NTSC on command-line
[12:38:22] == [PASSED] drm_test_connector_helper_tv_get_modes_check ===
[12:38:22] ======== [PASSED] drm_connector_helper_tv_get_modes ========
[12:38:22] ================== drm_rect (9 subtests) ===================
[12:38:22] [PASSED] drm_test_rect_clip_scaled_div_by_zero
[12:38:22] [PASSED] drm_test_rect_clip_scaled_not_clipped
[12:38:22] [PASSED] drm_test_rect_clip_scaled_clipped
[12:38:22] [PASSED] drm_test_rect_clip_scaled_signed_vs_unsigned
[12:38:22] ================= drm_test_rect_intersect  =================
[12:38:22] [PASSED] top-left x bottom-right: 2x2+1+1 x 2x2+0+0
[12:38:22] [PASSED] top-right x bottom-left: 2x2+0+0 x 2x2+1-1
[12:38:22] [PASSED] bottom-left x top-right: 2x2+1-1 x 2x2+0+0
[12:38:22] [PASSED] bottom-right x top-left: 2x2+0+0 x 2x2+1+1
[12:38:22] [PASSED] right x left: 2x1+0+0 x 3x1+1+0
[12:38:22] [PASSED] left x right: 3x1+1+0 x 2x1+0+0
[12:38:22] [PASSED] up x bottom: 1x2+0+0 x 1x3+0-1
[12:38:22] [PASSED] bottom x up: 1x3+0-1 x 1x2+0+0
[12:38:22] [PASSED] touching corner: 1x1+0+0 x 2x2+1+1
[12:38:22] [PASSED] touching side: 1x1+0+0 x 1x1+1+0
[12:38:22] [PASSED] equal rects: 2x2+0+0 x 2x2+0+0
[12:38:22] [PASSED] inside another: 2x2+0+0 x 1x1+1+1
[12:38:22] [PASSED] far away: 1x1+0+0 x 1x1+3+6
[12:38:22] [PASSED] points intersecting: 0x0+5+10 x 0x0+5+10
[12:38:22] [PASSED] points not intersecting: 0x0+0+0 x 0x0+5+10
[12:38:22] ============= [PASSED] drm_test_rect_intersect =============
[12:38:22] ================ drm_test_rect_calc_hscale  ================
[12:38:22] [PASSED] normal use
[12:38:22] [PASSED] out of max range
[12:38:22] [PASSED] out of min range
[12:38:22] [PASSED] zero dst
[12:38:22] [PASSED] negative src
[12:38:22] [PASSED] negative dst
[12:38:22] ============ [PASSED] drm_test_rect_calc_hscale ============
[12:38:22] ================ drm_test_rect_calc_vscale  ================
[12:38:22] [PASSED] normal use
[12:38:22] [PASSED] out of max range
[12:38:22] [PASSED] out of min range
[12:38:22] [PASSED] zero dst
[12:38:22] [PASSED] negative src
[12:38:22] [PASSED] negative dst
[12:38:22] ============ [PASSED] drm_test_rect_calc_vscale ============
[12:38:22] ================== drm_test_rect_rotate  ===================
[12:38:22] [PASSED] reflect-x
[12:38:22] [PASSED] reflect-y
[12:38:22] [PASSED] rotate-0
[12:38:22] [PASSED] rotate-90
[12:38:22] [PASSED] rotate-180
[12:38:22] [PASSED] rotate-270
[12:38:22] ============== [PASSED] drm_test_rect_rotate ===============
[12:38:22] ================ drm_test_rect_rotate_inv  =================
[12:38:22] [PASSED] reflect-x
[12:38:22] [PASSED] reflect-y
[12:38:22] [PASSED] rotate-0
[12:38:22] [PASSED] rotate-90
[12:38:22] [PASSED] rotate-180
[12:38:22] [PASSED] rotate-270
[12:38:22] ============ [PASSED] drm_test_rect_rotate_inv =============
[12:38:22] ==================== [PASSED] drm_rect =====================
[12:38:22] ============================================================
[12:38:22] Testing complete. Ran 526 tests: passed: 526
[12:38:22] Elapsed time: 24.131s total, 1.587s configuring, 22.327s building, 0.165s running

+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/ttm/tests/.kunitconfig
[12:38:22] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[12:38:23] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json ARCH=um O=.kunit --jobs=48
[12:38:31] Starting KUnit Kernel (1/1)...
[12:38:31] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[12:38:31] ================= ttm_device (5 subtests) ==================
[12:38:31] [PASSED] ttm_device_init_basic
[12:38:31] [PASSED] ttm_device_init_multiple
[12:38:31] [PASSED] ttm_device_fini_basic
[12:38:31] [PASSED] ttm_device_init_no_vma_man
[12:38:31] ================== ttm_device_init_pools  ==================
[12:38:31] [PASSED] No DMA allocations, no DMA32 required
[12:38:31] [PASSED] DMA allocations, DMA32 required
[12:38:31] [PASSED] No DMA allocations, DMA32 required
[12:38:31] [PASSED] DMA allocations, no DMA32 required
[12:38:31] ============== [PASSED] ttm_device_init_pools ==============
[12:38:31] =================== [PASSED] ttm_device ====================
[12:38:31] ================== ttm_pool (8 subtests) ===================
[12:38:31] ================== ttm_pool_alloc_basic  ===================
[12:38:31] [PASSED] One page
[12:38:31] [PASSED] More than one page
[12:38:31] [PASSED] Above the allocation limit
[12:38:31] [PASSED] One page, with coherent DMA mappings enabled
[12:38:31] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[12:38:31] ============== [PASSED] ttm_pool_alloc_basic ===============
[12:38:31] ============== ttm_pool_alloc_basic_dma_addr  ==============
[12:38:31] [PASSED] One page
[12:38:31] [PASSED] More than one page
[12:38:31] [PASSED] Above the allocation limit
[12:38:31] [PASSED] One page, with coherent DMA mappings enabled
[12:38:31] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[12:38:31] ========== [PASSED] ttm_pool_alloc_basic_dma_addr ==========
[12:38:31] [PASSED] ttm_pool_alloc_order_caching_match
[12:38:31] [PASSED] ttm_pool_alloc_caching_mismatch
[12:38:31] [PASSED] ttm_pool_alloc_order_mismatch
[12:38:31] [PASSED] ttm_pool_free_dma_alloc
[12:38:31] [PASSED] ttm_pool_free_no_dma_alloc
[12:38:31] [PASSED] ttm_pool_fini_basic
[12:38:31] ==================== [PASSED] ttm_pool =====================
[12:38:31] ================ ttm_resource (8 subtests) =================
[12:38:31] ================= ttm_resource_init_basic  =================
[12:38:31] [PASSED] Init resource in TTM_PL_SYSTEM
[12:38:31] [PASSED] Init resource in TTM_PL_VRAM
[12:38:31] [PASSED] Init resource in a private placement
[12:38:31] [PASSED] Init resource in TTM_PL_SYSTEM, set placement flags
[12:38:31] ============= [PASSED] ttm_resource_init_basic =============
[12:38:31] [PASSED] ttm_resource_init_pinned
[12:38:31] [PASSED] ttm_resource_fini_basic
[12:38:31] [PASSED] ttm_resource_manager_init_basic
[12:38:31] [PASSED] ttm_resource_manager_usage_basic
[12:38:31] [PASSED] ttm_resource_manager_set_used_basic
[12:38:31] [PASSED] ttm_sys_man_alloc_basic
[12:38:31] [PASSED] ttm_sys_man_free_basic
[12:38:31] ================== [PASSED] ttm_resource ===================
[12:38:31] =================== ttm_tt (15 subtests) ===================
[12:38:31] ==================== ttm_tt_init_basic  ====================
[12:38:31] [PASSED] Page-aligned size
[12:38:31] [PASSED] Extra pages requested
[12:38:31] ================ [PASSED] ttm_tt_init_basic ================
[12:38:31] [PASSED] ttm_tt_init_misaligned
[12:38:31] [PASSED] ttm_tt_fini_basic
[12:38:31] [PASSED] ttm_tt_fini_sg
[12:38:31] [PASSED] ttm_tt_fini_shmem
[12:38:31] [PASSED] ttm_tt_create_basic
[12:38:31] [PASSED] ttm_tt_create_invalid_bo_type
[12:38:31] [PASSED] ttm_tt_create_ttm_exists
[12:38:31] [PASSED] ttm_tt_create_failed
[12:38:31] [PASSED] ttm_tt_destroy_basic
[12:38:31] [PASSED] ttm_tt_populate_null_ttm
[12:38:31] [PASSED] ttm_tt_populate_populated_ttm
[12:38:31] [PASSED] ttm_tt_unpopulate_basic
[12:38:31] [PASSED] ttm_tt_unpopulate_empty_ttm
[12:38:31] [PASSED] ttm_tt_swapin_basic
[12:38:31] ===================== [PASSED] ttm_tt ======================
[12:38:31] =================== ttm_bo (14 subtests) ===================
[12:38:31] =========== ttm_bo_reserve_optimistic_no_ticket  ===========
[12:38:31] [PASSED] Cannot be interrupted and sleeps
[12:38:31] [PASSED] Cannot be interrupted, locks straight away
[12:38:31] [PASSED] Can be interrupted, sleeps
[12:38:31] ======= [PASSED] ttm_bo_reserve_optimistic_no_ticket =======
[12:38:31] [PASSED] ttm_bo_reserve_locked_no_sleep
[12:38:31] [PASSED] ttm_bo_reserve_no_wait_ticket
[12:38:31] [PASSED] ttm_bo_reserve_double_resv
[12:38:31] [PASSED] ttm_bo_reserve_interrupted
[12:38:31] [PASSED] ttm_bo_reserve_deadlock
[12:38:31] [PASSED] ttm_bo_unreserve_basic
[12:38:31] [PASSED] ttm_bo_unreserve_pinned
[12:38:31] [PASSED] ttm_bo_unreserve_bulk
[12:38:31] [PASSED] ttm_bo_put_basic
[12:38:31] [PASSED] ttm_bo_put_shared_resv
[12:38:31] [PASSED] ttm_bo_pin_basic
[12:38:31] [PASSED] ttm_bo_pin_unpin_resource
[12:38:31] [PASSED] ttm_bo_multiple_pin_one_unpin
[12:38:31] ===================== [PASSED] ttm_bo ======================
[12:38:31] ============== ttm_bo_validate (22 subtests) ===============
[12:38:31] ============== ttm_bo_init_reserved_sys_man  ===============
[12:38:31] [PASSED] Buffer object for userspace
[12:38:31] [PASSED] Kernel buffer object
[12:38:31] [PASSED] Shared buffer object
[12:38:31] ========== [PASSED] ttm_bo_init_reserved_sys_man ===========
[12:38:31] ============== ttm_bo_init_reserved_mock_man  ==============
[12:38:31] [PASSED] Buffer object for userspace
[12:38:31] [PASSED] Kernel buffer object
[12:38:31] [PASSED] Shared buffer object
[12:38:31] ========== [PASSED] ttm_bo_init_reserved_mock_man ==========
[12:38:31] [PASSED] ttm_bo_init_reserved_resv
[12:38:31] ================== ttm_bo_validate_basic  ==================
[12:38:31] [PASSED] Buffer object for userspace
[12:38:31] [PASSED] Kernel buffer object
[12:38:31] [PASSED] Shared buffer object
[12:38:31] ============== [PASSED] ttm_bo_validate_basic ==============
[12:38:31] [PASSED] ttm_bo_validate_invalid_placement
[12:38:31] ============= ttm_bo_validate_same_placement  ==============
[12:38:31] [PASSED] System manager
[12:38:31] [PASSED] VRAM manager
[12:38:31] ========= [PASSED] ttm_bo_validate_same_placement ==========
[12:38:31] [PASSED] ttm_bo_validate_failed_alloc
[12:38:31] [PASSED] ttm_bo_validate_pinned
[12:38:31] [PASSED] ttm_bo_validate_busy_placement
[12:38:31] ================ ttm_bo_validate_multihop  =================
[12:38:31] [PASSED] Buffer object for userspace
[12:38:31] [PASSED] Kernel buffer object
[12:38:31] [PASSED] Shared buffer object
[12:38:31] ============ [PASSED] ttm_bo_validate_multihop =============
[12:38:31] ========== ttm_bo_validate_no_placement_signaled  ==========
[12:38:31] [PASSED] Buffer object in system domain, no page vector
[12:38:31] [PASSED] Buffer object in system domain with an existing page vector
[12:38:31] ====== [PASSED] ttm_bo_validate_no_placement_signaled ======
[12:38:31] ======== ttm_bo_validate_no_placement_not_signaled  ========
[12:38:31] [PASSED] Buffer object for userspace
[12:38:31] [PASSED] Kernel buffer object
[12:38:31] [PASSED] Shared buffer object
[12:38:31] ==== [PASSED] ttm_bo_validate_no_placement_not_signaled ====
[12:38:31] [PASSED] ttm_bo_validate_move_fence_signaled
[12:38:31] ========= ttm_bo_validate_move_fence_not_signaled  =========
[12:38:31] [PASSED] Waits for GPU
[12:38:31] [PASSED] Tries to lock straight away
[12:38:32] ===== [PASSED] ttm_bo_validate_move_fence_not_signaled =====
[12:38:32] [PASSED] ttm_bo_validate_swapout
[12:38:32] [PASSED] ttm_bo_validate_happy_evict
[12:38:32] [PASSED] ttm_bo_validate_all_pinned_evict
[12:38:32] [PASSED] ttm_bo_validate_allowed_only_evict
[12:38:32] [PASSED] ttm_bo_validate_deleted_evict
[12:38:32] [PASSED] ttm_bo_validate_busy_domain_evict
[12:38:32] [PASSED] ttm_bo_validate_evict_gutting
[12:38:32] [PASSED] ttm_bo_validate_recrusive_evict
[12:38:32] ================= [PASSED] ttm_bo_validate =================
[12:38:32] ============================================================
[12:38:32] Testing complete. Ran 102 tests: passed: 102
[12:38:32] Elapsed time: 9.791s total, 1.547s configuring, 7.577s building, 0.565s running

+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel



^ permalink raw reply	[flat|nested] 54+ messages in thread

* ✓ CI.Build: success for TTM shrinker helpers and xe buffer object shrinker (rev15)
  2024-11-15 15:01 [PATCH v14 0/8] TTM shrinker helpers and xe buffer object shrinker Thomas Hellström
                   ` (20 preceding siblings ...)
  2024-11-18 12:38 ` ✓ CI.KUnit: success " Patchwork
@ 2024-11-18 12:56 ` Patchwork
  2024-11-18 12:56 ` ✗ CI.Hooks: failure " Patchwork
                   ` (3 subsequent siblings)
  25 siblings, 0 replies; 54+ messages in thread
From: Patchwork @ 2024-11-18 12:56 UTC (permalink / raw)
  To: Thomas Hellström; +Cc: intel-xe

== Series Details ==

Series: TTM shrinker helpers and xe buffer object shrinker (rev15)
URL   : https://patchwork.freedesktop.org/series/131815/
State : success

== Summary ==

lib/modules/6.12.0-xe/kernel/arch/x86/events/rapl.ko
lib/modules/6.12.0-xe/kernel/arch/x86/kvm/
lib/modules/6.12.0-xe/kernel/arch/x86/kvm/kvm.ko
lib/modules/6.12.0-xe/kernel/arch/x86/kvm/kvm-intel.ko
lib/modules/6.12.0-xe/kernel/arch/x86/kvm/kvm-amd.ko
lib/modules/6.12.0-xe/kernel/kernel/
lib/modules/6.12.0-xe/kernel/kernel/kheaders.ko
lib/modules/6.12.0-xe/kernel/crypto/
lib/modules/6.12.0-xe/kernel/crypto/ecrdsa_generic.ko
lib/modules/6.12.0-xe/kernel/crypto/xcbc.ko
lib/modules/6.12.0-xe/kernel/crypto/serpent_generic.ko
lib/modules/6.12.0-xe/kernel/crypto/aria_generic.ko
lib/modules/6.12.0-xe/kernel/crypto/crypto_simd.ko
lib/modules/6.12.0-xe/kernel/crypto/adiantum.ko
lib/modules/6.12.0-xe/kernel/crypto/tcrypt.ko
lib/modules/6.12.0-xe/kernel/crypto/crypto_engine.ko
lib/modules/6.12.0-xe/kernel/crypto/zstd.ko
lib/modules/6.12.0-xe/kernel/crypto/asymmetric_keys/
lib/modules/6.12.0-xe/kernel/crypto/asymmetric_keys/pkcs7_test_key.ko
lib/modules/6.12.0-xe/kernel/crypto/asymmetric_keys/pkcs8_key_parser.ko
lib/modules/6.12.0-xe/kernel/crypto/des_generic.ko
lib/modules/6.12.0-xe/kernel/crypto/xctr.ko
lib/modules/6.12.0-xe/kernel/crypto/authenc.ko
lib/modules/6.12.0-xe/kernel/crypto/sm4_generic.ko
lib/modules/6.12.0-xe/kernel/crypto/keywrap.ko
lib/modules/6.12.0-xe/kernel/crypto/camellia_generic.ko
lib/modules/6.12.0-xe/kernel/crypto/sm3.ko
lib/modules/6.12.0-xe/kernel/crypto/pcrypt.ko
lib/modules/6.12.0-xe/kernel/crypto/aegis128.ko
lib/modules/6.12.0-xe/kernel/crypto/af_alg.ko
lib/modules/6.12.0-xe/kernel/crypto/algif_aead.ko
lib/modules/6.12.0-xe/kernel/crypto/cmac.ko
lib/modules/6.12.0-xe/kernel/crypto/sm3_generic.ko
lib/modules/6.12.0-xe/kernel/crypto/aes_ti.ko
lib/modules/6.12.0-xe/kernel/crypto/chacha_generic.ko
lib/modules/6.12.0-xe/kernel/crypto/poly1305_generic.ko
lib/modules/6.12.0-xe/kernel/crypto/nhpoly1305.ko
lib/modules/6.12.0-xe/kernel/crypto/crc32_generic.ko
lib/modules/6.12.0-xe/kernel/crypto/essiv.ko
lib/modules/6.12.0-xe/kernel/crypto/ccm.ko
lib/modules/6.12.0-xe/kernel/crypto/wp512.ko
lib/modules/6.12.0-xe/kernel/crypto/streebog_generic.ko
lib/modules/6.12.0-xe/kernel/crypto/authencesn.ko
lib/modules/6.12.0-xe/kernel/crypto/echainiv.ko
lib/modules/6.12.0-xe/kernel/crypto/lrw.ko
lib/modules/6.12.0-xe/kernel/crypto/cryptd.ko
lib/modules/6.12.0-xe/kernel/crypto/crypto_user.ko
lib/modules/6.12.0-xe/kernel/crypto/algif_hash.ko
lib/modules/6.12.0-xe/kernel/crypto/vmac.ko
lib/modules/6.12.0-xe/kernel/crypto/polyval-generic.ko
lib/modules/6.12.0-xe/kernel/crypto/hctr2.ko
lib/modules/6.12.0-xe/kernel/crypto/842.ko
lib/modules/6.12.0-xe/kernel/crypto/pcbc.ko
lib/modules/6.12.0-xe/kernel/crypto/ansi_cprng.ko
lib/modules/6.12.0-xe/kernel/crypto/cast6_generic.ko
lib/modules/6.12.0-xe/kernel/crypto/twofish_common.ko
lib/modules/6.12.0-xe/kernel/crypto/twofish_generic.ko
lib/modules/6.12.0-xe/kernel/crypto/lz4hc.ko
lib/modules/6.12.0-xe/kernel/crypto/blowfish_generic.ko
lib/modules/6.12.0-xe/kernel/crypto/md4.ko
lib/modules/6.12.0-xe/kernel/crypto/chacha20poly1305.ko
lib/modules/6.12.0-xe/kernel/crypto/curve25519-generic.ko
lib/modules/6.12.0-xe/kernel/crypto/lz4.ko
lib/modules/6.12.0-xe/kernel/crypto/rmd160.ko
lib/modules/6.12.0-xe/kernel/crypto/algif_skcipher.ko
lib/modules/6.12.0-xe/kernel/crypto/cast5_generic.ko
lib/modules/6.12.0-xe/kernel/crypto/fcrypt.ko
lib/modules/6.12.0-xe/kernel/crypto/ecdsa_generic.ko
lib/modules/6.12.0-xe/kernel/crypto/sm4.ko
lib/modules/6.12.0-xe/kernel/crypto/cast_common.ko
lib/modules/6.12.0-xe/kernel/crypto/blowfish_common.ko
lib/modules/6.12.0-xe/kernel/crypto/michael_mic.ko
lib/modules/6.12.0-xe/kernel/crypto/async_tx/
lib/modules/6.12.0-xe/kernel/crypto/async_tx/async_xor.ko
lib/modules/6.12.0-xe/kernel/crypto/async_tx/async_tx.ko
lib/modules/6.12.0-xe/kernel/crypto/async_tx/async_memcpy.ko
lib/modules/6.12.0-xe/kernel/crypto/async_tx/async_pq.ko
lib/modules/6.12.0-xe/kernel/crypto/async_tx/async_raid6_recov.ko
lib/modules/6.12.0-xe/kernel/crypto/algif_rng.ko
lib/modules/6.12.0-xe/kernel/block/
lib/modules/6.12.0-xe/kernel/block/bfq.ko
lib/modules/6.12.0-xe/kernel/block/kyber-iosched.ko
lib/modules/6.12.0-xe/build
lib/modules/6.12.0-xe/modules.alias.bin
lib/modules/6.12.0-xe/modules.builtin
lib/modules/6.12.0-xe/modules.softdep
lib/modules/6.12.0-xe/modules.alias
lib/modules/6.12.0-xe/modules.order
lib/modules/6.12.0-xe/modules.symbols
lib/modules/6.12.0-xe/modules.dep.bin
+ mv kernel-nodebug.tar.gz ..
+ cd ..
+ rm -rf archive
++ date +%s
+ echo -e '\e[0Ksection_end:1731934575:package_x86_64_nodebug\r\e[0K'
section_end:1731934575:package_x86_64_nodebug
+ sync
+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel



^ permalink raw reply	[flat|nested] 54+ messages in thread

* ✗ CI.Hooks: failure for TTM shrinker helpers and xe buffer object shrinker (rev15)
  2024-11-15 15:01 [PATCH v14 0/8] TTM shrinker helpers and xe buffer object shrinker Thomas Hellström
                   ` (21 preceding siblings ...)
  2024-11-18 12:56 ` ✓ CI.Build: " Patchwork
@ 2024-11-18 12:56 ` Patchwork
  2024-11-18 12:58 ` ✗ CI.checksparse: warning " Patchwork
                   ` (2 subsequent siblings)
  25 siblings, 0 replies; 54+ messages in thread
From: Patchwork @ 2024-11-18 12:56 UTC (permalink / raw)
  To: Thomas Hellström; +Cc: intel-xe

== Series Details ==

Series: TTM shrinker helpers and xe buffer object shrinker (rev15)
URL   : https://patchwork.freedesktop.org/series/131815/
State : failure

== Summary ==

run-parts: executing /workspace/ci/hooks/00-showenv
+ export
+ grep -Ei '(^|\W)CI_'
declare -x CI_KERNEL_BUILD_DIR="/workspace/kernel/build64-default"
declare -x CI_KERNEL_SRC_DIR="/workspace/kernel"
declare -x CI_TOOLS_SRC_DIR="/workspace/ci"
declare -x CI_WORKSPACE_DIR="/workspace"
run-parts: executing /workspace/ci/hooks/10-build-W1
+ SRC_DIR=/workspace/kernel
+ RESTORE_DISPLAY_CONFIG=0
+ '[' -n /workspace/kernel/build64-default ']'
+ BUILD_DIR=/workspace/kernel/build64-default
+ cd /workspace/kernel
++ nproc
+ make -j48 O=/workspace/kernel/build64-default modules_prepare
make[1]: Entering directory '/workspace/kernel/build64-default'
  GEN     Makefile
  UPD     include/config/kernel.release
mkdir -p /workspace/kernel/build64-default/tools/objtool && make O=/workspace/kernel/build64-default subdir=tools/objtool --no-print-directory -C objtool 
  UPD     include/generated/utsrelease.h
  CALL    ../scripts/checksyscalls.sh
  INSTALL libsubcmd_headers
  CC      /workspace/kernel/build64-default/tools/objtool/libsubcmd/exec-cmd.o
  CC      /workspace/kernel/build64-default/tools/objtool/libsubcmd/pager.o
  CC      /workspace/kernel/build64-default/tools/objtool/libsubcmd/help.o
  CC      /workspace/kernel/build64-default/tools/objtool/libsubcmd/parse-options.o
  CC      /workspace/kernel/build64-default/tools/objtool/libsubcmd/run-command.o
  CC      /workspace/kernel/build64-default/tools/objtool/libsubcmd/sigchain.o
  CC      /workspace/kernel/build64-default/tools/objtool/libsubcmd/subcmd-config.o
  LD      /workspace/kernel/build64-default/tools/objtool/libsubcmd/libsubcmd-in.o
  AR      /workspace/kernel/build64-default/tools/objtool/libsubcmd/libsubcmd.a
  CC      /workspace/kernel/build64-default/tools/objtool/weak.o
  CC      /workspace/kernel/build64-default/tools/objtool/check.o
  CC      /workspace/kernel/build64-default/tools/objtool/special.o
  CC      /workspace/kernel/build64-default/tools/objtool/builtin-check.o
  CC      /workspace/kernel/build64-default/tools/objtool/elf.o
  CC      /workspace/kernel/build64-default/tools/objtool/objtool.o
  CC      /workspace/kernel/build64-default/tools/objtool/orc_gen.o
  CC      /workspace/kernel/build64-default/tools/objtool/orc_dump.o
  CC      /workspace/kernel/build64-default/tools/objtool/arch/x86/special.o
  CC      /workspace/kernel/build64-default/tools/objtool/libstring.o
  CC      /workspace/kernel/build64-default/tools/objtool/libctype.o
  CC      /workspace/kernel/build64-default/tools/objtool/str_error_r.o
  CC      /workspace/kernel/build64-default/tools/objtool/arch/x86/decode.o
  CC      /workspace/kernel/build64-default/tools/objtool/librbtree.o
  CC      /workspace/kernel/build64-default/tools/objtool/arch/x86/orc.o
  LD      /workspace/kernel/build64-default/tools/objtool/arch/x86/objtool-in.o
  LD      /workspace/kernel/build64-default/tools/objtool/objtool-in.o
  LINK    /workspace/kernel/build64-default/tools/objtool/objtool
make[1]: Leaving directory '/workspace/kernel/build64-default'
++ nproc
+ make -j48 O=/workspace/kernel/build64-default W=1 drivers/gpu/drm/xe
make[1]: Entering directory '/workspace/kernel/build64-default'
make[2]: Nothing to be done for 'drivers/gpu/drm/xe'.
make[1]: Leaving directory '/workspace/kernel/build64-default'
run-parts: executing /workspace/ci/hooks/11-build-32b
+++ realpath /workspace/ci/hooks/11-build-32b
++ dirname /workspace/ci/hooks/11-build-32b
+ THIS_SCRIPT_DIR=/workspace/ci/hooks
+ SRC_DIR=/workspace/kernel
+ TOOLS_SRC_DIR=/workspace/ci
+ '[' -n /workspace/kernel/build64-default ']'
+ BUILD_DIR=/workspace/kernel/build64-default
+ BUILD_DIR=/workspace/kernel/build64-default/build32
+ cd /workspace/kernel
+ mkdir -p /workspace/kernel/build64-default/build32
++ nproc
+ make -j48 ARCH=i386 O=/workspace/kernel/build64-default/build32 defconfig
make[1]: Entering directory '/workspace/kernel/build64-default/build32'
  GEN     Makefile
  HOSTCC  scripts/basic/fixdep
  HOSTCC  scripts/kconfig/conf.o
  HOSTCC  scripts/kconfig/confdata.o
  HOSTCC  scripts/kconfig/expr.o
  LEX     scripts/kconfig/lexer.lex.c
  YACC    scripts/kconfig/parser.tab.[ch]
  HOSTCC  scripts/kconfig/preprocess.o
  HOSTCC  scripts/kconfig/menu.o
  HOSTCC  scripts/kconfig/symbol.o
  HOSTCC  scripts/kconfig/util.o
  HOSTCC  scripts/kconfig/lexer.lex.o
  HOSTCC  scripts/kconfig/parser.tab.o
  HOSTLD  scripts/kconfig/conf
*** Default configuration is based on 'i386_defconfig'
#
# configuration written to .config
#
make[1]: Leaving directory '/workspace/kernel/build64-default/build32'
+ cd /workspace/kernel/build64-default/build32
+ /workspace/kernel/scripts/kconfig/merge_config.sh .config /workspace/ci/kernel/10-xe.fragment
Using .config as base
Merging /workspace/ci/kernel/10-xe.fragment
The merge file '/workspace/ci/kernel/10-xe.fragment' does not exist.  Exit.
run-parts: /workspace/ci/hooks/11-build-32b exited with return code 1



^ permalink raw reply	[flat|nested] 54+ messages in thread

* ✗ CI.checksparse: warning for TTM shrinker helpers and xe buffer object shrinker (rev15)
  2024-11-15 15:01 [PATCH v14 0/8] TTM shrinker helpers and xe buffer object shrinker Thomas Hellström
                   ` (22 preceding siblings ...)
  2024-11-18 12:56 ` ✗ CI.Hooks: failure " Patchwork
@ 2024-11-18 12:58 ` Patchwork
  2024-11-18 13:16 ` ✓ CI.BAT: success " Patchwork
  2024-11-18 16:29 ` ✗ CI.FULL: failure " Patchwork
  25 siblings, 0 replies; 54+ messages in thread
From: Patchwork @ 2024-11-18 12:58 UTC (permalink / raw)
  To: Thomas Hellström; +Cc: intel-xe

== Series Details ==

Series: TTM shrinker helpers and xe buffer object shrinker (rev15)
URL   : https://patchwork.freedesktop.org/series/131815/
State : warning

== Summary ==

+ trap cleanup EXIT
+ KERNEL=/kernel
+ MT=/root/linux/maintainer-tools
+ git clone https://gitlab.freedesktop.org/drm/maintainer-tools /root/linux/maintainer-tools
Cloning into '/root/linux/maintainer-tools'...
warning: redirecting to https://gitlab.freedesktop.org/drm/maintainer-tools.git/
+ make -C /root/linux/maintainer-tools
make: Entering directory '/root/linux/maintainer-tools'
cc -O2 -g -Wextra -o remap-log remap-log.c
make: Leaving directory '/root/linux/maintainer-tools'
+ cd /kernel
+ git config --global --add safe.directory /kernel
+ /root/linux/maintainer-tools/dim sparse --fast 90014f8026e31874d368834834253debd131268b
/root/linux/maintainer-tools/dim: line 2068: sparse: command not found
Sparse version: 
Fast mode used, each commit won't be checked separately.
+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel



^ permalink raw reply	[flat|nested] 54+ messages in thread

* ✓ CI.BAT: success for TTM shrinker helpers and xe buffer object shrinker (rev15)
  2024-11-15 15:01 [PATCH v14 0/8] TTM shrinker helpers and xe buffer object shrinker Thomas Hellström
                   ` (23 preceding siblings ...)
  2024-11-18 12:58 ` ✗ CI.checksparse: warning " Patchwork
@ 2024-11-18 13:16 ` Patchwork
  2024-11-18 16:29 ` ✗ CI.FULL: failure " Patchwork
  25 siblings, 0 replies; 54+ messages in thread
From: Patchwork @ 2024-11-18 13:16 UTC (permalink / raw)
  To: Thomas Hellström; +Cc: intel-xe

[-- Attachment #1: Type: text/plain, Size: 3970 bytes --]

== Series Details ==

Series: TTM shrinker helpers and xe buffer object shrinker (rev15)
URL   : https://patchwork.freedesktop.org/series/131815/
State : success

== Summary ==

CI Bug Log - changes from xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f_BAT -> xe-pw-131815v15_BAT
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  

Participating hosts (9 -> 9)
------------------------------

  No changes in participating hosts

Known issues
------------

  Here are the changes found in xe-pw-131815v15_BAT that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@kms_frontbuffer_tracking@basic:
    - bat-adlp-7:         [PASS][1] -> [DMESG-FAIL][2] ([Intel XE#1033])
   [1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/bat-adlp-7/igt@kms_frontbuffer_tracking@basic.html
   [2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/bat-adlp-7/igt@kms_frontbuffer_tracking@basic.html

  * igt@xe_intel_bb@render@render-linear-256:
    - bat-adlp-vf:        [PASS][3] -> [DMESG-WARN][4] ([Intel XE#358]) +1 other test dmesg-warn
   [3]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/bat-adlp-vf/igt@xe_intel_bb@render@render-linear-256.html
   [4]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/bat-adlp-vf/igt@xe_intel_bb@render@render-linear-256.html

  * igt@xe_live_ktest@xe_migrate:
    - bat-adlp-vf:        [PASS][5] -> [SKIP][6] ([Intel XE#1192]) +1 other test skip
   [5]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/bat-adlp-vf/igt@xe_live_ktest@xe_migrate.html
   [6]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/bat-adlp-vf/igt@xe_live_ktest@xe_migrate.html

  
#### Possible fixes ####

  * igt@kms_flip@basic-flip-vs-wf_vblank:
    - bat-lnl-1:          [FAIL][7] ([Intel XE#886]) -> [PASS][8] +1 other test pass
   [7]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/bat-lnl-1/igt@kms_flip@basic-flip-vs-wf_vblank.html
   [8]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/bat-lnl-1/igt@kms_flip@basic-flip-vs-wf_vblank.html

  * igt@xe_exec_threads@threads-mixed-shared-vm-basic:
    - bat-pvc-2:          [DMESG-WARN][9] ([Intel XE#3371]) -> [PASS][10]
   [9]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/bat-pvc-2/igt@xe_exec_threads@threads-mixed-shared-vm-basic.html
   [10]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/bat-pvc-2/igt@xe_exec_threads@threads-mixed-shared-vm-basic.html

  
#### Warnings ####

  * igt@xe_live_ktest@xe_bo:
    - bat-adlp-vf:        [SKIP][11] ([Intel XE#2229] / [Intel XE#455]) -> [SKIP][12] ([Intel XE#1192])
   [11]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/bat-adlp-vf/igt@xe_live_ktest@xe_bo.html
   [12]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/bat-adlp-vf/igt@xe_live_ktest@xe_bo.html

  
  [Intel XE#1033]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1033
  [Intel XE#1192]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1192
  [Intel XE#2229]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2229
  [Intel XE#3371]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3371
  [Intel XE#358]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/358
  [Intel XE#455]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/455
  [Intel XE#886]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/886


Build changes
-------------

  * Linux: xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f -> xe-pw-131815v15

  IGT_8114: 8114
  xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f: 57639ceec0f66f06f4a8a8ac3b9551b7b493c33f
  xe-pw-131815v15: 131815v15

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/index.html

[-- Attachment #2: Type: text/html, Size: 4732 bytes --]

^ permalink raw reply	[flat|nested] 54+ messages in thread

* ✗ CI.FULL: failure for TTM shrinker helpers and xe buffer object shrinker (rev15)
  2024-11-15 15:01 [PATCH v14 0/8] TTM shrinker helpers and xe buffer object shrinker Thomas Hellström
                   ` (24 preceding siblings ...)
  2024-11-18 13:16 ` ✓ CI.BAT: success " Patchwork
@ 2024-11-18 16:29 ` Patchwork
  25 siblings, 0 replies; 54+ messages in thread
From: Patchwork @ 2024-11-18 16:29 UTC (permalink / raw)
  To: Thomas Hellström; +Cc: intel-xe

[-- Attachment #1: Type: text/plain, Size: 120093 bytes --]

== Series Details ==

Series: TTM shrinker helpers and xe buffer object shrinker (rev15)
URL   : https://patchwork.freedesktop.org/series/131815/
State : failure

== Summary ==

CI Bug Log - changes from xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f_full -> xe-pw-131815v15_full
====================================================

Summary
-------

  **FAILURE**

  Serious unknown changes coming with xe-pw-131815v15_full absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in xe-pw-131815v15_full, please notify your bug team (I915-ci-infra@lists.freedesktop.org) to allow them
  to document this new failure mode, which will reduce false positives in CI.

  

Participating hosts (4 -> 4)
------------------------------

  No changes in participating hosts

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in xe-pw-131815v15_full:

### IGT changes ###

#### Possible regressions ####

  * igt@kms_addfb_basic@bad-pitch-32:
    - shard-lnl:          [PASS][1] -> [DMESG-WARN][2] +4 other tests dmesg-warn
   [1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-lnl-8/igt@kms_addfb_basic@bad-pitch-32.html
   [2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-lnl-1/igt@kms_addfb_basic@bad-pitch-32.html

  * igt@kms_ccs@crc-primary-basic-4-tiled-dg2-rc-ccs-cc@pipe-a-hdmi-a-6:
    - shard-dg2-set2:     NOTRUN -> [DMESG-WARN][3] +2 other tests dmesg-warn
   [3]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-463/igt@kms_ccs@crc-primary-basic-4-tiled-dg2-rc-ccs-cc@pipe-a-hdmi-a-6.html

  * igt@kms_flip@2x-flip-vs-expired-vblank@ab-dp2-hdmi-a3:
    - shard-bmg:          NOTRUN -> [INCOMPLETE][4] +1 other test incomplete
   [4]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-4/igt@kms_flip@2x-flip-vs-expired-vblank@ab-dp2-hdmi-a3.html

  * igt@xe_ccs@block-copy-uncompressed-inc-dimension@linear-uncompressed-compfmt0-vram01-system-331x331:
    - shard-bmg:          [PASS][5] -> [DMESG-WARN][6] +8 other tests dmesg-warn
   [5]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-bmg-5/igt@xe_ccs@block-copy-uncompressed-inc-dimension@linear-uncompressed-compfmt0-vram01-system-331x331.html
   [6]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-1/igt@xe_ccs@block-copy-uncompressed-inc-dimension@linear-uncompressed-compfmt0-vram01-system-331x331.html

  * igt@xe_exec_threads@threads-cm-shared-vm-userptr-invalidate:
    - shard-bmg:          [PASS][7] -> [DMESG-FAIL][8]
   [7]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-bmg-6/igt@xe_exec_threads@threads-cm-shared-vm-userptr-invalidate.html
   [8]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-8/igt@xe_exec_threads@threads-cm-shared-vm-userptr-invalidate.html

  * igt@xe_module_load@unload:
    - shard-dg2-set2:     [PASS][9] -> [DMESG-WARN][10]
   [9]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@xe_module_load@unload.html
   [10]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-435/igt@xe_module_load@unload.html

  * igt@xe_pm_residency@gt-c6-freeze@gt1:
    - shard-bmg:          NOTRUN -> [DMESG-FAIL][11] +1 other test dmesg-fail
   [11]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-1/igt@xe_pm_residency@gt-c6-freeze@gt1.html

  * igt@xe_vm@shared-pde2-page:
    - shard-bmg:          NOTRUN -> [DMESG-WARN][12] +7 other tests dmesg-warn
   [12]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-8/igt@xe_vm@shared-pde2-page.html

  
#### Warnings ####

  * igt@core_hotunplug@hotunplug-rescan:
    - shard-bmg:          [INCOMPLETE][13] ([Intel XE#3468]) -> [INCOMPLETE][14]
   [13]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-bmg-2/igt@core_hotunplug@hotunplug-rescan.html
   [14]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-3/igt@core_hotunplug@hotunplug-rescan.html

  * igt@kms_ccs@crc-primary-basic-4-tiled-dg2-rc-ccs-cc:
    - shard-dg2-set2:     [SKIP][15] ([Intel XE#2136] / [Intel XE#2351]) -> [DMESG-WARN][16]
   [15]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@kms_ccs@crc-primary-basic-4-tiled-dg2-rc-ccs-cc.html
   [16]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-463/igt@kms_ccs@crc-primary-basic-4-tiled-dg2-rc-ccs-cc.html

  * igt@kms_ccs@crc-primary-suspend-4-tiled-bmg-ccs:
    - shard-dg2-set2:     [SKIP][17] ([Intel XE#3442]) -> [SKIP][18] +1 other test skip
   [17]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-436/igt@kms_ccs@crc-primary-suspend-4-tiled-bmg-ccs.html
   [18]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_ccs@crc-primary-suspend-4-tiled-bmg-ccs.html

  * igt@xe_exec_basic@multigpu-once-basic-defer-bind:
    - shard-dg2-set2:     [SKIP][19] ([Intel XE#1130]) -> [DMESG-WARN][20] +2 other tests dmesg-warn
   [19]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@xe_exec_basic@multigpu-once-basic-defer-bind.html
   [20]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-463/igt@xe_exec_basic@multigpu-once-basic-defer-bind.html

  * igt@xe_wedged@wedged-mode-toggle:
    - shard-bmg:          [DMESG-WARN][21] ([Intel XE#3468]) -> [DMESG-WARN][22]
   [21]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-bmg-3/igt@xe_wedged@wedged-mode-toggle.html
   [22]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-3/igt@xe_wedged@wedged-mode-toggle.html

  
Known issues
------------

  Here are the changes found in xe-pw-131815v15_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@core_hotunplug@hotreplug:
    - shard-dg2-set2:     [PASS][23] -> [SKIP][24] ([Intel XE#1885]) +1 other test skip
   [23]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-436/igt@core_hotunplug@hotreplug.html
   [24]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@core_hotunplug@hotreplug.html

  * igt@fbdev@unaligned-read:
    - shard-dg2-set2:     NOTRUN -> [SKIP][25] ([Intel XE#2134])
   [25]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@fbdev@unaligned-read.html

  * igt@kms_big_fb@4-tiled-64bpp-rotate-90:
    - shard-bmg:          NOTRUN -> [SKIP][26] ([Intel XE#2327])
   [26]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-4/igt@kms_big_fb@4-tiled-64bpp-rotate-90.html

  * igt@kms_big_fb@x-tiled-64bpp-rotate-90:
    - shard-dg2-set2:     NOTRUN -> [SKIP][27] ([Intel XE#316])
   [27]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-435/igt@kms_big_fb@x-tiled-64bpp-rotate-90.html

  * igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-0:
    - shard-dg2-set2:     [PASS][28] -> [SKIP][29] ([Intel XE#2136]) +22 other tests skip
   [28]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-436/igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-0.html
   [29]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-0.html

  * igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-0:
    - shard-dg2-set2:     NOTRUN -> [SKIP][30] ([Intel XE#1124])
   [30]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-435/igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-0.html

  * igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-180-async-flip:
    - shard-adlp:         [PASS][31] -> [DMESG-FAIL][32] ([Intel XE#1033])
   [31]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-adlp-1/igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-180-async-flip.html
   [32]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-adlp-1/igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-180-async-flip.html

  * igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-180-hflip-async-flip:
    - shard-bmg:          NOTRUN -> [SKIP][33] ([Intel XE#1124]) +2 other tests skip
   [33]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-1/igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-180-hflip-async-flip.html

  * igt@kms_bw@linear-tiling-3-displays-1920x1080p:
    - shard-bmg:          NOTRUN -> [SKIP][34] ([Intel XE#367])
   [34]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-4/igt@kms_bw@linear-tiling-3-displays-1920x1080p.html

  * igt@kms_ccs@crc-primary-basic-yf-tiled-ccs:
    - shard-bmg:          NOTRUN -> [SKIP][35] ([Intel XE#2887]) +2 other tests skip
   [35]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-1/igt@kms_ccs@crc-primary-basic-yf-tiled-ccs.html

  * igt@kms_ccs@crc-sprite-planes-basic-y-tiled-gen12-mc-ccs@pipe-b-hdmi-a-6:
    - shard-dg2-set2:     NOTRUN -> [SKIP][36] ([Intel XE#787]) +139 other tests skip
   [36]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-463/igt@kms_ccs@crc-sprite-planes-basic-y-tiled-gen12-mc-ccs@pipe-b-hdmi-a-6.html

  * igt@kms_ccs@missing-ccs-buffer-4-tiled-mtl-mc-ccs@pipe-d-dp-4:
    - shard-dg2-set2:     NOTRUN -> [SKIP][37] ([Intel XE#455] / [Intel XE#787]) +20 other tests skip
   [37]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-435/igt@kms_ccs@missing-ccs-buffer-4-tiled-mtl-mc-ccs@pipe-d-dp-4.html

  * igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs:
    - shard-dg2-set2:     [PASS][38] -> [INCOMPLETE][39] ([Intel XE#1195] / [Intel XE#1727])
   [38]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-434/igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs.html
   [39]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-433/igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs.html

  * igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs:
    - shard-dg2-set2:     [PASS][40] -> [DMESG-WARN][41] ([Intel XE#1727])
   [40]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-463/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs.html
   [41]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-436/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs.html

  * igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs@pipe-d-dp-4:
    - shard-dg2-set2:     [PASS][42] -> [DMESG-WARN][43] ([Intel XE#3113])
   [42]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-463/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs@pipe-d-dp-4.html
   [43]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-436/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs@pipe-d-dp-4.html

  * igt@kms_cdclk@mode-transition@pipe-d-dp-4:
    - shard-dg2-set2:     NOTRUN -> [SKIP][44] ([Intel XE#314]) +3 other tests skip
   [44]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-435/igt@kms_cdclk@mode-transition@pipe-d-dp-4.html

  * igt@kms_chamelium_edid@dp-edid-resolution-list:
    - shard-bmg:          NOTRUN -> [SKIP][45] ([Intel XE#2252]) +3 other tests skip
   [45]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-1/igt@kms_chamelium_edid@dp-edid-resolution-list.html

  * igt@kms_chamelium_hpd@dp-hpd:
    - shard-dg2-set2:     NOTRUN -> [SKIP][46] ([Intel XE#373])
   [46]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-435/igt@kms_chamelium_hpd@dp-hpd.html

  * igt@kms_content_protection@legacy@pipe-a-dp-4:
    - shard-dg2-set2:     NOTRUN -> [FAIL][47] ([Intel XE#1178]) +1 other test fail
   [47]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-435/igt@kms_content_protection@legacy@pipe-a-dp-4.html

  * igt@kms_content_protection@lic-type-0@pipe-a-dp-4:
    - shard-dg2-set2:     NOTRUN -> [FAIL][48] ([Intel XE#3304])
   [48]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-463/igt@kms_content_protection@lic-type-0@pipe-a-dp-4.html

  * igt@kms_cursor_crc@cursor-offscreen-max-size:
    - shard-bmg:          NOTRUN -> [SKIP][49] ([Intel XE#2320]) +1 other test skip
   [49]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-1/igt@kms_cursor_crc@cursor-offscreen-max-size.html

  * igt@kms_cursor_crc@cursor-sliding-64x64:
    - shard-dg2-set2:     NOTRUN -> [SKIP][50] ([Intel XE#2423] / [i915#2575]) +32 other tests skip
   [50]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_cursor_crc@cursor-sliding-64x64.html

  * igt@kms_cursor_legacy@basic-busy-flip-before-cursor-varying-size:
    - shard-bmg:          NOTRUN -> [SKIP][51] ([Intel XE#2286])
   [51]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-4/igt@kms_cursor_legacy@basic-busy-flip-before-cursor-varying-size.html

  * igt@kms_flip@2x-flip-vs-expired-vblank:
    - shard-bmg:          NOTRUN -> [INCOMPLETE][52] ([Intel XE#2635])
   [52]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-4/igt@kms_flip@2x-flip-vs-expired-vblank.html

  * igt@kms_flip@2x-flip-vs-expired-vblank-interruptible@ad-dp2-hdmi-a3:
    - shard-bmg:          [PASS][53] -> [FAIL][54] ([Intel XE#3321] / [Intel XE#3486])
   [53]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-bmg-2/igt@kms_flip@2x-flip-vs-expired-vblank-interruptible@ad-dp2-hdmi-a3.html
   [54]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-3/igt@kms_flip@2x-flip-vs-expired-vblank-interruptible@ad-dp2-hdmi-a3.html

  * igt@kms_flip@2x-flip-vs-suspend-interruptible:
    - shard-dg2-set2:     [PASS][55] -> [INCOMPLETE][56] ([Intel XE#1195] / [Intel XE#2049] / [Intel XE#2597])
   [55]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-433/igt@kms_flip@2x-flip-vs-suspend-interruptible.html
   [56]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-433/igt@kms_flip@2x-flip-vs-suspend-interruptible.html

  * igt@kms_flip@2x-flip-vs-suspend-interruptible@ad-hdmi-a6-dp4:
    - shard-dg2-set2:     [PASS][57] -> [INCOMPLETE][58] ([Intel XE#1195]) +1 other test incomplete
   [57]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-433/igt@kms_flip@2x-flip-vs-suspend-interruptible@ad-hdmi-a6-dp4.html
   [58]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-433/igt@kms_flip@2x-flip-vs-suspend-interruptible@ad-hdmi-a6-dp4.html

  * igt@kms_flip@flip-vs-suspend@a-hdmi-a6:
    - shard-dg2-set2:     NOTRUN -> [DMESG-WARN][59] ([Intel XE#3468]) +2 other tests dmesg-warn
   [59]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-463/igt@kms_flip@flip-vs-suspend@a-hdmi-a6.html

  * igt@kms_flip@flip-vs-suspend@c-hdmi-a1:
    - shard-adlp:         [PASS][60] -> [DMESG-WARN][61] ([Intel XE#2953] / [Intel XE#3086]) +2 other tests dmesg-warn
   [60]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-adlp-8/igt@kms_flip@flip-vs-suspend@c-hdmi-a1.html
   [61]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-adlp-2/igt@kms_flip@flip-vs-suspend@c-hdmi-a1.html

  * igt@kms_flip@flip-vs-suspend@c-hdmi-a6:
    - shard-dg2-set2:     NOTRUN -> [DMESG-FAIL][62] ([Intel XE#3468]) +5 other tests dmesg-fail
   [62]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-463/igt@kms_flip@flip-vs-suspend@c-hdmi-a6.html

  * igt@kms_flip@plain-flip-ts-check:
    - shard-bmg:          [PASS][63] -> [FAIL][64] ([Intel XE#2882]) +1 other test fail
   [63]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-bmg-1/igt@kms_flip@plain-flip-ts-check.html
   [64]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-4/igt@kms_flip@plain-flip-ts-check.html
    - shard-dg2-set2:     [PASS][65] -> [FAIL][66] ([Intel XE#886])
   [65]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-434/igt@kms_flip@plain-flip-ts-check.html
   [66]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-433/igt@kms_flip@plain-flip-ts-check.html

  * igt@kms_flip@plain-flip-ts-check-interruptible@d-hdmi-a3:
    - shard-bmg:          NOTRUN -> [DMESG-WARN][67] ([Intel XE#3468]) +8 other tests dmesg-warn
   [67]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-8/igt@kms_flip@plain-flip-ts-check-interruptible@d-hdmi-a3.html

  * igt@kms_flip@plain-flip-ts-check@b-edp1:
    - shard-lnl:          [PASS][68] -> [FAIL][69] ([Intel XE#886]) +4 other tests fail
   [68]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-lnl-2/igt@kms_flip@plain-flip-ts-check@b-edp1.html
   [69]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-lnl-3/igt@kms_flip@plain-flip-ts-check@b-edp1.html

  * igt@kms_flip@plain-flip-ts-check@b-hdmi-a6:
    - shard-dg2-set2:     [PASS][70] -> [FAIL][71] ([Intel XE#3477]) +1 other test fail
   [70]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-434/igt@kms_flip@plain-flip-ts-check@b-hdmi-a6.html
   [71]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-433/igt@kms_flip@plain-flip-ts-check@b-hdmi-a6.html

  * igt@kms_flip_scaled_crc@flip-32bpp-linear-to-64bpp-linear-upscaling:
    - shard-dg2-set2:     [PASS][72] -> [SKIP][73] ([Intel XE#2136] / [Intel XE#2351]) +13 other tests skip
   [72]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-463/igt@kms_flip_scaled_crc@flip-32bpp-linear-to-64bpp-linear-upscaling.html
   [73]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_flip_scaled_crc@flip-32bpp-linear-to-64bpp-linear-upscaling.html

  * igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-16bpp-ytile-downscaling@pipe-a-valid-mode:
    - shard-dg2-set2:     NOTRUN -> [SKIP][74] ([Intel XE#455]) +6 other tests skip
   [74]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-435/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-16bpp-ytile-downscaling@pipe-a-valid-mode.html

  * igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-pri-shrfb-draw-mmap-wc:
    - shard-bmg:          NOTRUN -> [SKIP][75] ([Intel XE#2311]) +8 other tests skip
   [75]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-4/igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-pri-shrfb-draw-mmap-wc.html

  * igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-spr-indfb-fullscreen:
    - shard-dg2-set2:     NOTRUN -> [SKIP][76] ([Intel XE#651]) +4 other tests skip
   [76]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-435/igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-spr-indfb-fullscreen.html

  * igt@kms_frontbuffer_tracking@fbc-1p-primscrn-cur-indfb-draw-blt:
    - shard-bmg:          NOTRUN -> [FAIL][77] ([Intel XE#2333])
   [77]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-1/igt@kms_frontbuffer_tracking@fbc-1p-primscrn-cur-indfb-draw-blt.html

  * igt@kms_frontbuffer_tracking@fbc-modesetfrombusy:
    - shard-bmg:          NOTRUN -> [DMESG-FAIL][78] ([Intel XE#3468]) +2 other tests dmesg-fail
   [78]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-8/igt@kms_frontbuffer_tracking@fbc-modesetfrombusy.html

  * igt@kms_frontbuffer_tracking@fbc-tiling-y:
    - shard-bmg:          NOTRUN -> [SKIP][79] ([Intel XE#2352])
   [79]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-4/igt@kms_frontbuffer_tracking@fbc-tiling-y.html

  * igt@kms_frontbuffer_tracking@fbcdrrs-1p-primscrn-cur-indfb-draw-render:
    - shard-dg2-set2:     NOTRUN -> [SKIP][80] ([Intel XE#2136]) +27 other tests skip
   [80]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_frontbuffer_tracking@fbcdrrs-1p-primscrn-cur-indfb-draw-render.html

  * igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-shrfb-pgflip-blt:
    - shard-dg2-set2:     NOTRUN -> [SKIP][81] ([Intel XE#2136] / [Intel XE#2351]) +9 other tests skip
   [81]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-shrfb-pgflip-blt.html

  * igt@kms_frontbuffer_tracking@fbcdrrs-tiling-y:
    - shard-dg2-set2:     NOTRUN -> [SKIP][82] ([Intel XE#658])
   [82]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-435/igt@kms_frontbuffer_tracking@fbcdrrs-tiling-y.html

  * igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-indfb-plflip-blt:
    - shard-bmg:          NOTRUN -> [SKIP][83] ([Intel XE#2313]) +8 other tests skip
   [83]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-4/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-indfb-plflip-blt.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-spr-indfb-onoff:
    - shard-dg2-set2:     NOTRUN -> [SKIP][84] ([Intel XE#653]) +1 other test skip
   [84]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-435/igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-spr-indfb-onoff.html

  * igt@kms_hdmi_inject@inject-audio:
    - shard-dg2-set2:     [PASS][85] -> [SKIP][86] ([Intel XE#2423] / [i915#2575]) +82 other tests skip
   [85]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-435/igt@kms_hdmi_inject@inject-audio.html
   [86]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_hdmi_inject@inject-audio.html

  * igt@kms_hdr@static-swap:
    - shard-bmg:          [PASS][87] -> [DMESG-WARN][88] ([Intel XE#3468]) +4 other tests dmesg-warn
   [87]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-bmg-4/igt@kms_hdr@static-swap.html
   [88]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-7/igt@kms_hdr@static-swap.html

  * igt@kms_joiner@invalid-modeset-ultra-joiner:
    - shard-dg2-set2:     NOTRUN -> [SKIP][89] ([Intel XE#2927])
   [89]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-435/igt@kms_joiner@invalid-modeset-ultra-joiner.html

  * igt@kms_plane_scaling@planes-downscale-factor-0-25-upscale-20x20@pipe-b:
    - shard-dg2-set2:     NOTRUN -> [SKIP][90] ([Intel XE#2763]) +11 other tests skip
   [90]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-435/igt@kms_plane_scaling@planes-downscale-factor-0-25-upscale-20x20@pipe-b.html

  * igt@kms_plane_scaling@planes-downscale-factor-0-25-upscale-factor-0-25@pipe-d:
    - shard-dg2-set2:     NOTRUN -> [SKIP][91] ([Intel XE#2763] / [Intel XE#455]) +3 other tests skip
   [91]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-463/igt@kms_plane_scaling@planes-downscale-factor-0-25-upscale-factor-0-25@pipe-d.html

  * igt@kms_plane_scaling@planes-upscale-factor-0-25-downscale-factor-0-75:
    - shard-bmg:          NOTRUN -> [SKIP][92] ([Intel XE#2763]) +4 other tests skip
   [92]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-4/igt@kms_plane_scaling@planes-upscale-factor-0-25-downscale-factor-0-75.html

  * igt@kms_pm_dc@deep-pkgc:
    - shard-lnl:          [PASS][93] -> [FAIL][94] ([Intel XE#2029])
   [93]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-lnl-3/igt@kms_pm_dc@deep-pkgc.html
   [94]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-lnl-5/igt@kms_pm_dc@deep-pkgc.html

  * igt@kms_pm_rpm@modeset-non-lpsp-stress-no-wait:
    - shard-dg2-set2:     NOTRUN -> [SKIP][95] ([Intel XE#2446]) +1 other test skip
   [95]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_pm_rpm@modeset-non-lpsp-stress-no-wait.html

  * igt@kms_pm_rpm@system-suspend-modeset:
    - shard-dg2-set2:     [PASS][96] -> [SKIP][97] ([Intel XE#2446]) +1 other test skip
   [96]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-434/igt@kms_pm_rpm@system-suspend-modeset.html
   [97]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_pm_rpm@system-suspend-modeset.html

  * igt@kms_properties@plane-properties-legacy:
    - shard-adlp:         [PASS][98] -> [DMESG-WARN][99] ([Intel XE#3086]) +1 other test dmesg-warn
   [98]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-adlp-4/igt@kms_properties@plane-properties-legacy.html
   [99]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-adlp-4/igt@kms_properties@plane-properties-legacy.html

  * igt@kms_psr2_sf@psr2-cursor-plane-move-continuous-exceed-fully-sf:
    - shard-bmg:          NOTRUN -> [SKIP][100] ([Intel XE#1489]) +1 other test skip
   [100]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-1/igt@kms_psr2_sf@psr2-cursor-plane-move-continuous-exceed-fully-sf.html

  * igt@kms_psr@fbc-psr-primary-page-flip:
    - shard-bmg:          NOTRUN -> [SKIP][101] ([Intel XE#2234] / [Intel XE#2850]) +2 other tests skip
   [101]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-4/igt@kms_psr@fbc-psr-primary-page-flip.html

  * igt@kms_psr@fbc-psr2-sprite-plane-onoff:
    - shard-dg2-set2:     NOTRUN -> [SKIP][102] ([Intel XE#2850] / [Intel XE#929]) +3 other tests skip
   [102]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-435/igt@kms_psr@fbc-psr2-sprite-plane-onoff.html

  * igt@kms_rotation_crc@primary-rotation-270:
    - shard-bmg:          NOTRUN -> [SKIP][103] ([Intel XE#3414]) +1 other test skip
   [103]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-4/igt@kms_rotation_crc@primary-rotation-270.html

  * igt@xe_compute_preempt@compute-threadgroup-preempt@engine-drm_xe_engine_class_compute:
    - shard-dg2-set2:     NOTRUN -> [SKIP][104] ([Intel XE#1280] / [Intel XE#455])
   [104]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-463/igt@xe_compute_preempt@compute-threadgroup-preempt@engine-drm_xe_engine_class_compute.html

  * igt@xe_eudebug_online@breakpoint-many-sessions-tiles:
    - shard-dg2-set2:     NOTRUN -> [SKIP][105] ([Intel XE#2905]) +1 other test skip
   [105]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-435/igt@xe_eudebug_online@breakpoint-many-sessions-tiles.html

  * igt@xe_eudebug_online@debugger-reopen:
    - shard-bmg:          NOTRUN -> [SKIP][106] ([Intel XE#2905])
   [106]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-4/igt@xe_eudebug_online@debugger-reopen.html

  * igt@xe_exec_basic@many-execqueues-many-vm-bindexecqueue:
    - shard-bmg:          [PASS][107] -> [FAIL][108] ([Intel XE#3497])
   [107]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-bmg-8/igt@xe_exec_basic@many-execqueues-many-vm-bindexecqueue.html
   [108]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-7/igt@xe_exec_basic@many-execqueues-many-vm-bindexecqueue.html

  * igt@xe_exec_basic@multigpu-many-execqueues-many-vm-basic-defer-mmap:
    - shard-bmg:          NOTRUN -> [SKIP][109] ([Intel XE#2322]) +2 other tests skip
   [109]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-4/igt@xe_exec_basic@multigpu-many-execqueues-many-vm-basic-defer-mmap.html

  * igt@xe_exec_basic@no-exec-basic-defer-bind:
    - shard-dg2-set2:     [PASS][110] -> [SKIP][111] ([Intel XE#1130]) +161 other tests skip
   [110]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-434/igt@xe_exec_basic@no-exec-basic-defer-bind.html
   [111]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@xe_exec_basic@no-exec-basic-defer-bind.html

  * igt@xe_exec_fault_mode@many-execqueues-bindexecqueue-imm:
    - shard-dg2-set2:     NOTRUN -> [SKIP][112] ([Intel XE#288])
   [112]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-435/igt@xe_exec_fault_mode@many-execqueues-bindexecqueue-imm.html

  * igt@xe_exec_mix_modes@exec-simple-batch-store-lr:
    - shard-dg2-set2:     NOTRUN -> [SKIP][113] ([Intel XE#1130]) +30 other tests skip
   [113]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@xe_exec_mix_modes@exec-simple-batch-store-lr.html

  * igt@xe_exec_reset@cat-error:
    - shard-adlp:         [PASS][114] -> [DMESG-WARN][115] ([Intel XE#358])
   [114]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-adlp-3/igt@xe_exec_reset@cat-error.html
   [115]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-adlp-6/igt@xe_exec_reset@cat-error.html

  * igt@xe_fault_injection@inject-fault-probe-function-xe_guc_log_init:
    - shard-dg2-set2:     NOTRUN -> [DMESG-WARN][116] ([Intel XE#3343])
   [116]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-435/igt@xe_fault_injection@inject-fault-probe-function-xe_guc_log_init.html

  * igt@xe_fault_injection@inject-fault-probe-function-xe_guc_relay_init:
    - shard-bmg:          NOTRUN -> [DMESG-WARN][117] ([Intel XE#3343])
   [117]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-1/igt@xe_fault_injection@inject-fault-probe-function-xe_guc_relay_init.html

  * igt@xe_media_fill@media-fill:
    - shard-bmg:          NOTRUN -> [SKIP][118] ([Intel XE#2459] / [Intel XE#2596])
   [118]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-1/igt@xe_media_fill@media-fill.html

  * igt@xe_module_load@reload-no-display:
    - shard-dg2-set2:     NOTRUN -> [FAIL][119] ([Intel XE#2136])
   [119]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@xe_module_load@reload-no-display.html

  * igt@xe_oa@enable-disable@rcs-0:
    - shard-lnl:          NOTRUN -> [DMESG-WARN][120] ([Intel XE#3466])
   [120]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-lnl-7/igt@xe_oa@enable-disable@rcs-0.html

  * igt@xe_peer2peer@write@write-gpua-vram01-gpub-system-p2p:
    - shard-dg2-set2:     NOTRUN -> [FAIL][121] ([Intel XE#1173])
   [121]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-464/igt@xe_peer2peer@write@write-gpua-vram01-gpub-system-p2p.html

  * igt@xe_pm@s4-multiple-execs:
    - shard-lnl:          [PASS][122] -> [ABORT][123] ([Intel XE#1358] / [Intel XE#1607] / [Intel XE#1794])
   [122]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-lnl-5/igt@xe_pm@s4-multiple-execs.html
   [123]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-lnl-2/igt@xe_pm@s4-multiple-execs.html

  * igt@xe_pm@s4-vm-bind-prefetch:
    - shard-adlp:         [PASS][124] -> [ABORT][125] ([Intel XE#1607] / [Intel XE#1794])
   [124]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-adlp-8/igt@xe_pm@s4-vm-bind-prefetch.html
   [125]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-adlp-9/igt@xe_pm@s4-vm-bind-prefetch.html

  * igt@xe_prime_self_import@export-vs-gem_close-race:
    - shard-lnl:          [PASS][126] -> [DMESG-WARN][127] ([Intel XE#3466])
   [126]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-lnl-8/igt@xe_prime_self_import@export-vs-gem_close-race.html
   [127]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-lnl-1/igt@xe_prime_self_import@export-vs-gem_close-race.html

  * igt@xe_query@multigpu-query-mem-usage:
    - shard-bmg:          NOTRUN -> [SKIP][128] ([Intel XE#944])
   [128]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-8/igt@xe_query@multigpu-query-mem-usage.html

  * igt@xe_sriov_flr@flr-each-isolation:
    - shard-adlp:         [PASS][129] -> [FAIL][130] ([Intel XE#3507])
   [129]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-adlp-2/igt@xe_sriov_flr@flr-each-isolation.html
   [130]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-adlp-8/igt@xe_sriov_flr@flr-each-isolation.html

  
#### Possible fixes ####

  * igt@core_getversion@all-cards:
    - shard-dg2-set2:     [FAIL][131] ([Intel XE#3440]) -> [PASS][132]
   [131]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@core_getversion@all-cards.html
   [132]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-463/igt@core_getversion@all-cards.html

  * igt@core_setmaster@master-drop-set-user:
    - shard-dg2-set2:     [FAIL][133] ([Intel XE#3339]) -> [PASS][134]
   [133]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@core_setmaster@master-drop-set-user.html
   [134]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-436/igt@core_setmaster@master-drop-set-user.html

  * igt@fbdev@read:
    - shard-dg2-set2:     [SKIP][135] ([Intel XE#2134]) -> [PASS][136] +2 other tests pass
   [135]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@fbdev@read.html
   [136]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-435/igt@fbdev@read.html

  * igt@kms_atomic@plane-invalid-params-fence:
    - shard-dg2-set2:     [SKIP][137] ([Intel XE#2423] / [i915#2575]) -> [PASS][138] +95 other tests pass
   [137]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@kms_atomic@plane-invalid-params-fence.html
   [138]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-433/igt@kms_atomic@plane-invalid-params-fence.html

  * igt@kms_atomic_transition@plane-toggle-modeset-transition:
    - shard-adlp:         [FAIL][139] ([Intel XE#1426]) -> [PASS][140] +1 other test pass
   [139]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-adlp-1/igt@kms_atomic_transition@plane-toggle-modeset-transition.html
   [140]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-adlp-1/igt@kms_atomic_transition@plane-toggle-modeset-transition.html

  * igt@kms_color@degamma@pipe-a-dp-2:
    - shard-bmg:          [DMESG-WARN][141] -> [PASS][142] +10 other tests pass
   [141]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-bmg-6/igt@kms_color@degamma@pipe-a-dp-2.html
   [142]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-4/igt@kms_color@degamma@pipe-a-dp-2.html

  * igt@kms_cursor_edge_walk@64x64-top-edge:
    - shard-lnl:          [FAIL][143] ([Intel XE#2577] / [Intel XE#3106]) -> [PASS][144] +1 other test pass
   [143]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-lnl-3/igt@kms_cursor_edge_walk@64x64-top-edge.html
   [144]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-lnl-5/igt@kms_cursor_edge_walk@64x64-top-edge.html

  * igt@kms_cursor_legacy@cursorb-vs-flipa-atomic-transitions-varying-size:
    - shard-bmg:          [DMESG-WARN][145] ([Intel XE#877]) -> [PASS][146] +1 other test pass
   [145]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-bmg-3/igt@kms_cursor_legacy@cursorb-vs-flipa-atomic-transitions-varying-size.html
   [146]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-7/igt@kms_cursor_legacy@cursorb-vs-flipa-atomic-transitions-varying-size.html

  * igt@kms_cursor_legacy@flip-vs-cursor-atomic:
    - shard-bmg:          [DMESG-FAIL][147] ([Intel XE#3468]) -> [PASS][148] +10 other tests pass
   [147]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-bmg-6/igt@kms_cursor_legacy@flip-vs-cursor-atomic.html
   [148]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-4/igt@kms_cursor_legacy@flip-vs-cursor-atomic.html

  * igt@kms_cursor_legacy@torture-bo@pipe-a:
    - shard-dg2-set2:     [DMESG-WARN][149] ([Intel XE#3184]) -> [PASS][150] +2 other tests pass
   [149]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-434/igt@kms_cursor_legacy@torture-bo@pipe-a.html
   [150]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-434/igt@kms_cursor_legacy@torture-bo@pipe-a.html

  * igt@kms_dp_aux_dev:
    - shard-dg2-set2:     [SKIP][151] ([Intel XE#2423]) -> [PASS][152] +2 other tests pass
   [151]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@kms_dp_aux_dev.html
   [152]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-463/igt@kms_dp_aux_dev.html

  * igt@kms_flip@2x-flip-vs-expired-vblank-interruptible@bc-dp2-hdmi-a3:
    - shard-bmg:          [FAIL][153] ([Intel XE#3486]) -> [PASS][154] +1 other test pass
   [153]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-bmg-2/igt@kms_flip@2x-flip-vs-expired-vblank-interruptible@bc-dp2-hdmi-a3.html
   [154]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-3/igt@kms_flip@2x-flip-vs-expired-vblank-interruptible@bc-dp2-hdmi-a3.html

  * igt@kms_flip@2x-plain-flip-fb-recreate-interruptible:
    - shard-bmg:          [FAIL][155] ([Intel XE#2882]) -> [PASS][156] +7 other tests pass
   [155]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-bmg-6/igt@kms_flip@2x-plain-flip-fb-recreate-interruptible.html
   [156]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-8/igt@kms_flip@2x-plain-flip-fb-recreate-interruptible.html

  * igt@kms_flip@flip-vs-absolute-wf_vblank:
    - shard-lnl:          [FAIL][157] ([Intel XE#886]) -> [PASS][158] +1 other test pass
   [157]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-lnl-3/igt@kms_flip@flip-vs-absolute-wf_vblank.html
   [158]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-lnl-7/igt@kms_flip@flip-vs-absolute-wf_vblank.html

  * igt@kms_flip@flip-vs-suspend@b-hdmi-a1:
    - shard-adlp:         [DMESG-WARN][159] ([Intel XE#2953] / [Intel XE#3086]) -> [PASS][160] +2 other tests pass
   [159]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-adlp-8/igt@kms_flip@flip-vs-suspend@b-hdmi-a1.html
   [160]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-adlp-2/igt@kms_flip@flip-vs-suspend@b-hdmi-a1.html

  * igt@kms_flip@flip-vs-suspend@c-dp2:
    - shard-bmg:          [DMESG-WARN][161] ([Intel XE#3451]) -> [PASS][162] +1 other test pass
   [161]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-bmg-6/igt@kms_flip@flip-vs-suspend@c-dp2.html
   [162]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-8/igt@kms_flip@flip-vs-suspend@c-dp2.html

  * igt@kms_frontbuffer_tracking@basic:
    - shard-dg2-set2:     [SKIP][163] ([Intel XE#2351]) -> [PASS][164]
   [163]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@kms_frontbuffer_tracking@basic.html
   [164]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-433/igt@kms_frontbuffer_tracking@basic.html

  * igt@kms_frontbuffer_tracking@fbc-1p-primscrn-pri-indfb-draw-blt:
    - shard-dg2-set2:     [SKIP][165] ([Intel XE#2136]) -> [PASS][166] +27 other tests pass
   [165]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@kms_frontbuffer_tracking@fbc-1p-primscrn-pri-indfb-draw-blt.html
   [166]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-436/igt@kms_frontbuffer_tracking@fbc-1p-primscrn-pri-indfb-draw-blt.html

  * igt@kms_frontbuffer_tracking@fbc-1p-primscrn-pri-indfb-draw-mmap-wc:
    - shard-dg2-set2:     [SKIP][167] ([Intel XE#2136] / [Intel XE#2351]) -> [PASS][168] +7 other tests pass
   [167]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@kms_frontbuffer_tracking@fbc-1p-primscrn-pri-indfb-draw-mmap-wc.html
   [168]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-463/igt@kms_frontbuffer_tracking@fbc-1p-primscrn-pri-indfb-draw-mmap-wc.html

  * igt@kms_plane_alpha_blend@alpha-7efc@pipe-a-dp-2:
    - shard-bmg:          [DMESG-FAIL][169] -> [PASS][170]
   [169]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-bmg-6/igt@kms_plane_alpha_blend@alpha-7efc@pipe-a-dp-2.html
   [170]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-4/igt@kms_plane_alpha_blend@alpha-7efc@pipe-a-dp-2.html

  * igt@kms_plane_scaling@planes-downscale-factor-0-75-upscale-20x20:
    - shard-lnl:          [DMESG-WARN][171] ([Intel XE#2566]) -> [PASS][172]
   [171]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-lnl-3/igt@kms_plane_scaling@planes-downscale-factor-0-75-upscale-20x20.html
   [172]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-lnl-4/igt@kms_plane_scaling@planes-downscale-factor-0-75-upscale-20x20.html

  * igt@kms_plane_scaling@planes-unity-scaling-downscale-factor-0-5:
    - shard-bmg:          [DMESG-WARN][173] ([Intel XE#2566]) -> [PASS][174]
   [173]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-bmg-6/igt@kms_plane_scaling@planes-unity-scaling-downscale-factor-0-5.html
   [174]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-8/igt@kms_plane_scaling@planes-unity-scaling-downscale-factor-0-5.html

  * igt@kms_plane_scaling@planes-upscale-20x20@pipe-d:
    - shard-adlp:         [DMESG-WARN][175] ([Intel XE#3086]) -> [PASS][176] +2 other tests pass
   [175]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-adlp-9/igt@kms_plane_scaling@planes-upscale-20x20@pipe-d.html
   [176]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-adlp-3/igt@kms_plane_scaling@planes-upscale-20x20@pipe-d.html

  * igt@kms_pm_rpm@cursor:
    - shard-lnl:          [DMESG-WARN][177] ([Intel XE#3184]) -> [PASS][178]
   [177]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-lnl-7/igt@kms_pm_rpm@cursor.html
   [178]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-lnl-8/igt@kms_pm_rpm@cursor.html

  * igt@kms_pm_rpm@cursor-dpms:
    - shard-dg2-set2:     [SKIP][179] ([Intel XE#2446]) -> [PASS][180] +3 other tests pass
   [179]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@kms_pm_rpm@cursor-dpms.html
   [180]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-436/igt@kms_pm_rpm@cursor-dpms.html

  * igt@kms_pm_rpm@universal-planes:
    - shard-lnl:          [DMESG-WARN][181] ([Intel XE#2042]) -> [PASS][182]
   [181]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-lnl-3/igt@kms_pm_rpm@universal-planes.html
   [182]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-lnl-4/igt@kms_pm_rpm@universal-planes.html

  * igt@kms_rotation_crc@primary-x-tiled-reflect-x-180:
    - shard-lnl:          [DMESG-WARN][183] -> [PASS][184] +94 other tests pass
   [183]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-lnl-3/igt@kms_rotation_crc@primary-x-tiled-reflect-x-180.html
   [184]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-lnl-4/igt@kms_rotation_crc@primary-x-tiled-reflect-x-180.html

  * igt@kms_rotation_crc@sprite-rotation-180:
    - shard-lnl:          [DMESG-WARN][185] ([Intel XE#3466]) -> [PASS][186]
   [185]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-lnl-3/igt@kms_rotation_crc@sprite-rotation-180.html
   [186]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-lnl-4/igt@kms_rotation_crc@sprite-rotation-180.html

  * igt@kms_universal_plane@cursor-fb-leak@pipe-a-edp-1:
    - shard-lnl:          [FAIL][187] ([Intel XE#899]) -> [PASS][188] +1 other test pass
   [187]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-lnl-6/igt@kms_universal_plane@cursor-fb-leak@pipe-a-edp-1.html
   [188]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-lnl-6/igt@kms_universal_plane@cursor-fb-leak@pipe-a-edp-1.html

  * igt@kms_vrr@max-min:
    - shard-lnl:          [FAIL][189] ([Intel XE#1522]) -> [PASS][190] +1 other test pass
   [189]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-lnl-4/igt@kms_vrr@max-min.html
   [190]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-lnl-8/igt@kms_vrr@max-min.html

  * igt@xe_drm_fdinfo@utilization-others-idle:
    - shard-bmg:          [INCOMPLETE][191] -> [PASS][192] +1 other test pass
   [191]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-bmg-1/igt@xe_drm_fdinfo@utilization-others-idle.html
   [192]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-1/igt@xe_drm_fdinfo@utilization-others-idle.html

  * igt@xe_exec_balancer@once-parallel-rebind:
    - shard-dg2-set2:     [SKIP][193] ([Intel XE#1130]) -> [PASS][194] +174 other tests pass
   [193]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@xe_exec_balancer@once-parallel-rebind.html
   [194]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-436/igt@xe_exec_balancer@once-parallel-rebind.html

  * igt@xe_exec_threads@threads-mixed-shared-vm-userptr-invalidate:
    - shard-adlp:         [DMESG-FAIL][195] ([Intel XE#3371]) -> [PASS][196]
   [195]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-adlp-8/igt@xe_exec_threads@threads-mixed-shared-vm-userptr-invalidate.html
   [196]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-adlp-9/igt@xe_exec_threads@threads-mixed-shared-vm-userptr-invalidate.html

  * igt@xe_fault_injection@vm-create-fail-xe_pt_create:
    - shard-bmg:          [DMESG-WARN][197] ([Intel XE#3467]) -> [PASS][198] +1 other test pass
   [197]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-bmg-6/igt@xe_fault_injection@vm-create-fail-xe_pt_create.html
   [198]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-4/igt@xe_fault_injection@vm-create-fail-xe_pt_create.html

  * igt@xe_live_ktest@xe_bo@xe_bo_shrink_kunit:
    - shard-bmg:          [INCOMPLETE][199] ([Intel XE#2998]) -> [PASS][200] +1 other test pass
   [199]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-bmg-1/igt@xe_live_ktest@xe_bo@xe_bo_shrink_kunit.html
   [200]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-4/igt@xe_live_ktest@xe_bo@xe_bo_shrink_kunit.html

  * igt@xe_pm@s2idle-vm-bind-unbind-all:
    - shard-lnl:          [DMESG-WARN][201] ([Intel XE#1616]) -> [PASS][202]
   [201]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-lnl-3/igt@xe_pm@s2idle-vm-bind-unbind-all.html
   [202]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-lnl-4/igt@xe_pm@s2idle-vm-bind-unbind-all.html

  * igt@xe_pm_residency@toggle-gt-c6:
    - shard-adlp:         [FAIL][203] ([Intel XE#958]) -> [PASS][204]
   [203]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-adlp-9/igt@xe_pm_residency@toggle-gt-c6.html
   [204]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-adlp-4/igt@xe_pm_residency@toggle-gt-c6.html
    - shard-lnl:          [FAIL][205] ([Intel XE#958]) -> [PASS][206]
   [205]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-lnl-7/igt@xe_pm_residency@toggle-gt-c6.html
   [206]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-lnl-4/igt@xe_pm_residency@toggle-gt-c6.html

  * igt@xe_vm@large-userptr-split-misaligned-binds-67108864:
    - shard-bmg:          [DMESG-WARN][207] ([Intel XE#3468]) -> [PASS][208] +45 other tests pass
   [207]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-bmg-6/igt@xe_vm@large-userptr-split-misaligned-binds-67108864.html
   [208]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-4/igt@xe_vm@large-userptr-split-misaligned-binds-67108864.html

  
#### Warnings ####

  * igt@core_hotunplug@hotrebind-lateclose:
    - shard-dg2-set2:     [SKIP][209] ([Intel XE#1885]) -> [DMESG-WARN][210] ([Intel XE#3468])
   [209]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@core_hotunplug@hotrebind-lateclose.html
   [210]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-435/igt@core_hotunplug@hotrebind-lateclose.html

  * igt@core_hotunplug@hotunplug-rescan:
    - shard-dg2-set2:     [DMESG-WARN][211] ([Intel XE#3468]) -> [SKIP][212] ([Intel XE#1885]) +1 other test skip
   [211]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-434/igt@core_hotunplug@hotunplug-rescan.html
   [212]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@core_hotunplug@hotunplug-rescan.html

  * igt@core_hotunplug@unplug-rescan:
    - shard-dg2-set2:     [INCOMPLETE][213] ([Intel XE#1195]) -> [SKIP][214] ([Intel XE#1885])
   [213]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-436/igt@core_hotunplug@unplug-rescan.html
   [214]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@core_hotunplug@unplug-rescan.html

  * igt@kms_addfb_basic@addfb25-y-tiled-small-legacy:
    - shard-dg2-set2:     [SKIP][215] ([Intel XE#623]) -> [SKIP][216] ([Intel XE#2423] / [i915#2575])
   [215]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-436/igt@kms_addfb_basic@addfb25-y-tiled-small-legacy.html
   [216]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_addfb_basic@addfb25-y-tiled-small-legacy.html

  * igt@kms_async_flips@async-flip-with-page-flip-events@pipe-a-edp-1-linear:
    - shard-lnl:          [DMESG-FAIL][217] -> [FAIL][218] ([Intel XE#911]) +3 other tests fail
   [217]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-lnl-3/igt@kms_async_flips@async-flip-with-page-flip-events@pipe-a-edp-1-linear.html
   [218]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-lnl-4/igt@kms_async_flips@async-flip-with-page-flip-events@pipe-a-edp-1-linear.html

  * igt@kms_async_flips@invalid-async-flip:
    - shard-dg2-set2:     [SKIP][219] ([Intel XE#873]) -> [SKIP][220] ([Intel XE#2423] / [i915#2575])
   [219]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-435/igt@kms_async_flips@invalid-async-flip.html
   [220]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_async_flips@invalid-async-flip.html

  * igt@kms_big_fb@4-tiled-64bpp-rotate-270:
    - shard-dg2-set2:     [INCOMPLETE][221] ([Intel XE#1195] / [Intel XE#402]) -> [SKIP][222] ([Intel XE#2136] / [Intel XE#2351])
   [221]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-435/igt@kms_big_fb@4-tiled-64bpp-rotate-270.html
   [222]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_big_fb@4-tiled-64bpp-rotate-270.html

  * igt@kms_big_fb@4-tiled-8bpp-rotate-270:
    - shard-dg2-set2:     [SKIP][223] ([Intel XE#316]) -> [SKIP][224] ([Intel XE#2136]) +1 other test skip
   [223]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-434/igt@kms_big_fb@4-tiled-8bpp-rotate-270.html
   [224]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_big_fb@4-tiled-8bpp-rotate-270.html

  * igt@kms_big_fb@4-tiled-8bpp-rotate-90:
    - shard-dg2-set2:     [SKIP][225] ([Intel XE#2136] / [Intel XE#2351]) -> [SKIP][226] ([Intel XE#316]) +1 other test skip
   [225]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@kms_big_fb@4-tiled-8bpp-rotate-90.html
   [226]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-433/igt@kms_big_fb@4-tiled-8bpp-rotate-90.html

  * igt@kms_big_fb@4-tiled-addfb-size-overflow:
    - shard-dg2-set2:     [DMESG-WARN][227] -> [SKIP][228] ([Intel XE#2136])
   [227]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-464/igt@kms_big_fb@4-tiled-addfb-size-overflow.html
   [228]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_big_fb@4-tiled-addfb-size-overflow.html

  * igt@kms_big_fb@linear-8bpp-rotate-180:
    - shard-bmg:          [DMESG-FAIL][229] ([Intel XE#3468]) -> [DMESG-WARN][230] ([Intel XE#3468]) +3 other tests dmesg-warn
   [229]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-bmg-3/igt@kms_big_fb@linear-8bpp-rotate-180.html
   [230]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-3/igt@kms_big_fb@linear-8bpp-rotate-180.html

  * igt@kms_big_fb@x-tiled-16bpp-rotate-270:
    - shard-dg2-set2:     [SKIP][231] ([Intel XE#2136]) -> [SKIP][232] ([Intel XE#316]) +4 other tests skip
   [231]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@kms_big_fb@x-tiled-16bpp-rotate-270.html
   [232]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-436/igt@kms_big_fb@x-tiled-16bpp-rotate-270.html

  * igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-0:
    - shard-bmg:          [DMESG-WARN][233] ([Intel XE#3468]) -> [DMESG-FAIL][234] ([Intel XE#3468]) +1 other test dmesg-fail
   [233]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-bmg-8/igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-0.html
   [234]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-7/igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-0.html

  * igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-0:
    - shard-adlp:         [DMESG-FAIL][235] ([Intel XE#1033]) -> [FAIL][236] ([Intel XE#1874])
   [235]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-adlp-4/igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-0.html
   [236]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-adlp-8/igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-0.html

  * igt@kms_big_fb@y-tiled-16bpp-rotate-180:
    - shard-dg2-set2:     [SKIP][237] ([Intel XE#1124]) -> [SKIP][238] ([Intel XE#2136]) +5 other tests skip
   [237]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-434/igt@kms_big_fb@y-tiled-16bpp-rotate-180.html
   [238]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_big_fb@y-tiled-16bpp-rotate-180.html

  * igt@kms_big_fb@y-tiled-64bpp-rotate-180:
    - shard-dg2-set2:     [SKIP][239] ([Intel XE#1124]) -> [SKIP][240] ([Intel XE#2136] / [Intel XE#2351]) +5 other tests skip
   [239]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-463/igt@kms_big_fb@y-tiled-64bpp-rotate-180.html
   [240]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_big_fb@y-tiled-64bpp-rotate-180.html

  * igt@kms_big_fb@y-tiled-addfb-size-overflow:
    - shard-dg2-set2:     [SKIP][241] ([Intel XE#2136]) -> [SKIP][242] ([Intel XE#610]) +1 other test skip
   [241]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@kms_big_fb@y-tiled-addfb-size-overflow.html
   [242]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-433/igt@kms_big_fb@y-tiled-addfb-size-overflow.html

  * igt@kms_big_fb@yf-tiled-16bpp-rotate-0:
    - shard-dg2-set2:     [SKIP][243] ([Intel XE#2136]) -> [SKIP][244] ([Intel XE#1124]) +6 other tests skip
   [243]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@kms_big_fb@yf-tiled-16bpp-rotate-0.html
   [244]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-435/igt@kms_big_fb@yf-tiled-16bpp-rotate-0.html

  * igt@kms_big_fb@yf-tiled-64bpp-rotate-180:
    - shard-dg2-set2:     [SKIP][245] ([Intel XE#2136] / [Intel XE#2351]) -> [SKIP][246] ([Intel XE#1124]) +3 other tests skip
   [245]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@kms_big_fb@yf-tiled-64bpp-rotate-180.html
   [246]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-433/igt@kms_big_fb@yf-tiled-64bpp-rotate-180.html

  * igt@kms_big_fb@yf-tiled-addfb-size-offset-overflow:
    - shard-dg2-set2:     [SKIP][247] ([Intel XE#607]) -> [SKIP][248] ([Intel XE#2136])
   [247]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-436/igt@kms_big_fb@yf-tiled-addfb-size-offset-overflow.html
   [248]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_big_fb@yf-tiled-addfb-size-offset-overflow.html

  * igt@kms_bw@connected-linear-tiling-2-displays-2160x1440p:
    - shard-dg2-set2:     [SKIP][249] ([Intel XE#2423] / [i915#2575]) -> [SKIP][250] ([Intel XE#367]) +4 other tests skip
   [249]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@kms_bw@connected-linear-tiling-2-displays-2160x1440p.html
   [250]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-436/igt@kms_bw@connected-linear-tiling-2-displays-2160x1440p.html

  * igt@kms_bw@connected-linear-tiling-3-displays-1920x1080p:
    - shard-dg2-set2:     [SKIP][251] ([Intel XE#2423] / [i915#2575]) -> [SKIP][252] ([Intel XE#2191])
   [251]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@kms_bw@connected-linear-tiling-3-displays-1920x1080p.html
   [252]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-463/igt@kms_bw@connected-linear-tiling-3-displays-1920x1080p.html

  * igt@kms_bw@connected-linear-tiling-3-displays-2160x1440p:
    - shard-dg2-set2:     [SKIP][253] ([Intel XE#2191]) -> [SKIP][254] ([Intel XE#2423] / [i915#2575])
   [253]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-434/igt@kms_bw@connected-linear-tiling-3-displays-2160x1440p.html
   [254]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_bw@connected-linear-tiling-3-displays-2160x1440p.html

  * igt@kms_bw@linear-tiling-2-displays-3840x2160p:
    - shard-dg2-set2:     [SKIP][255] ([Intel XE#367]) -> [SKIP][256] ([Intel XE#2423] / [i915#2575]) +3 other tests skip
   [255]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-435/igt@kms_bw@linear-tiling-2-displays-3840x2160p.html
   [256]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_bw@linear-tiling-2-displays-3840x2160p.html

  * igt@kms_ccs@bad-pixel-format-yf-tiled-ccs:
    - shard-dg2-set2:     [SKIP][257] ([Intel XE#2136]) -> [SKIP][258] ([Intel XE#455] / [Intel XE#787]) +15 other tests skip
   [257]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@kms_ccs@bad-pixel-format-yf-tiled-ccs.html
   [258]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-433/igt@kms_ccs@bad-pixel-format-yf-tiled-ccs.html

  * igt@kms_ccs@bad-rotation-90-y-tiled-gen12-rc-ccs:
    - shard-dg2-set2:     [SKIP][259] ([Intel XE#455] / [Intel XE#787]) -> [SKIP][260] ([Intel XE#2136] / [Intel XE#2351])
   [259]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-436/igt@kms_ccs@bad-rotation-90-y-tiled-gen12-rc-ccs.html
   [260]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_ccs@bad-rotation-90-y-tiled-gen12-rc-ccs.html

  * igt@kms_ccs@crc-primary-rotation-180-y-tiled-gen12-rc-ccs-cc:
    - shard-dg2-set2:     [SKIP][261] ([Intel XE#2136] / [Intel XE#2351]) -> [SKIP][262] ([Intel XE#455] / [Intel XE#787]) +2 other tests skip
   [261]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@kms_ccs@crc-primary-rotation-180-y-tiled-gen12-rc-ccs-cc.html
   [262]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-436/igt@kms_ccs@crc-primary-rotation-180-y-tiled-gen12-rc-ccs-cc.html

  * igt@kms_ccs@random-ccs-data-4-tiled-mtl-rc-ccs-cc:
    - shard-dg2-set2:     [SKIP][263] ([Intel XE#455] / [Intel XE#787]) -> [SKIP][264] ([Intel XE#2136]) +11 other tests skip
   [263]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-434/igt@kms_ccs@random-ccs-data-4-tiled-mtl-rc-ccs-cc.html
   [264]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_ccs@random-ccs-data-4-tiled-mtl-rc-ccs-cc.html

  * igt@kms_cdclk@mode-transition-all-outputs:
    - shard-dg2-set2:     [SKIP][265] ([Intel XE#2136] / [Intel XE#2351]) -> [SKIP][266] ([Intel XE#314])
   [265]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@kms_cdclk@mode-transition-all-outputs.html
   [266]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-433/igt@kms_cdclk@mode-transition-all-outputs.html

  * igt@kms_chamelium_color@ctm-0-25:
    - shard-dg2-set2:     [SKIP][267] ([Intel XE#306]) -> [SKIP][268] ([Intel XE#2423] / [i915#2575])
   [267]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-435/igt@kms_chamelium_color@ctm-0-25.html
   [268]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_chamelium_color@ctm-0-25.html

  * igt@kms_chamelium_color@ctm-negative:
    - shard-dg2-set2:     [SKIP][269] ([Intel XE#2423] / [i915#2575]) -> [SKIP][270] ([Intel XE#306]) +1 other test skip
   [269]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@kms_chamelium_color@ctm-negative.html
   [270]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-463/igt@kms_chamelium_color@ctm-negative.html

  * igt@kms_chamelium_hpd@hdmi-hpd-for-each-pipe:
    - shard-dg2-set2:     [SKIP][271] ([Intel XE#2423] / [i915#2575]) -> [SKIP][272] ([Intel XE#373]) +12 other tests skip
   [271]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@kms_chamelium_hpd@hdmi-hpd-for-each-pipe.html
   [272]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-435/igt@kms_chamelium_hpd@hdmi-hpd-for-each-pipe.html

  * igt@kms_chamelium_hpd@hdmi-hpd-with-enabled-mode:
    - shard-dg2-set2:     [SKIP][273] ([Intel XE#373]) -> [SKIP][274] ([Intel XE#2423] / [i915#2575]) +10 other tests skip
   [273]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-436/igt@kms_chamelium_hpd@hdmi-hpd-with-enabled-mode.html
   [274]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_chamelium_hpd@hdmi-hpd-with-enabled-mode.html

  * igt@kms_content_protection@atomic-dpms:
    - shard-dg2-set2:     [FAIL][275] ([Intel XE#1178]) -> [SKIP][276] ([Intel XE#2423] / [i915#2575])
   [275]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-464/igt@kms_content_protection@atomic-dpms.html
   [276]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_content_protection@atomic-dpms.html

  * igt@kms_content_protection@lic-type-0:
    - shard-dg2-set2:     [SKIP][277] ([Intel XE#2423] / [i915#2575]) -> [FAIL][278] ([Intel XE#1178]) +2 other tests fail
   [277]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@kms_content_protection@lic-type-0.html
   [278]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-463/igt@kms_content_protection@lic-type-0.html

  * igt@kms_cursor_crc@cursor-offscreen-512x170:
    - shard-dg2-set2:     [SKIP][279] ([Intel XE#2423] / [i915#2575]) -> [SKIP][280] ([Intel XE#308])
   [279]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@kms_cursor_crc@cursor-offscreen-512x170.html
   [280]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-463/igt@kms_cursor_crc@cursor-offscreen-512x170.html

  * igt@kms_cursor_crc@cursor-rapid-movement-512x170:
    - shard-dg2-set2:     [SKIP][281] ([Intel XE#308]) -> [SKIP][282] ([Intel XE#2423] / [i915#2575])
   [281]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-463/igt@kms_cursor_crc@cursor-rapid-movement-512x170.html
   [282]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_cursor_crc@cursor-rapid-movement-512x170.html

  * igt@kms_cursor_crc@cursor-sliding-max-size:
    - shard-dg2-set2:     [SKIP][283] ([Intel XE#2423] / [i915#2575]) -> [SKIP][284] ([Intel XE#455]) +3 other tests skip
   [283]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@kms_cursor_crc@cursor-sliding-max-size.html
   [284]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-433/igt@kms_cursor_crc@cursor-sliding-max-size.html

  * igt@kms_cursor_legacy@basic-busy-flip-before-cursor-atomic:
    - shard-dg2-set2:     [SKIP][285] ([Intel XE#2423] / [i915#2575]) -> [SKIP][286] ([Intel XE#323]) +1 other test skip
   [285]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@kms_cursor_legacy@basic-busy-flip-before-cursor-atomic.html
   [286]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-463/igt@kms_cursor_legacy@basic-busy-flip-before-cursor-atomic.html

  * igt@kms_dirtyfb@psr-dirtyfb-ioctl:
    - shard-dg2-set2:     [SKIP][287] ([Intel XE#455]) -> [SKIP][288] ([Intel XE#2136] / [Intel XE#2351]) +1 other test skip
   [287]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-434/igt@kms_dirtyfb@psr-dirtyfb-ioctl.html
   [288]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_dirtyfb@psr-dirtyfb-ioctl.html

  * igt@kms_display_modes@mst-extended-mode-negative:
    - shard-dg2-set2:     [SKIP][289] ([Intel XE#307]) -> [SKIP][290] ([Intel XE#2423] / [i915#2575])
   [289]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-463/igt@kms_display_modes@mst-extended-mode-negative.html
   [290]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_display_modes@mst-extended-mode-negative.html

  * igt@kms_draw_crc@draw-method-mmap-wc@rgb565-4tiled:
    - shard-bmg:          [INCOMPLETE][291] ([Intel XE#3468]) -> [DMESG-FAIL][292] ([Intel XE#3468])
   [291]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-bmg-4/igt@kms_draw_crc@draw-method-mmap-wc@rgb565-4tiled.html
   [292]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-7/igt@kms_draw_crc@draw-method-mmap-wc@rgb565-4tiled.html

  * igt@kms_dsc@dsc-with-bpc:
    - shard-dg2-set2:     [SKIP][293] ([Intel XE#455]) -> [SKIP][294] ([Intel XE#2136]) +3 other tests skip
   [293]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-435/igt@kms_dsc@dsc-with-bpc.html
   [294]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_dsc@dsc-with-bpc.html

  * igt@kms_feature_discovery@display-4x:
    - shard-dg2-set2:     [SKIP][295] ([Intel XE#2423] / [i915#2575]) -> [SKIP][296] ([Intel XE#1138])
   [295]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@kms_feature_discovery@display-4x.html
   [296]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-436/igt@kms_feature_discovery@display-4x.html

  * igt@kms_feature_discovery@psr1:
    - shard-dg2-set2:     [SKIP][297] ([Intel XE#1135]) -> [SKIP][298] ([Intel XE#2423] / [i915#2575])
   [297]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-464/igt@kms_feature_discovery@psr1.html
   [298]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_feature_discovery@psr1.html

  * igt@kms_feature_discovery@psr2:
    - shard-dg2-set2:     [SKIP][299] ([Intel XE#2423] / [i915#2575]) -> [SKIP][300] ([Intel XE#1135])
   [299]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@kms_feature_discovery@psr2.html
   [300]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-463/igt@kms_feature_discovery@psr2.html

  * igt@kms_flip@2x-flip-vs-modeset-vs-hang:
    - shard-dg2-set2:     [DMESG-WARN][301] -> [SKIP][302] ([Intel XE#2423] / [i915#2575]) +1 other test skip
   [301]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-436/igt@kms_flip@2x-flip-vs-modeset-vs-hang.html
   [302]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_flip@2x-flip-vs-modeset-vs-hang.html

  * igt@kms_flip@flip-vs-suspend:
    - shard-dg2-set2:     [SKIP][303] ([Intel XE#2423] / [i915#2575]) -> [DMESG-FAIL][304] ([Intel XE#3468])
   [303]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@kms_flip@flip-vs-suspend.html
   [304]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-463/igt@kms_flip@flip-vs-suspend.html

  * igt@kms_flip_scaled_crc@flip-32bpp-xtile-to-64bpp-xtile-downscaling:
    - shard-dg2-set2:     [INCOMPLETE][305] ([Intel XE#1195]) -> [INCOMPLETE][306] ([Intel XE#1195] / [Intel XE#3468]) +1 other test incomplete
   [305]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-433/igt@kms_flip_scaled_crc@flip-32bpp-xtile-to-64bpp-xtile-downscaling.html
   [306]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-464/igt@kms_flip_scaled_crc@flip-32bpp-xtile-to-64bpp-xtile-downscaling.html

  * igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytileccs-downscaling:
    - shard-dg2-set2:     [SKIP][307] ([Intel XE#2136] / [Intel XE#2351]) -> [SKIP][308] ([Intel XE#455]) +1 other test skip
   [307]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytileccs-downscaling.html
   [308]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-435/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytileccs-downscaling.html

  * igt@kms_flip_scaled_crc@flip-64bpp-4tile-to-32bpp-4tiledg2rcccs-downscaling:
    - shard-dg2-set2:     [DMESG-WARN][309] -> [SKIP][310] ([Intel XE#2136] / [Intel XE#2351])
   [309]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-435/igt@kms_flip_scaled_crc@flip-64bpp-4tile-to-32bpp-4tiledg2rcccs-downscaling.html
   [310]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_flip_scaled_crc@flip-64bpp-4tile-to-32bpp-4tiledg2rcccs-downscaling.html

  * igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilercccs-downscaling:
    - shard-dg2-set2:     [SKIP][311] ([Intel XE#2136]) -> [SKIP][312] ([Intel XE#455]) +4 other tests skip
   [311]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilercccs-downscaling.html
   [312]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-463/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilercccs-downscaling.html

  * igt@kms_frontbuffer_tracking@drrs-1p-primscrn-cur-indfb-onoff:
    - shard-dg2-set2:     [SKIP][313] ([Intel XE#651]) -> [SKIP][314] ([Intel XE#2136]) +12 other tests skip
   [313]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-434/igt@kms_frontbuffer_tracking@drrs-1p-primscrn-cur-indfb-onoff.html
   [314]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_frontbuffer_tracking@drrs-1p-primscrn-cur-indfb-onoff.html

  * igt@kms_frontbuffer_tracking@drrs-2p-primscrn-cur-indfb-onoff:
    - shard-dg2-set2:     [SKIP][315] ([Intel XE#2136]) -> [SKIP][316] ([Intel XE#651]) +26 other tests skip
   [315]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@kms_frontbuffer_tracking@drrs-2p-primscrn-cur-indfb-onoff.html
   [316]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-433/igt@kms_frontbuffer_tracking@drrs-2p-primscrn-cur-indfb-onoff.html

  * igt@kms_frontbuffer_tracking@drrs-rgb101010-draw-mmap-wc:
    - shard-dg2-set2:     [SKIP][317] ([Intel XE#651]) -> [SKIP][318] ([Intel XE#2136] / [Intel XE#2351]) +13 other tests skip
   [317]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-435/igt@kms_frontbuffer_tracking@drrs-rgb101010-draw-mmap-wc.html
   [318]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_frontbuffer_tracking@drrs-rgb101010-draw-mmap-wc.html

  * igt@kms_frontbuffer_tracking@fbc-1p-primscrn-cur-indfb-move:
    - shard-bmg:          [DMESG-FAIL][319] ([Intel XE#3468]) -> [FAIL][320] ([Intel XE#2333]) +2 other tests fail
   [319]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-bmg-6/igt@kms_frontbuffer_tracking@fbc-1p-primscrn-cur-indfb-move.html
   [320]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-4/igt@kms_frontbuffer_tracking@fbc-1p-primscrn-cur-indfb-move.html

  * igt@kms_frontbuffer_tracking@fbcdrrs-rgb101010-draw-mmap-wc:
    - shard-dg2-set2:     [SKIP][321] ([Intel XE#2136] / [Intel XE#2351]) -> [SKIP][322] ([Intel XE#651]) +9 other tests skip
   [321]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@kms_frontbuffer_tracking@fbcdrrs-rgb101010-draw-mmap-wc.html
   [322]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-463/igt@kms_frontbuffer_tracking@fbcdrrs-rgb101010-draw-mmap-wc.html

  * igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-indfb-msflip-blt:
    - shard-dg2-set2:     [SKIP][323] ([Intel XE#2136]) -> [SKIP][324] ([Intel XE#653]) +33 other tests skip
   [323]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-indfb-msflip-blt.html
   [324]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-463/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-indfb-msflip-blt.html

  * igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-spr-indfb-draw-blt:
    - shard-dg2-set2:     [SKIP][325] ([Intel XE#653]) -> [SKIP][326] ([Intel XE#2136]) +25 other tests skip
   [325]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-434/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-spr-indfb-draw-blt.html
   [326]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-spr-indfb-draw-blt.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-spr-indfb-draw-render:
    - shard-dg2-set2:     [SKIP][327] ([Intel XE#653]) -> [SKIP][328] ([Intel XE#2136] / [Intel XE#2351]) +9 other tests skip
   [327]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-436/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-spr-indfb-draw-render.html
   [328]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-spr-indfb-draw-render.html

  * igt@kms_frontbuffer_tracking@fbcpsr-tiling-y:
    - shard-dg2-set2:     [SKIP][329] ([Intel XE#2136] / [Intel XE#2351]) -> [SKIP][330] ([Intel XE#658])
   [329]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@kms_frontbuffer_tracking@fbcpsr-tiling-y.html
   [330]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-463/igt@kms_frontbuffer_tracking@fbcpsr-tiling-y.html

  * igt@kms_frontbuffer_tracking@psr-2p-scndscrn-pri-shrfb-draw-render:
    - shard-dg2-set2:     [SKIP][331] ([Intel XE#2136] / [Intel XE#2351]) -> [SKIP][332] ([Intel XE#653]) +5 other tests skip
   [331]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@kms_frontbuffer_tracking@psr-2p-scndscrn-pri-shrfb-draw-render.html
   [332]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-463/igt@kms_frontbuffer_tracking@psr-2p-scndscrn-pri-shrfb-draw-render.html

  * igt@kms_getfb@getfb-reject-ccs:
    - shard-dg2-set2:     [SKIP][333] ([Intel XE#2423] / [i915#2575]) -> [SKIP][334] ([Intel XE#605])
   [333]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@kms_getfb@getfb-reject-ccs.html
   [334]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-436/igt@kms_getfb@getfb-reject-ccs.html

  * igt@kms_joiner@basic-big-joiner:
    - shard-dg2-set2:     [SKIP][335] ([Intel XE#346]) -> [SKIP][336] ([Intel XE#2136])
   [335]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-434/igt@kms_joiner@basic-big-joiner.html
   [336]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_joiner@basic-big-joiner.html

  * igt@kms_pipe_crc_basic@suspend-read-crc:
    - shard-dg2-set2:     [INCOMPLETE][337] ([Intel XE#1195]) -> [SKIP][338] ([Intel XE#2423] / [i915#2575])
   [337]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-435/igt@kms_pipe_crc_basic@suspend-read-crc.html
   [338]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_pipe_crc_basic@suspend-read-crc.html

  * igt@kms_plane@pixel-format:
    - shard-adlp:         [INCOMPLETE][339] ([Intel XE#1035] / [Intel XE#1195]) -> [INCOMPLETE][340] ([Intel XE#1035])
   [339]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-adlp-4/igt@kms_plane@pixel-format.html
   [340]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-adlp-3/igt@kms_plane@pixel-format.html

  * igt@kms_plane_cursor@primary:
    - shard-dg2-set2:     [FAIL][341] ([Intel XE#616]) -> [SKIP][342] ([Intel XE#2423] / [i915#2575])
   [341]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-436/igt@kms_plane_cursor@primary.html
   [342]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_plane_cursor@primary.html

  * igt@kms_plane_lowres@tiling-y:
    - shard-dg2-set2:     [SKIP][343] ([Intel XE#455]) -> [SKIP][344] ([Intel XE#2423] / [i915#2575]) +7 other tests skip
   [343]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-464/igt@kms_plane_lowres@tiling-y.html
   [344]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_plane_lowres@tiling-y.html

  * igt@kms_plane_scaling@plane-downscale-factor-0-25-with-modifiers:
    - shard-dg2-set2:     [SKIP][345] ([Intel XE#2763] / [Intel XE#455]) -> [SKIP][346] ([Intel XE#2423] / [i915#2575])
   [345]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-463/igt@kms_plane_scaling@plane-downscale-factor-0-25-with-modifiers.html
   [346]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_plane_scaling@plane-downscale-factor-0-25-with-modifiers.html

  * igt@kms_plane_scaling@planes-downscale-factor-0-25-unity-scaling:
    - shard-dg2-set2:     [SKIP][347] ([Intel XE#2423] / [i915#2575]) -> [SKIP][348] ([Intel XE#2763] / [Intel XE#455]) +3 other tests skip
   [347]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@kms_plane_scaling@planes-downscale-factor-0-25-unity-scaling.html
   [348]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-463/igt@kms_plane_scaling@planes-downscale-factor-0-25-unity-scaling.html

  * igt@kms_pm_backlight@fade-with-suspend:
    - shard-dg2-set2:     [SKIP][349] ([Intel XE#870]) -> [SKIP][350] ([Intel XE#2136]) +1 other test skip
   [349]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-436/igt@kms_pm_backlight@fade-with-suspend.html
   [350]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_pm_backlight@fade-with-suspend.html

  * igt@kms_pm_dc@dc3co-vpb-simulation:
    - shard-dg2-set2:     [SKIP][351] ([Intel XE#2136] / [Intel XE#2351]) -> [SKIP][352] ([Intel XE#1122])
   [351]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@kms_pm_dc@dc3co-vpb-simulation.html
   [352]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-435/igt@kms_pm_dc@dc3co-vpb-simulation.html

  * igt@kms_pm_rpm@basic-pci-d3-state:
    - shard-dg2-set2:     [DMESG-WARN][353] ([Intel XE#3468]) -> [ABORT][354] ([Intel XE#3468])
   [353]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-463/igt@kms_pm_rpm@basic-pci-d3-state.html
   [354]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-464/igt@kms_pm_rpm@basic-pci-d3-state.html

  * igt@kms_pm_rpm@modeset-non-lpsp:
    - shard-dg2-set2:     [DMESG-WARN][355] ([Intel XE#3468]) -> [SKIP][356] ([Intel XE#2446])
   [355]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-435/igt@kms_pm_rpm@modeset-non-lpsp.html
   [356]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_pm_rpm@modeset-non-lpsp.html

  * igt@kms_psr2_sf@fbc-psr2-primary-plane-update-sf-dmg-area:
    - shard-dg2-set2:     [SKIP][357] ([Intel XE#1489]) -> [SKIP][358] ([Intel XE#2136]) +8 other tests skip
   [357]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-435/igt@kms_psr2_sf@fbc-psr2-primary-plane-update-sf-dmg-area.html
   [358]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_psr2_sf@fbc-psr2-primary-plane-update-sf-dmg-area.html

  * igt@kms_psr2_sf@psr2-overlay-primary-update-sf-dmg-area:
    - shard-dg2-set2:     [SKIP][359] ([Intel XE#2136]) -> [SKIP][360] ([Intel XE#1489]) +7 other tests skip
   [359]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@kms_psr2_sf@psr2-overlay-primary-update-sf-dmg-area.html
   [360]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-433/igt@kms_psr2_sf@psr2-overlay-primary-update-sf-dmg-area.html

  * igt@kms_psr2_su@page_flip-nv12:
    - shard-dg2-set2:     [SKIP][361] ([Intel XE#1122]) -> [SKIP][362] ([Intel XE#2136])
   [361]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-463/igt@kms_psr2_su@page_flip-nv12.html
   [362]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_psr2_su@page_flip-nv12.html

  * igt@kms_psr2_su@page_flip-p010:
    - shard-dg2-set2:     [SKIP][363] ([Intel XE#2136]) -> [SKIP][364] ([Intel XE#1122]) +1 other test skip
   [363]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@kms_psr2_su@page_flip-p010.html
   [364]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-463/igt@kms_psr2_su@page_flip-p010.html

  * igt@kms_psr@fbc-psr-no-drrs:
    - shard-dg2-set2:     [SKIP][365] ([Intel XE#2850] / [Intel XE#929]) -> [SKIP][366] ([Intel XE#2136] / [Intel XE#2351]) +3 other tests skip
   [365]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-463/igt@kms_psr@fbc-psr-no-drrs.html
   [366]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_psr@fbc-psr-no-drrs.html

  * igt@kms_psr@fbc-psr2-cursor-plane-onoff:
    - shard-dg2-set2:     [SKIP][367] ([Intel XE#2136]) -> [SKIP][368] ([Intel XE#2850] / [Intel XE#929]) +12 other tests skip
   [367]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@kms_psr@fbc-psr2-cursor-plane-onoff.html
   [368]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-435/igt@kms_psr@fbc-psr2-cursor-plane-onoff.html

  * igt@kms_psr@fbc-psr2-primary-render:
    - shard-dg2-set2:     [SKIP][369] ([Intel XE#2850] / [Intel XE#929]) -> [SKIP][370] ([Intel XE#2136]) +8 other tests skip
   [369]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-435/igt@kms_psr@fbc-psr2-primary-render.html
   [370]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_psr@fbc-psr2-primary-render.html

  * igt@kms_psr@psr-cursor-plane-move:
    - shard-dg2-set2:     [SKIP][371] ([Intel XE#2850] / [Intel XE#929]) -> [SKIP][372] ([Intel XE#2351])
   [371]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-436/igt@kms_psr@psr-cursor-plane-move.html
   [372]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_psr@psr-cursor-plane-move.html

  * igt@kms_psr@psr-dpms:
    - shard-dg2-set2:     [SKIP][373] ([Intel XE#2136] / [Intel XE#2351]) -> [SKIP][374] ([Intel XE#2850] / [Intel XE#929]) +3 other tests skip
   [373]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@kms_psr@psr-dpms.html
   [374]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-463/igt@kms_psr@psr-dpms.html

  * igt@kms_rotation_crc@primary-rotation-90:
    - shard-dg2-set2:     [SKIP][375] ([Intel XE#3414]) -> [SKIP][376] ([Intel XE#2423] / [i915#2575])
   [375]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-434/igt@kms_rotation_crc@primary-rotation-90.html
   [376]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_rotation_crc@primary-rotation-90.html

  * igt@kms_rotation_crc@primary-y-tiled-reflect-x-0:
    - shard-dg2-set2:     [SKIP][377] ([Intel XE#2423] / [i915#2575]) -> [SKIP][378] ([Intel XE#1127]) +1 other test skip
   [377]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@kms_rotation_crc@primary-y-tiled-reflect-x-0.html
   [378]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-433/igt@kms_rotation_crc@primary-y-tiled-reflect-x-0.html

  * igt@kms_rotation_crc@sprite-rotation-90-pos-100-0:
    - shard-dg2-set2:     [SKIP][379] ([Intel XE#2423] / [i915#2575]) -> [SKIP][380] ([Intel XE#3414])
   [379]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@kms_rotation_crc@sprite-rotation-90-pos-100-0.html
   [380]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-463/igt@kms_rotation_crc@sprite-rotation-90-pos-100-0.html

  * igt@kms_tiled_display@basic-test-pattern:
    - shard-dg2-set2:     [SKIP][381] ([Intel XE#2423] / [i915#2575]) -> [SKIP][382] ([Intel XE#362])
   [381]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@kms_tiled_display@basic-test-pattern.html
   [382]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-463/igt@kms_tiled_display@basic-test-pattern.html

  * igt@kms_tiled_display@basic-test-pattern-with-chamelium:
    - shard-dg2-set2:     [SKIP][383] ([Intel XE#2423] / [i915#2575]) -> [SKIP][384] ([Intel XE#1500])
   [383]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@kms_tiled_display@basic-test-pattern-with-chamelium.html
   [384]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-435/igt@kms_tiled_display@basic-test-pattern-with-chamelium.html

  * igt@kms_tv_load_detect@load-detect:
    - shard-dg2-set2:     [SKIP][385] ([Intel XE#330]) -> [SKIP][386] ([Intel XE#2423] / [i915#2575])
   [385]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-435/igt@kms_tv_load_detect@load-detect.html
   [386]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_tv_load_detect@load-detect.html

  * igt@kms_vblank@ts-continuation-suspend:
    - shard-dg2-set2:     [DMESG-WARN][387] ([Intel XE#3468]) -> [SKIP][388] ([Intel XE#2423] / [i915#2575])
   [387]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-434/igt@kms_vblank@ts-continuation-suspend.html
   [388]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_vblank@ts-continuation-suspend.html

  * igt@kms_vblank@wait-idle:
    - shard-dg2-set2:     [FAIL][389] -> [SKIP][390] ([Intel XE#2423] / [i915#2575])
   [389]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-463/igt@kms_vblank@wait-idle.html
   [390]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_vblank@wait-idle.html

  * igt@kms_vrr@cmrr:
    - shard-dg2-set2:     [SKIP][391] ([Intel XE#2423] / [i915#2575]) -> [SKIP][392] ([Intel XE#2168])
   [391]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@kms_vrr@cmrr.html
   [392]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-464/igt@kms_vrr@cmrr.html

  * igt@kms_vrr@lobf:
    - shard-dg2-set2:     [SKIP][393] ([Intel XE#2168]) -> [SKIP][394] ([Intel XE#2423] / [i915#2575])
   [393]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-434/igt@kms_vrr@lobf.html
   [394]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_vrr@lobf.html

  * igt@kms_writeback@writeback-check-output-xrgb2101010:
    - shard-dg2-set2:     [SKIP][395] ([Intel XE#756]) -> [SKIP][396] ([Intel XE#2423] / [i915#2575])
   [395]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-436/igt@kms_writeback@writeback-check-output-xrgb2101010.html
   [396]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@kms_writeback@writeback-check-output-xrgb2101010.html

  * igt@kms_writeback@writeback-pixel-formats:
    - shard-dg2-set2:     [SKIP][397] ([Intel XE#2423] / [i915#2575]) -> [SKIP][398] ([Intel XE#756]) +1 other test skip
   [397]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@kms_writeback@writeback-pixel-formats.html
   [398]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-433/igt@kms_writeback@writeback-pixel-formats.html

  * igt@xe_compute_preempt@compute-threadgroup-preempt:
    - shard-dg2-set2:     [SKIP][399] ([Intel XE#1130]) -> [SKIP][400] ([Intel XE#1280] / [Intel XE#455])
   [399]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@xe_compute_preempt@compute-threadgroup-preempt.html
   [400]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-463/igt@xe_compute_preempt@compute-threadgroup-preempt.html

  * igt@xe_copy_basic@mem-copy-linear-0xfffe:
    - shard-dg2-set2:     [SKIP][401] ([Intel XE#1123]) -> [SKIP][402] ([Intel XE#1130]) +1 other test skip
   [401]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-435/igt@xe_copy_basic@mem-copy-linear-0xfffe.html
   [402]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@xe_copy_basic@mem-copy-linear-0xfffe.html

  * igt@xe_copy_basic@mem-set-linear-0x369:
    - shard-dg2-set2:     [SKIP][403] ([Intel XE#1130]) -> [SKIP][404] ([Intel XE#1126])
   [403]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@xe_copy_basic@mem-set-linear-0x369.html
   [404]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-435/igt@xe_copy_basic@mem-set-linear-0x369.html

  * igt@xe_eudebug@basic-vm-bind-extended-discovery:
    - shard-dg2-set2:     [SKIP][405] ([Intel XE#1130]) -> [SKIP][406] ([Intel XE#2905]) +12 other tests skip
   [405]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@xe_eudebug@basic-vm-bind-extended-discovery.html
   [406]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-436/igt@xe_eudebug@basic-vm-bind-extended-discovery.html

  * igt@xe_eudebug_online@interrupt-all-set-breakpoint:
    - shard-dg2-set2:     [SKIP][407] ([Intel XE#2905]) -> [SKIP][408] ([Intel XE#1130]) +10 other tests skip
   [407]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-463/igt@xe_eudebug_online@interrupt-all-set-breakpoint.html
   [408]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@xe_eudebug_online@interrupt-all-set-breakpoint.html

  * igt@xe_evict@evict-beng-mixed-many-threads-small:
    - shard-dg2-set2:     [SKIP][409] ([Intel XE#1130]) -> [TIMEOUT][410] ([Intel XE#1473] / [Intel XE#402])
   [409]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@xe_evict@evict-beng-mixed-many-threads-small.html
   [410]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-463/igt@xe_evict@evict-beng-mixed-many-threads-small.html

  * igt@xe_evict@evict-large-multi-vm-cm:
    - shard-dg2-set2:     [SKIP][411] ([Intel XE#1130]) -> [FAIL][412] ([Intel XE#1600])
   [411]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@xe_evict@evict-large-multi-vm-cm.html
   [412]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-463/igt@xe_evict@evict-large-multi-vm-cm.html

  * igt@xe_evict@evict-mixed-many-threads-small:
    - shard-bmg:          [INCOMPLETE][413] ([Intel XE#1473]) -> [DMESG-WARN][414] ([Intel XE#3468])
   [413]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-bmg-8/igt@xe_evict@evict-mixed-many-threads-small.html
   [414]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-bmg-7/igt@xe_evict@evict-mixed-many-threads-small.html

  * igt@xe_exec_fault_mode@twice-userptr-invalidate-race:
    - shard-dg2-set2:     [SKIP][415] ([Intel XE#1130]) -> [SKIP][416] ([Intel XE#288]) +28 other tests skip
   [415]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@xe_exec_fault_mode@twice-userptr-invalidate-race.html
   [416]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-433/igt@xe_exec_fault_mode@twice-userptr-invalidate-race.html

  * igt@xe_exec_fault_mode@twice-userptr-rebind-imm:
    - shard-dg2-set2:     [SKIP][417] ([Intel XE#288]) -> [SKIP][418] ([Intel XE#1130]) +29 other tests skip
   [417]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-435/igt@xe_exec_fault_mode@twice-userptr-rebind-imm.html
   [418]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@xe_exec_fault_mode@twice-userptr-rebind-imm.html

  * igt@xe_exec_mix_modes@exec-simple-batch-store-dma-fence:
    - shard-dg2-set2:     [SKIP][419] ([Intel XE#1130]) -> [SKIP][420] ([Intel XE#2360])
   [419]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@xe_exec_mix_modes@exec-simple-batch-store-dma-fence.html
   [420]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-436/igt@xe_exec_mix_modes@exec-simple-batch-store-dma-fence.html

  * igt@xe_exec_mix_modes@exec-spinner-interrupted-dma-fence:
    - shard-dg2-set2:     [SKIP][421] ([Intel XE#2360]) -> [SKIP][422] ([Intel XE#1130])
   [421]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-436/igt@xe_exec_mix_modes@exec-spinner-interrupted-dma-fence.html
   [422]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@xe_exec_mix_modes@exec-spinner-interrupted-dma-fence.html

  * igt@xe_exec_threads@threads-bal-mixed-shared-vm-userptr-invalidate:
    - shard-dg2-set2:     [SKIP][423] ([Intel XE#1130]) -> [DMESG-FAIL][424] ([Intel XE#3371])
   [423]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@xe_exec_threads@threads-bal-mixed-shared-vm-userptr-invalidate.html
   [424]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-464/igt@xe_exec_threads@threads-bal-mixed-shared-vm-userptr-invalidate.html

  * igt@xe_fault_injection@inject-fault-probe-function-xe_wopcm_init:
    - shard-dg2-set2:     [DMESG-WARN][425] ([Intel XE#3343]) -> [SKIP][426] ([Intel XE#1130]) +1 other test skip
   [425]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-435/igt@xe_fault_injection@inject-fault-probe-function-xe_wopcm_init.html
   [426]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@xe_fault_injection@inject-fault-probe-function-xe_wopcm_init.html

  * igt@xe_fault_injection@vm-bind-fail-vm_bind_ioctl_ops_create:
    - shard-dg2-set2:     [SKIP][427] ([Intel XE#1130]) -> [DMESG-WARN][428] ([Intel XE#3467]) +2 other tests dmesg-warn
   [427]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@xe_fault_injection@vm-bind-fail-vm_bind_ioctl_ops_create.html
   [428]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-463/igt@xe_fault_injection@vm-bind-fail-vm_bind_ioctl_ops_create.html

  * igt@xe_fault_injection@vm-create-fail-xe_exec_queue_create_bind:
    - shard-dg2-set2:     [DMESG-WARN][429] ([Intel XE#3467]) -> [SKIP][430] ([Intel XE#1130]) +3 other tests skip
   [429]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-434/igt@xe_fault_injection@vm-create-fail-xe_exec_queue_create_bind.html
   [430]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@xe_fault_injection@vm-create-fail-xe_exec_queue_create_bind.html

  * igt@xe_huc_copy@huc_copy:
    - shard-dg2-set2:     [SKIP][431] ([Intel XE#1130]) -> [SKIP][432] ([Intel XE#255])
   [431]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@xe_huc_copy@huc_copy.html
   [432]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-436/igt@xe_huc_copy@huc_copy.html

  * igt@xe_live_ktest@xe_eudebug:
    - shard-lnl:          [SKIP][433] ([Intel XE#1192] / [Intel XE#3026]) -> [SKIP][434] ([Intel XE#2833])
   [433]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-lnl-3/igt@xe_live_ktest@xe_eudebug.html
   [434]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-lnl-4/igt@xe_live_ktest@xe_eudebug.html

  * igt@xe_media_fill@media-fill:
    - shard-dg2-set2:     [SKIP][435] ([Intel XE#560]) -> [SKIP][436] ([Intel XE#1130])
   [435]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-436/igt@xe_media_fill@media-fill.html
   [436]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@xe_media_fill@media-fill.html

  * igt@xe_oa@polling-small-buf:
    - shard-dg2-set2:     [SKIP][437] ([Intel XE#2541]) -> [SKIP][438] ([Intel XE#1130]) +9 other tests skip
   [437]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-464/igt@xe_oa@polling-small-buf.html
   [438]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@xe_oa@polling-small-buf.html

  * igt@xe_oa@whitelisted-registers-userspace-config:
    - shard-dg2-set2:     [SKIP][439] ([Intel XE#1130]) -> [SKIP][440] ([Intel XE#2541]) +7 other tests skip
   [439]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@xe_oa@whitelisted-registers-userspace-config.html
   [440]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-463/igt@xe_oa@whitelisted-registers-userspace-config.html

  * igt@xe_pat@display-vs-wb-transient:
    - shard-dg2-set2:     [SKIP][441] ([Intel XE#1130]) -> [SKIP][442] ([Intel XE#1337])
   [441]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@xe_pat@display-vs-wb-transient.html
   [442]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-433/igt@xe_pat@display-vs-wb-transient.html

  * igt@xe_peer2peer@read:
    - shard-dg2-set2:     [FAIL][443] ([Intel XE#1173]) -> [SKIP][444] ([Intel XE#1061])
   [443]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-436/igt@xe_peer2peer@read.html
   [444]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@xe_peer2peer@read.html

  * igt@xe_peer2peer@write:
    - shard-dg2-set2:     [SKIP][445] ([Intel XE#1061]) -> [FAIL][446] ([Intel XE#1173])
   [445]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@xe_peer2peer@write.html
   [446]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-464/igt@xe_peer2peer@write.html

  * igt@xe_pm@d3cold-basic-exec:
    - shard-dg2-set2:     [SKIP][447] ([Intel XE#1130]) -> [SKIP][448] ([Intel XE#2284] / [Intel XE#366]) +1 other test skip
   [447]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@xe_pm@d3cold-basic-exec.html
   [448]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-436/igt@xe_pm@d3cold-basic-exec.html

  * igt@xe_pm@d3cold-mmap-vram:
    - shard-dg2-set2:     [SKIP][449] ([Intel XE#2284] / [Intel XE#366]) -> [SKIP][450] ([Intel XE#1130]) +1 other test skip
   [449]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-434/igt@xe_pm@d3cold-mmap-vram.html
   [450]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@xe_pm@d3cold-mmap-vram.html

  * igt@xe_pm@d3cold-mocs:
    - shard-dg2-set2:     [SKIP][451] ([Intel XE#1130]) -> [SKIP][452] ([Intel XE#2284])
   [451]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@xe_pm@d3cold-mocs.html
   [452]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-433/igt@xe_pm@d3cold-mocs.html

  * igt@xe_pm@s4-exec-after:
    - shard-dg2-set2:     [DMESG-WARN][453] ([Intel XE#3468]) -> [SKIP][454] ([Intel XE#1130]) +1 other test skip
   [453]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-436/igt@xe_pm@s4-exec-after.html
   [454]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@xe_pm@s4-exec-after.html

  * igt@xe_pm@s4-multiple-execs:
    - shard-dg2-set2:     [SKIP][455] ([Intel XE#1130]) -> [DMESG-WARN][456] ([Intel XE#3468])
   [455]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@xe_pm@s4-multiple-execs.html
   [456]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-463/igt@xe_pm@s4-multiple-execs.html

  * igt@xe_pm@s4-vm-bind-userptr:
    - shard-dg2-set2:     [SKIP][457] ([Intel XE#1130]) -> [DMESG-WARN][458] ([Intel XE#2280])
   [457]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@xe_pm@s4-vm-bind-userptr.html
   [458]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-463/igt@xe_pm@s4-vm-bind-userptr.html

  * igt@xe_pm@vram-d3cold-threshold:
    - shard-dg2-set2:     [SKIP][459] ([Intel XE#579]) -> [SKIP][460] ([Intel XE#1130])
   [459]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-435/igt@xe_pm@vram-d3cold-threshold.html
   [460]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@xe_pm@vram-d3cold-threshold.html

  * igt@xe_pm_residency@gt-c6-freeze:
    - shard-dg2-set2:     [DMESG-WARN][461] ([Intel XE#3088]) -> [SKIP][462] ([Intel XE#1130])
   [461]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-436/igt@xe_pm_residency@gt-c6-freeze.html
   [462]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@xe_pm_residency@gt-c6-freeze.html

  * igt@xe_query@multigpu-query-config:
    - shard-dg2-set2:     [SKIP][463] ([Intel XE#1130]) -> [SKIP][464] ([Intel XE#944]) +3 other tests skip
   [463]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@xe_query@multigpu-query-config.html
   [464]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-436/igt@xe_query@multigpu-query-config.html

  * igt@xe_query@multigpu-query-invalid-extension:
    - shard-dg2-set2:     [SKIP][465] ([Intel XE#944]) -> [SKIP][466] ([Intel XE#1130]) +2 other tests skip
   [465]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-464/igt@xe_query@multigpu-query-invalid-extension.html
   [466]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@xe_query@multigpu-query-invalid-extension.html

  * igt@xe_sriov_flr@flr-vf1-clear:
    - shard-dg2-set2:     [SKIP][467] ([Intel XE#1130]) -> [SKIP][468] ([Intel XE#3342])
   [467]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@xe_sriov_flr@flr-vf1-clear.html
   [468]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-435/igt@xe_sriov_flr@flr-vf1-clear.html

  * igt@xe_vm@large-split-misaligned-binds-67108864:
    - shard-dg2-set2:     [DMESG-WARN][469] -> [SKIP][470] ([Intel XE#1130]) +2 other tests skip
   [469]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-464/igt@xe_vm@large-split-misaligned-binds-67108864.html
   [470]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@xe_vm@large-split-misaligned-binds-67108864.html

  * igt@xe_wedged@wedged-at-any-timeout:
    - shard-dg2-set2:     [ABORT][471] ([Intel XE#3421]) -> [SKIP][472] ([Intel XE#1130])
   [471]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-463/igt@xe_wedged@wedged-at-any-timeout.html
   [472]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-466/igt@xe_wedged@wedged-at-any-timeout.html

  * igt@xe_wedged@wedged-mode-toggle:
    - shard-dg2-set2:     [SKIP][473] ([Intel XE#1130]) -> [ABORT][474] ([Intel XE#3075] / [Intel XE#3084])
   [473]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f/shard-dg2-466/igt@xe_wedged@wedged-mode-toggle.html
   [474]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/shard-dg2-464/igt@xe_wedged@wedged-mode-toggle.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [Intel XE#1033]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1033
  [Intel XE#1035]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1035
  [Intel XE#1061]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1061
  [Intel XE#1122]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1122
  [Intel XE#1123]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1123
  [Intel XE#1124]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1124
  [Intel XE#1126]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1126
  [Intel XE#1127]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1127
  [Intel XE#1130]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1130
  [Intel XE#1135]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1135
  [Intel XE#1138]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1138
  [Intel XE#1173]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1173
  [Intel XE#1178]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1178
  [Intel XE#1192]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1192
  [Intel XE#1195]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1195
  [Intel XE#1280]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1280
  [Intel XE#1337]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1337
  [Intel XE#1358]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1358
  [Intel XE#1426]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1426
  [Intel XE#1473]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1473
  [Intel XE#1489]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1489
  [Intel XE#1500]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1500
  [Intel XE#1522]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1522
  [Intel XE#1600]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1600
  [Intel XE#1607]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1607
  [Intel XE#1616]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1616
  [Intel XE#1727]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1727
  [Intel XE#1794]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1794
  [Intel XE#1874]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1874
  [Intel XE#1885]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1885
  [Intel XE#2029]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2029
  [Intel XE#2042]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2042
  [Intel XE#2049]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2049
  [Intel XE#2134]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2134
  [Intel XE#2136]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2136
  [Intel XE#2168]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2168
  [Intel XE#2191]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2191
  [Intel XE#2234]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2234
  [Intel XE#2252]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2252
  [Intel XE#2280]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2280
  [Intel XE#2284]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2284
  [Intel XE#2286]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2286
  [Intel XE#2311]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2311
  [Intel XE#2313]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2313
  [Intel XE#2320]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2320
  [Intel XE#2322]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2322
  [Intel XE#2327]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2327
  [Intel XE#2333]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2333
  [Intel XE#2351]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2351
  [Intel XE#2352]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2352
  [Intel XE#2360]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2360
  [Intel XE#2423]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2423
  [Intel XE#2446]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2446
  [Intel XE#2459]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2459
  [Intel XE#2541]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2541
  [Intel XE#255]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/255
  [Intel XE#2566]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2566
  [Intel XE#2577]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2577
  [Intel XE#2596]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2596
  [Intel XE#2597]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2597
  [Intel XE#2635]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2635
  [Intel XE#2763]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2763
  [Intel XE#2833]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2833
  [Intel XE#2850]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2850
  [Intel XE#288]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/288
  [Intel XE#2882]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2882
  [Intel XE#2887]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2887
  [Intel XE#2905]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2905
  [Intel XE#2927]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2927
  [Intel XE#2953]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2953
  [Intel XE#2998]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2998
  [Intel XE#3026]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3026
  [Intel XE#306]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/306
  [Intel XE#307]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/307
  [Intel XE#3075]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3075
  [Intel XE#308]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/308
  [Intel XE#3084]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3084
  [Intel XE#3086]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3086
  [Intel XE#3088]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3088
  [Intel XE#3106]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3106
  [Intel XE#3113]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3113
  [Intel XE#314]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/314
  [Intel XE#316]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/316
  [Intel XE#3184]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3184
  [Intel XE#323]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/323
  [Intel XE#330]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/330
  [Intel XE#3304]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3304
  [Intel XE#3321]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3321
  [Intel XE#3339]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3339
  [Intel XE#3342]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3342
  [Intel XE#3343]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3343
  [Intel XE#3371]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3371
  [Intel XE#3414]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3414
  [Intel XE#3421]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3421
  [Intel XE#3440]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3440
  [Intel XE#3442]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3442
  [Intel XE#3451]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3451
  [Intel XE#346]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/346
  [Intel XE#3466]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3466
  [Intel XE#3467]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3467
  [Intel XE#3468]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3468
  [Intel XE#3477]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3477
  [Intel XE#3486]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3486
  [Intel XE#3497]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3497
  [Intel XE#3507]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3507
  [Intel XE#358]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/358
  [Intel XE#362]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/362
  [Intel XE#366]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/366
  [Intel XE#367]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/367
  [Intel XE#373]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/373
  [Intel XE#402]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/402
  [Intel XE#455]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/455
  [Intel XE#560]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/560
  [Intel XE#579]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/579
  [Intel XE#605]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/605
  [Intel XE#607]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/607
  [Intel XE#610]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/610
  [Intel XE#616]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/616
  [Intel XE#623]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/623
  [Intel XE#651]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/651
  [Intel XE#653]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/653
  [Intel XE#658]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/658
  [Intel XE#756]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/756
  [Intel XE#787]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/787
  [Intel XE#870]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/870
  [Intel XE#873]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/873
  [Intel XE#877]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/877
  [Intel XE#886]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/886
  [Intel XE#899]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/899
  [Intel XE#911]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/911
  [Intel XE#929]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/929
  [Intel XE#944]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/944
  [Intel XE#958]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/958
  [i915#2575]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/2575


Build changes
-------------

  * Linux: xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f -> xe-pw-131815v15

  IGT_8114: 8114
  xe-2245-57639ceec0f66f06f4a8a8ac3b9551b7b493c33f: 57639ceec0f66f06f4a8a8ac3b9551b7b493c33f
  xe-pw-131815v15: 131815v15

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-131815v15/index.html


* Re: [PATCH v14 2/8] drm/ttm: Provide a shmem backup implementation
  2024-11-15 15:01 ` [PATCH v14 2/8] drm/ttm: Provide a shmem backup implementation Thomas Hellström
@ 2024-11-19 13:40   ` Christian König
  2024-11-20  7:58     ` Thomas Hellström
  0 siblings, 1 reply; 54+ messages in thread
From: Christian König @ 2024-11-19 13:40 UTC (permalink / raw)
  To: Thomas Hellström, intel-xe
  Cc: Somalapuram Amaranath, Matthew Brost, dri-devel, Paulo Zanoni,
	Simona Vetter

On 15.11.24 at 16:01, Thomas Hellström wrote:
> Provide a standalone shmem backup implementation.
> Given the ttm_backup interface, this could
> later be extended to provide backup
> implementations other than shmem, with one use case being
> GPU swapout to a user-provided fd.
>
> v5:
> - Fix a UAF. (kernel test robot, Dan Carpenter)
> v6:
> - Rename ttm_backup_shmem_copy_page() function argument
>    (Matthew Brost)
> - Add some missing documentation
> v8:
> - Use folio_file_page to get to the page we want to write back
>    instead of using the first page of the folio.
> v13:
> - Remove the base class abstraction (Christian König)
> - Include ttm_backup_bytes_avail().
> v14:
> - Fix kerneldoc for ttm_backup_bytes_avail() (0-day)
> - Work around casting of __randomize_layout struct pointer (0-day)
>
> Cc: Christian König <christian.koenig@amd.com>
> Cc: Somalapuram Amaranath <Amaranath.Somalapuram@amd.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: <dri-devel@lists.freedesktop.org>
> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Reviewed-by: Matthew Brost <matthew.brost@intel.com> #v13
> ---
>   drivers/gpu/drm/ttm/Makefile     |   2 +-
>   drivers/gpu/drm/ttm/ttm_backup.c | 204 +++++++++++++++++++++++++++++++
>   include/drm/ttm/ttm_backup.h     |  74 +++++++++++
>   3 files changed, 279 insertions(+), 1 deletion(-)
>   create mode 100644 drivers/gpu/drm/ttm/ttm_backup.c
>   create mode 100644 include/drm/ttm/ttm_backup.h
>
> diff --git a/drivers/gpu/drm/ttm/Makefile b/drivers/gpu/drm/ttm/Makefile
> index dad298127226..40d07a35293a 100644
> --- a/drivers/gpu/drm/ttm/Makefile
> +++ b/drivers/gpu/drm/ttm/Makefile
> @@ -4,7 +4,7 @@
>   
>   ttm-y := ttm_tt.o ttm_bo.o ttm_bo_util.o ttm_bo_vm.o ttm_module.o \
>   	ttm_execbuf_util.o ttm_range_manager.o ttm_resource.o ttm_pool.o \
> -	ttm_device.o ttm_sys_manager.o
> +	ttm_device.o ttm_sys_manager.o ttm_backup.o
>   ttm-$(CONFIG_AGP) += ttm_agp_backend.o
>   
>   obj-$(CONFIG_DRM_TTM) += ttm.o
> diff --git a/drivers/gpu/drm/ttm/ttm_backup.c b/drivers/gpu/drm/ttm/ttm_backup.c
> new file mode 100644
> index 000000000000..bf16bb0c594e
> --- /dev/null
> +++ b/drivers/gpu/drm/ttm/ttm_backup.c
> @@ -0,0 +1,204 @@
> +// SPDX-License-Identifier: MIT
> +/*
> + * Copyright © 2024 Intel Corporation
> + */
> +
> +#include <drm/ttm/ttm_backup.h>
> +#include <linux/page-flags.h>
> +#include <linux/swap.h>
> +
> +/*
> + * Casting from randomized struct file * to struct ttm_backup * is fine since
> + * struct ttm_backup is never defined nor dereferenced.
> + */
> +static struct file *ttm_backup_to_file(struct ttm_backup *backup)

Do I understand correctly that struct ttm_backup is never actually
defined? What purpose does that serve?

> +{
> +	return (void *)backup;
> +}
> +
> +static struct ttm_backup *ttm_file_to_backup(struct file *file)
> +{
> +	return (void *)file;
> +}
> +
> +/*
> + * Need to map shmem indices to handle since a handle value
> + * of 0 means error, following the swp_entry_t convention.
> + */
> +static unsigned long ttm_backup_shmem_idx_to_handle(pgoff_t idx)
> +{
> +	return (unsigned long)idx + 1;
> +}
> +
> +static pgoff_t ttm_backup_handle_to_shmem_idx(pgoff_t handle)
> +{
> +	return handle - 1;
> +}
> +
> +/**
> + * ttm_backup_drop() - release memory associated with a handle
> + * @backup: The struct backup pointer used to obtain the handle
> + * @handle: The handle obtained from the @backup_page function.
> + */
> +void ttm_backup_drop(struct ttm_backup *backup, pgoff_t handle)
> +{
> +	loff_t start = ttm_backup_handle_to_shmem_idx(handle);
> +
> +	start <<= PAGE_SHIFT;
> +	shmem_truncate_range(file_inode(ttm_backup_to_file(backup)), start,
> +			     start + PAGE_SIZE - 1);
> +}
> +
> +/**
> + * ttm_backup_copy_page() - Copy the contents of a previously backed
> + * up page
> + * @backup: The struct backup pointer used to back up the page.
> + * @dst: The struct page to copy into.
> + * @handle: The handle returned when the page was backed up.
> + * @intr: Try to perform waits interruptible or at least killable.
> + *
> + * Return: 0 on success, Negative error code on failure, notably
> + * -EINTR if @intr was set to true and a signal is pending.
> + */
> +int ttm_backup_copy_page(struct ttm_backup *backup, struct page *dst,
> +			 pgoff_t handle, bool intr)
> +{
> +	struct file *filp = ttm_backup_to_file(backup);
> +	struct address_space *mapping = filp->f_mapping;
> +	struct folio *from_folio;
> +	pgoff_t idx = ttm_backup_handle_to_shmem_idx(handle);
> +
> +	from_folio = shmem_read_folio(mapping, idx);
> +	if (IS_ERR(from_folio))
> +		return PTR_ERR(from_folio);
> +
> +	copy_highpage(dst, folio_file_page(from_folio, idx));
> +	folio_put(from_folio);
> +
> +	return 0;
> +}
> +
> +/**
> + * ttm_backup_backup_page() - Backup a page
> + * @backup: The struct backup pointer to use.
> + * @page: The page to back up.
> + * @writeback: Whether to perform immediate writeback of the page.
> + * This may have performance implications.
> + * @idx: A unique integer for each page and each struct backup.
> + * This allows the backup implementation to avoid managing
> + * its address space separately.
> + * @page_gfp: The gfp value used when the page was allocated.
> + * This is used for accounting purposes.
> + * @alloc_gfp: The gpf to be used when allocating memory.

Typo: gfp instead of gpf.

> + *
> + * Context: If called from reclaim context, the caller needs to
> + * assert that the shrinker gfp has __GFP_FS set, to avoid
> + * deadlocking on lock_page(). If @writeback is set to true and
> + * called from reclaim context, the caller also needs to assert
> + * that the shrinker gfp has __GFP_IO set, since without it,
> + * we're not allowed to start backup IO.
> + *
> + * Return: A handle on success. 0 on failure.
> + * (This is following the swp_entry_t convention).
> + *
> + * Note: This function could be extended to back up a folio and
> + * implementations would then split the folio internally if needed.
> + * Drawback is that the caller would then have to keep track of
> + * the folio size- and usage.
> + */
> +unsigned long
> +ttm_backup_backup_page(struct ttm_backup *backup, struct page *page,
> +		       bool writeback, pgoff_t idx, gfp_t page_gfp,
> +		       gfp_t alloc_gfp)
> +{
> +	struct file *filp = ttm_backup_to_file(backup);
> +	struct address_space *mapping = filp->f_mapping;
> +	unsigned long handle = 0;
> +	struct folio *to_folio;
> +	int ret;
> +
> +	to_folio = shmem_read_folio_gfp(mapping, idx, alloc_gfp);
> +	if (IS_ERR(to_folio))
> +		return handle;

Just that I sleep better: This can never return a folio larger than a 
page, doesn't it?

Apart from those background questions looks good to me.

Regards,
Christian.

> +
> +	folio_mark_accessed(to_folio);
> +	folio_lock(to_folio);
> +	folio_mark_dirty(to_folio);
> +	copy_highpage(folio_file_page(to_folio, idx), page);
> +	handle = ttm_backup_shmem_idx_to_handle(idx);
> +
> +	if (writeback && !folio_mapped(to_folio) &&
> +	    folio_clear_dirty_for_io(to_folio)) {
> +		struct writeback_control wbc = {
> +			.sync_mode = WB_SYNC_NONE,
> +			.nr_to_write = SWAP_CLUSTER_MAX,
> +			.range_start = 0,
> +			.range_end = LLONG_MAX,
> +			.for_reclaim = 1,
> +		};
> +		folio_set_reclaim(to_folio);
> +		ret = mapping->a_ops->writepage(folio_file_page(to_folio, idx), &wbc);
> +		if (!folio_test_writeback(to_folio))
> +			folio_clear_reclaim(to_folio);
> +		/* If writepage succeeds, it unlocks the folio */
> +		if (ret)
> +			folio_unlock(to_folio);
> +	} else {
> +		folio_unlock(to_folio);
> +	}
> +
> +	folio_put(to_folio);
> +
> +	return handle;
> +}
> +
> +/**
> + * ttm_backup_fini() - Free the struct backup resources after last use.
> + * @backup: Pointer to the struct backup whose resources to free.
> + *
> + * After a call to this function, it's illegal to use the @backup pointer.
> + */
> +void ttm_backup_fini(struct ttm_backup *backup)
> +{
> +	fput(ttm_backup_to_file(backup));
> +}
> +
> +/**
> + * ttm_backup_bytes_avail() - Report the approximate number of bytes of backup space
> + * left for backup.
> + *
> + * This function is intended also for driver use to indicate whether a
> + * backup attempt is meaningful.
> + *
> + * Return: An approximate size of backup space available.
> + */
> +u64 ttm_backup_bytes_avail(void)
> +{
> +	/*
> +	 * The idea behind backing up to shmem is that shmem objects may
> +	 * eventually be swapped out. So no point swapping out if there
> +	 * is no or low swap-space available. But the accuracy of this
> +	 * number also depends on shmem actually swapping out backed-up
> +	 * shmem objects without too much buffering.
> +	 */
> +	return (u64)get_nr_swap_pages() << PAGE_SHIFT;
> +}
> +EXPORT_SYMBOL_GPL(ttm_backup_bytes_avail);
> +
> +/**
> + * ttm_backup_shmem_create() - Create a shmem-based struct backup.
> + * @size: The maximum size (in bytes) to back up.
> + *
> + * Create a backup utilizing shmem objects.
> + *
> + * Return: A pointer to a struct ttm_backup on success,
> + * an error pointer on error.
> + */
> +struct ttm_backup *ttm_backup_shmem_create(loff_t size)
> +{
> +	struct file *filp;
> +
> +	filp = shmem_file_setup("ttm shmem backup", size, 0);
> +
> +	return ttm_file_to_backup(filp);
> +}
> diff --git a/include/drm/ttm/ttm_backup.h b/include/drm/ttm/ttm_backup.h
> new file mode 100644
> index 000000000000..20609da7e281
> --- /dev/null
> +++ b/include/drm/ttm/ttm_backup.h
> @@ -0,0 +1,74 @@
> +/* SPDX-License-Identifier: MIT */
> +/*
> + * Copyright © 2024 Intel Corporation
> + */
> +
> +#ifndef _TTM_BACKUP_H_
> +#define _TTM_BACKUP_H_
> +
> +#include <linux/mm_types.h>
> +#include <linux/shmem_fs.h>
> +
> +struct ttm_backup;
> +
> +/**
> + * ttm_backup_handle_to_page_ptr() - Convert handle to struct page pointer
> + * @handle: The handle to convert.
> + *
> + * Converts an opaque handle received from the
> + * struct ttm_backoup_ops::backup_page() function to an (invalid)
> + * struct page pointer suitable for a struct page array.
> + *
> + * Return: An (invalid) struct page pointer.
> + */
> +static inline struct page *
> +ttm_backup_handle_to_page_ptr(unsigned long handle)
> +{
> +	return (struct page *)(handle << 1 | 1);
> +}
> +
> +/**
> + * ttm_backup_page_ptr_is_handle() - Whether a struct page pointer is a handle
> + * @page: The struct page pointer to check.
> + *
> + * Return: true if the struct page pointer is a handld returned from
> + * ttm_backup_handle_to_page_ptr(). False otherwise.
> + */
> +static inline bool ttm_backup_page_ptr_is_handle(const struct page *page)
> +{
> +	return (unsigned long)page & 1;
> +}
> +
> +/**
> + * ttm_backup_page_ptr_to_handle() - Convert a struct page pointer to a handle
> + * @page: The struct page pointer to convert
> + *
> + * Return: The handle that was previously used in
> + * ttm_backup_handle_to_page_ptr() to obtain a struct page pointer, suitable
> + * for use as argument in the struct ttm_backup_ops drop() or
> + * copy_backed_up_page() functions.
> + */
> +static inline unsigned long
> +ttm_backup_page_ptr_to_handle(const struct page *page)
> +{
> +	WARN_ON(!ttm_backup_page_ptr_is_handle(page));
> +	return (unsigned long)page >> 1;
> +}
> +
> +void ttm_backup_drop(struct ttm_backup *backup, pgoff_t handle);
> +
> +int ttm_backup_copy_page(struct ttm_backup *backup, struct page *dst,
> +			 pgoff_t handle, bool intr);
> +
> +unsigned long
> +ttm_backup_backup_page(struct ttm_backup *backup, struct page *page,
> +		       bool writeback, pgoff_t idx, gfp_t page_gfp,
> +		       gfp_t alloc_gfp);
> +
> +void ttm_backup_fini(struct ttm_backup *backup);
> +
> +u64 ttm_backup_bytes_avail(void);
> +
> +struct ttm_backup *ttm_backup_shmem_create(loff_t size);
> +
> +#endif



* Re: [PATCH v14 2/8] drm/ttm: Provide a shmem backup implementation
  2024-11-19 13:40   ` Christian König
@ 2024-11-20  7:58     ` Thomas Hellström
  2024-11-20  9:24       ` Christian König
  0 siblings, 1 reply; 54+ messages in thread
From: Thomas Hellström @ 2024-11-20  7:58 UTC (permalink / raw)
  To: Christian König, intel-xe
  Cc: Somalapuram Amaranath, Matthew Brost, dri-devel, Paulo Zanoni,
	Simona Vetter

On Tue, 2024-11-19 at 14:40 +0100, Christian König wrote:
> Am 15.11.24 um 16:01 schrieb Thomas Hellström:
> > Provide a standalone shmem backup implementation.
> > Given the ttm_backup interface, this could
> > later on be extended to providing other backup
> > implementation than shmem, with one use-case being
> > GPU swapout to a user-provided fd.
> > 
> > v5:
> > - Fix a UAF. (kernel test robot, Dan Carptenter)
> > v6:
> > - Rename ttm_backup_shmem_copy_page() function argument
> >    (Matthew Brost)
> > - Add some missing documentation
> > v8:
> > - Use folio_file_page to get to the page we want to writeback
> >    instead of using the first page of the folio.
> > v13:
> > - Remove the base class abstraction (Christian König)
> > - Include ttm_backup_bytes_avail().
> > v14:
> > - Fix kerneldoc for ttm_backup_bytes_avail() (0-day)
> > - Work around casting of __randomize_layout struct pointer (0-day)
> > 
> > Cc: Christian König <christian.koenig@amd.com>
> > Cc: Somalapuram Amaranath <Amaranath.Somalapuram@amd.com>
> > Cc: Matthew Brost <matthew.brost@intel.com>
> > Cc: <dri-devel@lists.freedesktop.org>
> > Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> > Reviewed-by: Matthew Brost <matthew.brost@intel.com> #v13
> > ---
> >   drivers/gpu/drm/ttm/Makefile     |   2 +-
> >   drivers/gpu/drm/ttm/ttm_backup.c | 204
> > +++++++++++++++++++++++++++++++
> >   include/drm/ttm/ttm_backup.h     |  74 +++++++++++
> >   3 files changed, 279 insertions(+), 1 deletion(-)
> >   create mode 100644 drivers/gpu/drm/ttm/ttm_backup.c
> >   create mode 100644 include/drm/ttm/ttm_backup.h
> > 
> > diff --git a/drivers/gpu/drm/ttm/Makefile
> > b/drivers/gpu/drm/ttm/Makefile
> > index dad298127226..40d07a35293a 100644
> > --- a/drivers/gpu/drm/ttm/Makefile
> > +++ b/drivers/gpu/drm/ttm/Makefile
> > @@ -4,7 +4,7 @@
> >   
> >   ttm-y := ttm_tt.o ttm_bo.o ttm_bo_util.o ttm_bo_vm.o ttm_module.o
> > \
> >   	ttm_execbuf_util.o ttm_range_manager.o ttm_resource.o
> > ttm_pool.o \
> > -	ttm_device.o ttm_sys_manager.o
> > +	ttm_device.o ttm_sys_manager.o ttm_backup.o
> >   ttm-$(CONFIG_AGP) += ttm_agp_backend.o
> >   
> >   obj-$(CONFIG_DRM_TTM) += ttm.o
> > diff --git a/drivers/gpu/drm/ttm/ttm_backup.c
> > b/drivers/gpu/drm/ttm/ttm_backup.c
> > new file mode 100644
> > index 000000000000..bf16bb0c594e
> > --- /dev/null
> > +++ b/drivers/gpu/drm/ttm/ttm_backup.c
> > @@ -0,0 +1,204 @@
> > +// SPDX-License-Identifier: MIT
> > +/*
> > + * Copyright © 2024 Intel Corporation
> > + */
> > +
> > +#include <drm/ttm/ttm_backup.h>
> > +#include <linux/page-flags.h>
> > +#include <linux/swap.h>
> > +
> > +/*
> > + * Casting from randomized struct file * to struct ttm_backup * is
> > fine since
> > + * struct ttm_backup is never defined nor dereferenced.
> > + */
> > +static struct file *ttm_backup_to_file(struct ttm_backup *backup)
> 
> Do I get it right that struct ttm_backup is never really defined?

Yes.

> What 
> purpose does that have?

It's to make the struct ttm_backup opaque to the users of the
ttm_backup interface, so that the implementation doesn't have to worry
about the user making illegal assumptions about the implementation.
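
For illustration, from the user's side it ends up looking something
like this (a minimal sketch; the size and error handling here are
arbitrary):

	struct ttm_backup *backup = ttm_backup_shmem_create(SZ_2M);

	if (IS_ERR(backup))
		return PTR_ERR(backup);

	/* The pointer can only be handed back to the ttm_backup_*()
	 * functions; its layout is never visible to the caller.
	 */
	ttm_backup_fini(backup);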

> > +{
> > +	return (void *)backup;
> > +}
> > +
> > +static struct ttm_backup *ttm_file_to_backup(struct file *file)
> > +{
> > +	return (void *)file;
> > +}
> > +
> > +/*
> > + * Need to map shmem indices to handle since a handle value
> > + * of 0 means error, following the swp_entry_t convention.
> > + */
> > +static unsigned long ttm_backup_shmem_idx_to_handle(pgoff_t idx)
> > +{
> > +	return (unsigned long)idx + 1;
> > +}
> > +
> > +static pgoff_t ttm_backup_handle_to_shmem_idx(pgoff_t handle)
> > +{
> > +	return handle - 1;
> > +}
> > +
> > +/**
> > + * ttm_backup_drop() - release memory associated with a handle
> > + * @backup: The struct backup pointer used to obtain the handle
> > + * @handle: The handle obtained from the @backup_page function.
> > + */
> > +void ttm_backup_drop(struct ttm_backup *backup, pgoff_t handle)
> > +{
> > +	loff_t start = ttm_backup_handle_to_shmem_idx(handle);
> > +
> > +	start <<= PAGE_SHIFT;
> > +	shmem_truncate_range(file_inode(ttm_backup_to_file(backup)
> > ), start,
> > +			     start + PAGE_SIZE - 1);
> > +}
> > +
> > +/**
> > + * ttm_backup_copy_page() - Copy the contents of a previously
> > backed
> > + * up page
> > + * @backup: The struct backup pointer used to back up the page.
> > + * @dst: The struct page to copy into.
> > + * @handle: The handle returned when the page was backed up.
> > + * @intr: Try to perform waits interruptable or at least killable.
> > + *
> > + * Return: 0 on success, Negative error code on failure, notably
> > + * -EINTR if @intr was set to true and a signal is pending.
> > + */
> > +int ttm_backup_copy_page(struct ttm_backup *backup, struct page
> > *dst,
> > +			 pgoff_t handle, bool intr)
> > +{
> > +	struct file *filp = ttm_backup_to_file(backup);
> > +	struct address_space *mapping = filp->f_mapping;
> > +	struct folio *from_folio;
> > +	pgoff_t idx = ttm_backup_handle_to_shmem_idx(handle);
> > +
> > +	from_folio = shmem_read_folio(mapping, idx);
> > +	if (IS_ERR(from_folio))
> > +		return PTR_ERR(from_folio);
> > +
> > +	copy_highpage(dst, folio_file_page(from_folio, idx));
> > +	folio_put(from_folio);
> > +
> > +	return 0;
> > +}
> > +
> > +/**
> > + * ttm_backup_backup_page() - Backup a page
> > + * @backup: The struct backup pointer to use.
> > + * @page: The page to back up.
> > + * @writeback: Whether to perform immediate writeback of the page.
> > + * This may have performance implications.
> > + * @idx: A unique integer for each page and each struct backup.
> > + * This allows the backup implementation to avoid managing
> > + * its address space separately.
> > + * @page_gfp: The gfp value used when the page was allocated.
> > + * This is used for accounting purposes.
> > + * @alloc_gfp: The gpf to be used when allocating memory.
> 
> Typo: gfp instead of gpf.

Sure.

> 
> > + *
> > + * Context: If called from reclaim context, the caller needs to
> > + * assert that the shrinker gfp has __GFP_FS set, to avoid
> > + * deadlocking on lock_page(). If @writeback is set to true and
> > + * called from reclaim context, the caller also needs to assert
> > + * that the shrinker gfp has __GFP_IO set, since without it,
> > + * we're not allowed to start backup IO.
> > + *
> > + * Return: A handle on success. 0 on failure.
> > + * (This is following the swp_entry_t convention).
> > + *
> > + * Note: This function could be extended to back up a folio and
> > + * implementations would then split the folio internally if
> > needed.
> > + * Drawback is that the caller would then have to keep track of
> > + * the folio size- and usage.
> > + */
> > +unsigned long
> > +ttm_backup_backup_page(struct ttm_backup *backup, struct page
> > *page,
> > +		       bool writeback, pgoff_t idx, gfp_t
> > page_gfp,
> > +		       gfp_t alloc_gfp)
> > +{
> > +	struct file *filp = ttm_backup_to_file(backup);
> > +	struct address_space *mapping = filp->f_mapping;
> > +	unsigned long handle = 0;
> > +	struct folio *to_folio;
> > +	int ret;
> > +
> > +	to_folio = shmem_read_folio_gfp(mapping, idx, alloc_gfp);
> > +	if (IS_ERR(to_folio))
> > +		return handle;
> 
> Just that I sleep better: This can never return a folio larger than a
> page, doesn't it?

The interface definitely allows for returning larger folios, but the
individual page in the folio is selected by folio_file_page(folio,
idx).

/Thomas


> 
> Apart from those background questions looks good to me.
> 
> Regards,
> Christian.
> 
> > +
> > +	folio_mark_accessed(to_folio);
> > +	folio_lock(to_folio);
> > +	folio_mark_dirty(to_folio);
> > +	copy_highpage(folio_file_page(to_folio, idx), page);
> > +	handle = ttm_backup_shmem_idx_to_handle(idx);
> > +
> > +	if (writeback && !folio_mapped(to_folio) &&
> > +	    folio_clear_dirty_for_io(to_folio)) {
> > +		struct writeback_control wbc = {
> > +			.sync_mode = WB_SYNC_NONE,
> > +			.nr_to_write = SWAP_CLUSTER_MAX,
> > +			.range_start = 0,
> > +			.range_end = LLONG_MAX,
> > +			.for_reclaim = 1,
> > +		};
> > +		folio_set_reclaim(to_folio);
> > +		ret = mapping->a_ops-
> > >writepage(folio_file_page(to_folio, idx), &wbc);
> > +		if (!folio_test_writeback(to_folio))
> > +			folio_clear_reclaim(to_folio);
> > +		/* If writepage succeeds, it unlocks the folio */
> > +		if (ret)
> > +			folio_unlock(to_folio);
> > +	} else {
> > +		folio_unlock(to_folio);
> > +	}
> > +
> > +	folio_put(to_folio);
> > +
> > +	return handle;
> > +}
> > +
> > +/**
> > + * ttm_backup_fini() - Free the struct backup resources after last
> > use.
> > + * @backup: Pointer to the struct backup whose resources to free.
> > + *
> > + * After a call to this function, it's illegal to use the @backup
> > pointer.
> > + */
> > +void ttm_backup_fini(struct ttm_backup *backup)
> > +{
> > +	fput(ttm_backup_to_file(backup));
> > +}
> > +
> > +/**
> > + * ttm_backup_bytes_avail() - Report the approximate number of
> > bytes of backup space
> > + * left for backup.
> > + *
> > + * This function is intended also for driver use to indicate
> > whether a
> > + * backup attempt is meaningful.
> > + *
> > + * Return: An approximate size of backup space available.
> > + */
> > +u64 ttm_backup_bytes_avail(void)
> > +{
> > +	/*
> > +	 * The idea behind backing up to shmem is that shmem
> > objects may
> > +	 * eventually be swapped out. So no point swapping out if
> > there
> > +	 * is no or low swap-space available. But the accuracy of
> > this
> > +	 * number also depends on shmem actually swapping out
> > backed-up
> > +	 * shmem objects without too much buffering.
> > +	 */
> > +	return (u64)get_nr_swap_pages() << PAGE_SHIFT;
> > +}
> > +EXPORT_SYMBOL_GPL(ttm_backup_bytes_avail);
> > +
> > +/**
> > + * ttm_backup_shmem_create() - Create a shmem-based struct backup.
> > + * @size: The maximum size (in bytes) to back up.
> > + *
> > + * Create a backup utilizing shmem objects.
> > + *
> > + * Return: A pointer to a struct ttm_backup on success,
> > + * an error pointer on error.
> > + */
> > +struct ttm_backup *ttm_backup_shmem_create(loff_t size)
> > +{
> > +	struct file *filp;
> > +
> > +	filp = shmem_file_setup("ttm shmem backup", size, 0);
> > +
> > +	return ttm_file_to_backup(filp);
> > +}
> > diff --git a/include/drm/ttm/ttm_backup.h
> > b/include/drm/ttm/ttm_backup.h
> > new file mode 100644
> > index 000000000000..20609da7e281
> > --- /dev/null
> > +++ b/include/drm/ttm/ttm_backup.h
> > @@ -0,0 +1,74 @@
> > +/* SPDX-License-Identifier: MIT */
> > +/*
> > + * Copyright © 2024 Intel Corporation
> > + */
> > +
> > +#ifndef _TTM_BACKUP_H_
> > +#define _TTM_BACKUP_H_
> > +
> > +#include <linux/mm_types.h>
> > +#include <linux/shmem_fs.h>
> > +
> > +struct ttm_backup;
> > +
> > +/**
> > + * ttm_backup_handle_to_page_ptr() - Convert handle to struct page
> > pointer
> > + * @handle: The handle to convert.
> > + *
> > + * Converts an opaque handle received from the
> > + * struct ttm_backoup_ops::backup_page() function to an (invalid)
> > + * struct page pointer suitable for a struct page array.
> > + *
> > + * Return: An (invalid) struct page pointer.
> > + */
> > +static inline struct page *
> > +ttm_backup_handle_to_page_ptr(unsigned long handle)
> > +{
> > +	return (struct page *)(handle << 1 | 1);
> > +}
> > +
> > +/**
> > + * ttm_backup_page_ptr_is_handle() - Whether a struct page pointer
> > is a handle
> > + * @page: The struct page pointer to check.
> > + *
> > + * Return: true if the struct page pointer is a handld returned
> > from
> > + * ttm_backup_handle_to_page_ptr(). False otherwise.
> > + */
> > +static inline bool ttm_backup_page_ptr_is_handle(const struct page
> > *page)
> > +{
> > +	return (unsigned long)page & 1;
> > +}
> > +
> > +/**
> > + * ttm_backup_page_ptr_to_handle() - Convert a struct page pointer
> > to a handle
> > + * @page: The struct page pointer to convert
> > + *
> > + * Return: The handle that was previously used in
> > + * ttm_backup_handle_to_page_ptr() to obtain a struct page
> > pointer, suitable
> > + * for use as argument in the struct ttm_backup_ops drop() or
> > + * copy_backed_up_page() functions.
> > + */
> > +static inline unsigned long
> > +ttm_backup_page_ptr_to_handle(const struct page *page)
> > +{
> > +	WARN_ON(!ttm_backup_page_ptr_is_handle(page));
> > +	return (unsigned long)page >> 1;
> > +}
> > +
> > +void ttm_backup_drop(struct ttm_backup *backup, pgoff_t handle);
> > +
> > +int ttm_backup_copy_page(struct ttm_backup *backup, struct page
> > *dst,
> > +			 pgoff_t handle, bool intr);
> > +
> > +unsigned long
> > +ttm_backup_backup_page(struct ttm_backup *backup, struct page
> > *page,
> > +		       bool writeback, pgoff_t idx, gfp_t
> > page_gfp,
> > +		       gfp_t alloc_gfp);
> > +
> > +void ttm_backup_fini(struct ttm_backup *backup);
> > +
> > +u64 ttm_backup_bytes_avail(void);
> > +
> > +struct ttm_backup *ttm_backup_shmem_create(loff_t size);
> > +
> > +#endif
> 



* Re: [PATCH v14 2/8] drm/ttm: Provide a shmem backup implementation
  2024-11-20  7:58     ` Thomas Hellström
@ 2024-11-20  9:24       ` Christian König
  2024-11-20 10:34         ` Thomas Hellström
  2024-11-20 11:20         ` Thomas Hellström
  0 siblings, 2 replies; 54+ messages in thread
From: Christian König @ 2024-11-20  9:24 UTC (permalink / raw)
  To: Thomas Hellström, intel-xe
  Cc: Somalapuram Amaranath, Matthew Brost, dri-devel, Paulo Zanoni,
	Simona Vetter


Am 20.11.24 um 08:58 schrieb Thomas Hellström:
> On Tue, 2024-11-19 at 14:40 +0100, Christian König wrote:
>> [SNIP]
>>> +
>>> +/*
>>> + * Casting from randomized struct file * to struct ttm_backup * is
>>> fine since
>>> + * struct ttm_backup is never defined nor dereferenced.
>>> + */
>>> +static struct file *ttm_backup_to_file(struct ttm_backup *backup)
>> Do I get it right that struct ttm_backup is never really defined?
> Yes.
>
>> What
>> purpose does that have?
> It's to make the struct ttm_backup opaque to the users of the
> ttm_backup interface, so that the implementation doesn't have to worry
> about the user making illegal assumptions about the implementation.

That is usually done with a typedef, and it is one of the few cases
where typedefs are actually advised to be used.


[SNIP]
>>> + *
>>> + * Context: If called from reclaim context, the caller needs to
>>> + * assert that the shrinker gfp has __GFP_FS set, to avoid
>>> + * deadlocking on lock_page(). If @writeback is set to true and
>>> + * called from reclaim context, the caller also needs to assert
>>> + * that the shrinker gfp has __GFP_IO set, since without it,
>>> + * we're not allowed to start backup IO.
>>> + *
>>> + * Return: A handle on success. 0 on failure.
>>> + * (This is following the swp_entry_t convention).
>>> + *
>>> + * Note: This function could be extended to back up a folio and
>>> + * implementations would then split the folio internally if
>>> needed.
>>> + * Drawback is that the caller would then have to keep track of
>>> + * the folio size- and usage.
>>> + */
>>> +unsigned long
>>> +ttm_backup_backup_page(struct ttm_backup *backup, struct page
>>> *page,
>>> +		       bool writeback, pgoff_t idx, gfp_t
>>> page_gfp,
>>> +		       gfp_t alloc_gfp)
>>> +{
>>> +	struct file *filp = ttm_backup_to_file(backup);
>>> +	struct address_space *mapping = filp->f_mapping;
>>> +	unsigned long handle = 0;
>>> +	struct folio *to_folio;
>>> +	int ret;
>>> +
>>> +	to_folio = shmem_read_folio_gfp(mapping, idx, alloc_gfp);
>>> +	if (IS_ERR(to_folio))
>>> +		return handle;

Probably better to explicitly return 0 here.

And BTW why are we using 0 as indication for an error? Couldn't we just 
use a long as return value and return a proper -errno here?

>> Just that I sleep better: This can never return a folio larger than a
>> page, doesn't it?
> The interface definitely allows for returning larger folios, but the
> individual page in the folio is selected by folio_file_page(folio,
> idx).

Ah, yeah completely missed that and was really wondering why that would 
work.

>
> /Thomas
>
>
>> Apart from those background questions looks good to me.
>>
>> Regards,
>> Christian.
>>
>>> +
>>> +	folio_mark_accessed(to_folio);
>>> +	folio_lock(to_folio);
>>> +	folio_mark_dirty(to_folio);
>>> +	copy_highpage(folio_file_page(to_folio, idx), page);
>>> +	handle = ttm_backup_shmem_idx_to_handle(idx);
>>> +
>>> +	if (writeback && !folio_mapped(to_folio) &&
>>> +	    folio_clear_dirty_for_io(to_folio)) {
>>> +		struct writeback_control wbc = {
>>> +			.sync_mode = WB_SYNC_NONE,
>>> +			.nr_to_write = SWAP_CLUSTER_MAX,
>>> +			.range_start = 0,
>>> +			.range_end = LLONG_MAX,
>>> +			.for_reclaim = 1,
>>> +		};
>>> +		folio_set_reclaim(to_folio);
>>> +		ret = mapping->a_ops-
>>>> writepage(folio_file_page(to_folio, idx), &wbc);
>>> +		if (!folio_test_writeback(to_folio))
>>> +			folio_clear_reclaim(to_folio);
>>> +		/* If writepage succeeds, it unlocks the folio */
>>> +		if (ret)
>>> +			folio_unlock(to_folio);

The code ignores the error and potentially deserves an explanation for that.

Regards,
Christian.

>>> +	} else {
>>> +		folio_unlock(to_folio);
>>> +	}
>>> +
>>> +	folio_put(to_folio);
>>> +
>>> +	return handle;
>>> +}
>>> +
>>> +/**
>>> + * ttm_backup_fini() - Free the struct backup resources after last
>>> use.
>>> + * @backup: Pointer to the struct backup whose resources to free.
>>> + *
>>> + * After a call to this function, it's illegal to use the @backup
>>> pointer.
>>> + */
>>> +void ttm_backup_fini(struct ttm_backup *backup)
>>> +{
>>> +	fput(ttm_backup_to_file(backup));
>>> +}
>>> +
>>> +/**
>>> + * ttm_backup_bytes_avail() - Report the approximate number of
>>> bytes of backup space
>>> + * left for backup.
>>> + *
>>> + * This function is intended also for driver use to indicate
>>> whether a
>>> + * backup attempt is meaningful.
>>> + *
>>> + * Return: An approximate size of backup space available.
>>> + */
>>> +u64 ttm_backup_bytes_avail(void)
>>> +{
>>> +	/*
>>> +	 * The idea behind backing up to shmem is that shmem
>>> objects may
>>> +	 * eventually be swapped out. So no point swapping out if
>>> there
>>> +	 * is no or low swap-space available. But the accuracy of
>>> this
>>> +	 * number also depends on shmem actually swapping out
>>> backed-up
>>> +	 * shmem objects without too much buffering.
>>> +	 */
>>> +	return (u64)get_nr_swap_pages() << PAGE_SHIFT;
>>> +}
>>> +EXPORT_SYMBOL_GPL(ttm_backup_bytes_avail);
>>> +
>>> +/**
>>> + * ttm_backup_shmem_create() - Create a shmem-based struct backup.
>>> + * @size: The maximum size (in bytes) to back up.
>>> + *
>>> + * Create a backup utilizing shmem objects.
>>> + *
>>> + * Return: A pointer to a struct ttm_backup on success,
>>> + * an error pointer on error.
>>> + */
>>> +struct ttm_backup *ttm_backup_shmem_create(loff_t size)
>>> +{
>>> +	struct file *filp;
>>> +
>>> +	filp = shmem_file_setup("ttm shmem backup", size, 0);
>>> +
>>> +	return ttm_file_to_backup(filp);
>>> +}
>>> diff --git a/include/drm/ttm/ttm_backup.h
>>> b/include/drm/ttm/ttm_backup.h
>>> new file mode 100644
>>> index 000000000000..20609da7e281
>>> --- /dev/null
>>> +++ b/include/drm/ttm/ttm_backup.h
>>> @@ -0,0 +1,74 @@
>>> +/* SPDX-License-Identifier: MIT */
>>> +/*
>>> + * Copyright © 2024 Intel Corporation
>>> + */
>>> +
>>> +#ifndef _TTM_BACKUP_H_
>>> +#define _TTM_BACKUP_H_
>>> +
>>> +#include <linux/mm_types.h>
>>> +#include <linux/shmem_fs.h>
>>> +
>>> +struct ttm_backup;
>>> +
>>> +/**
>>> + * ttm_backup_handle_to_page_ptr() - Convert handle to struct page
>>> pointer
>>> + * @handle: The handle to convert.
>>> + *
>>> + * Converts an opaque handle received from the
>>> + * struct ttm_backoup_ops::backup_page() function to an (invalid)
>>> + * struct page pointer suitable for a struct page array.
>>> + *
>>> + * Return: An (invalid) struct page pointer.
>>> + */
>>> +static inline struct page *
>>> +ttm_backup_handle_to_page_ptr(unsigned long handle)
>>> +{
>>> +	return (struct page *)(handle << 1 | 1);
>>> +}
>>> +
>>> +/**
>>> + * ttm_backup_page_ptr_is_handle() - Whether a struct page pointer
>>> is a handle
>>> + * @page: The struct page pointer to check.
>>> + *
>>> + * Return: true if the struct page pointer is a handld returned
>>> from
>>> + * ttm_backup_handle_to_page_ptr(). False otherwise.
>>> + */
>>> +static inline bool ttm_backup_page_ptr_is_handle(const struct page
>>> *page)
>>> +{
>>> +	return (unsigned long)page & 1;
>>> +}
>>> +
>>> +/**
>>> + * ttm_backup_page_ptr_to_handle() - Convert a struct page pointer
>>> to a handle
>>> + * @page: The struct page pointer to convert
>>> + *
>>> + * Return: The handle that was previously used in
>>> + * ttm_backup_handle_to_page_ptr() to obtain a struct page
>>> pointer, suitable
>>> + * for use as argument in the struct ttm_backup_ops drop() or
>>> + * copy_backed_up_page() functions.
>>> + */
>>> +static inline unsigned long
>>> +ttm_backup_page_ptr_to_handle(const struct page *page)
>>> +{
>>> +	WARN_ON(!ttm_backup_page_ptr_is_handle(page));
>>> +	return (unsigned long)page >> 1;
>>> +}
>>> +
>>> +void ttm_backup_drop(struct ttm_backup *backup, pgoff_t handle);
>>> +
>>> +int ttm_backup_copy_page(struct ttm_backup *backup, struct page
>>> *dst,
>>> +			 pgoff_t handle, bool intr);
>>> +
>>> +unsigned long
>>> +ttm_backup_backup_page(struct ttm_backup *backup, struct page
>>> *page,
>>> +		       bool writeback, pgoff_t idx, gfp_t
>>> page_gfp,
>>> +		       gfp_t alloc_gfp);
>>> +
>>> +void ttm_backup_fini(struct ttm_backup *backup);
>>> +
>>> +u64 ttm_backup_bytes_avail(void);
>>> +
>>> +struct ttm_backup *ttm_backup_shmem_create(loff_t size);
>>> +
>>> +#endif



* Re: [PATCH v14 2/8] drm/ttm: Provide a shmem backup implementation
  2024-11-20  9:24       ` Christian König
@ 2024-11-20 10:34         ` Thomas Hellström
  2024-11-20 10:50           ` Christian König
  2024-11-20 11:20         ` Thomas Hellström
  1 sibling, 1 reply; 54+ messages in thread
From: Thomas Hellström @ 2024-11-20 10:34 UTC (permalink / raw)
  To: Christian König, intel-xe
  Cc: Somalapuram Amaranath, Matthew Brost, dri-devel, Paulo Zanoni,
	Simona Vetter

On Wed, 2024-11-20 at 10:24 +0100, Christian König wrote:
> Am 20.11.24 um 08:58 schrieb Thomas Hellström:
> > On Tue, 2024-11-19 at 14:40 +0100, Christian König wrote:
> > > [SNIP]
> > > > +
> > > > +/*
> > > > + * Casting from randomized struct file * to struct ttm_backup
> > > > * is
> > > > fine since
> > > > + * struct ttm_backup is never defined nor dereferenced.
> > > > + */
> > > > +static struct file *ttm_backup_to_file(struct ttm_backup
> > > > *backup)
> > > Do I get it right that struct ttm_backup is never really defined?
> > Yes.
> > 
> > > What
> > > purpose does that have?
> > It's to make the struct ttm_backup opaque to the users of the
> > ttm_backup interface, so that the implementation doesn't have to
> > worry
> > about the user making illegal assumptions about the implementation.
> 
> That is usually done with a typedef and one of the few cases where 
> typedefs are actually advised to be used.
> 

Well, wouldn't ttm_backup.h then have to include the declaration of
struct file, plus a typedef that would probably raise many eyebrows even
if it's OK to use it there?

Having the header just declare a struct without providing a definition
is the typical way of hiding the implementation and avoiding includes, no?

If you insist, we can drop the struct ttm_backup * and just use struct
file directly, but then again, if we change the implementation to allow
backing up to a file or similar, that would need to be re-done. So, as
said, unless you insist I'd rather keep it as is.

> 
> [SNIP]
> > > > + *
> > > > + * Context: If called from reclaim context, the caller needs
> > > > to
> > > > + * assert that the shrinker gfp has __GFP_FS set, to avoid
> > > > + * deadlocking on lock_page(). If @writeback is set to true
> > > > and
> > > > + * called from reclaim context, the caller also needs to
> > > > assert
> > > > + * that the shrinker gfp has __GFP_IO set, since without it,
> > > > + * we're not allowed to start backup IO.
> > > > + *
> > > > + * Return: A handle on success. 0 on failure.
> > > > + * (This is following the swp_entry_t convention).
> > > > + *
> > > > + * Note: This function could be extended to back up a folio
> > > > and
> > > > + * implementations would then split the folio internally if
> > > > needed.
> > > > + * Drawback is that the caller would then have to keep track
> > > > of
> > > > + * the folio size- and usage.
> > > > + */
> > > > +unsigned long
> > > > +ttm_backup_backup_page(struct ttm_backup *backup, struct page
> > > > *page,
> > > > +		       bool writeback, pgoff_t idx, gfp_t
> > > > page_gfp,
> > > > +		       gfp_t alloc_gfp)
> > > > +{
> > > > +	struct file *filp = ttm_backup_to_file(backup);
> > > > +	struct address_space *mapping = filp->f_mapping;
> > > > +	unsigned long handle = 0;
> > > > +	struct folio *to_folio;
> > > > +	int ret;
> > > > +
> > > > +	to_folio = shmem_read_folio_gfp(mapping, idx,
> > > > alloc_gfp);
> > > > +	if (IS_ERR(to_folio))
> > > > +		return handle;
> 
> Probably better to explicitly return 0 here.

OK,

> 
> And BTW why are we using 0 as indication for an error? Couldn't we
> just 
> use a long as return value and return a proper -errno here?

0 is the swp_entry_t error value, and that is the convention also used
for the handles. So rather than inventing something new, it'd be good to
stick to something that would still work even with handles aliased to
swp_entry_t, should we need to resort to that at some point.
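
So with the current convention a caller does roughly the following
(a minimal sketch; what to return on failure is up to the caller):

	unsigned long handle;

	handle = ttm_backup_backup_page(backup, page, false, idx,
					page_gfp, alloc_gfp);
	if (!handle)
		return -ENOMEM; /* 0 means failure, as for swp_entry_t */

	/* A non-zero handle can later be passed to
	 * ttm_backup_copy_page() or ttm_backup_drop().
	 */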

> 
> > > Just that I sleep better: This can never return a folio larger
> > > than a
> > > page, doesn't it?
> > The interface definitely allows for returning larger folios, but
> > the
> > individual page in the folio is selected by folio_file_page(folio,
> > idx).
> 
> Ah, yeah completely missed that and was really wondering why that
> would 
> work.

Thanks,
Thomas

> 
> > 
> > /Thomas
> > 
> > 
> > > Apart from those background questions looks good to me.
> > > 
> > > Regards,
> > > Christian.
> > > 
> > > > +
> > > > +	folio_mark_accessed(to_folio);
> > > > +	folio_lock(to_folio);
> > > > +	folio_mark_dirty(to_folio);
> > > > +	copy_highpage(folio_file_page(to_folio, idx), page);
> > > > +	handle = ttm_backup_shmem_idx_to_handle(idx);
> > > > +
> > > > +	if (writeback && !folio_mapped(to_folio) &&
> > > > +	    folio_clear_dirty_for_io(to_folio)) {
> > > > +		struct writeback_control wbc = {
> > > > +			.sync_mode = WB_SYNC_NONE,
> > > > +			.nr_to_write = SWAP_CLUSTER_MAX,
> > > > +			.range_start = 0,
> > > > +			.range_end = LLONG_MAX,
> > > > +			.for_reclaim = 1,
> > > > +		};
> > > > +		folio_set_reclaim(to_folio);
> > > > +		ret = mapping->a_ops-
> > > > > writepage(folio_file_page(to_folio, idx), &wbc);
> > > > +		if (!folio_test_writeback(to_folio))
> > > > +			folio_clear_reclaim(to_folio);
> > > > +		/* If writepage succeeds, it unlocks the folio
> > > > */
> > > > +		if (ret)
> > > > +			folio_unlock(to_folio);
> 
> The code ignores the error and potentially deserves an explanation
> for that.
> 
> Regards,
> Christian.
> 
> > > > +	} else {
> > > > +		folio_unlock(to_folio);
> > > > +	}
> > > > +
> > > > +	folio_put(to_folio);
> > > > +
> > > > +	return handle;
> > > > +}
> > > > +
> > > > +/**
> > > > + * ttm_backup_fini() - Free the struct backup resources after
> > > > last
> > > > use.
> > > > + * @backup: Pointer to the struct backup whose resources to
> > > > free.
> > > > + *
> > > > + * After a call to this function, it's illegal to use the
> > > > @backup
> > > > pointer.
> > > > + */
> > > > +void ttm_backup_fini(struct ttm_backup *backup)
> > > > +{
> > > > +	fput(ttm_backup_to_file(backup));
> > > > +}
> > > > +
> > > > +/**
> > > > + * ttm_backup_bytes_avail() - Report the approximate number of
> > > > bytes of backup space
> > > > + * left for backup.
> > > > + *
> > > > + * This function is intended also for driver use to indicate
> > > > whether a
> > > > + * backup attempt is meaningful.
> > > > + *
> > > > + * Return: An approximate size of backup space available.
> > > > + */
> > > > +u64 ttm_backup_bytes_avail(void)
> > > > +{
> > > > +	/*
> > > > +	 * The idea behind backing up to shmem is that shmem
> > > > objects may
> > > > +	 * eventually be swapped out. So no point swapping out
> > > > if
> > > > there
> > > > +	 * is no or low swap-space available. But the accuracy
> > > > of
> > > > this
> > > > +	 * number also depends on shmem actually swapping out
> > > > backed-up
> > > > +	 * shmem objects without too much buffering.
> > > > +	 */
> > > > +	return (u64)get_nr_swap_pages() << PAGE_SHIFT;
> > > > +}
> > > > +EXPORT_SYMBOL_GPL(ttm_backup_bytes_avail);
> > > > +
> > > > +/**
> > > > + * ttm_backup_shmem_create() - Create a shmem-based struct
> > > > backup.
> > > > + * @size: The maximum size (in bytes) to back up.
> > > > + *
> > > > + * Create a backup utilizing shmem objects.
> > > > + *
> > > > + * Return: A pointer to a struct ttm_backup on success,
> > > > + * an error pointer on error.
> > > > + */
> > > > +struct ttm_backup *ttm_backup_shmem_create(loff_t size)
> > > > +{
> > > > +	struct file *filp;
> > > > +
> > > > +	filp = shmem_file_setup("ttm shmem backup", size, 0);
> > > > +
> > > > +	return ttm_file_to_backup(filp);
> > > > +}
> > > > diff --git a/include/drm/ttm/ttm_backup.h
> > > > b/include/drm/ttm/ttm_backup.h
> > > > new file mode 100644
> > > > index 000000000000..20609da7e281
> > > > --- /dev/null
> > > > +++ b/include/drm/ttm/ttm_backup.h
> > > > @@ -0,0 +1,74 @@
> > > > +/* SPDX-License-Identifier: MIT */
> > > > +/*
> > > > + * Copyright © 2024 Intel Corporation
> > > > + */
> > > > +
> > > > +#ifndef _TTM_BACKUP_H_
> > > > +#define _TTM_BACKUP_H_
> > > > +
> > > > +#include <linux/mm_types.h>
> > > > +#include <linux/shmem_fs.h>
> > > > +
> > > > +struct ttm_backup;
> > > > +
> > > > +/**
> > > > + * ttm_backup_handle_to_page_ptr() - Convert handle to struct
> > > > page
> > > > pointer
> > > > + * @handle: The handle to convert.
> > > > + *
> > > > + * Converts an opaque handle received from the
> > > > + * struct ttm_backoup_ops::backup_page() function to an
> > > > (invalid)
> > > > + * struct page pointer suitable for a struct page array.
> > > > + *
> > > > + * Return: An (invalid) struct page pointer.
> > > > + */
> > > > +static inline struct page *
> > > > +ttm_backup_handle_to_page_ptr(unsigned long handle)
> > > > +{
> > > > +	return (struct page *)(handle << 1 | 1);
> > > > +}
> > > > +
> > > > +/**
> > > > + * ttm_backup_page_ptr_is_handle() - Whether a struct page
> > > > pointer
> > > > is a handle
> > > > + * @page: The struct page pointer to check.
> > > > + *
> > > > + * Return: true if the struct page pointer is a handld
> > > > returned
> > > > from
> > > > + * ttm_backup_handle_to_page_ptr(). False otherwise.
> > > > + */
> > > > +static inline bool ttm_backup_page_ptr_is_handle(const struct
> > > > page
> > > > *page)
> > > > +{
> > > > +	return (unsigned long)page & 1;
> > > > +}
> > > > +
> > > > +/**
> > > > + * ttm_backup_page_ptr_to_handle() - Convert a struct page
> > > > pointer
> > > > to a handle
> > > > + * @page: The struct page pointer to convert
> > > > + *
> > > > + * Return: The handle that was previously used in
> > > > + * ttm_backup_handle_to_page_ptr() to obtain a struct page
> > > > pointer, suitable
> > > > + * for use as argument in the struct ttm_backup_ops drop() or
> > > > + * copy_backed_up_page() functions.
> > > > + */
> > > > +static inline unsigned long
> > > > +ttm_backup_page_ptr_to_handle(const struct page *page)
> > > > +{
> > > > +	WARN_ON(!ttm_backup_page_ptr_is_handle(page));
> > > > +	return (unsigned long)page >> 1;
> > > > +}
> > > > +
> > > > +void ttm_backup_drop(struct ttm_backup *backup, pgoff_t
> > > > handle);
> > > > +
> > > > +int ttm_backup_copy_page(struct ttm_backup *backup, struct
> > > > page
> > > > *dst,
> > > > +			 pgoff_t handle, bool intr);
> > > > +
> > > > +unsigned long
> > > > +ttm_backup_backup_page(struct ttm_backup *backup, struct page
> > > > *page,
> > > > +		       bool writeback, pgoff_t idx, gfp_t
> > > > page_gfp,
> > > > +		       gfp_t alloc_gfp);
> > > > +
> > > > +void ttm_backup_fini(struct ttm_backup *backup);
> > > > +
> > > > +u64 ttm_backup_bytes_avail(void);
> > > > +
> > > > +struct ttm_backup *ttm_backup_shmem_create(loff_t size);
> > > > +
> > > > +#endif



* Re: [PATCH v14 2/8] drm/ttm: Provide a shmem backup implementation
  2024-11-20 10:34         ` Thomas Hellström
@ 2024-11-20 10:50           ` Christian König
  2024-11-20 11:07             ` Thomas Hellström
  0 siblings, 1 reply; 54+ messages in thread
From: Christian König @ 2024-11-20 10:50 UTC (permalink / raw)
  To: Thomas Hellström, intel-xe
  Cc: Somalapuram Amaranath, Matthew Brost, dri-devel, Paulo Zanoni,
	Simona Vetter

Am 20.11.24 um 11:34 schrieb Thomas Hellström:
> On Wed, 2024-11-20 at 10:24 +0100, Christian König wrote:
>> Am 20.11.24 um 08:58 schrieb Thomas Hellström:
>>> On Tue, 2024-11-19 at 14:40 +0100, Christian König wrote:
>>>> [SNIP]
>>>>> +
>>>>> +/*
>>>>> + * Casting from randomized struct file * to struct ttm_backup
>>>>> * is
>>>>> fine since
>>>>> + * struct ttm_backup is never defined nor dereferenced.
>>>>> + */
>>>>> +static struct file *ttm_backup_to_file(struct ttm_backup
>>>>> *backup)
>>>> Do I get it right that struct ttm_backup is never really defined?
>>> Yes.
>>>
>>>> What
>>>> purpose does that have?
>>> It's to make the struct ttm_backup opaque to the users of the
>>> ttm_backup interface, so that the implementation doesn't have to
>>> worry
>>> about the user making illegal assumptions about the implementation.
>> That is usually done with a typedef and one of the few cases where
>> typedefs are actually advised to be used.
>>
> Well wouldn't ttm_backup.h then have to include the declaration of
> struct file plus a typedef that would probably raise many eyebrows even
> if it's ok to use it there?

No, what you do is something like this:

typedef struct ttm_backup *ttm_backup;

Then struct ttm_backup is either never defined or only inside your C 
file but not the header.
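
Spelled out a bit more, that would look something like this (just a
sketch of the suggestion, not actual patch code):

/* In the header, the typedef is all that users ever see: */
typedef struct ttm_backup *ttm_backup;

ttm_backup ttm_backup_shmem_create(loff_t size);

/* struct ttm_backup itself is then defined only in the .c file, or not
 * at all, so users still cannot dereference it.
 */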

> Having the header just declare a struct without providing a definition
> is the typical way of hiding the implementation and avoid includes, no?
>
> If you insist we can drop the struct ttm_backup * and just use struct
> file, but then again if we change the implementation to allow for
> backuping to a file or similar that needs to be re-done, so as said
> unless you insist I'd rather keep it as is.

Abstracting that is ok, I was just wondering about why you do it like this.

>
>> [SNIP]
>>>>> + *
>>>>> + * Context: If called from reclaim context, the caller needs
>>>>> to
>>>>> + * assert that the shrinker gfp has __GFP_FS set, to avoid
>>>>> + * deadlocking on lock_page(). If @writeback is set to true
>>>>> and
>>>>> + * called from reclaim context, the caller also needs to
>>>>> assert
>>>>> + * that the shrinker gfp has __GFP_IO set, since without it,
>>>>> + * we're not allowed to start backup IO.
>>>>> + *
>>>>> + * Return: A handle on success. 0 on failure.
>>>>> + * (This is following the swp_entry_t convention).
>>>>> + *
>>>>> + * Note: This function could be extended to back up a folio
>>>>> and
>>>>> + * implementations would then split the folio internally if
>>>>> needed.
>>>>> + * Drawback is that the caller would then have to keep track
>>>>> of
>>>>> + * the folio size- and usage.
>>>>> + */
>>>>> +unsigned long
>>>>> +ttm_backup_backup_page(struct ttm_backup *backup, struct page
>>>>> *page,
>>>>> +		       bool writeback, pgoff_t idx, gfp_t
>>>>> page_gfp,
>>>>> +		       gfp_t alloc_gfp)
>>>>> +{
>>>>> +	struct file *filp = ttm_backup_to_file(backup);
>>>>> +	struct address_space *mapping = filp->f_mapping;
>>>>> +	unsigned long handle = 0;
>>>>> +	struct folio *to_folio;
>>>>> +	int ret;
>>>>> +
>>>>> +	to_folio = shmem_read_folio_gfp(mapping, idx,
>>>>> alloc_gfp);
>>>>> +	if (IS_ERR(to_folio))
>>>>> +		return handle;
>> Probably better to explicitly return 0 here.
> OK,
>
>> And BTW why are we using 0 as indication for an error? Couldn't we
>> just
>> use a long as return value and return a proper -errno here?
> 0 is the swp_entry_t error value which is the convention also used for
> the handles, so rather than inventing something new It'd be good to
> keep to something that would work even with handles aliased to
> swp_entry_t if we'd need to resort to that at some point.

Uff, yeah, but that is an implementation detail of the swap subsystem,
caused by how we store the swapped-out entries inside CPU PTEs.

I would strongly try to avoid that here. I was already wondering why we
use long as the return value in some places and s64 in others.
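
I.e. something like this (just a sketch of the suggested alternative
with a signed return type carrying a proper -errno, not what the patch
currently does):

	long handle = ttm_backup_backup_page(backup, page, writeback, idx,
					     page_gfp, alloc_gfp);

	if (handle < 0)
		return handle; /* a proper -errno from the backend */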

Regards,
Christian.

>
>>>> Just that I sleep better: This can never return a folio larger
>>>> than a
>>>> page, doesn't it?
>>> The interface definitely allows for returning larger folios, but
>>> the
>>> individual page in the folio is selected by folio_file_page(folio,
>>> idx).
>> Ah, yeah completely missed that and was really wondering why that
>> would
>> work.
> Thanks,
> Thomas
>
>>> /Thomas
>>>
>>>
>>>> Apart from those background questions looks good to me.
>>>>
>>>> Regards,
>>>> Christian.
>>>>
>>>>> +
>>>>> +	folio_mark_accessed(to_folio);
>>>>> +	folio_lock(to_folio);
>>>>> +	folio_mark_dirty(to_folio);
>>>>> +	copy_highpage(folio_file_page(to_folio, idx), page);
>>>>> +	handle = ttm_backup_shmem_idx_to_handle(idx);
>>>>> +
>>>>> +	if (writeback && !folio_mapped(to_folio) &&
>>>>> +	    folio_clear_dirty_for_io(to_folio)) {
>>>>> +		struct writeback_control wbc = {
>>>>> +			.sync_mode = WB_SYNC_NONE,
>>>>> +			.nr_to_write = SWAP_CLUSTER_MAX,
>>>>> +			.range_start = 0,
>>>>> +			.range_end = LLONG_MAX,
>>>>> +			.for_reclaim = 1,
>>>>> +		};
>>>>> +		folio_set_reclaim(to_folio);
>>>>> +		ret = mapping->a_ops-
>>>>>> writepage(folio_file_page(to_folio, idx), &wbc);
>>>>> +		if (!folio_test_writeback(to_folio))
>>>>> +			folio_clear_reclaim(to_folio);
>>>>> +		/* If writepage succeeds, it unlocks the folio
>>>>> */
>>>>> +		if (ret)
>>>>> +			folio_unlock(to_folio);
>> The code ignores the error and potentially deserves an explanation
>> for that.
>>
>> Regards,
>> Christian.
>>
>>>>> +	} else {
>>>>> +		folio_unlock(to_folio);
>>>>> +	}
>>>>> +
>>>>> +	folio_put(to_folio);
>>>>> +
>>>>> +	return handle;
>>>>> +}
>>>>> +
>>>>> +/**
>>>>> + * ttm_backup_fini() - Free the struct backup resources after
>>>>> last
>>>>> use.
>>>>> + * @backup: Pointer to the struct backup whose resources to
>>>>> free.
>>>>> + *
>>>>> + * After a call to this function, it's illegal to use the
>>>>> @backup
>>>>> pointer.
>>>>> + */
>>>>> +void ttm_backup_fini(struct ttm_backup *backup)
>>>>> +{
>>>>> +	fput(ttm_backup_to_file(backup));
>>>>> +}
>>>>> +
>>>>> +/**
>>>>> + * ttm_backup_bytes_avail() - Report the approximate number of
>>>>> bytes of backup space
>>>>> + * left for backup.
>>>>> + *
>>>>> + * This function is intended also for driver use to indicate
>>>>> whether a
>>>>> + * backup attempt is meaningful.
>>>>> + *
>>>>> + * Return: An approximate size of backup space available.
>>>>> + */
>>>>> +u64 ttm_backup_bytes_avail(void)
>>>>> +{
>>>>> +	/*
>>>>> +	 * The idea behind backing up to shmem is that shmem
>>>>> objects may
>>>>> +	 * eventually be swapped out. So no point swapping out
>>>>> if
>>>>> there
>>>>> +	 * is no or low swap-space available. But the accuracy
>>>>> of
>>>>> this
>>>>> +	 * number also depends on shmem actually swapping out
>>>>> backed-up
>>>>> +	 * shmem objects without too much buffering.
>>>>> +	 */
>>>>> +	return (u64)get_nr_swap_pages() << PAGE_SHIFT;
>>>>> +}
>>>>> +EXPORT_SYMBOL_GPL(ttm_backup_bytes_avail);
>>>>> +
>>>>> +/**
>>>>> + * ttm_backup_shmem_create() - Create a shmem-based struct
>>>>> backup.
>>>>> + * @size: The maximum size (in bytes) to back up.
>>>>> + *
>>>>> + * Create a backup utilizing shmem objects.
>>>>> + *
>>>>> + * Return: A pointer to a struct ttm_backup on success,
>>>>> + * an error pointer on error.
>>>>> + */
>>>>> +struct ttm_backup *ttm_backup_shmem_create(loff_t size)
>>>>> +{
>>>>> +	struct file *filp;
>>>>> +
>>>>> +	filp = shmem_file_setup("ttm shmem backup", size, 0);
>>>>> +
>>>>> +	return ttm_file_to_backup(filp);
>>>>> +}
>>>>> diff --git a/include/drm/ttm/ttm_backup.h
>>>>> b/include/drm/ttm/ttm_backup.h
>>>>> new file mode 100644
>>>>> index 000000000000..20609da7e281
>>>>> --- /dev/null
>>>>> +++ b/include/drm/ttm/ttm_backup.h
>>>>> @@ -0,0 +1,74 @@
>>>>> +/* SPDX-License-Identifier: MIT */
>>>>> +/*
>>>>> + * Copyright © 2024 Intel Corporation
>>>>> + */
>>>>> +
>>>>> +#ifndef _TTM_BACKUP_H_
>>>>> +#define _TTM_BACKUP_H_
>>>>> +
>>>>> +#include <linux/mm_types.h>
>>>>> +#include <linux/shmem_fs.h>
>>>>> +
>>>>> +struct ttm_backup;
>>>>> +
>>>>> +/**
>>>>> + * ttm_backup_handle_to_page_ptr() - Convert handle to struct
>>>>> page
>>>>> pointer
>>>>> + * @handle: The handle to convert.
>>>>> + *
>>>>> + * Converts an opaque handle received from the
>>>>> + * struct ttm_backoup_ops::backup_page() function to an
>>>>> (invalid)
>>>>> + * struct page pointer suitable for a struct page array.
>>>>> + *
>>>>> + * Return: An (invalid) struct page pointer.
>>>>> + */
>>>>> +static inline struct page *
>>>>> +ttm_backup_handle_to_page_ptr(unsigned long handle)
>>>>> +{
>>>>> +	return (struct page *)(handle << 1 | 1);
>>>>> +}
>>>>> +
>>>>> +/**
>>>>> + * ttm_backup_page_ptr_is_handle() - Whether a struct page
>>>>> pointer
>>>>> is a handle
>>>>> + * @page: The struct page pointer to check.
>>>>> + *
>>>>> + * Return: true if the struct page pointer is a handld
>>>>> returned
>>>>> from
>>>>> + * ttm_backup_handle_to_page_ptr(). False otherwise.
>>>>> + */
>>>>> +static inline bool ttm_backup_page_ptr_is_handle(const struct
>>>>> page
>>>>> *page)
>>>>> +{
>>>>> +	return (unsigned long)page & 1;
>>>>> +}
>>>>> +
>>>>> +/**
>>>>> + * ttm_backup_page_ptr_to_handle() - Convert a struct page
>>>>> pointer
>>>>> to a handle
>>>>> + * @page: The struct page pointer to convert
>>>>> + *
>>>>> + * Return: The handle that was previously used in
>>>>> + * ttm_backup_handle_to_page_ptr() to obtain a struct page
>>>>> pointer, suitable
>>>>> + * for use as argument in the struct ttm_backup_ops drop() or
>>>>> + * copy_backed_up_page() functions.
>>>>> + */
>>>>> +static inline unsigned long
>>>>> +ttm_backup_page_ptr_to_handle(const struct page *page)
>>>>> +{
>>>>> +	WARN_ON(!ttm_backup_page_ptr_is_handle(page));
>>>>> +	return (unsigned long)page >> 1;
>>>>> +}
>>>>> +
>>>>> +void ttm_backup_drop(struct ttm_backup *backup, pgoff_t
>>>>> handle);
>>>>> +
>>>>> +int ttm_backup_copy_page(struct ttm_backup *backup, struct
>>>>> page
>>>>> *dst,
>>>>> +			 pgoff_t handle, bool intr);
>>>>> +
>>>>> +unsigned long
>>>>> +ttm_backup_backup_page(struct ttm_backup *backup, struct page
>>>>> *page,
>>>>> +		       bool writeback, pgoff_t idx, gfp_t
>>>>> page_gfp,
>>>>> +		       gfp_t alloc_gfp);
>>>>> +
>>>>> +void ttm_backup_fini(struct ttm_backup *backup);
>>>>> +
>>>>> +u64 ttm_backup_bytes_avail(void);
>>>>> +
>>>>> +struct ttm_backup *ttm_backup_shmem_create(loff_t size);
>>>>> +
>>>>> +#endif



* Re: [PATCH v14 1/8] drm/ttm: Balance ttm_resource_cursor_init() and ttm_resource_cursor_fini()
  2024-11-15 15:01 ` [PATCH v14 1/8] drm/ttm: Balance ttm_resource_cursor_init() and ttm_resource_cursor_fini() Thomas Hellström
@ 2024-11-20 10:51   ` Christian König
  2024-11-21 15:54     ` Thomas Hellström
  0 siblings, 1 reply; 54+ messages in thread
From: Christian König @ 2024-11-20 10:51 UTC (permalink / raw)
  To: Thomas Hellström, intel-xe
  Cc: Matthew Brost, Somalapuram Amaranath, Paulo Zanoni, Simona Vetter,
	dri-devel

Am 15.11.24 um 16:01 schrieb Thomas Hellström:
> Make the interface more symmetric by providing and using a
> ttm_resource_cursor_init().
>
> v10:
> - Fix a stray newline (Matthew Brost)
> - Update kerneldoc (Matthew Brost)
>
> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Reviewed-by: Matthew Brost <matthew.brost@intel.com>
> Reviewed-by: Christian König <christian.koenig@amd.com>

Did you plan to merge this through drm-misc-next or the XE branch?

If through drm-misc-next, then I would go ahead and push this patch,
since that is really a stand-alone cleanup.

Regards,
Christian.

> ---
>   drivers/gpu/drm/ttm/ttm_bo.c       |  3 ++-
>   drivers/gpu/drm/ttm/ttm_bo_util.c  |  3 ++-
>   drivers/gpu/drm/ttm/ttm_resource.c | 35 ++++++++++++++++++++----------
>   include/drm/ttm/ttm_resource.h     | 11 +++++-----
>   4 files changed, 34 insertions(+), 18 deletions(-)
>
> diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
> index 48c5365efca1..06d6a452c4f4 100644
> --- a/drivers/gpu/drm/ttm/ttm_bo.c
> +++ b/drivers/gpu/drm/ttm/ttm_bo.c
> @@ -450,7 +450,8 @@ int ttm_bo_evict_first(struct ttm_device *bdev, struct ttm_resource_manager *man
>   	int ret = 0;
>   
>   	spin_lock(&bdev->lru_lock);
> -	res = ttm_resource_manager_first(man, &cursor);
> +	ttm_resource_cursor_init(&cursor, man);
> +	res = ttm_resource_manager_first(&cursor);
>   	ttm_resource_cursor_fini(&cursor);
>   	if (!res) {
>   		ret = -ENOENT;
> diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
> index d939925efa81..917096bd5f68 100644
> --- a/drivers/gpu/drm/ttm/ttm_bo_util.c
> +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
> @@ -865,7 +865,8 @@ s64 ttm_lru_walk_for_evict(struct ttm_lru_walk *walk, struct ttm_device *bdev,
>   	s64 lret;
>   
>   	spin_lock(&bdev->lru_lock);
> -	ttm_resource_manager_for_each_res(man, &cursor, res) {
> +	ttm_resource_cursor_init(&cursor, man);
> +	ttm_resource_manager_for_each_res(&cursor, res) {
>   		struct ttm_buffer_object *bo = res->bo;
>   		bool bo_needs_unlock = false;
>   		bool bo_locked = false;
> diff --git a/drivers/gpu/drm/ttm/ttm_resource.c b/drivers/gpu/drm/ttm/ttm_resource.c
> index a87665eb28a6..e19360cc7930 100644
> --- a/drivers/gpu/drm/ttm/ttm_resource.c
> +++ b/drivers/gpu/drm/ttm/ttm_resource.c
> @@ -81,6 +81,23 @@ static void ttm_bulk_move_drop_cursors(struct ttm_lru_bulk_move *bulk)
>   		ttm_resource_cursor_clear_bulk(cursor);
>   }
>   
> +/**
> + * ttm_resource_cursor_init() - Initialize a struct ttm_resource_cursor
> + * @cursor: The cursor to initialize.
> + * @man: The resource manager.
> + *
> + * Initialize the cursor before using it for iteration.
> + */
> +void ttm_resource_cursor_init(struct ttm_resource_cursor *cursor,
> +			      struct ttm_resource_manager *man)
> +{
> +	cursor->priority = 0;
> +	cursor->man = man;
> +	ttm_lru_item_init(&cursor->hitch, TTM_LRU_HITCH);
> +	INIT_LIST_HEAD(&cursor->bulk_link);
> +	INIT_LIST_HEAD(&cursor->hitch.link);
> +}
> +
>   /**
>    * ttm_resource_cursor_fini() - Finalize the LRU list cursor usage
>    * @cursor: The struct ttm_resource_cursor to finalize.
> @@ -593,7 +610,6 @@ ttm_resource_cursor_check_bulk(struct ttm_resource_cursor *cursor,
>   /**
>    * ttm_resource_manager_first() - Start iterating over the resources
>    * of a resource manager
> - * @man: resource manager to iterate over
>    * @cursor: cursor to record the position
>    *
>    * Initializes the cursor and starts iterating. When done iterating,
> @@ -602,17 +618,16 @@ ttm_resource_cursor_check_bulk(struct ttm_resource_cursor *cursor,
>    * Return: The first resource from the resource manager.
>    */
>   struct ttm_resource *
> -ttm_resource_manager_first(struct ttm_resource_manager *man,
> -			   struct ttm_resource_cursor *cursor)
> +ttm_resource_manager_first(struct ttm_resource_cursor *cursor)
>   {
> -	lockdep_assert_held(&man->bdev->lru_lock);
> +	struct ttm_resource_manager *man = cursor->man;
>   
> -	cursor->priority = 0;
> -	cursor->man = man;
> -	ttm_lru_item_init(&cursor->hitch, TTM_LRU_HITCH);
> -	INIT_LIST_HEAD(&cursor->bulk_link);
> -	list_add(&cursor->hitch.link, &man->lru[cursor->priority]);
> +	if (WARN_ON_ONCE(!man))
> +		return NULL;
> +
> +	lockdep_assert_held(&man->bdev->lru_lock);
>   
> +	list_move(&cursor->hitch.link, &man->lru[cursor->priority]);
>   	return ttm_resource_manager_next(cursor);
>   }
>   
> @@ -648,8 +663,6 @@ ttm_resource_manager_next(struct ttm_resource_cursor *cursor)
>   		ttm_resource_cursor_clear_bulk(cursor);
>   	}
>   
> -	ttm_resource_cursor_fini(cursor);
> -
>   	return NULL;
>   }
>   
> diff --git a/include/drm/ttm/ttm_resource.h b/include/drm/ttm/ttm_resource.h
> index be034be56ba1..e1f3b95d73b6 100644
> --- a/include/drm/ttm/ttm_resource.h
> +++ b/include/drm/ttm/ttm_resource.h
> @@ -325,6 +325,9 @@ struct ttm_resource_cursor {
>   	unsigned int priority;
>   };
>   
> +void ttm_resource_cursor_init(struct ttm_resource_cursor *cursor,
> +			      struct ttm_resource_manager *man);
> +
>   void ttm_resource_cursor_fini(struct ttm_resource_cursor *cursor);
>   
>   /**
> @@ -456,8 +459,7 @@ void ttm_resource_manager_debug(struct ttm_resource_manager *man,
>   				struct drm_printer *p);
>   
>   struct ttm_resource *
> -ttm_resource_manager_first(struct ttm_resource_manager *man,
> -			   struct ttm_resource_cursor *cursor);
> +ttm_resource_manager_first(struct ttm_resource_cursor *cursor);
>   struct ttm_resource *
>   ttm_resource_manager_next(struct ttm_resource_cursor *cursor);
>   
> @@ -466,14 +468,13 @@ ttm_lru_first_res_or_null(struct list_head *head);
>   
>   /**
>    * ttm_resource_manager_for_each_res - iterate over all resources
> - * @man: the resource manager
>    * @cursor: struct ttm_resource_cursor for the current position
>    * @res: the current resource
>    *
>    * Iterate over all the evictable resources in a resource manager.
>    */
> -#define ttm_resource_manager_for_each_res(man, cursor, res)		\
> -	for (res = ttm_resource_manager_first(man, cursor); res;	\
> +#define ttm_resource_manager_for_each_res(cursor, res)	\
> +	for (res = ttm_resource_manager_first(cursor); res;	\
>   	     res = ttm_resource_manager_next(cursor))
>   
>   struct ttm_kmap_iter *
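
For reference, the iteration pattern after this change looks roughly
like this (a sketch modelled on the ttm_bo_evict_first() hunk above;
locking and cleanup details vary per call site):

	struct ttm_resource_cursor cursor;
	struct ttm_resource *res;

	spin_lock(&bdev->lru_lock);
	ttm_resource_cursor_init(&cursor, man);
	ttm_resource_manager_for_each_res(&cursor, res) {
		/* Inspect res, possibly evict its bo. */
	}
	ttm_resource_cursor_fini(&cursor);
	spin_unlock(&bdev->lru_lock);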



* Re: [PATCH v14 2/8] drm/ttm: Provide a shmem backup implementation
  2024-11-20 10:50           ` Christian König
@ 2024-11-20 11:07             ` Thomas Hellström
  0 siblings, 0 replies; 54+ messages in thread
From: Thomas Hellström @ 2024-11-20 11:07 UTC (permalink / raw)
  To: Christian König, intel-xe
  Cc: Somalapuram Amaranath, Matthew Brost, dri-devel, Paulo Zanoni,
	Simona Vetter

On Wed, 2024-11-20 at 11:50 +0100, Christian König wrote:
> Am 20.11.24 um 11:34 schrieb Thomas Hellström:
> > On Wed, 2024-11-20 at 10:24 +0100, Christian König wrote:
> > > Am 20.11.24 um 08:58 schrieb Thomas Hellström:
> > > > On Tue, 2024-11-19 at 14:40 +0100, Christian König wrote:
> > > > > [SNIP]
> > > > > > +
> > > > > > +/*
> > > > > > + * Casting from randomized struct file * to struct
> > > > > > ttm_backup
> > > > > > * is
> > > > > > fine since
> > > > > > + * struct ttm_backup is never defined nor dereferenced.
> > > > > > + */
> > > > > > +static struct file *ttm_backup_to_file(struct ttm_backup
> > > > > > *backup)
> > > > > Do I get it right that struct ttm_backup is never really
> > > > > defined?
> > > > Yes.
> > > > 
> > > > > What
> > > > > purpose does that have?
> > > > It's to make the struct ttm_backup opaque to the users of the
> > > > ttm_backup interface, so that the implementation doesn't have
> > > > to
> > > > worry
> > > > about the user making illegal assumptions about the
> > > > implementation.
> > > That is usually done with a typedef and one of the few cases
> > > where
> > > typedefs are actually advised to be used.
> > > 
> > Well wouldn't ttm_backup.h then have to include the declaration of
> > struct file plus a typedef that would probably raise many eyebrows
> > even
> > if it's ok to use it there?
> 
> No, what you do is something like this:
> 
> typedef struct ttm_backup *ttm_backup;
> 
> Then struct ttm_backup is either never defined or only inside your C 
> file but not the header.
> 
> > Having the header just declare a struct without providing a
> > definition
> > is the typical way of hiding the implementation and avoid includes,
> > no?
> > 
> > If you insist we can drop the struct ttm_backup * and just use
> > struct
> > file, but then again if we change the implementation to allow for
> > backuping to a file or similar that needs to be re-done, so as said
> > unless you insist I'd rather keep it as is.
> 
> Abstracting that is ok, I was just wondering about why you do it like
> this.
> 
> > 
> > > [SNIP]
> > > > > > + *
> > > > > > + * Context: If called from reclaim context, the caller
> > > > > > needs
> > > > > > to
> > > > > > + * assert that the shrinker gfp has __GFP_FS set, to avoid
> > > > > > + * deadlocking on lock_page(). If @writeback is set to
> > > > > > true
> > > > > > and
> > > > > > + * called from reclaim context, the caller also needs to
> > > > > > assert
> > > > > > + * that the shrinker gfp has __GFP_IO set, since without
> > > > > > it,
> > > > > > + * we're not allowed to start backup IO.
> > > > > > + *
> > > > > > + * Return: A handle on success. 0 on failure.
> > > > > > + * (This is following the swp_entry_t convention).
> > > > > > + *
> > > > > > + * Note: This function could be extended to back up a
> > > > > > folio
> > > > > > and
> > > > > > + * implementations would then split the folio internally
> > > > > > if
> > > > > > needed.
> > > > > > + * Drawback is that the caller would then have to keep
> > > > > > track
> > > > > > of
> > > > > > + * the folio size- and usage.
> > > > > > + */
> > > > > > +unsigned long
> > > > > > +ttm_backup_backup_page(struct ttm_backup *backup, struct
> > > > > > page
> > > > > > *page,
> > > > > > +		       bool writeback, pgoff_t idx, gfp_t
> > > > > > page_gfp,
> > > > > > +		       gfp_t alloc_gfp)
> > > > > > +{
> > > > > > +	struct file *filp = ttm_backup_to_file(backup);
> > > > > > +	struct address_space *mapping = filp->f_mapping;
> > > > > > +	unsigned long handle = 0;
> > > > > > +	struct folio *to_folio;
> > > > > > +	int ret;
> > > > > > +
> > > > > > +	to_folio = shmem_read_folio_gfp(mapping, idx,
> > > > > > alloc_gfp);
> > > > > > +	if (IS_ERR(to_folio))
> > > > > > +		return handle;
> > > Probably better to explicitly return 0 here.
> > OK,
> > 
> > > And BTW why are we using 0 as indication for an error? Couldn't
> > > we
> > > just
> > > use a long as return value and return a proper -errno here?
> > 0 is the swp_entry_t error value which is the convention also used
> > for
> > the handles, so rather than inventing something new It'd be good to
> > keep to something that would work even with handles aliased to
> > swp_entry_t if we'd need to resort to that at some point.
> 
> Uff, yeah but that is an implementation detail of the swap subsystem 
> caused by how we store the swapped out entries inside CPU PTEs.
> 
> I would strongly try to avoid that here. Was already wondering why we
> use long as return value and s64.

That is true. The background here is that the initial implementation
allowed for direct insertion into the swap cache, and then the handles
returned would be (unsigned long)swp_entry_t, and the interface was
kept to allow for such a change should it be necessary.

But yeah I guess a logical consequence of removing support for
alternative backup backends would be to drop explicit support for that.

So I can change that to s64 np.
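
Roughly something like this (just a sketch for illustration, not the
final patch):

/* Illustrative sketch of the s64 variant, not the final code. */
s64 ttm_backup_backup_page(struct ttm_backup *backup, struct page *page,
                           bool writeback, pgoff_t idx, gfp_t page_gfp,
                           gfp_t alloc_gfp)
{
        struct file *filp = ttm_backup_to_file(backup);
        struct folio *to_folio;

        to_folio = shmem_read_folio_gfp(filp->f_mapping, idx, alloc_gfp);
        if (IS_ERR(to_folio))
                return PTR_ERR(to_folio);

        /* ... copy, optional writeback, unlock and put as before ... */

        return (s64)ttm_backup_shmem_idx_to_handle(idx);
}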

/Thomas


> 
> Regards,
> Christian.
> 
> > 
> > > > > Just that I sleep better: This can never return a folio
> > > > > larger
> > > > > than a
> > > > > page, doesn't it?
> > > > The interface definitely allows for returning larger folios,
> > > > but
> > > > the
> > > > individual page in the folio is selected by
> > > > folio_file_page(folio,
> > > > idx).
> > > Ah, yeah completely missed that and was really wondering why that
> > > would
> > > work.
> > Thanks,
> > Thomas
> > 
> > > > /Thomas
> > > > 
> > > > 
> > > > > Apart from those background questions looks good to me.
> > > > > 
> > > > > Regards,
> > > > > Christian.
> > > > > 
> > > > > > +
> > > > > > +	folio_mark_accessed(to_folio);
> > > > > > +	folio_lock(to_folio);
> > > > > > +	folio_mark_dirty(to_folio);
> > > > > > +	copy_highpage(folio_file_page(to_folio, idx),
> > > > > > page);
> > > > > > +	handle = ttm_backup_shmem_idx_to_handle(idx);
> > > > > > +
> > > > > > +	if (writeback && !folio_mapped(to_folio) &&
> > > > > > +	    folio_clear_dirty_for_io(to_folio)) {
> > > > > > +		struct writeback_control wbc = {
> > > > > > +			.sync_mode = WB_SYNC_NONE,
> > > > > > +			.nr_to_write = SWAP_CLUSTER_MAX,
> > > > > > +			.range_start = 0,
> > > > > > +			.range_end = LLONG_MAX,
> > > > > > +			.for_reclaim = 1,
> > > > > > +		};
> > > > > > +		folio_set_reclaim(to_folio);
> > > > > > +		ret = mapping->a_ops-
> > > > > > > writepage(folio_file_page(to_folio, idx), &wbc);
> > > > > > +		if (!folio_test_writeback(to_folio))
> > > > > > +			folio_clear_reclaim(to_folio);
> > > > > > +		/* If writepage succeeds, it unlocks the
> > > > > > folio
> > > > > > */
> > > > > > +		if (ret)
> > > > > > +			folio_unlock(to_folio);
> > > The code ignores the error and potentially deserves an
> > > explanation
> > > for that.
> > > 
> > > Regards,
> > > Christian.
> > > 
> > > > > > +	} else {
> > > > > > +		folio_unlock(to_folio);
> > > > > > +	}
> > > > > > +
> > > > > > +	folio_put(to_folio);
> > > > > > +
> > > > > > +	return handle;
> > > > > > +}
> > > > > > +
> > > > > > +/**
> > > > > > + * ttm_backup_fini() - Free the struct backup resources
> > > > > > after
> > > > > > last
> > > > > > use.
> > > > > > + * @backup: Pointer to the struct backup whose resources
> > > > > > to
> > > > > > free.
> > > > > > + *
> > > > > > + * After a call to this function, it's illegal to use the
> > > > > > @backup
> > > > > > pointer.
> > > > > > + */
> > > > > > +void ttm_backup_fini(struct ttm_backup *backup)
> > > > > > +{
> > > > > > +	fput(ttm_backup_to_file(backup));
> > > > > > +}
> > > > > > +
> > > > > > +/**
> > > > > > + * ttm_backup_bytes_avail() - Report the approximate
> > > > > > number of
> > > > > > bytes of backup space
> > > > > > + * left for backup.
> > > > > > + *
> > > > > > + * This function is intended also for driver use to
> > > > > > indicate
> > > > > > whether a
> > > > > > + * backup attempt is meaningful.
> > > > > > + *
> > > > > > + * Return: An approximate size of backup space available.
> > > > > > + */
> > > > > > +u64 ttm_backup_bytes_avail(void)
> > > > > > +{
> > > > > > +	/*
> > > > > > +	 * The idea behind backing up to shmem is that
> > > > > > shmem
> > > > > > objects may
> > > > > > +	 * eventually be swapped out. So no point swapping
> > > > > > out
> > > > > > if
> > > > > > there
> > > > > > +	 * is no or low swap-space available. But the
> > > > > > accuracy
> > > > > > of
> > > > > > this
> > > > > > +	 * number also depends on shmem actually swapping
> > > > > > out
> > > > > > backed-up
> > > > > > +	 * shmem objects without too much buffering.
> > > > > > +	 */
> > > > > > +	return (u64)get_nr_swap_pages() << PAGE_SHIFT;
> > > > > > +}
> > > > > > +EXPORT_SYMBOL_GPL(ttm_backup_bytes_avail);
> > > > > > +
> > > > > > +/**
> > > > > > + * ttm_backup_shmem_create() - Create a shmem-based struct
> > > > > > backup.
> > > > > > + * @size: The maximum size (in bytes) to back up.
> > > > > > + *
> > > > > > + * Create a backup utilizing shmem objects.
> > > > > > + *
> > > > > > + * Return: A pointer to a struct ttm_backup on success,
> > > > > > + * an error pointer on error.
> > > > > > + */
> > > > > > +struct ttm_backup *ttm_backup_shmem_create(loff_t size)
> > > > > > +{
> > > > > > +	struct file *filp;
> > > > > > +
> > > > > > +	filp = shmem_file_setup("ttm shmem backup", size,
> > > > > > 0);
> > > > > > +
> > > > > > +	return ttm_file_to_backup(filp);
> > > > > > +}
> > > > > > diff --git a/include/drm/ttm/ttm_backup.h
> > > > > > b/include/drm/ttm/ttm_backup.h
> > > > > > new file mode 100644
> > > > > > index 000000000000..20609da7e281
> > > > > > --- /dev/null
> > > > > > +++ b/include/drm/ttm/ttm_backup.h
> > > > > > @@ -0,0 +1,74 @@
> > > > > > +/* SPDX-License-Identifier: MIT */
> > > > > > +/*
> > > > > > + * Copyright © 2024 Intel Corporation
> > > > > > + */
> > > > > > +
> > > > > > +#ifndef _TTM_BACKUP_H_
> > > > > > +#define _TTM_BACKUP_H_
> > > > > > +
> > > > > > +#include <linux/mm_types.h>
> > > > > > +#include <linux/shmem_fs.h>
> > > > > > +
> > > > > > +struct ttm_backup;
> > > > > > +
> > > > > > +/**
> > > > > > + * ttm_backup_handle_to_page_ptr() - Convert handle to
> > > > > > struct
> > > > > > page
> > > > > > pointer
> > > > > > + * @handle: The handle to convert.
> > > > > > + *
> > > > > > + * Converts an opaque handle received from the
> > > > > > + * struct ttm_backoup_ops::backup_page() function to an
> > > > > > (invalid)
> > > > > > + * struct page pointer suitable for a struct page array.
> > > > > > + *
> > > > > > + * Return: An (invalid) struct page pointer.
> > > > > > + */
> > > > > > +static inline struct page *
> > > > > > +ttm_backup_handle_to_page_ptr(unsigned long handle)
> > > > > > +{
> > > > > > +	return (struct page *)(handle << 1 | 1);
> > > > > > +}
> > > > > > +
> > > > > > +/**
> > > > > > + * ttm_backup_page_ptr_is_handle() - Whether a struct page
> > > > > > pointer
> > > > > > is a handle
> > > > > > + * @page: The struct page pointer to check.
> > > > > > + *
> > > > > > + * Return: true if the struct page pointer is a handld
> > > > > > returned
> > > > > > from
> > > > > > + * ttm_backup_handle_to_page_ptr(). False otherwise.
> > > > > > + */
> > > > > > +static inline bool ttm_backup_page_ptr_is_handle(const
> > > > > > struct
> > > > > > page
> > > > > > *page)
> > > > > > +{
> > > > > > +	return (unsigned long)page & 1;
> > > > > > +}
> > > > > > +
> > > > > > +/**
> > > > > > + * ttm_backup_page_ptr_to_handle() - Convert a struct page
> > > > > > pointer
> > > > > > to a handle
> > > > > > + * @page: The struct page pointer to convert
> > > > > > + *
> > > > > > + * Return: The handle that was previously used in
> > > > > > + * ttm_backup_handle_to_page_ptr() to obtain a struct page
> > > > > > pointer, suitable
> > > > > > + * for use as argument in the struct ttm_backup_ops drop()
> > > > > > or
> > > > > > + * copy_backed_up_page() functions.
> > > > > > + */
> > > > > > +static inline unsigned long
> > > > > > +ttm_backup_page_ptr_to_handle(const struct page *page)
> > > > > > +{
> > > > > > +	WARN_ON(!ttm_backup_page_ptr_is_handle(page));
> > > > > > +	return (unsigned long)page >> 1;
> > > > > > +}
> > > > > > +
> > > > > > +void ttm_backup_drop(struct ttm_backup *backup, pgoff_t
> > > > > > handle);
> > > > > > +
> > > > > > +int ttm_backup_copy_page(struct ttm_backup *backup, struct
> > > > > > page
> > > > > > *dst,
> > > > > > +			 pgoff_t handle, bool intr);
> > > > > > +
> > > > > > +unsigned long
> > > > > > +ttm_backup_backup_page(struct ttm_backup *backup, struct
> > > > > > page
> > > > > > *page,
> > > > > > +		       bool writeback, pgoff_t idx, gfp_t
> > > > > > page_gfp,
> > > > > > +		       gfp_t alloc_gfp);
> > > > > > +
> > > > > > +void ttm_backup_fini(struct ttm_backup *backup);
> > > > > > +
> > > > > > +u64 ttm_backup_bytes_avail(void);
> > > > > > +
> > > > > > +struct ttm_backup *ttm_backup_shmem_create(loff_t size);
> > > > > > +
> > > > > > +#endif
> 


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v14 2/8] drm/ttm: Provide a shmem backup implementation
  2024-11-20  9:24       ` Christian König
  2024-11-20 10:34         ` Thomas Hellström
@ 2024-11-20 11:20         ` Thomas Hellström
  1 sibling, 0 replies; 54+ messages in thread
From: Thomas Hellström @ 2024-11-20 11:20 UTC (permalink / raw)
  To: Christian König, intel-xe
  Cc: Somalapuram Amaranath, Matthew Brost, dri-devel, Paulo Zanoni,
	Simona Vetter

On Wed, 2024-11-20 at 10:24 +0100, Christian König wrote:
> 

[SNIP]

> > > Just that I sleep better: This can never return a folio larger
> > > than a
> > > page, doesn't it?
> > The interface definitely allows for returning larger folios, but
> > the
> > individual page in the folio is selected by folio_file_page(folio,
> > idx).
> 
> Ah, yeah completely missed that and was really wondering why that
> would 
> work.

One remaining slight concern, though, is that if we repeatedly call
->writepage() for *each* page in a folio, that might degrade
performance. Not sure whether the filesystem is supposed to coalesce
such requests if they happen, or whether things will start to crawl.

Right now we can't really tell, since shmem typically only allocates
single page folios (unless told otherwise, like i915 does), and in
addition shmem writepage() also splits any large folios. I've seen
patches floating around to remove that split, though.

/Thomas
 
> 
> > 
> > /Thomas
> > 
> > 
> > > Apart from those background questions looks good to me.
> > > 
> > > Regards,
> > > Christian.
> > > 
> > > > +
> > > > +	folio_mark_accessed(to_folio);
> > > > +	folio_lock(to_folio);
> > > > +	folio_mark_dirty(to_folio);
> > > > +	copy_highpage(folio_file_page(to_folio, idx), page);
> > > > +	handle = ttm_backup_shmem_idx_to_handle(idx);
> > > > +
> > > > +	if (writeback && !folio_mapped(to_folio) &&
> > > > +	    folio_clear_dirty_for_io(to_folio)) {
> > > > +		struct writeback_control wbc = {
> > > > +			.sync_mode = WB_SYNC_NONE,
> > > > +			.nr_to_write = SWAP_CLUSTER_MAX,
> > > > +			.range_start = 0,
> > > > +			.range_end = LLONG_MAX,
> > > > +			.for_reclaim = 1,
> > > > +		};
> > > > +		folio_set_reclaim(to_folio);
> > > > +		ret = mapping->a_ops-
> > > > > writepage(folio_file_page(to_folio, idx), &wbc);
> > > > +		if (!folio_test_writeback(to_folio))
> > > > +			folio_clear_reclaim(to_folio);
> > > > +		/* If writepage succeeds, it unlocks the folio
> > > > */
> > > > +		if (ret)
> > > > +			folio_unlock(to_folio);
> 
> The code ignores the error and potentially deserves an explanation
> for that.
> 
> Regards,
> Christian.
> 
> > > > +	} else {
> > > > +		folio_unlock(to_folio);
> > > > +	}
> > > > +
> > > > +	folio_put(to_folio);
> > > > +
> > > > +	return handle;
> > > > +}
> > > > +
> > > > +/**
> > > > + * ttm_backup_fini() - Free the struct backup resources after
> > > > last
> > > > use.
> > > > + * @backup: Pointer to the struct backup whose resources to
> > > > free.
> > > > + *
> > > > + * After a call to this function, it's illegal to use the
> > > > @backup
> > > > pointer.
> > > > + */
> > > > +void ttm_backup_fini(struct ttm_backup *backup)
> > > > +{
> > > > +	fput(ttm_backup_to_file(backup));
> > > > +}
> > > > +
> > > > +/**
> > > > + * ttm_backup_bytes_avail() - Report the approximate number of
> > > > bytes of backup space
> > > > + * left for backup.
> > > > + *
> > > > + * This function is intended also for driver use to indicate
> > > > whether a
> > > > + * backup attempt is meaningful.
> > > > + *
> > > > + * Return: An approximate size of backup space available.
> > > > + */
> > > > +u64 ttm_backup_bytes_avail(void)
> > > > +{
> > > > +	/*
> > > > +	 * The idea behind backing up to shmem is that shmem
> > > > objects may
> > > > +	 * eventually be swapped out. So no point swapping out
> > > > if
> > > > there
> > > > +	 * is no or low swap-space available. But the accuracy
> > > > of
> > > > this
> > > > +	 * number also depends on shmem actually swapping out
> > > > backed-up
> > > > +	 * shmem objects without too much buffering.
> > > > +	 */
> > > > +	return (u64)get_nr_swap_pages() << PAGE_SHIFT;
> > > > +}
> > > > +EXPORT_SYMBOL_GPL(ttm_backup_bytes_avail);
> > > > +
> > > > +/**
> > > > + * ttm_backup_shmem_create() - Create a shmem-based struct
> > > > backup.
> > > > + * @size: The maximum size (in bytes) to back up.
> > > > + *
> > > > + * Create a backup utilizing shmem objects.
> > > > + *
> > > > + * Return: A pointer to a struct ttm_backup on success,
> > > > + * an error pointer on error.
> > > > + */
> > > > +struct ttm_backup *ttm_backup_shmem_create(loff_t size)
> > > > +{
> > > > +	struct file *filp;
> > > > +
> > > > +	filp = shmem_file_setup("ttm shmem backup", size, 0);
> > > > +
> > > > +	return ttm_file_to_backup(filp);
> > > > +}
> > > > diff --git a/include/drm/ttm/ttm_backup.h
> > > > b/include/drm/ttm/ttm_backup.h
> > > > new file mode 100644
> > > > index 000000000000..20609da7e281
> > > > --- /dev/null
> > > > +++ b/include/drm/ttm/ttm_backup.h
> > > > @@ -0,0 +1,74 @@
> > > > +/* SPDX-License-Identifier: MIT */
> > > > +/*
> > > > + * Copyright © 2024 Intel Corporation
> > > > + */
> > > > +
> > > > +#ifndef _TTM_BACKUP_H_
> > > > +#define _TTM_BACKUP_H_
> > > > +
> > > > +#include <linux/mm_types.h>
> > > > +#include <linux/shmem_fs.h>
> > > > +
> > > > +struct ttm_backup;
> > > > +
> > > > +/**
> > > > + * ttm_backup_handle_to_page_ptr() - Convert handle to struct
> > > > page
> > > > pointer
> > > > + * @handle: The handle to convert.
> > > > + *
> > > > + * Converts an opaque handle received from the
> > > > + * struct ttm_backoup_ops::backup_page() function to an
> > > > (invalid)
> > > > + * struct page pointer suitable for a struct page array.
> > > > + *
> > > > + * Return: An (invalid) struct page pointer.
> > > > + */
> > > > +static inline struct page *
> > > > +ttm_backup_handle_to_page_ptr(unsigned long handle)
> > > > +{
> > > > +	return (struct page *)(handle << 1 | 1);
> > > > +}
> > > > +
> > > > +/**
> > > > + * ttm_backup_page_ptr_is_handle() - Whether a struct page
> > > > pointer
> > > > is a handle
> > > > + * @page: The struct page pointer to check.
> > > > + *
> > > > + * Return: true if the struct page pointer is a handld
> > > > returned
> > > > from
> > > > + * ttm_backup_handle_to_page_ptr(). False otherwise.
> > > > + */
> > > > +static inline bool ttm_backup_page_ptr_is_handle(const struct
> > > > page
> > > > *page)
> > > > +{
> > > > +	return (unsigned long)page & 1;
> > > > +}
> > > > +
> > > > +/**
> > > > + * ttm_backup_page_ptr_to_handle() - Convert a struct page
> > > > pointer
> > > > to a handle
> > > > + * @page: The struct page pointer to convert
> > > > + *
> > > > + * Return: The handle that was previously used in
> > > > + * ttm_backup_handle_to_page_ptr() to obtain a struct page
> > > > pointer, suitable
> > > > + * for use as argument in the struct ttm_backup_ops drop() or
> > > > + * copy_backed_up_page() functions.
> > > > + */
> > > > +static inline unsigned long
> > > > +ttm_backup_page_ptr_to_handle(const struct page *page)
> > > > +{
> > > > +	WARN_ON(!ttm_backup_page_ptr_is_handle(page));
> > > > +	return (unsigned long)page >> 1;
> > > > +}
> > > > +
> > > > +void ttm_backup_drop(struct ttm_backup *backup, pgoff_t
> > > > handle);
> > > > +
> > > > +int ttm_backup_copy_page(struct ttm_backup *backup, struct
> > > > page
> > > > *dst,
> > > > +			 pgoff_t handle, bool intr);
> > > > +
> > > > +unsigned long
> > > > +ttm_backup_backup_page(struct ttm_backup *backup, struct page
> > > > *page,
> > > > +		       bool writeback, pgoff_t idx, gfp_t
> > > > page_gfp,
> > > > +		       gfp_t alloc_gfp);
> > > > +
> > > > +void ttm_backup_fini(struct ttm_backup *backup);
> > > > +
> > > > +u64 ttm_backup_bytes_avail(void);
> > > > +
> > > > +struct ttm_backup *ttm_backup_shmem_create(loff_t size);
> > > > +
> > > > +#endif


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v14 1/8] drm/ttm: Balance ttm_resource_cursor_init() and ttm_resource_cursor_fini()
  2024-11-20 10:51   ` Christian König
@ 2024-11-21 15:54     ` Thomas Hellström
  0 siblings, 0 replies; 54+ messages in thread
From: Thomas Hellström @ 2024-11-21 15:54 UTC (permalink / raw)
  To: Christian König, intel-xe
  Cc: Matthew Brost, Somalapuram Amaranath, Paulo Zanoni, Simona Vetter,
	dri-devel

On Wed, 2024-11-20 at 11:51 +0100, Christian König wrote:
> Am 15.11.24 um 16:01 schrieb Thomas Hellström:
> > Make the interface more symmetric by providing and using a
> > ttm_resource_cursor_init().
> > 
> > v10:
> > - Fix a stray newline (Matthew Brost)
> > - Update kerneldoc (Matthew Brost)
> > 
> > Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> > Reviewed-by: Matthew Brost <matthew.brost@intel.com>
> > Reviewed-by: Christian König <christian.koenig@amd.com>
> 
> Did you plan to merge this through drm-misc-next or the XE branch?
> 
> If through drm-misc-next then I would go ahead and push this patch
> since 
> that is really a stand alone cleanup.

I was planning to merge it all through drm-xe-next, so I'll hold off
merging that patch.

Thanks,
Thomas

> 
> Regards,
> Christian.
> 
> > ---
> >   drivers/gpu/drm/ttm/ttm_bo.c       |  3 ++-
> >   drivers/gpu/drm/ttm/ttm_bo_util.c  |  3 ++-
> >   drivers/gpu/drm/ttm/ttm_resource.c | 35 ++++++++++++++++++++-----
> > -----
> >   include/drm/ttm/ttm_resource.h     | 11 +++++-----
> >   4 files changed, 34 insertions(+), 18 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/ttm/ttm_bo.c
> > b/drivers/gpu/drm/ttm/ttm_bo.c
> > index 48c5365efca1..06d6a452c4f4 100644
> > --- a/drivers/gpu/drm/ttm/ttm_bo.c
> > +++ b/drivers/gpu/drm/ttm/ttm_bo.c
> > @@ -450,7 +450,8 @@ int ttm_bo_evict_first(struct ttm_device *bdev,
> > struct ttm_resource_manager *man
> >   	int ret = 0;
> >   
> >   	spin_lock(&bdev->lru_lock);
> > -	res = ttm_resource_manager_first(man, &cursor);
> > +	ttm_resource_cursor_init(&cursor, man);
> > +	res = ttm_resource_manager_first(&cursor);
> >   	ttm_resource_cursor_fini(&cursor);
> >   	if (!res) {
> >   		ret = -ENOENT;
> > diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c
> > b/drivers/gpu/drm/ttm/ttm_bo_util.c
> > index d939925efa81..917096bd5f68 100644
> > --- a/drivers/gpu/drm/ttm/ttm_bo_util.c
> > +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
> > @@ -865,7 +865,8 @@ s64 ttm_lru_walk_for_evict(struct ttm_lru_walk
> > *walk, struct ttm_device *bdev,
> >   	s64 lret;
> >   
> >   	spin_lock(&bdev->lru_lock);
> > -	ttm_resource_manager_for_each_res(man, &cursor, res) {
> > +	ttm_resource_cursor_init(&cursor, man);
> > +	ttm_resource_manager_for_each_res(&cursor, res) {
> >   		struct ttm_buffer_object *bo = res->bo;
> >   		bool bo_needs_unlock = false;
> >   		bool bo_locked = false;
> > diff --git a/drivers/gpu/drm/ttm/ttm_resource.c
> > b/drivers/gpu/drm/ttm/ttm_resource.c
> > index a87665eb28a6..e19360cc7930 100644
> > --- a/drivers/gpu/drm/ttm/ttm_resource.c
> > +++ b/drivers/gpu/drm/ttm/ttm_resource.c
> > @@ -81,6 +81,23 @@ static void ttm_bulk_move_drop_cursors(struct
> > ttm_lru_bulk_move *bulk)
> >   		ttm_resource_cursor_clear_bulk(cursor);
> >   }
> >   
> > +/**
> > + * ttm_resource_cursor_init() - Initialize a struct
> > ttm_resource_cursor
> > + * @cursor: The cursor to initialize.
> > + * @man: The resource manager.
> > + *
> > + * Initialize the cursor before using it for iteration.
> > + */
> > +void ttm_resource_cursor_init(struct ttm_resource_cursor *cursor,
> > +			      struct ttm_resource_manager *man)
> > +{
> > +	cursor->priority = 0;
> > +	cursor->man = man;
> > +	ttm_lru_item_init(&cursor->hitch, TTM_LRU_HITCH);
> > +	INIT_LIST_HEAD(&cursor->bulk_link);
> > +	INIT_LIST_HEAD(&cursor->hitch.link);
> > +}
> > +
> >   /**
> >    * ttm_resource_cursor_fini() - Finalize the LRU list cursor
> > usage
> >    * @cursor: The struct ttm_resource_cursor to finalize.
> > @@ -593,7 +610,6 @@ ttm_resource_cursor_check_bulk(struct
> > ttm_resource_cursor *cursor,
> >   /**
> >    * ttm_resource_manager_first() - Start iterating over the
> > resources
> >    * of a resource manager
> > - * @man: resource manager to iterate over
> >    * @cursor: cursor to record the position
> >    *
> >    * Initializes the cursor and starts iterating. When done
> > iterating,
> > @@ -602,17 +618,16 @@ ttm_resource_cursor_check_bulk(struct
> > ttm_resource_cursor *cursor,
> >    * Return: The first resource from the resource manager.
> >    */
> >   struct ttm_resource *
> > -ttm_resource_manager_first(struct ttm_resource_manager *man,
> > -			   struct ttm_resource_cursor *cursor)
> > +ttm_resource_manager_first(struct ttm_resource_cursor *cursor)
> >   {
> > -	lockdep_assert_held(&man->bdev->lru_lock);
> > +	struct ttm_resource_manager *man = cursor->man;
> >   
> > -	cursor->priority = 0;
> > -	cursor->man = man;
> > -	ttm_lru_item_init(&cursor->hitch, TTM_LRU_HITCH);
> > -	INIT_LIST_HEAD(&cursor->bulk_link);
> > -	list_add(&cursor->hitch.link, &man->lru[cursor-
> > >priority]);
> > +	if (WARN_ON_ONCE(!man))
> > +		return NULL;
> > +
> > +	lockdep_assert_held(&man->bdev->lru_lock);
> >   
> > +	list_move(&cursor->hitch.link, &man->lru[cursor-
> > >priority]);
> >   	return ttm_resource_manager_next(cursor);
> >   }
> >   
> > @@ -648,8 +663,6 @@ ttm_resource_manager_next(struct
> > ttm_resource_cursor *cursor)
> >   		ttm_resource_cursor_clear_bulk(cursor);
> >   	}
> >   
> > -	ttm_resource_cursor_fini(cursor);
> > -
> >   	return NULL;
> >   }
> >   
> > diff --git a/include/drm/ttm/ttm_resource.h
> > b/include/drm/ttm/ttm_resource.h
> > index be034be56ba1..e1f3b95d73b6 100644
> > --- a/include/drm/ttm/ttm_resource.h
> > +++ b/include/drm/ttm/ttm_resource.h
> > @@ -325,6 +325,9 @@ struct ttm_resource_cursor {
> >   	unsigned int priority;
> >   };
> >   
> > +void ttm_resource_cursor_init(struct ttm_resource_cursor *cursor,
> > +			      struct ttm_resource_manager *man);
> > +
> >   void ttm_resource_cursor_fini(struct ttm_resource_cursor
> > *cursor);
> >   
> >   /**
> > @@ -456,8 +459,7 @@ void ttm_resource_manager_debug(struct
> > ttm_resource_manager *man,
> >   				struct drm_printer *p);
> >   
> >   struct ttm_resource *
> > -ttm_resource_manager_first(struct ttm_resource_manager *man,
> > -			   struct ttm_resource_cursor *cursor);
> > +ttm_resource_manager_first(struct ttm_resource_cursor *cursor);
> >   struct ttm_resource *
> >   ttm_resource_manager_next(struct ttm_resource_cursor *cursor);
> >   
> > @@ -466,14 +468,13 @@ ttm_lru_first_res_or_null(struct list_head
> > *head);
> >   
> >   /**
> >    * ttm_resource_manager_for_each_res - iterate over all resources
> > - * @man: the resource manager
> >    * @cursor: struct ttm_resource_cursor for the current position
> >    * @res: the current resource
> >    *
> >    * Iterate over all the evictable resources in a resource
> > manager.
> >    */
> > -#define ttm_resource_manager_for_each_res(man, cursor,
> > res)		\
> > -	for (res = ttm_resource_manager_first(man, cursor);
> > res;	\
> > +#define ttm_resource_manager_for_each_res(cursor, res)	\
> > +	for (res = ttm_resource_manager_first(cursor); res;	\
> >   	     res = ttm_resource_manager_next(cursor))
> >   
> >   struct ttm_kmap_iter *
> 


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v14 3/8] drm/ttm/pool: Provide a helper to shrink pages
  2024-11-15 15:01 ` [PATCH v14 3/8] drm/ttm/pool: Provide a helper to shrink pages Thomas Hellström
@ 2024-12-03 13:12   ` Christian König
  2024-12-03 13:42     ` Thomas Hellström
  0 siblings, 1 reply; 54+ messages in thread
From: Christian König @ 2024-12-03 13:12 UTC (permalink / raw)
  To: Thomas Hellström, intel-xe
  Cc: Somalapuram Amaranath, Matthew Brost, dri-devel, Paulo Zanoni,
	Simona Vetter

Am 15.11.24 um 16:01 schrieb Thomas Hellström:
> Provide a helper to shrink ttm_tt page-vectors on a per-page
> basis. A ttm_backup backend could then in theory get away with
> allocating a single temporary page for each struct ttm_tt.
>
> This is accomplished by splitting larger pages before trying to
> back them up.
>
> In the future we could allow ttm_backup to handle backing up
> large pages as well, but currently there's no benefit in
> doing that, since the shmem backup backend would have to
> split those anyway to avoid allocating too much temporary
> memory, and if the backend instead inserts pages into the
> swap-cache, those are split on reclaim by the core.
>
> Due to potential backup- and recover errors, allow partially swapped
> out struct ttm_tt's, although mark them as swapped out stopping them
> from being swapped out a second time. More details in the ttm_pool.c
> DOC section.
>
> v2:
> - A couple of cleanups and error fixes in ttm_pool_back_up_tt.
> - s/back_up/backup/
> - Add a writeback parameter to the exported interface.
> v8:
> - Use a struct for flags for readability (Matt Brost)
> - Address misc other review comments (Matt Brost)
> v9:
> - Update the kerneldoc for the ttm_tt::backup field.
> v10:
> - Rebase.
> v13:
> - Rebase on ttm_backup interface change. Update kerneldoc.
> - Rebase and adjust ttm_tt_is_swapped().
>
> Cc: Christian König <christian.koenig@amd.com>
> Cc: Somalapuram Amaranath <Amaranath.Somalapuram@amd.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: <dri-devel@lists.freedesktop.org>
> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Reviewed-by: Matthew Brost <matthew.brost@intel.com>
> ---
>   drivers/gpu/drm/ttm/ttm_pool.c | 396 +++++++++++++++++++++++++++++++--
>   drivers/gpu/drm/ttm/ttm_tt.c   |  37 +++
>   include/drm/ttm/ttm_pool.h     |   6 +
>   include/drm/ttm/ttm_tt.h       |  32 ++-
>   4 files changed, 457 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
> index 8504dbe19c1a..f58864439edb 100644
> --- a/drivers/gpu/drm/ttm/ttm_pool.c
> +++ b/drivers/gpu/drm/ttm/ttm_pool.c
> @@ -41,6 +41,7 @@
>   #include <asm/set_memory.h>
>   #endif
>   
> +#include <drm/ttm/ttm_backup.h>
>   #include <drm/ttm/ttm_pool.h>
>   #include <drm/ttm/ttm_tt.h>
>   #include <drm/ttm/ttm_bo.h>
> @@ -58,6 +59,32 @@ struct ttm_pool_dma {
>   	unsigned long vaddr;
>   };
>   
> +/**
> + * struct ttm_pool_tt_restore - State representing restore from backup
> + * @alloced_pages: Total number of already allocated pages for the ttm_tt.
> + * @restored_pages: Number of (sub) pages restored from swap for this
> + *		     chunk of 1 << @order pages.
> + * @first_page: The ttm page ptr representing for @old_pages[0].
> + * @caching_divide: Page pointer where subsequent pages are cached.
> + * @old_pages: Backup copy of page pointers that were replaced by the new
> + *	       page allocation.
> + * @pool: The pool used for page allocation while restoring.
> + * @order: The order of the last page allocated while restoring.
> + *
> + * Recovery from backup might fail when we've recovered less than the
> + * full ttm_tt. In order not to loose any data (yet), keep information
> + * around that allows us to restart a failed ttm backup recovery.
> + */
> +struct ttm_pool_tt_restore {
> +	pgoff_t alloced_pages;
> +	pgoff_t restored_pages;
> +	struct page **first_page;
> +	struct page **caching_divide;
> +	struct ttm_pool *pool;
> +	unsigned int order;
> +	struct page *old_pages[];
> +};
> +
>   static unsigned long page_pool_size;
>   
>   MODULE_PARM_DESC(page_pool_size, "Number of pages in the WC/UC/DMA pool");
> @@ -354,11 +381,105 @@ static unsigned int ttm_pool_page_order(struct ttm_pool *pool, struct page *p)
>   	return p->private;
>   }
>   
> +/*
> + * To be able to insert single pages into backup directly,
> + * we need to split multi-order page allocations and make them look
> + * like single-page allocations.
> + */
> +static void ttm_pool_split_for_swap(struct ttm_pool *pool, struct page *p)
> +{
> +	unsigned int order = ttm_pool_page_order(pool, p);
> +	pgoff_t nr;
> +
> +	if (!order)
> +		return;
> +
> +	split_page(p, order);

What exactly should split_page() do here and why is that necessary?

IIRC that function just updates the reference count and things like
page owner tracking and memcg accounting, which should both be
completely irrelevant here.

Or do you just do that so that you can free each page individually?

> +	nr = 1UL << order;
> +	while (nr--)
> +		(p++)->private = 0;
> +}
> +
> +/**
> + * DOC: Partial backup and restoration of a struct ttm_tt.
> + *
> + * Swapout using ttm_backup_backup_page() and swapin using
> + * ttm_backup_copy_page() may fail.
> + * The former most likely due to lack of swap-space or memory, the latter due
> + * to lack of memory or because of signal interruption during waits.
> + *
> + * Backup failure is easily handled by using a ttm_tt pages vector that holds
> + * both swap entries and page pointers. This has to be taken into account when
> + * restoring such a ttm_tt from backup, and when freeing it while backed up.
> + * When restoring, for simplicity, new pages are actually allocated from the
> + * pool and the contents of any old pages are copied in and then the old pages
> + * are released.
> + *
> + * For restoration failures, the struct ttm_pool_tt_restore holds sufficient state
> + * to be able to resume an interrupted restore, and that structure is freed once
> + * the restoration is complete. If the struct ttm_tt is destroyed while there
> + * is a valid struct ttm_pool_tt_restore attached, that is also properly taken
> + * care of.
> + */
> +
> +static bool ttm_pool_restore_valid(const struct ttm_pool_tt_restore *restore)
> +{
> +	return restore && restore->restored_pages < (1 << restore->order);
> +}
> +
> +static int ttm_pool_restore_tt(struct ttm_pool_tt_restore *restore,
> +			       struct ttm_backup *backup,
> +			       struct ttm_operation_ctx *ctx)
> +{
> +	unsigned int i, nr = 1 << restore->order;
> +	int ret = 0;
> +
> +	if (!ttm_pool_restore_valid(restore))
> +		return 0;
> +
> +	for (i = restore->restored_pages; i < nr; ++i) {
> +		struct page *p = restore->old_pages[i];
> +
> +		if (ttm_backup_page_ptr_is_handle(p)) {
> +			unsigned long handle = ttm_backup_page_ptr_to_handle(p);
> +
> +			if (handle == 0)
> +				continue;
> +
> +			ret = ttm_backup_copy_page
> +				(backup, restore->first_page[i],
> +				 handle, ctx->interruptible);

That coding style looks really odd; I didn't even notice at first
that it is a function call.

Maybe put everything under the if into a separate function.
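
Something like this, perhaps (untested, and the helper name is just
made up):

/* Untested sketch; name and exact split are illustrative only. */
static int ttm_pool_restore_copy(struct ttm_backup *backup,
                                 unsigned long handle, struct page *dst,
                                 struct ttm_operation_ctx *ctx)
{
        /* Copy the backed-up content into dst, then drop the backup copy. */
        int ret = ttm_backup_copy_page(backup, dst, handle,
                                       ctx->interruptible);
        if (ret)
                return ret;

        ttm_backup_drop(backup, handle);
        return 0;
}

The loop body would then just be "ret = ttm_pool_restore_copy(backup,
handle, restore->first_page[i], ctx); if (ret) break;".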

> +			if (ret)
> +				break;
> +
> +			ttm_backup_drop(backup, handle);
> +		} else if (p) {
> +			/*
> +			 * We could probably avoid splitting the old page
> +			 * using clever logic, but ATM we don't care, as
> +			 * we prioritize releasing memory ASAP. Note that
> +			 * here, the old retained page is always write-back
> +			 * cached.
> +			 */
> +			ttm_pool_split_for_swap(restore->pool, p);
> +			copy_highpage(restore->first_page[i], p);
> +			__free_pages(p, 0);
> +		}
> +
> +		restore->restored_pages++;
> +		restore->old_pages[i] = NULL;
> +		cond_resched();

There is a push to remove cond_resched(), see here: 
https://patchwork.kernel.org/project/linux-mm/patch/20231107230822.371443-30-ankur.a.arora@oracle.com/

Not sure how far that removal has gotten, but IIRC we should not add
any new users of it.

> +	}
> +
> +	return ret;
> +}
> +
>   /* Called when we got a page, either from a pool or newly allocated */
>   static int ttm_pool_page_allocated(struct ttm_pool *pool, unsigned int order,
>   				   struct page *p, dma_addr_t **dma_addr,
>   				   unsigned long *num_pages,
> -				   struct page ***pages)
> +				   struct page ***pages,
> +				   struct ttm_pool_tt_restore *restore)
>   {
>   	unsigned int i;
>   	int r;
> @@ -369,6 +490,16 @@ static int ttm_pool_page_allocated(struct ttm_pool *pool, unsigned int order,
>   			return r;
>   	}
>   
> +	if (restore) {
> +		memcpy(restore->old_pages, *pages,
> +		       (1 << order) * sizeof(*restore->old_pages));
> +		memset(*pages, 0, (1 << order) * sizeof(**pages));
> +		restore->order = order;
> +		restore->restored_pages = 0;
> +		restore->first_page = *pages;
> +		restore->alloced_pages += 1UL << order;
> +	}
> +
>   	*num_pages -= 1 << order;
>   	for (i = 1 << order; i; --i, ++(*pages), ++p)
>   		**pages = p;
> @@ -394,22 +525,39 @@ static void ttm_pool_free_range(struct ttm_pool *pool, struct ttm_tt *tt,
>   				pgoff_t start_page, pgoff_t end_page)
>   {
>   	struct page **pages = &tt->pages[start_page];
> +	struct ttm_backup *backup = tt->backup;
>   	unsigned int order;
>   	pgoff_t i, nr;
>   
>   	for (i = start_page; i < end_page; i += nr, pages += nr) {
>   		struct ttm_pool_type *pt = NULL;
> +		struct page *p = *pages;
> +
> +		if (ttm_backup_page_ptr_is_handle(p)) {
> +			unsigned long handle = ttm_backup_page_ptr_to_handle(p);
> +
> +			nr = 1;
> +			if (handle != 0)
> +				ttm_backup_drop(backup, handle);
> +			continue;
> +		}
> +
> +		if (pool) {
> +			order = ttm_pool_page_order(pool, p);
> +			nr = (1UL << order);
> +			if (tt->dma_address)
> +				ttm_pool_unmap(pool, tt->dma_address[i], nr);
>   
> -		order = ttm_pool_page_order(pool, *pages);
> -		nr = (1UL << order);
> -		if (tt->dma_address)
> -			ttm_pool_unmap(pool, tt->dma_address[i], nr);
> +			pt = ttm_pool_select_type(pool, caching, order);
> +		} else {
> +			order = p->private;
> +			nr = (1UL << order);
> +		}
>   
> -		pt = ttm_pool_select_type(pool, caching, order);
>   		if (pt)
> -			ttm_pool_type_give(pt, *pages);
> +			ttm_pool_type_give(pt, p);
>   		else
> -			ttm_pool_free_page(pool, caching, order, *pages);
> +			ttm_pool_free_page(pool, caching, order, p);
>   	}
>   }
>   
> @@ -453,9 +601,36 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
>   	else
>   		gfp_flags |= GFP_HIGHUSER;
>   
> -	for (order = min_t(unsigned int, MAX_PAGE_ORDER, __fls(num_pages));
> -	     num_pages;
> -	     order = min_t(unsigned int, order, __fls(num_pages))) {
> +	order = min_t(unsigned int, MAX_PAGE_ORDER, __fls(num_pages));
> +
> +	if (tt->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP) {
> +		if (!tt->restore) {
> +			gfp_t gfp = GFP_KERNEL | __GFP_NOWARN;
> +
> +			if (ctx->gfp_retry_mayfail)
> +				gfp |= __GFP_RETRY_MAYFAIL;
> +
> +			tt->restore =
> +				kvzalloc(struct_size(tt->restore, old_pages,
> +						     (size_t)1 << order), gfp);
> +			if (!tt->restore)
> +				return -ENOMEM;
> +		} else if (ttm_pool_restore_valid(tt->restore)) {
> +			struct ttm_pool_tt_restore *restore = tt->restore;
> +
> +			num_pages -= restore->alloced_pages;
> +			order = min_t(unsigned int, order, __fls(num_pages));
> +			pages += restore->alloced_pages;
> +			r = ttm_pool_restore_tt(restore, tt->backup, ctx);
> +			if (r)
> +				return r;
> +			caching = restore->caching_divide;
> +		}
> +
> +		tt->restore->pool = pool;
> +	}

Huh? Why is that part of the allocation function now?

At a bare minimum I would expect this to be a new function.
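
Just to illustrate what I mean (untested, name invented), the
backed-up setup could live in its own helper, with the resume path
(ttm_pool_restore_valid() / ttm_pool_restore_tt()) moving along with
it:

/* Untested sketch: pull the restore setup out of ttm_pool_alloc(). */
static int ttm_pool_restore_prepare(struct ttm_pool *pool, struct ttm_tt *tt,
                                    const struct ttm_operation_ctx *ctx,
                                    unsigned int order)
{
        gfp_t gfp = GFP_KERNEL | __GFP_NOWARN;

        if (tt->restore)
                return 0;

        if (ctx->gfp_retry_mayfail)
                gfp |= __GFP_RETRY_MAYFAIL;

        tt->restore = kvzalloc(struct_size(tt->restore, old_pages,
                                           (size_t)1 << order), gfp);
        if (!tt->restore)
                return -ENOMEM;

        tt->restore->pool = pool;
        return 0;
}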

Regards,
Christian.

> +
> +	for (; num_pages; order = min_t(unsigned int, order, __fls(num_pages))) {
>   		struct ttm_pool_type *pt;
>   
>   		page_caching = tt->caching;
> @@ -472,11 +647,19 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
>   				r = ttm_pool_page_allocated(pool, order, p,
>   							    &dma_addr,
>   							    &num_pages,
> -							    &pages);
> +							    &pages,
> +							    tt->restore);
>   				if (r)
>   					goto error_free_page;
>   
>   				caching = pages;
> +				if (ttm_pool_restore_valid(tt->restore)) {
> +					r = ttm_pool_restore_tt(tt->restore, tt->backup,
> +								ctx);
> +					if (r)
> +						goto error_free_all;
> +				}
> +
>   				if (num_pages < (1 << order))
>   					break;
>   
> @@ -496,9 +679,17 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
>   				caching = pages;
>   			}
>   			r = ttm_pool_page_allocated(pool, order, p, &dma_addr,
> -						    &num_pages, &pages);
> +						    &num_pages, &pages,
> +						    tt->restore);
>   			if (r)
>   				goto error_free_page;
> +
> +			if (ttm_pool_restore_valid(tt->restore)) {
> +				r = ttm_pool_restore_tt(tt->restore, tt->backup, ctx);
> +				if (r)
> +					goto error_free_all;
> +			}
> +
>   			if (PageHighMem(p))
>   				caching = pages;
>   		}
> @@ -517,12 +708,26 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
>   	if (r)
>   		goto error_free_all;
>   
> +	if (tt->restore) {
> +		kvfree(tt->restore);
> +		tt->restore = NULL;
> +	}
> +
> +	if (tt->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP)
> +		tt->page_flags &= ~(TTM_TT_FLAG_PRIV_BACKED_UP |
> +				    TTM_TT_FLAG_SWAPPED);
> +
>   	return 0;
>   
>   error_free_page:
>   	ttm_pool_free_page(pool, page_caching, order, p);
>   
>   error_free_all:
> +	if (tt->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP) {
> +		tt->restore->caching_divide = caching;
> +		return r;
> +	}
> +
>   	num_pages = tt->num_pages - num_pages;
>   	caching_divide = caching - tt->pages;
>   	ttm_pool_free_range(pool, tt, tt->caching, 0, caching_divide);
> @@ -549,6 +754,171 @@ void ttm_pool_free(struct ttm_pool *pool, struct ttm_tt *tt)
>   }
>   EXPORT_SYMBOL(ttm_pool_free);
>   
> +/**
> + * ttm_pool_release_backed_up() - Release content of a swapped-out struct ttm_tt
> + * @tt: The struct ttm_tt.
> + *
> + * Release handles with associated content or any remaining pages of
> + * a backed-up struct ttm_tt.
> + */
> +void ttm_pool_release_backed_up(struct ttm_tt *tt)
> +{
> +	struct ttm_backup *backup = tt->backup;
> +	struct ttm_pool_tt_restore *restore;
> +	pgoff_t i, start_page = 0;
> +	unsigned long handle;
> +
> +	if (!(tt->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP))
> +		return;
> +
> +	restore = tt->restore;
> +
> +	if (ttm_pool_restore_valid(restore)) {
> +		pgoff_t nr = 1UL << restore->order;
> +
> +		for (i = restore->restored_pages; i < nr; ++i) {
> +			struct page *p = restore->old_pages[i];
> +
> +			if (ttm_backup_page_ptr_is_handle(p)) {
> +				handle = ttm_backup_page_ptr_to_handle(p);
> +				if (handle == 0)
> +					continue;
> +
> +				ttm_backup_drop(backup, handle);
> +			} else if (p) {
> +				ttm_pool_split_for_swap(restore->pool, p);
> +				__free_pages(p, 0);
> +			}
> +		}
> +	}
> +
> +	if (restore) {
> +		pgoff_t mid = restore->caching_divide - tt->pages;
> +
> +		start_page = restore->alloced_pages;
> +		/* Pages that might be dma-mapped and non-cached */
> +		ttm_pool_free_range(restore->pool, tt, tt->caching,
> +				    0, mid);
> +		/* Pages that might be dma-mapped but cached */
> +		ttm_pool_free_range(restore->pool, tt, ttm_cached,
> +				    mid, restore->alloced_pages);
> +	}
> +
> +	/* Shrunken pages. Cached and not dma-mapped. */
> +	ttm_pool_free_range(NULL, tt, ttm_cached, start_page, tt->num_pages);
> +
> +	if (restore) {
> +		kvfree(restore);
> +		tt->restore = NULL;
> +	}
> +
> +	tt->page_flags &= ~(TTM_TT_FLAG_PRIV_BACKED_UP | TTM_TT_FLAG_SWAPPED);
> +}
> +
> +/**
> + * ttm_pool_backup_tt() - Back up or purge a struct ttm_tt
> + * @pool: The pool used when allocating the struct ttm_tt.
> + * @ttm: The struct ttm_tt.
> + * @flags: Flags to govern the backup behaviour.
> + *
> + * Back up or purge a struct ttm_tt. If @purge is true, then
> + * all pages will be freed directly to the system rather than to the pool
> + * they were allocated from, making the function behave similarly to
> + * ttm_pool_free(). If @purge is false the pages will be backed up instead,
> + * exchanged for handles.
> + * A subsequent call to ttm_pool_alloc() will then read back the content and
> + * a subsequent call to ttm_pool_release_shrunken() will drop it.
> + * If backup of a page fails for whatever reason, @ttm will still be
> + * partially backed up, retaining those pages for which backup fails.
> + *
> + * Return: Number of pages actually backed up or freed, or negative
> + * error code on error.
> + */
> +long ttm_pool_backup_tt(struct ttm_pool *pool, struct ttm_tt *ttm,
> +			const struct ttm_backup_flags *flags)
> +{
> +	struct ttm_backup *backup = ttm->backup;
> +	struct page *page;
> +	unsigned long handle;
> +	gfp_t alloc_gfp;
> +	gfp_t gfp;
> +	int ret = 0;
> +	pgoff_t shrunken = 0;
> +	pgoff_t i, num_pages;
> +
> +	if ((!ttm_backup_bytes_avail() && !flags->purge) ||
> +	    pool->use_dma_alloc ||
> +	    (ttm->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP))
> +		return -EBUSY;
> +
> +#ifdef CONFIG_X86
> +	/* Anything returned to the system needs to be cached. */
> +	if (ttm->caching != ttm_cached)
> +		set_pages_array_wb(ttm->pages, ttm->num_pages);
> +#endif
> +
> +	if (ttm->dma_address || flags->purge) {
> +		for (i = 0; i < ttm->num_pages; i += num_pages) {
> +			unsigned int order;
> +
> +			page = ttm->pages[i];
> +			if (unlikely(!page)) {
> +				num_pages = 1;
> +				continue;
> +			}
> +
> +			order = ttm_pool_page_order(pool, page);
> +			num_pages = 1UL << order;
> +			if (ttm->dma_address)
> +				ttm_pool_unmap(pool, ttm->dma_address[i],
> +					       num_pages);
> +			if (flags->purge) {
> +				shrunken += num_pages;
> +				page->private = 0;
> +				__free_pages(page, order);
> +				memset(ttm->pages + i, 0,
> +				       num_pages * sizeof(*ttm->pages));
> +			}
> +		}
> +	}
> +
> +	if (flags->purge)
> +		return shrunken;
> +
> +	if (pool->use_dma32)
> +		gfp = GFP_DMA32;
> +	else
> +		gfp = GFP_HIGHUSER;
> +
> +	alloc_gfp = GFP_KERNEL | __GFP_HIGH | __GFP_NOWARN | __GFP_RETRY_MAYFAIL;
> +
> +	for (i = 0; i < ttm->num_pages; ++i) {
> +		page = ttm->pages[i];
> +		if (unlikely(!page))
> +			continue;
> +
> +		ttm_pool_split_for_swap(pool, page);
> +
> +		handle = ttm_backup_backup_page(backup, page, flags->writeback, i,
> +						gfp, alloc_gfp);
> +		if (handle) {
> +			ttm->pages[i] = ttm_backup_handle_to_page_ptr(handle);
> +			put_page(page);
> +			shrunken++;
> +		} else {
> +			/* We allow partially shrunken tts */
> +			ret = -ENOMEM;
> +			break;
> +		}
> +	}
> +
> +	if (shrunken)
> +		ttm->page_flags |= (TTM_TT_FLAG_PRIV_BACKED_UP |
> +				    TTM_TT_FLAG_SWAPPED);
> +
> +	return shrunken ? shrunken : ret;
> +}
> +
>   /**
>    * ttm_pool_init - Initialize a pool
>    *
> diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c
> index 3baf215eca23..dd4eabe4ad79 100644
> --- a/drivers/gpu/drm/ttm/ttm_tt.c
> +++ b/drivers/gpu/drm/ttm/ttm_tt.c
> @@ -40,6 +40,7 @@
>   #include <drm/drm_cache.h>
>   #include <drm/drm_device.h>
>   #include <drm/drm_util.h>
> +#include <drm/ttm/ttm_backup.h>
>   #include <drm/ttm/ttm_bo.h>
>   #include <drm/ttm/ttm_tt.h>
>   
> @@ -158,6 +159,8 @@ static void ttm_tt_init_fields(struct ttm_tt *ttm,
>   	ttm->swap_storage = NULL;
>   	ttm->sg = bo->sg;
>   	ttm->caching = caching;
> +	ttm->restore = NULL;
> +	ttm->backup = NULL;
>   }
>   
>   int ttm_tt_init(struct ttm_tt *ttm, struct ttm_buffer_object *bo,
> @@ -182,6 +185,12 @@ void ttm_tt_fini(struct ttm_tt *ttm)
>   		fput(ttm->swap_storage);
>   	ttm->swap_storage = NULL;
>   
> +	ttm_pool_release_backed_up(ttm);
> +	if (ttm->backup) {
> +		ttm_backup_fini(ttm->backup);
> +		ttm->backup = NULL;
> +	}
> +
>   	if (ttm->pages)
>   		kvfree(ttm->pages);
>   	else
> @@ -253,6 +262,34 @@ int ttm_tt_swapin(struct ttm_tt *ttm)
>   }
>   EXPORT_SYMBOL_FOR_TESTS_ONLY(ttm_tt_swapin);
>   
> +/**
> + * ttm_tt_backup() - Helper to back up a struct ttm_tt.
> + * @bdev: The TTM device.
> + * @tt: The struct ttm_tt.
> + * @flags: Flags that govern the backup behaviour.
> + *
> + * Update the page accounting and call ttm_pool_shrink_tt to free pages
> + * or back them up.
> + *
> + * Return: Number of pages freed or swapped out, or negative error code on
> + * error.
> + */
> +long ttm_tt_backup(struct ttm_device *bdev, struct ttm_tt *tt,
> +		   const struct ttm_backup_flags flags)
> +{
> +	long ret;
> +
> +	if (WARN_ON(IS_ERR_OR_NULL(tt->backup)))
> +		return 0;
> +
> +	ret = ttm_pool_backup_tt(&bdev->pool, tt, &flags);
> +
> +	if (ret > 0)
> +		tt->page_flags &= ~TTM_TT_FLAG_PRIV_POPULATED;
> +
> +	return ret;
> +}
> +
>   /**
>    * ttm_tt_swapout - swap out tt object
>    *
> diff --git a/include/drm/ttm/ttm_pool.h b/include/drm/ttm/ttm_pool.h
> index 160d954a261e..3112a4be835c 100644
> --- a/include/drm/ttm/ttm_pool.h
> +++ b/include/drm/ttm/ttm_pool.h
> @@ -33,6 +33,7 @@
>   
>   struct device;
>   struct seq_file;
> +struct ttm_backup_flags;
>   struct ttm_operation_ctx;
>   struct ttm_pool;
>   struct ttm_tt;
> @@ -89,6 +90,11 @@ void ttm_pool_fini(struct ttm_pool *pool);
>   
>   int ttm_pool_debugfs(struct ttm_pool *pool, struct seq_file *m);
>   
> +void ttm_pool_release_backed_up(struct ttm_tt *tt);
> +
> +long ttm_pool_backup_tt(struct ttm_pool *pool, struct ttm_tt *ttm,
> +			const struct ttm_backup_flags *flags);
> +
>   int ttm_pool_mgr_init(unsigned long num_pages);
>   void ttm_pool_mgr_fini(void);
>   
> diff --git a/include/drm/ttm/ttm_tt.h b/include/drm/ttm/ttm_tt.h
> index 991edafdb2dd..6ca2fc7b2a26 100644
> --- a/include/drm/ttm/ttm_tt.h
> +++ b/include/drm/ttm/ttm_tt.h
> @@ -32,11 +32,13 @@
>   #include <drm/ttm/ttm_caching.h>
>   #include <drm/ttm/ttm_kmap_iter.h>
>   
> +struct ttm_backup;
>   struct ttm_device;
>   struct ttm_tt;
>   struct ttm_resource;
>   struct ttm_buffer_object;
>   struct ttm_operation_ctx;
> +struct ttm_pool_tt_restore;
>   
>   /**
>    * struct ttm_tt - This is a structure holding the pages, caching- and aperture
> @@ -88,6 +90,9 @@ struct ttm_tt {
>   	 * TTM_TT_FLAG_PRIV_POPULATED: TTM internal only. DO NOT USE. This is
>   	 * set by TTM after ttm_tt_populate() has successfully returned, and is
>   	 * then unset when TTM calls ttm_tt_unpopulate().
> +	 *
> +	 * TTM_TT_FLAG_PRIV_BACKED_UP: TTM internal only. This is set if the
> +	 * struct ttm_tt has been (possibly partially) backed up.
>   	 */
>   #define TTM_TT_FLAG_SWAPPED		BIT(0)
>   #define TTM_TT_FLAG_ZERO_ALLOC		BIT(1)
> @@ -96,6 +101,7 @@ struct ttm_tt {
>   #define TTM_TT_FLAG_DECRYPTED		BIT(4)
>   
>   #define TTM_TT_FLAG_PRIV_POPULATED	BIT(5)
> +#define TTM_TT_FLAG_PRIV_BACKED_UP	BIT(6)
>   	uint32_t page_flags;
>   	/** @num_pages: Number of pages in the page array. */
>   	uint32_t num_pages;
> @@ -105,11 +111,20 @@ struct ttm_tt {
>   	dma_addr_t *dma_address;
>   	/** @swap_storage: Pointer to shmem struct file for swap storage. */
>   	struct file *swap_storage;
> +	/**
> +	 * @backup: Pointer to backup struct for backed up tts.
> +	 * Could be unified with @swap_storage. Meanwhile, the driver's
> +	 * ttm_tt_create() callback is responsible for assigning
> +	 * this field.
> +	 */
> +	struct ttm_backup *backup;
>   	/**
>   	 * @caching: The current caching state of the pages, see enum
>   	 * ttm_caching.
>   	 */
>   	enum ttm_caching caching;
> +	/** @restore: Partial restoration from backup state. TTM private */
> +	struct ttm_pool_tt_restore *restore;
>   };
>   
>   /**
> @@ -131,7 +146,7 @@ static inline bool ttm_tt_is_populated(struct ttm_tt *tt)
>   
>   static inline bool ttm_tt_is_swapped(const struct ttm_tt *tt)
>   {
> -	return tt->page_flags & TTM_TT_FLAG_SWAPPED;
> +	return tt->page_flags & (TTM_TT_FLAG_SWAPPED | TTM_TT_FLAG_PRIV_BACKED_UP);
>   }
>   
>   /**
> @@ -235,6 +250,21 @@ void ttm_tt_mgr_init(unsigned long num_pages, unsigned long num_dma32_pages);
>   struct ttm_kmap_iter *ttm_kmap_iter_tt_init(struct ttm_kmap_iter_tt *iter_tt,
>   					    struct ttm_tt *tt);
>   unsigned long ttm_tt_pages_limit(void);
> +
> +/**
> + * struct ttm_backup_flags - Flags to govern backup behaviour.
> + * @purge: Free pages without backing up. Bypass pools.
> + * @writeback: Attempt to copy contents directly to swap space, even
> + * if that means blocking on writes to external memory.
> + */
> +struct ttm_backup_flags {
> +	u32 purge : 1;
> +	u32 writeback : 1;
> +};
> +
> +long ttm_tt_backup(struct ttm_device *bdev, struct ttm_tt *tt,
> +		   const struct ttm_backup_flags flags);
> +
>   #if IS_ENABLED(CONFIG_AGP)
>   #include <linux/agp_backend.h>
>   


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v14 3/8] drm/ttm/pool: Provide a helper to shrink pages
  2024-12-03 13:12   ` Christian König
@ 2024-12-03 13:42     ` Thomas Hellström
  2024-12-03 14:51       ` Christian König
  0 siblings, 1 reply; 54+ messages in thread
From: Thomas Hellström @ 2024-12-03 13:42 UTC (permalink / raw)
  To: Christian König, intel-xe
  Cc: Somalapuram Amaranath, Matthew Brost, dri-devel, Paulo Zanoni,
	Simona Vetter

On Tue, 2024-12-03 at 14:12 +0100, Christian König wrote:
> Am 15.11.24 um 16:01 schrieb Thomas Hellström:
> > Provide a helper to shrink ttm_tt page-vectors on a per-page
> > basis. A ttm_backup backend could then in theory get away with
> > allocating a single temporary page for each struct ttm_tt.
> > 
> > This is accomplished by splitting larger pages before trying to
> > back them up.
> > 
> > In the future we could allow ttm_backup to handle backing up
> > large pages as well, but currently there's no benefit in
> > doing that, since the shmem backup backend would have to
> > split those anyway to avoid allocating too much temporary
> > memory, and if the backend instead inserts pages into the
> > swap-cache, those are split on reclaim by the core.
> > 
> > Due to potential backup- and recover errors, allow partially
> > swapped
> > out struct ttm_tt's, although mark them as swapped out stopping
> > them
> > from being swapped out a second time. More details in the
> > ttm_pool.c
> > DOC section.
> > 
> > v2:
> > - A couple of cleanups and error fixes in ttm_pool_back_up_tt.
> > - s/back_up/backup/
> > - Add a writeback parameter to the exported interface.
> > v8:
> > - Use a struct for flags for readability (Matt Brost)
> > - Address misc other review comments (Matt Brost)
> > v9:
> > - Update the kerneldoc for the ttm_tt::backup field.
> > v10:
> > - Rebase.
> > v13:
> > - Rebase on ttm_backup interface change. Update kerneldoc.
> > - Rebase and adjust ttm_tt_is_swapped().
> > 
> > Cc: Christian König <christian.koenig@amd.com>
> > Cc: Somalapuram Amaranath <Amaranath.Somalapuram@amd.com>
> > Cc: Matthew Brost <matthew.brost@intel.com>
> > Cc: <dri-devel@lists.freedesktop.org>
> > Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> > Reviewed-by: Matthew Brost <matthew.brost@intel.com>
> > ---
> >   drivers/gpu/drm/ttm/ttm_pool.c | 396
> > +++++++++++++++++++++++++++++++--
> >   drivers/gpu/drm/ttm/ttm_tt.c   |  37 +++
> >   include/drm/ttm/ttm_pool.h     |   6 +
> >   include/drm/ttm/ttm_tt.h       |  32 ++-
> >   4 files changed, 457 insertions(+), 14 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/ttm/ttm_pool.c
> > b/drivers/gpu/drm/ttm/ttm_pool.c
> > index 8504dbe19c1a..f58864439edb 100644
> > --- a/drivers/gpu/drm/ttm/ttm_pool.c
> > +++ b/drivers/gpu/drm/ttm/ttm_pool.c
> > @@ -41,6 +41,7 @@
> >   #include <asm/set_memory.h>
> >   #endif
> >   
> > +#include <drm/ttm/ttm_backup.h>
> >   #include <drm/ttm/ttm_pool.h>
> >   #include <drm/ttm/ttm_tt.h>
> >   #include <drm/ttm/ttm_bo.h>
> > @@ -58,6 +59,32 @@ struct ttm_pool_dma {
> >   	unsigned long vaddr;
> >   };
> >   
> > +/**
> > + * struct ttm_pool_tt_restore - State representing restore from
> > backup
> > + * @alloced_pages: Total number of already allocated pages for the
> > ttm_tt.
> > + * @restored_pages: Number of (sub) pages restored from swap for
> > this
> > + *		     chunk of 1 << @order pages.
> > + * @first_page: The ttm page ptr representing for @old_pages[0].
> > + * @caching_divide: Page pointer where subsequent pages are
> > cached.
> > + * @old_pages: Backup copy of page pointers that were replaced by
> > the new
> > + *	       page allocation.
> > + * @pool: The pool used for page allocation while restoring.
> > + * @order: The order of the last page allocated while restoring.
> > + *
> > + * Recovery from backup might fail when we've recovered less than
> > the
> > + * full ttm_tt. In order not to loose any data (yet), keep
> > information
> > + * around that allows us to restart a failed ttm backup recovery.
> > + */
> > +struct ttm_pool_tt_restore {
> > +	pgoff_t alloced_pages;
> > +	pgoff_t restored_pages;
> > +	struct page **first_page;
> > +	struct page **caching_divide;
> > +	struct ttm_pool *pool;
> > +	unsigned int order;
> > +	struct page *old_pages[];
> > +};
> > +
> >   static unsigned long page_pool_size;
> >   
> >   MODULE_PARM_DESC(page_pool_size, "Number of pages in the
> > WC/UC/DMA pool");
> > @@ -354,11 +381,105 @@ static unsigned int
> > ttm_pool_page_order(struct ttm_pool *pool, struct page *p)
> >   	return p->private;
> >   }
> >   
> > +/*
> > + * To be able to insert single pages into backup directly,
> > + * we need to split multi-order page allocations and make them
> > look
> > + * like single-page allocations.
> > + */
> > +static void ttm_pool_split_for_swap(struct ttm_pool *pool, struct
> > page *p)
> > +{
> > +	unsigned int order = ttm_pool_page_order(pool, p);
> > +	pgoff_t nr;
> > +
> > +	if (!order)
> > +		return;
> > +
> > +	split_page(p, order);
> 
> What exactly should split_page() do here and why is that necessary?
> 
> IIRC that function just updated the reference count and updated
> things 
> like page owner tracking and memcg accounting. Which should both be 
> completely irrelevant here.
> 
> Or do you just do that so that you can free each page individually?

Yes, exactly. For a 2MiB page we'd otherwise have to allocate 2MiB
of shmem backing storage, potentially from kernel reserves, before we
could actually free anything. Since (currently) the shmem objects we
use are 4K-page only, this should make the process "allocate shmem and
back up" much less likely to deplete the kernel memory reserves.

Taking a step back and looking at other potential solutions, like
direct insertion into the swap cache: even when inserting a 2MiB page
into the swap cache, vmscan would split it before writeback, and it
still didn't appear very stable. So inserting one 4K page at a time
seemed necessary. If I were to take a guess, that's why shmem, when
configured for 2MiB pages like with i915, also splits the pages before
moving them to swap-cache / writeback.
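
Roughly, what the backup path boils down to for a single huge page is
something like the below (sketch only, with simplified gfp handling,
using the helpers introduced in this series):

static int backup_one_huge_page_sketch(struct ttm_pool *pool,
				       struct ttm_backup *backup,
				       struct page **slot, pgoff_t idx,
				       bool writeback)
{
	struct page *p = *slot;
	unsigned int order = ttm_pool_page_order(pool, p);
	pgoff_t i, nr = 1UL << order;
	unsigned long handle;

	/* Make the 2MiB allocation look like 512 independent 4K pages. */
	split_page(p, order);

	for (i = 0; i < nr; ++i, ++p, ++slot) {
		p->private = 0;
		handle = ttm_backup_backup_page(backup, p, writeback, idx + i,
						GFP_HIGHUSER,
						GFP_KERNEL | __GFP_NOWARN);
		if (!handle)
			return -ENOMEM;	/* tt is left partially backed up */

		/* Replace the page pointer with an encoded handle ... */
		*slot = ttm_backup_handle_to_page_ptr(handle);
		/* ... and release the 4K page right away, so the transient
		 * footprint stays at roughly one shmem page rather than 2MiB.
		 */
		put_page(p);
	}

	return 0;
}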


> 
> > +	nr = 1UL << order;
> > +	while (nr--)
> > +		(p++)->private = 0;
> > +}
> > +
> > +/**
> > + * DOC: Partial backup and restoration of a struct ttm_tt.
> > + *
> > + * Swapout using ttm_backup_backup_page() and swapin using
> > + * ttm_backup_copy_page() may fail.
> > + * The former most likely due to lack of swap-space or memory, the
> > latter due
> > + * to lack of memory or because of signal interruption during
> > waits.
> > + *
> > + * Backup failure is easily handled by using a ttm_tt pages vector
> > that holds
> > + * both swap entries and page pointers. This has to be taken into
> > account when
> > + * restoring such a ttm_tt from backup, and when freeing it while
> > backed up.
> > + * When restoring, for simplicity, new pages are actually
> > allocated from the
> > + * pool and the contents of any old pages are copied in and then
> > the old pages
> > + * are released.
> > + *
> > + * For restoration failures, the struct ttm_pool_tt_restore holds
> > sufficient state
> > + * to be able to resume an interrupted restore, and that structure
> > is freed once
> > + * the restoration is complete. If the struct ttm_tt is destroyed
> > while there
> > + * is a valid struct ttm_pool_tt_restore attached, that is also
> > properly taken
> > + * care of.
> > + */
> > +
> > +static bool ttm_pool_restore_valid(const struct
> > ttm_pool_tt_restore *restore)
> > +{
> > +	return restore && restore->restored_pages < (1 << restore-
> > >order);
> > +}
> > +
> > +static int ttm_pool_restore_tt(struct ttm_pool_tt_restore
> > *restore,
> > +			       struct ttm_backup *backup,
> > +			       struct ttm_operation_ctx *ctx)
> > +{
> > +	unsigned int i, nr = 1 << restore->order;
> > +	int ret = 0;
> > +
> > +	if (!ttm_pool_restore_valid(restore))
> > +		return 0;
> > +
> > +	for (i = restore->restored_pages; i < nr; ++i) {
> > +		struct page *p = restore->old_pages[i];
> > +
> > +		if (ttm_backup_page_ptr_is_handle(p)) {
> > +			unsigned long handle =
> > ttm_backup_page_ptr_to_handle(p);
> > +
> > +			if (handle == 0)
> > +				continue;
> > +
> > +			ret = ttm_backup_copy_page
> > +				(backup, restore->first_page[i],
> > +				 handle, ctx->interruptible);
> 
> That coding style looks really odd, I didn't even notice that it is a
> function call initially.
> 
> Maybe put everything under the if into a separate function.

At a minimum, I'll fix up the formatting here.
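
Something along these lines, perhaps (sketch only, the helper name is
not final), restoring the contents behind a backup handle into the
freshly allocated page:

static int ttm_pool_restore_handle_sketch(struct ttm_backup *backup,
					  struct page *new_page,
					  struct page *old_entry,
					  struct ttm_operation_ctx *ctx)
{
	unsigned long handle = ttm_backup_page_ptr_to_handle(old_entry);
	int ret;

	if (!handle)
		return 0;

	/* Copy the backed-up contents into the new page. */
	ret = ttm_backup_copy_page(backup, new_page, handle,
				   ctx->interruptible);
	if (ret)
		return ret;

	/* Content is restored; the backup copy can be dropped. */
	ttm_backup_drop(backup, handle);
	return 0;
}

The branch that copies from a retained (never backed-up) page would
stay in the caller.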

> 
> > +			if (ret)
> > +				break;
> > +
> > +			ttm_backup_drop(backup, handle);
> > +		} else if (p) {
> > +			/*
> > +			 * We could probably avoid splitting the
> > old page
> > +			 * using clever logic, but ATM we don't
> > care, as
> > +			 * we prioritize releasing memory ASAP.
> > Note that
> > +			 * here, the old retained page is always
> > write-back
> > +			 * cached.
> > +			 */
> > +			ttm_pool_split_for_swap(restore->pool, p);
> > +			copy_highpage(restore->first_page[i], p);
> > +			__free_pages(p, 0);
> > +		}
> > +
> > +		restore->restored_pages++;
> > +		restore->old_pages[i] = NULL;
> > +		cond_resched();
> 
> There is a push to remove cond_resched(), see here: 
> https://patchwork.kernel.org/project/linux-mm/patch/20231107230822.371443-30-ankur.a.arora@oracle.com/
> 
> Not sure in which discussion that removal went, but IIRC we should
> not 
> add any new users of it.

I'll read up on that and remove if needed. I'm curious how / if
voluntary preemption is going to be handled.

> 
> > +	}
> > +
> > +	return ret;
> > +}
> > +
> >   /* Called when we got a page, either from a pool or newly
> > allocated */
> >   static int ttm_pool_page_allocated(struct ttm_pool *pool,
> > unsigned int order,
> >   				   struct page *p, dma_addr_t
> > **dma_addr,
> >   				   unsigned long *num_pages,
> > -				   struct page ***pages)
> > +				   struct page ***pages,
> > +				   struct ttm_pool_tt_restore
> > *restore)
> >   {
> >   	unsigned int i;
> >   	int r;
> > @@ -369,6 +490,16 @@ static int ttm_pool_page_allocated(struct
> > ttm_pool *pool, unsigned int order,
> >   			return r;
> >   	}
> >   
> > +	if (restore) {
> > +		memcpy(restore->old_pages, *pages,
> > +		       (1 << order) * sizeof(*restore-
> > >old_pages));
> > +		memset(*pages, 0, (1 << order) * sizeof(**pages));
> > +		restore->order = order;
> > +		restore->restored_pages = 0;
> > +		restore->first_page = *pages;
> > +		restore->alloced_pages += 1UL << order;
> > +	}
> > +
> >   	*num_pages -= 1 << order;
> >   	for (i = 1 << order; i; --i, ++(*pages), ++p)
> >   		**pages = p;
> > @@ -394,22 +525,39 @@ static void ttm_pool_free_range(struct
> > ttm_pool *pool, struct ttm_tt *tt,
> >   				pgoff_t start_page, pgoff_t
> > end_page)
> >   {
> >   	struct page **pages = &tt->pages[start_page];
> > +	struct ttm_backup *backup = tt->backup;
> >   	unsigned int order;
> >   	pgoff_t i, nr;
> >   
> >   	for (i = start_page; i < end_page; i += nr, pages += nr) {
> >   		struct ttm_pool_type *pt = NULL;
> > +		struct page *p = *pages;
> > +
> > +		if (ttm_backup_page_ptr_is_handle(p)) {
> > +			unsigned long handle =
> > ttm_backup_page_ptr_to_handle(p);
> > +
> > +			nr = 1;
> > +			if (handle != 0)
> > +				ttm_backup_drop(backup, handle);
> > +			continue;
> > +		}
> > +
> > +		if (pool) {
> > +			order = ttm_pool_page_order(pool, p);
> > +			nr = (1UL << order);
> > +			if (tt->dma_address)
> > +				ttm_pool_unmap(pool, tt-
> > >dma_address[i], nr);
> >   
> > -		order = ttm_pool_page_order(pool, *pages);
> > -		nr = (1UL << order);
> > -		if (tt->dma_address)
> > -			ttm_pool_unmap(pool, tt->dma_address[i],
> > nr);
> > +			pt = ttm_pool_select_type(pool, caching,
> > order);
> > +		} else {
> > +			order = p->private;
> > +			nr = (1UL << order);
> > +		}
> >   
> > -		pt = ttm_pool_select_type(pool, caching, order);
> >   		if (pt)
> > -			ttm_pool_type_give(pt, *pages);
> > +			ttm_pool_type_give(pt, p);
> >   		else
> > -			ttm_pool_free_page(pool, caching, order,
> > *pages);
> > +			ttm_pool_free_page(pool, caching, order,
> > p);
> >   	}
> >   }
> >   
> > @@ -453,9 +601,36 @@ int ttm_pool_alloc(struct ttm_pool *pool,
> > struct ttm_tt *tt,
> >   	else
> >   		gfp_flags |= GFP_HIGHUSER;
> >   
> > -	for (order = min_t(unsigned int, MAX_PAGE_ORDER,
> > __fls(num_pages));
> > -	     num_pages;
> > -	     order = min_t(unsigned int, order, __fls(num_pages)))
> > {
> > +	order = min_t(unsigned int, MAX_PAGE_ORDER,
> > __fls(num_pages));
> > +
> > +	if (tt->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP) {
> > +		if (!tt->restore) {
> > +			gfp_t gfp = GFP_KERNEL | __GFP_NOWARN;
> > +
> > +			if (ctx->gfp_retry_mayfail)
> > +				gfp |= __GFP_RETRY_MAYFAIL;
> > +
> > +			tt->restore =
> > +				kvzalloc(struct_size(tt->restore,
> > old_pages,
> > +						     (size_t)1 <<
> > order), gfp);
> > +			if (!tt->restore)
> > +				return -ENOMEM;
> > +		} else if (ttm_pool_restore_valid(tt->restore)) {
> > +			struct ttm_pool_tt_restore *restore = tt-
> > >restore;
> > +
> > +			num_pages -= restore->alloced_pages;
> > +			order = min_t(unsigned int, order,
> > __fls(num_pages));
> > +			pages += restore->alloced_pages;
> > +			r = ttm_pool_restore_tt(restore, tt-
> > >backup, ctx);
> > +			if (r)
> > +				return r;
> > +			caching = restore->caching_divide;
> > +		}
> > +
> > +		tt->restore->pool = pool;
> > +	}
> 
> Hui? Why is that part of the allocation function now?
> 
> At bare minimum I would expect that this is a new function.

It's because we now have partially backed-up tts, so the restore is
interleaved on a per-page basis, replacing the backup handles with
page pointers. I'll see if I can separate out at least the
initialization here.
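
Roughly like this (sketch only), pulling the restore-context setup out
of ttm_pool_alloc():

static int ttm_pool_restore_init_sketch(struct ttm_tt *tt,
					const struct ttm_operation_ctx *ctx,
					unsigned int order)
{
	gfp_t gfp = GFP_KERNEL | __GFP_NOWARN;

	if (tt->restore)
		return 0;

	if (ctx->gfp_retry_mayfail)
		gfp |= __GFP_RETRY_MAYFAIL;

	tt->restore = kvzalloc(struct_size(tt->restore, old_pages,
					   (size_t)1 << order), gfp);
	/* The caller would still assign tt->restore->pool. */
	return tt->restore ? 0 : -ENOMEM;
}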

/Thomas


> 
> Regards,
> Christian.
> 
> > +
> > +	for (; num_pages; order = min_t(unsigned int, order,
> > __fls(num_pages))) {
> >   		struct ttm_pool_type *pt;
> >   
> >   		page_caching = tt->caching;
> > @@ -472,11 +647,19 @@ int ttm_pool_alloc(struct ttm_pool *pool,
> > struct ttm_tt *tt,
> >   				r = ttm_pool_page_allocated(pool,
> > order, p,
> >   							   
> > &dma_addr,
> >   							   
> > &num_pages,
> > -							   
> > &pages);
> > +							   
> > &pages,
> > +							    tt-
> > >restore);
> >   				if (r)
> >   					goto error_free_page;
> >   
> >   				caching = pages;
> > +				if (ttm_pool_restore_valid(tt-
> > >restore)) {
> > +					r =
> > ttm_pool_restore_tt(tt->restore, tt->backup,
> > +								ct
> > x);
> > +					if (r)
> > +						goto
> > error_free_all;
> > +				}
> > +
> >   				if (num_pages < (1 << order))
> >   					break;
> >   
> > @@ -496,9 +679,17 @@ int ttm_pool_alloc(struct ttm_pool *pool,
> > struct ttm_tt *tt,
> >   				caching = pages;
> >   			}
> >   			r = ttm_pool_page_allocated(pool, order,
> > p, &dma_addr,
> > -						    &num_pages,
> > &pages);
> > +						    &num_pages,
> > &pages,
> > +						    tt->restore);
> >   			if (r)
> >   				goto error_free_page;
> > +
> > +			if (ttm_pool_restore_valid(tt->restore)) {
> > +				r = ttm_pool_restore_tt(tt-
> > >restore, tt->backup, ctx);
> > +				if (r)
> > +					goto error_free_all;
> > +			}
> > +
> >   			if (PageHighMem(p))
> >   				caching = pages;
> >   		}
> > @@ -517,12 +708,26 @@ int ttm_pool_alloc(struct ttm_pool *pool,
> > struct ttm_tt *tt,
> >   	if (r)
> >   		goto error_free_all;
> >   
> > +	if (tt->restore) {
> > +		kvfree(tt->restore);
> > +		tt->restore = NULL;
> > +	}
> > +
> > +	if (tt->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP)
> > +		tt->page_flags &= ~(TTM_TT_FLAG_PRIV_BACKED_UP |
> > +				    TTM_TT_FLAG_SWAPPED);
> > +
> >   	return 0;
> >   
> >   error_free_page:
> >   	ttm_pool_free_page(pool, page_caching, order, p);
> >   
> >   error_free_all:
> > +	if (tt->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP) {
> > +		tt->restore->caching_divide = caching;
> > +		return r;
> > +	}
> > +
> >   	num_pages = tt->num_pages - num_pages;
> >   	caching_divide = caching - tt->pages;
> >   	ttm_pool_free_range(pool, tt, tt->caching, 0,
> > caching_divide);
> > @@ -549,6 +754,171 @@ void ttm_pool_free(struct ttm_pool *pool,
> > struct ttm_tt *tt)
> >   }
> >   EXPORT_SYMBOL(ttm_pool_free);
> >   
> > +/**
> > + * ttm_pool_release_backed_up() - Release content of a swapped-out
> > struct ttm_tt
> > + * @tt: The struct ttm_tt.
> > + *
> > + * Release handles with associated content or any remaining pages
> > of
> > + * a backed-up struct ttm_tt.
> > + */
> > +void ttm_pool_release_backed_up(struct ttm_tt *tt)
> > +{
> > +	struct ttm_backup *backup = tt->backup;
> > +	struct ttm_pool_tt_restore *restore;
> > +	pgoff_t i, start_page = 0;
> > +	unsigned long handle;
> > +
> > +	if (!(tt->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP))
> > +		return;
> > +
> > +	restore = tt->restore;
> > +
> > +	if (ttm_pool_restore_valid(restore)) {
> > +		pgoff_t nr = 1UL << restore->order;
> > +
> > +		for (i = restore->restored_pages; i < nr; ++i) {
> > +			struct page *p = restore->old_pages[i];
> > +
> > +			if (ttm_backup_page_ptr_is_handle(p)) {
> > +				handle =
> > ttm_backup_page_ptr_to_handle(p);
> > +				if (handle == 0)
> > +					continue;
> > +
> > +				ttm_backup_drop(backup, handle);
> > +			} else if (p) {
> > +				ttm_pool_split_for_swap(restore-
> > >pool, p);
> > +				__free_pages(p, 0);
> > +			}
> > +		}
> > +	}
> > +
> > +	if (restore) {
> > +		pgoff_t mid = restore->caching_divide - tt->pages;
> > +
> > +		start_page = restore->alloced_pages;
> > +		/* Pages that might be dma-mapped and non-cached
> > */
> > +		ttm_pool_free_range(restore->pool, tt, tt-
> > >caching,
> > +				    0, mid);
> > +		/* Pages that might be dma-mapped but cached */
> > +		ttm_pool_free_range(restore->pool, tt, ttm_cached,
> > +				    mid, restore->alloced_pages);
> > +	}
> > +
> > +	/* Shrunken pages. Cached and not dma-mapped. */
> > +	ttm_pool_free_range(NULL, tt, ttm_cached, start_page, tt-
> > >num_pages);
> > +
> > +	if (restore) {
> > +		kvfree(restore);
> > +		tt->restore = NULL;
> > +	}
> > +
> > +	tt->page_flags &= ~(TTM_TT_FLAG_PRIV_BACKED_UP |
> > TTM_TT_FLAG_SWAPPED);
> > +}
> > +
> > +/**
> > + * ttm_pool_backup_tt() - Back up or purge a struct ttm_tt
> > + * @pool: The pool used when allocating the struct ttm_tt.
> > + * @ttm: The struct ttm_tt.
> > + * @flags: Flags to govern the backup behaviour.
> > + *
> > + * Back up or purge a struct ttm_tt. If @purge is true, then
> > + * all pages will be freed directly to the system rather than to
> > the pool
> > + * they were allocated from, making the function behave similarly
> > to
> > + * ttm_pool_free(). If @purge is false the pages will be backed up
> > instead,
> > + * exchanged for handles.
> > + * A subsequent call to ttm_pool_alloc() will then read back the
> > content and
> > + * a subsequent call to ttm_pool_release_shrunken() will drop it.
> > + * If backup of a page fails for whatever reason, @ttm will still
> > be
> > + * partially backed up, retaining those pages for which backup
> > fails.
> > + *
> > + * Return: Number of pages actually backed up or freed, or
> > negative
> > + * error code on error.
> > + */
> > +long ttm_pool_backup_tt(struct ttm_pool *pool, struct ttm_tt *ttm,
> > +			const struct ttm_backup_flags *flags)
> > +{
> > +	struct ttm_backup *backup = ttm->backup;
> > +	struct page *page;
> > +	unsigned long handle;
> > +	gfp_t alloc_gfp;
> > +	gfp_t gfp;
> > +	int ret = 0;
> > +	pgoff_t shrunken = 0;
> > +	pgoff_t i, num_pages;
> > +
> > +	if ((!ttm_backup_bytes_avail() && !flags->purge) ||
> > +	    pool->use_dma_alloc ||
> > +	    (ttm->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP))
> > +		return -EBUSY;
> > +
> > +#ifdef CONFIG_X86
> > +	/* Anything returned to the system needs to be cached. */
> > +	if (ttm->caching != ttm_cached)
> > +		set_pages_array_wb(ttm->pages, ttm->num_pages);
> > +#endif
> > +
> > +	if (ttm->dma_address || flags->purge) {
> > +		for (i = 0; i < ttm->num_pages; i += num_pages) {
> > +			unsigned int order;
> > +
> > +			page = ttm->pages[i];
> > +			if (unlikely(!page)) {
> > +				num_pages = 1;
> > +				continue;
> > +			}
> > +
> > +			order = ttm_pool_page_order(pool, page);
> > +			num_pages = 1UL << order;
> > +			if (ttm->dma_address)
> > +				ttm_pool_unmap(pool, ttm-
> > >dma_address[i],
> > +					       num_pages);
> > +			if (flags->purge) {
> > +				shrunken += num_pages;
> > +				page->private = 0;
> > +				__free_pages(page, order);
> > +				memset(ttm->pages + i, 0,
> > +				       num_pages * sizeof(*ttm-
> > >pages));
> > +			}
> > +		}
> > +	}
> > +
> > +	if (flags->purge)
> > +		return shrunken;
> > +
> > +	if (pool->use_dma32)
> > +		gfp = GFP_DMA32;
> > +	else
> > +		gfp = GFP_HIGHUSER;
> > +
> > +	alloc_gfp = GFP_KERNEL | __GFP_HIGH | __GFP_NOWARN |
> > __GFP_RETRY_MAYFAIL;
> > +
> > +	for (i = 0; i < ttm->num_pages; ++i) {
> > +		page = ttm->pages[i];
> > +		if (unlikely(!page))
> > +			continue;
> > +
> > +		ttm_pool_split_for_swap(pool, page);
> > +
> > +		handle = ttm_backup_backup_page(backup, page,
> > flags->writeback, i,
> > +						gfp, alloc_gfp);
> > +		if (handle) {
> > +			ttm->pages[i] =
> > ttm_backup_handle_to_page_ptr(handle);
> > +			put_page(page);
> > +			shrunken++;
> > +		} else {
> > +			/* We allow partially shrunken tts */
> > +			ret = -ENOMEM;
> > +			break;
> > +		}
> > +	}
> > +
> > +	if (shrunken)
> > +		ttm->page_flags |= (TTM_TT_FLAG_PRIV_BACKED_UP |
> > +				    TTM_TT_FLAG_SWAPPED);
> > +
> > +	return shrunken ? shrunken : ret;
> > +}
> > +
> >   /**
> >    * ttm_pool_init - Initialize a pool
> >    *
> > diff --git a/drivers/gpu/drm/ttm/ttm_tt.c
> > b/drivers/gpu/drm/ttm/ttm_tt.c
> > index 3baf215eca23..dd4eabe4ad79 100644
> > --- a/drivers/gpu/drm/ttm/ttm_tt.c
> > +++ b/drivers/gpu/drm/ttm/ttm_tt.c
> > @@ -40,6 +40,7 @@
> >   #include <drm/drm_cache.h>
> >   #include <drm/drm_device.h>
> >   #include <drm/drm_util.h>
> > +#include <drm/ttm/ttm_backup.h>
> >   #include <drm/ttm/ttm_bo.h>
> >   #include <drm/ttm/ttm_tt.h>
> >   
> > @@ -158,6 +159,8 @@ static void ttm_tt_init_fields(struct ttm_tt
> > *ttm,
> >   	ttm->swap_storage = NULL;
> >   	ttm->sg = bo->sg;
> >   	ttm->caching = caching;
> > +	ttm->restore = NULL;
> > +	ttm->backup = NULL;
> >   }
> >   
> >   int ttm_tt_init(struct ttm_tt *ttm, struct ttm_buffer_object *bo,
> > @@ -182,6 +185,12 @@ void ttm_tt_fini(struct ttm_tt *ttm)
> >   		fput(ttm->swap_storage);
> >   	ttm->swap_storage = NULL;
> >   
> > +	ttm_pool_release_backed_up(ttm);
> > +	if (ttm->backup) {
> > +		ttm_backup_fini(ttm->backup);
> > +		ttm->backup = NULL;
> > +	}
> > +
> >   	if (ttm->pages)
> >   		kvfree(ttm->pages);
> >   	else
> > @@ -253,6 +262,34 @@ int ttm_tt_swapin(struct ttm_tt *ttm)
> >   }
> >   EXPORT_SYMBOL_FOR_TESTS_ONLY(ttm_tt_swapin);
> >   
> > +/**
> > + * ttm_tt_backup() - Helper to back up a struct ttm_tt.
> > + * @bdev: The TTM device.
> > + * @tt: The struct ttm_tt.
> > + * @flags: Flags that govern the backup behaviour.
> > + *
> > + * Update the page accounting and call ttm_pool_shrink_tt to free
> > pages
> > + * or back them up.
> > + *
> > + * Return: Number of pages freed or swapped out, or negative error
> > code on
> > + * error.
> > + */
> > +long ttm_tt_backup(struct ttm_device *bdev, struct ttm_tt *tt,
> > +		   const struct ttm_backup_flags flags)
> > +{
> > +	long ret;
> > +
> > +	if (WARN_ON(IS_ERR_OR_NULL(tt->backup)))
> > +		return 0;
> > +
> > +	ret = ttm_pool_backup_tt(&bdev->pool, tt, &flags);
> > +
> > +	if (ret > 0)
> > +		tt->page_flags &= ~TTM_TT_FLAG_PRIV_POPULATED;
> > +
> > +	return ret;
> > +}
> > +
> >   /**
> >    * ttm_tt_swapout - swap out tt object
> >    *
> > diff --git a/include/drm/ttm/ttm_pool.h
> > b/include/drm/ttm/ttm_pool.h
> > index 160d954a261e..3112a4be835c 100644
> > --- a/include/drm/ttm/ttm_pool.h
> > +++ b/include/drm/ttm/ttm_pool.h
> > @@ -33,6 +33,7 @@
> >   
> >   struct device;
> >   struct seq_file;
> > +struct ttm_backup_flags;
> >   struct ttm_operation_ctx;
> >   struct ttm_pool;
> >   struct ttm_tt;
> > @@ -89,6 +90,11 @@ void ttm_pool_fini(struct ttm_pool *pool);
> >   
> >   int ttm_pool_debugfs(struct ttm_pool *pool, struct seq_file *m);
> >   
> > +void ttm_pool_release_backed_up(struct ttm_tt *tt);
> > +
> > +long ttm_pool_backup_tt(struct ttm_pool *pool, struct ttm_tt *ttm,
> > +			const struct ttm_backup_flags *flags);
> > +
> >   int ttm_pool_mgr_init(unsigned long num_pages);
> >   void ttm_pool_mgr_fini(void);
> >   
> > diff --git a/include/drm/ttm/ttm_tt.h b/include/drm/ttm/ttm_tt.h
> > index 991edafdb2dd..6ca2fc7b2a26 100644
> > --- a/include/drm/ttm/ttm_tt.h
> > +++ b/include/drm/ttm/ttm_tt.h
> > @@ -32,11 +32,13 @@
> >   #include <drm/ttm/ttm_caching.h>
> >   #include <drm/ttm/ttm_kmap_iter.h>
> >   
> > +struct ttm_backup;
> >   struct ttm_device;
> >   struct ttm_tt;
> >   struct ttm_resource;
> >   struct ttm_buffer_object;
> >   struct ttm_operation_ctx;
> > +struct ttm_pool_tt_restore;
> >   
> >   /**
> >    * struct ttm_tt - This is a structure holding the pages,
> > caching- and aperture
> > @@ -88,6 +90,9 @@ struct ttm_tt {
> >   	 * TTM_TT_FLAG_PRIV_POPULATED: TTM internal only. DO NOT
> > USE. This is
> >   	 * set by TTM after ttm_tt_populate() has successfully
> > returned, and is
> >   	 * then unset when TTM calls ttm_tt_unpopulate().
> > +	 *
> > +	 * TTM_TT_FLAG_PRIV_BACKED_UP: TTM internal only. This is
> > set if the
> > +	 * struct ttm_tt has been (possibly partially) backed up.
> >   	 */
> >   #define TTM_TT_FLAG_SWAPPED		BIT(0)
> >   #define TTM_TT_FLAG_ZERO_ALLOC		BIT(1)
> > @@ -96,6 +101,7 @@ struct ttm_tt {
> >   #define TTM_TT_FLAG_DECRYPTED		BIT(4)
> >   
> >   #define TTM_TT_FLAG_PRIV_POPULATED	BIT(5)
> > +#define TTM_TT_FLAG_PRIV_BACKED_UP	BIT(6)
> >   	uint32_t page_flags;
> >   	/** @num_pages: Number of pages in the page array. */
> >   	uint32_t num_pages;
> > @@ -105,11 +111,20 @@ struct ttm_tt {
> >   	dma_addr_t *dma_address;
> >   	/** @swap_storage: Pointer to shmem struct file for swap
> > storage. */
> >   	struct file *swap_storage;
> > +	/**
> > +	 * @backup: Pointer to backup struct for backed up tts.
> > +	 * Could be unified with @swap_storage. Meanwhile, the
> > driver's
> > +	 * ttm_tt_create() callback is responsible for assigning
> > +	 * this field.
> > +	 */
> > +	struct ttm_backup *backup;
> >   	/**
> >   	 * @caching: The current caching state of the pages, see
> > enum
> >   	 * ttm_caching.
> >   	 */
> >   	enum ttm_caching caching;
> > +	/** @restore: Partial restoration from backup state. TTM
> > private */
> > +	struct ttm_pool_tt_restore *restore;
> >   };
> >   
> >   /**
> > @@ -131,7 +146,7 @@ static inline bool ttm_tt_is_populated(struct
> > ttm_tt *tt)
> >   
> >   static inline bool ttm_tt_is_swapped(const struct ttm_tt *tt)
> >   {
> > -	return tt->page_flags & TTM_TT_FLAG_SWAPPED;
> > +	return tt->page_flags & (TTM_TT_FLAG_SWAPPED |
> > TTM_TT_FLAG_PRIV_BACKED_UP);
> >   }
> >   
> >   /**
> > @@ -235,6 +250,21 @@ void ttm_tt_mgr_init(unsigned long num_pages,
> > unsigned long num_dma32_pages);
> >   struct ttm_kmap_iter *ttm_kmap_iter_tt_init(struct
> > ttm_kmap_iter_tt *iter_tt,
> >   					    struct ttm_tt *tt);
> >   unsigned long ttm_tt_pages_limit(void);
> > +
> > +/**
> > + * struct ttm_backup_flags - Flags to govern backup behaviour.
> > + * @purge: Free pages without backing up. Bypass pools.
> > + * @writeback: Attempt to copy contents directly to swap space,
> > even
> > + * if that means blocking on writes to external memory.
> > + */
> > +struct ttm_backup_flags {
> > +	u32 purge : 1;
> > +	u32 writeback : 1;
> > +};
> > +
> > +long ttm_tt_backup(struct ttm_device *bdev, struct ttm_tt *tt,
> > +		   const struct ttm_backup_flags flags);
> > +
> >   #if IS_ENABLED(CONFIG_AGP)
> >   #include <linux/agp_backend.h>
> >   
> 


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v14 3/8] drm/ttm/pool: Provide a helper to shrink pages
  2024-12-03 13:42     ` Thomas Hellström
@ 2024-12-03 14:51       ` Christian König
  2024-12-03 15:50         ` Thomas Hellström
  2024-12-18 10:15         ` Thomas Hellström
  0 siblings, 2 replies; 54+ messages in thread
From: Christian König @ 2024-12-03 14:51 UTC (permalink / raw)
  To: Thomas Hellström, intel-xe
  Cc: Somalapuram Amaranath, Matthew Brost, dri-devel, Paulo Zanoni,
	Simona Vetter


Am 03.12.24 um 14:42 schrieb Thomas Hellström:
> On Tue, 2024-12-03 at 14:12 +0100, Christian König wrote:
>> Am 15.11.24 um 16:01 schrieb Thomas Hellström:
>>> Provide a helper to shrink ttm_tt page-vectors on a per-page
>>> basis. A ttm_backup backend could then in theory get away with
>>> allocating a single temporary page for each struct ttm_tt.
>>>
>>> This is accomplished by splitting larger pages before trying to
>>> back them up.
>>>
>>> In the future we could allow ttm_backup to handle backing up
>>> large pages as well, but currently there's no benefit in
>>> doing that, since the shmem backup backend would have to
>>> split those anyway to avoid allocating too much temporary
>>> memory, and if the backend instead inserts pages into the
>>> swap-cache, those are split on reclaim by the core.
>>>
>>> Due to potential backup- and recover errors, allow partially
>>> swapped
>>> out struct ttm_tt's, although mark them as swapped out stopping
>>> them
>>> from being swapped out a second time. More details in the
>>> ttm_pool.c
>>> DOC section.
>>>
>>> v2:
>>> - A couple of cleanups and error fixes in ttm_pool_back_up_tt.
>>> - s/back_up/backup/
>>> - Add a writeback parameter to the exported interface.
>>> v8:
>>> - Use a struct for flags for readability (Matt Brost)
>>> - Address misc other review comments (Matt Brost)
>>> v9:
>>> - Update the kerneldoc for the ttm_tt::backup field.
>>> v10:
>>> - Rebase.
>>> v13:
>>> - Rebase on ttm_backup interface change. Update kerneldoc.
>>> - Rebase and adjust ttm_tt_is_swapped().
>>>
>>> Cc: Christian König<christian.koenig@amd.com>
>>> Cc: Somalapuram Amaranath<Amaranath.Somalapuram@amd.com>
>>> Cc: Matthew Brost<matthew.brost@intel.com>
>>> Cc:<dri-devel@lists.freedesktop.org>
>>> Signed-off-by: Thomas Hellström<thomas.hellstrom@linux.intel.com>
>>> Reviewed-by: Matthew Brost<matthew.brost@intel.com>
>>> ---
>>>    drivers/gpu/drm/ttm/ttm_pool.c | 396
>>> +++++++++++++++++++++++++++++++--
>>>    drivers/gpu/drm/ttm/ttm_tt.c   |  37 +++
>>>    include/drm/ttm/ttm_pool.h     |   6 +
>>>    include/drm/ttm/ttm_tt.h       |  32 ++-
>>>    4 files changed, 457 insertions(+), 14 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/ttm/ttm_pool.c
>>> b/drivers/gpu/drm/ttm/ttm_pool.c
>>> index 8504dbe19c1a..f58864439edb 100644
>>> --- a/drivers/gpu/drm/ttm/ttm_pool.c
>>> +++ b/drivers/gpu/drm/ttm/ttm_pool.c
>>> @@ -41,6 +41,7 @@
>>>    #include <asm/set_memory.h>
>>>    #endif
>>>    
>>> +#include <drm/ttm/ttm_backup.h>
>>>    #include <drm/ttm/ttm_pool.h>
>>>    #include <drm/ttm/ttm_tt.h>
>>>    #include <drm/ttm/ttm_bo.h>
>>> @@ -58,6 +59,32 @@ struct ttm_pool_dma {
>>>    	unsigned long vaddr;
>>>    };
>>>    
>>> +/**
>>> + * struct ttm_pool_tt_restore - State representing restore from
>>> backup
>>> + * @alloced_pages: Total number of already allocated pages for the
>>> ttm_tt.
>>> + * @restored_pages: Number of (sub) pages restored from swap for
>>> this
>>> + *		     chunk of 1 << @order pages.
>>> + * @first_page: The ttm page ptr representing for @old_pages[0].
>>> + * @caching_divide: Page pointer where subsequent pages are
>>> cached.
>>> + * @old_pages: Backup copy of page pointers that were replaced by
>>> the new
>>> + *	       page allocation.
>>> + * @pool: The pool used for page allocation while restoring.
>>> + * @order: The order of the last page allocated while restoring.
>>> + *
>>> + * Recovery from backup might fail when we've recovered less than
>>> the
>>> + * full ttm_tt. In order not to loose any data (yet), keep
>>> information
>>> + * around that allows us to restart a failed ttm backup recovery.
>>> + */
>>> +struct ttm_pool_tt_restore {
>>> +	pgoff_t alloced_pages;
>>> +	pgoff_t restored_pages;
>>> +	struct page **first_page;
>>> +	struct page **caching_divide;
>>> +	struct ttm_pool *pool;
>>> +	unsigned int order;
>>> +	struct page *old_pages[];
>>> +};
>>> +
>>>    static unsigned long page_pool_size;
>>>    
>>>    MODULE_PARM_DESC(page_pool_size, "Number of pages in the
>>> WC/UC/DMA pool");
>>> @@ -354,11 +381,105 @@ static unsigned int
>>> ttm_pool_page_order(struct ttm_pool *pool, struct page *p)
>>>    	return p->private;
>>>    }
>>>    
>>> +/*
>>> + * To be able to insert single pages into backup directly,
>>> + * we need to split multi-order page allocations and make them
>>> look
>>> + * like single-page allocations.
>>> + */
>>> +static void ttm_pool_split_for_swap(struct ttm_pool *pool, struct
>>> page *p)
>>> +{
>>> +	unsigned int order = ttm_pool_page_order(pool, p);
>>> +	pgoff_t nr;
>>> +
>>> +	if (!order)
>>> +		return;
>>> +
>>> +	split_page(p, order);
>> What exactly should split_page() do here and why is that necessary?
>>
>> IIRC that function just updated the reference count and updated
>> things
>> like page owner tracking and memcg accounting. Which should both be
>> completely irrelevant here.
>>
>> Or do you just do that so that you can free each page individually?
> Yes, exactly. Like For a 2MiB page we'd otherwise have to allocate 2MiB
> of shmem backing storage, potentially from kernel reserves before we
> could actually free anything. Since (currently) the shmem objects we
> use are 4K-page only, this should make the process "allocate shmem and
> back up" much less likely to deplete the kernel memory reserves.

Ah, yes, that makes total sense now.

>
> Taking a step back and looking at potentially other solution, like
> direct insertion into the swap cache, then even if inserting a 2MiB
> page into the swap cache, vmscan would split it before writeback, and
> still it didn't appear very stable. So inserting one 4K page at a time
> seemed neccessary. If I were to take a guess that's why shmem, when
> configured for 2MiB pages, like with i915, also splits the pages before
> moving to swap-cache / writeback.
>
>
>>> +	nr = 1UL << order;
>>> +	while (nr--)
>>> +		(p++)->private = 0;
>>> +}
>>> +
>>> +/**
>>> + * DOC: Partial backup and restoration of a struct ttm_tt.
>>> + *
>>> + * Swapout using ttm_backup_backup_page() and swapin using
>>> + * ttm_backup_copy_page() may fail.
>>> + * The former most likely due to lack of swap-space or memory, the
>>> latter due
>>> + * to lack of memory or because of signal interruption during
>>> waits.
>>> + *
>>> + * Backup failure is easily handled by using a ttm_tt pages vector
>>> that holds
>>> + * both swap entries and page pointers. This has to be taken into
>>> account when
>>> + * restoring such a ttm_tt from backup, and when freeing it while
>>> backed up.
>>> + * When restoring, for simplicity, new pages are actually
>>> allocated from the
>>> + * pool and the contents of any old pages are copied in and then
>>> the old pages
>>> + * are released.
>>> + *
>>> + * For restoration failures, the struct ttm_pool_tt_restore holds
>>> sufficient state
>>> + * to be able to resume an interrupted restore, and that structure
>>> is freed once
>>> + * the restoration is complete. If the struct ttm_tt is destroyed
>>> while there
>>> + * is a valid struct ttm_pool_tt_restore attached, that is also
>>> properly taken
>>> + * care of.
>>> + */
>>> +
>>> +static bool ttm_pool_restore_valid(const struct
>>> ttm_pool_tt_restore *restore)
>>> +{
>>> +	return restore && restore->restored_pages < (1 << restore-
>>>> order);
>>> +}
>>> +
>>> +static int ttm_pool_restore_tt(struct ttm_pool_tt_restore
>>> *restore,
>>> +			       struct ttm_backup *backup,
>>> +			       struct ttm_operation_ctx *ctx)
>>> +{
>>> +	unsigned int i, nr = 1 << restore->order;
>>> +	int ret = 0;
>>> +
>>> +	if (!ttm_pool_restore_valid(restore))
>>> +		return 0;
>>> +
>>> +	for (i = restore->restored_pages; i < nr; ++i) {
>>> +		struct page *p = restore->old_pages[i];
>>> +
>>> +		if (ttm_backup_page_ptr_is_handle(p)) {
>>> +			unsigned long handle =
>>> ttm_backup_page_ptr_to_handle(p);
>>> +
>>> +			if (handle == 0)
>>> +				continue;
>>> +
>>> +			ret = ttm_backup_copy_page
>>> +				(backup, restore->first_page[i],
>>> +				 handle, ctx->interruptible);
>> That coding style looks really odd, I didn't even notice that it is a
>> function call initially.
>>
>> Maybe put everything under the if into a separate function.
> At a minimum, I'll fix up the formatting here.
>
>>> +			if (ret)
>>> +				break;
>>> +
>>> +			ttm_backup_drop(backup, handle);
>>> +		} else if (p) {
>>> +			/*
>>> +			 * We could probably avoid splitting the
>>> old page
>>> +			 * using clever logic, but ATM we don't
>>> care, as
>>> +			 * we prioritize releasing memory ASAP.
>>> Note that
>>> +			 * here, the old retained page is always
>>> write-back
>>> +			 * cached.
>>> +			 */
>>> +			ttm_pool_split_for_swap(restore->pool, p);
>>> +			copy_highpage(restore->first_page[i], p);
>>> +			__free_pages(p, 0);
>>> +		}
>>> +
>>> +		restore->restored_pages++;
>>> +		restore->old_pages[i] = NULL;
>>> +		cond_resched();
>> There is a push to remove cond_resched(), see here:
>> https://patchwork.kernel.org/project/linux-mm/patch/20231107230822.371443-30-ankur.a.arora@oracle.com/
>>
>> Not sure in which discussion that removal went, but IIRC we should
>> not
>> add any new users of it.
> I'll read up on that and remove if needed. I'm curious how / if
> voluntary preemption is going to be handled.

I didn't fully understand it either, but the push seems to be that
drivers, or in this case subsystems, are not supposed to mess with
cond_resched() any more and should just rely on preemptive kernels.

>>> +	}
>>> +
>>> +	return ret;
>>> +}
>>> +
>>>    /* Called when we got a page, either from a pool or newly
>>> allocated */
>>>    static int ttm_pool_page_allocated(struct ttm_pool *pool,
>>> unsigned int order,
>>>    				   struct page *p, dma_addr_t
>>> **dma_addr,
>>>    				   unsigned long *num_pages,
>>> -				   struct page ***pages)
>>> +				   struct page ***pages,
>>> +				   struct ttm_pool_tt_restore
>>> *restore)
>>>    {
>>>    	unsigned int i;
>>>    	int r;
>>> @@ -369,6 +490,16 @@ static int ttm_pool_page_allocated(struct
>>> ttm_pool *pool, unsigned int order,
>>>    			return r;
>>>    	}
>>>    
>>> +	if (restore) {
>>> +		memcpy(restore->old_pages, *pages,
>>> +		       (1 << order) * sizeof(*restore-
>>>> old_pages));
>>> +		memset(*pages, 0, (1 << order) * sizeof(**pages));
>>> +		restore->order = order;
>>> +		restore->restored_pages = 0;
>>> +		restore->first_page = *pages;
>>> +		restore->alloced_pages += 1UL << order;
>>> +	}
>>> +
>>>    	*num_pages -= 1 << order;
>>>    	for (i = 1 << order; i; --i, ++(*pages), ++p)
>>>    		**pages = p;
>>> @@ -394,22 +525,39 @@ static void ttm_pool_free_range(struct
>>> ttm_pool *pool, struct ttm_tt *tt,
>>>    				pgoff_t start_page, pgoff_t
>>> end_page)
>>>    {
>>>    	struct page **pages = &tt->pages[start_page];
>>> +	struct ttm_backup *backup = tt->backup;
>>>    	unsigned int order;
>>>    	pgoff_t i, nr;
>>>    
>>>    	for (i = start_page; i < end_page; i += nr, pages += nr) {
>>>    		struct ttm_pool_type *pt = NULL;
>>> +		struct page *p = *pages;
>>> +
>>> +		if (ttm_backup_page_ptr_is_handle(p)) {
>>> +			unsigned long handle =
>>> ttm_backup_page_ptr_to_handle(p);
>>> +
>>> +			nr = 1;
>>> +			if (handle != 0)
>>> +				ttm_backup_drop(backup, handle);
>>> +			continue;
>>> +		}
>>> +
>>> +		if (pool) {
>>> +			order = ttm_pool_page_order(pool, p);
>>> +			nr = (1UL << order);
>>> +			if (tt->dma_address)
>>> +				ttm_pool_unmap(pool, tt-
>>>> dma_address[i], nr);
>>>    
>>> -		order = ttm_pool_page_order(pool, *pages);
>>> -		nr = (1UL << order);
>>> -		if (tt->dma_address)
>>> -			ttm_pool_unmap(pool, tt->dma_address[i],
>>> nr);
>>> +			pt = ttm_pool_select_type(pool, caching,
>>> order);
>>> +		} else {
>>> +			order = p->private;
>>> +			nr = (1UL << order);
>>> +		}
>>>    
>>> -		pt = ttm_pool_select_type(pool, caching, order);
>>>    		if (pt)
>>> -			ttm_pool_type_give(pt, *pages);
>>> +			ttm_pool_type_give(pt, p);
>>>    		else
>>> -			ttm_pool_free_page(pool, caching, order,
>>> *pages);
>>> +			ttm_pool_free_page(pool, caching, order,
>>> p);
>>>    	}
>>>    }
>>>    
>>> @@ -453,9 +601,36 @@ int ttm_pool_alloc(struct ttm_pool *pool,
>>> struct ttm_tt *tt,
>>>    	else
>>>    		gfp_flags |= GFP_HIGHUSER;
>>>    
>>> -	for (order = min_t(unsigned int, MAX_PAGE_ORDER,
>>> __fls(num_pages));
>>> -	     num_pages;
>>> -	     order = min_t(unsigned int, order, __fls(num_pages)))
>>> {
>>> +	order = min_t(unsigned int, MAX_PAGE_ORDER,
>>> __fls(num_pages));
>>> +
>>> +	if (tt->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP) {
>>> +		if (!tt->restore) {
>>> +			gfp_t gfp = GFP_KERNEL | __GFP_NOWARN;
>>> +
>>> +			if (ctx->gfp_retry_mayfail)
>>> +				gfp |= __GFP_RETRY_MAYFAIL;
>>> +
>>> +			tt->restore =
>>> +				kvzalloc(struct_size(tt->restore,
>>> old_pages,
>>> +						     (size_t)1 <<
>>> order), gfp);
>>> +			if (!tt->restore)
>>> +				return -ENOMEM;
>>> +		} else if (ttm_pool_restore_valid(tt->restore)) {
>>> +			struct ttm_pool_tt_restore *restore = tt-
>>>> restore;
>>> +
>>> +			num_pages -= restore->alloced_pages;
>>> +			order = min_t(unsigned int, order,
>>> __fls(num_pages));
>>> +			pages += restore->alloced_pages;
>>> +			r = ttm_pool_restore_tt(restore, tt-
>>>> backup, ctx);
>>> +			if (r)
>>> +				return r;
>>> +			caching = restore->caching_divide;
>>> +		}
>>> +
>>> +		tt->restore->pool = pool;
>>> +	}
>> Hui? Why is that part of the allocation function now?
>>
>> At bare minimum I would expect that this is a new function.
> It's because we now have partially backed up tts, so the restore is
> interleaved on a per-page basis, replacing the backup handles with
> page-pointers. I'll see if I can separate out at least the
> initialization here.

Yeah, that kind of makes sense.

My expectation was just that we would now have explicit
ttm_pool_swapout() and ttm_pool_swapin() functions.
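
E.g. something like the following, purely as an illustration (the name
and the split are hypothetical, not part of the series):

int ttm_pool_swapin(struct ttm_pool *pool, struct ttm_tt *tt,
		    struct ttm_operation_ctx *ctx)
{
	if (!(tt->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP))
		return 0;

	/*
	 * Set up tt->restore and then reuse the normal allocation path,
	 * restoring the backed-up contents chunk by chunk as new pages
	 * are allocated.
	 */
	return ttm_pool_alloc(pool, tt, ctx);
}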

Christian.

>
> /Thomas
>
>
>> Regards,
>> Christian.
>>
>>> +
>>> +	for (; num_pages; order = min_t(unsigned int, order,
>>> __fls(num_pages))) {
>>>    		struct ttm_pool_type *pt;
>>>    
>>>    		page_caching = tt->caching;
>>> @@ -472,11 +647,19 @@ int ttm_pool_alloc(struct ttm_pool *pool,
>>> struct ttm_tt *tt,
>>>    				r = ttm_pool_page_allocated(pool,
>>> order, p,
>>>    							
>>> &dma_addr,
>>>    							
>>> &num_pages,
>>> -							
>>> &pages);
>>> +							
>>> &pages,
>>> +							    tt-
>>>> restore);
>>>    				if (r)
>>>    					goto error_free_page;
>>>    
>>>    				caching = pages;
>>> +				if (ttm_pool_restore_valid(tt-
>>>> restore)) {
>>> +					r =
>>> ttm_pool_restore_tt(tt->restore, tt->backup,
>>> +								ct
>>> x);
>>> +					if (r)
>>> +						goto
>>> error_free_all;
>>> +				}
>>> +
>>>    				if (num_pages < (1 << order))
>>>    					break;
>>>    
>>> @@ -496,9 +679,17 @@ int ttm_pool_alloc(struct ttm_pool *pool,
>>> struct ttm_tt *tt,
>>>    				caching = pages;
>>>    			}
>>>    			r = ttm_pool_page_allocated(pool, order,
>>> p, &dma_addr,
>>> -						    &num_pages,
>>> &pages);
>>> +						    &num_pages,
>>> &pages,
>>> +						    tt->restore);
>>>    			if (r)
>>>    				goto error_free_page;
>>> +
>>> +			if (ttm_pool_restore_valid(tt->restore)) {
>>> +				r = ttm_pool_restore_tt(tt-
>>>> restore, tt->backup, ctx);
>>> +				if (r)
>>> +					goto error_free_all;
>>> +			}
>>> +
>>>    			if (PageHighMem(p))
>>>    				caching = pages;
>>>    		}
>>> @@ -517,12 +708,26 @@ int ttm_pool_alloc(struct ttm_pool *pool,
>>> struct ttm_tt *tt,
>>>    	if (r)
>>>    		goto error_free_all;
>>>    
>>> +	if (tt->restore) {
>>> +		kvfree(tt->restore);
>>> +		tt->restore = NULL;
>>> +	}
>>> +
>>> +	if (tt->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP)
>>> +		tt->page_flags &= ~(TTM_TT_FLAG_PRIV_BACKED_UP |
>>> +				    TTM_TT_FLAG_SWAPPED);
>>> +
>>>    	return 0;
>>>    
>>>    error_free_page:
>>>    	ttm_pool_free_page(pool, page_caching, order, p);
>>>    
>>>    error_free_all:
>>> +	if (tt->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP) {
>>> +		tt->restore->caching_divide = caching;
>>> +		return r;
>>> +	}
>>> +
>>>    	num_pages = tt->num_pages - num_pages;
>>>    	caching_divide = caching - tt->pages;
>>>    	ttm_pool_free_range(pool, tt, tt->caching, 0,
>>> caching_divide);
>>> @@ -549,6 +754,171 @@ void ttm_pool_free(struct ttm_pool *pool,
>>> struct ttm_tt *tt)
>>>    }
>>>    EXPORT_SYMBOL(ttm_pool_free);
>>>    
>>> +/**
>>> + * ttm_pool_release_backed_up() - Release content of a swapped-out
>>> struct ttm_tt
>>> + * @tt: The struct ttm_tt.
>>> + *
>>> + * Release handles with associated content or any remaining pages
>>> of
>>> + * a backed-up struct ttm_tt.
>>> + */
>>> +void ttm_pool_release_backed_up(struct ttm_tt *tt)
>>> +{
>>> +	struct ttm_backup *backup = tt->backup;
>>> +	struct ttm_pool_tt_restore *restore;
>>> +	pgoff_t i, start_page = 0;
>>> +	unsigned long handle;
>>> +
>>> +	if (!(tt->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP))
>>> +		return;
>>> +
>>> +	restore = tt->restore;
>>> +
>>> +	if (ttm_pool_restore_valid(restore)) {
>>> +		pgoff_t nr = 1UL << restore->order;
>>> +
>>> +		for (i = restore->restored_pages; i < nr; ++i) {
>>> +			struct page *p = restore->old_pages[i];
>>> +
>>> +			if (ttm_backup_page_ptr_is_handle(p)) {
>>> +				handle =
>>> ttm_backup_page_ptr_to_handle(p);
>>> +				if (handle == 0)
>>> +					continue;
>>> +
>>> +				ttm_backup_drop(backup, handle);
>>> +			} else if (p) {
>>> +				ttm_pool_split_for_swap(restore-
>>>> pool, p);
>>> +				__free_pages(p, 0);
>>> +			}
>>> +		}
>>> +	}
>>> +
>>> +	if (restore) {
>>> +		pgoff_t mid = restore->caching_divide - tt->pages;
>>> +
>>> +		start_page = restore->alloced_pages;
>>> +		/* Pages that might be dma-mapped and non-cached
>>> */
>>> +		ttm_pool_free_range(restore->pool, tt, tt-
>>>> caching,
>>> +				    0, mid);
>>> +		/* Pages that might be dma-mapped but cached */
>>> +		ttm_pool_free_range(restore->pool, tt, ttm_cached,
>>> +				    mid, restore->alloced_pages);
>>> +	}
>>> +
>>> +	/* Shrunken pages. Cached and not dma-mapped. */
>>> +	ttm_pool_free_range(NULL, tt, ttm_cached, start_page, tt-
>>>> num_pages);
>>> +
>>> +	if (restore) {
>>> +		kvfree(restore);
>>> +		tt->restore = NULL;
>>> +	}
>>> +
>>> +	tt->page_flags &= ~(TTM_TT_FLAG_PRIV_BACKED_UP |
>>> TTM_TT_FLAG_SWAPPED);
>>> +}
>>> +
>>> +/**
>>> + * ttm_pool_backup_tt() - Back up or purge a struct ttm_tt
>>> + * @pool: The pool used when allocating the struct ttm_tt.
>>> + * @ttm: The struct ttm_tt.
>>> + * @flags: Flags to govern the backup behaviour.
>>> + *
>>> + * Back up or purge a struct ttm_tt. If @purge is true, then
>>> + * all pages will be freed directly to the system rather than to
>>> the pool
>>> + * they were allocated from, making the function behave similarly
>>> to
>>> + * ttm_pool_free(). If @purge is false the pages will be backed up
>>> instead,
>>> + * exchanged for handles.
>>> + * A subsequent call to ttm_pool_alloc() will then read back the
>>> content and
>>> + * a subsequent call to ttm_pool_release_shrunken() will drop it.
>>> + * If backup of a page fails for whatever reason, @ttm will still
>>> be
>>> + * partially backed up, retaining those pages for which backup
>>> fails.
>>> + *
>>> + * Return: Number of pages actually backed up or freed, or
>>> negative
>>> + * error code on error.
>>> + */
>>> +long ttm_pool_backup_tt(struct ttm_pool *pool, struct ttm_tt *ttm,
>>> +			const struct ttm_backup_flags *flags)
>>> +{
>>> +	struct ttm_backup *backup = ttm->backup;
>>> +	struct page *page;
>>> +	unsigned long handle;
>>> +	gfp_t alloc_gfp;
>>> +	gfp_t gfp;
>>> +	int ret = 0;
>>> +	pgoff_t shrunken = 0;
>>> +	pgoff_t i, num_pages;
>>> +
>>> +	if ((!ttm_backup_bytes_avail() && !flags->purge) ||
>>> +	    pool->use_dma_alloc ||
>>> +	    (ttm->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP))
>>> +		return -EBUSY;
>>> +
>>> +#ifdef CONFIG_X86
>>> +	/* Anything returned to the system needs to be cached. */
>>> +	if (ttm->caching != ttm_cached)
>>> +		set_pages_array_wb(ttm->pages, ttm->num_pages);
>>> +#endif
>>> +
>>> +	if (ttm->dma_address || flags->purge) {
>>> +		for (i = 0; i < ttm->num_pages; i += num_pages) {
>>> +			unsigned int order;
>>> +
>>> +			page = ttm->pages[i];
>>> +			if (unlikely(!page)) {
>>> +				num_pages = 1;
>>> +				continue;
>>> +			}
>>> +
>>> +			order = ttm_pool_page_order(pool, page);
>>> +			num_pages = 1UL << order;
>>> +			if (ttm->dma_address)
>>> +				ttm_pool_unmap(pool, ttm-
>>>> dma_address[i],
>>> +					       num_pages);
>>> +			if (flags->purge) {
>>> +				shrunken += num_pages;
>>> +				page->private = 0;
>>> +				__free_pages(page, order);
>>> +				memset(ttm->pages + i, 0,
>>> +				       num_pages * sizeof(*ttm-
>>>> pages));
>>> +			}
>>> +		}
>>> +	}
>>> +
>>> +	if (flags->purge)
>>> +		return shrunken;
>>> +
>>> +	if (pool->use_dma32)
>>> +		gfp = GFP_DMA32;
>>> +	else
>>> +		gfp = GFP_HIGHUSER;
>>> +
>>> +	alloc_gfp = GFP_KERNEL | __GFP_HIGH | __GFP_NOWARN |
>>> __GFP_RETRY_MAYFAIL;
>>> +
>>> +	for (i = 0; i < ttm->num_pages; ++i) {
>>> +		page = ttm->pages[i];
>>> +		if (unlikely(!page))
>>> +			continue;
>>> +
>>> +		ttm_pool_split_for_swap(pool, page);
>>> +
>>> +		handle = ttm_backup_backup_page(backup, page,
>>> flags->writeback, i,
>>> +						gfp, alloc_gfp);
>>> +		if (handle) {
>>> +			ttm->pages[i] =
>>> ttm_backup_handle_to_page_ptr(handle);
>>> +			put_page(page);
>>> +			shrunken++;
>>> +		} else {
>>> +			/* We allow partially shrunken tts */
>>> +			ret = -ENOMEM;
>>> +			break;
>>> +		}
>>> +	}
>>> +
>>> +	if (shrunken)
>>> +		ttm->page_flags |= (TTM_TT_FLAG_PRIV_BACKED_UP |
>>> +				    TTM_TT_FLAG_SWAPPED);
>>> +
>>> +	return shrunken ? shrunken : ret;
>>> +}
>>> +
>>>    /**
>>>     * ttm_pool_init - Initialize a pool
>>>     *
>>> diff --git a/drivers/gpu/drm/ttm/ttm_tt.c
>>> b/drivers/gpu/drm/ttm/ttm_tt.c
>>> index 3baf215eca23..dd4eabe4ad79 100644
>>> --- a/drivers/gpu/drm/ttm/ttm_tt.c
>>> +++ b/drivers/gpu/drm/ttm/ttm_tt.c
>>> @@ -40,6 +40,7 @@
>>>    #include <drm/drm_cache.h>
>>>    #include <drm/drm_device.h>
>>>    #include <drm/drm_util.h>
>>> +#include <drm/ttm/ttm_backup.h>
>>>    #include <drm/ttm/ttm_bo.h>
>>>    #include <drm/ttm/ttm_tt.h>
>>>    
>>> @@ -158,6 +159,8 @@ static void ttm_tt_init_fields(struct ttm_tt
>>> *ttm,
>>>    	ttm->swap_storage = NULL;
>>>    	ttm->sg = bo->sg;
>>>    	ttm->caching = caching;
>>> +	ttm->restore = NULL;
>>> +	ttm->backup = NULL;
>>>    }
>>>    
>>>    int ttm_tt_init(struct ttm_tt *ttm, struct ttm_buffer_object *bo,
>>> @@ -182,6 +185,12 @@ void ttm_tt_fini(struct ttm_tt *ttm)
>>>    		fput(ttm->swap_storage);
>>>    	ttm->swap_storage = NULL;
>>>    
>>> +	ttm_pool_release_backed_up(ttm);
>>> +	if (ttm->backup) {
>>> +		ttm_backup_fini(ttm->backup);
>>> +		ttm->backup = NULL;
>>> +	}
>>> +
>>>    	if (ttm->pages)
>>>    		kvfree(ttm->pages);
>>>    	else
>>> @@ -253,6 +262,34 @@ int ttm_tt_swapin(struct ttm_tt *ttm)
>>>    }
>>>    EXPORT_SYMBOL_FOR_TESTS_ONLY(ttm_tt_swapin);
>>>    
>>> +/**
>>> + * ttm_tt_backup() - Helper to back up a struct ttm_tt.
>>> + * @bdev: The TTM device.
>>> + * @tt: The struct ttm_tt.
>>> + * @flags: Flags that govern the backup behaviour.
>>> + *
>>> + * Update the page accounting and call ttm_pool_shrink_tt to free
>>> pages
>>> + * or back them up.
>>> + *
>>> + * Return: Number of pages freed or swapped out, or negative error
>>> code on
>>> + * error.
>>> + */
>>> +long ttm_tt_backup(struct ttm_device *bdev, struct ttm_tt *tt,
>>> +		   const struct ttm_backup_flags flags)
>>> +{
>>> +	long ret;
>>> +
>>> +	if (WARN_ON(IS_ERR_OR_NULL(tt->backup)))
>>> +		return 0;
>>> +
>>> +	ret = ttm_pool_backup_tt(&bdev->pool, tt, &flags);
>>> +
>>> +	if (ret > 0)
>>> +		tt->page_flags &= ~TTM_TT_FLAG_PRIV_POPULATED;
>>> +
>>> +	return ret;
>>> +}
>>> +
>>>    /**
>>>     * ttm_tt_swapout - swap out tt object
>>>     *
>>> diff --git a/include/drm/ttm/ttm_pool.h
>>> b/include/drm/ttm/ttm_pool.h
>>> index 160d954a261e..3112a4be835c 100644
>>> --- a/include/drm/ttm/ttm_pool.h
>>> +++ b/include/drm/ttm/ttm_pool.h
>>> @@ -33,6 +33,7 @@
>>>    
>>>    struct device;
>>>    struct seq_file;
>>> +struct ttm_backup_flags;
>>>    struct ttm_operation_ctx;
>>>    struct ttm_pool;
>>>    struct ttm_tt;
>>> @@ -89,6 +90,11 @@ void ttm_pool_fini(struct ttm_pool *pool);
>>>    
>>>    int ttm_pool_debugfs(struct ttm_pool *pool, struct seq_file *m);
>>>    
>>> +void ttm_pool_release_backed_up(struct ttm_tt *tt);
>>> +
>>> +long ttm_pool_backup_tt(struct ttm_pool *pool, struct ttm_tt *ttm,
>>> +			const struct ttm_backup_flags *flags);
>>> +
>>>    int ttm_pool_mgr_init(unsigned long num_pages);
>>>    void ttm_pool_mgr_fini(void);
>>>    
>>> diff --git a/include/drm/ttm/ttm_tt.h b/include/drm/ttm/ttm_tt.h
>>> index 991edafdb2dd..6ca2fc7b2a26 100644
>>> --- a/include/drm/ttm/ttm_tt.h
>>> +++ b/include/drm/ttm/ttm_tt.h
>>> @@ -32,11 +32,13 @@
>>>    #include <drm/ttm/ttm_caching.h>
>>>    #include <drm/ttm/ttm_kmap_iter.h>
>>>    
>>> +struct ttm_backup;
>>>    struct ttm_device;
>>>    struct ttm_tt;
>>>    struct ttm_resource;
>>>    struct ttm_buffer_object;
>>>    struct ttm_operation_ctx;
>>> +struct ttm_pool_tt_restore;
>>>    
>>>    /**
>>>     * struct ttm_tt - This is a structure holding the pages,
>>> caching- and aperture
>>> @@ -88,6 +90,9 @@ struct ttm_tt {
>>>    	 * TTM_TT_FLAG_PRIV_POPULATED: TTM internal only. DO NOT
>>> USE. This is
>>>    	 * set by TTM after ttm_tt_populate() has successfully
>>> returned, and is
>>>    	 * then unset when TTM calls ttm_tt_unpopulate().
>>> +	 *
>>> +	 * TTM_TT_FLAG_PRIV_BACKED_UP: TTM internal only. This is
>>> set if the
>>> +	 * struct ttm_tt has been (possibly partially) backed up.
>>>    	 */
>>>    #define TTM_TT_FLAG_SWAPPED		BIT(0)
>>>    #define TTM_TT_FLAG_ZERO_ALLOC		BIT(1)
>>> @@ -96,6 +101,7 @@ struct ttm_tt {
>>>    #define TTM_TT_FLAG_DECRYPTED		BIT(4)
>>>    
>>>    #define TTM_TT_FLAG_PRIV_POPULATED	BIT(5)
>>> +#define TTM_TT_FLAG_PRIV_BACKED_UP	BIT(6)
>>>    	uint32_t page_flags;
>>>    	/** @num_pages: Number of pages in the page array. */
>>>    	uint32_t num_pages;
>>> @@ -105,11 +111,20 @@ struct ttm_tt {
>>>    	dma_addr_t *dma_address;
>>>    	/** @swap_storage: Pointer to shmem struct file for swap
>>> storage. */
>>>    	struct file *swap_storage;
>>> +	/**
>>> +	 * @backup: Pointer to backup struct for backed up tts.
>>> +	 * Could be unified with @swap_storage. Meanwhile, the
>>> driver's
>>> +	 * ttm_tt_create() callback is responsible for assigning
>>> +	 * this field.
>>> +	 */
>>> +	struct ttm_backup *backup;
>>>    	/**
>>>    	 * @caching: The current caching state of the pages, see
>>> enum
>>>    	 * ttm_caching.
>>>    	 */
>>>    	enum ttm_caching caching;
>>> +	/** @restore: Partial restoration from backup state. TTM
>>> private */
>>> +	struct ttm_pool_tt_restore *restore;
>>>    };
>>>    
>>>    /**
>>> @@ -131,7 +146,7 @@ static inline bool ttm_tt_is_populated(struct
>>> ttm_tt *tt)
>>>    
>>>    static inline bool ttm_tt_is_swapped(const struct ttm_tt *tt)
>>>    {
>>> -	return tt->page_flags & TTM_TT_FLAG_SWAPPED;
>>> +	return tt->page_flags & (TTM_TT_FLAG_SWAPPED |
>>> TTM_TT_FLAG_PRIV_BACKED_UP);
>>>    }
>>>    
>>>    /**
>>> @@ -235,6 +250,21 @@ void ttm_tt_mgr_init(unsigned long num_pages,
>>> unsigned long num_dma32_pages);
>>>    struct ttm_kmap_iter *ttm_kmap_iter_tt_init(struct
>>> ttm_kmap_iter_tt *iter_tt,
>>>    					    struct ttm_tt *tt);
>>>    unsigned long ttm_tt_pages_limit(void);
>>> +
>>> +/**
>>> + * struct ttm_backup_flags - Flags to govern backup behaviour.
>>> + * @purge: Free pages without backing up. Bypass pools.
>>> + * @writeback: Attempt to copy contents directly to swap space,
>>> even
>>> + * if that means blocking on writes to external memory.
>>> + */
>>> +struct ttm_backup_flags {
>>> +	u32 purge : 1;
>>> +	u32 writeback : 1;
>>> +};
>>> +
>>> +long ttm_tt_backup(struct ttm_device *bdev, struct ttm_tt *tt,
>>> +		   const struct ttm_backup_flags flags);
>>> +
>>>    #if IS_ENABLED(CONFIG_AGP)
>>>    #include <linux/agp_backend.h>
>>>    


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v14 3/8] drm/ttm/pool: Provide a helper to shrink pages
  2024-12-03 14:51       ` Christian König
@ 2024-12-03 15:50         ` Thomas Hellström
  2024-12-03 16:20           ` Christian König
  2024-12-18 10:15         ` Thomas Hellström
  1 sibling, 1 reply; 54+ messages in thread
From: Thomas Hellström @ 2024-12-03 15:50 UTC (permalink / raw)
  To: Christian König, intel-xe
  Cc: Somalapuram Amaranath, Matthew Brost, dri-devel, Paulo Zanoni,
	Simona Vetter

On Tue, 2024-12-03 at 15:51 +0100, Christian König wrote:
> Am 03.12.24 um 14:42 schrieb Thomas Hellström:
> > On Tue, 2024-12-03 at 14:12 +0100, Christian König wrote:
> > > Am 15.11.24 um 16:01 schrieb Thomas Hellström:
> > > > Provide a helper to shrink ttm_tt page-vectors on a per-page
> > > > basis. A ttm_backup backend could then in theory get away with
> > > > allocating a single temporary page for each struct ttm_tt.
> > > > 
> > > > This is accomplished by splitting larger pages before trying to
> > > > back them up.
> > > > 
> > > > In the future we could allow ttm_backup to handle backing up
> > > > large pages as well, but currently there's no benefit in
> > > > doing that, since the shmem backup backend would have to
> > > > split those anyway to avoid allocating too much temporary
> > > > memory, and if the backend instead inserts pages into the
> > > > swap-cache, those are split on reclaim by the core.
> > > > 
> > > > Due to potential backup- and recover errors, allow partially
> > > > swapped
> > > > out struct ttm_tt's, although mark them as swapped out stopping
> > > > them
> > > > from being swapped out a second time. More details in the
> > > > ttm_pool.c
> > > > DOC section.
> > > > 
> > > > v2:
> > > > - A couple of cleanups and error fixes in ttm_pool_back_up_tt.
> > > > - s/back_up/backup/
> > > > - Add a writeback parameter to the exported interface.
> > > > v8:
> > > > - Use a struct for flags for readability (Matt Brost)
> > > > - Address misc other review comments (Matt Brost)
> > > > v9:
> > > > - Update the kerneldoc for the ttm_tt::backup field.
> > > > v10:
> > > > - Rebase.
> > > > v13:
> > > > - Rebase on ttm_backup interface change. Update kerneldoc.
> > > > - Rebase and adjust ttm_tt_is_swapped().
> > > > 
> > > > Cc: Christian König<christian.koenig@amd.com>
> > > > Cc: Somalapuram Amaranath<Amaranath.Somalapuram@amd.com>
> > > > Cc: Matthew Brost<matthew.brost@intel.com>
> > > > Cc:<dri-devel@lists.freedesktop.org>
> > > > Signed-off-by: Thomas
> > > > Hellström<thomas.hellstrom@linux.intel.com>
> > > > Reviewed-by: Matthew Brost<matthew.brost@intel.com>
> > > > ---
> > > >    drivers/gpu/drm/ttm/ttm_pool.c | 396
> > > > +++++++++++++++++++++++++++++++--
> > > >    drivers/gpu/drm/ttm/ttm_tt.c   |  37 +++
> > > >    include/drm/ttm/ttm_pool.h     |   6 +
> > > >    include/drm/ttm/ttm_tt.h       |  32 ++-
> > > >    4 files changed, 457 insertions(+), 14 deletions(-)
> > > > 
> > > > diff --git a/drivers/gpu/drm/ttm/ttm_pool.c
> > > > b/drivers/gpu/drm/ttm/ttm_pool.c
> > > > index 8504dbe19c1a..f58864439edb 100644
> > > > --- a/drivers/gpu/drm/ttm/ttm_pool.c
> > > > +++ b/drivers/gpu/drm/ttm/ttm_pool.c
> > > > @@ -41,6 +41,7 @@
> > > >    #include <asm/set_memory.h>
> > > >    #endif
> > > >    
> > > > +#include <drm/ttm/ttm_backup.h>
> > > >    #include <drm/ttm/ttm_pool.h>
> > > >    #include <drm/ttm/ttm_tt.h>
> > > >    #include <drm/ttm/ttm_bo.h>
> > > > @@ -58,6 +59,32 @@ struct ttm_pool_dma {
> > > >    	unsigned long vaddr;
> > > >    };
> > > >    
> > > > +/**
> > > > + * struct ttm_pool_tt_restore - State representing restore
> > > > from
> > > > backup
> > > > + * @alloced_pages: Total number of already allocated pages for
> > > > the
> > > > ttm_tt.
> > > > + * @restored_pages: Number of (sub) pages restored from swap
> > > > for
> > > > this
> > > > + *		     chunk of 1 << @order pages.
> > > > + * @first_page: The ttm page ptr representing for
> > > > @old_pages[0].
> > > > + * @caching_divide: Page pointer where subsequent pages are
> > > > cached.
> > > > + * @old_pages: Backup copy of page pointers that were replaced
> > > > by
> > > > the new
> > > > + *	       page allocation.
> > > > + * @pool: The pool used for page allocation while restoring.
> > > > + * @order: The order of the last page allocated while
> > > > restoring.
> > > > + *
> > > > + * Recovery from backup might fail when we've recovered less
> > > > than
> > > > the
> > > > + * full ttm_tt. In order not to lose any data (yet), keep
> > > > information
> > > > + * around that allows us to restart a failed ttm backup
> > > > recovery.
> > > > + */
> > > > +struct ttm_pool_tt_restore {
> > > > +	pgoff_t alloced_pages;
> > > > +	pgoff_t restored_pages;
> > > > +	struct page **first_page;
> > > > +	struct page **caching_divide;
> > > > +	struct ttm_pool *pool;
> > > > +	unsigned int order;
> > > > +	struct page *old_pages[];
> > > > +};
> > > > +
> > > >    static unsigned long page_pool_size;
> > > >    
> > > >    MODULE_PARM_DESC(page_pool_size, "Number of pages in the
> > > > WC/UC/DMA pool");
> > > > @@ -354,11 +381,105 @@ static unsigned int
> > > > ttm_pool_page_order(struct ttm_pool *pool, struct page *p)
> > > >    	return p->private;
> > > >    }
> > > >    
> > > > +/*
> > > > + * To be able to insert single pages into backup directly,
> > > > + * we need to split multi-order page allocations and make them
> > > > look
> > > > + * like single-page allocations.
> > > > + */
> > > > +static void ttm_pool_split_for_swap(struct ttm_pool *pool,
> > > > struct
> > > > page *p)
> > > > +{
> > > > +	unsigned int order = ttm_pool_page_order(pool, p);
> > > > +	pgoff_t nr;
> > > > +
> > > > +	if (!order)
> > > > +		return;
> > > > +
> > > > +	split_page(p, order);
> > > What exactly should split_page() do here and why is that
> > > necessary?
> > > 
> > > IIRC that function just updated the reference count and updated
> > > things
> > > like page owner tracking and memcg accounting. Which should both
> > > be
> > > completely irrelevant here.
> > > 
> > > Or do you just do that so that you can free each page
> > > individually?
> > Yes, exactly. For a 2MiB page we'd otherwise have to allocate 2MiB
> > of shmem backing storage, potentially from kernel reserves, before
> > we could actually free anything. Since (currently) the shmem objects
> > we use are 4K-page only, this should make the process "allocate shmem
> > and back up" much less likely to deplete the kernel memory reserves.
> 
> Ah, yes, that makes total sense now.
> 
> > 
> > Taking a step back and looking at potential other solutions, like
> > direct insertion into the swap cache: even when inserting a 2MiB
> > page into the swap cache, vmscan would split it before writeback,
> > and still it didn't appear very stable. So inserting one 4K page at
> > a time seemed necessary. If I were to take a guess, that's why shmem,
> > when configured for 2MiB pages, like with i915, also splits the pages
> > before moving to swap-cache / writeback.
> > 
> > 
> > > > +	nr = 1UL << order;
> > > > +	while (nr--)
> > > > +		(p++)->private = 0;
> > > > +}
> > > > +
> > > > +/**
> > > > + * DOC: Partial backup and restoration of a struct ttm_tt.
> > > > + *
> > > > + * Swapout using ttm_backup_backup_page() and swapin using
> > > > + * ttm_backup_copy_page() may fail.
> > > > + * The former most likely due to lack of swap-space or memory,
> > > > the
> > > > latter due
> > > > + * to lack of memory or because of signal interruption during
> > > > waits.
> > > > + *
> > > > + * Backup failure is easily handled by using a ttm_tt pages
> > > > vector
> > > > that holds
> > > > + * both swap entries and page pointers. This has to be taken
> > > > into
> > > > account when
> > > > + * restoring such a ttm_tt from backup, and when freeing it
> > > > while
> > > > backed up.
> > > > + * When restoring, for simplicity, new pages are actually
> > > > allocated from the
> > > > + * pool and the contents of any old pages are copied in and
> > > > then
> > > > the old pages
> > > > + * are released.
> > > > + *
> > > > + * For restoration failures, the struct ttm_pool_tt_restore
> > > > holds
> > > > sufficient state
> > > > + * to be able to resume an interrupted restore, and that
> > > > structure
> > > > is freed once
> > > > + * the restoration is complete. If the struct ttm_tt is
> > > > destroyed
> > > > while there
> > > > + * is a valid struct ttm_pool_tt_restore attached, that is
> > > > also
> > > > properly taken
> > > > + * care of.
> > > > + */
> > > > +
> > > > +static bool ttm_pool_restore_valid(const struct
> > > > ttm_pool_tt_restore *restore)
> > > > +{
> > > > +	return restore && restore->restored_pages < (1 <<
> > > > restore-
> > > > > order);
> > > > +}
> > > > +
> > > > +static int ttm_pool_restore_tt(struct ttm_pool_tt_restore
> > > > *restore,
> > > > +			       struct ttm_backup *backup,
> > > > +			       struct ttm_operation_ctx *ctx)
> > > > +{
> > > > +	unsigned int i, nr = 1 << restore->order;
> > > > +	int ret = 0;
> > > > +
> > > > +	if (!ttm_pool_restore_valid(restore))
> > > > +		return 0;
> > > > +
> > > > +	for (i = restore->restored_pages; i < nr; ++i) {
> > > > +		struct page *p = restore->old_pages[i];
> > > > +
> > > > +		if (ttm_backup_page_ptr_is_handle(p)) {
> > > > +			unsigned long handle =
> > > > ttm_backup_page_ptr_to_handle(p);
> > > > +
> > > > +			if (handle == 0)
> > > > +				continue;
> > > > +
> > > > +			ret = ttm_backup_copy_page
> > > > +				(backup, restore-
> > > > >first_page[i],
> > > > +				 handle, ctx->interruptible);
> > > That coding style looks really odd, I didn't even notice that it
> > > is a
> > > function call initially.
> > > 
> > > Maybe put everything under the if into a separate function.
> > At a minimum, I'll fix up the formatting here.
> > 
> > > > +			if (ret)
> > > > +				break;
> > > > +
> > > > +			ttm_backup_drop(backup, handle);
> > > > +		} else if (p) {
> > > > +			/*
> > > > +			 * We could probably avoid splitting
> > > > the
> > > > old page
> > > > +			 * using clever logic, but ATM we
> > > > don't
> > > > care, as
> > > > +			 * we prioritize releasing memory
> > > > ASAP.
> > > > Note that
> > > > +			 * here, the old retained page is
> > > > always
> > > > write-back
> > > > +			 * cached.
> > > > +			 */
> > > > +			ttm_pool_split_for_swap(restore->pool,
> > > > p);
> > > > +			copy_highpage(restore->first_page[i],
> > > > p);
> > > > +			__free_pages(p, 0);
> > > > +		}
> > > > +
> > > > +		restore->restored_pages++;
> > > > +		restore->old_pages[i] = NULL;
> > > > +		cond_resched();
> > > There is a push to remove cond_resched(), see here:
> > > https://patchwork.kernel.org/project/linux-mm/patch/20231107230822.371443-30-ankur.a.arora@oracle.com/
> > > 
> > > Not sure in which discussion that removal went, but IIRC we
> > > should
> > > not
> > > add any new users of it.
> > I'll read up on that and remove if needed. I'm curious how / if
> > voluntary preemption is going to be handled.
> 
> I didn't fully understand it either, but the push kind of seems to be
> that drivers, or in this case subsystems, are not supposed to mess with
> cond_resched() any more and should just rely on preemptive kernels.
> 
> > > > +	}
> > > > +
> > > > +	return ret;
> > > > +}
> > > > +
> > > >    /* Called when we got a page, either from a pool or newly
> > > > allocated */
> > > >    static int ttm_pool_page_allocated(struct ttm_pool *pool,
> > > > unsigned int order,
> > > >    				   struct page *p, dma_addr_t
> > > > **dma_addr,
> > > >    				   unsigned long *num_pages,
> > > > -				   struct page ***pages)
> > > > +				   struct page ***pages,
> > > > +				   struct ttm_pool_tt_restore
> > > > *restore)
> > > >    {
> > > >    	unsigned int i;
> > > >    	int r;
> > > > @@ -369,6 +490,16 @@ static int ttm_pool_page_allocated(struct
> > > > ttm_pool *pool, unsigned int order,
> > > >    			return r;
> > > >    	}
> > > >    
> > > > +	if (restore) {
> > > > +		memcpy(restore->old_pages, *pages,
> > > > +		       (1 << order) * sizeof(*restore-
> > > > > old_pages));
> > > > +		memset(*pages, 0, (1 << order) *
> > > > sizeof(**pages));
> > > > +		restore->order = order;
> > > > +		restore->restored_pages = 0;
> > > > +		restore->first_page = *pages;
> > > > +		restore->alloced_pages += 1UL << order;
> > > > +	}
> > > > +
> > > >    	*num_pages -= 1 << order;
> > > >    	for (i = 1 << order; i; --i, ++(*pages), ++p)
> > > >    		**pages = p;
> > > > @@ -394,22 +525,39 @@ static void ttm_pool_free_range(struct
> > > > ttm_pool *pool, struct ttm_tt *tt,
> > > >    				pgoff_t start_page, pgoff_t
> > > > end_page)
> > > >    {
> > > >    	struct page **pages = &tt->pages[start_page];
> > > > +	struct ttm_backup *backup = tt->backup;
> > > >    	unsigned int order;
> > > >    	pgoff_t i, nr;
> > > >    
> > > >    	for (i = start_page; i < end_page; i += nr, pages +=
> > > > nr) {
> > > >    		struct ttm_pool_type *pt = NULL;
> > > > +		struct page *p = *pages;
> > > > +
> > > > +		if (ttm_backup_page_ptr_is_handle(p)) {
> > > > +			unsigned long handle =
> > > > ttm_backup_page_ptr_to_handle(p);
> > > > +
> > > > +			nr = 1;
> > > > +			if (handle != 0)
> > > > +				ttm_backup_drop(backup,
> > > > handle);
> > > > +			continue;
> > > > +		}
> > > > +
> > > > +		if (pool) {
> > > > +			order = ttm_pool_page_order(pool, p);
> > > > +			nr = (1UL << order);
> > > > +			if (tt->dma_address)
> > > > +				ttm_pool_unmap(pool, tt-
> > > > > dma_address[i], nr);
> > > >    
> > > > -		order = ttm_pool_page_order(pool, *pages);
> > > > -		nr = (1UL << order);
> > > > -		if (tt->dma_address)
> > > > -			ttm_pool_unmap(pool, tt-
> > > > >dma_address[i],
> > > > nr);
> > > > +			pt = ttm_pool_select_type(pool,
> > > > caching,
> > > > order);
> > > > +		} else {
> > > > +			order = p->private;
> > > > +			nr = (1UL << order);
> > > > +		}
> > > >    
> > > > -		pt = ttm_pool_select_type(pool, caching,
> > > > order);
> > > >    		if (pt)
> > > > -			ttm_pool_type_give(pt, *pages);
> > > > +			ttm_pool_type_give(pt, p);
> > > >    		else
> > > > -			ttm_pool_free_page(pool, caching,
> > > > order,
> > > > *pages);
> > > > +			ttm_pool_free_page(pool, caching,
> > > > order,
> > > > p);
> > > >    	}
> > > >    }
> > > >    
> > > > @@ -453,9 +601,36 @@ int ttm_pool_alloc(struct ttm_pool *pool,
> > > > struct ttm_tt *tt,
> > > >    	else
> > > >    		gfp_flags |= GFP_HIGHUSER;
> > > >    
> > > > -	for (order = min_t(unsigned int, MAX_PAGE_ORDER,
> > > > __fls(num_pages));
> > > > -	     num_pages;
> > > > -	     order = min_t(unsigned int, order,
> > > > __fls(num_pages)))
> > > > {
> > > > +	order = min_t(unsigned int, MAX_PAGE_ORDER,
> > > > __fls(num_pages));
> > > > +
> > > > +	if (tt->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP) {
> > > > +		if (!tt->restore) {
> > > > +			gfp_t gfp = GFP_KERNEL | __GFP_NOWARN;
> > > > +
> > > > +			if (ctx->gfp_retry_mayfail)
> > > > +				gfp |= __GFP_RETRY_MAYFAIL;
> > > > +
> > > > +			tt->restore =
> > > > +				kvzalloc(struct_size(tt-
> > > > >restore,
> > > > old_pages,
> > > > +						     (size_t)1
> > > > <<
> > > > order), gfp);
> > > > +			if (!tt->restore)
> > > > +				return -ENOMEM;
> > > > +		} else if (ttm_pool_restore_valid(tt-
> > > > >restore)) {
> > > > +			struct ttm_pool_tt_restore *restore =
> > > > tt-
> > > > > restore;
> > > > +
> > > > +			num_pages -= restore->alloced_pages;
> > > > +			order = min_t(unsigned int, order,
> > > > __fls(num_pages));
> > > > +			pages += restore->alloced_pages;
> > > > +			r = ttm_pool_restore_tt(restore, tt-
> > > > > backup, ctx);
> > > > +			if (r)
> > > > +				return r;
> > > > +			caching = restore->caching_divide;
> > > > +		}
> > > > +
> > > > +		tt->restore->pool = pool;
> > > > +	}
> > > Hui? Why is that part of the allocation function now?
> > > 
> > > At bare minimum I would expect that this is a new function.
> > It's because we now have partially backed up tts, so the restore is
> > interleaved on a per-page basis, replacing the backup handles with
> > page-pointers. I'll see if I can separate out at least the
> > initialization here.
> 
> Yeah, that kind of makes sense.
> 
> My expectation was just that we now have explicit ttm_pool_swapout()
> and 
> ttm_pool_swapin() functions.

I fully understand, although in the allocation step, that would also
increase the memory pressure since we might momentarily have twice the
bo-size allocated, if the shmem object was never swapped out, and we
don't want to unnecessarily risk OOM at recover time, although that
should be a recoverable situation now. If the OOM receiver can free up
system memory resources they could potentially restart the recovery.
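
To illustrate why the overhead stays bounded: the restore is done one
chunk of 1 << order pages at a time, and each backed-up 4K page has its
backup space dropped as soon as its content has been copied back.
Roughly (a simplified excerpt of what ttm_pool_restore_tt() in the patch
does; the case of pages that were never successfully backed up is left
out here):

	for (i = restore->restored_pages; i < (1 << restore->order); ++i) {
		struct page *p = restore->old_pages[i];

		if (ttm_backup_page_ptr_is_handle(p)) {
			unsigned long handle =
				ttm_backup_page_ptr_to_handle(p);

			if (!handle)
				continue;

			/* Copy one 4K page back from the backup... */
			ret = ttm_backup_copy_page(backup,
						   restore->first_page[i],
						   handle, ctx->interruptible);
			if (ret)
				break;

			/* ...and release its shmem / swap space right away. */
			ttm_backup_drop(backup, handle);
		}
		restore->restored_pages++;
	}

So at any point in time at most one chunk exists both as newly allocated
pages and as backup content, rather than the whole bo.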

/Thomas




> 
> Christian.
> 
> > 
> > /Thomas
> > 
> > 
> > > Regards,
> > > Christian.
> > > 
> > > > +
> > > > +	for (; num_pages; order = min_t(unsigned int, order,
> > > > __fls(num_pages))) {
> > > >    		struct ttm_pool_type *pt;
> > > >    
> > > >    		page_caching = tt->caching;
> > > > @@ -472,11 +647,19 @@ int ttm_pool_alloc(struct ttm_pool *pool,
> > > > struct ttm_tt *tt,
> > > >    				r =
> > > > ttm_pool_page_allocated(pool,
> > > > order, p,
> > > >    							
> > > > &dma_addr,
> > > >    							
> > > > &num_pages,
> > > > -							
> > > > &pages);
> > > > +							
> > > > &pages,
> > > > +							   
> > > > tt-
> > > > > restore);
> > > >    				if (r)
> > > >    					goto error_free_page;
> > > >    
> > > >    				caching = pages;
> > > > +				if (ttm_pool_restore_valid(tt-
> > > > > restore)) {
> > > > +					r =
> > > > ttm_pool_restore_tt(tt->restore, tt->backup,
> > > > +							
> > > > 	ct
> > > > x);
> > > > +					if (r)
> > > > +						goto
> > > > error_free_all;
> > > > +				}
> > > > +
> > > >    				if (num_pages < (1 << order))
> > > >    					break;
> > > >    
> > > > @@ -496,9 +679,17 @@ int ttm_pool_alloc(struct ttm_pool *pool,
> > > > struct ttm_tt *tt,
> > > >    				caching = pages;
> > > >    			}
> > > >    			r = ttm_pool_page_allocated(pool,
> > > > order,
> > > > p, &dma_addr,
> > > > -						   
> > > > &num_pages,
> > > > &pages);
> > > > +						   
> > > > &num_pages,
> > > > &pages,
> > > > +						    tt-
> > > > >restore);
> > > >    			if (r)
> > > >    				goto error_free_page;
> > > > +
> > > > +			if (ttm_pool_restore_valid(tt-
> > > > >restore)) {
> > > > +				r = ttm_pool_restore_tt(tt-
> > > > > restore, tt->backup, ctx);
> > > > +				if (r)
> > > > +					goto error_free_all;
> > > > +			}
> > > > +
> > > >    			if (PageHighMem(p))
> > > >    				caching = pages;
> > > >    		}
> > > > @@ -517,12 +708,26 @@ int ttm_pool_alloc(struct ttm_pool *pool,
> > > > struct ttm_tt *tt,
> > > >    	if (r)
> > > >    		goto error_free_all;
> > > >    
> > > > +	if (tt->restore) {
> > > > +		kvfree(tt->restore);
> > > > +		tt->restore = NULL;
> > > > +	}
> > > > +
> > > > +	if (tt->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP)
> > > > +		tt->page_flags &= ~(TTM_TT_FLAG_PRIV_BACKED_UP
> > > > |
> > > > +				    TTM_TT_FLAG_SWAPPED);
> > > > +
> > > >    	return 0;
> > > >    
> > > >    error_free_page:
> > > >    	ttm_pool_free_page(pool, page_caching, order, p);
> > > >    
> > > >    error_free_all:
> > > > +	if (tt->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP) {
> > > > +		tt->restore->caching_divide = caching;
> > > > +		return r;
> > > > +	}
> > > > +
> > > >    	num_pages = tt->num_pages - num_pages;
> > > >    	caching_divide = caching - tt->pages;
> > > >    	ttm_pool_free_range(pool, tt, tt->caching, 0,
> > > > caching_divide);
> > > > @@ -549,6 +754,171 @@ void ttm_pool_free(struct ttm_pool *pool,
> > > > struct ttm_tt *tt)
> > > >    }
> > > >    EXPORT_SYMBOL(ttm_pool_free);
> > > >    
> > > > +/**
> > > > + * ttm_pool_release_backed_up() - Release content of a
> > > > swapped-out
> > > > struct ttm_tt
> > > > + * @tt: The struct ttm_tt.
> > > > + *
> > > > + * Release handles with associated content or any remaining
> > > > pages
> > > > of
> > > > + * a backed-up struct ttm_tt.
> > > > + */
> > > > +void ttm_pool_release_backed_up(struct ttm_tt *tt)
> > > > +{
> > > > +	struct ttm_backup *backup = tt->backup;
> > > > +	struct ttm_pool_tt_restore *restore;
> > > > +	pgoff_t i, start_page = 0;
> > > > +	unsigned long handle;
> > > > +
> > > > +	if (!(tt->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP))
> > > > +		return;
> > > > +
> > > > +	restore = tt->restore;
> > > > +
> > > > +	if (ttm_pool_restore_valid(restore)) {
> > > > +		pgoff_t nr = 1UL << restore->order;
> > > > +
> > > > +		for (i = restore->restored_pages; i < nr; ++i)
> > > > {
> > > > +			struct page *p = restore-
> > > > >old_pages[i];
> > > > +
> > > > +			if (ttm_backup_page_ptr_is_handle(p))
> > > > {
> > > > +				handle =
> > > > ttm_backup_page_ptr_to_handle(p);
> > > > +				if (handle == 0)
> > > > +					continue;
> > > > +
> > > > +				ttm_backup_drop(backup,
> > > > handle);
> > > > +			} else if (p) {
> > > > +				ttm_pool_split_for_swap(restor
> > > > e-
> > > > > pool, p);
> > > > +				__free_pages(p, 0);
> > > > +			}
> > > > +		}
> > > > +	}
> > > > +
> > > > +	if (restore) {
> > > > +		pgoff_t mid = restore->caching_divide - tt-
> > > > >pages;
> > > > +
> > > > +		start_page = restore->alloced_pages;
> > > > +		/* Pages that might be dma-mapped and non-
> > > > cached
> > > > */
> > > > +		ttm_pool_free_range(restore->pool, tt, tt-
> > > > > caching,
> > > > +				    0, mid);
> > > > +		/* Pages that might be dma-mapped but cached
> > > > */
> > > > +		ttm_pool_free_range(restore->pool, tt,
> > > > ttm_cached,
> > > > +				    mid, restore-
> > > > >alloced_pages);
> > > > +	}
> > > > +
> > > > +	/* Shrunken pages. Cached and not dma-mapped. */
> > > > +	ttm_pool_free_range(NULL, tt, ttm_cached, start_page,
> > > > tt-
> > > > > num_pages);
> > > > +
> > > > +	if (restore) {
> > > > +		kvfree(restore);
> > > > +		tt->restore = NULL;
> > > > +	}
> > > > +
> > > > +	tt->page_flags &= ~(TTM_TT_FLAG_PRIV_BACKED_UP |
> > > > TTM_TT_FLAG_SWAPPED);
> > > > +}
> > > > +
> > > > +/**
> > > > + * ttm_pool_backup_tt() - Back up or purge a struct ttm_tt
> > > > + * @pool: The pool used when allocating the struct ttm_tt.
> > > > + * @ttm: The struct ttm_tt.
> > > > + * @flags: Flags to govern the backup behaviour.
> > > > + *
> > > > + * Back up or purge a struct ttm_tt. If @purge is true, then
> > > > + * all pages will be freed directly to the system rather than
> > > > to
> > > > the pool
> > > > + * they were allocated from, making the function behave
> > > > similarly
> > > > to
> > > > + * ttm_pool_free(). If @purge is false the pages will be
> > > > backed up
> > > > instead,
> > > > + * exchanged for handles.
> > > > + * A subsequent call to ttm_pool_alloc() will then read back
> > > > the
> > > > content and
> > > > + * a subsequent call to ttm_pool_release_shrunken() will drop
> > > > it.
> > > > + * If backup of a page fails for whatever reason, @ttm will
> > > > still
> > > > be
> > > > + * partially backed up, retaining those pages for which backup
> > > > fails.
> > > > + *
> > > > + * Return: Number of pages actually backed up or freed, or
> > > > negative
> > > > + * error code on error.
> > > > + */
> > > > +long ttm_pool_backup_tt(struct ttm_pool *pool, struct ttm_tt
> > > > *ttm,
> > > > +			const struct ttm_backup_flags *flags)
> > > > +{
> > > > +	struct ttm_backup *backup = ttm->backup;
> > > > +	struct page *page;
> > > > +	unsigned long handle;
> > > > +	gfp_t alloc_gfp;
> > > > +	gfp_t gfp;
> > > > +	int ret = 0;
> > > > +	pgoff_t shrunken = 0;
> > > > +	pgoff_t i, num_pages;
> > > > +
> > > > +	if ((!ttm_backup_bytes_avail() && !flags->purge) ||
> > > > +	    pool->use_dma_alloc ||
> > > > +	    (ttm->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP))
> > > > +		return -EBUSY;
> > > > +
> > > > +#ifdef CONFIG_X86
> > > > +	/* Anything returned to the system needs to be cached.
> > > > */
> > > > +	if (ttm->caching != ttm_cached)
> > > > +		set_pages_array_wb(ttm->pages, ttm-
> > > > >num_pages);
> > > > +#endif
> > > > +
> > > > +	if (ttm->dma_address || flags->purge) {
> > > > +		for (i = 0; i < ttm->num_pages; i +=
> > > > num_pages) {
> > > > +			unsigned int order;
> > > > +
> > > > +			page = ttm->pages[i];
> > > > +			if (unlikely(!page)) {
> > > > +				num_pages = 1;
> > > > +				continue;
> > > > +			}
> > > > +
> > > > +			order = ttm_pool_page_order(pool,
> > > > page);
> > > > +			num_pages = 1UL << order;
> > > > +			if (ttm->dma_address)
> > > > +				ttm_pool_unmap(pool, ttm-
> > > > > dma_address[i],
> > > > +					       num_pages);
> > > > +			if (flags->purge) {
> > > > +				shrunken += num_pages;
> > > > +				page->private = 0;
> > > > +				__free_pages(page, order);
> > > > +				memset(ttm->pages + i, 0,
> > > > +				       num_pages *
> > > > sizeof(*ttm-
> > > > > pages));
> > > > +			}
> > > > +		}
> > > > +	}
> > > > +
> > > > +	if (flags->purge)
> > > > +		return shrunken;
> > > > +
> > > > +	if (pool->use_dma32)
> > > > +		gfp = GFP_DMA32;
> > > > +	else
> > > > +		gfp = GFP_HIGHUSER;
> > > > +
> > > > +	alloc_gfp = GFP_KERNEL | __GFP_HIGH | __GFP_NOWARN |
> > > > __GFP_RETRY_MAYFAIL;
> > > > +
> > > > +	for (i = 0; i < ttm->num_pages; ++i) {
> > > > +		page = ttm->pages[i];
> > > > +		if (unlikely(!page))
> > > > +			continue;
> > > > +
> > > > +		ttm_pool_split_for_swap(pool, page);
> > > > +
> > > > +		handle = ttm_backup_backup_page(backup, page,
> > > > flags->writeback, i,
> > > > +						gfp,
> > > > alloc_gfp);
> > > > +		if (handle) {
> > > > +			ttm->pages[i] =
> > > > ttm_backup_handle_to_page_ptr(handle);
> > > > +			put_page(page);
> > > > +			shrunken++;
> > > > +		} else {
> > > > +			/* We allow partially shrunken tts */
> > > > +			ret = -ENOMEM;
> > > > +			break;
> > > > +		}
> > > > +	}
> > > > +
> > > > +	if (shrunken)
> > > > +		ttm->page_flags |= (TTM_TT_FLAG_PRIV_BACKED_UP
> > > > |
> > > > +				    TTM_TT_FLAG_SWAPPED);
> > > > +
> > > > +	return shrunken ? shrunken : ret;
> > > > +}
> > > > +
> > > >    /**
> > > >     * ttm_pool_init - Initialize a pool
> > > >     *
> > > > diff --git a/drivers/gpu/drm/ttm/ttm_tt.c
> > > > b/drivers/gpu/drm/ttm/ttm_tt.c
> > > > index 3baf215eca23..dd4eabe4ad79 100644
> > > > --- a/drivers/gpu/drm/ttm/ttm_tt.c
> > > > +++ b/drivers/gpu/drm/ttm/ttm_tt.c
> > > > @@ -40,6 +40,7 @@
> > > >    #include <drm/drm_cache.h>
> > > >    #include <drm/drm_device.h>
> > > >    #include <drm/drm_util.h>
> > > > +#include <drm/ttm/ttm_backup.h>
> > > >    #include <drm/ttm/ttm_bo.h>
> > > >    #include <drm/ttm/ttm_tt.h>
> > > >    
> > > > @@ -158,6 +159,8 @@ static void ttm_tt_init_fields(struct
> > > > ttm_tt
> > > > *ttm,
> > > >    	ttm->swap_storage = NULL;
> > > >    	ttm->sg = bo->sg;
> > > >    	ttm->caching = caching;
> > > > +	ttm->restore = NULL;
> > > > +	ttm->backup = NULL;
> > > >    }
> > > >    
> > > >    int ttm_tt_init(struct ttm_tt *ttm, struct ttm_buffer_object
> > > > *bo,
> > > > @@ -182,6 +185,12 @@ void ttm_tt_fini(struct ttm_tt *ttm)
> > > >    		fput(ttm->swap_storage);
> > > >    	ttm->swap_storage = NULL;
> > > >    
> > > > +	ttm_pool_release_backed_up(ttm);
> > > > +	if (ttm->backup) {
> > > > +		ttm_backup_fini(ttm->backup);
> > > > +		ttm->backup = NULL;
> > > > +	}
> > > > +
> > > >    	if (ttm->pages)
> > > >    		kvfree(ttm->pages);
> > > >    	else
> > > > @@ -253,6 +262,34 @@ int ttm_tt_swapin(struct ttm_tt *ttm)
> > > >    }
> > > >    EXPORT_SYMBOL_FOR_TESTS_ONLY(ttm_tt_swapin);
> > > >    
> > > > +/**
> > > > + * ttm_tt_backup() - Helper to back up a struct ttm_tt.
> > > > + * @bdev: The TTM device.
> > > > + * @tt: The struct ttm_tt.
> > > > + * @flags: Flags that govern the backup behaviour.
> > > > + *
> > > > + * Update the page accounting and call ttm_pool_shrink_tt to
> > > > free
> > > > pages
> > > > + * or back them up.
> > > > + *
> > > > + * Return: Number of pages freed or swapped out, or negative
> > > > error
> > > > code on
> > > > + * error.
> > > > + */
> > > > +long ttm_tt_backup(struct ttm_device *bdev, struct ttm_tt *tt,
> > > > +		   const struct ttm_backup_flags flags)
> > > > +{
> > > > +	long ret;
> > > > +
> > > > +	if (WARN_ON(IS_ERR_OR_NULL(tt->backup)))
> > > > +		return 0;
> > > > +
> > > > +	ret = ttm_pool_backup_tt(&bdev->pool, tt, &flags);
> > > > +
> > > > +	if (ret > 0)
> > > > +		tt->page_flags &= ~TTM_TT_FLAG_PRIV_POPULATED;
> > > > +
> > > > +	return ret;
> > > > +}
> > > > +
> > > >    /**
> > > >     * ttm_tt_swapout - swap out tt object
> > > >     *
> > > > diff --git a/include/drm/ttm/ttm_pool.h
> > > > b/include/drm/ttm/ttm_pool.h
> > > > index 160d954a261e..3112a4be835c 100644
> > > > --- a/include/drm/ttm/ttm_pool.h
> > > > +++ b/include/drm/ttm/ttm_pool.h
> > > > @@ -33,6 +33,7 @@
> > > >    
> > > >    struct device;
> > > >    struct seq_file;
> > > > +struct ttm_backup_flags;
> > > >    struct ttm_operation_ctx;
> > > >    struct ttm_pool;
> > > >    struct ttm_tt;
> > > > @@ -89,6 +90,11 @@ void ttm_pool_fini(struct ttm_pool *pool);
> > > >    
> > > >    int ttm_pool_debugfs(struct ttm_pool *pool, struct seq_file
> > > > *m);
> > > >    
> > > > +void ttm_pool_release_backed_up(struct ttm_tt *tt);
> > > > +
> > > > +long ttm_pool_backup_tt(struct ttm_pool *pool, struct ttm_tt
> > > > *ttm,
> > > > +			const struct ttm_backup_flags *flags);
> > > > +
> > > >    int ttm_pool_mgr_init(unsigned long num_pages);
> > > >    void ttm_pool_mgr_fini(void);
> > > >    
> > > > diff --git a/include/drm/ttm/ttm_tt.h
> > > > b/include/drm/ttm/ttm_tt.h
> > > > index 991edafdb2dd..6ca2fc7b2a26 100644
> > > > --- a/include/drm/ttm/ttm_tt.h
> > > > +++ b/include/drm/ttm/ttm_tt.h
> > > > @@ -32,11 +32,13 @@
> > > >    #include <drm/ttm/ttm_caching.h>
> > > >    #include <drm/ttm/ttm_kmap_iter.h>
> > > >    
> > > > +struct ttm_backup;
> > > >    struct ttm_device;
> > > >    struct ttm_tt;
> > > >    struct ttm_resource;
> > > >    struct ttm_buffer_object;
> > > >    struct ttm_operation_ctx;
> > > > +struct ttm_pool_tt_restore;
> > > >    
> > > >    /**
> > > >     * struct ttm_tt - This is a structure holding the pages,
> > > > caching- and aperture
> > > > @@ -88,6 +90,9 @@ struct ttm_tt {
> > > >    	 * TTM_TT_FLAG_PRIV_POPULATED: TTM internal only. DO
> > > > NOT
> > > > USE. This is
> > > >    	 * set by TTM after ttm_tt_populate() has successfully
> > > > returned, and is
> > > >    	 * then unset when TTM calls ttm_tt_unpopulate().
> > > > +	 *
> > > > +	 * TTM_TT_FLAG_PRIV_BACKED_UP: TTM internal only. This
> > > > is
> > > > set if the
> > > > +	 * struct ttm_tt has been (possibly partially) backed
> > > > up.
> > > >    	 */
> > > >    #define TTM_TT_FLAG_SWAPPED		BIT(0)
> > > >    #define TTM_TT_FLAG_ZERO_ALLOC		BIT(1)
> > > > @@ -96,6 +101,7 @@ struct ttm_tt {
> > > >    #define TTM_TT_FLAG_DECRYPTED		BIT(4)
> > > >    
> > > >    #define TTM_TT_FLAG_PRIV_POPULATED	BIT(5)
> > > > +#define TTM_TT_FLAG_PRIV_BACKED_UP	BIT(6)
> > > >    	uint32_t page_flags;
> > > >    	/** @num_pages: Number of pages in the page array. */
> > > >    	uint32_t num_pages;
> > > > @@ -105,11 +111,20 @@ struct ttm_tt {
> > > >    	dma_addr_t *dma_address;
> > > >    	/** @swap_storage: Pointer to shmem struct file for
> > > > swap
> > > > storage. */
> > > >    	struct file *swap_storage;
> > > > +	/**
> > > > +	 * @backup: Pointer to backup struct for backed up
> > > > tts.
> > > > +	 * Could be unified with @swap_storage. Meanwhile, the
> > > > driver's
> > > > +	 * ttm_tt_create() callback is responsible for
> > > > assigning
> > > > +	 * this field.
> > > > +	 */
> > > > +	struct ttm_backup *backup;
> > > >    	/**
> > > >    	 * @caching: The current caching state of the pages,
> > > > see
> > > > enum
> > > >    	 * ttm_caching.
> > > >    	 */
> > > >    	enum ttm_caching caching;
> > > > +	/** @restore: Partial restoration from backup state.
> > > > TTM
> > > > private */
> > > > +	struct ttm_pool_tt_restore *restore;
> > > >    };
> > > >    
> > > >    /**
> > > > @@ -131,7 +146,7 @@ static inline bool
> > > > ttm_tt_is_populated(struct
> > > > ttm_tt *tt)
> > > >    
> > > >    static inline bool ttm_tt_is_swapped(const struct ttm_tt
> > > > *tt)
> > > >    {
> > > > -	return tt->page_flags & TTM_TT_FLAG_SWAPPED;
> > > > +	return tt->page_flags & (TTM_TT_FLAG_SWAPPED |
> > > > TTM_TT_FLAG_PRIV_BACKED_UP);
> > > >    }
> > > >    
> > > >    /**
> > > > @@ -235,6 +250,21 @@ void ttm_tt_mgr_init(unsigned long
> > > > num_pages,
> > > > unsigned long num_dma32_pages);
> > > >    struct ttm_kmap_iter *ttm_kmap_iter_tt_init(struct
> > > > ttm_kmap_iter_tt *iter_tt,
> > > >    					    struct ttm_tt
> > > > *tt);
> > > >    unsigned long ttm_tt_pages_limit(void);
> > > > +
> > > > +/**
> > > > + * struct ttm_backup_flags - Flags to govern backup behaviour.
> > > > + * @purge: Free pages without backing up. Bypass pools.
> > > > + * @writeback: Attempt to copy contents directly to swap
> > > > space,
> > > > even
> > > > + * if that means blocking on writes to external memory.
> > > > + */
> > > > +struct ttm_backup_flags {
> > > > +	u32 purge : 1;
> > > > +	u32 writeback : 1;
> > > > +};
> > > > +
> > > > +long ttm_tt_backup(struct ttm_device *bdev, struct ttm_tt *tt,
> > > > +		   const struct ttm_backup_flags flags);
> > > > +
> > > >    #if IS_ENABLED(CONFIG_AGP)
> > > >    #include <linux/agp_backend.h>
> > > >    


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v14 3/8] drm/ttm/pool: Provide a helper to shrink pages
  2024-12-03 15:50         ` Thomas Hellström
@ 2024-12-03 16:20           ` Christian König
  2024-12-03 16:31             ` Thomas Hellström
  0 siblings, 1 reply; 54+ messages in thread
From: Christian König @ 2024-12-03 16:20 UTC (permalink / raw)
  To: Thomas Hellström, intel-xe
  Cc: Somalapuram Amaranath, Matthew Brost, dri-devel, Paulo Zanoni,
	Simona Vetter


[SNIP]
>>>>> @@ -453,9 +601,36 @@ int ttm_pool_alloc(struct ttm_pool *pool,
>>>>> struct ttm_tt *tt,
>>>>>     	else
>>>>>     		gfp_flags |= GFP_HIGHUSER;
>>>>>     
>>>>> -	for (order = min_t(unsigned int, MAX_PAGE_ORDER,
>>>>> __fls(num_pages));
>>>>> -	     num_pages;
>>>>> -	     order = min_t(unsigned int, order,
>>>>> __fls(num_pages)))
>>>>> {
>>>>> +	order = min_t(unsigned int, MAX_PAGE_ORDER,
>>>>> __fls(num_pages));
>>>>> +
>>>>> +	if (tt->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP) {
>>>>> +		if (!tt->restore) {
>>>>> +			gfp_t gfp = GFP_KERNEL | __GFP_NOWARN;
>>>>> +
>>>>> +			if (ctx->gfp_retry_mayfail)
>>>>> +				gfp |= __GFP_RETRY_MAYFAIL;
>>>>> +
>>>>> +			tt->restore =
>>>>> +				kvzalloc(struct_size(tt-
>>>>>> restore,
>>>>> old_pages,
>>>>> +						     (size_t)1
>>>>> <<
>>>>> order), gfp);
>>>>> +			if (!tt->restore)
>>>>> +				return -ENOMEM;
>>>>> +		} else if (ttm_pool_restore_valid(tt-
>>>>>> restore)) {
>>>>> +			struct ttm_pool_tt_restore *restore =
>>>>> tt-
>>>>>> restore;
>>>>> +
>>>>> +			num_pages -= restore->alloced_pages;
>>>>> +			order = min_t(unsigned int, order,
>>>>> __fls(num_pages));
>>>>> +			pages += restore->alloced_pages;
>>>>> +			r = ttm_pool_restore_tt(restore, tt-
>>>>>> backup, ctx);
>>>>> +			if (r)
>>>>> +				return r;
>>>>> +			caching = restore->caching_divide;
>>>>> +		}
>>>>> +
>>>>> +		tt->restore->pool = pool;
>>>>> +	}
>>>> Hui? Why is that part of the allocation function now?
>>>>
>>>> At bare minimum I would expect that this is a new function.
>>> It's because we now have partially backed up tts, so the restore is
>>> interleaved on a per-page basis, replacing the backup handles with
>>> page-pointers. I'll see if I can separate out at least the
>>> initialization here.
>> Yeah, that kind of makes sense.
>>
>> My expectation was just that we now have explicit ttm_pool_swapout()
>> and
>> ttm_pool_swapin() functions.
> I fully understand, although in the allocation step, that would also
> increase the memory pressure since we might momentarily have twice the
> bo-size allocated, if the shmem object was never swapped out, and we
> don't want to unnecessarily risk OOM at recover time, although that
> should be a recoverable situation now. If the OOM receiver can free up
> system memory resources they could potentially restart the recovery.

What I meant was more that we have ttm_pool_swapout() which does a mix 
of moving each page to a swap backend and freeing one by one.

And ttm_pool_swapin() which allocates a bit of memory (usually one huge 
page) and then copies the content back in from the swap backend.

Alternatively we could rename ttm_pool_alloc() into something like 
ttm_pool_populate() and ttm_pool_free() into ttm_pool_unpopulate(), but 
those names are not very descriptive either.

It's just that we now do a bit more than just alloc and free in those 
functions, so the naming doesn't really match that well any more.
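
To spell out what I mean, the driver-visible surface would roughly look
like this (prototypes for illustration only; the parameter lists are
just a guess, the names are what's being discussed here):

/* Move the pages of @tt to a swap backend and/or free them, one by one. */
long ttm_pool_swapout(struct ttm_pool *pool, struct ttm_tt *tt,
		      const struct ttm_backup_flags *flags);

/*
 * Allocate pages again (usually as huge pages) and copy the content
 * back in from the swap backend.
 */
int ttm_pool_swapin(struct ttm_pool *pool, struct ttm_tt *tt,
		    struct ttm_operation_ctx *ctx);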

Christian.

>
> /Thomas
>


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v14 3/8] drm/ttm/pool: Provide a helper to shrink pages
  2024-12-03 16:20           ` Christian König
@ 2024-12-03 16:31             ` Thomas Hellström
  2024-12-03 16:39               ` Christian König
  0 siblings, 1 reply; 54+ messages in thread
From: Thomas Hellström @ 2024-12-03 16:31 UTC (permalink / raw)
  To: Christian König, intel-xe
  Cc: Somalapuram Amaranath, Matthew Brost, dri-devel, Paulo Zanoni,
	Simona Vetter

On Tue, 2024-12-03 at 17:20 +0100, Christian König wrote:
> [SNIP]
> > > > > > @@ -453,9 +601,36 @@ int ttm_pool_alloc(struct ttm_pool
> > > > > > *pool,
> > > > > > struct ttm_tt *tt,
> > > > > >     	else
> > > > > >     		gfp_flags |= GFP_HIGHUSER;
> > > > > >     
> > > > > > -	for (order = min_t(unsigned int, MAX_PAGE_ORDER,
> > > > > > __fls(num_pages));
> > > > > > -	     num_pages;
> > > > > > -	     order = min_t(unsigned int, order,
> > > > > > __fls(num_pages)))
> > > > > > {
> > > > > > +	order = min_t(unsigned int, MAX_PAGE_ORDER,
> > > > > > __fls(num_pages));
> > > > > > +
> > > > > > +	if (tt->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP) {
> > > > > > +		if (!tt->restore) {
> > > > > > +			gfp_t gfp = GFP_KERNEL |
> > > > > > __GFP_NOWARN;
> > > > > > +
> > > > > > +			if (ctx->gfp_retry_mayfail)
> > > > > > +				gfp |=
> > > > > > __GFP_RETRY_MAYFAIL;
> > > > > > +
> > > > > > +			tt->restore =
> > > > > > +				kvzalloc(struct_size(tt-
> > > > > > > restore,
> > > > > > old_pages,
> > > > > > +						    
> > > > > > (size_t)1
> > > > > > <<
> > > > > > order), gfp);
> > > > > > +			if (!tt->restore)
> > > > > > +				return -ENOMEM;
> > > > > > +		} else if (ttm_pool_restore_valid(tt-
> > > > > > > restore)) {
> > > > > > +			struct ttm_pool_tt_restore
> > > > > > *restore =
> > > > > > tt-
> > > > > > > restore;
> > > > > > +
> > > > > > +			num_pages -= restore-
> > > > > > >alloced_pages;
> > > > > > +			order = min_t(unsigned int, order,
> > > > > > __fls(num_pages));
> > > > > > +			pages += restore->alloced_pages;
> > > > > > +			r = ttm_pool_restore_tt(restore,
> > > > > > tt-
> > > > > > > backup, ctx);
> > > > > > +			if (r)
> > > > > > +				return r;
> > > > > > +			caching = restore->caching_divide;
> > > > > > +		}
> > > > > > +
> > > > > > +		tt->restore->pool = pool;
> > > > > > +	}
> > > > > Hui? Why is that part of the allocation function now?
> > > > > 
> > > > > At bare minimum I would expect that this is a new function.
> > > > It's because we now have partially backed up tts, so the
> > > > restore is
> > > > interleaved on a per-page basis, replacing the backup handles
> > > > with
> > > > page-pointers. I'll see if I can separate out at least the
> > > > initialization here.
> > > Yeah, that kind of makes sense.
> > > 
> > > My expectation was just that we now have explicit
> > > ttm_pool_swapout()
> > > and
> > > ttm_pool_swapin() functions.
> > I fully understand, although in the allocation step, that would
> > also
> > increase the memory pressure since we might momentarily have twice
> > the
> > bo-size allocated, if the shmem object was never swapped out, and
> > we
> > don't want to unnecessarily risk OOM at recover time, although that
> > should be a recoverable situation now. If the OOM receiver can free
> > up
> > system memory resources they could potentially restart the
> > recovery.
> 
> What I meant was more that we have ttm_pool_swapout() which does a
> mix 
> of moving each page to a swap backend and freeing one by one.
> 
> And ttm_pool_swapin() which allocates a bit of memory (usually one
> huge 
> page) and then copies the content back in from the swap backend.
> 
> Alternatively we could rename ttm_pool_alloc() into something like 
> ttm_pool_populate() and ttm_pool_free() into ttm_pool_unpopulate(),
> but 
> those names are not very descriptive either.
> 
> It's just that we now do a bit more than just alloc and free in those
> functions, so the naming doesn't really match that well any more.

So what about ttm_pool_alloc() and ttm_pool_recover/swapin(), both
pointing to the same code, but _alloc() asserts that the tt isn't
backed up?

That would give a clean interface at least.
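
Roughly, and only as a sketch (the recover entry point name and the
internal helper below are made up here, nothing final):

/* Common worker; essentially the current ttm_pool_alloc() body. */
static int __ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
			    struct ttm_operation_ctx *ctx);

int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
		   struct ttm_operation_ctx *ctx)
{
	/* Plain allocation path: must not be called on a backed-up tt. */
	if (WARN_ON(tt->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP))
		return -EINVAL;

	return __ttm_pool_alloc(pool, tt, ctx);
}

int ttm_pool_restore_and_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
			       struct ttm_operation_ctx *ctx)
{
	/* Recover path: only valid for a (partially) backed-up tt. */
	if (WARN_ON(!(tt->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP)))
		return -EINVAL;

	return __ttm_pool_alloc(pool, tt, ctx);
}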

For a renaming change that touches all TTM drivers, I'd rather put that
as a last patch since getting acks for that from all TTM driver
maintainers seems like a hopeless undertaking.

/Thomas




> 
> Christian.
> 
> > 
> > /Thomas


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v14 3/8] drm/ttm/pool: Provide a helper to shrink pages
  2024-12-03 16:31             ` Thomas Hellström
@ 2024-12-03 16:39               ` Christian König
  2024-12-03 16:43                 ` Thomas Hellström
  0 siblings, 1 reply; 54+ messages in thread
From: Christian König @ 2024-12-03 16:39 UTC (permalink / raw)
  To: Thomas Hellström, intel-xe
  Cc: Somalapuram Amaranath, Matthew Brost, dri-devel, Paulo Zanoni,
	Simona Vetter

Am 03.12.24 um 17:31 schrieb Thomas Hellström:
> On Tue, 2024-12-03 at 17:20 +0100, Christian König wrote:
>> [SNIP]
>>>>>>> @@ -453,9 +601,36 @@ int ttm_pool_alloc(struct ttm_pool
>>>>>>> *pool,
>>>>>>> struct ttm_tt *tt,
>>>>>>>      	else
>>>>>>>      		gfp_flags |= GFP_HIGHUSER;
>>>>>>>      
>>>>>>> -	for (order = min_t(unsigned int, MAX_PAGE_ORDER,
>>>>>>> __fls(num_pages));
>>>>>>> -	     num_pages;
>>>>>>> -	     order = min_t(unsigned int, order,
>>>>>>> __fls(num_pages)))
>>>>>>> {
>>>>>>> +	order = min_t(unsigned int, MAX_PAGE_ORDER,
>>>>>>> __fls(num_pages));
>>>>>>> +
>>>>>>> +	if (tt->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP) {
>>>>>>> +		if (!tt->restore) {
>>>>>>> +			gfp_t gfp = GFP_KERNEL |
>>>>>>> __GFP_NOWARN;
>>>>>>> +
>>>>>>> +			if (ctx->gfp_retry_mayfail)
>>>>>>> +				gfp |=
>>>>>>> __GFP_RETRY_MAYFAIL;
>>>>>>> +
>>>>>>> +			tt->restore =
>>>>>>> +				kvzalloc(struct_size(tt-
>>>>>>>> restore,
>>>>>>> old_pages,
>>>>>>> +						
>>>>>>> (size_t)1
>>>>>>> <<
>>>>>>> order), gfp);
>>>>>>> +			if (!tt->restore)
>>>>>>> +				return -ENOMEM;
>>>>>>> +		} else if (ttm_pool_restore_valid(tt-
>>>>>>>> restore)) {
>>>>>>> +			struct ttm_pool_tt_restore
>>>>>>> *restore =
>>>>>>> tt-
>>>>>>>> restore;
>>>>>>> +
>>>>>>> +			num_pages -= restore-
>>>>>>>> alloced_pages;
>>>>>>> +			order = min_t(unsigned int, order,
>>>>>>> __fls(num_pages));
>>>>>>> +			pages += restore->alloced_pages;
>>>>>>> +			r = ttm_pool_restore_tt(restore,
>>>>>>> tt-
>>>>>>>> backup, ctx);
>>>>>>> +			if (r)
>>>>>>> +				return r;
>>>>>>> +			caching = restore->caching_divide;
>>>>>>> +		}
>>>>>>> +
>>>>>>> +		tt->restore->pool = pool;
>>>>>>> +	}
>>>>>> Hui? Why is that part of the allocation function now?
>>>>>>
>>>>>> At bare minimum I would expect that this is a new function.
>>>>> It's because we now have partially backed up tts, so the
>>>>> restore is
>>>>> interleaved on a per-page basis, replacing the backup handles
>>>>> with
>>>>> page-pointers. I'll see if I can separate out at least the
>>>>> initialization here.
>>>> Yeah, that kind of makes sense.
>>>>
>>>> My expectation was just that we now have explicit
>>>> ttm_pool_swapout()
>>>> and
>>>> ttm_pool_swapin() functions.
>>> I fully understand, although in the allocation step, that would
>>> also
>>> increase the memory pressure since we might momentarily have twice
>>> the
>>> bo-size allocated, if the shmem object was never swapped out, and
>>> we
>>> don't want to unnecessarily risc OOM at recover time, although that
>>> should be a recoverable situation now. If the OOM receiver can free
>>> up
>>> system memory resources they can could potentially restart the
>>> recover.
>> What I meant was more that we have ttm_pool_swapout() which does a
>> mix
>> of moving each page to a swap backend and freeing one by one.
>>
>> And ttm_pool_swapin() which allocates a bit of memory (usually one
>> huge
>> page) and then copies the content back in from the swap backend.
>>
>> Alternatively we could rename ttm_pool_alloc() into something like
>> ttm_pool_populate() and ttm_pool_free() into ttm_pool_unpopulate(),
>> but
>> those names are not very descriptive either.
>>
>> It's just that we now do a bit more than just alloc and free in those
>> functions, so the naming doesn't really match that well any more.
> So what about ttm_pool_alloc() and ttm_pool_recover/swapin(), both
> pointing to the same code, but _alloc() asserts that the tt isn't
> backed up?
>
> That would give a clean interface at least.

More or less ok. I would just put figuring out the gfp flags and the
stuff inside the for (order...) loop into separate functions, and then
remove the if (tt->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP) from the pool.

In other words, you trigger the restore from backup by calling a
different function than the allocation one.
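
Roughly this shape (hypothetical helper names, purely to illustrate):

/* Work out the allocation gfp flags once, instead of in the caller. */
static gfp_t ttm_pool_alloc_gfp(struct ttm_pool *pool,
				struct ttm_operation_ctx *ctx);

/*
 * Allocate, or take from the pool, one chunk of 1 << order pages and
 * add it to the tt's page vector.
 */
static int ttm_pool_alloc_chunk(struct ttm_pool *pool, struct ttm_tt *tt,
				unsigned int order, gfp_t gfp_flags,
				struct ttm_operation_ctx *ctx);

ttm_pool_alloc() then becomes just the loop over orders calling the chunk
helper, and the restore entry point uses the same chunk helper but copies
the backed-up content in afterwards, so the TTM_TT_FLAG_PRIV_BACKED_UP
special case disappears from the common allocation path.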

>
> For a renaming change that touch all TTM drivers, I'd rather put that
> as a last patch since getting acks for that from all TTM driver
> maintainers seems like a hopeless undertaking.

Yeah the acks are not the problem, merging it through the xe tree would be.

Christian.


>
> /Thomas
>
>
>
>
>> Christian.
>>
>>> /Thomas


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v14 3/8] drm/ttm/pool: Provide a helper to shrink pages
  2024-12-03 16:39               ` Christian König
@ 2024-12-03 16:43                 ` Thomas Hellström
  2024-12-03 16:46                   ` Christian König
  0 siblings, 1 reply; 54+ messages in thread
From: Thomas Hellström @ 2024-12-03 16:43 UTC (permalink / raw)
  To: Christian König, intel-xe
  Cc: Somalapuram Amaranath, Matthew Brost, dri-devel, Paulo Zanoni,
	Simona Vetter

On Tue, 2024-12-03 at 17:39 +0100, Christian König wrote:
> Am 03.12.24 um 17:31 schrieb Thomas Hellström:
> > On Tue, 2024-12-03 at 17:20 +0100, Christian König wrote:
> > > [SNIP]
> > > > > > > > @@ -453,9 +601,36 @@ int ttm_pool_alloc(struct ttm_pool
> > > > > > > > *pool,
> > > > > > > > struct ttm_tt *tt,
> > > > > > > >      	else
> > > > > > > >      		gfp_flags |= GFP_HIGHUSER;
> > > > > > > >      
> > > > > > > > -	for (order = min_t(unsigned int,
> > > > > > > > MAX_PAGE_ORDER,
> > > > > > > > __fls(num_pages));
> > > > > > > > -	     num_pages;
> > > > > > > > -	     order = min_t(unsigned int, order,
> > > > > > > > __fls(num_pages)))
> > > > > > > > {
> > > > > > > > +	order = min_t(unsigned int, MAX_PAGE_ORDER,
> > > > > > > > __fls(num_pages));
> > > > > > > > +
> > > > > > > > +	if (tt->page_flags &
> > > > > > > > TTM_TT_FLAG_PRIV_BACKED_UP) {
> > > > > > > > +		if (!tt->restore) {
> > > > > > > > +			gfp_t gfp = GFP_KERNEL |
> > > > > > > > __GFP_NOWARN;
> > > > > > > > +
> > > > > > > > +			if (ctx->gfp_retry_mayfail)
> > > > > > > > +				gfp |=
> > > > > > > > __GFP_RETRY_MAYFAIL;
> > > > > > > > +
> > > > > > > > +			tt->restore =
> > > > > > > > +				kvzalloc(struct_size(t
> > > > > > > > t-
> > > > > > > > > restore,
> > > > > > > > old_pages,
> > > > > > > > +						
> > > > > > > > (size_t)1
> > > > > > > > <<
> > > > > > > > order), gfp);
> > > > > > > > +			if (!tt->restore)
> > > > > > > > +				return -ENOMEM;
> > > > > > > > +		} else if (ttm_pool_restore_valid(tt-
> > > > > > > > > restore)) {
> > > > > > > > +			struct ttm_pool_tt_restore
> > > > > > > > *restore =
> > > > > > > > tt-
> > > > > > > > > restore;
> > > > > > > > +
> > > > > > > > +			num_pages -= restore-
> > > > > > > > > alloced_pages;
> > > > > > > > +			order = min_t(unsigned int,
> > > > > > > > order,
> > > > > > > > __fls(num_pages));
> > > > > > > > +			pages += restore-
> > > > > > > > >alloced_pages;
> > > > > > > > +			r =
> > > > > > > > ttm_pool_restore_tt(restore,
> > > > > > > > tt-
> > > > > > > > > backup, ctx);
> > > > > > > > +			if (r)
> > > > > > > > +				return r;
> > > > > > > > +			caching = restore-
> > > > > > > > >caching_divide;
> > > > > > > > +		}
> > > > > > > > +
> > > > > > > > +		tt->restore->pool = pool;
> > > > > > > > +	}
> > > > > > > Hui? Why is that part of the allocation function now?
> > > > > > > 
> > > > > > > At bare minimum I would expect that this is a new
> > > > > > > function.
> > > > > > It's because we now have partially backed up tts, so the
> > > > > > restore is
> > > > > > interleaved on a per-page basis, replacing the backup
> > > > > > handles
> > > > > > with
> > > > > > page-pointers. I'll see if I can separate out at least the
> > > > > > initialization here.
> > > > > Yeah, that kind of makes sense.
> > > > > 
> > > > > My expectation was just that we now have explicit
> > > > > ttm_pool_swapout()
> > > > > and
> > > > > ttm_pool_swapin() functions.
> > > > I fully understand, although in the allocation step, that would
> > > > also
> > > > increase the memory pressure since we might momentarily have
> > > > twice
> > > > the
> > > > bo-size allocated, if the shmem object was never swapped out,
> > > > and
> > > > we
> > > > don't want to unnecessarily risc OOM at recover time, although
> > > > that
> > > > should be a recoverable situation now. If the OOM receiver can
> > > > free
> > > > up
> > > > system memory resources they can could potentially restart the
> > > > recover.
> > > What I meant was more that we have ttm_pool_swapout() which does
> > > a
> > > mix
> > > of moving each page to a swap backend and freeing one by one.
> > > 
> > > And ttm_pool_swapin() which allocates a bit of memory (usually
> > > one
> > > huge
> > > page) and then copies the content back in from the swap backend.
> > > 
> > > Alternatively we could rename ttm_pool_alloc() into something
> > > like
> > > ttm_pool_populate() and ttm_pool_free() into
> > > ttm_pool_unpopulate(),
> > > but
> > > those names are not very descriptive either.
> > > 
> > > It's just that we now do a bit more than just alloc and free in
> > > those
> > > functions, so the naming doesn't really match that well any more.
> > So what about ttm_pool_alloc() and ttm_pool_recover/swapin(), both
> > pointing to the same code, but _alloc() asserts that the tt isn't
> > backed up?
> > 
> > That would give a clean interface at least.
> 
> More or less ok. I would just put figuring out the gfp flags and the 
> stuff inside the for (order... loop into separate functions. And then
> remove the if (tt->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP) from the
> pool.
> 
> In other words you trigger the back restore by calling a different 
> function than the allocation one.

I'll take a look at this as well.

/Thomas


> 
> > 
> > For a renaming change that touch all TTM drivers, I'd rather put
> > that
> > as a last patch since getting acks for that from all TTM driver
> > maintainers seems like a hopeless undertaking.
> 
> Yeah the acks are not the problem, merging it through the xe tree
> would be.
> 
> Christian.
> 
> 
> > 
> > /Thomas
> > 
> > 
> > 
> > 
> > > Christian.
> > > 
> > > > /Thomas
> 


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v14 3/8] drm/ttm/pool: Provide a helper to shrink pages
  2024-12-03 16:43                 ` Thomas Hellström
@ 2024-12-03 16:46                   ` Christian König
  2024-12-03 17:44                     ` Thomas Hellström
  0 siblings, 1 reply; 54+ messages in thread
From: Christian König @ 2024-12-03 16:46 UTC (permalink / raw)
  To: Thomas Hellström, intel-xe
  Cc: Somalapuram Amaranath, Matthew Brost, dri-devel, Paulo Zanoni,
	Simona Vetter

Am 03.12.24 um 17:43 schrieb Thomas Hellström:
> On Tue, 2024-12-03 at 17:39 +0100, Christian König wrote:
>> Am 03.12.24 um 17:31 schrieb Thomas Hellström:
>>> On Tue, 2024-12-03 at 17:20 +0100, Christian König wrote:
>>>> [SNIP]
>>>>>>>>> @@ -453,9 +601,36 @@ int ttm_pool_alloc(struct ttm_pool
>>>>>>>>> *pool,
>>>>>>>>> struct ttm_tt *tt,
>>>>>>>>>       	else
>>>>>>>>>       		gfp_flags |= GFP_HIGHUSER;
>>>>>>>>>       
>>>>>>>>> -	for (order = min_t(unsigned int,
>>>>>>>>> MAX_PAGE_ORDER,
>>>>>>>>> __fls(num_pages));
>>>>>>>>> -	     num_pages;
>>>>>>>>> -	     order = min_t(unsigned int, order,
>>>>>>>>> __fls(num_pages)))
>>>>>>>>> {
>>>>>>>>> +	order = min_t(unsigned int, MAX_PAGE_ORDER,
>>>>>>>>> __fls(num_pages));
>>>>>>>>> +
>>>>>>>>> +	if (tt->page_flags &
>>>>>>>>> TTM_TT_FLAG_PRIV_BACKED_UP) {
>>>>>>>>> +		if (!tt->restore) {
>>>>>>>>> +			gfp_t gfp = GFP_KERNEL |
>>>>>>>>> __GFP_NOWARN;
>>>>>>>>> +
>>>>>>>>> +			if (ctx->gfp_retry_mayfail)
>>>>>>>>> +				gfp |=
>>>>>>>>> __GFP_RETRY_MAYFAIL;
>>>>>>>>> +
>>>>>>>>> +			tt->restore =
>>>>>>>>> +				kvzalloc(struct_size(t
>>>>>>>>> t-
>>>>>>>>>> restore,
>>>>>>>>> old_pages,
>>>>>>>>> +						
>>>>>>>>> (size_t)1
>>>>>>>>> <<
>>>>>>>>> order), gfp);
>>>>>>>>> +			if (!tt->restore)
>>>>>>>>> +				return -ENOMEM;
>>>>>>>>> +		} else if (ttm_pool_restore_valid(tt-
>>>>>>>>>> restore)) {
>>>>>>>>> +			struct ttm_pool_tt_restore
>>>>>>>>> *restore =
>>>>>>>>> tt-
>>>>>>>>>> restore;
>>>>>>>>> +
>>>>>>>>> +			num_pages -= restore-
>>>>>>>>>> alloced_pages;
>>>>>>>>> +			order = min_t(unsigned int,
>>>>>>>>> order,
>>>>>>>>> __fls(num_pages));
>>>>>>>>> +			pages += restore-
>>>>>>>>>> alloced_pages;
>>>>>>>>> +			r =
>>>>>>>>> ttm_pool_restore_tt(restore,
>>>>>>>>> tt-
>>>>>>>>>> backup, ctx);
>>>>>>>>> +			if (r)
>>>>>>>>> +				return r;
>>>>>>>>> +			caching = restore-
>>>>>>>>>> caching_divide;
>>>>>>>>> +		}
>>>>>>>>> +
>>>>>>>>> +		tt->restore->pool = pool;
>>>>>>>>> +	}
>>>>>>>> Hui? Why is that part of the allocation function now?
>>>>>>>>
>>>>>>>> At bare minimum I would expect that this is a new
>>>>>>>> function.
>>>>>>> It's because we now have partially backed up tts, so the
>>>>>>> restore is
>>>>>>> interleaved on a per-page basis, replacing the backup
>>>>>>> handles
>>>>>>> with
>>>>>>> page-pointers. I'll see if I can separate out at least the
>>>>>>> initialization here.
>>>>>> Yeah, that kind of makes sense.
>>>>>>
>>>>>> My expectation was just that we now have explicit
>>>>>> ttm_pool_swapout()
>>>>>> and
>>>>>> ttm_pool_swapin() functions.
>>>>> I fully understand, although in the allocation step, that would
>>>>> also
>>>>> increase the memory pressure since we might momentarily have
>>>>> twice
>>>>> the
>>>>> bo-size allocated, if the shmem object was never swapped out,
>>>>> and
>>>>> we
>>>>> don't want to unnecessarily risc OOM at recover time, although
>>>>> that
>>>>> should be a recoverable situation now. If the OOM receiver can
>>>>> free
>>>>> up
>>>>> system memory resources they can could potentially restart the
>>>>> recover.
>>>> What I meant was more that we have ttm_pool_swapout() which does
>>>> a
>>>> mix
>>>> of moving each page to a swap backend and freeing one by one.
>>>>
>>>> And ttm_pool_swapin() which allocates a bit of memory (usually
>>>> one
>>>> huge
>>>> page) and then copies the content back in from the swap backend.
>>>>
>>>> Alternatively we could rename ttm_pool_alloc() into something
>>>> like
>>>> ttm_pool_populate() and ttm_pool_free() into
>>>> ttm_pool_unpopulate(),
>>>> but
>>>> those names are not very descriptive either.
>>>>
>>>> It's just that we now do a bit more than just alloc and free in
>>>> those
>>>> functions, so the naming doesn't really match that well any more.
>>> So what about ttm_pool_alloc() and ttm_pool_recover/swapin(), both
>>> pointing to the same code, but _alloc() asserts that the tt isn't
>>> backed up?
>>>
>>> That would give a clean interface at least.
>> More or less ok. I would just put figuring out the gfp flags and the
>> stuff inside the for (order... loop into separate functions. And then
>> remove the if (tt->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP) from the
>> pool.
>>
>> In other words you trigger the back restore by calling a different
>> function than the allocation one.
> I'll take a look at this as well.

Ah, and BTW: It's perfectly possible that ttm_tt_free() is called
because a half-swapped TT is about to be destroyed!

If I'm not completely mistaken, that is not handled gracefully when we
always try to back up from within the ttm_tt_free() function.

So we clearly need the separation of moving this TT to backup (possibly
only partially) and freeing it.
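
I.e. whatever entry points we end up with, freeing a halfway backed-up
TT has to cope with a page vector where each entry is either a real page
or a backup handle, roughly like the patch's ttm_pool_release_backed_up()
/ ttm_pool_free_range() do (simplified sketch, assuming the remaining
real pages have already been split to order 0 as the backup path does):

	for (i = 0; i < tt->num_pages; ++i) {
		struct page *p = tt->pages[i];

		if (ttm_backup_page_ptr_is_handle(p)) {
			unsigned long handle =
				ttm_backup_page_ptr_to_handle(p);

			/* Backed-up page: just drop the backup space. */
			if (handle)
				ttm_backup_drop(tt->backup, handle);
		} else if (p) {
			/* Page that was never backed up: free it. */
			__free_pages(p, 0);
		}
	}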

Christian.

>
> /Thomas
>
>
>>> For a renaming change that touch all TTM drivers, I'd rather put
>>> that
>>> as a last patch since getting acks for that from all TTM driver
>>> maintainers seems like a hopeless undertaking.
>> Yeah the acks are not the problem, merging it through the xe tree
>> would be.
>>
>> Christian.
>>
>>
>>> /Thomas
>>>
>>>
>>>
>>>
>>>> Christian.
>>>>
>>>>> /Thomas


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v14 3/8] drm/ttm/pool: Provide a helper to shrink pages
  2024-12-03 16:46                   ` Christian König
@ 2024-12-03 17:44                     ` Thomas Hellström
  2024-12-04  9:16                       ` Christian König
  0 siblings, 1 reply; 54+ messages in thread
From: Thomas Hellström @ 2024-12-03 17:44 UTC (permalink / raw)
  To: Christian König, intel-xe
  Cc: Somalapuram Amaranath, Matthew Brost, dri-devel, Paulo Zanoni,
	Simona Vetter

On Tue, 2024-12-03 at 17:46 +0100, Christian König wrote:
> Am 03.12.24 um 17:43 schrieb Thomas Hellström:
> > On Tue, 2024-12-03 at 17:39 +0100, Christian König wrote:
> > > Am 03.12.24 um 17:31 schrieb Thomas Hellström:
> > > > On Tue, 2024-12-03 at 17:20 +0100, Christian König wrote:
> > > > > [SNIP]
> > > > > > > > > > @@ -453,9 +601,36 @@ int ttm_pool_alloc(struct
> > > > > > > > > > ttm_pool
> > > > > > > > > > *pool,
> > > > > > > > > > struct ttm_tt *tt,
> > > > > > > > > >       	else
> > > > > > > > > >       		gfp_flags |= GFP_HIGHUSER;
> > > > > > > > > >       
> > > > > > > > > > -	for (order = min_t(unsigned int,
> > > > > > > > > > MAX_PAGE_ORDER,
> > > > > > > > > > __fls(num_pages));
> > > > > > > > > > -	     num_pages;
> > > > > > > > > > -	     order = min_t(unsigned int, order,
> > > > > > > > > > __fls(num_pages)))
> > > > > > > > > > {
> > > > > > > > > > +	order = min_t(unsigned int,
> > > > > > > > > > MAX_PAGE_ORDER,
> > > > > > > > > > __fls(num_pages));
> > > > > > > > > > +
> > > > > > > > > > +	if (tt->page_flags &
> > > > > > > > > > TTM_TT_FLAG_PRIV_BACKED_UP) {
> > > > > > > > > > +		if (!tt->restore) {
> > > > > > > > > > +			gfp_t gfp = GFP_KERNEL |
> > > > > > > > > > __GFP_NOWARN;
> > > > > > > > > > +
> > > > > > > > > > +			if (ctx-
> > > > > > > > > > >gfp_retry_mayfail)
> > > > > > > > > > +				gfp |=
> > > > > > > > > > __GFP_RETRY_MAYFAIL;
> > > > > > > > > > +
> > > > > > > > > > +			tt->restore =
> > > > > > > > > > +				kvzalloc(struct_si
> > > > > > > > > > ze(t
> > > > > > > > > > t-
> > > > > > > > > > > restore,
> > > > > > > > > > old_pages,
> > > > > > > > > > +						
> > > > > > > > > > (size_t)1
> > > > > > > > > > <<
> > > > > > > > > > order), gfp);
> > > > > > > > > > +			if (!tt->restore)
> > > > > > > > > > +				return -ENOMEM;
> > > > > > > > > > +		} else if
> > > > > > > > > > (ttm_pool_restore_valid(tt-
> > > > > > > > > > > restore)) {
> > > > > > > > > > +			struct ttm_pool_tt_restore
> > > > > > > > > > *restore =
> > > > > > > > > > tt-
> > > > > > > > > > > restore;
> > > > > > > > > > +
> > > > > > > > > > +			num_pages -= restore-
> > > > > > > > > > > alloced_pages;
> > > > > > > > > > +			order = min_t(unsigned
> > > > > > > > > > int,
> > > > > > > > > > order,
> > > > > > > > > > __fls(num_pages));
> > > > > > > > > > +			pages += restore-
> > > > > > > > > > > alloced_pages;
> > > > > > > > > > +			r =
> > > > > > > > > > ttm_pool_restore_tt(restore,
> > > > > > > > > > tt-
> > > > > > > > > > > backup, ctx);
> > > > > > > > > > +			if (r)
> > > > > > > > > > +				return r;
> > > > > > > > > > +			caching = restore-
> > > > > > > > > > > caching_divide;
> > > > > > > > > > +		}
> > > > > > > > > > +
> > > > > > > > > > +		tt->restore->pool = pool;
> > > > > > > > > > +	}
> > > > > > > > > Hui? Why is that part of the allocation function now?
> > > > > > > > > 
> > > > > > > > > At bare minimum I would expect that this is a new
> > > > > > > > > function.
> > > > > > > > It's because we now have partially backed up tts, so
> > > > > > > > the
> > > > > > > > restore is
> > > > > > > > interleaved on a per-page basis, replacing the backup
> > > > > > > > handles
> > > > > > > > with
> > > > > > > > page-pointers. I'll see if I can separate out at least
> > > > > > > > the
> > > > > > > > initialization here.
> > > > > > > Yeah, that kind of makes sense.
> > > > > > > 
> > > > > > > My expectation was just that we now have explicit
> > > > > > > ttm_pool_swapout()
> > > > > > > and
> > > > > > > ttm_pool_swapin() functions.
> > > > > > I fully understand, although in the allocation step, that
> > > > > > would
> > > > > > also
> > > > > > increase the memory pressure since we might momentarily
> > > > > > have
> > > > > > twice
> > > > > > the
> > > > > > bo-size allocated, if the shmem object was never swapped
> > > > > > out,
> > > > > > and
> > > > > > we
> > > > > > don't want to unnecessarily risc OOM at recover time,
> > > > > > although
> > > > > > that
> > > > > > should be a recoverable situation now. If the OOM receiver
> > > > > > can
> > > > > > free
> > > > > > up
> > > > > > system memory resources they can could potentially restart
> > > > > > the
> > > > > > recover.
> > > > > What I meant was more that we have ttm_pool_swapout() which
> > > > > does
> > > > > a
> > > > > mix
> > > > > of moving each page to a swap backend and freeing one by one.
> > > > > 
> > > > > And ttm_pool_swapin() which allocates a bit of memory
> > > > > (usually
> > > > > one
> > > > > huge
> > > > > page) and then copies the content back in from the swap
> > > > > backend.
> > > > > 
> > > > > Alternatively we could rename ttm_pool_alloc() into something
> > > > > like
> > > > > ttm_pool_populate() and ttm_pool_free() into
> > > > > ttm_pool_unpopulate(),
> > > > > but
> > > > > those names are not very descriptive either.
> > > > > 
> > > > > It's just that we now do a bit more than just alloc and free
> > > > > in
> > > > > those
> > > > > functions, so the naming doesn't really match that well any
> > > > > more.
> > > > So what about ttm_pool_alloc() and ttm_pool_recover/swapin(),
> > > > both
> > > > pointing to the same code, but _alloc() asserts that the tt
> > > > isn't
> > > > backed up?
> > > > 
> > > > That would give a clean interface at least.
> > > More or less ok. I would just put figuring out the gfp flags and
> > > the
> > > stuff inside the for (order... loop into separate functions. And
> > > then
> > > remove the if (tt->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP) from
> > > the
> > > pool.
> > > 
> > > In other words you trigger the back restore by calling a
> > > different
> > > function than the allocation one.
> > I'll take a look at this as well.
> 
> Ah, and BTW: It's perfectly possible that ttm_tt_free() is called 
> because a halve swapped TT is about to be destroyed!
> 
> If I'm not completely mistaken that is not handled gracefully when we
> try to always backup from in the ttm_tt_free() function.
> 
> So we clearly need the separation of move this TT to a backup (and 
> eventually only partially) and freeing it.

Hm. I'm not sure I follow completely.

The ttm_pool interface is currently:

ttm_pool_alloc() -> allocates and may recover from backup. May leave the
tt partially backed up. Called from ttm_tt_populate() or its driver
callbacks.

ttm_pool_backup_tt() -> Attempts to back up the not-already-backed-up
part of a tt. Called from ttm_tt_backup(), which is just a tt layer
wrapper. If called with purge==true, it frees the memory bypassing the
pool, returning it to the system directly.

ttm_pool_free() -> Frees a (potentially backed up or partially backed
up) tt. Called from ttm_tt_unpopulate() or its driver callbacks.
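
For reference, the pool entry points above look roughly like this
(prototypes approximated here for illustration, not copied from the
patch):

	/* prototypes approximated -- see the patch for the real ones */
	int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
			   struct ttm_operation_ctx *ctx);
	long ttm_pool_backup_tt(struct ttm_pool *pool, struct ttm_tt *tt,
				bool purge);
	void ttm_pool_free(struct ttm_pool *pool, struct ttm_tt *tt);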

So the backup functionality is implemented with a minimal change to
upper layers, and I don't think there is a correctness problem on
free().

So could you clarify a bit whether it is this interface you think needs
changing, or whether the implementation should better separate out the
backup functionality from the pool functionality?

Thanks,
Thomas




> 
> Christian.
> 
> > 
> > /Thomas
> > 
> > 
> > > > For a renaming change that touch all TTM drivers, I'd rather
> > > > put
> > > > that
> > > > as a last patch since getting acks for that from all TTM driver
> > > > maintainers seems like a hopeless undertaking.
> > > Yeah the acks are not the problem, merging it through the xe tree
> > > would be.
> > > 
> > > Christian.
> > > 
> > > 
> > > > /Thomas
> > > > 
> > > > 
> > > > 
> > > > 
> > > > > Christian.
> > > > > 
> > > > > > /Thomas
> 


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v14 3/8] drm/ttm/pool: Provide a helper to shrink pages
  2024-12-03 17:44                     ` Thomas Hellström
@ 2024-12-04  9:16                       ` Christian König
  2024-12-04  9:56                         ` Thomas Hellström
  0 siblings, 1 reply; 54+ messages in thread
From: Christian König @ 2024-12-04  9:16 UTC (permalink / raw)
  To: Thomas Hellström, intel-xe
  Cc: Somalapuram Amaranath, Matthew Brost, dri-devel, Paulo Zanoni,
	Simona Vetter

Am 03.12.24 um 18:44 schrieb Thomas Hellström:
> On Tue, 2024-12-03 at 17:46 +0100, Christian König wrote:
>> Am 03.12.24 um 17:43 schrieb Thomas Hellström:
>>> On Tue, 2024-12-03 at 17:39 +0100, Christian König wrote:
>>>> Am 03.12.24 um 17:31 schrieb Thomas Hellström:
>>>>> On Tue, 2024-12-03 at 17:20 +0100, Christian König wrote:
>>>>>> [SNIP]
>>>>>>>>>>> @@ -453,9 +601,36 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
>>>>>>>>>>>  	else
>>>>>>>>>>>  		gfp_flags |= GFP_HIGHUSER;
>>>>>>>>>>> 
>>>>>>>>>>> -	for (order = min_t(unsigned int, MAX_PAGE_ORDER, __fls(num_pages));
>>>>>>>>>>> -	     num_pages;
>>>>>>>>>>> -	     order = min_t(unsigned int, order, __fls(num_pages))) {
>>>>>>>>>>> +	order = min_t(unsigned int, MAX_PAGE_ORDER, __fls(num_pages));
>>>>>>>>>>> +
>>>>>>>>>>> +	if (tt->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP) {
>>>>>>>>>>> +		if (!tt->restore) {
>>>>>>>>>>> +			gfp_t gfp = GFP_KERNEL | __GFP_NOWARN;
>>>>>>>>>>> +
>>>>>>>>>>> +			if (ctx->gfp_retry_mayfail)
>>>>>>>>>>> +				gfp |= __GFP_RETRY_MAYFAIL;
>>>>>>>>>>> +
>>>>>>>>>>> +			tt->restore =
>>>>>>>>>>> +				kvzalloc(struct_size(tt->restore, old_pages,
>>>>>>>>>>> +						     (size_t)1 << order), gfp);
>>>>>>>>>>> +			if (!tt->restore)
>>>>>>>>>>> +				return -ENOMEM;
>>>>>>>>>>> +		} else if (ttm_pool_restore_valid(tt->restore)) {
>>>>>>>>>>> +			struct ttm_pool_tt_restore *restore = tt->restore;
>>>>>>>>>>> +
>>>>>>>>>>> +			num_pages -= restore->alloced_pages;
>>>>>>>>>>> +			order = min_t(unsigned int, order, __fls(num_pages));
>>>>>>>>>>> +			pages += restore->alloced_pages;
>>>>>>>>>>> +			r = ttm_pool_restore_tt(restore, tt->backup, ctx);
>>>>>>>>>>> +			if (r)
>>>>>>>>>>> +				return r;
>>>>>>>>>>> +			caching = restore->caching_divide;
>>>>>>>>>>> +		}
>>>>>>>>>>> +
>>>>>>>>>>> +		tt->restore->pool = pool;
>>>>>>>>>>> +	}
>>>>>>>>>> Hui? Why is that part of the allocation function now?
>>>>>>>>>>
>>>>>>>>>> At bare minimum I would expect that this is a new
>>>>>>>>>> function.
>>>>>>>>> It's because we now have partially backed up tts, so
>>>>>>>>> the
>>>>>>>>> restore is
>>>>>>>>> interleaved on a per-page basis, replacing the backup
>>>>>>>>> handles
>>>>>>>>> with
>>>>>>>>> page-pointers. I'll see if I can separate out at least
>>>>>>>>> the
>>>>>>>>> initialization here.
>>>>>>>> Yeah, that kind of makes sense.
>>>>>>>>
>>>>>>>> My expectation was just that we now have explicit
>>>>>>>> ttm_pool_swapout()
>>>>>>>> and
>>>>>>>> ttm_pool_swapin() functions.
>>>>>>> I fully understand, although in the allocation step, that
>>>>>>> would
>>>>>>> also
>>>>>>> increase the memory pressure since we might momentarily
>>>>>>> have
>>>>>>> twice
>>>>>>> the
>>>>>>> bo-size allocated, if the shmem object was never swapped
>>>>>>> out,
>>>>>>> and
>>>>>>> we
>>>>>>> don't want to unnecessarily risc OOM at recover time,
>>>>>>> although
>>>>>>> that
>>>>>>> should be a recoverable situation now. If the OOM receiver
>>>>>>> can
>>>>>>> free
>>>>>>> up
>>>>>>> system memory resources they can could potentially restart
>>>>>>> the
>>>>>>> recover.
>>>>>> What I meant was more that we have ttm_pool_swapout() which
>>>>>> does
>>>>>> a
>>>>>> mix
>>>>>> of moving each page to a swap backend and freeing one by one.
>>>>>>
>>>>>> And ttm_pool_swapin() which allocates a bit of memory
>>>>>> (usually
>>>>>> one
>>>>>> huge
>>>>>> page) and then copies the content back in from the swap
>>>>>> backend.
>>>>>>
>>>>>> Alternatively we could rename ttm_pool_alloc() into something
>>>>>> like
>>>>>> ttm_pool_populate() and ttm_pool_free() into
>>>>>> ttm_pool_unpopulate(),
>>>>>> but
>>>>>> those names are not very descriptive either.
>>>>>>
>>>>>> It's just that we now do a bit more than just alloc and free
>>>>>> in
>>>>>> those
>>>>>> functions, so the naming doesn't really match that well any
>>>>>> more.
>>>>> So what about ttm_pool_alloc() and ttm_pool_recover/swapin(),
>>>>> both
>>>>> pointing to the same code, but _alloc() asserts that the tt
>>>>> isn't
>>>>> backed up?
>>>>>
>>>>> That would give a clean interface at least.
>>>> More or less ok. I would just put figuring out the gfp flags and
>>>> the
>>>> stuff inside the for (order... loop into separate functions. And
>>>> then
>>>> remove the if (tt->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP) from
>>>> the
>>>> pool.
>>>>
>>>> In other words you trigger the back restore by calling a
>>>> different
>>>> function than the allocation one.
>>> I'll take a look at this as well.
>> Ah, and BTW: It's perfectly possible that ttm_tt_free() is called
>> because a halve swapped TT is about to be destroyed!
>>
>> If I'm not completely mistaken that is not handled gracefully when we
>> try to always backup from in the ttm_tt_free() function.
>>
>> So we clearly need the separation of move this TT to a backup (and
>> eventually only partially) and freeing it.
> Hm. I'm not sure I follow completely.
>
> The ttm_pool interface is currently:
>
> ttm_pool_alloc() -> allocs and may recover from backup. May leave
> partially backed up. Called from ttm_tt_populate() or its driver
> callbacks.

Yeah, that this is done by a single function looks really strange to me.

> ttm_pool_backup_tt() -> Attempts to back up (the not already backed up
> part of a tt. Called from ttm_tt_backup(), which is just a tt layer
> wrapper. If called with purge==true, then frees memory bypassing the
> pool to return it to the system directly.
>
> ttm_pool_free() -> Frees a (potentially backed up or partially backed
> up) tt. Called from ttm_tt_unpopulate() or its driver callbacks.

Ah! I missed that you have separated that functionality from the free path.

I had only seen the allocation path and thought I needed to clear that up first.

> So the backup functionality is implemented with a minimal change to
> upper layers, and I don't think there is a correctness problem on
> free().
>
> So could you clarify a bit if it is this interface you think needs
> changing or that the implementation is better at separating out the
> backup functionality from the pool functionality?

I think we should just make the ttm pool object take charge of
allocation, backup, restore and free operations on the TT objects.

And all of those are separate operations, they just internally share
steps to achieve what they should do.

BTW I really dislike that tt->restore is allocated dynamically. That is 
just another allocation which can cause problems.

We should probably have all the state necessary for the operation in the 
TT object.

Regards,
Christian.

>
> Thanks,
> Thomas
>
>
>
>
>> Christian.
>>
>>> /Thomas
>>>
>>>
>>>>> For a renaming change that touch all TTM drivers, I'd rather
>>>>> put
>>>>> that
>>>>> as a last patch since getting acks for that from all TTM driver
>>>>> maintainers seems like a hopeless undertaking.
>>>> Yeah the acks are not the problem, merging it through the xe tree
>>>> would be.
>>>>
>>>> Christian.
>>>>
>>>>
>>>>> /Thomas
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>> Christian.
>>>>>>
>>>>>>> /Thomas


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v14 3/8] drm/ttm/pool: Provide a helper to shrink pages
  2024-12-04  9:16                       ` Christian König
@ 2024-12-04  9:56                         ` Thomas Hellström
  2024-12-04 10:56                           ` Christian König
  0 siblings, 1 reply; 54+ messages in thread
From: Thomas Hellström @ 2024-12-04  9:56 UTC (permalink / raw)
  To: Christian König, intel-xe
  Cc: Somalapuram Amaranath, Matthew Brost, dri-devel, Paulo Zanoni,
	Simona Vetter

On Wed, 2024-12-04 at 10:16 +0100, Christian König wrote:
> Am 03.12.24 um 18:44 schrieb Thomas Hellström:
> > On Tue, 2024-12-03 at 17:46 +0100, Christian König wrote:
> > > Am 03.12.24 um 17:43 schrieb Thomas Hellström:
> > > > On Tue, 2024-12-03 at 17:39 +0100, Christian König wrote:
> > > > > Am 03.12.24 um 17:31 schrieb Thomas Hellström:
> > > > > > On Tue, 2024-12-03 at 17:20 +0100, Christian König wrote:
> > > > > > > [SNIP]
> > > > > > > > > > > > @@ -453,9 +601,36 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
> > > > > > > > > > > >  	else
> > > > > > > > > > > >  		gfp_flags |= GFP_HIGHUSER;
> > > > > > > > > > > > 
> > > > > > > > > > > > -	for (order = min_t(unsigned int, MAX_PAGE_ORDER, __fls(num_pages));
> > > > > > > > > > > > -	     num_pages;
> > > > > > > > > > > > -	     order = min_t(unsigned int, order, __fls(num_pages))) {
> > > > > > > > > > > > +	order = min_t(unsigned int, MAX_PAGE_ORDER, __fls(num_pages));
> > > > > > > > > > > > +
> > > > > > > > > > > > +	if (tt->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP) {
> > > > > > > > > > > > +		if (!tt->restore) {
> > > > > > > > > > > > +			gfp_t gfp = GFP_KERNEL | __GFP_NOWARN;
> > > > > > > > > > > > +
> > > > > > > > > > > > +			if (ctx->gfp_retry_mayfail)
> > > > > > > > > > > > +				gfp |= __GFP_RETRY_MAYFAIL;
> > > > > > > > > > > > +
> > > > > > > > > > > > +			tt->restore =
> > > > > > > > > > > > +				kvzalloc(struct_size(tt->restore, old_pages,
> > > > > > > > > > > > +						     (size_t)1 << order), gfp);
> > > > > > > > > > > > +			if (!tt->restore)
> > > > > > > > > > > > +				return -ENOMEM;
> > > > > > > > > > > > +		} else if (ttm_pool_restore_valid(tt->restore)) {
> > > > > > > > > > > > +			struct ttm_pool_tt_restore *restore = tt->restore;
> > > > > > > > > > > > +
> > > > > > > > > > > > +			num_pages -= restore->alloced_pages;
> > > > > > > > > > > > +			order = min_t(unsigned int, order, __fls(num_pages));
> > > > > > > > > > > > +			pages += restore->alloced_pages;
> > > > > > > > > > > > +			r = ttm_pool_restore_tt(restore, tt->backup, ctx);
> > > > > > > > > > > > +			if (r)
> > > > > > > > > > > > +				return r;
> > > > > > > > > > > > +			caching = restore->caching_divide;
> > > > > > > > > > > > +		}
> > > > > > > > > > > > +
> > > > > > > > > > > > +		tt->restore->pool = pool;
> > > > > > > > > > > > +	}
> > > > > > > > > > > Hui? Why is that part of the allocation function
> > > > > > > > > > > now?
> > > > > > > > > > > 
> > > > > > > > > > > At bare minimum I would expect that this is a new
> > > > > > > > > > > function.
> > > > > > > > > > It's because we now have partially backed up tts,
> > > > > > > > > > so
> > > > > > > > > > the
> > > > > > > > > > restore is
> > > > > > > > > > interleaved on a per-page basis, replacing the
> > > > > > > > > > backup
> > > > > > > > > > handles
> > > > > > > > > > with
> > > > > > > > > > page-pointers. I'll see if I can separate out at
> > > > > > > > > > least
> > > > > > > > > > the
> > > > > > > > > > initialization here.
> > > > > > > > > Yeah, that kind of makes sense.
> > > > > > > > > 
> > > > > > > > > My expectation was just that we now have explicit
> > > > > > > > > ttm_pool_swapout()
> > > > > > > > > and
> > > > > > > > > ttm_pool_swapin() functions.
> > > > > > > > I fully understand, although in the allocation step,
> > > > > > > > that
> > > > > > > > would
> > > > > > > > also
> > > > > > > > increase the memory pressure since we might momentarily
> > > > > > > > have
> > > > > > > > twice
> > > > > > > > the
> > > > > > > > bo-size allocated, if the shmem object was never
> > > > > > > > swapped
> > > > > > > > out,
> > > > > > > > and
> > > > > > > > we
> > > > > > > > don't want to unnecessarily risc OOM at recover time,
> > > > > > > > although
> > > > > > > > that
> > > > > > > > should be a recoverable situation now. If the OOM
> > > > > > > > receiver
> > > > > > > > can
> > > > > > > > free
> > > > > > > > up
> > > > > > > > system memory resources they can could potentially
> > > > > > > > restart
> > > > > > > > the
> > > > > > > > recover.
> > > > > > > What I meant was more that we have ttm_pool_swapout()
> > > > > > > which
> > > > > > > does
> > > > > > > a
> > > > > > > mix
> > > > > > > of moving each page to a swap backend and freeing one by
> > > > > > > one.
> > > > > > > 
> > > > > > > And ttm_pool_swapin() which allocates a bit of memory
> > > > > > > (usually
> > > > > > > one
> > > > > > > huge
> > > > > > > page) and then copies the content back in from the swap
> > > > > > > backend.
> > > > > > > 
> > > > > > > Alternatively we could rename ttm_pool_alloc() into
> > > > > > > something
> > > > > > > like
> > > > > > > ttm_pool_populate() and ttm_pool_free() into
> > > > > > > ttm_pool_unpopulate(),
> > > > > > > but
> > > > > > > those names are not very descriptive either.
> > > > > > > 
> > > > > > > It's just that we now do a bit more than just alloc and
> > > > > > > free
> > > > > > > in
> > > > > > > those
> > > > > > > functions, so the naming doesn't really match that well
> > > > > > > any
> > > > > > > more.
> > > > > > So what about ttm_pool_alloc() and
> > > > > > ttm_pool_recover/swapin(),
> > > > > > both
> > > > > > pointing to the same code, but _alloc() asserts that the tt
> > > > > > isn't
> > > > > > backed up?
> > > > > > 
> > > > > > That would give a clean interface at least.
> > > > > More or less ok. I would just put figuring out the gfp flags
> > > > > and
> > > > > the
> > > > > stuff inside the for (order... loop into separate functions.
> > > > > And
> > > > > then
> > > > > remove the if (tt->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP)
> > > > > from
> > > > > the
> > > > > pool.
> > > > > 
> > > > > In other words you trigger the back restore by calling a
> > > > > different
> > > > > function than the allocation one.
> > > > I'll take a look at this as well.
> > > Ah, and BTW: It's perfectly possible that ttm_tt_free() is called
> > > because a halve swapped TT is about to be destroyed!
> > > 
> > > If I'm not completely mistaken that is not handled gracefully
> > > when we
> > > try to always backup from in the ttm_tt_free() function.
> > > 
> > > So we clearly need the separation of move this TT to a backup
> > > (and
> > > eventually only partially) and freeing it.
> > Hm. I'm not sure I follow completely.
> > 
> > The ttm_pool interface is currently:
> > 
> > ttm_pool_alloc() -> allocs and may recover from backup. May leave
> > partially backed up. Called from ttm_tt_populate() or its driver
> > callbacks.
> 
> Yeah that this is done by a single function looks really strange to
> me.
> 
> > ttm_pool_backup_tt() -> Attempts to back up (the not already backed
> > up
> > part of a tt. Called from ttm_tt_backup(), which is just a tt layer
> > wrapper. If called with purge==true, then frees memory bypassing
> > the
> > pool to return it to the system directly.
> > 
> > ttm_pool_free() -> Frees a (potentially backed up or partially
> > backed
> > up) tt. Called from ttm_tt_unpopulate() or its driver callbacks.
> 
> Ah! I missed that you have separated that functionality from the free
> path.
> 
> I've only saw the allocation path and though I need to clear that up
> first.
> 
> > So the backup functionality is implemented with a minimal change to
> > upper layers, and I don't think there is a correctness problem on
> > free().
> > 
> > So could you clarify a bit if it is this interface you think needs
> > changing or that the implementation is better at separating out the
> > backup functionality from the pool functionality?
> 
> I think we should just make the ttm pool object take charge of 
> allocation, backup, restore and free operation on the TT objects.
> 
> And all of those are separate operations, they just internally share 
> steps to archive what they should do.

So are we looking at an interface change like:

ttm_pool_alloc() // no recover. Errors if backed-up-data present.
ttm_pool_alloc_and_recover() // because you can't alloc first and then
recover in a memory-efficient manner, since you need to interleave.
ttm_pool_backup() // as currently
ttm_pool_drop_backed_up() //drops the backed-up data if any.
ttm_pool_free() // frees all data. errors if backed-up-data present.
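
In approximate prototype form (argument lists are only a sketch at this
point, not something taken from the patch):

	/* argument lists are guesses, for illustration only */
	int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
			   struct ttm_operation_ctx *ctx);
	int ttm_pool_alloc_and_recover(struct ttm_pool *pool, struct ttm_tt *tt,
				       struct ttm_operation_ctx *ctx);
	long ttm_pool_backup(struct ttm_pool *pool, struct ttm_tt *tt,
			     bool purge);
	void ttm_pool_drop_backed_up(struct ttm_pool *pool, struct ttm_tt *tt);
	void ttm_pool_free(struct ttm_pool *pool, struct ttm_tt *tt);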

Is this what you mean?

> 
> BTW I really dislike that tt->restore is allocated dynamically. That
> is 
> just another allocation which can cause problems.

> 
> We should probably have all the state necessary for the operation in
> the 
> TT object.

Initially it was done this way. But that meant a pre-allocated struct
page-pointer array of 1 << MAX_PAGE_ORDER size (2MiB) for each
ttm_tt. That led to a patch reducing MAX_PAGE_ORDER to PMD-size order,
but as you might remember, that had to be ripped out because the PMD
size macros aren't constant across all architectures. IIRC it was ARM
causing compilation failures, and Linus wasn't happy.

So, enter the dynamic allocation, which is temporary and 1/512 of the
size of the memory we need to allocate for the buffer object. IIRC that
was discussed with Matt when he reviewed, and we concluded that it
should be ok. I think this approach leads to less memory pressure than
if we'd keep that array around all the time for *all* the allocated
bos, and the allocation never happens during reclaim.
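
(As a concrete example: restoring a 2 MiB, order-9 chunk needs room for
512 page pointers, i.e. roughly 4 KiB with 8-byte pointers, which is
where the 1/512 figure comes from.)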

Thanks,
Thomas



> 
> Regards,
> Christian.
> 
> > 
> > Thanks,
> > Thomas
> > 
> > 
> > 
> > 
> > > Christian.
> > > 
> > > > /Thomas
> > > > 
> > > > 
> > > > > > For a renaming change that touch all TTM drivers, I'd
> > > > > > rather
> > > > > > put
> > > > > > that
> > > > > > as a last patch since getting acks for that from all TTM
> > > > > > driver
> > > > > > maintainers seems like a hopeless undertaking.
> > > > > Yeah the acks are not the problem, merging it through the xe
> > > > > tree
> > > > > would be.
> > > > > 
> > > > > Christian.
> > > > > 
> > > > > 
> > > > > > /Thomas
> > > > > > 
> > > > > > 
> > > > > > 
> > > > > > 
> > > > > > > Christian.
> > > > > > > 
> > > > > > > > /Thomas
> 


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v14 3/8] drm/ttm/pool: Provide a helper to shrink pages
  2024-12-04  9:56                         ` Thomas Hellström
@ 2024-12-04 10:56                           ` Christian König
  2024-12-04 11:09                             ` Thomas Hellström
  0 siblings, 1 reply; 54+ messages in thread
From: Christian König @ 2024-12-04 10:56 UTC (permalink / raw)
  To: Thomas Hellström, intel-xe
  Cc: Somalapuram Amaranath, Matthew Brost, dri-devel, Paulo Zanoni,
	Simona Vetter

Am 04.12.24 um 10:56 schrieb Thomas Hellström:
> On Wed, 2024-12-04 at 10:16 +0100, Christian König wrote:
>> Am 03.12.24 um 18:44 schrieb Thomas Hellström:
>>> On Tue, 2024-12-03 at 17:46 +0100, Christian König wrote:
>>>> Am 03.12.24 um 17:43 schrieb Thomas Hellström:
>>>>> On Tue, 2024-12-03 at 17:39 +0100, Christian König wrote:
>>>>>> Am 03.12.24 um 17:31 schrieb Thomas Hellström:
>>>>>>> On Tue, 2024-12-03 at 17:20 +0100, Christian König wrote:
>>>>>>>> [SNIP]
>>>>>>>>>>>>> @@ -453,9 +601,36 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
>>>>>>>>>>>>>  	else
>>>>>>>>>>>>>  		gfp_flags |= GFP_HIGHUSER;
>>>>>>>>>>>>> 
>>>>>>>>>>>>> -	for (order = min_t(unsigned int, MAX_PAGE_ORDER, __fls(num_pages));
>>>>>>>>>>>>> -	     num_pages;
>>>>>>>>>>>>> -	     order = min_t(unsigned int, order, __fls(num_pages))) {
>>>>>>>>>>>>> +	order = min_t(unsigned int, MAX_PAGE_ORDER, __fls(num_pages));
>>>>>>>>>>>>> +
>>>>>>>>>>>>> +	if (tt->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP) {
>>>>>>>>>>>>> +		if (!tt->restore) {
>>>>>>>>>>>>> +			gfp_t gfp = GFP_KERNEL | __GFP_NOWARN;
>>>>>>>>>>>>> +
>>>>>>>>>>>>> +			if (ctx->gfp_retry_mayfail)
>>>>>>>>>>>>> +				gfp |= __GFP_RETRY_MAYFAIL;
>>>>>>>>>>>>> +
>>>>>>>>>>>>> +			tt->restore =
>>>>>>>>>>>>> +				kvzalloc(struct_size(tt->restore, old_pages,
>>>>>>>>>>>>> +						     (size_t)1 << order), gfp);
>>>>>>>>>>>>> +			if (!tt->restore)
>>>>>>>>>>>>> +				return -ENOMEM;
>>>>>>>>>>>>> +		} else if (ttm_pool_restore_valid(tt->restore)) {
>>>>>>>>>>>>> +			struct ttm_pool_tt_restore *restore = tt->restore;
>>>>>>>>>>>>> +
>>>>>>>>>>>>> +			num_pages -= restore->alloced_pages;
>>>>>>>>>>>>> +			order = min_t(unsigned int, order, __fls(num_pages));
>>>>>>>>>>>>> +			pages += restore->alloced_pages;
>>>>>>>>>>>>> +			r = ttm_pool_restore_tt(restore, tt->backup, ctx);
>>>>>>>>>>>>> +			if (r)
>>>>>>>>>>>>> +				return r;
>>>>>>>>>>>>> +			caching = restore->caching_divide;
>>>>>>>>>>>>> +		}
>>>>>>>>>>>>> +
>>>>>>>>>>>>> +		tt->restore->pool = pool;
>>>>>>>>>>>>> +	}
>>>>>>>>>>>> Hui? Why is that part of the allocation function
>>>>>>>>>>>> now?
>>>>>>>>>>>>
>>>>>>>>>>>> At bare minimum I would expect that this is a new
>>>>>>>>>>>> function.
>>>>>>>>>>> It's because we now have partially backed up tts,
>>>>>>>>>>> so
>>>>>>>>>>> the
>>>>>>>>>>> restore is
>>>>>>>>>>> interleaved on a per-page basis, replacing the
>>>>>>>>>>> backup
>>>>>>>>>>> handles
>>>>>>>>>>> with
>>>>>>>>>>> page-pointers. I'll see if I can separate out at
>>>>>>>>>>> least
>>>>>>>>>>> the
>>>>>>>>>>> initialization here.
>>>>>>>>>> Yeah, that kind of makes sense.
>>>>>>>>>>
>>>>>>>>>> My expectation was just that we now have explicit
>>>>>>>>>> ttm_pool_swapout()
>>>>>>>>>> and
>>>>>>>>>> ttm_pool_swapin() functions.
>>>>>>>>> I fully understand, although in the allocation step,
>>>>>>>>> that
>>>>>>>>> would
>>>>>>>>> also
>>>>>>>>> increase the memory pressure since we might momentarily
>>>>>>>>> have
>>>>>>>>> twice
>>>>>>>>> the
>>>>>>>>> bo-size allocated, if the shmem object was never
>>>>>>>>> swapped
>>>>>>>>> out,
>>>>>>>>> and
>>>>>>>>> we
>>>>>>>>> don't want to unnecessarily risc OOM at recover time,
>>>>>>>>> although
>>>>>>>>> that
>>>>>>>>> should be a recoverable situation now. If the OOM
>>>>>>>>> receiver
>>>>>>>>> can
>>>>>>>>> free
>>>>>>>>> up
>>>>>>>>> system memory resources they can could potentially
>>>>>>>>> restart
>>>>>>>>> the
>>>>>>>>> recover.
>>>>>>>> What I meant was more that we have ttm_pool_swapout()
>>>>>>>> which
>>>>>>>> does
>>>>>>>> a
>>>>>>>> mix
>>>>>>>> of moving each page to a swap backend and freeing one by
>>>>>>>> one.
>>>>>>>>
>>>>>>>> And ttm_pool_swapin() which allocates a bit of memory
>>>>>>>> (usually
>>>>>>>> one
>>>>>>>> huge
>>>>>>>> page) and then copies the content back in from the swap
>>>>>>>> backend.
>>>>>>>>
>>>>>>>> Alternatively we could rename ttm_pool_alloc() into
>>>>>>>> something
>>>>>>>> like
>>>>>>>> ttm_pool_populate() and ttm_pool_free() into
>>>>>>>> ttm_pool_unpopulate(),
>>>>>>>> but
>>>>>>>> those names are not very descriptive either.
>>>>>>>>
>>>>>>>> It's just that we now do a bit more than just alloc and
>>>>>>>> free
>>>>>>>> in
>>>>>>>> those
>>>>>>>> functions, so the naming doesn't really match that well
>>>>>>>> any
>>>>>>>> more.
>>>>>>> So what about ttm_pool_alloc() and
>>>>>>> ttm_pool_recover/swapin(),
>>>>>>> both
>>>>>>> pointing to the same code, but _alloc() asserts that the tt
>>>>>>> isn't
>>>>>>> backed up?
>>>>>>>
>>>>>>> That would give a clean interface at least.
>>>>>> More or less ok. I would just put figuring out the gfp flags
>>>>>> and
>>>>>> the
>>>>>> stuff inside the for (order... loop into separate functions.
>>>>>> And
>>>>>> then
>>>>>> remove the if (tt->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP)
>>>>>> from
>>>>>> the
>>>>>> pool.
>>>>>>
>>>>>> In other words you trigger the back restore by calling a
>>>>>> different
>>>>>> function than the allocation one.
>>>>> I'll take a look at this as well.
>>>> Ah, and BTW: It's perfectly possible that ttm_tt_free() is called
>>>> because a halve swapped TT is about to be destroyed!
>>>>
>>>> If I'm not completely mistaken that is not handled gracefully
>>>> when we
>>>> try to always backup from in the ttm_tt_free() function.
>>>>
>>>> So we clearly need the separation of move this TT to a backup
>>>> (and
>>>> eventually only partially) and freeing it.
>>> Hm. I'm not sure I follow completely.
>>>
>>> The ttm_pool interface is currently:
>>>
>>> ttm_pool_alloc() -> allocs and may recover from backup. May leave
>>> partially backed up. Called from ttm_tt_populate() or its driver
>>> callbacks.
>> Yeah that this is done by a single function looks really strange to
>> me.
>>
>>> ttm_pool_backup_tt() -> Attempts to back up (the not already backed
>>> up
>>> part of a tt. Called from ttm_tt_backup(), which is just a tt layer
>>> wrapper. If called with purge==true, then frees memory bypassing
>>> the
>>> pool to return it to the system directly.
>>>
>>> ttm_pool_free() -> Frees a (potentially backed up or partially
>>> backed
>>> up) tt. Called from ttm_tt_unpopulate() or its driver callbacks.
>> Ah! I missed that you have separated that functionality from the free
>> path.
>>
>> I've only saw the allocation path and though I need to clear that up
>> first.
>>
>>> So the backup functionality is implemented with a minimal change to
>>> upper layers, and I don't think there is a correctness problem on
>>> free().
>>>
>>> So could you clarify a bit if it is this interface you think needs
>>> changing or that the implementation is better at separating out the
>>> backup functionality from the pool functionality?
>> I think we should just make the ttm pool object take charge of
>> allocation, backup, restore and free operation on the TT objects.
>>
>> And all of those are separate operations, they just internally share
>> steps to archive what they should do.
> So are we looking at an interface change like:
>
> ttm_pool_alloc() // no recover. Errors if backed-up-data present.
> ttm_pool_alloc_and_recover() // because you can't alloc first and then
> recover in a memory-efficient manner, since you need to interleave.
> ttm_pool_backup() // as currently
> ttm_pool_drop_backed_up() //drops the backed-up data if any.
> ttm_pool_free() // frees all data. errors if backed-up-data present.
>
> Is this what you mean?

Yes, exactly that.

>
>> BTW I really dislike that tt->restore is allocated dynamically. That
>> is
>> just another allocation which can cause problems.
>> We should probably have all the state necessary for the operation in
>> the
>> TT object.
> Initially it was done this way. But that meant a pre-allocated struct
> page-pointer array the of 1 << MAX_PAGE_ORDER size (2MiB) for each
> ttm_tt. That lead to a patch to reduce the MAX_PAGE_ORDER to PMD size
> order, but  as you might remember, that needed to be ripped out because
> the PMD size macros aren't constant across all architectures. IIRC it
> was ARM causing compilation failures, and Linus wasn't happy.

Yeah, I do remember that. But I don't fully get why you need this 
page-pointer array in the first place?

>
> So, enter the dynamic allocation which is temporary, and 1/512 of the
> size of the memory we need to allocate for the buffer object. IIRC that
> was discussed with Matt when he reiewed and we concluded that it should
> be ok. I think this approach leads to less memory pressure than if we'd
> keep that array around all the time for *all* the allocated bos, and
> the allocation is never during reclaim.

Hui? How do you avoid having to allocate that during reclaim?

I absolutely don't see that in the code currently.

Regards,
Christian.

>
> Thanks,
> Thomas
>
>
>
>> Regards,
>> Christian.
>>
>>> Thanks,
>>> Thomas
>>>
>>>
>>>
>>>
>>>> Christian.
>>>>
>>>>> /Thomas
>>>>>
>>>>>
>>>>>>> For a renaming change that touch all TTM drivers, I'd
>>>>>>> rather
>>>>>>> put
>>>>>>> that
>>>>>>> as a last patch since getting acks for that from all TTM
>>>>>>> driver
>>>>>>> maintainers seems like a hopeless undertaking.
>>>>>> Yeah the acks are not the problem, merging it through the xe
>>>>>> tree
>>>>>> would be.
>>>>>>
>>>>>> Christian.
>>>>>>
>>>>>>
>>>>>>> /Thomas
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>> Christian.
>>>>>>>>
>>>>>>>>> /Thomas


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v14 3/8] drm/ttm/pool: Provide a helper to shrink pages
  2024-12-04 10:56                           ` Christian König
@ 2024-12-04 11:09                             ` Thomas Hellström
  2024-12-04 11:24                               ` Christian König
  0 siblings, 1 reply; 54+ messages in thread
From: Thomas Hellström @ 2024-12-04 11:09 UTC (permalink / raw)
  To: Christian König, intel-xe
  Cc: Somalapuram Amaranath, Matthew Brost, dri-devel, Paulo Zanoni,
	Simona Vetter

On Wed, 2024-12-04 at 11:56 +0100, Christian König wrote:
> Am 04.12.24 um 10:56 schrieb Thomas Hellström:
> > On Wed, 2024-12-04 at 10:16 +0100, Christian König wrote:
> > > Am 03.12.24 um 18:44 schrieb Thomas Hellström:
> > > > On Tue, 2024-12-03 at 17:46 +0100, Christian König wrote:
> > > > > Am 03.12.24 um 17:43 schrieb Thomas Hellström:
> > > > > > On Tue, 2024-12-03 at 17:39 +0100, Christian König wrote:
> > > > > > > Am 03.12.24 um 17:31 schrieb Thomas Hellström:
> > > > > > > > On Tue, 2024-12-03 at 17:20 +0100, Christian König
> > > > > > > > wrote:
> > > > > > > > > [SNIP]
> > > > > > > > > > > > > > @@ -453,9 +601,36 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
> > > > > > > > > > > > > >  	else
> > > > > > > > > > > > > >  		gfp_flags |= GFP_HIGHUSER;
> > > > > > > > > > > > > > 
> > > > > > > > > > > > > > -	for (order = min_t(unsigned int, MAX_PAGE_ORDER, __fls(num_pages));
> > > > > > > > > > > > > > -	     num_pages;
> > > > > > > > > > > > > > -	     order = min_t(unsigned int, order, __fls(num_pages))) {
> > > > > > > > > > > > > > +	order = min_t(unsigned int, MAX_PAGE_ORDER, __fls(num_pages));
> > > > > > > > > > > > > > +
> > > > > > > > > > > > > > +	if (tt->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP) {
> > > > > > > > > > > > > > +		if (!tt->restore) {
> > > > > > > > > > > > > > +			gfp_t gfp = GFP_KERNEL | __GFP_NOWARN;
> > > > > > > > > > > > > > +
> > > > > > > > > > > > > > +			if (ctx->gfp_retry_mayfail)
> > > > > > > > > > > > > > +				gfp |= __GFP_RETRY_MAYFAIL;
> > > > > > > > > > > > > > +
> > > > > > > > > > > > > > +			tt->restore =
> > > > > > > > > > > > > > +				kvzalloc(struct_size(tt->restore, old_pages,
> > > > > > > > > > > > > > +						     (size_t)1 << order), gfp);
> > > > > > > > > > > > > > +			if (!tt->restore)
> > > > > > > > > > > > > > +				return -ENOMEM;
> > > > > > > > > > > > > > +		} else if (ttm_pool_restore_valid(tt->restore)) {
> > > > > > > > > > > > > > +			struct ttm_pool_tt_restore *restore = tt->restore;
> > > > > > > > > > > > > > +
> > > > > > > > > > > > > > +			num_pages -= restore->alloced_pages;
> > > > > > > > > > > > > > +			order = min_t(unsigned int, order, __fls(num_pages));
> > > > > > > > > > > > > > +			pages += restore->alloced_pages;
> > > > > > > > > > > > > > +			r = ttm_pool_restore_tt(restore, tt->backup, ctx);
> > > > > > > > > > > > > > +			if (r)
> > > > > > > > > > > > > > +				return r;
> > > > > > > > > > > > > > +			caching = restore->caching_divide;
> > > > > > > > > > > > > > +		}
> > > > > > > > > > > > > > +
> > > > > > > > > > > > > > +		tt->restore->pool = pool;
> > > > > > > > > > > > > > +	}
> > > > > > > > > > > > > Hui? Why is that part of the allocation
> > > > > > > > > > > > > function
> > > > > > > > > > > > > now?
> > > > > > > > > > > > > 
> > > > > > > > > > > > > At bare minimum I would expect that this is a
> > > > > > > > > > > > > new
> > > > > > > > > > > > > function.
> > > > > > > > > > > > It's because we now have partially backed up
> > > > > > > > > > > > tts,
> > > > > > > > > > > > so
> > > > > > > > > > > > the
> > > > > > > > > > > > restore is
> > > > > > > > > > > > interleaved on a per-page basis, replacing the
> > > > > > > > > > > > backup
> > > > > > > > > > > > handles
> > > > > > > > > > > > with
> > > > > > > > > > > > page-pointers. I'll see if I can separate out
> > > > > > > > > > > > at
> > > > > > > > > > > > least
> > > > > > > > > > > > the
> > > > > > > > > > > > initialization here.
> > > > > > > > > > > Yeah, that kind of makes sense.
> > > > > > > > > > > 
> > > > > > > > > > > My expectation was just that we now have explicit
> > > > > > > > > > > ttm_pool_swapout()
> > > > > > > > > > > and
> > > > > > > > > > > ttm_pool_swapin() functions.
> > > > > > > > > > I fully understand, although in the allocation
> > > > > > > > > > step,
> > > > > > > > > > that
> > > > > > > > > > would
> > > > > > > > > > also
> > > > > > > > > > increase the memory pressure since we might
> > > > > > > > > > momentarily
> > > > > > > > > > have
> > > > > > > > > > twice
> > > > > > > > > > the
> > > > > > > > > > bo-size allocated, if the shmem object was never
> > > > > > > > > > swapped
> > > > > > > > > > out,
> > > > > > > > > > and
> > > > > > > > > > we
> > > > > > > > > > don't want to unnecessarily risc OOM at recover
> > > > > > > > > > time,
> > > > > > > > > > although
> > > > > > > > > > that
> > > > > > > > > > should be a recoverable situation now. If the OOM
> > > > > > > > > > receiver
> > > > > > > > > > can
> > > > > > > > > > free
> > > > > > > > > > up
> > > > > > > > > > system memory resources they can could potentially
> > > > > > > > > > restart
> > > > > > > > > > the
> > > > > > > > > > recover.
> > > > > > > > > What I meant was more that we have ttm_pool_swapout()
> > > > > > > > > which
> > > > > > > > > does
> > > > > > > > > a
> > > > > > > > > mix
> > > > > > > > > of moving each page to a swap backend and freeing one
> > > > > > > > > by
> > > > > > > > > one.
> > > > > > > > > 
> > > > > > > > > And ttm_pool_swapin() which allocates a bit of memory
> > > > > > > > > (usually
> > > > > > > > > one
> > > > > > > > > huge
> > > > > > > > > page) and then copies the content back in from the
> > > > > > > > > swap
> > > > > > > > > backend.
> > > > > > > > > 
> > > > > > > > > Alternatively we could rename ttm_pool_alloc() into
> > > > > > > > > something
> > > > > > > > > like
> > > > > > > > > ttm_pool_populate() and ttm_pool_free() into
> > > > > > > > > ttm_pool_unpopulate(),
> > > > > > > > > but
> > > > > > > > > those names are not very descriptive either.
> > > > > > > > > 
> > > > > > > > > It's just that we now do a bit more than just alloc
> > > > > > > > > and
> > > > > > > > > free
> > > > > > > > > in
> > > > > > > > > those
> > > > > > > > > functions, so the naming doesn't really match that
> > > > > > > > > well
> > > > > > > > > any
> > > > > > > > > more.
> > > > > > > > So what about ttm_pool_alloc() and
> > > > > > > > ttm_pool_recover/swapin(),
> > > > > > > > both
> > > > > > > > pointing to the same code, but _alloc() asserts that
> > > > > > > > the tt
> > > > > > > > isn't
> > > > > > > > backed up?
> > > > > > > > 
> > > > > > > > That would give a clean interface at least.
> > > > > > > More or less ok. I would just put figuring out the gfp
> > > > > > > flags
> > > > > > > and
> > > > > > > the
> > > > > > > stuff inside the for (order... loop into separate
> > > > > > > functions.
> > > > > > > And
> > > > > > > then
> > > > > > > remove the if (tt->page_flags &
> > > > > > > TTM_TT_FLAG_PRIV_BACKED_UP)
> > > > > > > from
> > > > > > > the
> > > > > > > pool.
> > > > > > > 
> > > > > > > In other words you trigger the back restore by calling a
> > > > > > > different
> > > > > > > function than the allocation one.
> > > > > > I'll take a look at this as well.
> > > > > Ah, and BTW: It's perfectly possible that ttm_tt_free() is
> > > > > called
> > > > > because a halve swapped TT is about to be destroyed!
> > > > > 
> > > > > If I'm not completely mistaken that is not handled gracefully
> > > > > when we
> > > > > try to always backup from in the ttm_tt_free() function.
> > > > > 
> > > > > So we clearly need the separation of move this TT to a backup
> > > > > (and
> > > > > eventually only partially) and freeing it.
> > > > Hm. I'm not sure I follow completely.
> > > > 
> > > > The ttm_pool interface is currently:
> > > > 
> > > > ttm_pool_alloc() -> allocs and may recover from backup. May
> > > > leave
> > > > partially backed up. Called from ttm_tt_populate() or its
> > > > driver
> > > > callbacks.
> > > Yeah that this is done by a single function looks really strange
> > > to
> > > me.
> > > 
> > > > ttm_pool_backup_tt() -> Attempts to back up (the not already
> > > > backed
> > > > up
> > > > part of a tt. Called from ttm_tt_backup(), which is just a tt
> > > > layer
> > > > wrapper. If called with purge==true, then frees memory
> > > > bypassing
> > > > the
> > > > pool to return it to the system directly.
> > > > 
> > > > ttm_pool_free() -> Frees a (potentially backed up or partially
> > > > backed
> > > > up) tt. Called from ttm_tt_unpopulate() or its driver
> > > > callbacks.
> > > Ah! I missed that you have separated that functionality from the
> > > free
> > > path.
> > > 
> > > I've only saw the allocation path and though I need to clear that
> > > up
> > > first.
> > > 
> > > > So the backup functionality is implemented with a minimal
> > > > change to
> > > > upper layers, and I don't think there is a correctness problem
> > > > on
> > > > free().
> > > > 
> > > > So could you clarify a bit if it is this interface you think
> > > > needs
> > > > changing or that the implementation is better at separating out
> > > > the
> > > > backup functionality from the pool functionality?
> > > I think we should just make the ttm pool object take charge of
> > > allocation, backup, restore and free operation on the TT objects.
> > > 
> > > And all of those are separate operations, they just internally
> > > share
> > > steps to archive what they should do.
> > So are we looking at an interface change like:
> > 
> > ttm_pool_alloc() // no recover. Errors if backed-up-data present.
> > ttm_pool_alloc_and_recover() // because you can't alloc first and
> > then
> > recover in a memory-efficient manner, since you need to interleave.
> > ttm_pool_backup() // as currently
> > ttm_pool_drop_backed_up() //drops the backed-up data if any.
> > ttm_pool_free() // frees all data. errors if backed-up-data
> > present.
> > 
> > Is this what you mean?
> 
> Yes, exactly that.

OK, then sure I'll update.

> 
> > 
> > > BTW I really dislike that tt->restore is allocated dynamically.
> > > That
> > > is
> > > just another allocation which can cause problems.
> > > We should probably have all the state necessary for the operation
> > > in
> > > the
> > > TT object.
> > Initially it was done this way. But that meant a pre-allocated
> > struct
> > page-pointer array the of 1 << MAX_PAGE_ORDER size (2MiB) for each
> > ttm_tt. That lead to a patch to reduce the MAX_PAGE_ORDER to PMD
> > size
> > order, but  as you might remember, that needed to be ripped out
> > because
> > the PMD size macros aren't constant across all architectures. IIRC
> > it
> > was ARM causing compilation failures, and Linus wasn't happy.
> 
> Yeah, I do remember that. But I don't fully get why you need this 
> page-pointer array in the first place?

So the TTM page-pointer array holds the backup handles when backed up.
During recovery, we allocate a (potentially huge) page and populate the
TTM page-pointer array with pointers into that. Meanwhile we need to
keep the backup handles for the recover phase in the restore structure,
and in the middle of the recover phase you might hit an -EINTR.
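
A rough picture of the restore state, for illustration only (field names
taken from the hunk quoted above, types guessed, and the real struct has
more members):

	/* approximation -- not the actual patch definition */
	struct ttm_pool_tt_restore {
		struct ttm_pool *pool;
		pgoff_t alloced_pages;		/* pages already turned back into
						 * real page pointers */
		struct page **caching_divide;	/* how far caching transitions
						 * have progressed */
		struct page *old_pages[];	/* backup handles parked here while
						 * the current chunk is restored, so
						 * an -EINTR can be resumed later */
	};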

Thanks,
Thomas


> 
> > 
> > So, enter the dynamic allocation which is temporary, and 1/512 of
> > the
> > size of the memory we need to allocate for the buffer object. IIRC
> > that
> > was discussed with Matt when he reiewed and we concluded that it
> > should
> > be ok. I think this approach leads to less memory pressure than if
> > we'd
> > keep that array around all the time for *all* the allocated bos,
> > and
> > the allocation is never during reclaim.
> 
> Hui? How do you avoid having to allocate that during reclaim?
> 
> I absolutely don't see that on the code currently.

During reclaim we back up only. When this allocation happens we're
about to recover, which means we are not in reclaim.

/Thomas


> 
> Regards,
> Christian.
> 
> > 
> > Thanks,
> > Thomas
> > 
> > 
> > 
> > > Regards,
> > > Christian.
> > > 
> > > > Thanks,
> > > > Thomas
> > > > 
> > > > 
> > > > 
> > > > 
> > > > > Christian.
> > > > > 
> > > > > > /Thomas
> > > > > > 
> > > > > > 
> > > > > > > > For a renaming change that touch all TTM drivers, I'd
> > > > > > > > rather
> > > > > > > > put
> > > > > > > > that
> > > > > > > > as a last patch since getting acks for that from all
> > > > > > > > TTM
> > > > > > > > driver
> > > > > > > > maintainers seems like a hopeless undertaking.
> > > > > > > Yeah the acks are not the problem, merging it through the
> > > > > > > xe
> > > > > > > tree
> > > > > > > would be.
> > > > > > > 
> > > > > > > Christian.
> > > > > > > 
> > > > > > > 
> > > > > > > > /Thomas
> > > > > > > > 
> > > > > > > > 
> > > > > > > > 
> > > > > > > > 
> > > > > > > > > Christian.
> > > > > > > > > 
> > > > > > > > > > /Thomas
> 


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v14 3/8] drm/ttm/pool: Provide a helper to shrink pages
  2024-12-04 11:09                             ` Thomas Hellström
@ 2024-12-04 11:24                               ` Christian König
  2024-12-04 12:24                                 ` Thomas Hellström
  2024-12-18 10:07                                 ` Thomas Hellström
  0 siblings, 2 replies; 54+ messages in thread
From: Christian König @ 2024-12-04 11:24 UTC (permalink / raw)
  To: Thomas Hellström, intel-xe
  Cc: Somalapuram Amaranath, Matthew Brost, dri-devel, Paulo Zanoni,
	Simona Vetter

Am 04.12.24 um 12:09 schrieb Thomas Hellström:
> [SNIP]

>>>> BTW I really dislike that tt->restore is allocated dynamically.
>>>> That
>>>> is
>>>> just another allocation which can cause problems.
>>>> We should probably have all the state necessary for the operation
>>>> in
>>>> the
>>>> TT object.
>>> Initially it was done this way. But that meant a pre-allocated
>>> struct
>>> page-pointer array the of 1 << MAX_PAGE_ORDER size (2MiB) for each
>>> ttm_tt. That lead to a patch to reduce the MAX_PAGE_ORDER to PMD
>>> size
>>> order, but  as you might remember, that needed to be ripped out
>>> because
>>> the PMD size macros aren't constant across all architectures. IIRC
>>> it
>>> was ARM causing compilation failures, and Linus wasn't happy.
>> Yeah, I do remember that. But I don't fully get why you need this
>> page-pointer array in the first place?
> So the TTM page-pointer array holds the backup handles when backed up.
> During recovery, We allocate a (potentially huge) page and populate the
> TTM page-pointer array with pointers into that. Meanwhile we need to
> keep the backup handles for the recover phase in the restore structure,
> and in the middle of the recover phase you might hit an -EINTR.

I still don't see the problem to be honest.

What you basically do on recovery is the following:
1. Allocate a bunch of contiguous memory of order X.
2. Take the first entry from the page_array, convert that to your backup 
handle and copy the data back into the just allocated contiguous memory.
3. Replace the first entry in the page array with the struct page 
pointer of the allocated contiguous memory.
4. Take the next entry from the page_array, convert that to your backup 
handle and copy the data back into the just allocated contiguous memory.
5. Replace the next entry in the page_array with the struct page pointer 
+ 1 of the allocated contiguous memory.
6. Repeat until the contiguous memory is fully recovered and we jump to 
1 again.
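
In rough pseudo-code, per chunk (helper names are made up here purely to
illustrate the steps above, they are not from the patch):

	/* illustrative sketch only -- not the actual patch code */
	for (i = 0; i < (1UL << order); ++i) {
		/* the slot still holds a backup handle, not a page pointer */
		unsigned long handle = slot_to_backup_handle(tt->pages[start + i]);

		copy_back_from_backup(tt->backup, handle, huge_page + i);
		/* replace the slot in place with the real page pointer */
		tt->pages[start + i] = huge_page + i;
	}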

What exactly do you need this pre-allocated struct page-pointer array of 
1 << MAX_PAGE_ORDER for?

Sorry, I must really be missing something here.

Regards,
Christian.

>
> Thanks,
> Thomas
>

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v14 3/8] drm/ttm/pool: Provide a helper to shrink pages
  2024-12-04 11:24                               ` Christian König
@ 2024-12-04 12:24                                 ` Thomas Hellström
  2024-12-18 10:07                                 ` Thomas Hellström
  1 sibling, 0 replies; 54+ messages in thread
From: Thomas Hellström @ 2024-12-04 12:24 UTC (permalink / raw)
  To: Christian König, intel-xe
  Cc: Somalapuram Amaranath, Matthew Brost, dri-devel, Paulo Zanoni,
	Simona Vetter

On Wed, 2024-12-04 at 12:24 +0100, Christian König wrote:
> Am 04.12.24 um 12:09 schrieb Thomas Hellström:
> > [SNIP]
> 
> > > > > BTW I really dislike that tt->restore is allocated
> > > > > dynamically.
> > > > > That
> > > > > is
> > > > > just another allocation which can cause problems.
> > > > > We should probably have all the state necessary for the
> > > > > operation
> > > > > in
> > > > > the
> > > > > TT object.
> > > > Initially it was done this way. But that meant a pre-allocated
> > > > struct
> > > > page-pointer array the of 1 << MAX_PAGE_ORDER size (2MiB) for
> > > > each
> > > > ttm_tt. That lead to a patch to reduce the MAX_PAGE_ORDER to
> > > > PMD
> > > > size
> > > > order, but  as you might remember, that needed to be ripped out
> > > > because
> > > > the PMD size macros aren't constant across all architectures.
> > > > IIRC
> > > > it
> > > > was ARM causing compilation failures, and Linus wasn't happy.
> > > Yeah, I do remember that. But I don't fully get why you need this
> > > page-pointer array in the first place?
> > So the TTM page-pointer array holds the backup handles when backed
> > up.
> > During recovery, We allocate a (potentially huge) page and populate
> > the
> > TTM page-pointer array with pointers into that. Meanwhile we need
> > to
> > keep the backup handles for the recover phase in the restore
> > structure,
> > and in the middle of the recover phase you might hit an -EINTR.
> 
> I still don't see the problem to be honest.
> 
> What you basically do on recovery is the following:
> 1. Allocate a bunch of contiguous memory of order X.
> 2. Take the first entry from the page_array, convert that to your
> backup 
> handle and copy the data back into the just allocated contiguous
> memory.
> 3. Replace the first entry in the page array with the struct page 
> pointer of the allocated contiguous memory.
> 4. Take the next entry from the page_array, convert that to your
> backup 
> handle and copy the data back into the just allocated contiguous
> memory.
> 5. Replace the next entry in the page_array with the struct page
> pointer 
> + 1 of the allocated contiguous memory.
> 6. Repeat until the contiguous memory is fully recovered and we jump
> to 
> 1 again.
> 
> What exactly do you need this pre-allocated struct page-pointer array
> of 
> 1 << MAX_PAGE_ORDER for?
> 
> Sorry, I must really be missing something here.

It was a year or more ago that I put this patch together, so TBH I
can't recall the details, other than that I'm pretty sure I tried that
and decided against it. It could have been that the changes were too
invasive, and it's pretty easy to break this code even without invasive
changes...

However, with an accessor function for the old page pointers and one
for the new ones, I imagine it should be possible.

I'll give it a try and see what can be done.

/Thomas


> 
> Regards,
> Christian.
> 
> > 
> > Thanks,
> > Thomas


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v14 3/8] drm/ttm/pool: Provide a helper to shrink pages
  2024-12-04 11:24                               ` Christian König
  2024-12-04 12:24                                 ` Thomas Hellström
@ 2024-12-18 10:07                                 ` Thomas Hellström
  1 sibling, 0 replies; 54+ messages in thread
From: Thomas Hellström @ 2024-12-18 10:07 UTC (permalink / raw)
  To: Christian König, intel-xe
  Cc: Somalapuram Amaranath, Matthew Brost, dri-devel, Paulo Zanoni,
	Simona Vetter

On Wed, 2024-12-04 at 12:24 +0100, Christian König wrote:
> Am 04.12.24 um 12:09 schrieb Thomas Hellström:
> > [SNIP]
> 
> > > > > BTW I really dislike that tt->restore is allocated
> > > > > dynamically.
> > > > > That
> > > > > is
> > > > > just another allocation which can cause problems.
> > > > > We should probably have all the state necessary for the
> > > > > operation
> > > > > in
> > > > > the
> > > > > TT object.
> > > > Initially it was done this way. But that meant a pre-allocated
> > > > struct
> > > > page-pointer array the of 1 << MAX_PAGE_ORDER size (2MiB) for
> > > > each
> > > > ttm_tt. That lead to a patch to reduce the MAX_PAGE_ORDER to
> > > > PMD
> > > > size
> > > > order, but  as you might remember, that needed to be ripped out
> > > > because
> > > > the PMD size macros aren't constant across all architectures.
> > > > IIRC
> > > > it
> > > > was ARM causing compilation failures, and Linus wasn't happy.
> > > Yeah, I do remember that. But I don't fully get why you need this
> > > page-pointer array in the first place?
> > So the TTM page-pointer array holds the backup handles when backed up.
> > During recovery, we allocate a (potentially huge) page and populate the
> > TTM page-pointer array with pointers into that. Meanwhile we need to
> > keep the backup handles for the recover phase in the restore structure,
> > and in the middle of the recover phase you might hit an -EINTR.
> 
> I still don't see the problem to be honest.
> 
> What you basically do on recovery is the following:
> 1. Allocate a bunch of contiguous memory of order X.
> 2. Take the first entry from the page_array, convert that to your backup
> handle and copy the data back into the just allocated contiguous memory.
> 3. Replace the first entry in the page array with the struct page
> pointer of the allocated contiguous memory.
> 4. Take the next entry from the page_array, convert that to your backup
> handle and copy the data back into the just allocated contiguous memory.
> 5. Replace the next entry in the page_array with the struct page pointer
> + 1 of the allocated contiguous memory.
> 6. Repeat until the contiguous memory is fully recovered and we jump to
> 1 again.

OK, so the reason I skipped this previously was apparently
inconsistency: since the dma_addr array is fully populated, it would
look awkward if the pages array were only partly populated.

But I reworked this in the latest version to follow the above, so now
both arrays are populated once the whole new multi-order page has been
successfully read back from backup. The patch becomes bigger, and I
also added a restructuring patch, but OTOH some of the additions are
documentation.

The (now small) kmalloc at the start of restore is still present,
though. I figure that if it fails, the restore would fail anyway, so it
shouldn't be an issue.
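
For reference, the reworked flow roughly follows the numbered steps
quoted above. A minimal sketch (ttm_backup_page_ptr_is_handle(),
ttm_backup_page_ptr_to_handle(), ttm_backup_copy_page() and
ttm_backup_drop() are from the patch; allocating directly with
alloc_pages() instead of through the pool, and the error handling, are
simplified for illustration):

/* Sketch only: restore one 1 << order chunk of a ttm_tt in place. */
static int restore_chunk_sketch(struct ttm_backup *backup,
				struct page **pages, unsigned int order,
				bool intr)
{
	struct page *new = alloc_pages(GFP_KERNEL, order);	/* step 1 */
	pgoff_t i, nr = 1UL << order;
	int ret;

	if (!new)
		return -ENOMEM;

	for (i = 0; i < nr; ++i) {
		/* The partially backed-up case where the slot still holds
		 * a real page is omitted here for brevity. */
		if (ttm_backup_page_ptr_is_handle(pages[i])) {
			unsigned long handle =
				ttm_backup_page_ptr_to_handle(pages[i]);

			/* Steps 2 and 4: copy the backed-up data back. */
			ret = ttm_backup_copy_page(backup, new + i, handle, intr);
			if (ret)
				/* Slots 0..i-1 already hold new page pointers,
				 * the remaining handles stay in place, so the
				 * restore can be resumed after e.g. -EINTR. */
				return ret;
			ttm_backup_drop(backup, handle);
		}
		/* Steps 3 and 5: replace the slot with the new page pointer. */
		pages[i] = new + i;
	}

	return 0;	/* Step 6: the caller moves on to the next chunk. */
}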

On an unrelated issue, I notice that HighMem pages skip cache
transitioning. That makes sense since they don't have a kernel linear
map, but content written through other cached mappings (page clearing,
restore, resume from hibernation) might still remain in a PIPT cache,
right? Don't we need to clflush these pages on the wb->wc transition
and ensure any resume-from-hibernation content is similarly flushed?
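
For concreteness, the kind of flush I mean might look roughly like the
sketch below, mirroring what the pool's caching transition does on x86
(set_pages_array_wc()/_uc() are the x86 set_memory helpers, and
drm_clflush_pages() is the existing DRM helper). Whether, and exactly
where, such a call is actually needed is the open question:

/*
 * Sketch only: flush CPU caches before moving pages to a non-cached
 * state, so that data written through a cached mapping of a HighMem
 * page (page clearing, restore, resume from hibernation) isn't left
 * dirty in a PIPT cache.
 */
static int apply_caching_flushed(struct page **pages, unsigned int nr,
				 enum ttm_caching caching)
{
	if (caching != ttm_cached)
		drm_clflush_pages(pages, nr);

	switch (caching) {
	case ttm_cached:
		return 0;
	case ttm_write_combined:
		return set_pages_array_wc(pages, nr);
	case ttm_uncached:
		return set_pages_array_uc(pages, nr);
	}

	return 0;
}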

/Thomas


> 
> What exactly do you need this pre-allocated struct page-pointer array of
> 1 << MAX_PAGE_ORDER for?
> 
> Sorry, I must really be missing something here.
> 
> Regards,
> Christian.
> 
> > 
> > Thanks,
> > Thomas


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v14 3/8] drm/ttm/pool: Provide a helper to shrink pages
  2024-12-03 14:51       ` Christian König
  2024-12-03 15:50         ` Thomas Hellström
@ 2024-12-18 10:15         ` Thomas Hellström
  1 sibling, 0 replies; 54+ messages in thread
From: Thomas Hellström @ 2024-12-18 10:15 UTC (permalink / raw)
  To: Christian König, intel-xe
  Cc: Somalapuram Amaranath, Matthew Brost, dri-devel, Paulo Zanoni,
	Simona Vetter

On Tue, 2024-12-03 at 15:51 +0100, Christian König wrote:
> Am 03.12.24 um 14:42 schrieb Thomas Hellström:
> > On Tue, 2024-12-03 at 14:12 +0100, Christian König wrote:
> > > Am 15.11.24 um 16:01 schrieb Thomas Hellström:
> > > > Provide a helper to shrink ttm_tt page-vectors on a per-page
> > > > basis. A ttm_backup backend could then in theory get away with
> > > > allocating a single temporary page for each struct ttm_tt.
> > > > 
> > > > This is accomplished by splitting larger pages before trying to
> > > > back them up.
> > > > 
> > > > In the future we could allow ttm_backup to handle backing up
> > > > large pages as well, but currently there's no benefit in
> > > > doing that, since the shmem backup backend would have to
> > > > split those anyway to avoid allocating too much temporary
> > > > memory, and if the backend instead inserts pages into the
> > > > swap-cache, those are split on reclaim by the core.
> > > > 
> > > > Due to potential backup- and recover errors, allow partially swapped
> > > > out struct ttm_tt's, although mark them as swapped out, stopping them
> > > > from being swapped out a second time. More details in the ttm_pool.c
> > > > DOC section.
> > > > 
> > > > v2:
> > > > - A couple of cleanups and error fixes in ttm_pool_back_up_tt.
> > > > - s/back_up/backup/
> > > > - Add a writeback parameter to the exported interface.
> > > > v8:
> > > > - Use a struct for flags for readability (Matt Brost)
> > > > - Address misc other review comments (Matt Brost)
> > > > v9:
> > > > - Update the kerneldoc for the ttm_tt::backup field.
> > > > v10:
> > > > - Rebase.
> > > > v13:
> > > > - Rebase on ttm_backup interface change. Update kerneldoc.
> > > > - Rebase and adjust ttm_tt_is_swapped().
> > > > 
> > > > Cc: Christian König <christian.koenig@amd.com>
> > > > Cc: Somalapuram Amaranath <Amaranath.Somalapuram@amd.com>
> > > > Cc: Matthew Brost <matthew.brost@intel.com>
> > > > Cc: <dri-devel@lists.freedesktop.org>
> > > > Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> > > > Reviewed-by: Matthew Brost <matthew.brost@intel.com>
> > > > ---
> > > >    drivers/gpu/drm/ttm/ttm_pool.c | 396 +++++++++++++++++++++++++++++++--
> > > >    drivers/gpu/drm/ttm/ttm_tt.c   |  37 +++
> > > >    include/drm/ttm/ttm_pool.h     |   6 +
> > > >    include/drm/ttm/ttm_tt.h       |  32 ++-
> > > >    4 files changed, 457 insertions(+), 14 deletions(-)
> > > > 
> > > > diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
> > > > index 8504dbe19c1a..f58864439edb 100644
> > > > --- a/drivers/gpu/drm/ttm/ttm_pool.c
> > > > +++ b/drivers/gpu/drm/ttm/ttm_pool.c
> > > > @@ -41,6 +41,7 @@
> > > >    #include <asm/set_memory.h>
> > > >    #endif
> > > >    
> > > > +#include <drm/ttm/ttm_backup.h>
> > > >    #include <drm/ttm/ttm_pool.h>
> > > >    #include <drm/ttm/ttm_tt.h>
> > > >    #include <drm/ttm/ttm_bo.h>
> > > > @@ -58,6 +59,32 @@ struct ttm_pool_dma {
> > > >    	unsigned long vaddr;
> > > >    };
> > > >    
> > > > +/**
> > > > + * struct ttm_pool_tt_restore - State representing restore from backup
> > > > + * @alloced_pages: Total number of already allocated pages for the ttm_tt.
> > > > + * @restored_pages: Number of (sub) pages restored from swap for this
> > > > + *		     chunk of 1 << @order pages.
> > > > + * @first_page: The ttm page ptr corresponding to @old_pages[0].
> > > > + * @caching_divide: Page pointer where subsequent pages are cached.
> > > > + * @old_pages: Backup copy of page pointers that were replaced by the new
> > > > + *	       page allocation.
> > > > + * @pool: The pool used for page allocation while restoring.
> > > > + * @order: The order of the last page allocated while restoring.
> > > > + *
> > > > + * Recovery from backup might fail when we've recovered less than the
> > > > + * full ttm_tt. In order not to lose any data (yet), keep information
> > > > + * around that allows us to restart a failed ttm backup recovery.
> > > > + */
> > > > +struct ttm_pool_tt_restore {
> > > > +	pgoff_t alloced_pages;
> > > > +	pgoff_t restored_pages;
> > > > +	struct page **first_page;
> > > > +	struct page **caching_divide;
> > > > +	struct ttm_pool *pool;
> > > > +	unsigned int order;
> > > > +	struct page *old_pages[];
> > > > +};
> > > > +
> > > >    static unsigned long page_pool_size;
> > > >    
> > > >    MODULE_PARM_DESC(page_pool_size, "Number of pages in the WC/UC/DMA pool");
> > > > @@ -354,11 +381,105 @@ static unsigned int ttm_pool_page_order(struct ttm_pool *pool, struct page *p)
> > > >    	return p->private;
> > > >    }
> > > >    
> > > > +/*
> > > > + * To be able to insert single pages into backup directly,
> > > > + * we need to split multi-order page allocations and make them look
> > > > + * like single-page allocations.
> > > > + */
> > > > +static void ttm_pool_split_for_swap(struct ttm_pool *pool, struct page *p)
> > > > +{
> > > > +	unsigned int order = ttm_pool_page_order(pool, p);
> > > > +	pgoff_t nr;
> > > > +
> > > > +	if (!order)
> > > > +		return;
> > > > +
> > > > +	split_page(p, order);
> > > What exactly should split_page() do here and why is that
> > > necessary?
> > > 
> > > IIRC that function just updated the reference count and updated
> > > things
> > > like page owner tracking and memcg accounting. Which should both
> > > be
> > > completely irrelevant here.
> > > 
> > > Or do you just do that so that you can free each page
> > > individually?
> > Yes, exactly. For a 2MiB page we'd otherwise have to allocate 2MiB
> > of shmem backing storage, potentially from kernel reserves, before
> > we could actually free anything. Since (currently) the shmem objects
> > we use are 4K-page only, this should make the process "allocate shmem
> > and back up" much less likely to deplete the kernel memory reserves.
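
To spell that out, the intended flow is roughly the sketch below:
split, then back up and free one 4K page at a time, so the peak extra
allocation stays around a single shmem page. ttm_pool_split_for_swap()
and ttm_pool_page_order() are from the patch; the
ttm_backup_backup_page() signature and the handle-to-page-pointer
helper are assumed here purely for illustration:

/* Sketch only: back up one (possibly multi-order) pool page. */
static int backup_pool_page_sketch(struct ttm_pool *pool,
				   struct ttm_backup *backup,
				   struct page **page_slot)
{
	struct page *p = *page_slot;
	pgoff_t i, nr = 1UL << ttm_pool_page_order(pool, p);

	/* Make the sub-pages individually freeable. */
	ttm_pool_split_for_swap(pool, p);

	for (i = 0; i < nr; ++i) {
		/* Assumed helper: copies the page into shmem and returns a
		 * handle, or a negative error code. */
		long handle = ttm_backup_backup_page(backup, p + i);

		if (handle < 0)
			return handle;	/* partial backup is allowed, see the DOC section */

		/* Assumed helper: encode the handle into the page slot. */
		page_slot[i] = ttm_backup_handle_to_page_ptr(handle);
		__free_pages(p + i, 0);	/* release the memory immediately */
	}

	return 0;
}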
> 
> Ah, yes that makes totally sense now.
> 
> > 
> > Taking a step back and looking at other potential solutions, like
> > direct insertion into the swap cache: even when inserting a 2MiB
> > page into the swap cache, vmscan would split it before writeback,
> > and it still didn't appear very stable. So inserting one 4K page at
> > a time seemed necessary. If I were to take a guess, that's why shmem,
> > when configured for 2MiB pages as with i915, also splits the pages
> > before moving to swap-cache / writeback.
> > 
> > 
> > > > +	nr = 1UL << order;
> > > > +	while (nr--)
> > > > +		(p++)->private = 0;
> > > > +}
> > > > +
> > > > +/**
> > > > + * DOC: Partial backup and restoration of a struct ttm_tt.
> > > > + *
> > > > + * Swapout using ttm_backup_backup_page() and swapin using
> > > > + * ttm_backup_copy_page() may fail.
> > > > + * The former most likely due to lack of swap-space or memory, the latter due
> > > > + * to lack of memory or because of signal interruption during waits.
> > > > + *
> > > > + * Backup failure is easily handled by using a ttm_tt pages vector that holds
> > > > + * both swap entries and page pointers. This has to be taken into account when
> > > > + * restoring such a ttm_tt from backup, and when freeing it while backed up.
> > > > + * When restoring, for simplicity, new pages are actually allocated from the
> > > > + * pool and the contents of any old pages are copied in and then the old pages
> > > > + * are released.
> > > > + *
> > > > + * For restoration failures, the struct ttm_pool_tt_restore holds sufficient state
> > > > + * to be able to resume an interrupted restore, and that structure is freed once
> > > > + * the restoration is complete. If the struct ttm_tt is destroyed while there
> > > > + * is a valid struct ttm_pool_tt_restore attached, that is also properly taken
> > > > + * care of.
> > > > + */
> > > > +
> > > > +static bool ttm_pool_restore_valid(const struct ttm_pool_tt_restore *restore)
> > > > +{
> > > > +	return restore && restore->restored_pages < (1 << restore->order);
> > > > +}
> > > > +
> > > > +static int ttm_pool_restore_tt(struct ttm_pool_tt_restore *restore,
> > > > +			       struct ttm_backup *backup,
> > > > +			       struct ttm_operation_ctx *ctx)
> > > > +{
> > > > +	unsigned int i, nr = 1 << restore->order;
> > > > +	int ret = 0;
> > > > +
> > > > +	if (!ttm_pool_restore_valid(restore))
> > > > +		return 0;
> > > > +
> > > > +	for (i = restore->restored_pages; i < nr; ++i) {
> > > > +		struct page *p = restore->old_pages[i];
> > > > +
> > > > +		if (ttm_backup_page_ptr_is_handle(p)) {
> > > > +			unsigned long handle = ttm_backup_page_ptr_to_handle(p);
> > > > +
> > > > +			if (handle == 0)
> > > > +				continue;
> > > > +
> > > > +			ret = ttm_backup_copy_page
> > > > +				(backup, restore->first_page[i],
> > > > +				 handle, ctx->interruptible);
> > > That coding style looks really odd; I didn't even notice that it
> > > is a function call initially.
> > > 
> > > Maybe put everything under the if into a separate function.
> > At a minimum, I'll fix up the formatting here.
> > 
> > > > +			if (ret)
> > > > +				break;
> > > > +
> > > > +			ttm_backup_drop(backup, handle);
> > > > +		} else if (p) {
> > > > +			/*
> > > > +			 * We could probably avoid splitting the old page
> > > > +			 * using clever logic, but ATM we don't care, as
> > > > +			 * we prioritize releasing memory ASAP. Note that
> > > > +			 * here, the old retained page is always write-back
> > > > +			 * cached.
> > > > +			 */
> > > > +			ttm_pool_split_for_swap(restore->pool, p);
> > > > +			copy_highpage(restore->first_page[i], p);
> > > > +			__free_pages(p, 0);
> > > > +		}
> > > > +
> > > > +		restore->restored_pages++;
> > > > +		restore->old_pages[i] = NULL;
> > > > +		cond_resched();
> > > There is a push to remove cond_resched(), see here:
> > > https://patchwork.kernel.org/project/linux-mm/patch/20231107230822.371443-30-ankur.a.arora@oracle.com/
> > > 
> > > Not sure in which discussion that removal went, but IIRC we should
> > > not add any new users of it.
> > I'll read up on that and remove if needed. I'm curious how / if
> > voluntary preemption is going to be handled.
> 
> I didn't fully understand it either, but the push kind of seems to be
> that drivers, or in this case subsystems, are not supposed to mess with
> cond_resched() any more and should just rely on preemptive kernels.
> 
> > > 

So I took a deeper look into this. From what I can tell, cond_resched()
is to be replaced by some other implicit preemption mechanism, and it
seems that series is still being worked on, but meanwhile there's
nothing ensuring that latency-causing long loops will be preempted.

So IMHO, if it is deemed necessary to keep a cond_resched() here in the
meantime, it should be easy to simply remove it once that series lands.

But OTOH, the cond_resched() in this code was added without benchmark
justification, so I have removed it. If needed, it could be re-added
pending the merge of the new preemption code.

Thanks,
Thomas







^ permalink raw reply	[flat|nested] 54+ messages in thread

end of thread, other threads: [~2024-12-18 10:15 UTC | newest]

Thread overview: 54+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-11-15 15:01 [PATCH v14 0/8] TTM shrinker helpers and xe buffer object shrinker Thomas Hellström
2024-11-15 15:01 ` [PATCH v14 1/8] drm/ttm: Balance ttm_resource_cursor_init() and ttm_resource_cursor_fini() Thomas Hellström
2024-11-20 10:51   ` Christian König
2024-11-21 15:54     ` Thomas Hellström
2024-11-15 15:01 ` [PATCH v14 2/8] drm/ttm: Provide a shmem backup implementation Thomas Hellström
2024-11-19 13:40   ` Christian König
2024-11-20  7:58     ` Thomas Hellström
2024-11-20  9:24       ` Christian König
2024-11-20 10:34         ` Thomas Hellström
2024-11-20 10:50           ` Christian König
2024-11-20 11:07             ` Thomas Hellström
2024-11-20 11:20         ` Thomas Hellström
2024-11-15 15:01 ` [PATCH v14 3/8] drm/ttm/pool: Provide a helper to shrink pages Thomas Hellström
2024-12-03 13:12   ` Christian König
2024-12-03 13:42     ` Thomas Hellström
2024-12-03 14:51       ` Christian König
2024-12-03 15:50         ` Thomas Hellström
2024-12-03 16:20           ` Christian König
2024-12-03 16:31             ` Thomas Hellström
2024-12-03 16:39               ` Christian König
2024-12-03 16:43                 ` Thomas Hellström
2024-12-03 16:46                   ` Christian König
2024-12-03 17:44                     ` Thomas Hellström
2024-12-04  9:16                       ` Christian König
2024-12-04  9:56                         ` Thomas Hellström
2024-12-04 10:56                           ` Christian König
2024-12-04 11:09                             ` Thomas Hellström
2024-12-04 11:24                               ` Christian König
2024-12-04 12:24                                 ` Thomas Hellström
2024-12-18 10:07                                 ` Thomas Hellström
2024-12-18 10:15         ` Thomas Hellström
2024-11-15 15:01 ` [PATCH v14 4/8] drm/ttm: Use fault-injection to test error paths Thomas Hellström
2024-11-15 15:01 ` [PATCH v14 5/8] drm/ttm: Add a macro to perform LRU iteration Thomas Hellström
2024-11-15 15:01 ` [PATCH v14 6/8] drm/ttm: Add helpers for shrinking Thomas Hellström
2024-11-15 15:01 ` [PATCH v14 7/8] drm/xe: Add a shrinker for xe bos Thomas Hellström
2024-11-15 15:01 ` [PATCH v14 8/8] drm/xe: Increase the XE_PL_TT watermark Thomas Hellström
2024-11-15 15:06 ` ✓ CI.Patch_applied: success for TTM shrinker helpers and xe buffer object shrinker (rev13) Patchwork
2024-11-15 15:07 ` ✗ CI.checkpatch: warning " Patchwork
2024-11-15 15:08 ` ✓ CI.KUnit: success " Patchwork
2024-11-15 15:17 ` ✗ CI.Build: failure " Patchwork
2024-11-16 11:26 ` ✓ CI.Patch_applied: success for TTM shrinker helpers and xe buffer object shrinker (rev14) Patchwork
2024-11-16 11:26 ` ✗ CI.checkpatch: warning " Patchwork
2024-11-16 11:28 ` ✓ CI.KUnit: success " Patchwork
2024-11-16 11:46 ` ✓ CI.Build: " Patchwork
2024-11-16 11:46 ` ✗ CI.Hooks: failure " Patchwork
2024-11-16 11:47 ` ✗ CI.checksparse: warning " Patchwork
2024-11-18 12:37 ` ✓ CI.Patch_applied: success for TTM shrinker helpers and xe buffer object shrinker (rev15) Patchwork
2024-11-18 12:37 ` ✗ CI.checkpatch: warning " Patchwork
2024-11-18 12:38 ` ✓ CI.KUnit: success " Patchwork
2024-11-18 12:56 ` ✓ CI.Build: " Patchwork
2024-11-18 12:56 ` ✗ CI.Hooks: failure " Patchwork
2024-11-18 12:58 ` ✗ CI.checksparse: warning " Patchwork
2024-11-18 13:16 ` ✓ CI.BAT: success " Patchwork
2024-11-18 16:29 ` ✗ CI.FULL: failure " Patchwork

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).