Intel-XE Archive on lore.kernel.org
* [PATCH v3 0/5] Convert multiple bind ops to 1 job
@ 2024-05-29 18:31 Matthew Brost
  2024-05-29 18:31 ` [PATCH v3 1/5] drm/xe: s/xe_tile_migrate_engine/xe_tile_migrate_exec_queue Matthew Brost
                   ` (12 more replies)
  0 siblings, 13 replies; 16+ messages in thread
From: Matthew Brost @ 2024-05-29 18:31 UTC (permalink / raw)
  To: intel-xe; +Cc: Matthew Brost

Continuation of merging parts of [1]. Patch #2 in this series is quite
large, but I am unsure how to split it without breaking functionality.

Tested with [2].

v2:
 - Rebase
 - Add error injection patch
 - Fix dma-fence reservation for binds
v3:
 - Rebase
 - Error injection patch omitted in this rev

Matt

[1] https://patchwork.freedesktop.org/series/125608/
[2] https://patchwork.freedesktop.org/series/129606/

Matthew Brost (5):
  drm/xe: s/xe_tile_migrate_engine/xe_tile_migrate_exec_queue
  drm/xe: Add xe_vm_pgtable_update_op to xe_vma_ops
  drm/xe: Convert multiple bind ops into single job
  drm/xe: Update VM trace events
  drm/xe: Update PT layer with better error handling

 drivers/gpu/drm/xe/xe_bo_types.h |    2 +
 drivers/gpu/drm/xe/xe_migrate.c  |  305 ++++---
 drivers/gpu/drm/xe/xe_migrate.h  |   34 +-
 drivers/gpu/drm/xe/xe_pt.c       | 1276 ++++++++++++++++++++----------
 drivers/gpu/drm/xe/xe_pt.h       |   14 +-
 drivers/gpu/drm/xe/xe_pt_types.h |   48 ++
 drivers/gpu/drm/xe/xe_trace.h    |   10 +-
 drivers/gpu/drm/xe/xe_vm.c       |  623 +++++----------
 drivers/gpu/drm/xe/xe_vm.h       |    2 +
 drivers/gpu/drm/xe/xe_vm_types.h |   41 +-
 10 files changed, 1298 insertions(+), 1057 deletions(-)

-- 
2.34.1


^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH v3 1/5] drm/xe: s/xe_tile_migrate_engine/xe_tile_migrate_exec_queue
  2024-05-29 18:31 [PATCH v3 0/5] Convert multiple bind ops to 1 job Matthew Brost
@ 2024-05-29 18:31 ` Matthew Brost
  2024-05-29 18:31 ` [PATCH v3 2/5] drm/xe: Add xe_vm_pgtable_update_op to xe_vma_ops Matthew Brost
                   ` (11 subsequent siblings)
  12 siblings, 0 replies; 16+ messages in thread
From: Matthew Brost @ 2024-05-29 18:31 UTC (permalink / raw)
  To: intel-xe; +Cc: Matthew Brost, Jonathan Cavitt

Engine is old nomenclature, replace with exec queue.

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
---
 drivers/gpu/drm/xe/xe_migrate.c | 9 ++++-----
 drivers/gpu/drm/xe/xe_migrate.h | 2 +-
 2 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c
index cccffaf3db06..384d33feac6a 100644
--- a/drivers/gpu/drm/xe/xe_migrate.c
+++ b/drivers/gpu/drm/xe/xe_migrate.c
@@ -84,15 +84,14 @@ struct xe_migrate {
 #define MAX_PTE_PER_SDI 0x1FE
 
 /**
- * xe_tile_migrate_engine() - Get this tile's migrate engine.
+ * xe_tile_migrate_exec_queue() - Get this tile's migrate exec queue.
  * @tile: The tile.
  *
- * Returns the default migrate engine of this tile.
- * TODO: Perhaps this function is slightly misplaced, and even unneeded?
+ * Returns the default migrate exec queue of this tile.
  *
- * Return: The default migrate engine
+ * Return: The default migrate exec queue
  */
-struct xe_exec_queue *xe_tile_migrate_engine(struct xe_tile *tile)
+struct xe_exec_queue *xe_tile_migrate_exec_queue(struct xe_tile *tile)
 {
 	return tile->migrate->q;
 }
diff --git a/drivers/gpu/drm/xe/xe_migrate.h b/drivers/gpu/drm/xe/xe_migrate.h
index 951f19318ea4..a5bcaafe4a99 100644
--- a/drivers/gpu/drm/xe/xe_migrate.h
+++ b/drivers/gpu/drm/xe/xe_migrate.h
@@ -106,5 +106,5 @@ xe_migrate_update_pgtables(struct xe_migrate *m,
 
 void xe_migrate_wait(struct xe_migrate *m);
 
-struct xe_exec_queue *xe_tile_migrate_engine(struct xe_tile *tile);
+struct xe_exec_queue *xe_tile_migrate_exec_queue(struct xe_tile *tile);
 #endif
-- 
2.34.1



* [PATCH v3 2/5] drm/xe: Add xe_vm_pgtable_update_op to xe_vma_ops
  2024-05-29 18:31 [PATCH v3 0/5] Convert multiple bind ops to 1 job Matthew Brost
  2024-05-29 18:31 ` [PATCH v3 1/5] drm/xe: s/xe_tile_migrate_engine/xe_tile_migrate_exec_queue Matthew Brost
@ 2024-05-29 18:31 ` Matthew Brost
  2024-05-29 18:31 ` [PATCH v3 3/5] drm/xe: Convert multiple bind ops into single job Matthew Brost
                   ` (10 subsequent siblings)
  12 siblings, 0 replies; 16+ messages in thread
From: Matthew Brost @ 2024-05-29 18:31 UTC (permalink / raw)
  To: intel-xe; +Cc: Matthew Brost, Oak Zeng, Thomas Hellström, Jonathan Cavitt

Each xe_vma_op resolves to 0-3 pt_ops. Add storage for the pt_ops to
xe_vma_ops, dynamically allocated based on the number and types of
xe_vma_op in the xe_vma_ops list. Only the allocation is implemented in
this patch.

This will help with converting xe_vma_ops (multiple xe_vma_op) into an
atomic update unit.
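
As a rough illustration of the 0-3 mapping, here is a standalone,
hypothetical model (none of these names exist in the driver): a MAP
deferred to fault handling needs no PT op, while a REMAP may need an
unmap plus optional prev/next rebinds.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of a GPUVA op kind; not part of the xe driver. */
enum demo_op_kind { DEMO_MAP, DEMO_REMAP, DEMO_UNMAP, DEMO_PREFETCH };

struct demo_remap {
	bool has_prev, skip_prev;	/* prev VMA exists / is unchanged */
	bool has_next, skip_next;	/* next VMA exists / is unchanged */
};

/* How many PT update ops a single GPUVA op resolves to (0-3). */
static int demo_num_pt_ops(enum demo_op_kind kind, bool deferred_map,
			   const struct demo_remap *r)
{
	switch (kind) {
	case DEMO_MAP:
		/* A fault-mode, non-immediate bind is deferred: no PT op. */
		return deferred_map ? 0 : 1;
	case DEMO_UNMAP:
	case DEMO_PREFETCH:
		return 1;
	case DEMO_REMAP:
		/* Unmap of the old VMA plus any kept prev/next rebinds. */
		return 1 + (r->has_prev && !r->skip_prev) +
			(r->has_next && !r->skip_next);
	}
	return 0;
}
```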

Cc: Oak Zeng <oak.zeng@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
---
 drivers/gpu/drm/xe/xe_pt_types.h | 12 ++++++
 drivers/gpu/drm/xe/xe_vm.c       | 66 +++++++++++++++++++++++++++++++-
 drivers/gpu/drm/xe/xe_vm_types.h |  8 ++++
 3 files changed, 84 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_pt_types.h b/drivers/gpu/drm/xe/xe_pt_types.h
index cee70cb0f014..2093150f461e 100644
--- a/drivers/gpu/drm/xe/xe_pt_types.h
+++ b/drivers/gpu/drm/xe/xe_pt_types.h
@@ -74,4 +74,16 @@ struct xe_vm_pgtable_update {
 	u32 flags;
 };
 
+/** struct xe_vm_pgtable_update_op - Page table update operation */
+struct xe_vm_pgtable_update_op {
+	/** @entries: entries to update for this operation */
+	struct xe_vm_pgtable_update entries[XE_VM_MAX_LEVEL * 2 + 1];
+	/** @num_entries: number of entries for this update operation */
+	u32 num_entries;
+	/** @bind: is a bind */
+	bool bind;
+	/** @rebind: is a rebind */
+	bool rebind;
+};
+
 #endif
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 26b409e1b0f0..f3795a7a0f25 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -712,6 +712,42 @@ int xe_vm_userptr_check_repin(struct xe_vm *vm)
 		list_empty_careful(&vm->userptr.invalidated)) ? 0 : -EAGAIN;
 }
 
+static int xe_vma_ops_alloc(struct xe_vma_ops *vops)
+{
+	int i;
+
+	for (i = 0; i < XE_MAX_TILES_PER_DEVICE; ++i) {
+		if (!vops->pt_update_ops[i].num_ops)
+			continue;
+
+		vops->pt_update_ops[i].ops =
+			kmalloc_array(vops->pt_update_ops[i].num_ops,
+				      sizeof(*vops->pt_update_ops[i].ops),
+				      GFP_KERNEL);
+		if (!vops->pt_update_ops[i].ops)
+			return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static void xe_vma_ops_fini(struct xe_vma_ops *vops)
+{
+	int i;
+
+	for (i = 0; i < XE_MAX_TILES_PER_DEVICE; ++i)
+		kfree(vops->pt_update_ops[i].ops);
+}
+
+static void xe_vma_ops_incr_pt_update_ops(struct xe_vma_ops *vops, u8 tile_mask)
+{
+	int i;
+
+	for (i = 0; i < XE_MAX_TILES_PER_DEVICE; ++i)
+		if (BIT(i) & tile_mask)
+			++vops->pt_update_ops[i].num_ops;
+}
+
 static void xe_vm_populate_rebind(struct xe_vma_op *op, struct xe_vma *vma,
 				  u8 tile_mask)
 {
@@ -739,6 +775,7 @@ static int xe_vm_ops_add_rebind(struct xe_vma_ops *vops, struct xe_vma *vma,
 
 	xe_vm_populate_rebind(op, vma, tile_mask);
 	list_add_tail(&op->link, &vops->list);
+	xe_vma_ops_incr_pt_update_ops(vops, tile_mask);
 
 	return 0;
 }
@@ -779,6 +816,10 @@ int xe_vm_rebind(struct xe_vm *vm, bool rebind_worker)
 			goto free_ops;
 	}
 
+	err = xe_vma_ops_alloc(&vops);
+	if (err)
+		goto free_ops;
+
 	fence = ops_execute(vm, &vops);
 	if (IS_ERR(fence)) {
 		err = PTR_ERR(fence);
@@ -793,6 +834,7 @@ int xe_vm_rebind(struct xe_vm *vm, bool rebind_worker)
 		list_del(&op->link);
 		kfree(op);
 	}
+	xe_vma_ops_fini(&vops);
 
 	return err;
 }
@@ -814,12 +856,20 @@ struct dma_fence *xe_vma_rebind(struct xe_vm *vm, struct xe_vma *vma, u8 tile_ma
 	if (err)
 		return ERR_PTR(err);
 
+	err = xe_vma_ops_alloc(&vops);
+	if (err) {
+		fence = ERR_PTR(err);
+		goto free_ops;
+	}
+
 	fence = ops_execute(vm, &vops);
 
+free_ops:
 	list_for_each_entry_safe(op, next_op, &vops.list, link) {
 		list_del(&op->link);
 		kfree(op);
 	}
+	xe_vma_ops_fini(&vops);
 
 	return fence;
 }
@@ -2282,7 +2332,6 @@ static int xe_vma_op_commit(struct xe_vm *vm, struct xe_vma_op *op)
 	return err;
 }
 
-
 static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct xe_exec_queue *q,
 				   struct drm_gpuva_ops *ops,
 				   struct xe_sync_entry *syncs, u32 num_syncs,
@@ -2334,6 +2383,9 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct xe_exec_queue *q,
 				return PTR_ERR(vma);
 
 			op->map.vma = vma;
+			if (op->map.immediate || !xe_vm_in_fault_mode(vm))
+				xe_vma_ops_incr_pt_update_ops(vops,
+							      op->tile_mask);
 			break;
 		}
 		case DRM_GPUVA_OP_REMAP:
@@ -2378,6 +2430,8 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct xe_exec_queue *q,
 					vm_dbg(&xe->drm, "REMAP:SKIP_PREV: addr=0x%016llx, range=0x%016llx",
 					       (ULL)op->remap.start,
 					       (ULL)op->remap.range);
+				} else {
+					xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask);
 				}
 			}
 
@@ -2414,13 +2468,16 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct xe_exec_queue *q,
 					vm_dbg(&xe->drm, "REMAP:SKIP_NEXT: addr=0x%016llx, range=0x%016llx",
 					       (ULL)op->remap.start,
 					       (ULL)op->remap.range);
+				} else {
+					xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask);
 				}
 			}
+			xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask);
 			break;
 		}
 		case DRM_GPUVA_OP_UNMAP:
 		case DRM_GPUVA_OP_PREFETCH:
-			/* Nothing to do */
+			xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask);
 			break;
 		default:
 			drm_warn(&vm->xe->drm, "NOT POSSIBLE");
@@ -3267,11 +3324,16 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
 		goto unwind_ops;
 	}
 
+	err = xe_vma_ops_alloc(&vops);
+	if (err)
+		goto unwind_ops;
+
 	err = vm_bind_ioctl_ops_execute(vm, &vops);
 
 unwind_ops:
 	if (err && err != -ENODATA)
 		vm_bind_ioctl_ops_unwind(vm, ops, args->num_binds);
+	xe_vma_ops_fini(&vops);
 	for (i = args->num_binds - 1; i >= 0; --i)
 		if (ops[i])
 			drm_gpuva_ops_free(&vm->gpuvm, ops[i]);
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index ce1a63a5e3e7..211c88801182 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -21,6 +21,7 @@ struct xe_bo;
 struct xe_sync_entry;
 struct xe_user_fence;
 struct xe_vm;
+struct xe_vm_pgtable_update_op;
 
 #define XE_VMA_READ_ONLY	DRM_GPUVA_USERBITS
 #define XE_VMA_DESTROYED	(DRM_GPUVA_USERBITS << 1)
@@ -368,6 +369,13 @@ struct xe_vma_ops {
 	struct xe_sync_entry *syncs;
 	/** @num_syncs: number of syncs */
 	u32 num_syncs;
+	/** @pt_update_ops: page table update operations */
+	struct {
+		/** @ops: operations */
+		struct xe_vm_pgtable_update_op *ops;
+		/** @num_ops: number of operations */
+		u32 num_ops;
+	} pt_update_ops[XE_MAX_TILES_PER_DEVICE];
 };
 
 #endif
-- 
2.34.1



* [PATCH v3 3/5] drm/xe: Convert multiple bind ops into single job
  2024-05-29 18:31 [PATCH v3 0/5] Convert multiple bind ops to 1 job Matthew Brost
  2024-05-29 18:31 ` [PATCH v3 1/5] drm/xe: s/xe_tile_migrate_engine/xe_tile_migrate_exec_queue Matthew Brost
  2024-05-29 18:31 ` [PATCH v3 2/5] drm/xe: Add xe_vm_pgtable_update_op to xe_vma_ops Matthew Brost
@ 2024-05-29 18:31 ` Matthew Brost
  2024-05-30  0:32   ` Zanoni, Paulo R
  2024-05-29 18:31 ` [PATCH v3 4/5] drm/xe: Update VM trace events Matthew Brost
                   ` (9 subsequent siblings)
  12 siblings, 1 reply; 16+ messages in thread
From: Matthew Brost @ 2024-05-29 18:31 UTC (permalink / raw)
  To: intel-xe; +Cc: Matthew Brost, Oak Zeng, Thomas Hellström

This aligns with the uAPI, in which an array of binds, or a single bind
that results in multiple GPUVA ops, is considered a single atomic
operation.

The implementation is roughly:
- xe_vma_ops is a list of xe_vma_op (GPUVA op)
- each xe_vma_op resolves to 0-3 PT ops
- xe_vma_ops creates a single job
- if at any point during binding a failure occurs, xe_vma_ops contains
  the information necessary to unwind the PT and VMA (GPUVA) state
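
As a sketch of what the single-job step above implies for batch sizing,
here is a standalone model (all names are made up for illustration; the
per-chunk cost mirrors the MI_STORE_DATA_IMM accounting in
xe_migrate.c):

```c
#include <assert.h>

/* Worst-case PTEs a single MI_STORE_DATA_IMM can carry. */
#define DEMO_MAX_PTE_PER_SDI 0x1FE

struct demo_update {
	unsigned int qwords;
};

struct demo_pt_op {
	const struct demo_update *entries;
	unsigned int num_entries;
};

static unsigned int demo_div_round_up(unsigned int n, unsigned int d)
{
	return (n + d - 1) / d;
}

/*
 * All PT ops in the list are emitted into one batch buffer: each
 * MI_STORE_DATA_IMM chunk costs a 4-dword prefix (noop alignment
 * included) plus 2 dwords per qword written.
 */
static unsigned int demo_batch_size(const struct demo_pt_op *ops,
				    unsigned int num_ops)
{
	unsigned int i, j, size = 0;

	for (i = 0; i < num_ops; ++i)
		for (j = 0; j < ops[i].num_entries; ++j) {
			unsigned int q = ops[i].entries[j].qwords;

			size += 4 * demo_div_round_up(q, DEMO_MAX_PTE_PER_SDI) +
				2 * q;
		}

	return size;
}
```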

v2:
 - add missing dma-resv slot reservation (CI, testing)

Cc: Oak Zeng <oak.zeng@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/xe/xe_bo_types.h |    2 +
 drivers/gpu/drm/xe/xe_migrate.c  |  296 ++++----
 drivers/gpu/drm/xe/xe_migrate.h  |   32 +-
 drivers/gpu/drm/xe/xe_pt.c       | 1108 +++++++++++++++++++-----------
 drivers/gpu/drm/xe/xe_pt.h       |   14 +-
 drivers/gpu/drm/xe/xe_pt_types.h |   36 +
 drivers/gpu/drm/xe/xe_vm.c       |  519 +++-----------
 drivers/gpu/drm/xe/xe_vm.h       |    2 +
 drivers/gpu/drm/xe/xe_vm_types.h |   45 +-
 9 files changed, 1032 insertions(+), 1022 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_bo_types.h b/drivers/gpu/drm/xe/xe_bo_types.h
index 86422e113d39..02d68873558a 100644
--- a/drivers/gpu/drm/xe/xe_bo_types.h
+++ b/drivers/gpu/drm/xe/xe_bo_types.h
@@ -58,6 +58,8 @@ struct xe_bo {
 #endif
 	/** @freed: List node for delayed put. */
 	struct llist_node freed;
+	/** @update_index: Update index if PT BO */
+	int update_index;
 	/** @created: Whether the bo has passed initial creation */
 	bool created;
 
diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c
index 384d33feac6a..b882d32eebfe 100644
--- a/drivers/gpu/drm/xe/xe_migrate.c
+++ b/drivers/gpu/drm/xe/xe_migrate.c
@@ -1102,6 +1102,7 @@ struct dma_fence *xe_migrate_clear(struct xe_migrate *m,
 }
 
 static void write_pgtable(struct xe_tile *tile, struct xe_bb *bb, u64 ppgtt_ofs,
+			  const struct xe_vm_pgtable_update_op *pt_op,
 			  const struct xe_vm_pgtable_update *update,
 			  struct xe_migrate_pt_update *pt_update)
 {
@@ -1136,8 +1137,12 @@ static void write_pgtable(struct xe_tile *tile, struct xe_bb *bb, u64 ppgtt_ofs,
 		bb->cs[bb->len++] = MI_STORE_DATA_IMM | MI_SDI_NUM_QW(chunk);
 		bb->cs[bb->len++] = lower_32_bits(addr);
 		bb->cs[bb->len++] = upper_32_bits(addr);
-		ops->populate(pt_update, tile, NULL, bb->cs + bb->len, ofs, chunk,
-			      update);
+		if (pt_op->bind)
+			ops->populate(pt_update, tile, NULL, bb->cs + bb->len,
+				      ofs, chunk, update);
+		else
+			ops->clear(pt_update, tile, NULL, bb->cs + bb->len,
+				   ofs, chunk, update);
 
 		bb->len += chunk * 2;
 		ofs += chunk;
@@ -1162,114 +1167,58 @@ struct migrate_test_params {
 
 static struct dma_fence *
 xe_migrate_update_pgtables_cpu(struct xe_migrate *m,
-			       struct xe_vm *vm, struct xe_bo *bo,
-			       const struct  xe_vm_pgtable_update *updates,
-			       u32 num_updates, bool wait_vm,
 			       struct xe_migrate_pt_update *pt_update)
 {
 	XE_TEST_DECLARE(struct migrate_test_params *test =
 			to_migrate_test_params
 			(xe_cur_kunit_priv(XE_TEST_LIVE_MIGRATE));)
 	const struct xe_migrate_pt_update_ops *ops = pt_update->ops;
-	struct dma_fence *fence;
+	struct xe_vm *vm = pt_update->vops->vm;
+	struct xe_vm_pgtable_update_ops *pt_update_ops =
+		&pt_update->vops->pt_update_ops[pt_update->tile_id];
 	int err;
-	u32 i;
+	u32 j, i;
 
 	if (XE_TEST_ONLY(test && test->force_gpu))
 		return ERR_PTR(-ETIME);
 
-	if (bo && !dma_resv_test_signaled(bo->ttm.base.resv,
-					  DMA_RESV_USAGE_KERNEL))
-		return ERR_PTR(-ETIME);
-
-	if (wait_vm && !dma_resv_test_signaled(xe_vm_resv(vm),
-					       DMA_RESV_USAGE_BOOKKEEP))
-		return ERR_PTR(-ETIME);
-
 	if (ops->pre_commit) {
 		pt_update->job = NULL;
 		err = ops->pre_commit(pt_update);
 		if (err)
 			return ERR_PTR(err);
 	}
-	for (i = 0; i < num_updates; i++) {
-		const struct xe_vm_pgtable_update *update = &updates[i];
-
-		ops->populate(pt_update, m->tile, &update->pt_bo->vmap, NULL,
-			      update->ofs, update->qwords, update);
-	}
-
-	if (vm) {
-		trace_xe_vm_cpu_bind(vm);
-		xe_device_wmb(vm->xe);
-	}
-
-	fence = dma_fence_get_stub();
-
-	return fence;
-}
-
-static bool no_in_syncs(struct xe_vm *vm, struct xe_exec_queue *q,
-			struct xe_sync_entry *syncs, u32 num_syncs)
-{
-	struct dma_fence *fence;
-	int i;
 
-	for (i = 0; i < num_syncs; i++) {
-		fence = syncs[i].fence;
-
-		if (fence && !test_bit(DMA_FENCE_FLAG_SIGNALED_BIT,
-				       &fence->flags))
-			return false;
-	}
-	if (q) {
-		fence = xe_exec_queue_last_fence_get(q, vm);
-		if (!test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
-			dma_fence_put(fence);
-			return false;
+	for (j = 0; j < pt_update_ops->num_ops; ++j) {
+		const struct xe_vm_pgtable_update_op *pt_op =
+			&pt_update_ops->ops[j];
+
+		for (i = 0; i < pt_op->num_entries; i++) {
+			const struct xe_vm_pgtable_update *update =
+				&pt_op->entries[i];
+
+			if (pt_op->bind)
+				ops->populate(pt_update, m->tile,
+					      &update->pt_bo->vmap, NULL,
+					      update->ofs, update->qwords,
+					      update);
+			else
+				ops->clear(pt_update, m->tile,
+					   &update->pt_bo->vmap, NULL,
+					   update->ofs, update->qwords, update);
 		}
-		dma_fence_put(fence);
 	}
 
-	return true;
+	trace_xe_vm_cpu_bind(vm);
+	xe_device_wmb(vm->xe);
+
+	return dma_fence_get_stub();
 }
 
-/**
- * xe_migrate_update_pgtables() - Pipelined page-table update
- * @m: The migrate context.
- * @vm: The vm we'll be updating.
- * @bo: The bo whose dma-resv we will await before updating, or NULL if userptr.
- * @q: The exec queue to be used for the update or NULL if the default
- * migration engine is to be used.
- * @updates: An array of update descriptors.
- * @num_updates: Number of descriptors in @updates.
- * @syncs: Array of xe_sync_entry to await before updating. Note that waits
- * will block the engine timeline.
- * @num_syncs: Number of entries in @syncs.
- * @pt_update: Pointer to a struct xe_migrate_pt_update, which contains
- * pointers to callback functions and, if subclassed, private arguments to
- * those.
- *
- * Perform a pipelined page-table update. The update descriptors are typically
- * built under the same lock critical section as a call to this function. If
- * using the default engine for the updates, they will be performed in the
- * order they grab the job_mutex. If different engines are used, external
- * synchronization is needed for overlapping updates to maintain page-table
- * consistency. Note that the meaing of "overlapping" is that the updates
- * touch the same page-table, which might be a higher-level page-directory.
- * If no pipelining is needed, then updates may be performed by the cpu.
- *
- * Return: A dma_fence that, when signaled, indicates the update completion.
- */
-struct dma_fence *
-xe_migrate_update_pgtables(struct xe_migrate *m,
-			   struct xe_vm *vm,
-			   struct xe_bo *bo,
-			   struct xe_exec_queue *q,
-			   const struct xe_vm_pgtable_update *updates,
-			   u32 num_updates,
-			   struct xe_sync_entry *syncs, u32 num_syncs,
-			   struct xe_migrate_pt_update *pt_update)
+static struct dma_fence *
+__xe_migrate_update_pgtables(struct xe_migrate *m,
+			     struct xe_migrate_pt_update *pt_update,
+			     struct xe_vm_pgtable_update_ops *pt_update_ops)
 {
 	const struct xe_migrate_pt_update_ops *ops = pt_update->ops;
 	struct xe_tile *tile = m->tile;
@@ -1278,59 +1227,45 @@ xe_migrate_update_pgtables(struct xe_migrate *m,
 	struct xe_sched_job *job;
 	struct dma_fence *fence;
 	struct drm_suballoc *sa_bo = NULL;
-	struct xe_vma *vma = pt_update->vma;
 	struct xe_bb *bb;
-	u32 i, batch_size, ppgtt_ofs, update_idx, page_ofs = 0;
+	u32 i, j, batch_size = 0, ppgtt_ofs, update_idx, page_ofs = 0;
+	u32 num_updates = 0, current_update = 0;
 	u64 addr;
 	int err = 0;
-	bool usm = !q && xe->info.has_usm;
-	bool first_munmap_rebind = vma &&
-		vma->gpuva.flags & XE_VMA_FIRST_REBIND;
-	struct xe_exec_queue *q_override = !q ? m->q : q;
-	u16 pat_index = xe->pat.idx[XE_CACHE_WB];
+	bool is_migrate = pt_update_ops->q == m->q;
+	bool usm = is_migrate && xe->info.has_usm;
 
-	/* Use the CPU if no in syncs and engine is idle */
-	if (no_in_syncs(vm, q, syncs, num_syncs) && xe_exec_queue_is_idle(q_override)) {
-		fence =  xe_migrate_update_pgtables_cpu(m, vm, bo, updates,
-							num_updates,
-							first_munmap_rebind,
-							pt_update);
-		if (!IS_ERR(fence) || fence == ERR_PTR(-EAGAIN))
-			return fence;
+	for (i = 0; i < pt_update_ops->num_ops; ++i) {
+		struct xe_vm_pgtable_update_op *pt_op = &pt_update_ops->ops[i];
+		struct xe_vm_pgtable_update *updates = pt_op->entries;
+
+		num_updates += pt_op->num_entries;
+		for (j = 0; j < pt_op->num_entries; ++j) {
+			u32 num_cmds = DIV_ROUND_UP(updates[j].qwords, 0x1ff);
+
+			/* align noop + MI_STORE_DATA_IMM cmd prefix */
+			batch_size += 4 * num_cmds + updates[j].qwords * 2;
+		}
 	}
 
 	/* fixed + PTE entries */
 	if (IS_DGFX(xe))
-		batch_size = 2;
+		batch_size += 2;
 	else
-		batch_size = 6 + num_updates * 2;
+		batch_size += 6 + num_updates * 2;
 
-	for (i = 0; i < num_updates; i++) {
-		u32 num_cmds = DIV_ROUND_UP(updates[i].qwords, MAX_PTE_PER_SDI);
-
-		/* align noop + MI_STORE_DATA_IMM cmd prefix */
-		batch_size += 4 * num_cmds + updates[i].qwords * 2;
-	}
-
-	/*
-	 * XXX: Create temp bo to copy from, if batch_size becomes too big?
-	 *
-	 * Worst case: Sum(2 * (each lower level page size) + (top level page size))
-	 * Should be reasonably bound..
-	 */
-	xe_tile_assert(tile, batch_size < SZ_128K);
-
-	bb = xe_bb_new(gt, batch_size, !q && xe->info.has_usm);
+	bb = xe_bb_new(gt, batch_size, usm);
 	if (IS_ERR(bb))
 		return ERR_CAST(bb);
 
 	/* For sysmem PTE's, need to map them in our hole.. */
 	if (!IS_DGFX(xe)) {
 		ppgtt_ofs = NUM_KERNEL_PDE - 1;
-		if (q) {
-			xe_tile_assert(tile, num_updates <= NUM_VMUSA_WRITES_PER_UNIT);
+		if (!is_migrate) {
+			u32 num_units = DIV_ROUND_UP(num_updates,
+						     NUM_VMUSA_WRITES_PER_UNIT);
 
-			sa_bo = drm_suballoc_new(&m->vm_update_sa, 1,
+			sa_bo = drm_suballoc_new(&m->vm_update_sa, num_units,
 						 GFP_KERNEL, true, 0);
 			if (IS_ERR(sa_bo)) {
 				err = PTR_ERR(sa_bo);
@@ -1350,14 +1285,26 @@ xe_migrate_update_pgtables(struct xe_migrate *m,
 		bb->cs[bb->len++] = ppgtt_ofs * XE_PAGE_SIZE + page_ofs;
 		bb->cs[bb->len++] = 0; /* upper_32_bits */
 
-		for (i = 0; i < num_updates; i++) {
-			struct xe_bo *pt_bo = updates[i].pt_bo;
+		for (i = 0; i < pt_update_ops->num_ops; ++i) {
+			struct xe_vm_pgtable_update_op *pt_op =
+				&pt_update_ops->ops[i];
+			struct xe_vm_pgtable_update *updates = pt_op->entries;
 
-			xe_tile_assert(tile, pt_bo->size == SZ_4K);
+			for (j = 0; j < pt_op->num_entries; ++j, ++current_update) {
+				struct xe_vm *vm = pt_update->vops->vm;
+				struct xe_bo *pt_bo = updates[j].pt_bo;
 
-			addr = vm->pt_ops->pte_encode_bo(pt_bo, 0, pat_index, 0);
-			bb->cs[bb->len++] = lower_32_bits(addr);
-			bb->cs[bb->len++] = upper_32_bits(addr);
+				xe_tile_assert(tile, pt_bo->size == SZ_4K);
+
+				/* Map a PT at most once */
+				if (pt_bo->update_index < 0)
+					pt_bo->update_index = current_update;
+
+				addr = vm->pt_ops->pte_encode_bo(pt_bo, 0,
+								 XE_CACHE_WB, 0);
+				bb->cs[bb->len++] = lower_32_bits(addr);
+				bb->cs[bb->len++] = upper_32_bits(addr);
+			}
 		}
 
 		bb->cs[bb->len++] = MI_BATCH_BUFFER_END;
@@ -1365,19 +1312,36 @@ xe_migrate_update_pgtables(struct xe_migrate *m,
 
 		addr = xe_migrate_vm_addr(ppgtt_ofs, 0) +
 			(page_ofs / sizeof(u64)) * XE_PAGE_SIZE;
-		for (i = 0; i < num_updates; i++)
-			write_pgtable(tile, bb, addr + i * XE_PAGE_SIZE,
-				      &updates[i], pt_update);
+		for (i = 0; i < pt_update_ops->num_ops; ++i) {
+			struct xe_vm_pgtable_update_op *pt_op =
+				&pt_update_ops->ops[i];
+			struct xe_vm_pgtable_update *updates = pt_op->entries;
+
+			for (j = 0; j < pt_op->num_entries; ++j) {
+				struct xe_bo *pt_bo = updates[j].pt_bo;
+
+				write_pgtable(tile, bb, addr +
+					      pt_bo->update_index * XE_PAGE_SIZE,
+					      pt_op, &updates[j], pt_update);
+			}
+		}
 	} else {
 		/* phys pages, no preamble required */
 		bb->cs[bb->len++] = MI_BATCH_BUFFER_END;
 		update_idx = bb->len;
 
-		for (i = 0; i < num_updates; i++)
-			write_pgtable(tile, bb, 0, &updates[i], pt_update);
+		for (i = 0; i < pt_update_ops->num_ops; ++i) {
+			struct xe_vm_pgtable_update_op *pt_op =
+				&pt_update_ops->ops[i];
+			struct xe_vm_pgtable_update *updates = pt_op->entries;
+
+			for (j = 0; j < pt_op->num_entries; ++j)
+				write_pgtable(tile, bb, 0, pt_op, &updates[j],
+					      pt_update);
+		}
 	}
 
-	job = xe_bb_create_migration_job(q ?: m->q, bb,
+	job = xe_bb_create_migration_job(pt_update_ops->q ?: m->q, bb,
 					 xe_migrate_batch_base(m, usm),
 					 update_idx);
 	if (IS_ERR(job)) {
@@ -1385,46 +1349,20 @@ xe_migrate_update_pgtables(struct xe_migrate *m,
 		goto err_bb;
 	}
 
-	/* Wait on BO move */
-	if (bo) {
-		err = job_add_deps(job, bo->ttm.base.resv,
-				   DMA_RESV_USAGE_KERNEL);
-		if (err)
-			goto err_job;
-	}
-
-	/*
-	 * Munmap style VM unbind, need to wait for all jobs to be complete /
-	 * trigger preempts before moving forward
-	 */
-	if (first_munmap_rebind) {
-		err = job_add_deps(job, xe_vm_resv(vm),
-				   DMA_RESV_USAGE_BOOKKEEP);
-		if (err)
-			goto err_job;
-	}
-
-	err = xe_sched_job_last_fence_add_dep(job, vm);
-	for (i = 0; !err && i < num_syncs; i++)
-		err = xe_sync_entry_add_deps(&syncs[i], job);
-
-	if (err)
-		goto err_job;
-
 	if (ops->pre_commit) {
 		pt_update->job = job;
 		err = ops->pre_commit(pt_update);
 		if (err)
 			goto err_job;
 	}
-	if (!q)
+	if (is_migrate)
 		mutex_lock(&m->job_mutex);
 
 	xe_sched_job_arm(job);
 	fence = dma_fence_get(&job->drm.s_fence->finished);
 	xe_sched_job_push(job);
 
-	if (!q)
+	if (is_migrate)
 		mutex_unlock(&m->job_mutex);
 
 	xe_bb_free(bb, fence);
@@ -1441,6 +1379,38 @@ xe_migrate_update_pgtables(struct xe_migrate *m,
 	return ERR_PTR(err);
 }
 
+/**
+ * xe_migrate_update_pgtables() - Pipelined page-table update
+ * @m: The migrate context.
+ * @pt_update: PT update arguments
+ *
+ * Perform a pipelined page-table update. The update descriptors are typically
+ * built under the same lock critical section as a call to this function. If
+ * using the default engine for the updates, they will be performed in the
+ * order they grab the job_mutex. If different engines are used, external
+ * synchronization is needed for overlapping updates to maintain page-table
+ * consistency. Note that the meaning of "overlapping" is that the updates
+ * touch the same page-table, which might be a higher-level page-directory.
+ * If no pipelining is needed, then updates may be performed by the cpu.
+ *
+ * Return: A dma_fence that, when signaled, indicates the update completion.
+ */
+struct dma_fence *
+xe_migrate_update_pgtables(struct xe_migrate *m,
+			   struct xe_migrate_pt_update *pt_update)
+
+{
+	struct xe_vm_pgtable_update_ops *pt_update_ops =
+		&pt_update->vops->pt_update_ops[pt_update->tile_id];
+	struct dma_fence *fence;
+
+	fence = xe_migrate_update_pgtables_cpu(m, pt_update);
+	if (!IS_ERR(fence))
+		return fence;
+
+	return __xe_migrate_update_pgtables(m, pt_update, pt_update_ops);
+}
+
 /**
  * xe_migrate_wait() - Complete all operations using the xe_migrate context
  * @m: Migrate context to wait for.
diff --git a/drivers/gpu/drm/xe/xe_migrate.h b/drivers/gpu/drm/xe/xe_migrate.h
index a5bcaafe4a99..453e0ecf5034 100644
--- a/drivers/gpu/drm/xe/xe_migrate.h
+++ b/drivers/gpu/drm/xe/xe_migrate.h
@@ -47,6 +47,24 @@ struct xe_migrate_pt_update_ops {
 			 struct xe_tile *tile, struct iosys_map *map,
 			 void *pos, u32 ofs, u32 num_qwords,
 			 const struct xe_vm_pgtable_update *update);
+	/**
+	 * @clear: Clear a command buffer or page-table with ptes.
+	 * @pt_update: Embeddable callback argument.
+	 * @tile: The tile for the current operation.
+	 * @map: struct iosys_map into the memory to be populated.
+	 * @pos: If @map is NULL, map into the memory to be populated.
+	 * @ofs: qword offset into @map, unused if @map is NULL.
+	 * @num_qwords: Number of qwords to write.
+	 * @update: Information about the PTEs to be inserted.
+	 *
+	 * This interface is intended to be used as a callback into the
+	 * page-table system to populate command buffers or shared
+	 * page-tables with PTEs.
+	 */
+	void (*clear)(struct xe_migrate_pt_update *pt_update,
+		      struct xe_tile *tile, struct iosys_map *map,
+		      void *pos, u32 ofs, u32 num_qwords,
+		      const struct xe_vm_pgtable_update *update);
 
 	/**
 	 * @pre_commit: Callback to be called just before arming the
@@ -67,14 +85,10 @@ struct xe_migrate_pt_update_ops {
 struct xe_migrate_pt_update {
 	/** @ops: Pointer to the struct xe_migrate_pt_update_ops callbacks */
 	const struct xe_migrate_pt_update_ops *ops;
-	/** @vma: The vma we're updating the pagetable for. */
-	struct xe_vma *vma;
+	/** @vops: VMA operations */
+	struct xe_vma_ops *vops;
 	/** @job: The job if a GPU page-table update. NULL otherwise */
 	struct xe_sched_job *job;
-	/** @start: Start of update for the range fence */
-	u64 start;
-	/** @last: Last of update for the range fence */
-	u64 last;
 	/** @tile_id: Tile ID of the update */
 	u8 tile_id;
 };
@@ -96,12 +110,6 @@ struct xe_vm *xe_migrate_get_vm(struct xe_migrate *m);
 
 struct dma_fence *
 xe_migrate_update_pgtables(struct xe_migrate *m,
-			   struct xe_vm *vm,
-			   struct xe_bo *bo,
-			   struct xe_exec_queue *q,
-			   const struct xe_vm_pgtable_update *updates,
-			   u32 num_updates,
-			   struct xe_sync_entry *syncs, u32 num_syncs,
 			   struct xe_migrate_pt_update *pt_update);
 
 void xe_migrate_wait(struct xe_migrate *m);
diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
index cd60c009b679..4353a3bdf6c8 100644
--- a/drivers/gpu/drm/xe/xe_pt.c
+++ b/drivers/gpu/drm/xe/xe_pt.c
@@ -9,12 +9,14 @@
 #include "xe_bo.h"
 #include "xe_device.h"
 #include "xe_drm_client.h"
+#include "xe_exec_queue.h"
 #include "xe_gt.h"
 #include "xe_gt_tlb_invalidation.h"
 #include "xe_migrate.h"
 #include "xe_pt_types.h"
 #include "xe_pt_walk.h"
 #include "xe_res_cursor.h"
+#include "xe_sync.h"
 #include "xe_trace.h"
 #include "xe_ttm_stolen_mgr.h"
 #include "xe_vm.h"
@@ -325,6 +327,7 @@ xe_pt_new_shared(struct xe_walk_update *wupd, struct xe_pt *parent,
 	entry->pt = parent;
 	entry->flags = 0;
 	entry->qwords = 0;
+	entry->pt_bo->update_index = -1;
 
 	if (alloc_entries) {
 		entry->pt_entries = kmalloc_array(XE_PDES,
@@ -864,9 +867,7 @@ static void xe_pt_commit_locks_assert(struct xe_vma *vma)
 
 	lockdep_assert_held(&vm->lock);
 
-	if (xe_vma_is_userptr(vma))
-		lockdep_assert_held_read(&vm->userptr.notifier_lock);
-	else if (!xe_vma_is_null(vma))
+	if (!xe_vma_is_userptr(vma) && !xe_vma_is_null(vma))
 		dma_resv_assert_held(xe_vma_bo(vma)->ttm.base.resv);
 
 	xe_vm_assert_held(vm);
@@ -888,10 +889,8 @@ static void xe_pt_commit_bind(struct xe_vma *vma,
 		if (!rebind)
 			pt->num_live += entries[i].qwords;
 
-		if (!pt->level) {
-			kfree(entries[i].pt_entries);
+		if (!pt->level)
 			continue;
-		}
 
 		pt_dir = as_xe_pt_dir(pt);
 		for (j = 0; j < entries[i].qwords; j++) {
@@ -904,10 +903,18 @@ static void xe_pt_commit_bind(struct xe_vma *vma,
 
 			pt_dir->children[j_] = &newpte->base;
 		}
-		kfree(entries[i].pt_entries);
 	}
 }
 
+static void xe_pt_free_bind(struct xe_vm_pgtable_update *entries,
+			    u32 num_entries)
+{
+	u32 i;
+
+	for (i = 0; i < num_entries; i++)
+		kfree(entries[i].pt_entries);
+}
+
 static int
 xe_pt_prepare_bind(struct xe_tile *tile, struct xe_vma *vma,
 		   struct xe_vm_pgtable_update *entries, u32 *num_entries)
@@ -926,12 +933,13 @@ xe_pt_prepare_bind(struct xe_tile *tile, struct xe_vma *vma,
 
 static void xe_vm_dbg_print_entries(struct xe_device *xe,
 				    const struct xe_vm_pgtable_update *entries,
-				    unsigned int num_entries)
+				    unsigned int num_entries, bool bind)
 #if (IS_ENABLED(CONFIG_DRM_XE_DEBUG_VM))
 {
 	unsigned int i;
 
-	vm_dbg(&xe->drm, "%u entries to update\n", num_entries);
+	vm_dbg(&xe->drm, "%s: %u entries to update\n", bind ? "bind" : "unbind",
+	       num_entries);
 	for (i = 0; i < num_entries; i++) {
 		const struct xe_vm_pgtable_update *entry = &entries[i];
 		struct xe_pt *xe_pt = entry->pt;
@@ -952,66 +960,122 @@ static void xe_vm_dbg_print_entries(struct xe_device *xe,
 {}
 #endif
 
-#ifdef CONFIG_DRM_XE_USERPTR_INVAL_INJECT
+static int job_add_deps(struct xe_sched_job *job, struct dma_resv *resv,
+			enum dma_resv_usage usage)
+{
+	return drm_sched_job_add_resv_dependencies(&job->drm, resv, usage);
+}
 
-static int xe_pt_userptr_inject_eagain(struct xe_userptr_vma *uvma)
+static bool no_in_syncs(struct xe_sync_entry *syncs, u32 num_syncs)
 {
-	u32 divisor = uvma->userptr.divisor ? uvma->userptr.divisor : 2;
-	static u32 count;
+	int i;
 
-	if (count++ % divisor == divisor - 1) {
-		struct xe_vm *vm = xe_vma_vm(&uvma->vma);
+	for (i = 0; i < num_syncs; i++) {
+		struct dma_fence *fence = syncs[i].fence;
 
-		uvma->userptr.divisor = divisor << 1;
-		spin_lock(&vm->userptr.invalidated_lock);
-		list_move_tail(&uvma->userptr.invalidate_link,
-			       &vm->userptr.invalidated);
-		spin_unlock(&vm->userptr.invalidated_lock);
-		return true;
+		if (fence && !test_bit(DMA_FENCE_FLAG_SIGNALED_BIT,
+				       &fence->flags))
+			return false;
 	}
 
-	return false;
+	return true;
 }
 
-#else
-
-static bool xe_pt_userptr_inject_eagain(struct xe_userptr_vma *uvma)
+static int vma_add_deps(struct xe_vma *vma, struct xe_sched_job *job)
 {
-	return false;
+	struct xe_bo *bo = xe_vma_bo(vma);
+
+	xe_bo_assert_held(bo);
+
+	if (bo && !bo->vm) {
+		if (!job) {
+			if (!dma_resv_test_signaled(bo->ttm.base.resv,
+						    DMA_RESV_USAGE_KERNEL))
+				return -ETIME;
+		} else {
+			return job_add_deps(job, bo->ttm.base.resv,
+					    DMA_RESV_USAGE_KERNEL);
+		}
+	}
+
+	return 0;
 }
 
-#endif
+static int op_add_deps(struct xe_vm *vm, struct xe_vma_op *op,
+		       struct xe_sched_job *job)
+{
+	int err = 0;
 
-/**
- * struct xe_pt_migrate_pt_update - Callback argument for pre-commit callbacks
- * @base: Base we derive from.
- * @bind: Whether this is a bind or an unbind operation. A bind operation
- *        makes the pre-commit callback error with -EAGAIN if it detects a
- *        pending invalidation.
- * @locked: Whether the pre-commit callback locked the userptr notifier lock
- *          and it needs unlocking.
- */
-struct xe_pt_migrate_pt_update {
-	struct xe_migrate_pt_update base;
-	bool bind;
-	bool locked;
-};
+	switch (op->base.op) {
+	case DRM_GPUVA_OP_MAP:
+		if (!op->map.immediate && xe_vm_in_fault_mode(vm))
+			break;
+
+		err = vma_add_deps(op->map.vma, job);
+		break;
+	case DRM_GPUVA_OP_REMAP:
+		if (op->remap.prev)
+			err = vma_add_deps(op->remap.prev, job);
+		if (!err && op->remap.next)
+			err = vma_add_deps(op->remap.next, job);
+		break;
+	case DRM_GPUVA_OP_UNMAP:
+		break;
+	case DRM_GPUVA_OP_PREFETCH:
+		err = vma_add_deps(gpuva_to_vma(op->base.prefetch.va), job);
+		break;
+	default:
+		drm_warn(&vm->xe->drm, "NOT POSSIBLE");
+	}
+
+	return err;
+}
 
-/*
- * This function adds the needed dependencies to a page-table update job
- * to make sure racing jobs for separate bind engines don't race writing
- * to the same page-table range, wreaking havoc. Initially use a single
- * fence for the entire VM. An optimization would use smaller granularity.
- */
 static int xe_pt_vm_dependencies(struct xe_sched_job *job,
-				 struct xe_range_fence_tree *rftree,
-				 u64 start, u64 last)
+				 struct xe_vm *vm,
+				 struct xe_vma_ops *vops,
+				 struct xe_vm_pgtable_update_ops *pt_update_ops,
+				 struct xe_range_fence_tree *rftree)
 {
 	struct xe_range_fence *rtfence;
 	struct dma_fence *fence;
-	int err;
+	struct xe_vma_op *op;
+	int err = 0, i;
 
-	rtfence = xe_range_fence_tree_first(rftree, start, last);
+	xe_vm_assert_held(vm);
+
+	if (!job && !no_in_syncs(vops->syncs, vops->num_syncs))
+		return -ETIME;
+
+	if (!job && !xe_exec_queue_is_idle(pt_update_ops->q))
+		return -ETIME;
+
+	if (pt_update_ops->wait_vm_bookkeep) {
+		if (!job) {
+			if (!dma_resv_test_signaled(xe_vm_resv(vm),
+						    DMA_RESV_USAGE_BOOKKEEP))
+				return -ETIME;
+		} else {
+			err = job_add_deps(job, xe_vm_resv(vm),
+					   DMA_RESV_USAGE_BOOKKEEP);
+			if (err)
+				return err;
+		}
+	} else if (pt_update_ops->wait_vm_kernel) {
+		if (!job) {
+			if (!dma_resv_test_signaled(xe_vm_resv(vm),
+						    DMA_RESV_USAGE_KERNEL))
+				return -ETIME;
+		} else {
+			err = job_add_deps(job, xe_vm_resv(vm),
+					   DMA_RESV_USAGE_KERNEL);
+			if (err)
+				return err;
+		}
+	}
+
+	rtfence = xe_range_fence_tree_first(rftree, pt_update_ops->start,
+					    pt_update_ops->last);
 	while (rtfence) {
 		fence = rtfence->fence;
 
@@ -1029,80 +1093,168 @@ static int xe_pt_vm_dependencies(struct xe_sched_job *job,
 				return err;
 		}
 
-		rtfence = xe_range_fence_tree_next(rtfence, start, last);
+		rtfence = xe_range_fence_tree_next(rtfence,
+						   pt_update_ops->start,
+						   pt_update_ops->last);
 	}
 
-	return 0;
+	list_for_each_entry(op, &vops->list, link) {
+		err = op_add_deps(vm, op, job);
+		if (err)
+			return err;
+	}
+
+	for (i = 0; job && !err && i < vops->num_syncs; i++)
+		err = xe_sync_entry_add_deps(&vops->syncs[i], job);
+
+	return err;
 }
 
 static int xe_pt_pre_commit(struct xe_migrate_pt_update *pt_update)
 {
-	struct xe_range_fence_tree *rftree =
-		&xe_vma_vm(pt_update->vma)->rftree[pt_update->tile_id];
+	struct xe_vma_ops *vops = pt_update->vops;
+	struct xe_vm *vm = vops->vm;
+	struct xe_range_fence_tree *rftree = &vm->rftree[pt_update->tile_id];
+	struct xe_vm_pgtable_update_ops *pt_update_ops =
+		&vops->pt_update_ops[pt_update->tile_id];
+
+	return xe_pt_vm_dependencies(pt_update->job, vm, pt_update->vops,
+				     pt_update_ops, rftree);
+}
+
+#ifdef CONFIG_DRM_XE_USERPTR_INVAL_INJECT
+
+static bool xe_pt_userptr_inject_eagain(struct xe_userptr_vma *uvma)
+{
+	u32 divisor = uvma->userptr.divisor ? uvma->userptr.divisor : 2;
+	static u32 count;
 
-	return xe_pt_vm_dependencies(pt_update->job, rftree,
-				     pt_update->start, pt_update->last);
+	if (count++ % divisor == divisor - 1) {
+		uvma->userptr.divisor = divisor << 1;
+		return true;
+	}
+
+	return false;
 }
 
-static int xe_pt_userptr_pre_commit(struct xe_migrate_pt_update *pt_update)
+#else
+
+static bool xe_pt_userptr_inject_eagain(struct xe_userptr_vma *uvma)
 {
-	struct xe_pt_migrate_pt_update *userptr_update =
-		container_of(pt_update, typeof(*userptr_update), base);
-	struct xe_userptr_vma *uvma = to_userptr_vma(pt_update->vma);
-	unsigned long notifier_seq = uvma->userptr.notifier_seq;
-	struct xe_vm *vm = xe_vma_vm(&uvma->vma);
-	int err = xe_pt_vm_dependencies(pt_update->job,
-					&vm->rftree[pt_update->tile_id],
-					pt_update->start,
-					pt_update->last);
+	return false;
+}
 
-	if (err)
-		return err;
+#endif
 
-	userptr_update->locked = false;
+static int vma_check_userptr(struct xe_vm *vm, struct xe_vma *vma,
+			     struct xe_vm_pgtable_update_ops *pt_update)
+{
+	struct xe_userptr_vma *uvma;
+	unsigned long notifier_seq;
 
-	/*
-	 * Wait until nobody is running the invalidation notifier, and
-	 * since we're exiting the loop holding the notifier lock,
-	 * nobody can proceed invalidating either.
-	 *
-	 * Note that we don't update the vma->userptr.notifier_seq since
-	 * we don't update the userptr pages.
-	 */
-	do {
-		down_read(&vm->userptr.notifier_lock);
-		if (!mmu_interval_read_retry(&uvma->userptr.notifier,
-					     notifier_seq))
-			break;
+	lockdep_assert_held_read(&vm->userptr.notifier_lock);
 
-		up_read(&vm->userptr.notifier_lock);
+	if (!xe_vma_is_userptr(vma))
+		return 0;
 
-		if (userptr_update->bind)
-			return -EAGAIN;
+	uvma = to_userptr_vma(vma);
+	notifier_seq = uvma->userptr.notifier_seq;
 
-		notifier_seq = mmu_interval_read_begin(&uvma->userptr.notifier);
-	} while (true);
+	if (uvma->userptr.initial_bind && !xe_vm_in_fault_mode(vm))
+		return 0;
 
-	/* Inject errors to test_whether they are handled correctly */
-	if (userptr_update->bind && xe_pt_userptr_inject_eagain(uvma)) {
-		up_read(&vm->userptr.notifier_lock);
+	if (!mmu_interval_read_retry(&uvma->userptr.notifier,
+				     notifier_seq) &&
+	    !xe_pt_userptr_inject_eagain(uvma))
+		return 0;
+
+	if (xe_vm_in_fault_mode(vm)) {
 		return -EAGAIN;
-	}
+	} else {
+		spin_lock(&vm->userptr.invalidated_lock);
+		list_move_tail(&uvma->userptr.invalidate_link,
+			       &vm->userptr.invalidated);
+		spin_unlock(&vm->userptr.invalidated_lock);
 
-	userptr_update->locked = true;
+		if (xe_vm_in_preempt_fence_mode(vm)) {
+			struct dma_resv_iter cursor;
+			struct dma_fence *fence;
+			long err;
+
+			dma_resv_iter_begin(&cursor, xe_vm_resv(vm),
+					    DMA_RESV_USAGE_BOOKKEEP);
+			dma_resv_for_each_fence_unlocked(&cursor, fence)
+				dma_fence_enable_sw_signaling(fence);
+			dma_resv_iter_end(&cursor);
+
+			err = dma_resv_wait_timeout(xe_vm_resv(vm),
+						    DMA_RESV_USAGE_BOOKKEEP,
+						    false, MAX_SCHEDULE_TIMEOUT);
+			XE_WARN_ON(err <= 0);
+		}
+	}
 
 	return 0;
 }
 
-static const struct xe_migrate_pt_update_ops bind_ops = {
-	.populate = xe_vm_populate_pgtable,
-	.pre_commit = xe_pt_pre_commit,
-};
+static int op_check_userptr(struct xe_vm *vm, struct xe_vma_op *op,
+			    struct xe_vm_pgtable_update_ops *pt_update)
+{
+	int err = 0;
 
-static const struct xe_migrate_pt_update_ops userptr_bind_ops = {
-	.populate = xe_vm_populate_pgtable,
-	.pre_commit = xe_pt_userptr_pre_commit,
-};
+	lockdep_assert_held_read(&vm->userptr.notifier_lock);
+
+	switch (op->base.op) {
+	case DRM_GPUVA_OP_MAP:
+		if (!op->map.immediate && xe_vm_in_fault_mode(vm))
+			break;
+
+		err = vma_check_userptr(vm, op->map.vma, pt_update);
+		break;
+	case DRM_GPUVA_OP_REMAP:
+		if (op->remap.prev)
+			err = vma_check_userptr(vm, op->remap.prev, pt_update);
+		if (!err && op->remap.next)
+			err = vma_check_userptr(vm, op->remap.next, pt_update);
+		break;
+	case DRM_GPUVA_OP_UNMAP:
+		break;
+	case DRM_GPUVA_OP_PREFETCH:
+		err = vma_check_userptr(vm, gpuva_to_vma(op->base.prefetch.va),
+					pt_update);
+		break;
+	default:
+		drm_warn(&vm->xe->drm, "NOT POSSIBLE");
+	}
+
+	return err;
+}
+
+static int xe_pt_userptr_pre_commit(struct xe_migrate_pt_update *pt_update)
+{
+	struct xe_vm *vm = pt_update->vops->vm;
+	struct xe_vma_ops *vops = pt_update->vops;
+	struct xe_vm_pgtable_update_ops *pt_update_ops =
+		&vops->pt_update_ops[pt_update->tile_id];
+	struct xe_vma_op *op;
+	int err;
+
+	err = xe_pt_pre_commit(pt_update);
+	if (err)
+		return err;
+
+	down_read(&vm->userptr.notifier_lock);
+
+	list_for_each_entry(op, &vops->list, link) {
+		err = op_check_userptr(vm, op, pt_update_ops);
+		if (err) {
+			up_read(&vm->userptr.notifier_lock);
+			break;
+		}
+	}
+
+	return err;
+}
 
 struct invalidation_fence {
 	struct xe_gt_tlb_invalidation_fence base;
@@ -1198,190 +1350,6 @@ static int invalidation_fence_init(struct xe_gt *gt,
 	return ret && ret != -ENOENT ? ret : 0;
 }
 
-static void xe_pt_calc_rfence_interval(struct xe_vma *vma,
-				       struct xe_pt_migrate_pt_update *update,
-				       struct xe_vm_pgtable_update *entries,
-				       u32 num_entries)
-{
-	int i, level = 0;
-
-	for (i = 0; i < num_entries; i++) {
-		const struct xe_vm_pgtable_update *entry = &entries[i];
-
-		if (entry->pt->level > level)
-			level = entry->pt->level;
-	}
-
-	/* Greedy (non-optimal) calculation but simple */
-	update->base.start = ALIGN_DOWN(xe_vma_start(vma),
-					0x1ull << xe_pt_shift(level));
-	update->base.last = ALIGN(xe_vma_end(vma),
-				  0x1ull << xe_pt_shift(level)) - 1;
-}
-
-/**
- * __xe_pt_bind_vma() - Build and connect a page-table tree for the vma
- * address range.
- * @tile: The tile to bind for.
- * @vma: The vma to bind.
- * @q: The exec_queue with which to do pipelined page-table updates.
- * @syncs: Entries to sync on before binding the built tree to the live vm tree.
- * @num_syncs: Number of @sync entries.
- * @rebind: Whether we're rebinding this vma to the same address range without
- * an unbind in-between.
- *
- * This function builds a page-table tree (see xe_pt_stage_bind() for more
- * information on page-table building), and the xe_vm_pgtable_update entries
- * abstracting the operations needed to attach it to the main vm tree. It
- * then takes the relevant locks and updates the metadata side of the main
- * vm tree and submits the operations for pipelined attachment of the
- * gpu page-table to the vm main tree, (which can be done either by the
- * cpu and the GPU).
- *
- * Return: A valid dma-fence representing the pipelined attachment operation
- * on success, an error pointer on error.
- */
-struct dma_fence *
-__xe_pt_bind_vma(struct xe_tile *tile, struct xe_vma *vma, struct xe_exec_queue *q,
-		 struct xe_sync_entry *syncs, u32 num_syncs,
-		 bool rebind)
-{
-	struct xe_vm_pgtable_update entries[XE_VM_MAX_LEVEL * 2 + 1];
-	struct xe_pt_migrate_pt_update bind_pt_update = {
-		.base = {
-			.ops = xe_vma_is_userptr(vma) ? &userptr_bind_ops : &bind_ops,
-			.vma = vma,
-			.tile_id = tile->id,
-		},
-		.bind = true,
-	};
-	struct xe_vm *vm = xe_vma_vm(vma);
-	u32 num_entries;
-	struct dma_fence *fence;
-	struct invalidation_fence *ifence = NULL;
-	struct xe_range_fence *rfence;
-	int err;
-
-	bind_pt_update.locked = false;
-	xe_bo_assert_held(xe_vma_bo(vma));
-	xe_vm_assert_held(vm);
-
-	vm_dbg(&xe_vma_vm(vma)->xe->drm,
-	       "Preparing bind, with range [%llx...%llx) engine %p.\n",
-	       xe_vma_start(vma), xe_vma_end(vma), q);
-
-	err = xe_pt_prepare_bind(tile, vma, entries, &num_entries);
-	if (err)
-		goto err;
-
-	err = dma_resv_reserve_fences(xe_vm_resv(vm), 1);
-	if (!err && !xe_vma_has_no_bo(vma) && !xe_vma_bo(vma)->vm)
-		err = dma_resv_reserve_fences(xe_vma_bo(vma)->ttm.base.resv, 1);
-	if (err)
-		goto err;
-
-	xe_tile_assert(tile, num_entries <= ARRAY_SIZE(entries));
-
-	xe_vm_dbg_print_entries(tile_to_xe(tile), entries, num_entries);
-	xe_pt_calc_rfence_interval(vma, &bind_pt_update, entries,
-				   num_entries);
-
-	/*
-	 * If rebind, we have to invalidate TLB on !LR vms to invalidate
-	 * cached PTEs point to freed memory. on LR vms this is done
-	 * automatically when the context is re-enabled by the rebind worker,
-	 * or in fault mode it was invalidated on PTE zapping.
-	 *
-	 * If !rebind, and scratch enabled VMs, there is a chance the scratch
-	 * PTE is already cached in the TLB so it needs to be invalidated.
-	 * on !LR VMs this is done in the ring ops preceding a batch, but on
-	 * non-faulting LR, in particular on user-space batch buffer chaining,
-	 * it needs to be done here.
-	 */
-	if ((!rebind && xe_vm_has_scratch(vm) && xe_vm_in_preempt_fence_mode(vm))) {
-		ifence = kzalloc(sizeof(*ifence), GFP_KERNEL);
-		if (!ifence)
-			return ERR_PTR(-ENOMEM);
-	} else if (rebind && !xe_vm_in_lr_mode(vm)) {
-		/* We bump also if batch_invalidate_tlb is true */
-		vm->tlb_flush_seqno++;
-	}
-
-	rfence = kzalloc(sizeof(*rfence), GFP_KERNEL);
-	if (!rfence) {
-		kfree(ifence);
-		return ERR_PTR(-ENOMEM);
-	}
-
-	fence = xe_migrate_update_pgtables(tile->migrate,
-					   vm, xe_vma_bo(vma), q,
-					   entries, num_entries,
-					   syncs, num_syncs,
-					   &bind_pt_update.base);
-	if (!IS_ERR(fence)) {
-		bool last_munmap_rebind = vma->gpuva.flags & XE_VMA_LAST_REBIND;
-		LLIST_HEAD(deferred);
-		int err;
-
-		err = xe_range_fence_insert(&vm->rftree[tile->id], rfence,
-					    &xe_range_fence_kfree_ops,
-					    bind_pt_update.base.start,
-					    bind_pt_update.base.last, fence);
-		if (err)
-			dma_fence_wait(fence, false);
-
-		/* TLB invalidation must be done before signaling rebind */
-		if (ifence) {
-			int err = invalidation_fence_init(tile->primary_gt,
-							  ifence, fence,
-							  xe_vma_start(vma),
-							  xe_vma_end(vma),
-							  xe_vma_vm(vma)->usm.asid);
-			if (err) {
-				dma_fence_put(fence);
-				kfree(ifence);
-				return ERR_PTR(err);
-			}
-			fence = &ifence->base.base;
-		}
-
-		/* add shared fence now for pagetable delayed destroy */
-		dma_resv_add_fence(xe_vm_resv(vm), fence, rebind ||
-				   last_munmap_rebind ?
-				   DMA_RESV_USAGE_KERNEL :
-				   DMA_RESV_USAGE_BOOKKEEP);
-
-		if (!xe_vma_has_no_bo(vma) && !xe_vma_bo(vma)->vm)
-			dma_resv_add_fence(xe_vma_bo(vma)->ttm.base.resv, fence,
-					   DMA_RESV_USAGE_BOOKKEEP);
-		xe_pt_commit_bind(vma, entries, num_entries, rebind,
-				  bind_pt_update.locked ? &deferred : NULL);
-
-		/* This vma is live (again?) now */
-		vma->tile_present |= BIT(tile->id);
-
-		if (bind_pt_update.locked) {
-			to_userptr_vma(vma)->userptr.initial_bind = true;
-			up_read(&vm->userptr.notifier_lock);
-			xe_bo_put_commit(&deferred);
-		}
-		if (!rebind && last_munmap_rebind &&
-		    xe_vm_in_preempt_fence_mode(vm))
-			xe_vm_queue_rebind_worker(vm);
-	} else {
-		kfree(rfence);
-		kfree(ifence);
-		if (bind_pt_update.locked)
-			up_read(&vm->userptr.notifier_lock);
-		xe_pt_abort_bind(vma, entries, num_entries);
-	}
-
-	return fence;
-
-err:
-	return ERR_PTR(err);
-}
-
 struct xe_pt_stage_unbind_walk {
 	/** @base: The pagewalk base-class. */
 	struct xe_pt_walk base;
@@ -1532,8 +1500,8 @@ xe_migrate_clear_pgtable_callback(struct xe_migrate_pt_update *pt_update,
 				  void *ptr, u32 qword_ofs, u32 num_qwords,
 				  const struct xe_vm_pgtable_update *update)
 {
-	struct xe_vma *vma = pt_update->vma;
-	u64 empty = __xe_pt_empty_pte(tile, xe_vma_vm(vma), update->pt->level);
+	struct xe_vm *vm = pt_update->vops->vm;
+	u64 empty = __xe_pt_empty_pte(tile, vm, update->pt->level);
 	int i;
 
 	if (map && map->is_iomem)
@@ -1577,151 +1545,487 @@ xe_pt_commit_unbind(struct xe_vma *vma,
 	}
 }
 
-static const struct xe_migrate_pt_update_ops unbind_ops = {
-	.populate = xe_migrate_clear_pgtable_callback,
-	.pre_commit = xe_pt_pre_commit,
-};
+static void
+xe_pt_update_ops_rfence_interval(struct xe_vm_pgtable_update_ops *pt_update_ops,
+				 struct xe_vma *vma)
+{
+	u32 current_op = pt_update_ops->current_op;
+	struct xe_vm_pgtable_update_op *pt_op = &pt_update_ops->ops[current_op];
+	int i, level = 0;
+	u64 start, last;
 
-static const struct xe_migrate_pt_update_ops userptr_unbind_ops = {
-	.populate = xe_migrate_clear_pgtable_callback,
-	.pre_commit = xe_pt_userptr_pre_commit,
-};
+	for (i = 0; i < pt_op->num_entries; i++) {
+		const struct xe_vm_pgtable_update *entry = &pt_op->entries[i];
 
-/**
- * __xe_pt_unbind_vma() - Disconnect and free a page-table tree for the vma
- * address range.
- * @tile: The tile to unbind for.
- * @vma: The vma to unbind.
- * @q: The exec_queue with which to do pipelined page-table updates.
- * @syncs: Entries to sync on before disconnecting the tree to be destroyed.
- * @num_syncs: Number of @sync entries.
- *
- * This function builds a the xe_vm_pgtable_update entries abstracting the
- * operations needed to detach the page-table tree to be destroyed from the
- * man vm tree.
- * It then takes the relevant locks and submits the operations for
- * pipelined detachment of the gpu page-table from  the vm main tree,
- * (which can be done either by the cpu and the GPU), Finally it frees the
- * detached page-table tree.
- *
- * Return: A valid dma-fence representing the pipelined detachment operation
- * on success, an error pointer on error.
- */
-struct dma_fence *
-__xe_pt_unbind_vma(struct xe_tile *tile, struct xe_vma *vma, struct xe_exec_queue *q,
-		   struct xe_sync_entry *syncs, u32 num_syncs)
+		if (entry->pt->level > level)
+			level = entry->pt->level;
+	}
+
+	/* Greedy (non-optimal) calculation but simple */
+	start = ALIGN_DOWN(xe_vma_start(vma), 0x1ull << xe_pt_shift(level));
+	last = ALIGN(xe_vma_end(vma), 0x1ull << xe_pt_shift(level)) - 1;
+
+	if (start < pt_update_ops->start)
+		pt_update_ops->start = start;
+	if (last > pt_update_ops->last)
+		pt_update_ops->last = last;
+}
+
+static int vma_reserve_fences(struct xe_device *xe, struct xe_vma *vma)
 {
-	struct xe_vm_pgtable_update entries[XE_VM_MAX_LEVEL * 2 + 1];
-	struct xe_pt_migrate_pt_update unbind_pt_update = {
-		.base = {
-			.ops = xe_vma_is_userptr(vma) ? &userptr_unbind_ops :
-			&unbind_ops,
-			.vma = vma,
-			.tile_id = tile->id,
-		},
-	};
-	struct xe_vm *vm = xe_vma_vm(vma);
-	u32 num_entries;
-	struct dma_fence *fence = NULL;
-	struct invalidation_fence *ifence;
-	struct xe_range_fence *rfence;
-	int err;
+	if (!xe_vma_has_no_bo(vma) && !xe_vma_bo(vma)->vm)
+		return dma_resv_reserve_fences(xe_vma_bo(vma)->ttm.base.resv,
+					       xe->info.tile_count);
 
-	LLIST_HEAD(deferred);
+	return 0;
+}
+
+static int bind_op_prepare(struct xe_vm *vm, struct xe_tile *tile,
+			   struct xe_vm_pgtable_update_ops *pt_update_ops,
+			   struct xe_vma *vma)
+{
+	u32 current_op = pt_update_ops->current_op;
+	struct xe_vm_pgtable_update_op *pt_op = &pt_update_ops->ops[current_op];
+	struct llist_head *deferred = &pt_update_ops->deferred;
+	int err;
 
 	xe_bo_assert_held(xe_vma_bo(vma));
-	xe_vm_assert_held(vm);
 
 	vm_dbg(&xe_vma_vm(vma)->xe->drm,
-	       "Preparing unbind, with range [%llx...%llx) engine %p.\n",
-	       xe_vma_start(vma), xe_vma_end(vma), q);
-
-	num_entries = xe_pt_stage_unbind(tile, vma, entries);
-	xe_tile_assert(tile, num_entries <= ARRAY_SIZE(entries));
+	       "Preparing bind, with range [%llx...%llx)\n",
+	       xe_vma_start(vma), xe_vma_end(vma) - 1);
 
-	xe_vm_dbg_print_entries(tile_to_xe(tile), entries, num_entries);
-	xe_pt_calc_rfence_interval(vma, &unbind_pt_update, entries,
-				   num_entries);
+	pt_op->vma = NULL;
+	pt_op->bind = true;
+	pt_op->rebind = BIT(tile->id) & vma->tile_present;
 
-	err = dma_resv_reserve_fences(xe_vm_resv(vm), 1);
-	if (!err && !xe_vma_has_no_bo(vma) && !xe_vma_bo(vma)->vm)
-		err = dma_resv_reserve_fences(xe_vma_bo(vma)->ttm.base.resv, 1);
+	err = vma_reserve_fences(tile_to_xe(tile), vma);
 	if (err)
-		return ERR_PTR(err);
+		return err;
 
-	ifence = kzalloc(sizeof(*ifence), GFP_KERNEL);
-	if (!ifence)
-		return ERR_PTR(-ENOMEM);
+	err = xe_pt_prepare_bind(tile, vma, pt_op->entries,
+				 &pt_op->num_entries);
+	if (!err) {
+		xe_tile_assert(tile, pt_op->num_entries <=
+			       ARRAY_SIZE(pt_op->entries));
+		xe_vm_dbg_print_entries(tile_to_xe(tile), pt_op->entries,
+					pt_op->num_entries, true);
 
-	rfence = kzalloc(sizeof(*rfence), GFP_KERNEL);
-	if (!rfence) {
-		kfree(ifence);
-		return ERR_PTR(-ENOMEM);
+		xe_pt_update_ops_rfence_interval(pt_update_ops, vma);
+		++pt_update_ops->current_op;
+		pt_update_ops->needs_userptr_lock |= xe_vma_is_userptr(vma);
+
+		/*
+		 * If rebind, we have to invalidate the TLB on !LR VMs to
+		 * invalidate cached PTEs pointing to freed memory. On LR VMs
+		 * this is done automatically when the context is re-enabled by
+		 * the rebind worker, or in fault mode it was invalidated on
+		 * PTE zapping.
+		 *
+		 * If !rebind, and on scratch-enabled VMs, there is a chance the
+		 * scratch PTE is already cached in the TLB so it needs to be
+		 * invalidated. On !LR VMs this is done in the ring ops
+		 * preceding a batch, but on non-faulting LR, in particular on
+		 * user-space batch buffer chaining, it needs to be done here.
+		 */
+		pt_update_ops->needs_invalidation |=
+			(pt_op->rebind && !xe_vm_in_lr_mode(vm) &&
+			!vm->batch_invalidate_tlb) ||
+			(!pt_op->rebind && vm->scratch_pt[tile->id] &&
+			 xe_vm_in_preempt_fence_mode(vm));
+
+		/* FIXME: Don't commit right away */
+		vma->tile_staged |= BIT(tile->id);
+		pt_op->vma = vma;
+		xe_pt_commit_bind(vma, pt_op->entries, pt_op->num_entries,
+				  pt_op->rebind, deferred);
 	}
 
+	return err;
+}
+
+static int unbind_op_prepare(struct xe_tile *tile,
+			     struct xe_vm_pgtable_update_ops *pt_update_ops,
+			     struct xe_vma *vma)
+{
+	u32 current_op = pt_update_ops->current_op;
+	struct xe_vm_pgtable_update_op *pt_op = &pt_update_ops->ops[current_op];
+	struct llist_head *deferred = &pt_update_ops->deferred;
+	int err;
+
+	if (!((vma->tile_present | vma->tile_staged) & BIT(tile->id)))
+		return 0;
+
+	xe_bo_assert_held(xe_vma_bo(vma));
+
+	vm_dbg(&xe_vma_vm(vma)->xe->drm,
+	       "Preparing unbind, with range [%llx...%llx)\n",
+	       xe_vma_start(vma), xe_vma_end(vma) - 1);
+
 	/*
-	 * Even if we were already evicted and unbind to destroy, we need to
-	 * clear again here. The eviction may have updated pagetables at a
-	 * lower level, because it needs to be more conservative.
+	 * Wait for invalidation to complete. Can corrupt internal page table
+	 * state if an invalidation is running while preparing an unbind.
 	 */
-	fence = xe_migrate_update_pgtables(tile->migrate,
-					   vm, NULL, q ? q :
-					   vm->q[tile->id],
-					   entries, num_entries,
-					   syncs, num_syncs,
-					   &unbind_pt_update.base);
-	if (!IS_ERR(fence)) {
-		int err;
-
-		err = xe_range_fence_insert(&vm->rftree[tile->id], rfence,
-					    &xe_range_fence_kfree_ops,
-					    unbind_pt_update.base.start,
-					    unbind_pt_update.base.last, fence);
-		if (err)
-			dma_fence_wait(fence, false);
+	if (xe_vma_is_userptr(vma) && xe_vm_in_fault_mode(xe_vma_vm(vma)))
+		mmu_interval_read_begin(&to_userptr_vma(vma)->userptr.notifier);
 
-		/* TLB invalidation must be done before signaling unbind */
-		err = invalidation_fence_init(tile->primary_gt, ifence, fence,
-					      xe_vma_start(vma),
-					      xe_vma_end(vma),
-					      xe_vma_vm(vma)->usm.asid);
-		if (err) {
-			dma_fence_put(fence);
-			kfree(ifence);
-			return ERR_PTR(err);
+	pt_op->vma = vma;
+	pt_op->bind = false;
+	pt_op->rebind = false;
+
+	err = vma_reserve_fences(tile_to_xe(tile), vma);
+	if (err)
+		return err;
+
+	pt_op->num_entries = xe_pt_stage_unbind(tile, vma, pt_op->entries);
+
+	xe_vm_dbg_print_entries(tile_to_xe(tile), pt_op->entries,
+				pt_op->num_entries, false);
+	xe_pt_update_ops_rfence_interval(pt_update_ops, vma);
+	++pt_update_ops->current_op;
+	pt_update_ops->needs_userptr_lock |= xe_vma_is_userptr(vma);
+	pt_update_ops->needs_invalidation = true;
+
+	/* FIXME: Don't commit right away */
+	xe_pt_commit_unbind(vma, pt_op->entries, pt_op->num_entries,
+			    deferred);
+
+	return 0;
+}
+
+static int op_prepare(struct xe_vm *vm,
+		      struct xe_tile *tile,
+		      struct xe_vm_pgtable_update_ops *pt_update_ops,
+		      struct xe_vma_op *op)
+{
+	int err = 0;
+
+	xe_vm_assert_held(vm);
+
+	switch (op->base.op) {
+	case DRM_GPUVA_OP_MAP:
+		if (!op->map.immediate && xe_vm_in_fault_mode(vm))
+			break;
+
+		err = bind_op_prepare(vm, tile, pt_update_ops, op->map.vma);
+		pt_update_ops->wait_vm_kernel = true;
+		break;
+	case DRM_GPUVA_OP_REMAP:
+		err = unbind_op_prepare(tile, pt_update_ops,
+					gpuva_to_vma(op->base.remap.unmap->va));
+
+		if (!err && op->remap.prev) {
+			err = bind_op_prepare(vm, tile, pt_update_ops,
+					      op->remap.prev);
+			pt_update_ops->wait_vm_bookkeep = true;
 		}
-		fence = &ifence->base.base;
+		if (!err && op->remap.next) {
+			err = bind_op_prepare(vm, tile, pt_update_ops,
+					      op->remap.next);
+			pt_update_ops->wait_vm_bookkeep = true;
+		}
+		break;
+	case DRM_GPUVA_OP_UNMAP:
+		err = unbind_op_prepare(tile, pt_update_ops,
+					gpuva_to_vma(op->base.unmap.va));
+		break;
+	case DRM_GPUVA_OP_PREFETCH:
+		err = bind_op_prepare(vm, tile, pt_update_ops,
+				      gpuva_to_vma(op->base.prefetch.va));
+		pt_update_ops->wait_vm_kernel = true;
+		break;
+	default:
+		drm_warn(&vm->xe->drm, "NOT POSSIBLE");
+	}
 
-		/* add shared fence now for pagetable delayed destroy */
-		dma_resv_add_fence(xe_vm_resv(vm), fence,
-				   DMA_RESV_USAGE_BOOKKEEP);
+	return err;
+}
 
-		/* This fence will be installed by caller when doing eviction */
-		if (!xe_vma_has_no_bo(vma) && !xe_vma_bo(vma)->vm)
-			dma_resv_add_fence(xe_vma_bo(vma)->ttm.base.resv, fence,
-					   DMA_RESV_USAGE_BOOKKEEP);
-		xe_pt_commit_unbind(vma, entries, num_entries,
-				    unbind_pt_update.locked ? &deferred : NULL);
-		vma->tile_present &= ~BIT(tile->id);
-	} else {
-		kfree(rfence);
-		kfree(ifence);
+static void
+xe_pt_update_ops_init(struct xe_vm_pgtable_update_ops *pt_update_ops)
+{
+	init_llist_head(&pt_update_ops->deferred);
+	pt_update_ops->start = ~0x0ull;
+	pt_update_ops->last = 0x0ull;
+}
+
+/**
+ * xe_pt_update_ops_prepare() - Prepare PT update operations
+ * @tile: Tile of PT update operations
+ * @vops: VMA operations
+ *
+ * Prepare the PT update operations, which includes updating internal PT
+ * state, allocating memory for page tables, populating the page tables being
+ * pruned in, and creating PT update operations for leaf insertion / removal.
+ *
+ * Return: 0 on success, negative error code on error.
+ */
+int xe_pt_update_ops_prepare(struct xe_tile *tile, struct xe_vma_ops *vops)
+{
+	struct xe_vm_pgtable_update_ops *pt_update_ops =
+		&vops->pt_update_ops[tile->id];
+	struct xe_vma_op *op;
+	int err;
+
+	lockdep_assert_held(&vops->vm->lock);
+	xe_vm_assert_held(vops->vm);
+
+	xe_pt_update_ops_init(pt_update_ops);
+
+	err = dma_resv_reserve_fences(xe_vm_resv(vops->vm),
+				      tile_to_xe(tile)->info.tile_count);
+	if (err)
+		return err;
+
+	list_for_each_entry(op, &vops->list, link) {
+		err = op_prepare(vops->vm, tile, pt_update_ops, op);
+
+		if (err)
+			return err;
 	}
 
-	if (!vma->tile_present)
-		list_del_init(&vma->combined_links.rebind);
+	xe_tile_assert(tile, pt_update_ops->current_op <=
+		       pt_update_ops->num_ops);
+
+	return 0;
+}
+
+static void bind_op_commit(struct xe_vm *vm, struct xe_tile *tile,
+			   struct xe_vm_pgtable_update_ops *pt_update_ops,
+			   struct xe_vma *vma, struct dma_fence *fence)
+{
+	if (!xe_vma_has_no_bo(vma) && !xe_vma_bo(vma)->vm)
+		dma_resv_add_fence(xe_vma_bo(vma)->ttm.base.resv, fence,
+				   pt_update_ops->wait_vm_bookkeep ?
+				   DMA_RESV_USAGE_KERNEL :
+				   DMA_RESV_USAGE_BOOKKEEP);
+	vma->tile_present |= BIT(tile->id);
+	vma->tile_staged &= ~BIT(tile->id);
+	if (xe_vma_is_userptr(vma)) {
+		lockdep_assert_held_read(&vm->userptr.notifier_lock);
+		to_userptr_vma(vma)->userptr.initial_bind = true;
+	}
 
-	if (unbind_pt_update.locked) {
-		xe_tile_assert(tile, xe_vma_is_userptr(vma));
+	/*
+	 * Kick rebind worker if this bind triggers preempt fences and not in
+	 * the rebind worker
+	 */
+	if (pt_update_ops->wait_vm_bookkeep &&
+	    xe_vm_in_preempt_fence_mode(vm) &&
+	    !current->mm)
+		xe_vm_queue_rebind_worker(vm);
+}
+
+static void unbind_op_commit(struct xe_vm *vm, struct xe_tile *tile,
+			     struct xe_vm_pgtable_update_ops *pt_update_ops,
+			     struct xe_vma *vma, struct dma_fence *fence)
+{
+	if (!xe_vma_has_no_bo(vma) && !xe_vma_bo(vma)->vm)
+		dma_resv_add_fence(xe_vma_bo(vma)->ttm.base.resv, fence,
+				   pt_update_ops->wait_vm_bookkeep ?
+				   DMA_RESV_USAGE_KERNEL :
+				   DMA_RESV_USAGE_BOOKKEEP);
+	vma->tile_present &= ~BIT(tile->id);
+	if (!vma->tile_present) {
+		list_del_init(&vma->combined_links.rebind);
+		if (xe_vma_is_userptr(vma)) {
+			lockdep_assert_held_read(&vm->userptr.notifier_lock);
 
-		if (!vma->tile_present) {
 			spin_lock(&vm->userptr.invalidated_lock);
 			list_del_init(&to_userptr_vma(vma)->userptr.invalidate_link);
 			spin_unlock(&vm->userptr.invalidated_lock);
 		}
-		up_read(&vm->userptr.notifier_lock);
-		xe_bo_put_commit(&deferred);
 	}
+}
+
+static void op_commit(struct xe_vm *vm,
+		      struct xe_tile *tile,
+		      struct xe_vm_pgtable_update_ops *pt_update_ops,
+		      struct xe_vma_op *op, struct dma_fence *fence)
+{
+	xe_vm_assert_held(vm);
+
+	switch (op->base.op) {
+	case DRM_GPUVA_OP_MAP:
+		if (!op->map.immediate && xe_vm_in_fault_mode(vm))
+			break;
+
+		bind_op_commit(vm, tile, pt_update_ops, op->map.vma, fence);
+		break;
+	case DRM_GPUVA_OP_REMAP:
+		unbind_op_commit(vm, tile, pt_update_ops,
+				 gpuva_to_vma(op->base.remap.unmap->va), fence);
+
+		if (op->remap.prev)
+			bind_op_commit(vm, tile, pt_update_ops, op->remap.prev,
+				       fence);
+		if (op->remap.next)
+			bind_op_commit(vm, tile, pt_update_ops, op->remap.next,
+				       fence);
+		break;
+	case DRM_GPUVA_OP_UNMAP:
+		unbind_op_commit(vm, tile, pt_update_ops,
+				 gpuva_to_vma(op->base.unmap.va), fence);
+		break;
+	case DRM_GPUVA_OP_PREFETCH:
+		bind_op_commit(vm, tile, pt_update_ops,
+			       gpuva_to_vma(op->base.prefetch.va), fence);
+		break;
+	default:
+		drm_warn(&vm->xe->drm, "NOT POSSIBLE");
+	}
+}
+
+static const struct xe_migrate_pt_update_ops migrate_ops = {
+	.populate = xe_vm_populate_pgtable,
+	.clear = xe_migrate_clear_pgtable_callback,
+	.pre_commit = xe_pt_pre_commit,
+};
+
+static const struct xe_migrate_pt_update_ops userptr_migrate_ops = {
+	.populate = xe_vm_populate_pgtable,
+	.clear = xe_migrate_clear_pgtable_callback,
+	.pre_commit = xe_pt_userptr_pre_commit,
+};
+
+/**
+ * xe_pt_update_ops_run() - Run PT update operations
+ * @tile: Tile of PT update operations
+ * @vops: VMA operations
+ *
+ * Run PT update operations, which includes committing internal PT state
+ * changes, creating a job for the PT update operations (leaf insertion /
+ * removal), and installing the job fence in various places.
+ *
+ * Return: fence on success, negative ERR_PTR on error.
+ */
+struct dma_fence *
+xe_pt_update_ops_run(struct xe_tile *tile, struct xe_vma_ops *vops)
+{
+	struct xe_vm *vm = vops->vm;
+	struct xe_vm_pgtable_update_ops *pt_update_ops =
+		&vops->pt_update_ops[tile->id];
+	struct dma_fence *fence;
+	struct invalidation_fence *ifence = NULL;
+	struct xe_range_fence *rfence;
+	struct xe_vma_op *op;
+	int err = 0;
+	struct xe_migrate_pt_update update = {
+		.ops = pt_update_ops->needs_userptr_lock ?
+			&userptr_migrate_ops :
+			&migrate_ops,
+		.vops = vops,
+		.tile_id = tile->id
+	};
+
+	lockdep_assert_held(&vm->lock);
+	xe_vm_assert_held(vm);
+
+	if (!pt_update_ops->current_op) {
+		xe_tile_assert(tile, xe_vm_in_fault_mode(vm));
+
+		return dma_fence_get_stub();
+	}
+
+	if (pt_update_ops->needs_invalidation) {
+		ifence = kzalloc(sizeof(*ifence), GFP_KERNEL);
+		if (!ifence)
+			return ERR_PTR(-ENOMEM);
+	}
+
+	rfence = kzalloc(sizeof(*rfence), GFP_KERNEL);
+	if (!rfence) {
+		err = -ENOMEM;
+		goto free_ifence;
+	}
+
+	fence = xe_migrate_update_pgtables(tile->migrate, &update);
+	if (IS_ERR(fence)) {
+		err = PTR_ERR(fence);
+		goto free_rfence;
+	}
+
+	err = xe_range_fence_insert(&vm->rftree[tile->id], rfence,
+				    &xe_range_fence_kfree_ops,
+				    pt_update_ops->start,
+				    pt_update_ops->last, fence);
+	if (err)
+		dma_fence_wait(fence, false);
+
+	/* tlb invalidation must be done before signaling rebind */
+	if (ifence) {
+		err = invalidation_fence_init(tile->primary_gt, ifence, fence,
+					      pt_update_ops->start,
+					      pt_update_ops->last,
+					      vm->usm.asid);
+		if (err)
+			goto put_fence;
+		fence = &ifence->base.base;
+	}
+
+	dma_resv_add_fence(xe_vm_resv(vm), fence,
+			   pt_update_ops->wait_vm_bookkeep ?
+			   DMA_RESV_USAGE_KERNEL :
+			   DMA_RESV_USAGE_BOOKKEEP);
+
+	list_for_each_entry(op, &vops->list, link)
+		op_commit(vops->vm, tile, pt_update_ops, op, fence);
+
+	if (pt_update_ops->needs_userptr_lock)
+		up_read(&vm->userptr.notifier_lock);
 
 	return fence;
+
+put_fence:
+	if (pt_update_ops->needs_userptr_lock)
+		up_read(&vm->userptr.notifier_lock);
+	dma_fence_put(fence);
+free_rfence:
+	kfree(rfence);
+free_ifence:
+	kfree(ifence);
+
+	return ERR_PTR(err);
 }
+
+/**
+ * xe_pt_update_ops_fini() - Finish PT update operations
+ * @tile: Tile of PT update operations
+ * @vops: VMA operations
+ *
+ * Finish PT update operations by committing the destruction of page-table
+ * memory
+ */
+void xe_pt_update_ops_fini(struct xe_tile *tile, struct xe_vma_ops *vops)
+{
+	struct xe_vm_pgtable_update_ops *pt_update_ops =
+		&vops->pt_update_ops[tile->id];
+	int i;
+
+	lockdep_assert_held(&vops->vm->lock);
+	xe_vm_assert_held(vops->vm);
+
+	/* FIXME: Not 100% correct */
+	for (i = 0; i < pt_update_ops->num_ops; ++i) {
+		struct xe_vm_pgtable_update_op *pt_op = &pt_update_ops->ops[i];
+
+		if (pt_op->bind)
+			xe_pt_free_bind(pt_op->entries, pt_op->num_entries);
+	}
+	xe_bo_put_commit(&vops->pt_update_ops[tile->id].deferred);
+}
+
+/**
+ * xe_pt_update_ops_abort() - Abort PT update operations
+ * @tile: Tile of PT update operations
+ * @vops: VMA operations
+ *
+ * Abort PT update operations by unwinding internal PT state
+ */
+void xe_pt_update_ops_abort(struct xe_tile *tile, struct xe_vma_ops *vops)
+{
+	lockdep_assert_held(&vops->vm->lock);
+	xe_vm_assert_held(vops->vm);
+
+	/* FIXME: Just kill VM for now + cleanup PTs */
+	xe_bo_put_commit(&vops->pt_update_ops[tile->id].deferred);
+	xe_vm_kill(vops->vm, false);
+}
diff --git a/drivers/gpu/drm/xe/xe_pt.h b/drivers/gpu/drm/xe/xe_pt.h
index 71a4fbfcff43..9ab386431cad 100644
--- a/drivers/gpu/drm/xe/xe_pt.h
+++ b/drivers/gpu/drm/xe/xe_pt.h
@@ -17,6 +17,7 @@ struct xe_sync_entry;
 struct xe_tile;
 struct xe_vm;
 struct xe_vma;
+struct xe_vma_ops;
 
 /* Largest huge pte is currently 1GiB. May become device dependent. */
 #define MAX_HUGEPTE_LEVEL 2
@@ -34,14 +35,11 @@ void xe_pt_populate_empty(struct xe_tile *tile, struct xe_vm *vm,
 
 void xe_pt_destroy(struct xe_pt *pt, u32 flags, struct llist_head *deferred);
 
-struct dma_fence *
-__xe_pt_bind_vma(struct xe_tile *tile, struct xe_vma *vma, struct xe_exec_queue *q,
-		 struct xe_sync_entry *syncs, u32 num_syncs,
-		 bool rebind);
-
-struct dma_fence *
-__xe_pt_unbind_vma(struct xe_tile *tile, struct xe_vma *vma, struct xe_exec_queue *q,
-		   struct xe_sync_entry *syncs, u32 num_syncs);
+int xe_pt_update_ops_prepare(struct xe_tile *tile, struct xe_vma_ops *vops);
+struct dma_fence *xe_pt_update_ops_run(struct xe_tile *tile,
+				       struct xe_vma_ops *vops);
+void xe_pt_update_ops_fini(struct xe_tile *tile, struct xe_vma_ops *vops);
+void xe_pt_update_ops_abort(struct xe_tile *tile, struct xe_vma_ops *vops);
 
 bool xe_pt_zap_ptes(struct xe_tile *tile, struct xe_vma *vma);
 
diff --git a/drivers/gpu/drm/xe/xe_pt_types.h b/drivers/gpu/drm/xe/xe_pt_types.h
index 2093150f461e..384cc04de719 100644
--- a/drivers/gpu/drm/xe/xe_pt_types.h
+++ b/drivers/gpu/drm/xe/xe_pt_types.h
@@ -78,6 +78,8 @@ struct xe_vm_pgtable_update {
 struct xe_vm_pgtable_update_op {
 	/** @entries: entries to update for this operation */
 	struct xe_vm_pgtable_update entries[XE_VM_MAX_LEVEL * 2 + 1];
+	/** @vma: VMA for operation, operation not valid if NULL */
+	struct xe_vma *vma;
 	/** @num_entries: number of entries for this update operation */
 	u32 num_entries;
 	/** @bind: is a bind */
@@ -86,4 +88,38 @@ struct xe_vm_pgtable_update_op {
 	bool rebind;
 };
 
+/** struct xe_vm_pgtable_update_ops - page table update operations */
+struct xe_vm_pgtable_update_ops {
+	/** @ops: operations */
+	struct xe_vm_pgtable_update_op *ops;
+	/** @deferred: deferred list to destroy PT entries */
+	struct llist_head deferred;
+	/** @q: exec queue for PT operations */
+	struct xe_exec_queue *q;
+	/** @start: start address of ops */
+	u64 start;
+	/** @last: last address of ops */
+	u64 last;
+	/** @num_ops: number of operations */
+	u32 num_ops;
+	/** @current_op: current operation */
+	u32 current_op;
+	/** @needs_userptr_lock: Needs userptr lock */
+	bool needs_userptr_lock;
+	/** @needs_invalidation: Needs invalidation */
+	bool needs_invalidation;
+	/**
+	 * @wait_vm_bookkeep: PT operations need to wait until VM is idle
+	 * (bookkeep dma-resv slots are idle) and stage all future VM activity
+	 * behind these operations (install PT operations into VM kernel
+	 * dma-resv slot).
+	 */
+	bool wait_vm_bookkeep;
+	/**
+	 * @wait_vm_kernel: PT operations need to wait until VM kernel dma-resv
+	 * slots are idle.
+	 */
+	bool wait_vm_kernel;
+};
+
 #endif
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index f3795a7a0f25..551048bff9ce 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -315,7 +315,7 @@ int __xe_vm_userptr_needs_repin(struct xe_vm *vm)
 
 #define XE_VM_REBIND_RETRY_TIMEOUT_MS 1000
 
-static void xe_vm_kill(struct xe_vm *vm, bool unlocked)
+void xe_vm_kill(struct xe_vm *vm, bool unlocked)
 {
 	struct xe_exec_queue *q;
 
@@ -792,7 +792,7 @@ int xe_vm_rebind(struct xe_vm *vm, bool rebind_worker)
 	struct xe_vma *vma, *next;
 	struct xe_vma_ops vops;
 	struct xe_vma_op *op, *next_op;
-	int err;
+	int err, i;
 
 	lockdep_assert_held(&vm->lock);
 	if ((xe_vm_in_lr_mode(vm) && !rebind_worker) ||
@@ -800,6 +800,8 @@ int xe_vm_rebind(struct xe_vm *vm, bool rebind_worker)
 		return 0;
 
 	xe_vma_ops_init(&vops, vm, NULL, NULL, 0);
+	for (i = 0; i < XE_MAX_TILES_PER_DEVICE; ++i)
+		vops.pt_update_ops[i].wait_vm_bookkeep = true;
 
 	xe_vm_assert_held(vm);
 	list_for_each_entry(vma, &vm->rebind_list, combined_links.rebind) {
@@ -844,6 +846,8 @@ struct dma_fence *xe_vma_rebind(struct xe_vm *vm, struct xe_vma *vma, u8 tile_ma
 	struct dma_fence *fence = NULL;
 	struct xe_vma_ops vops;
 	struct xe_vma_op *op, *next_op;
+	struct xe_tile *tile;
+	u8 id;
 	int err;
 
 	lockdep_assert_held(&vm->lock);
@@ -851,6 +855,11 @@ struct dma_fence *xe_vma_rebind(struct xe_vm *vm, struct xe_vma *vma, u8 tile_ma
 	xe_assert(vm->xe, xe_vm_in_fault_mode(vm));
 
 	xe_vma_ops_init(&vops, vm, NULL, NULL, 0);
+	for_each_tile(tile, vm->xe, id) {
+		vops.pt_update_ops[id].wait_vm_bookkeep = true;
+		vops.pt_update_ops[tile->id].q =
+			xe_tile_migrate_exec_queue(tile);
+	}
 
 	err = xe_vm_ops_add_rebind(&vops, vma, tile_mask);
 	if (err)
@@ -1691,147 +1700,6 @@ to_wait_exec_queue(struct xe_vm *vm, struct xe_exec_queue *q)
 	return q ? q : vm->q[0];
 }
 
-static struct dma_fence *
-xe_vm_unbind_vma(struct xe_vma *vma, struct xe_exec_queue *q,
-		 struct xe_sync_entry *syncs, u32 num_syncs,
-		 bool first_op, bool last_op)
-{
-	struct xe_vm *vm = xe_vma_vm(vma);
-	struct xe_exec_queue *wait_exec_queue = to_wait_exec_queue(vm, q);
-	struct xe_tile *tile;
-	struct dma_fence *fence = NULL;
-	struct dma_fence **fences = NULL;
-	struct dma_fence_array *cf = NULL;
-	int cur_fence = 0;
-	int number_tiles = hweight8(vma->tile_present);
-	int err;
-	u8 id;
-
-	trace_xe_vma_unbind(vma);
-
-	if (number_tiles > 1) {
-		fences = kmalloc_array(number_tiles, sizeof(*fences),
-				       GFP_KERNEL);
-		if (!fences)
-			return ERR_PTR(-ENOMEM);
-	}
-
-	for_each_tile(tile, vm->xe, id) {
-		if (!(vma->tile_present & BIT(id)))
-			goto next;
-
-		fence = __xe_pt_unbind_vma(tile, vma, q ? q : vm->q[id],
-					   first_op ? syncs : NULL,
-					   first_op ? num_syncs : 0);
-		if (IS_ERR(fence)) {
-			err = PTR_ERR(fence);
-			goto err_fences;
-		}
-
-		if (fences)
-			fences[cur_fence++] = fence;
-
-next:
-		if (q && vm->pt_root[id] && !list_empty(&q->multi_gt_list))
-			q = list_next_entry(q, multi_gt_list);
-	}
-
-	if (fences) {
-		cf = dma_fence_array_create(number_tiles, fences,
-					    vm->composite_fence_ctx,
-					    vm->composite_fence_seqno++,
-					    false);
-		if (!cf) {
-			--vm->composite_fence_seqno;
-			err = -ENOMEM;
-			goto err_fences;
-		}
-	}
-
-	fence = cf ? &cf->base : !fence ?
-		xe_exec_queue_last_fence_get(wait_exec_queue, vm) : fence;
-
-	return fence;
-
-err_fences:
-	if (fences) {
-		while (cur_fence)
-			dma_fence_put(fences[--cur_fence]);
-		kfree(fences);
-	}
-
-	return ERR_PTR(err);
-}
-
-static struct dma_fence *
-xe_vm_bind_vma(struct xe_vma *vma, struct xe_exec_queue *q,
-	       struct xe_sync_entry *syncs, u32 num_syncs,
-	       u8 tile_mask, bool first_op, bool last_op)
-{
-	struct xe_tile *tile;
-	struct dma_fence *fence;
-	struct dma_fence **fences = NULL;
-	struct dma_fence_array *cf = NULL;
-	struct xe_vm *vm = xe_vma_vm(vma);
-	int cur_fence = 0;
-	int number_tiles = hweight8(tile_mask);
-	int err;
-	u8 id;
-
-	trace_xe_vma_bind(vma);
-
-	if (number_tiles > 1) {
-		fences = kmalloc_array(number_tiles, sizeof(*fences),
-				       GFP_KERNEL);
-		if (!fences)
-			return ERR_PTR(-ENOMEM);
-	}
-
-	for_each_tile(tile, vm->xe, id) {
-		if (!(tile_mask & BIT(id)))
-			goto next;
-
-		fence = __xe_pt_bind_vma(tile, vma, q ? q : vm->q[id],
-					 first_op ? syncs : NULL,
-					 first_op ? num_syncs : 0,
-					 vma->tile_present & BIT(id));
-		if (IS_ERR(fence)) {
-			err = PTR_ERR(fence);
-			goto err_fences;
-		}
-
-		if (fences)
-			fences[cur_fence++] = fence;
-
-next:
-		if (q && vm->pt_root[id] && !list_empty(&q->multi_gt_list))
-			q = list_next_entry(q, multi_gt_list);
-	}
-
-	if (fences) {
-		cf = dma_fence_array_create(number_tiles, fences,
-					    vm->composite_fence_ctx,
-					    vm->composite_fence_seqno++,
-					    false);
-		if (!cf) {
-			--vm->composite_fence_seqno;
-			err = -ENOMEM;
-			goto err_fences;
-		}
-	}
-
-	return cf ? &cf->base : fence;
-
-err_fences:
-	if (fences) {
-		while (cur_fence)
-			dma_fence_put(fences[--cur_fence]);
-		kfree(fences);
-	}
-
-	return ERR_PTR(err);
-}
-
 static struct xe_user_fence *
 find_ufence_get(struct xe_sync_entry *syncs, u32 num_syncs)
 {
@@ -1847,48 +1715,6 @@ find_ufence_get(struct xe_sync_entry *syncs, u32 num_syncs)
 	return NULL;
 }
 
-static struct dma_fence *
-xe_vm_bind(struct xe_vm *vm, struct xe_vma *vma, struct xe_exec_queue *q,
-	   struct xe_bo *bo, struct xe_sync_entry *syncs, u32 num_syncs,
-	   u8 tile_mask, bool immediate, bool first_op, bool last_op)
-{
-	struct dma_fence *fence;
-	struct xe_exec_queue *wait_exec_queue = to_wait_exec_queue(vm, q);
-
-	xe_vm_assert_held(vm);
-	xe_bo_assert_held(bo);
-
-	if (immediate) {
-		fence = xe_vm_bind_vma(vma, q, syncs, num_syncs, tile_mask,
-				       first_op, last_op);
-		if (IS_ERR(fence))
-			return fence;
-	} else {
-		xe_assert(vm->xe, xe_vm_in_fault_mode(vm));
-
-		fence = xe_exec_queue_last_fence_get(wait_exec_queue, vm);
-	}
-
-	return fence;
-}
-
-static struct dma_fence *
-xe_vm_unbind(struct xe_vm *vm, struct xe_vma *vma,
-	     struct xe_exec_queue *q, struct xe_sync_entry *syncs,
-	     u32 num_syncs, bool first_op, bool last_op)
-{
-	struct dma_fence *fence;
-
-	xe_vm_assert_held(vm);
-	xe_bo_assert_held(xe_vma_bo(vma));
-
-	fence = xe_vm_unbind_vma(vma, q, syncs, num_syncs, first_op, last_op);
-	if (IS_ERR(fence))
-		return fence;
-
-	return fence;
-}
-
 #define ALL_DRM_XE_VM_CREATE_FLAGS (DRM_XE_VM_CREATE_FLAG_SCRATCH_PAGE | \
 				    DRM_XE_VM_CREATE_FLAG_LR_MODE | \
 				    DRM_XE_VM_CREATE_FLAG_FAULT_MODE)
@@ -2029,21 +1855,6 @@ static const u32 region_to_mem_type[] = {
 	XE_PL_VRAM1,
 };
 
-static struct dma_fence *
-xe_vm_prefetch(struct xe_vm *vm, struct xe_vma *vma,
-	       struct xe_exec_queue *q, struct xe_sync_entry *syncs,
-	       u32 num_syncs, bool first_op, bool last_op)
-{
-	struct xe_exec_queue *wait_exec_queue = to_wait_exec_queue(vm, q);
-
-	if (vma->tile_mask != (vma->tile_present & ~vma->tile_invalidated)) {
-		return xe_vm_bind(vm, vma, q, xe_vma_bo(vma), syncs, num_syncs,
-				  vma->tile_mask, true, first_op, last_op);
-	} else {
-		return xe_exec_queue_last_fence_get(wait_exec_queue, vm);
-	}
-}
-
 static void prep_vma_destroy(struct xe_vm *vm, struct xe_vma *vma,
 			     bool post_commit)
 {
@@ -2332,13 +2143,10 @@ static int xe_vma_op_commit(struct xe_vm *vm, struct xe_vma_op *op)
 	return err;
 }
 
-static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct xe_exec_queue *q,
-				   struct drm_gpuva_ops *ops,
-				   struct xe_sync_entry *syncs, u32 num_syncs,
-				   struct xe_vma_ops *vops, bool last)
+static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
+				   struct xe_vma_ops *vops)
 {
 	struct xe_device *xe = vm->xe;
-	struct xe_vma_op *last_op = NULL;
 	struct drm_gpuva_op *__op;
 	struct xe_tile *tile;
 	u8 id, tile_mask = 0;
@@ -2352,19 +2160,10 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct xe_exec_queue *q,
 	drm_gpuva_for_each_op(__op, ops) {
 		struct xe_vma_op *op = gpuva_op_to_vma_op(__op);
 		struct xe_vma *vma;
-		bool first = list_empty(&vops->list);
 		unsigned int flags = 0;
 
 		INIT_LIST_HEAD(&op->link);
 		list_add_tail(&op->link, &vops->list);
-
-		if (first) {
-			op->flags |= XE_VMA_OP_FIRST;
-			op->num_syncs = num_syncs;
-			op->syncs = syncs;
-		}
-
-		op->q = q;
 		op->tile_mask = tile_mask;
 
 		switch (op->base.op) {
@@ -2477,197 +2276,21 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct xe_exec_queue *q,
 		}
 		case DRM_GPUVA_OP_UNMAP:
 		case DRM_GPUVA_OP_PREFETCH:
+			/* FIXME: Need to skip some prefetch ops */
 			xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask);
 			break;
 		default:
 			drm_warn(&vm->xe->drm, "NOT POSSIBLE");
 		}
 
-		last_op = op;
-
 		err = xe_vma_op_commit(vm, op);
 		if (err)
 			return err;
 	}
 
-	/* FIXME: Unhandled corner case */
-	XE_WARN_ON(!last_op && last && !list_empty(&vops->list));
-
-	if (!last_op)
-		return 0;
-
-	if (last) {
-		last_op->flags |= XE_VMA_OP_LAST;
-		last_op->num_syncs = num_syncs;
-		last_op->syncs = syncs;
-	}
-
 	return 0;
 }
 
-static struct dma_fence *op_execute(struct xe_vm *vm, struct xe_vma *vma,
-				    struct xe_vma_op *op)
-{
-	struct dma_fence *fence = NULL;
-
-	lockdep_assert_held(&vm->lock);
-
-	xe_vm_assert_held(vm);
-	xe_bo_assert_held(xe_vma_bo(vma));
-
-	switch (op->base.op) {
-	case DRM_GPUVA_OP_MAP:
-		fence = xe_vm_bind(vm, vma, op->q, xe_vma_bo(vma),
-				   op->syncs, op->num_syncs,
-				   op->tile_mask,
-				   op->map.immediate || !xe_vm_in_fault_mode(vm),
-				   op->flags & XE_VMA_OP_FIRST,
-				   op->flags & XE_VMA_OP_LAST);
-		break;
-	case DRM_GPUVA_OP_REMAP:
-	{
-		bool prev = !!op->remap.prev;
-		bool next = !!op->remap.next;
-
-		if (!op->remap.unmap_done) {
-			if (prev || next)
-				vma->gpuva.flags |= XE_VMA_FIRST_REBIND;
-			fence = xe_vm_unbind(vm, vma, op->q, op->syncs,
-					     op->num_syncs,
-					     op->flags & XE_VMA_OP_FIRST,
-					     op->flags & XE_VMA_OP_LAST &&
-					     !prev && !next);
-			if (IS_ERR(fence))
-				break;
-			op->remap.unmap_done = true;
-		}
-
-		if (prev) {
-			op->remap.prev->gpuva.flags |= XE_VMA_LAST_REBIND;
-			dma_fence_put(fence);
-			fence = xe_vm_bind(vm, op->remap.prev, op->q,
-					   xe_vma_bo(op->remap.prev), op->syncs,
-					   op->num_syncs,
-					   op->remap.prev->tile_mask, true,
-					   false,
-					   op->flags & XE_VMA_OP_LAST && !next);
-			op->remap.prev->gpuva.flags &= ~XE_VMA_LAST_REBIND;
-			if (IS_ERR(fence))
-				break;
-			op->remap.prev = NULL;
-		}
-
-		if (next) {
-			op->remap.next->gpuva.flags |= XE_VMA_LAST_REBIND;
-			dma_fence_put(fence);
-			fence = xe_vm_bind(vm, op->remap.next, op->q,
-					   xe_vma_bo(op->remap.next),
-					   op->syncs, op->num_syncs,
-					   op->remap.next->tile_mask, true,
-					   false, op->flags & XE_VMA_OP_LAST);
-			op->remap.next->gpuva.flags &= ~XE_VMA_LAST_REBIND;
-			if (IS_ERR(fence))
-				break;
-			op->remap.next = NULL;
-		}
-
-		break;
-	}
-	case DRM_GPUVA_OP_UNMAP:
-		fence = xe_vm_unbind(vm, vma, op->q, op->syncs,
-				     op->num_syncs, op->flags & XE_VMA_OP_FIRST,
-				     op->flags & XE_VMA_OP_LAST);
-		break;
-	case DRM_GPUVA_OP_PREFETCH:
-		fence = xe_vm_prefetch(vm, vma, op->q, op->syncs, op->num_syncs,
-				       op->flags & XE_VMA_OP_FIRST,
-				       op->flags & XE_VMA_OP_LAST);
-		break;
-	default:
-		drm_warn(&vm->xe->drm, "NOT POSSIBLE");
-	}
-
-	if (IS_ERR(fence))
-		trace_xe_vma_fail(vma);
-
-	return fence;
-}
-
-static struct dma_fence *
-__xe_vma_op_execute(struct xe_vm *vm, struct xe_vma *vma,
-		    struct xe_vma_op *op)
-{
-	struct dma_fence *fence;
-	int err;
-
-retry_userptr:
-	fence = op_execute(vm, vma, op);
-	if (IS_ERR(fence) && PTR_ERR(fence) == -EAGAIN) {
-		lockdep_assert_held_write(&vm->lock);
-
-		if (op->base.op == DRM_GPUVA_OP_REMAP) {
-			if (!op->remap.unmap_done)
-				vma = gpuva_to_vma(op->base.remap.unmap->va);
-			else if (op->remap.prev)
-				vma = op->remap.prev;
-			else
-				vma = op->remap.next;
-		}
-
-		if (xe_vma_is_userptr(vma)) {
-			err = xe_vma_userptr_pin_pages(to_userptr_vma(vma));
-			if (!err)
-				goto retry_userptr;
-
-			fence = ERR_PTR(err);
-			trace_xe_vma_fail(vma);
-		}
-	}
-
-	return fence;
-}
-
-static struct dma_fence *
-xe_vma_op_execute(struct xe_vm *vm, struct xe_vma_op *op)
-{
-	struct dma_fence *fence = ERR_PTR(-ENOMEM);
-
-	lockdep_assert_held(&vm->lock);
-
-	switch (op->base.op) {
-	case DRM_GPUVA_OP_MAP:
-		fence = __xe_vma_op_execute(vm, op->map.vma, op);
-		break;
-	case DRM_GPUVA_OP_REMAP:
-	{
-		struct xe_vma *vma;
-
-		if (!op->remap.unmap_done)
-			vma = gpuva_to_vma(op->base.remap.unmap->va);
-		else if (op->remap.prev)
-			vma = op->remap.prev;
-		else
-			vma = op->remap.next;
-
-		fence = __xe_vma_op_execute(vm, vma, op);
-		break;
-	}
-	case DRM_GPUVA_OP_UNMAP:
-		fence = __xe_vma_op_execute(vm, gpuva_to_vma(op->base.unmap.va),
-					    op);
-		break;
-	case DRM_GPUVA_OP_PREFETCH:
-		fence = __xe_vma_op_execute(vm,
-					    gpuva_to_vma(op->base.prefetch.va),
-					    op);
-		break;
-	default:
-		drm_warn(&vm->xe->drm, "NOT POSSIBLE");
-	}
-
-	return fence;
-}
-
 static void xe_vma_op_unwind(struct xe_vm *vm, struct xe_vma_op *op,
 			     bool post_commit, bool prev_post_commit,
 			     bool next_post_commit)
@@ -2853,23 +2476,110 @@ static int vm_bind_ioctl_ops_lock_and_prep(struct drm_exec *exec,
 	return 0;
 }
 
+static int vm_ops_setup_tile_args(struct xe_vm *vm, struct xe_vma_ops *vops)
+{
+	struct xe_exec_queue *q = vops->q;
+	struct xe_tile *tile;
+	int number_tiles = 0;
+	u8 id;
+
+	for_each_tile(tile, vm->xe, id) {
+		if (vops->pt_update_ops[id].num_ops)
+			++number_tiles;
+
+		if (vops->pt_update_ops[id].q)
+			continue;
+
+		if (q) {
+			vops->pt_update_ops[id].q = q;
+			if (vm->pt_root[id] && !list_empty(&q->multi_gt_list))
+				q = list_next_entry(q, multi_gt_list);
+		} else {
+			vops->pt_update_ops[id].q = vm->q[id];
+		}
+	}
+
+	return number_tiles;
+}
+
 static struct dma_fence *ops_execute(struct xe_vm *vm,
 				     struct xe_vma_ops *vops)
 {
-	struct xe_vma_op *op, *next;
+	struct xe_tile *tile;
 	struct dma_fence *fence = NULL;
+	struct dma_fence **fences = NULL;
+	struct dma_fence_array *cf = NULL;
+	int number_tiles = 0, current_fence = 0, err;
+	u8 id;
 
-	list_for_each_entry_safe(op, next, &vops->list, link) {
-		dma_fence_put(fence);
-		fence = xe_vma_op_execute(vm, op);
-		if (IS_ERR(fence)) {
-			drm_warn(&vm->xe->drm, "VM op(%d) failed with %ld",
-				 op->base.op, PTR_ERR(fence));
-			fence = ERR_PTR(-ENOSPC);
-			break;
+	number_tiles = vm_ops_setup_tile_args(vm, vops);
+	if (number_tiles == 0)
+		return ERR_PTR(-ENODATA);
+
+	if (number_tiles > 1) {
+		fences = kmalloc_array(number_tiles, sizeof(*fences),
+				       GFP_KERNEL);
+		if (!fences)
+			return ERR_PTR(-ENOMEM);
+	}
+
+	for_each_tile(tile, vm->xe, id) {
+		if (!vops->pt_update_ops[id].num_ops)
+			continue;
+
+		err = xe_pt_update_ops_prepare(tile, vops);
+		if (err) {
+			fence = ERR_PTR(err);
+			goto err_out;
 		}
 	}
 
+	for_each_tile(tile, vm->xe, id) {
+		if (!vops->pt_update_ops[id].num_ops)
+			continue;
+
+		fence = xe_pt_update_ops_run(tile, vops);
+		if (IS_ERR(fence))
+			goto err_out;
+
+		if (fences)
+			fences[current_fence++] = fence;
+	}
+
+	if (fences) {
+		cf = dma_fence_array_create(number_tiles, fences,
+					    vm->composite_fence_ctx,
+					    vm->composite_fence_seqno++,
+					    false);
+		if (!cf) {
+			--vm->composite_fence_seqno;
+			fence = ERR_PTR(-ENOMEM);
+			goto err_out;
+		}
+		fence = &cf->base;
+	}
+
+	for_each_tile(tile, vm->xe, id) {
+		if (!vops->pt_update_ops[id].num_ops)
+			continue;
+
+		xe_pt_update_ops_fini(tile, vops);
+	}
+
+	return fence;
+
+err_out:
+	for_each_tile(tile, vm->xe, id) {
+		if (!vops->pt_update_ops[id].num_ops)
+			continue;
+
+		xe_pt_update_ops_abort(tile, vops);
+	}
+	while (current_fence)
+		dma_fence_put(fences[--current_fence]);
+	kfree(fences);
+	kfree(cf);
+
 	return fence;
 }
 
@@ -2950,12 +2660,10 @@ static int vm_bind_ioctl_ops_execute(struct xe_vm *vm,
 		fence = ops_execute(vm, vops);
 		if (IS_ERR(fence)) {
 			err = PTR_ERR(fence);
-			/* FIXME: Killing VM rather than proper error handling */
-			xe_vm_kill(vm, false);
 			goto unlock;
-		} else {
-			vm_bind_ioctl_ops_fini(vm, vops, fence);
 		}
+
+		vm_bind_ioctl_ops_fini(vm, vops, fence);
 	}
 
 unlock:
@@ -3312,8 +3020,7 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
 			goto unwind_ops;
 		}
 
-		err = vm_bind_ioctl_ops_parse(vm, q, ops[i], syncs, num_syncs,
-					      &vops, i == args->num_binds - 1);
+		err = vm_bind_ioctl_ops_parse(vm, ops[i], &vops);
 		if (err)
 			goto unwind_ops;
 	}
diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
index b481608b12f1..c864dba35e1d 100644
--- a/drivers/gpu/drm/xe/xe_vm.h
+++ b/drivers/gpu/drm/xe/xe_vm.h
@@ -259,6 +259,8 @@ static inline struct dma_resv *xe_vm_resv(struct xe_vm *vm)
 	return drm_gpuvm_resv(&vm->gpuvm);
 }
 
+void xe_vm_kill(struct xe_vm *vm, bool unlocked);
+
 /**
  * xe_vm_assert_held(vm) - Assert that the vm's reservation object is held.
  * @vm: The vm
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index 211c88801182..27d651093d30 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -26,14 +26,12 @@ struct xe_vm_pgtable_update_op;
 #define XE_VMA_READ_ONLY	DRM_GPUVA_USERBITS
 #define XE_VMA_DESTROYED	(DRM_GPUVA_USERBITS << 1)
 #define XE_VMA_ATOMIC_PTE_BIT	(DRM_GPUVA_USERBITS << 2)
-#define XE_VMA_FIRST_REBIND	(DRM_GPUVA_USERBITS << 3)
-#define XE_VMA_LAST_REBIND	(DRM_GPUVA_USERBITS << 4)
-#define XE_VMA_PTE_4K		(DRM_GPUVA_USERBITS << 5)
-#define XE_VMA_PTE_2M		(DRM_GPUVA_USERBITS << 6)
-#define XE_VMA_PTE_1G		(DRM_GPUVA_USERBITS << 7)
-#define XE_VMA_PTE_64K		(DRM_GPUVA_USERBITS << 8)
-#define XE_VMA_PTE_COMPACT	(DRM_GPUVA_USERBITS << 9)
-#define XE_VMA_DUMPABLE		(DRM_GPUVA_USERBITS << 10)
+#define XE_VMA_PTE_4K		(DRM_GPUVA_USERBITS << 3)
+#define XE_VMA_PTE_2M		(DRM_GPUVA_USERBITS << 4)
+#define XE_VMA_PTE_1G		(DRM_GPUVA_USERBITS << 5)
+#define XE_VMA_PTE_64K		(DRM_GPUVA_USERBITS << 6)
+#define XE_VMA_PTE_COMPACT	(DRM_GPUVA_USERBITS << 7)
+#define XE_VMA_DUMPABLE		(DRM_GPUVA_USERBITS << 8)
 
 /** struct xe_userptr - User pointer */
 struct xe_userptr {
@@ -100,6 +98,9 @@ struct xe_vma {
 	 */
 	u8 tile_present;
 
+	/** @tile_staged: bind is staged for this VMA */
+	u8 tile_staged;
+
 	/**
 	 * @pat_index: The pat index to use when encoding the PTEs for this vma.
 	 */
@@ -315,31 +316,18 @@ struct xe_vma_op_prefetch {
 
 /** enum xe_vma_op_flags - flags for VMA operation */
 enum xe_vma_op_flags {
-	/** @XE_VMA_OP_FIRST: first VMA operation for a set of syncs */
-	XE_VMA_OP_FIRST			= BIT(0),
-	/** @XE_VMA_OP_LAST: last VMA operation for a set of syncs */
-	XE_VMA_OP_LAST			= BIT(1),
 	/** @XE_VMA_OP_COMMITTED: VMA operation committed */
-	XE_VMA_OP_COMMITTED		= BIT(2),
+	XE_VMA_OP_COMMITTED		= BIT(0),
 	/** @XE_VMA_OP_PREV_COMMITTED: Previous VMA operation committed */
-	XE_VMA_OP_PREV_COMMITTED	= BIT(3),
+	XE_VMA_OP_PREV_COMMITTED	= BIT(1),
 	/** @XE_VMA_OP_NEXT_COMMITTED: Next VMA operation committed */
-	XE_VMA_OP_NEXT_COMMITTED	= BIT(4),
+	XE_VMA_OP_NEXT_COMMITTED	= BIT(2),
 };
 
 /** struct xe_vma_op - VMA operation */
 struct xe_vma_op {
 	/** @base: GPUVA base operation */
 	struct drm_gpuva_op base;
-	/** @q: exec queue for this operation */
-	struct xe_exec_queue *q;
-	/**
-	 * @syncs: syncs for this operation, only used on first and last
-	 * operation
-	 */
-	struct xe_sync_entry *syncs;
-	/** @num_syncs: number of syncs */
-	u32 num_syncs;
 	/** @link: async operation link */
 	struct list_head link;
 	/** @flags: operation flags */
@@ -363,19 +351,14 @@ struct xe_vma_ops {
 	struct list_head list;
 	/** @vm: VM */
 	struct xe_vm *vm;
-	/** @q: exec queue these operations */
+	/** @q: exec queue for VMA operations */
 	struct xe_exec_queue *q;
 	/** @syncs: syncs these operation */
 	struct xe_sync_entry *syncs;
 	/** @num_syncs: number of syncs */
 	u32 num_syncs;
 	/** @pt_update_ops: page table update operations */
-	struct {
-		/** @ops: operations */
-		struct xe_vm_pgtable_update_op *ops;
-		/** @num_ops: number of operations */
-		u32 num_ops;
-	} pt_update_ops[XE_MAX_TILES_PER_DEVICE];
+	struct xe_vm_pgtable_update_ops pt_update_ops[XE_MAX_TILES_PER_DEVICE];
 };
 
 #endif
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH v3 4/5] drm/xe: Update VM trace events
  2024-05-29 18:31 [PATCH v3 0/5] Convert multiple bind ops to 1 job Matthew Brost
                   ` (2 preceding siblings ...)
  2024-05-29 18:31 ` [PATCH v3 3/5] drm/xe: Convert multiple bind ops into single job Matthew Brost
@ 2024-05-29 18:31 ` Matthew Brost
  2024-05-29 18:31 ` [PATCH v3 5/5] drm/xe: Update PT layer with better error handling Matthew Brost
                   ` (8 subsequent siblings)
  12 siblings, 0 replies; 16+ messages in thread
From: Matthew Brost @ 2024-05-29 18:31 UTC (permalink / raw)
  To: intel-xe; +Cc: Matthew Brost, Oak Zeng, Thomas Hellström, Jonathan Cavitt

The trace events have changed with the move to a single job per VM bind
IOCTL; update the trace events to align with the old behavior as much as
possible.

Cc: Oak Zeng <oak.zeng@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
---
 drivers/gpu/drm/xe/xe_trace.h | 10 ++++-----
 drivers/gpu/drm/xe/xe_vm.c    | 42 +++++++++++++++++++++++++++++++++--
 2 files changed, 45 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_trace.h b/drivers/gpu/drm/xe/xe_trace.h
index 450f407c66e8..0f5b9df889b9 100644
--- a/drivers/gpu/drm/xe/xe_trace.h
+++ b/drivers/gpu/drm/xe/xe_trace.h
@@ -426,11 +426,6 @@ DEFINE_EVENT(xe_vma, xe_vma_acc,
 	     TP_ARGS(vma)
 );
 
-DEFINE_EVENT(xe_vma, xe_vma_fail,
-	     TP_PROTO(struct xe_vma *vma),
-	     TP_ARGS(vma)
-);
-
 DEFINE_EVENT(xe_vma, xe_vma_bind,
 	     TP_PROTO(struct xe_vma *vma),
 	     TP_ARGS(vma)
@@ -544,6 +539,11 @@ DEFINE_EVENT(xe_vm, xe_vm_rebind_worker_exit,
 	     TP_ARGS(vm)
 );
 
+DEFINE_EVENT(xe_vm, xe_vm_ops_fail,
+	     TP_PROTO(struct xe_vm *vm),
+	     TP_ARGS(vm)
+);
+
 /* GuC */
 DECLARE_EVENT_CLASS(xe_guc_ct_flow_control,
 		    TP_PROTO(u32 _head, u32 _tail, u32 size, u32 space, u32 len),
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 551048bff9ce..a205a72f411b 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -2476,6 +2476,38 @@ static int vm_bind_ioctl_ops_lock_and_prep(struct drm_exec *exec,
 	return 0;
 }
 
+static void op_trace(struct xe_vma_op *op)
+{
+	switch (op->base.op) {
+	case DRM_GPUVA_OP_MAP:
+		trace_xe_vma_bind(op->map.vma);
+		break;
+	case DRM_GPUVA_OP_REMAP:
+		trace_xe_vma_unbind(gpuva_to_vma(op->base.remap.unmap->va));
+		if (op->remap.prev)
+			trace_xe_vma_bind(op->remap.prev);
+		if (op->remap.next)
+			trace_xe_vma_bind(op->remap.next);
+		break;
+	case DRM_GPUVA_OP_UNMAP:
+		trace_xe_vma_unbind(gpuva_to_vma(op->base.unmap.va));
+		break;
+	case DRM_GPUVA_OP_PREFETCH:
+		trace_xe_vma_bind(gpuva_to_vma(op->base.prefetch.va));
+		break;
+	default:
+		XE_WARN_ON("NOT POSSIBLE");
+	}
+}
+
+static void trace_xe_vm_ops_execute(struct xe_vma_ops *vops)
+{
+	struct xe_vma_op *op;
+
+	list_for_each_entry(op, &vops->list, link)
+		op_trace(op);
+}
+
 static int vm_ops_setup_tile_args(struct xe_vm *vm, struct xe_vma_ops *vops)
 {
 	struct xe_exec_queue *q = vops->q;
@@ -2519,8 +2551,10 @@ static struct dma_fence *ops_execute(struct xe_vm *vm,
 	if (number_tiles > 1) {
 		fences = kmalloc_array(number_tiles, sizeof(*fences),
 				       GFP_KERNEL);
-		if (!fences)
-			return ERR_PTR(-ENOMEM);
+		if (!fences) {
+			fence = ERR_PTR(-ENOMEM);
+			goto err_trace;
+		}
 	}
 
 	for_each_tile(tile, vm->xe, id) {
@@ -2534,6 +2568,8 @@ static struct dma_fence *ops_execute(struct xe_vm *vm,
 		}
 	}
 
+	trace_xe_vm_ops_execute(vops);
+
 	for_each_tile(tile, vm->xe, id) {
 		if (!vops->pt_update_ops[id].num_ops)
 			continue;
@@ -2580,6 +2616,8 @@ static struct dma_fence *ops_execute(struct xe_vm *vm,
 	kfree(fences);
 	kfree(cf);
 
+err_trace:
+	trace_xe_vm_ops_fail(vm);
 	return fence;
 }
 
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH v3 5/5] drm/xe: Update PT layer with better error handling
  2024-05-29 18:31 [PATCH v3 0/5] Convert multiple bind ops to 1 job Matthew Brost
                   ` (3 preceding siblings ...)
  2024-05-29 18:31 ` [PATCH v3 4/5] drm/xe: Update VM trace events Matthew Brost
@ 2024-05-29 18:31 ` Matthew Brost
  2024-05-29 19:26 ` ✓ CI.Patch_applied: success for Convert multiple bind ops to 1 job (rev3) Patchwork
                   ` (7 subsequent siblings)
  12 siblings, 0 replies; 16+ messages in thread
From: Matthew Brost @ 2024-05-29 18:31 UTC (permalink / raw)
  To: intel-xe; +Cc: Matthew Brost

Update the PT layer so that if a memory allocation for a PTE fails, the
error can be propagated to the user without requiring the VM to be killed.

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/xe/xe_pt.c | 208 ++++++++++++++++++++++++++++---------
 1 file changed, 160 insertions(+), 48 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
index 4353a3bdf6c8..e4dd385ee954 100644
--- a/drivers/gpu/drm/xe/xe_pt.c
+++ b/drivers/gpu/drm/xe/xe_pt.c
@@ -845,19 +845,27 @@ xe_vm_populate_pgtable(struct xe_migrate_pt_update *pt_update, struct xe_tile *t
 	}
 }
 
-static void xe_pt_abort_bind(struct xe_vma *vma,
-			     struct xe_vm_pgtable_update *entries,
-			     u32 num_entries)
+static void xe_pt_cancel_bind(struct xe_vma *vma,
+			      struct xe_vm_pgtable_update *entries,
+			      u32 num_entries)
 {
 	u32 i, j;
 
 	for (i = 0; i < num_entries; i++) {
-		if (!entries[i].pt_entries)
+		struct xe_pt *pt = entries[i].pt;
+
+		if (!pt)
 			continue;
 
-		for (j = 0; j < entries[i].qwords; j++)
-			xe_pt_destroy(entries[i].pt_entries[j].pt, xe_vma_vm(vma)->flags, NULL);
+		if (pt->level) {
+			for (j = 0; j < entries[i].qwords; j++)
+				xe_pt_destroy(entries[i].pt_entries[j].pt,
+					      xe_vma_vm(vma)->flags, NULL);
+		}
+
 		kfree(entries[i].pt_entries);
+		entries[i].pt_entries = NULL;
+		entries[i].qwords = 0;
 	}
 }
 
@@ -873,10 +881,61 @@ static void xe_pt_commit_locks_assert(struct xe_vma *vma)
 	xe_vm_assert_held(vm);
 }
 
-static void xe_pt_commit_bind(struct xe_vma *vma,
-			      struct xe_vm_pgtable_update *entries,
-			      u32 num_entries, bool rebind,
-			      struct llist_head *deferred)
+static void xe_pt_commit(struct xe_vma *vma,
+			 struct xe_vm_pgtable_update *entries,
+			 u32 num_entries, struct llist_head *deferred)
+{
+	u32 i, j;
+
+	xe_pt_commit_locks_assert(vma);
+
+	for (i = 0; i < num_entries; i++) {
+		struct xe_pt *pt = entries[i].pt;
+
+		if (!pt->level)
+			continue;
+
+		for (j = 0; j < entries[i].qwords; j++) {
+			struct xe_pt *oldpte = entries[i].pt_entries[j].pt;
+
+			xe_pt_destroy(oldpte, xe_vma_vm(vma)->flags, deferred);
+		}
+	}
+}
+
+static void xe_pt_abort_bind(struct xe_vma *vma,
+			     struct xe_vm_pgtable_update *entries,
+			     u32 num_entries, bool rebind)
+{
+	int i, j;
+
+	xe_pt_commit_locks_assert(vma);
+
+	for (i = num_entries - 1; i >= 0; --i) {
+		struct xe_pt *pt = entries[i].pt;
+		struct xe_pt_dir *pt_dir;
+
+		if (!rebind)
+			pt->num_live -= entries[i].qwords;
+
+		if (!pt->level)
+			continue;
+
+		pt_dir = as_xe_pt_dir(pt);
+		for (j = 0; j < entries[i].qwords; j++) {
+			u32 j_ = j + entries[i].ofs;
+			struct xe_pt *newpte = xe_pt_entry(pt_dir, j_);
+			struct xe_pt *oldpte = entries[i].pt_entries[j].pt;
+
+			pt_dir->children[j_] = oldpte ? &oldpte->base : 0;
+			xe_pt_destroy(newpte, xe_vma_vm(vma)->flags, NULL);
+		}
+	}
+}
+
+static void xe_pt_commit_prepare_bind(struct xe_vma *vma,
+				      struct xe_vm_pgtable_update *entries,
+				      u32 num_entries, bool rebind)
 {
 	u32 i, j;
 
@@ -896,12 +955,13 @@ static void xe_pt_commit_bind(struct xe_vma *vma,
 		for (j = 0; j < entries[i].qwords; j++) {
 			u32 j_ = j + entries[i].ofs;
 			struct xe_pt *newpte = entries[i].pt_entries[j].pt;
+			struct xe_pt *oldpte = NULL;
 
 			if (xe_pt_entry(pt_dir, j_))
-				xe_pt_destroy(xe_pt_entry(pt_dir, j_),
-					      xe_vma_vm(vma)->flags, deferred);
+				oldpte = xe_pt_entry(pt_dir, j_);
 
 			pt_dir->children[j_] = &newpte->base;
+			entries[i].pt_entries[j].pt = oldpte;
 		}
 	}
 }
@@ -925,8 +985,6 @@ xe_pt_prepare_bind(struct xe_tile *tile, struct xe_vma *vma,
 	err = xe_pt_stage_bind(tile, vma, entries, num_entries);
 	if (!err)
 		xe_tile_assert(tile, *num_entries);
-	else /* abort! */
-		xe_pt_abort_bind(vma, entries, *num_entries);
 
 	return err;
 }
@@ -1447,7 +1505,7 @@ xe_pt_stage_unbind_post_descend(struct xe_ptw *parent, pgoff_t offset,
 				     &end_offset))
 		return 0;
 
-	(void)xe_pt_new_shared(&xe_walk->wupd, xe_child, offset, false);
+	(void)xe_pt_new_shared(&xe_walk->wupd, xe_child, offset, true);
 	xe_walk->wupd.updates[level].update->qwords = end_offset - offset;
 
 	return 0;
@@ -1515,32 +1573,57 @@ xe_migrate_clear_pgtable_callback(struct xe_migrate_pt_update *pt_update,
 		memset64(ptr, empty, num_qwords);
 }
 
+static void xe_pt_abort_unbind(struct xe_vma *vma,
+			       struct xe_vm_pgtable_update *entries,
+			       u32 num_entries)
+{
+	int j, i;
+
+	xe_pt_commit_locks_assert(vma);
+
+	for (j = num_entries - 1; j >= 0; --j) {
+		struct xe_vm_pgtable_update *entry = &entries[j];
+		struct xe_pt *pt = entry->pt;
+		struct xe_pt_dir *pt_dir = as_xe_pt_dir(pt);
+
+		pt->num_live += entry->qwords;
+
+		if (!pt->level)
+			continue;
+
+		for (i = entry->ofs; i < entry->ofs + entry->qwords; i++)
+			pt_dir->children[i] =
+				entries[j].pt_entries[i - entry->ofs].pt ?
+				&entries[j].pt_entries[i - entry->ofs].pt->base : 0;
+	}
+}
+
 static void
-xe_pt_commit_unbind(struct xe_vma *vma,
-		    struct xe_vm_pgtable_update *entries, u32 num_entries,
-		    struct llist_head *deferred)
+xe_pt_commit_prepare_unbind(struct xe_vma *vma,
+			    struct xe_vm_pgtable_update *entries,
+			    u32 num_entries)
 {
-	u32 j;
+	int j, i;
 
 	xe_pt_commit_locks_assert(vma);
 
 	for (j = 0; j < num_entries; ++j) {
 		struct xe_vm_pgtable_update *entry = &entries[j];
 		struct xe_pt *pt = entry->pt;
+		struct xe_pt_dir *pt_dir;
 
 		pt->num_live -= entry->qwords;
-		if (pt->level) {
-			struct xe_pt_dir *pt_dir = as_xe_pt_dir(pt);
-			u32 i;
-
-			for (i = entry->ofs; i < entry->ofs + entry->qwords;
-			     i++) {
-				if (xe_pt_entry(pt_dir, i))
-					xe_pt_destroy(xe_pt_entry(pt_dir, i),
-						      xe_vma_vm(vma)->flags, deferred);
+		if (!pt->level)
+			continue;
 
-				pt_dir->children[i] = NULL;
-			}
+		pt_dir = as_xe_pt_dir(pt);
+		for (i = entry->ofs; i < entry->ofs + entry->qwords; i++) {
+			if (xe_pt_entry(pt_dir, i))
+				entries[j].pt_entries[i - entry->ofs].pt =
+					xe_pt_entry(pt_dir, i);
+			else
+				entries[j].pt_entries[i - entry->ofs].pt = NULL;
+			pt_dir->children[i] = NULL;
 		}
 	}
 }
@@ -1586,7 +1669,6 @@ static int bind_op_prepare(struct xe_vm *vm, struct xe_tile *tile,
 {
 	u32 current_op = pt_update_ops->current_op;
 	struct xe_vm_pgtable_update_op *pt_op = &pt_update_ops->ops[current_op];
-	struct llist_head *deferred = &pt_update_ops->deferred;
 	int err;
 
 	xe_bo_assert_held(xe_vma_bo(vma));
@@ -1633,11 +1715,12 @@ static int bind_op_prepare(struct xe_vm *vm, struct xe_tile *tile,
 			(!pt_op->rebind && vm->scratch_pt[tile->id] &&
 			 xe_vm_in_preempt_fence_mode(vm));
 
-		/* FIXME: Don't commit right away */
 		vma->tile_staged |= BIT(tile->id);
 		pt_op->vma = vma;
-		xe_pt_commit_bind(vma, pt_op->entries, pt_op->num_entries,
-				  pt_op->rebind, deferred);
+		xe_pt_commit_prepare_bind(vma, pt_op->entries,
+					  pt_op->num_entries, pt_op->rebind);
+	} else {
+		xe_pt_cancel_bind(vma, pt_op->entries, pt_op->num_entries);
 	}
 
 	return err;
@@ -1649,7 +1732,6 @@ static int unbind_op_prepare(struct xe_tile *tile,
 {
 	u32 current_op = pt_update_ops->current_op;
 	struct xe_vm_pgtable_update_op *pt_op = &pt_update_ops->ops[current_op];
-	struct llist_head *deferred = &pt_update_ops->deferred;
 	int err;
 
 	if (!((vma->tile_present | vma->tile_staged) & BIT(tile->id)))
@@ -1685,9 +1767,7 @@ static int unbind_op_prepare(struct xe_tile *tile,
 	pt_update_ops->needs_userptr_lock |= xe_vma_is_userptr(vma);
 	pt_update_ops->needs_invalidation = true;
 
-	/* FIXME: Don't commit right away */
-	xe_pt_commit_unbind(vma, pt_op->entries, pt_op->num_entries,
-			    deferred);
+	xe_pt_commit_prepare_unbind(vma, pt_op->entries, pt_op->num_entries);
 
 	return 0;
 }
@@ -1908,7 +1988,7 @@ xe_pt_update_ops_run(struct xe_tile *tile, struct xe_vma_ops *vops)
 	struct invalidation_fence *ifence = NULL;
 	struct xe_range_fence *rfence;
 	struct xe_vma_op *op;
-	int err = 0;
+	int err = 0, i;
 	struct xe_migrate_pt_update update = {
 		.ops = pt_update_ops->needs_userptr_lock ?
 			&userptr_migrate_ops :
@@ -1928,8 +2008,10 @@ xe_pt_update_ops_run(struct xe_tile *tile, struct xe_vma_ops *vops)
 
 	if (pt_update_ops->needs_invalidation) {
 		ifence = kzalloc(sizeof(*ifence), GFP_KERNEL);
-		if (!ifence)
-			return ERR_PTR(-ENOMEM);
+		if (!ifence) {
+			err = -ENOMEM;
+			goto kill_vm_tile1;
+		}
 	}
 
 	rfence = kzalloc(sizeof(*rfence), GFP_KERNEL);
@@ -1944,6 +2026,15 @@ xe_pt_update_ops_run(struct xe_tile *tile, struct xe_vma_ops *vops)
 		goto free_rfence;
 	}
 
+	/* Point of no return - VM killed if failure after this */
+	for (i = 0; i < pt_update_ops->current_op; ++i) {
+		struct xe_vm_pgtable_update_op *pt_op = &pt_update_ops->ops[i];
+
+		xe_pt_commit(pt_op->vma, pt_op->entries,
+			     pt_op->num_entries, &pt_update_ops->deferred);
+		pt_op->vma = NULL;	/* skip in xe_pt_update_ops_abort */
+	}
+
 	err = xe_range_fence_insert(&vm->rftree[tile->id], rfence,
 				    &xe_range_fence_kfree_ops,
 				    pt_update_ops->start,
@@ -1979,10 +2070,15 @@ xe_pt_update_ops_run(struct xe_tile *tile, struct xe_vma_ops *vops)
 	if (pt_update_ops->needs_userptr_lock)
 		up_read(&vm->userptr.notifier_lock);
 	dma_fence_put(fence);
+	if (!tile->id)
+		xe_vm_kill(vops->vm, false);
 free_rfence:
 	kfree(rfence);
 free_ifence:
 	kfree(ifence);
+kill_vm_tile1:
+	if (err != -EAGAIN && tile->id)
+		xe_vm_kill(vops->vm, false);
 
 	return ERR_PTR(err);
 }
@@ -2003,12 +2099,10 @@ void xe_pt_update_ops_fini(struct xe_tile *tile, struct xe_vma_ops *vops)
 	lockdep_assert_held(&vops->vm->lock);
 	xe_vm_assert_held(vops->vm);
 
-	/* FIXME: Not 100% correct */
-	for (i = 0; i < pt_update_ops->num_ops; ++i) {
+	for (i = 0; i < pt_update_ops->current_op; ++i) {
 		struct xe_vm_pgtable_update_op *pt_op = &pt_update_ops->ops[i];
 
-		if (pt_op->bind)
-			xe_pt_free_bind(pt_op->entries, pt_op->num_entries);
+		xe_pt_free_bind(pt_op->entries, pt_op->num_entries);
 	}
 	xe_bo_put_commit(&vops->pt_update_ops[tile->id].deferred);
 }
@@ -2022,10 +2116,28 @@ void xe_pt_update_ops_fini(struct xe_tile *tile, struct xe_vma_ops *vops)
  */
 void xe_pt_update_ops_abort(struct xe_tile *tile, struct xe_vma_ops *vops)
 {
+	struct xe_vm_pgtable_update_ops *pt_update_ops =
+		&vops->pt_update_ops[tile->id];
+	int i;
+
 	lockdep_assert_held(&vops->vm->lock);
 	xe_vm_assert_held(vops->vm);
 
-	/* FIXME: Just kill VM for now + cleanup PTs */
+	for (i = pt_update_ops->num_ops - 1; i >= 0; --i) {
+		struct xe_vm_pgtable_update_op *pt_op =
+			&pt_update_ops->ops[i];
+
+		if (!pt_op->vma || i >= pt_update_ops->current_op)
+			continue;
+
+		if (pt_op->bind)
+			xe_pt_abort_bind(pt_op->vma, pt_op->entries,
+					 pt_op->num_entries,
+					 pt_op->rebind);
+		else
+			xe_pt_abort_unbind(pt_op->vma, pt_op->entries,
+					   pt_op->num_entries);
+	}
+
 	xe_bo_put_commit(&vops->pt_update_ops[tile->id].deferred);
-	xe_vm_kill(vops->vm, false);
- }
+}
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* ✓ CI.Patch_applied: success for Convert multiple bind ops to 1 job (rev3)
  2024-05-29 18:31 [PATCH v3 0/5] Convert multiple bind ops to 1 job Matthew Brost
                   ` (4 preceding siblings ...)
  2024-05-29 18:31 ` [PATCH v3 5/5] drm/xe: Update PT layer with better error handling Matthew Brost
@ 2024-05-29 19:26 ` Patchwork
  2024-05-29 19:26 ` ✗ CI.checkpatch: warning " Patchwork
                   ` (6 subsequent siblings)
  12 siblings, 0 replies; 16+ messages in thread
From: Patchwork @ 2024-05-29 19:26 UTC (permalink / raw)
  To: Matthew Brost; +Cc: intel-xe

== Series Details ==

Series: Convert multiple bind ops to 1 job (rev3)
URL   : https://patchwork.freedesktop.org/series/133034/
State : success

== Summary ==

=== Applying kernel patches on branch 'drm-tip' with base: ===
Base commit: 55d6179b96e0 drm-tip: 2024y-05m-29d-17h-29m-17s UTC integration manifest
=== git am output follows ===
Applying: drm/xe: s/xe_tile_migrate_engine/xe_tile_migrate_exec_queue
Applying: drm/xe: Add xe_vm_pgtable_update_op to xe_vma_ops
Applying: drm/xe: Convert multiple bind ops into single job
Applying: drm/xe: Update VM trace events
Applying: drm/xe: Update PT layer with better error handling



^ permalink raw reply	[flat|nested] 16+ messages in thread

* ✗ CI.checkpatch: warning for Convert multiple bind ops to 1 job (rev3)
  2024-05-29 18:31 [PATCH v3 0/5] Convert multiple bind ops to 1 job Matthew Brost
                   ` (5 preceding siblings ...)
  2024-05-29 19:26 ` ✓ CI.Patch_applied: success for Convert multiple bind ops to 1 job (rev3) Patchwork
@ 2024-05-29 19:26 ` Patchwork
  2024-05-29 19:27 ` ✓ CI.KUnit: success " Patchwork
                   ` (5 subsequent siblings)
  12 siblings, 0 replies; 16+ messages in thread
From: Patchwork @ 2024-05-29 19:26 UTC (permalink / raw)
  To: Matthew Brost; +Cc: intel-xe

== Series Details ==

Series: Convert multiple bind ops to 1 job (rev3)
URL   : https://patchwork.freedesktop.org/series/133034/
State : warning

== Summary ==

+ KERNEL=/kernel
+ git clone https://gitlab.freedesktop.org/drm/maintainer-tools mt
Cloning into 'mt'...
warning: redirecting to https://gitlab.freedesktop.org/drm/maintainer-tools.git/
+ git -C mt rev-list -n1 origin/master
51ce9f6cd981d42d7467409d7dbc559a450abc1e
+ cd /kernel
+ git config --global --add safe.directory /kernel
+ git log -n1
commit d6cd9643373ccaa0ce77f96d89dc0e15849260f4
Author: Matthew Brost <matthew.brost@intel.com>
Date:   Wed May 29 11:31:48 2024 -0700

    drm/xe: Update PT layer with better error handling
    
    Update the PT layer so that if a memory allocation for a PTE fails, the
    error can be propagated to the user without requiring the VM to be killed.
    
    Signed-off-by: Matthew Brost <matthew.brost@intel.com>
+ /mt/dim checkpatch 55d6179b96e0390025f2ba101c03b94b50cab7a1 drm-intel
1e56f3b729a5 drm/xe: s/xe_tile_migrate_engine/xe_tile_migrate_exec_queue
0ad22da15204 drm/xe: Add xe_vm_pgtable_update_op to xe_vma_ops
1667d1a4058d drm/xe: Convert multiple bind ops into single job
-:13: WARNING:TYPO_SPELLING: 'implemenation' may be misspelled - perhaps 'implementation'?
#13: 
The implemenation is roughly:
    ^^^^^^^^^^^^^

-:875: WARNING:SUSPECT_CODE_INDENT: suspect code indent for conditional statements (8, 15)
#875: FILE: drivers/gpu/drm/xe/xe_pt.c:1163:
+	if (uvma->userptr.initial_bind && !xe_vm_in_fault_mode(vm))
+               return 0;

-:876: ERROR:CODE_INDENT: code indent should use tabs where possible
#876: FILE: drivers/gpu/drm/xe/xe_pt.c:1164:
+               return 0;$

-:876: WARNING:LEADING_SPACE: please, no spaces at the start of a line
#876: FILE: drivers/gpu/drm/xe/xe_pt.c:1164:
+               return 0;$

-:1657: WARNING:BLOCK_COMMENT_STYLE: Block comments should align the * on each line
#1657: FILE: drivers/gpu/drm/xe/xe_pt.c:1898:
+ * installing job fence in various places.
+  *

-:1658: WARNING:BLOCK_COMMENT_STYLE: Block comments should align the * on each line
#1658: FILE: drivers/gpu/drm/xe/xe_pt.c:1899:
+  *
+ * Return: fence on success, negative ERR_PTR on error.

-:1790: WARNING:LEADING_SPACE: please, no spaces at the start of a line
#1790: FILE: drivers/gpu/drm/xe/xe_pt.c:2031:
+ }$

total: 1 errors, 6 warnings, 0 checks, 2522 lines checked
5750d688b828 drm/xe: Update VM trace events
d6cd9643373c drm/xe: Update PT layer with better error handling



^ permalink raw reply	[flat|nested] 16+ messages in thread

* ✓ CI.KUnit: success for Convert multiple bind ops to 1 job (rev3)
  2024-05-29 18:31 [PATCH v3 0/5] Convert multiple bind ops to 1 job Matthew Brost
                   ` (6 preceding siblings ...)
  2024-05-29 19:26 ` ✗ CI.checkpatch: warning " Patchwork
@ 2024-05-29 19:27 ` Patchwork
  2024-05-29 19:39 ` ✓ CI.Build: " Patchwork
                   ` (4 subsequent siblings)
  12 siblings, 0 replies; 16+ messages in thread
From: Patchwork @ 2024-05-29 19:27 UTC (permalink / raw)
  To: Matthew Brost; +Cc: intel-xe

== Series Details ==

Series: Convert multiple bind ops to 1 job (rev3)
URL   : https://patchwork.freedesktop.org/series/133034/
State : success

== Summary ==

+ trap cleanup EXIT
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/xe/.kunitconfig
[19:26:44] Configuring KUnit Kernel ...
Generating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[19:26:49] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make ARCH=um O=.kunit --jobs=48
../lib/iomap.c:156:5: warning: no previous prototype for ‘ioread64_lo_hi’ [-Wmissing-prototypes]
  156 | u64 ioread64_lo_hi(const void __iomem *addr)
      |     ^~~~~~~~~~~~~~
../lib/iomap.c:163:5: warning: no previous prototype for ‘ioread64_hi_lo’ [-Wmissing-prototypes]
  163 | u64 ioread64_hi_lo(const void __iomem *addr)
      |     ^~~~~~~~~~~~~~
../lib/iomap.c:170:5: warning: no previous prototype for ‘ioread64be_lo_hi’ [-Wmissing-prototypes]
  170 | u64 ioread64be_lo_hi(const void __iomem *addr)
      |     ^~~~~~~~~~~~~~~~
../lib/iomap.c:178:5: warning: no previous prototype for ‘ioread64be_hi_lo’ [-Wmissing-prototypes]
  178 | u64 ioread64be_hi_lo(const void __iomem *addr)
      |     ^~~~~~~~~~~~~~~~
../lib/iomap.c:264:6: warning: no previous prototype for ‘iowrite64_lo_hi’ [-Wmissing-prototypes]
  264 | void iowrite64_lo_hi(u64 val, void __iomem *addr)
      |      ^~~~~~~~~~~~~~~
../lib/iomap.c:272:6: warning: no previous prototype for ‘iowrite64_hi_lo’ [-Wmissing-prototypes]
  272 | void iowrite64_hi_lo(u64 val, void __iomem *addr)
      |      ^~~~~~~~~~~~~~~
../lib/iomap.c:280:6: warning: no previous prototype for ‘iowrite64be_lo_hi’ [-Wmissing-prototypes]
  280 | void iowrite64be_lo_hi(u64 val, void __iomem *addr)
      |      ^~~~~~~~~~~~~~~~~
../lib/iomap.c:288:6: warning: no previous prototype for ‘iowrite64be_hi_lo’ [-Wmissing-prototypes]
  288 | void iowrite64be_hi_lo(u64 val, void __iomem *addr)
      |      ^~~~~~~~~~~~~~~~~

[19:27:15] Starting KUnit Kernel (1/1)...
[19:27:15] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[19:27:15] =================== guc_dbm (7 subtests) ===================
[19:27:15] [PASSED] test_empty
[19:27:15] [PASSED] test_default
[19:27:15] ======================== test_size  ========================
[19:27:15] [PASSED] 4
[19:27:15] [PASSED] 8
[19:27:15] [PASSED] 32
[19:27:15] [PASSED] 256
[19:27:15] ==================== [PASSED] test_size ====================
[19:27:15] ======================= test_reuse  ========================
[19:27:15] [PASSED] 4
[19:27:15] [PASSED] 8
[19:27:15] [PASSED] 32
[19:27:15] [PASSED] 256
[19:27:15] =================== [PASSED] test_reuse ====================
[19:27:15] =================== test_range_overlap  ====================
[19:27:15] [PASSED] 4
[19:27:15] [PASSED] 8
[19:27:15] [PASSED] 32
[19:27:15] [PASSED] 256
[19:27:15] =============== [PASSED] test_range_overlap ================
[19:27:15] =================== test_range_compact  ====================
[19:27:15] [PASSED] 4
[19:27:15] [PASSED] 8
[19:27:15] [PASSED] 32
[19:27:15] [PASSED] 256
[19:27:15] =============== [PASSED] test_range_compact ================
[19:27:15] ==================== test_range_spare  =====================
[19:27:15] [PASSED] 4
[19:27:15] [PASSED] 8
[19:27:15] [PASSED] 32
[19:27:15] [PASSED] 256
[19:27:15] ================ [PASSED] test_range_spare =================
[19:27:15] ===================== [PASSED] guc_dbm =====================
[19:27:15] =================== guc_idm (6 subtests) ===================
[19:27:15] [PASSED] bad_init
[19:27:15] [PASSED] no_init
[19:27:15] [PASSED] init_fini
[19:27:15] [PASSED] check_used
[19:27:15] [PASSED] check_quota
[19:27:15] [PASSED] check_all
[19:27:15] ===================== [PASSED] guc_idm =====================
[19:27:15] ================== no_relay (3 subtests) ===================
[19:27:15] [PASSED] xe_drops_guc2pf_if_not_ready
[19:27:15] [PASSED] xe_drops_guc2vf_if_not_ready
[19:27:15] [PASSED] xe_rejects_send_if_not_ready
[19:27:15] ==================== [PASSED] no_relay =====================
[19:27:15] ================== pf_relay (14 subtests) ==================
[19:27:15] [PASSED] pf_rejects_guc2pf_too_short
[19:27:15] [PASSED] pf_rejects_guc2pf_too_long
[19:27:15] [PASSED] pf_rejects_guc2pf_no_payload
[19:27:15] [PASSED] pf_fails_no_payload
[19:27:15] [PASSED] pf_fails_bad_origin
[19:27:15] [PASSED] pf_fails_bad_type
[19:27:15] [PASSED] pf_txn_reports_error
[19:27:15] [PASSED] pf_txn_sends_pf2guc
[19:27:15] [PASSED] pf_sends_pf2guc
[19:27:15] [SKIPPED] pf_loopback_nop
[19:27:15] [SKIPPED] pf_loopback_echo
[19:27:15] [SKIPPED] pf_loopback_fail
[19:27:15] [SKIPPED] pf_loopback_busy
[19:27:15] [SKIPPED] pf_loopback_retry
[19:27:15] ==================== [PASSED] pf_relay =====================
[19:27:15] ================== vf_relay (3 subtests) ===================
[19:27:15] [PASSED] vf_rejects_guc2vf_too_short
[19:27:15] [PASSED] vf_rejects_guc2vf_too_long
[19:27:15] [PASSED] vf_rejects_guc2vf_no_payload
[19:27:15] ==================== [PASSED] vf_relay =====================
[19:27:15] ================= pf_service (11 subtests) =================
[19:27:15] [PASSED] pf_negotiate_any
[19:27:15] [PASSED] pf_negotiate_base_match
[19:27:15] [PASSED] pf_negotiate_base_newer
[19:27:15] [PASSED] pf_negotiate_base_next
[19:27:15] [SKIPPED] pf_negotiate_base_older
[19:27:15] [PASSED] pf_negotiate_base_prev
[19:27:15] [PASSED] pf_negotiate_latest_match
[19:27:15] [PASSED] pf_negotiate_latest_newer
[19:27:15] [PASSED] pf_negotiate_latest_next
[19:27:15] [SKIPPED] pf_negotiate_latest_older
[19:27:15] [SKIPPED] pf_negotiate_latest_prev
[19:27:15] =================== [PASSED] pf_service ====================
[19:27:15] ===================== lmtt (1 subtest) =====================
[19:27:15] ======================== test_ops  =========================
[19:27:15] [PASSED] 2-level
[19:27:15] [PASSED] multi-level
[19:27:15] ==================== [PASSED] test_ops =====================
[19:27:15] ====================== [PASSED] lmtt =======================
[19:27:15] ==================== xe_bo (2 subtests) ====================
[19:27:15] [SKIPPED] xe_ccs_migrate_kunit
[19:27:15] [SKIPPED] xe_bo_evict_kunit
[19:27:15] ===================== [SKIPPED] xe_bo ======================
[19:27:15] ================== xe_dma_buf (1 subtest) ==================
[19:27:15] [SKIPPED] xe_dma_buf_kunit
[19:27:15] =================== [SKIPPED] xe_dma_buf ===================
[19:27:15] ================== xe_migrate (1 subtest) ==================
[19:27:15] [SKIPPED] xe_migrate_sanity_kunit
[19:27:15] =================== [SKIPPED] xe_migrate ===================
[19:27:15] =================== xe_mocs (2 subtests) ===================
[19:27:15] [SKIPPED] xe_live_mocs_kernel_kunit
[19:27:15] [SKIPPED] xe_live_mocs_reset_kunit
[19:27:15] ==================== [SKIPPED] xe_mocs =====================
[19:27:15] ==================== args (11 subtests) ====================
[19:27:15] [PASSED] count_args_test
[19:27:15] [PASSED] call_args_example
[19:27:15] [PASSED] call_args_test
[19:27:15] [PASSED] drop_first_arg_example
[19:27:15] [PASSED] drop_first_arg_test
[19:27:15] [PASSED] first_arg_example
[19:27:15] [PASSED] first_arg_test
[19:27:15] [PASSED] last_arg_example
[19:27:15] [PASSED] last_arg_test
[19:27:15] [PASSED] pick_arg_example
[19:27:15] [PASSED] sep_comma_example
[19:27:15] ====================== [PASSED] args =======================
[19:27:15] =================== xe_pci (2 subtests) ====================
[19:27:15] [PASSED] xe_gmdid_graphics_ip
[19:27:15] [PASSED] xe_gmdid_media_ip
[19:27:15] ===================== [PASSED] xe_pci ======================
[19:27:15] ==================== xe_rtp (1 subtest) ====================
[19:27:15] ================== xe_rtp_process_tests  ===================
[19:27:15] [PASSED] coalesce-same-reg
[19:27:15] [PASSED] no-match-no-add
[19:27:15] [PASSED] no-match-no-add-multiple-rules
[19:27:15] [PASSED] two-regs-two-entries
[19:27:15] [PASSED] clr-one-set-other
[19:27:15] [PASSED] set-field
[19:27:15] [PASSED] conflict-duplicate
[19:27:15] [PASSED] conflict-not-disjoint
[19:27:15] [PASSED] conflict-reg-type
[19:27:15] ============== [PASSED] xe_rtp_process_tests ===============
stty: 'standard input': Inappropriate ioctl for device
[19:27:15] ===================== [PASSED] xe_rtp ======================
[19:27:15] ==================== xe_wa (1 subtest) =====================
[19:27:15] ======================== xe_wa_gt  =========================
[19:27:15] [PASSED] TIGERLAKE (B0)
[19:27:15] [PASSED] DG1 (A0)
[19:27:15] [PASSED] DG1 (B0)
[19:27:15] [PASSED] ALDERLAKE_S (A0)
[19:27:15] [PASSED] ALDERLAKE_S (B0)
[19:27:15] [PASSED] ALDERLAKE_S (C0)
[19:27:15] [PASSED] ALDERLAKE_S (D0)
[19:27:15] [PASSED] ALDERLAKE_P (A0)
[19:27:15] [PASSED] ALDERLAKE_P (B0)
[19:27:15] [PASSED] ALDERLAKE_P (C0)
[19:27:15] [PASSED] ALDERLAKE_S_RPLS (D0)
[19:27:15] [PASSED] ALDERLAKE_P_RPLU (E0)
[19:27:15] [PASSED] DG2_G10 (C0)
[19:27:15] [PASSED] DG2_G11 (B1)
[19:27:15] [PASSED] DG2_G12 (A1)
[19:27:15] [PASSED] METEORLAKE (g:A0, m:A0)
[19:27:15] [PASSED] METEORLAKE (g:A0, m:A0)
[19:27:15] [PASSED] METEORLAKE (g:A0, m:A0)
[19:27:15] [PASSED] LUNARLAKE (g:A0, m:A0)
[19:27:15] [PASSED] LUNARLAKE (g:B0, m:A0)
[19:27:15] ==================== [PASSED] xe_wa_gt =====================
[19:27:15] ====================== [PASSED] xe_wa ======================
[19:27:15] ============================================================
[19:27:15] Testing complete. Ran 109 tests: passed: 95, skipped: 14
[19:27:15] Elapsed time: 30.813s total, 4.267s configuring, 26.274s building, 0.221s running

+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/tests/.kunitconfig
[19:27:15] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[19:27:17] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make ARCH=um O=.kunit --jobs=48
../lib/iomap.c:156:5: warning: no previous prototype for ‘ioread64_lo_hi’ [-Wmissing-prototypes]
  156 | u64 ioread64_lo_hi(const void __iomem *addr)
      |     ^~~~~~~~~~~~~~
../lib/iomap.c:163:5: warning: no previous prototype for ‘ioread64_hi_lo’ [-Wmissing-prototypes]
  163 | u64 ioread64_hi_lo(const void __iomem *addr)
      |     ^~~~~~~~~~~~~~
../lib/iomap.c:170:5: warning: no previous prototype for ‘ioread64be_lo_hi’ [-Wmissing-prototypes]
  170 | u64 ioread64be_lo_hi(const void __iomem *addr)
      |     ^~~~~~~~~~~~~~~~
../lib/iomap.c:178:5: warning: no previous prototype for ‘ioread64be_hi_lo’ [-Wmissing-prototypes]
  178 | u64 ioread64be_hi_lo(const void __iomem *addr)
      |     ^~~~~~~~~~~~~~~~
../lib/iomap.c:264:6: warning: no previous prototype for ‘iowrite64_lo_hi’ [-Wmissing-prototypes]
  264 | void iowrite64_lo_hi(u64 val, void __iomem *addr)
      |      ^~~~~~~~~~~~~~~
../lib/iomap.c:272:6: warning: no previous prototype for ‘iowrite64_hi_lo’ [-Wmissing-prototypes]
  272 | void iowrite64_hi_lo(u64 val, void __iomem *addr)
      |      ^~~~~~~~~~~~~~~
../lib/iomap.c:280:6: warning: no previous prototype for ‘iowrite64be_lo_hi’ [-Wmissing-prototypes]
  280 | void iowrite64be_lo_hi(u64 val, void __iomem *addr)
      |      ^~~~~~~~~~~~~~~~~
../lib/iomap.c:288:6: warning: no previous prototype for ‘iowrite64be_hi_lo’ [-Wmissing-prototypes]
  288 | void iowrite64be_hi_lo(u64 val, void __iomem *addr)
      |      ^~~~~~~~~~~~~~~~~

[19:27:38] Starting KUnit Kernel (1/1)...
[19:27:38] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[19:27:38] ============ drm_test_pick_cmdline (2 subtests) ============
[19:27:38] [PASSED] drm_test_pick_cmdline_res_1920_1080_60
[19:27:38] =============== drm_test_pick_cmdline_named  ===============
[19:27:38] [PASSED] NTSC
[19:27:38] [PASSED] NTSC-J
[19:27:38] [PASSED] PAL
[19:27:38] [PASSED] PAL-M
[19:27:38] =========== [PASSED] drm_test_pick_cmdline_named ===========
[19:27:38] ============== [PASSED] drm_test_pick_cmdline ==============
[19:27:38] ================== drm_buddy (7 subtests) ==================
[19:27:38] [PASSED] drm_test_buddy_alloc_limit
[19:27:38] [PASSED] drm_test_buddy_alloc_optimistic
[19:27:38] [PASSED] drm_test_buddy_alloc_pessimistic
[19:27:38] [PASSED] drm_test_buddy_alloc_pathological
[19:27:38] [PASSED] drm_test_buddy_alloc_contiguous
[19:27:38] [PASSED] drm_test_buddy_alloc_clear
[19:27:38] [PASSED] drm_test_buddy_alloc_range_bias
[19:27:38] ==================== [PASSED] drm_buddy ====================
[19:27:38] ============= drm_cmdline_parser (40 subtests) =============
[19:27:38] [PASSED] drm_test_cmdline_force_d_only
[19:27:38] [PASSED] drm_test_cmdline_force_D_only_dvi
[19:27:38] [PASSED] drm_test_cmdline_force_D_only_hdmi
[19:27:38] [PASSED] drm_test_cmdline_force_D_only_not_digital
[19:27:38] [PASSED] drm_test_cmdline_force_e_only
[19:27:38] [PASSED] drm_test_cmdline_res
[19:27:38] [PASSED] drm_test_cmdline_res_vesa
[19:27:38] [PASSED] drm_test_cmdline_res_vesa_rblank
[19:27:38] [PASSED] drm_test_cmdline_res_rblank
[19:27:38] [PASSED] drm_test_cmdline_res_bpp
[19:27:38] [PASSED] drm_test_cmdline_res_refresh
[19:27:38] [PASSED] drm_test_cmdline_res_bpp_refresh
[19:27:38] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced
[19:27:38] [PASSED] drm_test_cmdline_res_bpp_refresh_margins
[19:27:38] [PASSED] drm_test_cmdline_res_bpp_refresh_force_off
[19:27:38] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on
[19:27:38] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_analog
[19:27:38] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_digital
[19:27:38] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced_margins_force_on
[19:27:38] [PASSED] drm_test_cmdline_res_margins_force_on
[19:27:38] [PASSED] drm_test_cmdline_res_vesa_margins
[19:27:38] [PASSED] drm_test_cmdline_name
[19:27:38] [PASSED] drm_test_cmdline_name_bpp
[19:27:38] [PASSED] drm_test_cmdline_name_option
[19:27:38] [PASSED] drm_test_cmdline_name_bpp_option
[19:27:38] [PASSED] drm_test_cmdline_rotate_0
[19:27:38] [PASSED] drm_test_cmdline_rotate_90
[19:27:38] [PASSED] drm_test_cmdline_rotate_180
[19:27:38] [PASSED] drm_test_cmdline_rotate_270
[19:27:38] [PASSED] drm_test_cmdline_hmirror
[19:27:38] [PASSED] drm_test_cmdline_vmirror
[19:27:38] [PASSED] drm_test_cmdline_margin_options
[19:27:38] [PASSED] drm_test_cmdline_multiple_options
[19:27:38] [PASSED] drm_test_cmdline_bpp_extra_and_option
[19:27:38] [PASSED] drm_test_cmdline_extra_and_option
[19:27:38] [PASSED] drm_test_cmdline_freestanding_options
[19:27:38] [PASSED] drm_test_cmdline_freestanding_force_e_and_options
[19:27:38] [PASSED] drm_test_cmdline_panel_orientation
[19:27:38] ================ drm_test_cmdline_invalid  =================
[19:27:38] [PASSED] margin_only
[19:27:38] [PASSED] interlace_only
[19:27:38] [PASSED] res_missing_x
[19:27:38] [PASSED] res_missing_y
[19:27:38] [PASSED] res_bad_y
[19:27:38] [PASSED] res_missing_y_bpp
[19:27:38] [PASSED] res_bad_bpp
[19:27:38] [PASSED] res_bad_refresh
[19:27:38] [PASSED] res_bpp_refresh_force_on_off
[19:27:38] [PASSED] res_invalid_mode
[19:27:38] [PASSED] res_bpp_wrong_place_mode
[19:27:38] [PASSED] name_bpp_refresh
[19:27:38] [PASSED] name_refresh
[19:27:38] [PASSED] name_refresh_wrong_mode
[19:27:38] [PASSED] name_refresh_invalid_mode
[19:27:38] [PASSED] rotate_multiple
[19:27:38] [PASSED] rotate_invalid_val
[19:27:38] [PASSED] rotate_truncated
[19:27:38] [PASSED] invalid_option
[19:27:38] [PASSED] invalid_tv_option
[19:27:38] [PASSED] truncated_tv_option
[19:27:38] ============ [PASSED] drm_test_cmdline_invalid =============
[19:27:38] =============== drm_test_cmdline_tv_options  ===============
[19:27:38] [PASSED] NTSC
[19:27:38] [PASSED] NTSC_443
[19:27:38] [PASSED] NTSC_J
[19:27:38] [PASSED] PAL
[19:27:38] [PASSED] PAL_M
[19:27:38] [PASSED] PAL_N
[19:27:38] [PASSED] SECAM
[19:27:38] =========== [PASSED] drm_test_cmdline_tv_options ===========
[19:27:38] =============== [PASSED] drm_cmdline_parser ================
[19:27:38] ========== drmm_connector_hdmi_init (19 subtests) ==========
[19:27:38] [PASSED] drm_test_connector_hdmi_init_valid
[19:27:38] [PASSED] drm_test_connector_hdmi_init_bpc_8
[19:27:38] [PASSED] drm_test_connector_hdmi_init_bpc_10
[19:27:38] [PASSED] drm_test_connector_hdmi_init_bpc_12
[19:27:38] [PASSED] drm_test_connector_hdmi_init_bpc_invalid
[19:27:38] [PASSED] drm_test_connector_hdmi_init_bpc_null
[19:27:38] [PASSED] drm_test_connector_hdmi_init_formats_empty
[19:27:38] [PASSED] drm_test_connector_hdmi_init_formats_no_rgb
[19:27:38] [PASSED] drm_test_connector_hdmi_init_null_ddc
[19:27:38] [PASSED] drm_test_connector_hdmi_init_null_product
[19:27:38] [PASSED] drm_test_connector_hdmi_init_null_vendor
[19:27:38] [PASSED] drm_test_connector_hdmi_init_product_length_exact
[19:27:38] [PASSED] drm_test_connector_hdmi_init_product_length_too_long
[19:27:38] [PASSED] drm_test_connector_hdmi_init_product_valid
[19:27:38] [PASSED] drm_test_connector_hdmi_init_vendor_length_exact
[19:27:38] [PASSED] drm_test_connector_hdmi_init_vendor_length_too_long
[19:27:38] [PASSED] drm_test_connector_hdmi_init_vendor_valid
[19:27:38] ========= drm_test_connector_hdmi_init_type_valid  =========
[19:27:38] [PASSED] HDMI-A
[19:27:38] [PASSED] HDMI-B
[19:27:38] ===== [PASSED] drm_test_connector_hdmi_init_type_valid =====
[19:27:38] ======== drm_test_connector_hdmi_init_type_invalid  ========
[19:27:38] [PASSED] Unknown
[19:27:38] [PASSED] VGA
[19:27:38] [PASSED] DVI-I
[19:27:38] [PASSED] DVI-D
[19:27:38] [PASSED] DVI-A
[19:27:38] [PASSED] Composite
[19:27:38] [PASSED] SVIDEO
[19:27:38] [PASSED] LVDS
[19:27:38] [PASSED] Component
[19:27:38] [PASSED] DIN
[19:27:38] [PASSED] DP
[19:27:38] [PASSED] TV
[19:27:38] [PASSED] eDP
[19:27:38] [PASSED] Virtual
[19:27:38] [PASSED] DSI
[19:27:38] [PASSED] DPI
[19:27:38] [PASSED] Writeback
[19:27:38] [PASSED] SPI
[19:27:38] [PASSED] USB
[19:27:38] ==== [PASSED] drm_test_connector_hdmi_init_type_invalid ====
[19:27:38] ============ [PASSED] drmm_connector_hdmi_init =============
[19:27:38] ============= drmm_connector_init (3 subtests) =============
[19:27:38] [PASSED] drm_test_drmm_connector_init
[19:27:38] [PASSED] drm_test_drmm_connector_init_null_ddc
[19:27:38] ========= drm_test_drmm_connector_init_type_valid  =========
[19:27:38] [PASSED] Unknown
[19:27:38] [PASSED] VGA
[19:27:38] [PASSED] DVI-I
[19:27:38] [PASSED] DVI-D
[19:27:38] [PASSED] DVI-A
[19:27:38] [PASSED] Composite
[19:27:38] [PASSED] SVIDEO
[19:27:38] [PASSED] LVDS
[19:27:38] [PASSED] Component
[19:27:38] [PASSED] DIN
[19:27:38] [PASSED] DP
[19:27:38] [PASSED] HDMI-A
[19:27:38] [PASSED] HDMI-B
[19:27:38] [PASSED] TV
[19:27:38] [PASSED] eDP
[19:27:38] [PASSED] Virtual
[19:27:38] [PASSED] DSI
[19:27:38] [PASSED] DPI
[19:27:38] [PASSED] Writeback
[19:27:38] [PASSED] SPI
[19:27:38] [PASSED] USB
[19:27:38] ===== [PASSED] drm_test_drmm_connector_init_type_valid =====
[19:27:38] =============== [PASSED] drmm_connector_init ===============
[19:27:38] = drm_connector_attach_broadcast_rgb_property (2 subtests) =
[19:27:38] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property
[19:27:38] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property_hdmi_connector
[19:27:38] === [PASSED] drm_connector_attach_broadcast_rgb_property ===
[19:27:38] ========== drm_get_tv_mode_from_name (2 subtests) ==========
[19:27:38] ========== drm_test_get_tv_mode_from_name_valid  ===========
[19:27:38] [PASSED] NTSC
[19:27:38] [PASSED] NTSC-443
[19:27:38] [PASSED] NTSC-J
[19:27:38] [PASSED] PAL
[19:27:38] [PASSED] PAL-M
[19:27:38] [PASSED] PAL-N
[19:27:38] [PASSED] SECAM
[19:27:38] ====== [PASSED] drm_test_get_tv_mode_from_name_valid =======
[19:27:38] [PASSED] drm_test_get_tv_mode_from_name_truncated
[19:27:38] ============ [PASSED] drm_get_tv_mode_from_name ============
[19:27:38] = drm_test_connector_hdmi_compute_mode_clock (12 subtests) =
[19:27:38] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb
[19:27:38] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc
[19:27:38] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc_vic_1
[19:27:38] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc
[19:27:38] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc_vic_1
[19:27:38] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_double
[19:27:38] = drm_test_connector_hdmi_compute_mode_clock_yuv420_valid  =
[19:27:38] [PASSED] VIC 96
[19:27:38] [PASSED] VIC 97
[19:27:38] [PASSED] VIC 101
[19:27:38] [PASSED] VIC 102
[19:27:38] [PASSED] VIC 106
[19:27:38] [PASSED] VIC 107
[19:27:38] === [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_valid ===
[19:27:38] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_10_bpc
[19:27:38] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_12_bpc
[19:27:38] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_8_bpc
[19:27:38] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_10_bpc
[19:27:38] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_12_bpc
[19:27:38] === [PASSED] drm_test_connector_hdmi_compute_mode_clock ====
[19:27:38] == drm_hdmi_connector_get_broadcast_rgb_name (2 subtests) ==
[19:27:38] === drm_test_drm_hdmi_connector_get_broadcast_rgb_name  ====
[19:27:38] [PASSED] Automatic
[19:27:38] [PASSED] Full
[19:27:38] [PASSED] Limited 16:235
[19:27:38] === [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name ===
[19:27:38] [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name_invalid
[19:27:38] ==== [PASSED] drm_hdmi_connector_get_broadcast_rgb_name ====
[19:27:38] == drm_hdmi_connector_get_output_format_name (2 subtests) ==
[19:27:38] === drm_test_drm_hdmi_connector_get_output_format_name  ====
[19:27:38] [PASSED] RGB
[19:27:38] [PASSED] YUV 4:2:0
[19:27:38] [PASSED] YUV 4:2:2
[19:27:38] [PASSED] YUV 4:4:4
[19:27:38] === [PASSED] drm_test_drm_hdmi_connector_get_output_format_name ===
[19:27:38] [PASSED] drm_test_drm_hdmi_connector_get_output_format_name_invalid
[19:27:38] ==== [PASSED] drm_hdmi_connector_get_output_format_name ====
[19:27:38] ============= drm_damage_helper (21 subtests) ==============
[19:27:38] [PASSED] drm_test_damage_iter_no_damage
[19:27:38] [PASSED] drm_test_damage_iter_no_damage_fractional_src
[19:27:38] [PASSED] drm_test_damage_iter_no_damage_src_moved
[19:27:38] [PASSED] drm_test_damage_iter_no_damage_fractional_src_moved
[19:27:38] [PASSED] drm_test_damage_iter_no_damage_not_visible
[19:27:38] [PASSED] drm_test_damage_iter_no_damage_no_crtc
[19:27:38] [PASSED] drm_test_damage_iter_no_damage_no_fb
[19:27:38] [PASSED] drm_test_damage_iter_simple_damage
[19:27:38] [PASSED] drm_test_damage_iter_single_damage
[19:27:38] [PASSED] drm_test_damage_iter_single_damage_intersect_src
[19:27:38] [PASSED] drm_test_damage_iter_single_damage_outside_src
[19:27:38] [PASSED] drm_test_damage_iter_single_damage_fractional_src
[19:27:38] [PASSED] drm_test_damage_iter_single_damage_intersect_fractional_src
[19:27:38] [PASSED] drm_test_damage_iter_single_damage_outside_fractional_src
[19:27:38] [PASSED] drm_test_damage_iter_single_damage_src_moved
[19:27:38] [PASSED] drm_test_damage_iter_single_damage_fractional_src_moved
[19:27:38] [PASSED] drm_test_damage_iter_damage
[19:27:38] [PASSED] drm_test_damage_iter_damage_one_intersect
[19:27:38] [PASSED] drm_test_damage_iter_damage_one_outside
[19:27:38] [PASSED] drm_test_damage_iter_damage_src_moved
[19:27:38] [PASSED] drm_test_damage_iter_damage_not_visible
[19:27:38] ================ [PASSED] drm_damage_helper ================
[19:27:38] ============== drm_dp_mst_helper (3 subtests) ==============
[19:27:38] ============== drm_test_dp_mst_calc_pbn_mode  ==============
[19:27:38] [PASSED] Clock 154000 BPP 30 DSC disabled
[19:27:38] [PASSED] Clock 234000 BPP 30 DSC disabled
[19:27:38] [PASSED] Clock 297000 BPP 24 DSC disabled
[19:27:38] [PASSED] Clock 332880 BPP 24 DSC enabled
[19:27:38] [PASSED] Clock 324540 BPP 24 DSC enabled
[19:27:38] ========== [PASSED] drm_test_dp_mst_calc_pbn_mode ==========
[19:27:38] ============== drm_test_dp_mst_calc_pbn_div  ===============
[19:27:38] [PASSED] Link rate 2000000 lane count 4
[19:27:38] [PASSED] Link rate 2000000 lane count 2
[19:27:38] [PASSED] Link rate 2000000 lane count 1
[19:27:38] [PASSED] Link rate 1350000 lane count 4
[19:27:38] [PASSED] Link rate 1350000 lane count 2
[19:27:38] [PASSED] Link rate 1350000 lane count 1
[19:27:38] [PASSED] Link rate 1000000 lane count 4
[19:27:38] [PASSED] Link rate 1000000 lane count 2
[19:27:38] [PASSED] Link rate 1000000 lane count 1
[19:27:38] [PASSED] Link rate 810000 lane count 4
[19:27:38] [PASSED] Link rate 810000 lane count 2
[19:27:38] [PASSED] Link rate 810000 lane count 1
[19:27:38] [PASSED] Link rate 540000 lane count 4
[19:27:38] [PASSED] Link rate 540000 lane count 2
[19:27:38] [PASSED] Link rate 540000 lane count 1
[19:27:38] [PASSED] Link rate 270000 lane count 4
[19:27:38] [PASSED] Link rate 270000 lane count 2
[19:27:38] [PASSED] Link rate 270000 lane count 1
[19:27:38] [PASSED] Link rate 162000 lane count 4
[19:27:38] [PASSED] Link rate 162000 lane count 2
[19:27:38] [PASSED] Link rate 162000 lane count 1
[19:27:38] ========== [PASSED] drm_test_dp_mst_calc_pbn_div ===========
[19:27:38] ========= drm_test_dp_mst_sideband_msg_req_decode  =========
[19:27:38] [PASSED] DP_ENUM_PATH_RESOURCES with port number
[19:27:38] [PASSED] DP_POWER_UP_PHY with port number
[19:27:38] [PASSED] DP_POWER_DOWN_PHY with port number
[19:27:38] [PASSED] DP_ALLOCATE_PAYLOAD with SDP stream sinks
[19:27:38] [PASSED] DP_ALLOCATE_PAYLOAD with port number
[19:27:38] [PASSED] DP_ALLOCATE_PAYLOAD with VCPI
[19:27:38] [PASSED] DP_ALLOCATE_PAYLOAD with PBN
[19:27:38] [PASSED] DP_QUERY_PAYLOAD with port number
[19:27:38] [PASSED] DP_QUERY_PAYLOAD with VCPI
[19:27:38] [PASSED] DP_REMOTE_DPCD_READ with port number
[19:27:38] [PASSED] DP_REMOTE_DPCD_READ with DPCD address
[19:27:38] [PASSED] DP_REMOTE_DPCD_READ with max number of bytes
[19:27:38] [PASSED] DP_REMOTE_DPCD_WRITE with port number
[19:27:38] [PASSED] DP_REMOTE_DPCD_WRITE with DPCD address
[19:27:38] [PASSED] DP_REMOTE_DPCD_WRITE with data array
[19:27:38] [PASSED] DP_REMOTE_I2C_READ with port number
[19:27:38] [PASSED] DP_REMOTE_I2C_READ with I2C device ID
[19:27:38] [PASSED] DP_REMOTE_I2C_READ with transactions array
[19:27:38] [PASSED] DP_REMOTE_I2C_WRITE with port number
[19:27:38] [PASSED] DP_REMOTE_I2C_WRITE with I2C device ID
[19:27:38] [PASSED] DP_REMOTE_I2C_WRITE with data array
[19:27:38] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream ID
[19:27:38] [PASSED] DP_QUERY_STREAM_ENC_STATUS with client ID
[19:27:38] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream event
[19:27:38] [PASSED] DP_QUERY_STREAM_ENC_STATUS with valid stream event
[19:27:38] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream behavior
[19:27:38] [PASSED] DP_QUERY_STREAM_ENC_STATUS with a valid stream behavior
[19:27:38] ===== [PASSED] drm_test_dp_mst_sideband_msg_req_decode =====
[19:27:38] ================ [PASSED] drm_dp_mst_helper ================
[19:27:38] ================== drm_exec (7 subtests) ===================
[19:27:38] [PASSED] sanitycheck
[19:27:38] [PASSED] test_lock
[19:27:38] [PASSED] test_lock_unlock
[19:27:38] [PASSED] test_duplicates
[19:27:38] [PASSED] test_prepare
[19:27:38] [PASSED] test_prepare_array
[19:27:38] [PASSED] test_multiple_loops
[19:27:38] ==================== [PASSED] drm_exec =====================
[19:27:38] =========== drm_format_helper_test (17 subtests) ===========
[19:27:38] ============== drm_test_fb_xrgb8888_to_gray8  ==============
[19:27:38] [PASSED] single_pixel_source_buffer
[19:27:38] [PASSED] single_pixel_clip_rectangle
[19:27:38] [PASSED] well_known_colors
[19:27:38] [PASSED] destination_pitch
[19:27:38] ========== [PASSED] drm_test_fb_xrgb8888_to_gray8 ==========
[19:27:38] ============= drm_test_fb_xrgb8888_to_rgb332  ==============
[19:27:38] [PASSED] single_pixel_source_buffer
[19:27:38] [PASSED] single_pixel_clip_rectangle
[19:27:38] [PASSED] well_known_colors
[19:27:38] [PASSED] destination_pitch
[19:27:38] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb332 ==========
[19:27:38] ============= drm_test_fb_xrgb8888_to_rgb565  ==============
[19:27:38] [PASSED] single_pixel_source_buffer
[19:27:38] [PASSED] single_pixel_clip_rectangle
[19:27:38] [PASSED] well_known_colors
[19:27:38] [PASSED] destination_pitch
[19:27:38] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb565 ==========
[19:27:38] ============ drm_test_fb_xrgb8888_to_xrgb1555  =============
[19:27:38] [PASSED] single_pixel_source_buffer
[19:27:38] [PASSED] single_pixel_clip_rectangle
[19:27:38] [PASSED] well_known_colors
[19:27:38] [PASSED] destination_pitch
[19:27:38] ======== [PASSED] drm_test_fb_xrgb8888_to_xrgb1555 =========
[19:27:38] ============ drm_test_fb_xrgb8888_to_argb1555  =============
[19:27:38] [PASSED] single_pixel_source_buffer
[19:27:38] [PASSED] single_pixel_clip_rectangle
[19:27:38] [PASSED] well_known_colors
[19:27:38] [PASSED] destination_pitch
[19:27:38] ======== [PASSED] drm_test_fb_xrgb8888_to_argb1555 =========
[19:27:38] ============ drm_test_fb_xrgb8888_to_rgba5551  =============
[19:27:38] [PASSED] single_pixel_source_buffer
[19:27:38] [PASSED] single_pixel_clip_rectangle
[19:27:38] [PASSED] well_known_colors
[19:27:38] [PASSED] destination_pitch
[19:27:38] ======== [PASSED] drm_test_fb_xrgb8888_to_rgba5551 =========
[19:27:38] ============= drm_test_fb_xrgb8888_to_rgb888  ==============
[19:27:38] [PASSED] single_pixel_source_buffer
[19:27:38] [PASSED] single_pixel_clip_rectangle
[19:27:38] [PASSED] well_known_colors
[19:27:38] [PASSED] destination_pitch
[19:27:38] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb888 ==========
[19:27:38] ============ drm_test_fb_xrgb8888_to_argb8888  =============
[19:27:38] [PASSED] single_pixel_source_buffer
[19:27:38] [PASSED] single_pixel_clip_rectangle
[19:27:38] [PASSED] well_known_colors
[19:27:38] [PASSED] destination_pitch
[19:27:38] ======== [PASSED] drm_test_fb_xrgb8888_to_argb8888 =========
[19:27:38] =========== drm_test_fb_xrgb8888_to_xrgb2101010  ===========
[19:27:38] [PASSED] single_pixel_source_buffer
[19:27:38] [PASSED] single_pixel_clip_rectangle
[19:27:38] [PASSED] well_known_colors
[19:27:38] [PASSED] destination_pitch
[19:27:38] ======= [PASSED] drm_test_fb_xrgb8888_to_xrgb2101010 =======
[19:27:38] =========== drm_test_fb_xrgb8888_to_argb2101010  ===========
[19:27:38] [PASSED] single_pixel_source_buffer
[19:27:38] [PASSED] single_pixel_clip_rectangle
[19:27:38] [PASSED] well_known_colors
[19:27:38] [PASSED] destination_pitch
[19:27:38] ======= [PASSED] drm_test_fb_xrgb8888_to_argb2101010 =======
[19:27:38] ============== drm_test_fb_xrgb8888_to_mono  ===============
[19:27:38] [PASSED] single_pixel_source_buffer
[19:27:38] [PASSED] single_pixel_clip_rectangle
[19:27:38] [PASSED] well_known_colors
[19:27:38] [PASSED] destination_pitch
[19:27:38] ========== [PASSED] drm_test_fb_xrgb8888_to_mono ===========
[19:27:38] ==================== drm_test_fb_swab  =====================
[19:27:38] [PASSED] single_pixel_source_buffer
[19:27:38] [PASSED] single_pixel_clip_rectangle
[19:27:38] [PASSED] well_known_colors
[19:27:38] [PASSED] destination_pitch
[19:27:38] ================ [PASSED] drm_test_fb_swab =================
[19:27:38] ============ drm_test_fb_xrgb8888_to_xbgr8888  =============
[19:27:38] [PASSED] single_pixel_source_buffer
[19:27:38] [PASSED] single_pixel_clip_rectangle
[19:27:38] [PASSED] well_known_colors
[19:27:38] [PASSED] destination_pitch
[19:27:38] ======== [PASSED] drm_test_fb_xrgb8888_to_xbgr8888 =========
[19:27:38] ============ drm_test_fb_xrgb8888_to_abgr8888  =============
[19:27:38] [PASSED] single_pixel_source_buffer
[19:27:38] [PASSED] single_pixel_clip_rectangle
[19:27:38] [PASSED] well_known_colors
[19:27:38] [PASSED] destination_pitch
[19:27:38] ======== [PASSED] drm_test_fb_xrgb8888_to_abgr8888 =========
[19:27:38] ================= drm_test_fb_clip_offset  =================
[19:27:38] [PASSED] pass through
[19:27:38] [PASSED] horizontal offset
[19:27:38] [PASSED] vertical offset
[19:27:38] [PASSED] horizontal and vertical offset
[19:27:38] [PASSED] horizontal offset (custom pitch)
[19:27:38] [PASSED] vertical offset (custom pitch)
[19:27:38] [PASSED] horizontal and vertical offset (custom pitch)
[19:27:38] ============= [PASSED] drm_test_fb_clip_offset =============
[19:27:38] ============== drm_test_fb_build_fourcc_list  ==============
[19:27:38] [PASSED] no native formats
[19:27:38] [PASSED] XRGB8888 as native format
[19:27:38] [PASSED] remove duplicates
[19:27:38] [PASSED] convert alpha formats
[19:27:38] [PASSED] random formats
[19:27:38] ========== [PASSED] drm_test_fb_build_fourcc_list ==========
[19:27:38] =================== drm_test_fb_memcpy  ====================
[19:27:38] [PASSED] single_pixel_source_buffer: XR24 little-endian (0x34325258)
[19:27:38] [PASSED] single_pixel_source_buffer: XRA8 little-endian (0x38415258)
[19:27:38] [PASSED] single_pixel_source_buffer: YU24 little-endian (0x34325559)
[19:27:38] [PASSED] single_pixel_clip_rectangle: XB24 little-endian (0x34324258)
[19:27:38] [PASSED] single_pixel_clip_rectangle: XRA8 little-endian (0x38415258)
[19:27:38] [PASSED] single_pixel_clip_rectangle: YU24 little-endian (0x34325559)
[19:27:38] [PASSED] well_known_colors: XB24 little-endian (0x34324258)
[19:27:38] [PASSED] well_known_colors: XRA8 little-endian (0x38415258)
[19:27:38] [PASSED] well_known_colors: YU24 little-endian (0x34325559)
[19:27:38] [PASSED] destination_pitch: XB24 little-endian (0x34324258)
[19:27:38] [PASSED] destination_pitch: XRA8 little-endian (0x38415258)
[19:27:38] [PASSED] destination_pitch: YU24 little-endian (0x34325559)
[19:27:38] =============== [PASSED] drm_test_fb_memcpy ================
[19:27:38] ============= [PASSED] drm_format_helper_test ==============
[19:27:38] ================= drm_format (18 subtests) =================
[19:27:38] [PASSED] drm_test_format_block_width_invalid
[19:27:38] [PASSED] drm_test_format_block_width_one_plane
[19:27:38] [PASSED] drm_test_format_block_width_two_plane
[19:27:38] [PASSED] drm_test_format_block_width_three_plane
[19:27:38] [PASSED] drm_test_format_block_width_tiled
[19:27:38] [PASSED] drm_test_format_block_height_invalid
[19:27:38] [PASSED] drm_test_format_block_height_one_plane
[19:27:38] [PASSED] drm_test_format_block_height_two_plane
[19:27:38] [PASSED] drm_test_format_block_height_three_plane
[19:27:38] [PASSED] drm_test_format_block_height_tiled
[19:27:38] [PASSED] drm_test_format_min_pitch_invalid
[19:27:38] [PASSED] drm_test_format_min_pitch_one_plane_8bpp
[19:27:38] [PASSED] drm_test_format_min_pitch_one_plane_16bpp
[19:27:38] [PASSED] drm_test_format_min_pitch_one_plane_24bpp
[19:27:38] [PASSED] drm_test_format_min_pitch_one_plane_32bpp
[19:27:38] [PASSED] drm_test_format_min_pitch_two_plane
[19:27:38] [PASSED] drm_test_format_min_pitch_three_plane_8bpp
[19:27:38] [PASSED] drm_test_format_min_pitch_tiled
[19:27:38] =================== [PASSED] drm_format ====================
[19:27:38] =============== drm_framebuffer (1 subtest) ================
[19:27:38] =============== drm_test_framebuffer_create  ===============
[19:27:38] [PASSED] ABGR8888 normal sizes
[19:27:38] [PASSED] ABGR8888 max sizes
[19:27:38] [PASSED] ABGR8888 pitch greater than min required
[19:27:38] [PASSED] ABGR8888 pitch less than min required
[19:27:38] [PASSED] ABGR8888 Invalid width
[19:27:38] [PASSED] ABGR8888 Invalid buffer handle
[19:27:38] [PASSED] No pixel format
[19:27:38] [PASSED] ABGR8888 Width 0
[19:27:38] [PASSED] ABGR8888 Height 0
[19:27:38] [PASSED] ABGR8888 Out of bound height * pitch combination
[19:27:38] [PASSED] ABGR8888 Large buffer offset
[19:27:38] [PASSED] ABGR8888 Set DRM_MODE_FB_MODIFIERS without modifiers
[19:27:38] [PASSED] ABGR8888 Valid buffer modifier
[19:27:38] [PASSED] ABGR8888 Invalid buffer modifier(DRM_FORMAT_MOD_SAMSUNG_64_32_TILE)
[19:27:38] [PASSED] ABGR8888 Extra pitches without DRM_MODE_FB_MODIFIERS
[19:27:38] [PASSED] ABGR8888 Extra pitches with DRM_MODE_FB_MODIFIERS
[19:27:38] [PASSED] NV12 Normal sizes
[19:27:38] [PASSED] NV12 Max sizes
[19:27:38] [PASSED] NV12 Invalid pitch
[19:27:38] [PASSED] NV12 Invalid modifier/missing DRM_MODE_FB_MODIFIERS flag
[19:27:38] [PASSED] NV12 different  modifier per-plane
[19:27:38] [PASSED] NV12 with DRM_FORMAT_MOD_SAMSUNG_64_32_TILE
[19:27:38] [PASSED] NV12 Valid modifiers without DRM_MODE_FB_MODIFIERS
[19:27:38] [PASSED] NV12 Modifier for inexistent plane
[19:27:38] [PASSED] NV12 Handle for inexistent plane
[19:27:38] [PASSED] NV12 Handle for inexistent plane without DRM_MODE_FB_MODIFIERS
[19:27:38] [PASSED] YVU420 DRM_MODE_FB_MODIFIERS set without modifier
[19:27:38] [PASSED] YVU420 Normal sizes
[19:27:38] [PASSED] YVU420 Max sizes
[19:27:38] [PASSED] YVU420 Invalid pitch
[19:27:38] [PASSED] YVU420 Different pitches
[19:27:38] [PASSED] YVU420 Different buffer offsets/pitches
[19:27:38] [PASSED] YVU420 Modifier set just for plane 0, without DRM_MODE_FB_MODIFIERS
[19:27:38] [PASSED] YVU420 Modifier set just for planes 0, 1, without DRM_MODE_FB_MODIFIERS
[19:27:38] [PASSED] YVU420 Modifier set just for plane 0, 1, with DRM_MODE_FB_MODIFIERS
[19:27:38] [PASSED] YVU420 Valid modifier
[19:27:38] [PASSED] YVU420 Different modifiers per plane
[19:27:38] [PASSED] YVU420 Modifier for inexistent plane
[19:27:38] [PASSED] X0L2 Normal sizes
[19:27:38] [PASSED] X0L2 Max sizes
[19:27:38] [PASSED] X0L2 Invalid pitch
[19:27:38] [PASSED] X0L2 Pitch greater than minimum required
[19:27:38] [PASSED] X0L2 Handle for inexistent plane
[19:27:38] [PASSED] X0L2 Offset for inexistent plane, without DRM_MODE_FB_MODIFIERS set
[19:27:38] [PASSED] X0L2 Modifier without DRM_MODE_FB_MODIFIERS set
[19:27:38] [PASSED] X0L2 Valid modifier
[19:27:38] [PASSED] X0L2 Modifier for inexistent plane
[19:27:38] =========== [PASSED] drm_test_framebuffer_create ===========
[19:27:38] ================= [PASSED] drm_framebuffer =================
[19:27:38] ================ drm_gem_shmem (8 subtests) ================
[19:27:38] [PASSED] drm_gem_shmem_test_obj_create
[19:27:38] [PASSED] drm_gem_shmem_test_obj_create_private
[19:27:38] [PASSED] drm_gem_shmem_test_pin_pages
[19:27:38] [PASSED] drm_gem_shmem_test_vmap
[19:27:38] [PASSED] drm_gem_shmem_test_get_pages_sgt
[19:27:38] [PASSED] drm_gem_shmem_test_get_sg_table
[19:27:38] [PASSED] drm_gem_shmem_test_madvise
[19:27:38] [PASSED] drm_gem_shmem_test_purge
[19:27:38] ================== [PASSED] drm_gem_shmem ==================
[19:27:38] === drm_atomic_helper_connector_hdmi_check (22 subtests) ===
[19:27:38] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode
[19:27:38] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode_vic_1
[19:27:38] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode
[19:27:38] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode_vic_1
[19:27:38] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode
[19:27:38] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode_vic_1
[19:27:38] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_changed
[19:27:38] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_not_changed
[19:27:38] [PASSED] drm_test_check_hdmi_funcs_reject_rate
[19:27:38] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback
[19:27:38] [PASSED] drm_test_check_max_tmds_rate_format_fallback
[19:27:38] [PASSED] drm_test_check_output_bpc_crtc_mode_changed
[19:27:38] [PASSED] drm_test_check_output_bpc_crtc_mode_not_changed
[19:27:38] [PASSED] drm_test_check_output_bpc_dvi
[19:27:38] [PASSED] drm_test_check_output_bpc_format_vic_1
[19:27:38] [PASSED] drm_test_check_output_bpc_format_display_8bpc_only
[19:27:38] [PASSED] drm_test_check_output_bpc_format_display_rgb_only
[19:27:38] [PASSED] drm_test_check_output_bpc_format_driver_8bpc_only
[19:27:38] [PASSED] drm_test_check_output_bpc_format_driver_rgb_only
[19:27:38] [PASSED] drm_test_check_tmds_char_rate_rgb_8bpc
[19:27:38] [PASSED] drm_test_check_tmds_char_rate_rgb_10bpc
[19:27:38] [PASSED] drm_test_check_tmds_char_rate_rgb_12bpc
[19:27:38] ===== [PASSED] drm_atomic_helper_connector_hdmi_check ======
[19:27:38] === drm_atomic_helper_connector_hdmi_reset (6 subtests) ====
[19:27:38] [PASSED] drm_test_check_broadcast_rgb_value
[19:27:38] [PASSED] drm_test_check_bpc_8_value
[19:27:38] [PASSED] drm_test_check_bpc_10_value
[19:27:38] [PASSED] drm_test_check_bpc_12_value
[19:27:38] [PASSED] drm_test_check_format_value
[19:27:38] [PASSED] drm_test_check_tmds_char_value
[19:27:38] ===== [PASSED] drm_atomic_helper_connector_hdmi_reset ======
[19:27:38] ================= drm_managed (2 subtests) =================
[19:27:38] [PASSED] drm_test_managed_release_action
[19:27:38] [PASSED] drm_test_managed_run_action
[19:27:38] =================== [PASSED] drm_managed ===================
[19:27:38] =================== drm_mm (6 subtests) ====================
[19:27:38] [PASSED] drm_test_mm_init
[19:27:38] [PASSED] drm_test_mm_debug
[19:27:38] [PASSED] drm_test_mm_align32
[19:27:38] [PASSED] drm_test_mm_align64
[19:27:38] [PASSED] drm_test_mm_lowest
[19:27:38] [PASSED] drm_test_mm_highest
[19:27:38] ===================== [PASSED] drm_mm ======================
[19:27:38] ============= drm_modes_analog_tv (4 subtests) =============
[19:27:38] [PASSED] drm_test_modes_analog_tv_ntsc_480i
[19:27:38] [PASSED] drm_test_modes_analog_tv_ntsc_480i_inlined
[19:27:38] [PASSED] drm_test_modes_analog_tv_pal_576i
[19:27:38] [PASSED] drm_test_modes_analog_tv_pal_576i_inlined
[19:27:38] =============== [PASSED] drm_modes_analog_tv ===============
[19:27:38] ============== drm_plane_helper (2 subtests) ===============
[19:27:38] =============== drm_test_check_plane_state  ================
[19:27:38] [PASSED] clipping_simple
[19:27:38] [PASSED] clipping_rotate_reflect
[19:27:38] [PASSED] positioning_simple
[19:27:38] [PASSED] upscaling
[19:27:38] [PASSED] downscaling
[19:27:38] [PASSED] rounding1
[19:27:38] [PASSED] rounding2
[19:27:38] [PASSED] rounding3
[19:27:38] [PASSED] rounding4
[19:27:38] =========== [PASSED] drm_test_check_plane_state ============
[19:27:38] =========== drm_test_check_invalid_plane_state  ============
[19:27:38] [PASSED] positioning_invalid
[19:27:38] [PASSED] upscaling_invalid
[19:27:38] [PASSED] downscaling_invalid
[19:27:38] ======= [PASSED] drm_test_check_invalid_plane_state ========
[19:27:38] ================ [PASSED] drm_plane_helper =================
[19:27:38] ====== drm_connector_helper_tv_get_modes (1 subtest) =======
[19:27:38] ====== drm_test_connector_helper_tv_get_modes_check  =======
[19:27:38] [PASSED] None
[19:27:38] [PASSED] PAL
[19:27:38] [PASSED] NTSC
[19:27:38] [PASSED] Both, NTSC Default
[19:27:38] [PASSED] Both, PAL Default
[19:27:38] [PASSED] Both, NTSC Default, with PAL on command-line
[19:27:38] [PASSED] Both, PAL Default, with NTSC on command-line
[19:27:38] == [PASSED] drm_test_connector_helper_tv_get_modes_check ===
[19:27:38] ======== [PASSED] drm_connector_helper_tv_get_modes ========
[19:27:38] ================== drm_rect (9 subtests) ===================
[19:27:38] [PASSED] drm_test_rect_clip_scaled_div_by_zero
[19:27:38] [PASSED] drm_test_rect_clip_scaled_not_clipped
[19:27:38] [PASSED] drm_test_rect_clip_scaled_clipped
[19:27:38] [PASSED] drm_test_rect_clip_scaled_signed_vs_unsigned
[19:27:38] ================= drm_test_rect_intersect  =================
[19:27:38] [PASSED] top-left x bottom-right: 2x2+1+1 x 2x2+0+0
[19:27:38] [PASSED] top-right x bottom-left: 2x2+0+0 x 2x2+1-1
[19:27:38] [PASSED] bottom-left x top-right: 2x2+1-1 x 2x2+0+0
[19:27:38] [PASSED] bottom-right x top-left: 2x2+0+0 x 2x2+1+1
[19:27:38] [PASSED] right x left: 2x1+0+0 x 3x1+1+0
[19:27:38] [PASSED] left x right: 3x1+1+0 x 2x1+0+0
[19:27:38] [PASSED] up x bottom: 1x2+0+0 x 1x3+0-1
[19:27:38] [PASSED] bottom x up: 1x3+0-1 x 1x2+0+0
[19:27:38] [PASSED] touching corner: 1x1+0+0 x 2x2+1+1
[19:27:38] [PASSED] touching side: 1x1+0+0 x 1x1+1+0
[19:27:38] [PASSED] equal rects: 2x2+0+0 x 2x2+0+0
[19:27:38] [PASSED] inside another: 2x2+0+0 x 1x1+1+1
[19:27:38] [PASSED] far away: 1x1+0+0 x 1x1+3+6
[19:27:38] [PASSED] points intersecting: 0x0+5+10 x 0x0+5+10
[19:27:38] [PASSED] points not intersecting: 0x0+0+0 x 0x0+5+10
[19:27:38] ============= [PASSED] drm_test_rect_intersect =============
[19:27:38] ================ drm_test_rect_calc_hscale  ================
[19:27:38] [PASSED] normal use
[19:27:38] [PASSED] out of max range
[19:27:38] [PASSED] out of min range
[19:27:38] [PASSED] zero dst
[19:27:38] [PASSED] negative src
[19:27:38] [PASSED] negative dst
[19:27:38] ============ [PASSED] drm_test_rect_calc_hscale ============
[19:27:38] ================ drm_test_rect_calc_vscale  ================
[19:27:38] [PASSED] normal use
[19:27:38] [PASSED] out of max range
[19:27:38] [PASSED] out of min range
[19:27:38] [PASSED] zero dst
[19:27:38] [PASSED] negative src
[19:27:38] [PASSED] negative dst
[19:27:38] ============ [PASSED] drm_test_rect_calc_vscale ============
[19:27:38] ================== drm_test_rect_rotate  ===================
[19:27:38] [PASSED] reflect-x
[19:27:38] [PASSED] reflect-y
[19:27:38] [PASSED] rotate-0
[19:27:38] [PASSED] rotate-90
[19:27:38] [PASSED] rotate-180
[19:27:38] [PASSED] rotate-270
[19:27:38] ============== [PASSED] drm_test_rect_rotate ===============
[19:27:38] ================ drm_test_rect_rotate_inv  =================
[19:27:38] [PASSED] reflect-x
[19:27:38] [PASSED] reflect-y
[19:27:38] [PASSED] rotate-0
[19:27:38] [PASSED] rotate-90
[19:27:38] [PASSED] rotate-180
[19:27:38] [PASSED] rotate-270
[19:27:38] ============ [PASSED] drm_test_rect_rotate_inv =============
[19:27:38] ==================== [PASSED] drm_rect =====================
[19:27:38] ============================================================
[19:27:38] Testing complete. Ran 511 tests: passed: 511
[19:27:39] Elapsed time: 23.347s total, 1.721s configuring, 21.377s building, 0.201s running

+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel



^ permalink raw reply	[flat|nested] 16+ messages in thread

* ✓ CI.Build: success for Convert multiple bind ops to 1 job (rev3)
  2024-05-29 18:31 [PATCH v3 0/5] Convert multiple bind ops to 1 job Matthew Brost
                   ` (7 preceding siblings ...)
  2024-05-29 19:27 ` ✓ CI.KUnit: success " Patchwork
@ 2024-05-29 19:39 ` Patchwork
  2024-05-29 19:39 ` ✗ CI.Hooks: failure " Patchwork
                   ` (3 subsequent siblings)
  12 siblings, 0 replies; 16+ messages in thread
From: Patchwork @ 2024-05-29 19:39 UTC (permalink / raw)
  To: Matthew Brost; +Cc: intel-xe

== Series Details ==

Series: Convert multiple bind ops to 1 job (rev3)
URL   : https://patchwork.freedesktop.org/series/133034/
State : success

== Summary ==

lib/modules/6.10.0-rc1-xe/kernel/sound/core/seq/
lib/modules/6.10.0-rc1-xe/kernel/sound/core/seq/snd-seq.ko
lib/modules/6.10.0-rc1-xe/kernel/sound/core/snd-seq-device.ko
lib/modules/6.10.0-rc1-xe/kernel/sound/core/snd-hwdep.ko
lib/modules/6.10.0-rc1-xe/kernel/sound/core/snd.ko
lib/modules/6.10.0-rc1-xe/kernel/sound/core/snd-pcm.ko
lib/modules/6.10.0-rc1-xe/kernel/sound/core/snd-compress.ko
lib/modules/6.10.0-rc1-xe/kernel/sound/core/snd-timer.ko
lib/modules/6.10.0-rc1-xe/kernel/sound/soundcore.ko
lib/modules/6.10.0-rc1-xe/kernel/sound/soc/
lib/modules/6.10.0-rc1-xe/kernel/sound/soc/intel/
lib/modules/6.10.0-rc1-xe/kernel/sound/soc/intel/atom/
lib/modules/6.10.0-rc1-xe/kernel/sound/soc/intel/atom/snd-soc-sst-atom-hifi2-platform.ko
lib/modules/6.10.0-rc1-xe/kernel/sound/soc/intel/atom/sst/
lib/modules/6.10.0-rc1-xe/kernel/sound/soc/intel/atom/sst/snd-intel-sst-acpi.ko
lib/modules/6.10.0-rc1-xe/kernel/sound/soc/intel/atom/sst/snd-intel-sst-core.ko
lib/modules/6.10.0-rc1-xe/kernel/sound/soc/intel/common/
lib/modules/6.10.0-rc1-xe/kernel/sound/soc/intel/common/snd-soc-acpi-intel-match.ko
lib/modules/6.10.0-rc1-xe/kernel/sound/soc/amd/
lib/modules/6.10.0-rc1-xe/kernel/sound/soc/amd/snd-acp-config.ko
lib/modules/6.10.0-rc1-xe/kernel/sound/soc/sof/
lib/modules/6.10.0-rc1-xe/kernel/sound/soc/sof/intel/
lib/modules/6.10.0-rc1-xe/kernel/sound/soc/sof/intel/snd-sof-pci-intel-tgl.ko
lib/modules/6.10.0-rc1-xe/kernel/sound/soc/sof/intel/snd-sof-intel-hda-mlink.ko
lib/modules/6.10.0-rc1-xe/kernel/sound/soc/sof/intel/snd-sof-pci-intel-cnl.ko
lib/modules/6.10.0-rc1-xe/kernel/sound/soc/sof/intel/snd-sof-pci-intel-lnl.ko
lib/modules/6.10.0-rc1-xe/kernel/sound/soc/sof/intel/snd-sof-intel-hda-common.ko
lib/modules/6.10.0-rc1-xe/kernel/sound/soc/sof/intel/snd-sof-intel-hda-generic.ko
lib/modules/6.10.0-rc1-xe/kernel/sound/soc/sof/intel/snd-sof-intel-hda.ko
lib/modules/6.10.0-rc1-xe/kernel/sound/soc/sof/intel/snd-sof-pci-intel-mtl.ko
lib/modules/6.10.0-rc1-xe/kernel/sound/soc/sof/amd/
lib/modules/6.10.0-rc1-xe/kernel/sound/soc/sof/amd/snd-sof-amd-renoir.ko
lib/modules/6.10.0-rc1-xe/kernel/sound/soc/sof/amd/snd-sof-amd-acp.ko
lib/modules/6.10.0-rc1-xe/kernel/sound/soc/sof/snd-sof-utils.ko
lib/modules/6.10.0-rc1-xe/kernel/sound/soc/sof/snd-sof-pci.ko
lib/modules/6.10.0-rc1-xe/kernel/sound/soc/sof/snd-sof.ko
lib/modules/6.10.0-rc1-xe/kernel/sound/soc/sof/snd-sof-probes.ko
lib/modules/6.10.0-rc1-xe/kernel/sound/soc/sof/xtensa/
lib/modules/6.10.0-rc1-xe/kernel/sound/soc/sof/xtensa/snd-sof-xtensa-dsp.ko
lib/modules/6.10.0-rc1-xe/kernel/sound/soc/snd-soc-core.ko
lib/modules/6.10.0-rc1-xe/kernel/sound/soc/snd-soc-acpi.ko
lib/modules/6.10.0-rc1-xe/kernel/sound/soc/codecs/
lib/modules/6.10.0-rc1-xe/kernel/sound/soc/codecs/snd-soc-hdac-hda.ko
lib/modules/6.10.0-rc1-xe/kernel/sound/hda/
lib/modules/6.10.0-rc1-xe/kernel/sound/hda/snd-intel-sdw-acpi.ko
lib/modules/6.10.0-rc1-xe/kernel/sound/hda/ext/
lib/modules/6.10.0-rc1-xe/kernel/sound/hda/ext/snd-hda-ext-core.ko
lib/modules/6.10.0-rc1-xe/kernel/sound/hda/snd-intel-dspcfg.ko
lib/modules/6.10.0-rc1-xe/kernel/sound/hda/snd-hda-core.ko
lib/modules/6.10.0-rc1-xe/kernel/arch/
lib/modules/6.10.0-rc1-xe/kernel/arch/x86/
lib/modules/6.10.0-rc1-xe/kernel/arch/x86/kernel/
lib/modules/6.10.0-rc1-xe/kernel/arch/x86/kernel/msr.ko
lib/modules/6.10.0-rc1-xe/kernel/arch/x86/kernel/cpuid.ko
lib/modules/6.10.0-rc1-xe/kernel/arch/x86/crypto/
lib/modules/6.10.0-rc1-xe/kernel/arch/x86/crypto/sha512-ssse3.ko
lib/modules/6.10.0-rc1-xe/kernel/arch/x86/crypto/crct10dif-pclmul.ko
lib/modules/6.10.0-rc1-xe/kernel/arch/x86/crypto/ghash-clmulni-intel.ko
lib/modules/6.10.0-rc1-xe/kernel/arch/x86/crypto/sha1-ssse3.ko
lib/modules/6.10.0-rc1-xe/kernel/arch/x86/crypto/crc32-pclmul.ko
lib/modules/6.10.0-rc1-xe/kernel/arch/x86/crypto/sha256-ssse3.ko
lib/modules/6.10.0-rc1-xe/kernel/arch/x86/crypto/aesni-intel.ko
lib/modules/6.10.0-rc1-xe/kernel/arch/x86/crypto/polyval-clmulni.ko
lib/modules/6.10.0-rc1-xe/kernel/arch/x86/events/
lib/modules/6.10.0-rc1-xe/kernel/arch/x86/events/intel/
lib/modules/6.10.0-rc1-xe/kernel/arch/x86/events/intel/intel-cstate.ko
lib/modules/6.10.0-rc1-xe/kernel/arch/x86/events/rapl.ko
lib/modules/6.10.0-rc1-xe/kernel/arch/x86/kvm/
lib/modules/6.10.0-rc1-xe/kernel/arch/x86/kvm/kvm.ko
lib/modules/6.10.0-rc1-xe/kernel/arch/x86/kvm/kvm-intel.ko
lib/modules/6.10.0-rc1-xe/kernel/crypto/
lib/modules/6.10.0-rc1-xe/kernel/crypto/crypto_simd.ko
lib/modules/6.10.0-rc1-xe/kernel/crypto/cmac.ko
lib/modules/6.10.0-rc1-xe/kernel/crypto/ccm.ko
lib/modules/6.10.0-rc1-xe/kernel/crypto/cryptd.ko
lib/modules/6.10.0-rc1-xe/kernel/crypto/polyval-generic.ko
lib/modules/6.10.0-rc1-xe/kernel/crypto/async_tx/
lib/modules/6.10.0-rc1-xe/kernel/crypto/async_tx/async_xor.ko
lib/modules/6.10.0-rc1-xe/kernel/crypto/async_tx/async_tx.ko
lib/modules/6.10.0-rc1-xe/kernel/crypto/async_tx/async_memcpy.ko
lib/modules/6.10.0-rc1-xe/kernel/crypto/async_tx/async_pq.ko
lib/modules/6.10.0-rc1-xe/kernel/crypto/async_tx/async_raid6_recov.ko
lib/modules/6.10.0-rc1-xe/build
lib/modules/6.10.0-rc1-xe/modules.alias.bin
lib/modules/6.10.0-rc1-xe/modules.builtin
lib/modules/6.10.0-rc1-xe/modules.softdep
lib/modules/6.10.0-rc1-xe/modules.alias
lib/modules/6.10.0-rc1-xe/modules.order
lib/modules/6.10.0-rc1-xe/modules.symbols
lib/modules/6.10.0-rc1-xe/modules.dep.bin
+ mv kernel-nodebug.tar.gz ..
+ cd ..
+ rm -rf archive
++ date +%s
+ echo -e '\e[0Ksection_end:1717011554:package_x86_64_nodebug\r\e[0K'
+ sync
+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel



^ permalink raw reply	[flat|nested] 16+ messages in thread

* ✗ CI.Hooks: failure for Convert multiple bind ops to 1 job (rev3)
  2024-05-29 18:31 [PATCH v3 0/5] Convert multiple bind ops to 1 job Matthew Brost
                   ` (8 preceding siblings ...)
  2024-05-29 19:39 ` ✓ CI.Build: " Patchwork
@ 2024-05-29 19:39 ` Patchwork
  2024-05-29 19:41 ` ✓ CI.checksparse: success " Patchwork
                   ` (2 subsequent siblings)
  12 siblings, 0 replies; 16+ messages in thread
From: Patchwork @ 2024-05-29 19:39 UTC (permalink / raw)
  To: Matthew Brost; +Cc: intel-xe

== Series Details ==

Series: Convert multiple bind ops to 1 job (rev3)
URL   : https://patchwork.freedesktop.org/series/133034/
State : failure

== Summary ==

run-parts: executing /workspace/ci/hooks/00-showenv
+ export
+ grep -Ei '(^|\W)CI_'
declare -x CI_KERNEL_BUILD_DIR="/workspace/kernel/build64-default"
declare -x CI_KERNEL_SRC_DIR="/workspace/kernel"
declare -x CI_TOOLS_SRC_DIR="/workspace/ci"
declare -x CI_WORKSPACE_DIR="/workspace"
run-parts: executing /workspace/ci/hooks/10-build-W1
+ SRC_DIR=/workspace/kernel
+ RESTORE_DISPLAY_CONFIG=0
+ '[' -n /workspace/kernel/build64-default ']'
+ BUILD_DIR=/workspace/kernel/build64-default
+ cd /workspace/kernel
++ nproc
+ make -j48 O=/workspace/kernel/build64-default modules_prepare
make[1]: Entering directory '/workspace/kernel/build64-default'
  GEN     Makefile
  UPD     include/generated/compile.h
  UPD     include/config/kernel.release
mkdir -p /workspace/kernel/build64-default/tools/objtool && make O=/workspace/kernel/build64-default subdir=tools/objtool --no-print-directory -C objtool 
  UPD     include/generated/utsrelease.h
  HOSTCC  /workspace/kernel/build64-default/tools/objtool/fixdep.o
  CALL    ../scripts/checksyscalls.sh
  HOSTLD  /workspace/kernel/build64-default/tools/objtool/fixdep-in.o
  LINK    /workspace/kernel/build64-default/tools/objtool/fixdep
  INSTALL libsubcmd_headers
  CC      /workspace/kernel/build64-default/tools/objtool/libsubcmd/exec-cmd.o
  CC      /workspace/kernel/build64-default/tools/objtool/libsubcmd/help.o
  CC      /workspace/kernel/build64-default/tools/objtool/libsubcmd/pager.o
  CC      /workspace/kernel/build64-default/tools/objtool/libsubcmd/parse-options.o
  CC      /workspace/kernel/build64-default/tools/objtool/libsubcmd/run-command.o
  CC      /workspace/kernel/build64-default/tools/objtool/libsubcmd/sigchain.o
  CC      /workspace/kernel/build64-default/tools/objtool/libsubcmd/subcmd-config.o
  LD      /workspace/kernel/build64-default/tools/objtool/libsubcmd/libsubcmd-in.o
  AR      /workspace/kernel/build64-default/tools/objtool/libsubcmd/libsubcmd.a
  CC      /workspace/kernel/build64-default/tools/objtool/weak.o
  CC      /workspace/kernel/build64-default/tools/objtool/check.o
  CC      /workspace/kernel/build64-default/tools/objtool/special.o
  CC      /workspace/kernel/build64-default/tools/objtool/builtin-check.o
  CC      /workspace/kernel/build64-default/tools/objtool/elf.o
  CC      /workspace/kernel/build64-default/tools/objtool/objtool.o
  CC      /workspace/kernel/build64-default/tools/objtool/orc_gen.o
  CC      /workspace/kernel/build64-default/tools/objtool/orc_dump.o
  CC      /workspace/kernel/build64-default/tools/objtool/libstring.o
  CC      /workspace/kernel/build64-default/tools/objtool/libctype.o
  CC      /workspace/kernel/build64-default/tools/objtool/str_error_r.o
  CC      /workspace/kernel/build64-default/tools/objtool/librbtree.o
  CC      /workspace/kernel/build64-default/tools/objtool/arch/x86/special.o
  CC      /workspace/kernel/build64-default/tools/objtool/arch/x86/decode.o
  CC      /workspace/kernel/build64-default/tools/objtool/arch/x86/orc.o
  LD      /workspace/kernel/build64-default/tools/objtool/arch/x86/objtool-in.o
  LD      /workspace/kernel/build64-default/tools/objtool/objtool-in.o
  LINK    /workspace/kernel/build64-default/tools/objtool/objtool
make[1]: Leaving directory '/workspace/kernel/build64-default'
++ nproc
+ make -j48 O=/workspace/kernel/build64-default M=drivers/gpu/drm/xe W=1
make[1]: Entering directory '/workspace/kernel/build64-default'
../scripts/Makefile.build:41: drivers/gpu/drm/xe/Makefile: No such file or directory
make[3]: *** No rule to make target 'drivers/gpu/drm/xe/Makefile'.  Stop.
make[2]: *** [/workspace/kernel/Makefile:1934: drivers/gpu/drm/xe] Error 2
make[1]: *** [/workspace/kernel/Makefile:240: __sub-make] Error 2
make[1]: Leaving directory '/workspace/kernel/build64-default'
make: *** [Makefile:240: __sub-make] Error 2
run-parts: /workspace/ci/hooks/10-build-W1 exited with return code 2



^ permalink raw reply	[flat|nested] 16+ messages in thread

* ✓ CI.checksparse: success for Convert multiple bind ops to 1 job (rev3)
  2024-05-29 18:31 [PATCH v3 0/5] Convert multiple bind ops to 1 job Matthew Brost
                   ` (9 preceding siblings ...)
  2024-05-29 19:39 ` ✗ CI.Hooks: failure " Patchwork
@ 2024-05-29 19:41 ` Patchwork
  2024-05-29 20:10 ` ✗ CI.BAT: failure " Patchwork
  2024-05-29 23:15 ` ✗ CI.FULL: " Patchwork
  12 siblings, 0 replies; 16+ messages in thread
From: Patchwork @ 2024-05-29 19:41 UTC (permalink / raw)
  To: Matthew Brost; +Cc: intel-xe

== Series Details ==

Series: Convert multiple bind ops to 1 job (rev3)
URL   : https://patchwork.freedesktop.org/series/133034/
State : success

== Summary ==

+ trap cleanup EXIT
+ KERNEL=/kernel
+ MT=/root/linux/maintainer-tools
+ git clone https://gitlab.freedesktop.org/drm/maintainer-tools /root/linux/maintainer-tools
Cloning into '/root/linux/maintainer-tools'...
warning: redirecting to https://gitlab.freedesktop.org/drm/maintainer-tools.git/
+ make -C /root/linux/maintainer-tools
make: Entering directory '/root/linux/maintainer-tools'
cc -O2 -g -Wextra -o remap-log remap-log.c
make: Leaving directory '/root/linux/maintainer-tools'
+ cd /kernel
+ git config --global --add safe.directory /kernel
+ /root/linux/maintainer-tools/dim sparse --fast 55d6179b96e0390025f2ba101c03b94b50cab7a1
Sparse version: 0.6.1 (Ubuntu: 0.6.1-2build1)
Fast mode used, each commit won't be checked separately.
Okay!

+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel



^ permalink raw reply	[flat|nested] 16+ messages in thread

* ✗ CI.BAT: failure for Convert multiple bind ops to 1 job (rev3)
  2024-05-29 18:31 [PATCH v3 0/5] Convert multiple bind ops to 1 job Matthew Brost
                   ` (10 preceding siblings ...)
  2024-05-29 19:41 ` ✓ CI.checksparse: success " Patchwork
@ 2024-05-29 20:10 ` Patchwork
  2024-05-29 23:15 ` ✗ CI.FULL: " Patchwork
  12 siblings, 0 replies; 16+ messages in thread
From: Patchwork @ 2024-05-29 20:10 UTC (permalink / raw)
  To: Matthew Brost; +Cc: intel-xe

[-- Attachment #1: Type: text/plain, Size: 2361 bytes --]

== Series Details ==

Series: Convert multiple bind ops to 1 job (rev3)
URL   : https://patchwork.freedesktop.org/series/133034/
State : failure

== Summary ==

CI Bug Log - changes from xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1_BAT -> xe-pw-133034v3_BAT
====================================================

Summary
-------

  **FAILURE**

  Serious unknown changes coming with xe-pw-133034v3_BAT absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in xe-pw-133034v3_BAT, please notify your bug team (I915-ci-infra@lists.freedesktop.org) to allow them
  to document this new failure mode, which will reduce false positives in CI.

  

Participating hosts (4 -> 5)
------------------------------

  Additional (1): bat-atsm-2 

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in xe-pw-133034v3_BAT:

### IGT changes ###

#### Possible regressions ####

  * igt@kms_addfb_basic@invalid-set-prop-any:
    - bat-atsm-2:         NOTRUN -> [SKIP][1] +61 other tests skip
   [1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/bat-atsm-2/igt@kms_addfb_basic@invalid-set-prop-any.html

  * igt@xe_module_load@load:
    - bat-atsm-2:         NOTRUN -> [FAIL][2]
   [2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/bat-atsm-2/igt@xe_module_load@load.html

  
Known issues
------------

  Here are the changes found in xe-pw-133034v3_BAT that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@xe_exec_fault_mode@twice-userptr-invalidate-imm:
    - bat-atsm-2:         NOTRUN -> [SKIP][3] ([Intel XE#1130]) +192 other tests skip
   [3]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/bat-atsm-2/igt@xe_exec_fault_mode@twice-userptr-invalidate-imm.html

  
  [Intel XE#1130]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1130


Build changes
-------------

  * Linux: xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1 -> xe-pw-133034v3

  IGT_7873: b9bcded9123ac56ce05748de6c4870fb49451b87 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1: 55d6179b96e0390025f2ba101c03b94b50cab7a1
  xe-pw-133034v3: 133034v3

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/index.html

[-- Attachment #2: Type: text/html, Size: 3005 bytes --]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* ✗ CI.FULL: failure for Convert multiple bind ops to 1 job (rev3)
  2024-05-29 18:31 [PATCH v3 0/5] Convert multiple bind ops to 1 job Matthew Brost
                   ` (11 preceding siblings ...)
  2024-05-29 20:10 ` ✗ CI.BAT: failure " Patchwork
@ 2024-05-29 23:15 ` Patchwork
  12 siblings, 0 replies; 16+ messages in thread
From: Patchwork @ 2024-05-29 23:15 UTC (permalink / raw)
  To: Matthew Brost; +Cc: intel-xe

[-- Attachment #1: Type: text/plain, Size: 70224 bytes --]

== Series Details ==

Series: Convert multiple bind ops to 1 job (rev3)
URL   : https://patchwork.freedesktop.org/series/133034/
State : failure

== Summary ==

CI Bug Log - changes from xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1_full -> xe-pw-133034v3_full
====================================================

Summary
-------

  **FAILURE**

  Serious unknown changes coming with xe-pw-133034v3_full absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in xe-pw-133034v3_full, please notify your bug team (I915-ci-infra@lists.freedesktop.org) to allow them
  to document this new failure mode, which will reduce false positives in CI.

  

Participating hosts (3 -> 3)
------------------------------

  No changes in participating hosts

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in xe-pw-133034v3_full:

### IGT changes ###

#### Possible regressions ####

  * igt@xe_live_ktest@xe_mocs@xe_live_mocs_kernel_kunit:
    - shard-adlp:         NOTRUN -> [FAIL][1] +2 other tests fail
   [1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-6/igt@xe_live_ktest@xe_mocs@xe_live_mocs_kernel_kunit.html

  
#### Warnings ####

  * igt@kms_flip@flip-vs-suspend:
    - shard-dg2-set2:     [INCOMPLETE][2] ([Intel XE#1195]) -> [INCOMPLETE][3] +1 other test incomplete
   [2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-463/igt@kms_flip@flip-vs-suspend.html
   [3]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-432/igt@kms_flip@flip-vs-suspend.html

  
#### Suppressed ####

  The following results come from untrusted machines, tests, or statuses.
  They do not affect the overall result.

  * igt@xe_exec_balancer@twice-cm-parallel-userptr-rebind:
    - {shard-lnl}:        NOTRUN -> [INCOMPLETE][4]
   [4]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-lnl-3/igt@xe_exec_balancer@twice-cm-parallel-userptr-rebind.html

  
Known issues
------------

  Here are the changes found in xe-pw-133034v3_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@kms_big_fb@x-tiled-16bpp-rotate-90:
    - shard-adlp:         NOTRUN -> [SKIP][5] ([Intel XE#1201] / [Intel XE#316]) +1 other test skip
   [5]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-6/igt@kms_big_fb@x-tiled-16bpp-rotate-90.html

  * igt@kms_big_fb@x-tiled-64bpp-rotate-270:
    - shard-dg2-set2:     NOTRUN -> [SKIP][6] ([Intel XE#1201] / [Intel XE#316])
   [6]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-464/igt@kms_big_fb@x-tiled-64bpp-rotate-270.html

  * igt@kms_big_fb@yf-tiled-8bpp-rotate-90:
    - shard-adlp:         NOTRUN -> [SKIP][7] ([Intel XE#1124] / [Intel XE#1201]) +6 other tests skip
   [7]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-6/igt@kms_big_fb@yf-tiled-8bpp-rotate-90.html

  * igt@kms_big_fb@yf-tiled-addfb-size-overflow:
    - shard-adlp:         NOTRUN -> [SKIP][8] ([Intel XE#1201] / [Intel XE#610])
   [8]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-1/igt@kms_big_fb@yf-tiled-addfb-size-overflow.html

  * igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-180-hflip:
    - shard-dg2-set2:     NOTRUN -> [SKIP][9] ([Intel XE#1124] / [Intel XE#1201]) +1 other test skip
   [9]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-464/igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-180-hflip.html

  * igt@kms_big_joiner@invalid-modeset-force-joiner:
    - shard-adlp:         NOTRUN -> [SKIP][10] ([Intel XE#1201]) +4 other tests skip
   [10]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-1/igt@kms_big_joiner@invalid-modeset-force-joiner.html

  * igt@kms_bw@linear-tiling-2-displays-1920x1080p:
    - shard-dg2-set2:     NOTRUN -> [SKIP][11] ([Intel XE#1201] / [Intel XE#367])
   [11]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-464/igt@kms_bw@linear-tiling-2-displays-1920x1080p.html

  * igt@kms_bw@linear-tiling-4-displays-2560x1440p:
    - shard-adlp:         NOTRUN -> [SKIP][12] ([Intel XE#1201] / [Intel XE#367]) +2 other tests skip
   [12]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-1/igt@kms_bw@linear-tiling-4-displays-2560x1440p.html

  * igt@kms_ccs@bad-pixel-format-y-tiled-gen12-mc-ccs@pipe-b-hdmi-a-1:
    - shard-adlp:         NOTRUN -> [SKIP][13] ([Intel XE#1201] / [Intel XE#787]) +26 other tests skip
   [13]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-1/igt@kms_ccs@bad-pixel-format-y-tiled-gen12-mc-ccs@pipe-b-hdmi-a-1.html

  * igt@kms_ccs@bad-pixel-format-y-tiled-gen12-mc-ccs@pipe-d-hdmi-a-1:
    - shard-adlp:         NOTRUN -> [SKIP][14] ([Intel XE#1201] / [Intel XE#455] / [Intel XE#787]) +17 other tests skip
   [14]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-1/igt@kms_ccs@bad-pixel-format-y-tiled-gen12-mc-ccs@pipe-d-hdmi-a-1.html

  * igt@kms_ccs@bad-rotation-90-y-tiled-gen12-mc-ccs@pipe-d-dp-4:
    - shard-dg2-set2:     NOTRUN -> [SKIP][15] ([Intel XE#1201] / [Intel XE#455] / [Intel XE#787]) +3 other tests skip
   [15]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-464/igt@kms_ccs@bad-rotation-90-y-tiled-gen12-mc-ccs@pipe-d-dp-4.html

  * igt@kms_ccs@crc-primary-basic-4-tiled-mtl-rc-ccs@pipe-b-hdmi-a-6:
    - shard-dg2-set2:     NOTRUN -> [SKIP][16] ([Intel XE#1201] / [Intel XE#787]) +13 other tests skip
   [16]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-464/igt@kms_ccs@crc-primary-basic-4-tiled-mtl-rc-ccs@pipe-b-hdmi-a-6.html

  * igt@kms_ccs@crc-primary-rotation-180-4-tiled-xe2-ccs:
    - shard-adlp:         NOTRUN -> [SKIP][17] ([Intel XE#1201] / [Intel XE#1252])
   [17]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-8/igt@kms_ccs@crc-primary-rotation-180-4-tiled-xe2-ccs.html

  * igt@kms_chamelium_color@ctm-limited-range:
    - shard-adlp:         NOTRUN -> [SKIP][18] ([Intel XE#1201] / [Intel XE#306])
   [18]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-8/igt@kms_chamelium_color@ctm-limited-range.html

  * igt@kms_chamelium_frames@hdmi-cmp-planar-formats:
    - shard-adlp:         NOTRUN -> [SKIP][19] ([Intel XE#1201] / [Intel XE#373]) +7 other tests skip
   [19]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-8/igt@kms_chamelium_frames@hdmi-cmp-planar-formats.html

  * igt@kms_chamelium_hpd@dp-hpd-after-suspend:
    - shard-dg2-set2:     NOTRUN -> [SKIP][20] ([Intel XE#1201] / [Intel XE#373])
   [20]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-464/igt@kms_chamelium_hpd@dp-hpd-after-suspend.html

  * igt@kms_content_protection@dp-mst-type-0:
    - shard-dg2-set2:     NOTRUN -> [SKIP][21] ([Intel XE#1201] / [Intel XE#307])
   [21]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-464/igt@kms_content_protection@dp-mst-type-0.html

  * igt@kms_cursor_crc@cursor-offscreen-32x32:
    - shard-adlp:         NOTRUN -> [SKIP][22] ([Intel XE#1201] / [Intel XE#455]) +14 other tests skip
   [22]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-1/igt@kms_cursor_crc@cursor-offscreen-32x32.html

  * igt@kms_cursor_crc@cursor-sliding-256x256@pipe-d-hdmi-a-6:
    - shard-dg2-set2:     NOTRUN -> [DMESG-WARN][23] ([Intel XE#1214] / [Intel XE#282]) +3 other tests dmesg-warn
   [23]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-464/igt@kms_cursor_crc@cursor-sliding-256x256@pipe-d-hdmi-a-6.html

  * igt@kms_cursor_edge_walk@128x128-top-edge@pipe-a-hdmi-a-6:
    - shard-dg2-set2:     [PASS][24] -> [INCOMPLETE][25] ([Intel XE#1195]) +1 other test incomplete
   [24]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-466/igt@kms_cursor_edge_walk@128x128-top-edge@pipe-a-hdmi-a-6.html
   [25]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-466/igt@kms_cursor_edge_walk@128x128-top-edge@pipe-a-hdmi-a-6.html

  * igt@kms_cursor_legacy@2x-cursor-vs-flip-atomic:
    - shard-adlp:         NOTRUN -> [SKIP][26] ([Intel XE#1201] / [Intel XE#309]) +2 other tests skip
   [26]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-6/igt@kms_cursor_legacy@2x-cursor-vs-flip-atomic.html

  * igt@kms_cursor_legacy@2x-cursor-vs-flip-legacy:
    - shard-dg2-set2:     [PASS][27] -> [DMESG-WARN][28] ([Intel XE#1214] / [Intel XE#282] / [Intel XE#910])
   [27]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-463/igt@kms_cursor_legacy@2x-cursor-vs-flip-legacy.html
   [28]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-435/igt@kms_cursor_legacy@2x-cursor-vs-flip-legacy.html

  * igt@kms_cursor_legacy@cursor-vs-flip-toggle:
    - shard-dg2-set2:     [PASS][29] -> [DMESG-WARN][30] ([Intel XE#1214] / [Intel XE#282]) +1 other test dmesg-warn
   [29]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-434/igt@kms_cursor_legacy@cursor-vs-flip-toggle.html
   [30]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-466/igt@kms_cursor_legacy@cursor-vs-flip-toggle.html

  * igt@kms_fbcon_fbt@fbc-suspend:
    - shard-adlp:         NOTRUN -> [DMESG-WARN][31] ([Intel XE#1191] / [Intel XE#1214])
   [31]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-6/igt@kms_fbcon_fbt@fbc-suspend.html

  * igt@kms_flip@2x-blocking-absolute-wf_vblank-interruptible:
    - shard-adlp:         NOTRUN -> [SKIP][32] ([Intel XE#1201] / [Intel XE#310]) +5 other tests skip
   [32]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-8/igt@kms_flip@2x-blocking-absolute-wf_vblank-interruptible.html

  * igt@kms_frontbuffer_tracking@drrs-1p-primscrn-spr-indfb-fullscreen:
    - shard-adlp:         NOTRUN -> [SKIP][33] ([Intel XE#1201] / [Intel XE#651]) +6 other tests skip
   [33]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-1/igt@kms_frontbuffer_tracking@drrs-1p-primscrn-spr-indfb-fullscreen.html

  * igt@kms_frontbuffer_tracking@fbc-1p-offscren-pri-shrfb-draw-blt:
    - shard-adlp:         NOTRUN -> [FAIL][34] ([Intel XE#1861]) +1 other test fail
   [34]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-8/igt@kms_frontbuffer_tracking@fbc-1p-offscren-pri-shrfb-draw-blt.html

  * igt@kms_frontbuffer_tracking@fbc-tiling-4:
    - shard-adlp:         NOTRUN -> [SKIP][35] ([Intel XE#1151] / [Intel XE#1201])
   [35]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-6/igt@kms_frontbuffer_tracking@fbc-tiling-4.html

  * igt@kms_frontbuffer_tracking@fbcdrrs-1p-primscrn-cur-indfb-draw-render:
    - shard-dg2-set2:     NOTRUN -> [SKIP][36] ([Intel XE#1201] / [Intel XE#651]) +5 other tests skip
   [36]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-464/igt@kms_frontbuffer_tracking@fbcdrrs-1p-primscrn-cur-indfb-draw-render.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-indfb-pgflip-blt:
    - shard-dg2-set2:     NOTRUN -> [SKIP][37] ([Intel XE#1201] / [Intel XE#653])
   [37]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-464/igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-indfb-pgflip-blt.html

  * igt@kms_frontbuffer_tracking@fbcpsr-indfb-scaledprimary:
    - shard-adlp:         NOTRUN -> [SKIP][38] ([Intel XE#1201] / [Intel XE#653]) +11 other tests skip
   [38]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-6/igt@kms_frontbuffer_tracking@fbcpsr-indfb-scaledprimary.html

  * igt@kms_frontbuffer_tracking@psr-2p-scndscrn-pri-shrfb-draw-blt:
    - shard-adlp:         NOTRUN -> [SKIP][39] ([Intel XE#1201] / [Intel XE#656]) +31 other tests skip
   [39]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-1/igt@kms_frontbuffer_tracking@psr-2p-scndscrn-pri-shrfb-draw-blt.html

  * igt@kms_plane@plane-panning-bottom-right-suspend@pipe-b:
    - shard-adlp:         [PASS][40] -> [DMESG-WARN][41] ([Intel XE#1214] / [Intel XE#1608])
   [40]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-adlp-2/igt@kms_plane@plane-panning-bottom-right-suspend@pipe-b.html
   [41]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-8/igt@kms_plane@plane-panning-bottom-right-suspend@pipe-b.html

  * igt@kms_plane_scaling@plane-downscale-factor-0-25-with-pixel-format:
    - shard-adlp:         NOTRUN -> [SKIP][42] ([Intel XE#1201] / [Intel XE#455] / [Intel XE#498]) +3 other tests skip
   [42]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-1/igt@kms_plane_scaling@plane-downscale-factor-0-25-with-pixel-format.html

  * igt@kms_plane_scaling@plane-downscale-factor-0-25-with-pixel-format@pipe-b-hdmi-a-1:
    - shard-adlp:         NOTRUN -> [SKIP][43] ([Intel XE#1201] / [Intel XE#498]) +5 other tests skip
   [43]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-1/igt@kms_plane_scaling@plane-downscale-factor-0-25-with-pixel-format@pipe-b-hdmi-a-1.html

  * igt@kms_plane_scaling@planes-downscale-factor-0-25-upscale-factor-0-25:
    - shard-dg2-set2:     NOTRUN -> [SKIP][44] ([Intel XE#1201] / [Intel XE#305] / [Intel XE#455]) +1 other test skip
   [44]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-464/igt@kms_plane_scaling@planes-downscale-factor-0-25-upscale-factor-0-25.html

  * igt@kms_plane_scaling@planes-downscale-factor-0-25-upscale-factor-0-25@pipe-c-hdmi-a-6:
    - shard-dg2-set2:     NOTRUN -> [SKIP][45] ([Intel XE#1201] / [Intel XE#305]) +2 other tests skip
   [45]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-464/igt@kms_plane_scaling@planes-downscale-factor-0-25-upscale-factor-0-25@pipe-c-hdmi-a-6.html

  * igt@kms_plane_scaling@planes-upscale-20x20-downscale-factor-0-25:
    - shard-adlp:         NOTRUN -> [SKIP][46] ([Intel XE#1201] / [Intel XE#305] / [Intel XE#455]) +1 other test skip
   [46]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-1/igt@kms_plane_scaling@planes-upscale-20x20-downscale-factor-0-25.html

  * igt@kms_plane_scaling@planes-upscale-20x20-downscale-factor-0-25@pipe-b-hdmi-a-1:
    - shard-adlp:         NOTRUN -> [SKIP][47] ([Intel XE#1201] / [Intel XE#305]) +2 other tests skip
   [47]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-1/igt@kms_plane_scaling@planes-upscale-20x20-downscale-factor-0-25@pipe-b-hdmi-a-1.html

  * igt@kms_pm_backlight@fade-with-suspend:
    - shard-adlp:         NOTRUN -> [SKIP][48] ([Intel XE#1201] / [Intel XE#870])
   [48]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-1/igt@kms_pm_backlight@fade-with-suspend.html

  * igt@kms_pm_dc@dc9-dpms:
    - shard-adlp:         NOTRUN -> [SKIP][49] ([Intel XE#1201] / [Intel XE#734])
   [49]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-8/igt@kms_pm_dc@dc9-dpms.html

  * igt@kms_psr2_su@page_flip-p010:
    - shard-adlp:         NOTRUN -> [SKIP][50] ([Intel XE#1122] / [Intel XE#1201])
   [50]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-6/igt@kms_psr2_su@page_flip-p010.html

  * igt@kms_psr2_su@page_flip-xrgb8888:
    - shard-dg2-set2:     NOTRUN -> [SKIP][51] ([Intel XE#1122] / [Intel XE#1201])
   [51]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-464/igt@kms_psr2_su@page_flip-xrgb8888.html

  * igt@kms_psr@psr-suspend:
    - shard-adlp:         NOTRUN -> [SKIP][52] ([Intel XE#1201] / [Intel XE#929]) +10 other tests skip
   [52]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-6/igt@kms_psr@psr-suspend.html

  * igt@kms_psr@psr2-cursor-plane-move:
    - shard-dg2-set2:     NOTRUN -> [SKIP][53] ([Intel XE#1201] / [Intel XE#929]) +3 other tests skip
   [53]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-464/igt@kms_psr@psr2-cursor-plane-move.html

  * igt@kms_rotation_crc@multiplane-rotation-cropping-top:
    - shard-adlp:         NOTRUN -> [FAIL][54] ([Intel XE#1874]) +2 other tests fail
   [54]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-1/igt@kms_rotation_crc@multiplane-rotation-cropping-top.html

  * igt@kms_rotation_crc@primary-yf-tiled-reflect-x-90:
    - shard-adlp:         NOTRUN -> [SKIP][55] ([Intel XE#1201] / [Intel XE#327])
   [55]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-1/igt@kms_rotation_crc@primary-yf-tiled-reflect-x-90.html

  * igt@kms_scaling_modes@scaling-mode-full-aspect:
    - shard-dg2-set2:     NOTRUN -> [SKIP][56] ([Intel XE#1201] / [Intel XE#455])
   [56]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-464/igt@kms_scaling_modes@scaling-mode-full-aspect.html

  * igt@kms_tiled_display@basic-test-pattern:
    - shard-adlp:         NOTRUN -> [SKIP][57] ([Intel XE#1201] / [Intel XE#362])
   [57]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-1/igt@kms_tiled_display@basic-test-pattern.html

  * igt@xe_ccs@block-copy-compressed-inc-dimension:
    - shard-adlp:         NOTRUN -> [SKIP][58] ([Intel XE#1201] / [Intel XE#455] / [Intel XE#488]) +1 other test skip
   [58]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-6/igt@xe_ccs@block-copy-compressed-inc-dimension.html

  * igt@xe_compute@ccs-mode-compute-kernel:
    - shard-adlp:         NOTRUN -> [SKIP][59] ([Intel XE#1201] / [Intel XE#1447])
   [59]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-6/igt@xe_compute@ccs-mode-compute-kernel.html

  * igt@xe_copy_basic@mem-copy-linear-0xfffe:
    - shard-adlp:         NOTRUN -> [SKIP][60] ([Intel XE#1123] / [Intel XE#1201])
   [60]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-8/igt@xe_copy_basic@mem-copy-linear-0xfffe.html

  * igt@xe_evict@evict-beng-threads-large:
    - shard-dg2-set2:     [PASS][61] -> [TIMEOUT][62] ([Intel XE#1473])
   [61]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-464/igt@xe_evict@evict-beng-threads-large.html
   [62]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-435/igt@xe_evict@evict-beng-threads-large.html

  * igt@xe_evict@evict-mixed-many-threads-small:
    - shard-adlp:         NOTRUN -> [SKIP][63] ([Intel XE#1201] / [Intel XE#261]) +3 other tests skip
   [63]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-1/igt@xe_evict@evict-mixed-many-threads-small.html

  * igt@xe_evict@evict-small-cm:
    - shard-adlp:         NOTRUN -> [SKIP][64] ([Intel XE#1201] / [Intel XE#261] / [Intel XE#688]) +1 other test skip
   [64]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-6/igt@xe_evict@evict-small-cm.html

  * igt@xe_evict@evict-threads-large:
    - shard-dg2-set2:     [PASS][65] -> [INCOMPLETE][66] ([Intel XE#1195] / [Intel XE#1473])
   [65]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-432/igt@xe_evict@evict-threads-large.html
   [66]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-433/igt@xe_evict@evict-threads-large.html

  * igt@xe_evict_ccs@evict-overcommit-parallel-nofree-reopen:
    - shard-adlp:         NOTRUN -> [SKIP][67] ([Intel XE#1201] / [Intel XE#688])
   [67]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-6/igt@xe_evict_ccs@evict-overcommit-parallel-nofree-reopen.html

  * igt@xe_exec_basic@multigpu-many-execqueues-many-vm-userptr-rebind:
    - shard-adlp:         NOTRUN -> [SKIP][68] ([Intel XE#1201] / [Intel XE#1392]) +5 other tests skip
   [68]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-1/igt@xe_exec_basic@multigpu-many-execqueues-many-vm-userptr-rebind.html

  * igt@xe_exec_fault_mode@many-execqueues-bindexecqueue-userptr-invalidate-race-prefetch:
    - shard-dg2-set2:     NOTRUN -> [SKIP][69] ([Intel XE#1201] / [Intel XE#288]) +3 other tests skip
   [69]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-464/igt@xe_exec_fault_mode@many-execqueues-bindexecqueue-userptr-invalidate-race-prefetch.html

  * igt@xe_exec_fault_mode@many-execqueues-bindexecqueue-userptr-rebind-prefetch:
    - shard-adlp:         NOTRUN -> [SKIP][70] ([Intel XE#1201] / [Intel XE#288]) +16 other tests skip
   [70]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-6/igt@xe_exec_fault_mode@many-execqueues-bindexecqueue-userptr-rebind-prefetch.html

  * igt@xe_exec_reset@cm-gt-reset:
    - shard-adlp:         NOTRUN -> [FAIL][71] ([Intel XE#1068])
   [71]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-1/igt@xe_exec_reset@cm-gt-reset.html

  * igt@xe_pm@d3cold-basic-exec:
    - shard-adlp:         NOTRUN -> [SKIP][72] ([Intel XE#1201] / [Intel XE#366])
   [72]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-6/igt@xe_pm@d3cold-basic-exec.html

  * igt@xe_pm@s2idle-basic:
    - shard-adlp:         NOTRUN -> [INCOMPLETE][73] ([Intel XE#1044] / [Intel XE#1195] / [Intel XE#1358])
   [73]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-1/igt@xe_pm@s2idle-basic.html

  * igt@xe_pm@s2idle-vm-bind-userptr:
    - shard-dg2-set2:     [PASS][74] -> [INCOMPLETE][75] ([Intel XE#1195] / [Intel XE#1694])
   [74]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-464/igt@xe_pm@s2idle-vm-bind-userptr.html
   [75]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-466/igt@xe_pm@s2idle-vm-bind-userptr.html

  * igt@xe_query@multigpu-query-hwconfig:
    - shard-adlp:         NOTRUN -> [SKIP][76] ([Intel XE#1201] / [Intel XE#944]) +1 other test skip
   [76]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-1/igt@xe_query@multigpu-query-hwconfig.html

  * igt@xe_vm@mmap-style-bind-either-side-partial-split-page-hammer:
    - shard-adlp:         NOTRUN -> [INCOMPLETE][77] ([Intel XE#1195])
   [77]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-1/igt@xe_vm@mmap-style-bind-either-side-partial-split-page-hammer.html

  * igt@xe_wedged@basic-wedged:
    - shard-adlp:         NOTRUN -> [DMESG-WARN][78] ([Intel XE#1214] / [Intel XE#1760])
   [78]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-8/igt@xe_wedged@basic-wedged.html

  
#### Possible fixes ####

  * igt@kms_cursor_legacy@cursorb-vs-flipa-atomic:
    - shard-dg2-set2:     [DMESG-WARN][79] ([Intel XE#1214] / [Intel XE#282]) -> [PASS][80] +2 other tests pass
   [79]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-463/igt@kms_cursor_legacy@cursorb-vs-flipa-atomic.html
   [80]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-434/igt@kms_cursor_legacy@cursorb-vs-flipa-atomic.html

  * igt@kms_flip@2x-plain-flip-ts-check:
    - shard-dg2-set2:     [FAIL][81] ([Intel XE#480] / [Intel XE#886]) -> [PASS][82]
   [81]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-433/igt@kms_flip@2x-plain-flip-ts-check.html
   [82]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-436/igt@kms_flip@2x-plain-flip-ts-check.html

  * igt@kms_flip@2x-plain-flip-ts-check@ab-hdmi-a6-dp4:
    - shard-dg2-set2:     [FAIL][83] ([Intel XE#886]) -> [PASS][84]
   [83]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-433/igt@kms_flip@2x-plain-flip-ts-check@ab-hdmi-a6-dp4.html
   [84]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-436/igt@kms_flip@2x-plain-flip-ts-check@ab-hdmi-a6-dp4.html

  * igt@kms_pm_rpm@system-suspend-modeset:
    - shard-dg2-set2:     [DMESG-WARN][85] ([Intel XE#1162] / [Intel XE#1214]) -> [PASS][86]
   [85]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-436/igt@kms_pm_rpm@system-suspend-modeset.html
   [86]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-432/igt@kms_pm_rpm@system-suspend-modeset.html

  * igt@kms_universal_plane@cursor-fb-leak:
    - shard-dg2-set2:     [FAIL][87] ([Intel XE#771] / [Intel XE#899]) -> [PASS][88]
   [87]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-432/igt@kms_universal_plane@cursor-fb-leak.html
   [88]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-433/igt@kms_universal_plane@cursor-fb-leak.html

  * igt@kms_universal_plane@cursor-fb-leak@pipe-b-dp-4:
    - shard-dg2-set2:     [FAIL][89] ([Intel XE#899]) -> [PASS][90] +1 other test pass
   [89]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-432/igt@kms_universal_plane@cursor-fb-leak@pipe-b-dp-4.html
   [90]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-433/igt@kms_universal_plane@cursor-fb-leak@pipe-b-dp-4.html

  * igt@kms_universal_plane@cursor-fb-leak@pipe-c-edp-1:
    - {shard-lnl}:        [FAIL][91] ([Intel XE#899]) -> [PASS][92] +1 other test pass
   [91]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-lnl-3/igt@kms_universal_plane@cursor-fb-leak@pipe-c-edp-1.html
   [92]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-lnl-1/igt@kms_universal_plane@cursor-fb-leak@pipe-c-edp-1.html

  * igt@xe_ccs@suspend-resume@tile64-compressed-compfmt0-vram01-vram01:
    - shard-dg2-set2:     [INCOMPLETE][93] ([Intel XE#1195]) -> [PASS][94] +1 other test pass
   [93]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-466/igt@xe_ccs@suspend-resume@tile64-compressed-compfmt0-vram01-vram01.html
   [94]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-464/igt@xe_ccs@suspend-resume@tile64-compressed-compfmt0-vram01-vram01.html

  * igt@xe_evict@evict-beng-cm-threads-large:
    - shard-dg2-set2:     [TIMEOUT][95] ([Intel XE#1473] / [Intel XE#392]) -> [PASS][96] +1 other test pass
   [95]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-432/igt@xe_evict@evict-beng-cm-threads-large.html
   [96]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-463/igt@xe_evict@evict-beng-cm-threads-large.html

  * igt@xe_exec_fault_mode@many-userptr-invalidate-race:
    - {shard-lnl}:        [ABORT][97] ([Intel XE#1761]) -> [PASS][98] +3 other tests pass
   [97]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-lnl-2/igt@xe_exec_fault_mode@many-userptr-invalidate-race.html
   [98]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-lnl-5/igt@xe_exec_fault_mode@many-userptr-invalidate-race.html

  * igt@xe_gt_freq@freq_fixed_exec:
    - shard-dg2-set2:     [FAIL][99] -> [PASS][100]
   [99]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-435/igt@xe_gt_freq@freq_fixed_exec.html
   [100]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-435/igt@xe_gt_freq@freq_fixed_exec.html

  * igt@xe_pm@s4-basic:
    - {shard-lnl}:        [ABORT][101] ([Intel XE#1358] / [Intel XE#1607]) -> [PASS][102]
   [101]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-lnl-2/igt@xe_pm@s4-basic.html
   [102]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-lnl-5/igt@xe_pm@s4-basic.html

  * igt@xe_pm@s4-multiple-execs:
    - shard-adlp:         [ABORT][103] ([Intel XE#1358] / [Intel XE#1794]) -> [PASS][104]
   [103]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-adlp-9/igt@xe_pm@s4-multiple-execs.html
   [104]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-1/igt@xe_pm@s4-multiple-execs.html

  * igt@xe_pm@s4-vm-bind-userptr:
    - shard-adlp:         [DMESG-WARN][105] ([Intel XE#1214]) -> [PASS][106]
   [105]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-adlp-6/igt@xe_pm@s4-vm-bind-userptr.html
   [106]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-2/igt@xe_pm@s4-vm-bind-userptr.html

  
#### Warnings ####

  * igt@kms_big_fb@4-tiled-8bpp-rotate-90:
    - shard-dg2-set2:     [SKIP][107] ([Intel XE#316]) -> [SKIP][108] ([Intel XE#1201] / [Intel XE#316]) +3 other tests skip
   [107]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-432/igt@kms_big_fb@4-tiled-8bpp-rotate-90.html
   [108]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-433/igt@kms_big_fb@4-tiled-8bpp-rotate-90.html

  * igt@kms_big_fb@x-tiled-64bpp-rotate-90:
    - shard-dg2-set2:     [SKIP][109] ([Intel XE#1201] / [Intel XE#316]) -> [SKIP][110] ([Intel XE#316]) +5 other tests skip
   [109]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-435/igt@kms_big_fb@x-tiled-64bpp-rotate-90.html
   [110]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-432/igt@kms_big_fb@x-tiled-64bpp-rotate-90.html

  * igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-0-hflip-async-flip:
    - shard-adlp:         [DMESG-FAIL][111] ([Intel XE#324]) -> [FAIL][112] ([Intel XE#1231])
   [111]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-adlp-2/igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-0-hflip-async-flip.html
   [112]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-8/igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-0-hflip-async-flip.html

  * igt@kms_big_fb@yf-tiled-16bpp-rotate-90:
    - shard-dg2-set2:     [SKIP][113] ([Intel XE#1124] / [Intel XE#1201]) -> [SKIP][114] ([Intel XE#1124]) +7 other tests skip
   [113]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-463/igt@kms_big_fb@yf-tiled-16bpp-rotate-90.html
   [114]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-432/igt@kms_big_fb@yf-tiled-16bpp-rotate-90.html

  * igt@kms_big_fb@yf-tiled-64bpp-rotate-90:
    - shard-dg2-set2:     [SKIP][115] ([Intel XE#1124]) -> [SKIP][116] ([Intel XE#1124] / [Intel XE#1201]) +7 other tests skip
   [115]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-432/igt@kms_big_fb@yf-tiled-64bpp-rotate-90.html
   [116]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-433/igt@kms_big_fb@yf-tiled-64bpp-rotate-90.html

  * igt@kms_big_fb@yf-tiled-addfb-size-overflow:
    - shard-dg2-set2:     [SKIP][117] ([Intel XE#1201] / [Intel XE#610]) -> [SKIP][118] ([Intel XE#610])
   [117]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-435/igt@kms_big_fb@yf-tiled-addfb-size-overflow.html
   [118]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-432/igt@kms_big_fb@yf-tiled-addfb-size-overflow.html

  * igt@kms_bw@linear-tiling-2-displays-2160x1440p:
    - shard-dg2-set2:     [SKIP][119] ([Intel XE#367]) -> [SKIP][120] ([Intel XE#1201] / [Intel XE#367]) +2 other tests skip
   [119]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-432/igt@kms_bw@linear-tiling-2-displays-2160x1440p.html
   [120]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-433/igt@kms_bw@linear-tiling-2-displays-2160x1440p.html

  * igt@kms_bw@linear-tiling-4-displays-2560x1440p:
    - shard-dg2-set2:     [SKIP][121] ([Intel XE#1201] / [Intel XE#367]) -> [SKIP][122] ([Intel XE#367])
   [121]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-435/igt@kms_bw@linear-tiling-4-displays-2560x1440p.html
   [122]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-432/igt@kms_bw@linear-tiling-4-displays-2560x1440p.html

  * igt@kms_ccs@bad-pixel-format-4-tiled-mtl-rc-ccs-cc@pipe-a-hdmi-a-6:
    - shard-dg2-set2:     [SKIP][123] ([Intel XE#787]) -> [SKIP][124] ([Intel XE#1201] / [Intel XE#787]) +48 other tests skip
   [123]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-432/igt@kms_ccs@bad-pixel-format-4-tiled-mtl-rc-ccs-cc@pipe-a-hdmi-a-6.html
   [124]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-463/igt@kms_ccs@bad-pixel-format-4-tiled-mtl-rc-ccs-cc@pipe-a-hdmi-a-6.html

  * igt@kms_ccs@crc-primary-rotation-180-4-tiled-xe2-ccs:
    - shard-dg2-set2:     [SKIP][125] ([Intel XE#1252]) -> [SKIP][126] ([Intel XE#1201] / [Intel XE#1252])
   [125]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-432/igt@kms_ccs@crc-primary-rotation-180-4-tiled-xe2-ccs.html
   [126]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-433/igt@kms_ccs@crc-primary-rotation-180-4-tiled-xe2-ccs.html

  * igt@kms_ccs@crc-primary-rotation-180-y-tiled-ccs@pipe-d-dp-4:
    - shard-dg2-set2:     [SKIP][127] ([Intel XE#455] / [Intel XE#787]) -> [SKIP][128] ([Intel XE#1201] / [Intel XE#455] / [Intel XE#787]) +13 other tests skip
   [127]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-432/igt@kms_ccs@crc-primary-rotation-180-y-tiled-ccs@pipe-d-dp-4.html
   [128]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-463/igt@kms_ccs@crc-primary-rotation-180-y-tiled-ccs@pipe-d-dp-4.html

  * igt@kms_ccs@crc-sprite-planes-basic-4-tiled-mtl-rc-ccs-cc@pipe-c-hdmi-a-6:
    - shard-dg2-set2:     [SKIP][129] ([Intel XE#1201] / [Intel XE#787]) -> [SKIP][130] ([Intel XE#787]) +83 other tests skip
   [129]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-435/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-mtl-rc-ccs-cc@pipe-c-hdmi-a-6.html
   [130]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-432/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-mtl-rc-ccs-cc@pipe-c-hdmi-a-6.html

  * igt@kms_ccs@missing-ccs-buffer-4-tiled-mtl-mc-ccs@pipe-d-dp-4:
    - shard-dg2-set2:     [SKIP][131] ([Intel XE#1201] / [Intel XE#455] / [Intel XE#787]) -> [SKIP][132] ([Intel XE#455] / [Intel XE#787]) +23 other tests skip
   [131]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-436/igt@kms_ccs@missing-ccs-buffer-4-tiled-mtl-mc-ccs@pipe-d-dp-4.html
   [132]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-432/igt@kms_ccs@missing-ccs-buffer-4-tiled-mtl-mc-ccs@pipe-d-dp-4.html

  * igt@kms_chamelium_color@ctm-limited-range:
    - shard-dg2-set2:     [SKIP][133] ([Intel XE#306]) -> [SKIP][134] ([Intel XE#1201] / [Intel XE#306]) +1 other test skip
   [133]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-432/igt@kms_chamelium_color@ctm-limited-range.html
   [134]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-433/igt@kms_chamelium_color@ctm-limited-range.html

  * igt@kms_chamelium_color@degamma:
    - shard-dg2-set2:     [SKIP][135] ([Intel XE#1201] / [Intel XE#306]) -> [SKIP][136] ([Intel XE#306]) +2 other tests skip
   [135]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-463/igt@kms_chamelium_color@degamma.html
   [136]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-432/igt@kms_chamelium_color@degamma.html

  * igt@kms_chamelium_hpd@hdmi-hpd-with-enabled-mode:
    - shard-dg2-set2:     [SKIP][137] ([Intel XE#373]) -> [SKIP][138] ([Intel XE#1201] / [Intel XE#373]) +6 other tests skip
   [137]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-432/igt@kms_chamelium_hpd@hdmi-hpd-with-enabled-mode.html
   [138]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-463/igt@kms_chamelium_hpd@hdmi-hpd-with-enabled-mode.html

  * igt@kms_chamelium_hpd@vga-hpd-for-each-pipe:
    - shard-dg2-set2:     [SKIP][139] ([Intel XE#1201] / [Intel XE#373]) -> [SKIP][140] ([Intel XE#373]) +8 other tests skip
   [139]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-436/igt@kms_chamelium_hpd@vga-hpd-for-each-pipe.html
   [140]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-432/igt@kms_chamelium_hpd@vga-hpd-for-each-pipe.html

  * igt@kms_cursor_crc@cursor-dpms:
    - shard-dg2-set2:     [DMESG-WARN][141] ([Intel XE#282]) -> [DMESG-WARN][142] ([Intel XE#1214] / [Intel XE#282]) +9 other tests dmesg-warn
   [141]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-432/igt@kms_cursor_crc@cursor-dpms.html
   [142]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-463/igt@kms_cursor_crc@cursor-dpms.html

  * igt@kms_cursor_crc@cursor-rapid-movement-512x170:
    - shard-dg2-set2:     [SKIP][143] ([Intel XE#1201] / [Intel XE#308]) -> [SKIP][144] ([Intel XE#308])
   [143]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-436/igt@kms_cursor_crc@cursor-rapid-movement-512x170.html
   [144]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-432/igt@kms_cursor_crc@cursor-rapid-movement-512x170.html

  * igt@kms_cursor_edge_walk@256x256-top-bottom:
    - shard-dg2-set2:     [DMESG-WARN][145] ([Intel XE#1214] / [Intel XE#282]) -> [DMESG-WARN][146] ([Intel XE#282]) +4 other tests dmesg-warn
   [145]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-436/igt@kms_cursor_edge_walk@256x256-top-bottom.html
   [146]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-432/igt@kms_cursor_edge_walk@256x256-top-bottom.html

  * igt@kms_cursor_legacy@2x-long-cursor-vs-flip-legacy:
    - shard-dg2-set2:     [DMESG-WARN][147] ([Intel XE#1214] / [Intel XE#282] / [Intel XE#910]) -> [DMESG-WARN][148] ([Intel XE#282] / [Intel XE#910])
   [147]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-436/igt@kms_cursor_legacy@2x-long-cursor-vs-flip-legacy.html
   [148]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-432/igt@kms_cursor_legacy@2x-long-cursor-vs-flip-legacy.html

  * igt@kms_display_modes@mst-extended-mode-negative:
    - shard-dg2-set2:     [SKIP][149] ([Intel XE#1201] / [Intel XE#307]) -> [SKIP][150] ([Intel XE#307])
   [149]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-436/igt@kms_display_modes@mst-extended-mode-negative.html
   [150]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-432/igt@kms_display_modes@mst-extended-mode-negative.html

  * igt@kms_flip@flip-vs-suspend-interruptible@a-hdmi-a6:
    - shard-dg2-set2:     [DMESG-WARN][151] ([Intel XE#1162]) -> [DMESG-WARN][152] ([Intel XE#1162] / [Intel XE#1214]) +1 other test dmesg-warn
   [151]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-432/igt@kms_flip@flip-vs-suspend-interruptible@a-hdmi-a6.html
   [152]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-433/igt@kms_flip@flip-vs-suspend-interruptible@a-hdmi-a6.html

  * igt@kms_flip@flip-vs-suspend@a-hdmi-a6:
    - shard-dg2-set2:     [DMESG-WARN][153] ([Intel XE#1162] / [Intel XE#1214]) -> [DMESG-WARN][154] ([Intel XE#1162]) +1 other test dmesg-warn
   [153]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-463/igt@kms_flip@flip-vs-suspend@a-hdmi-a6.html
   [154]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-432/igt@kms_flip@flip-vs-suspend@a-hdmi-a6.html

  * igt@kms_frontbuffer_tracking@drrs-2p-primscrn-cur-indfb-draw-mmap-wc:
    - shard-dg2-set2:     [SKIP][155] ([Intel XE#651]) -> [SKIP][156] ([Intel XE#1201] / [Intel XE#651]) +20 other tests skip
   [155]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-432/igt@kms_frontbuffer_tracking@drrs-2p-primscrn-cur-indfb-draw-mmap-wc.html
   [156]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-433/igt@kms_frontbuffer_tracking@drrs-2p-primscrn-cur-indfb-draw-mmap-wc.html

  * igt@kms_frontbuffer_tracking@fbcdrrs-1p-rte:
    - shard-dg2-set2:     [SKIP][157] ([Intel XE#1201] / [Intel XE#651]) -> [SKIP][158] ([Intel XE#651]) +25 other tests skip
   [157]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-436/igt@kms_frontbuffer_tracking@fbcdrrs-1p-rte.html
   [158]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-432/igt@kms_frontbuffer_tracking@fbcdrrs-1p-rte.html

  * igt@kms_frontbuffer_tracking@fbcpsr-tiling-y:
    - shard-dg2-set2:     [SKIP][159] ([Intel XE#658]) -> [SKIP][160] ([Intel XE#1201] / [Intel XE#658])
   [159]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-432/igt@kms_frontbuffer_tracking@fbcpsr-tiling-y.html
   [160]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-463/igt@kms_frontbuffer_tracking@fbcpsr-tiling-y.html

  * igt@kms_frontbuffer_tracking@psr-2p-primscrn-shrfb-msflip-blt:
    - shard-dg2-set2:     [SKIP][161] ([Intel XE#1201] / [Intel XE#653]) -> [SKIP][162] ([Intel XE#653]) +25 other tests skip
   [161]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-435/igt@kms_frontbuffer_tracking@psr-2p-primscrn-shrfb-msflip-blt.html
   [162]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-432/igt@kms_frontbuffer_tracking@psr-2p-primscrn-shrfb-msflip-blt.html

  * igt@kms_frontbuffer_tracking@psr-suspend:
    - shard-dg2-set2:     [SKIP][163] ([Intel XE#653]) -> [SKIP][164] ([Intel XE#1201] / [Intel XE#653]) +20 other tests skip
   [163]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-432/igt@kms_frontbuffer_tracking@psr-suspend.html
   [164]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-463/igt@kms_frontbuffer_tracking@psr-suspend.html

  * igt@kms_getfb@getfb-reject-ccs:
    - shard-dg2-set2:     [SKIP][165] ([Intel XE#605]) -> [SKIP][166] ([Intel XE#1201] / [Intel XE#605])
   [165]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-432/igt@kms_getfb@getfb-reject-ccs.html
   [166]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-463/igt@kms_getfb@getfb-reject-ccs.html

  * igt@kms_hdr@invalid-hdr:
    - shard-dg2-set2:     [SKIP][167] ([Intel XE#455]) -> [SKIP][168] ([Intel XE#1201] / [Intel XE#455]) +16 other tests skip
   [167]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-432/igt@kms_hdr@invalid-hdr.html
   [168]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-433/igt@kms_hdr@invalid-hdr.html

  * igt@kms_plane@plane-panning-bottom-right-suspend:
    - shard-adlp:         [DMESG-WARN][169] ([Intel XE#1191] / [Intel XE#1214]) -> [DMESG-WARN][170] ([Intel XE#1191] / [Intel XE#1214] / [Intel XE#1608])
   [169]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-adlp-2/igt@kms_plane@plane-panning-bottom-right-suspend.html
   [170]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-8/igt@kms_plane@plane-panning-bottom-right-suspend.html

  * igt@kms_plane@plane-panning-bottom-right-suspend@pipe-a:
    - shard-dg2-set2:     [DMESG-FAIL][171] ([Intel XE#1162]) -> [FAIL][172] ([Intel XE#616])
   [171]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-432/igt@kms_plane@plane-panning-bottom-right-suspend@pipe-a.html
   [172]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-463/igt@kms_plane@plane-panning-bottom-right-suspend@pipe-a.html

  * igt@kms_plane@plane-panning-bottom-right-suspend@pipe-b:
    - shard-dg2-set2:     [FAIL][173] ([Intel XE#616]) -> [DMESG-FAIL][174] ([Intel XE#1162])
   [173]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-432/igt@kms_plane@plane-panning-bottom-right-suspend@pipe-b.html
   [174]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-463/igt@kms_plane@plane-panning-bottom-right-suspend@pipe-b.html

  * igt@kms_plane_scaling@plane-downscale-factor-0-25-with-modifiers:
    - shard-dg2-set2:     [SKIP][175] ([Intel XE#1201] / [Intel XE#455] / [Intel XE#498]) -> [SKIP][176] ([Intel XE#455] / [Intel XE#498]) +3 other tests skip
   [175]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-435/igt@kms_plane_scaling@plane-downscale-factor-0-25-with-modifiers.html
   [176]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-432/igt@kms_plane_scaling@plane-downscale-factor-0-25-with-modifiers.html

  * igt@kms_plane_scaling@plane-downscale-factor-0-25-with-modifiers@pipe-a-hdmi-a-6:
    - shard-dg2-set2:     [SKIP][177] ([Intel XE#1201] / [Intel XE#498]) -> [SKIP][178] ([Intel XE#498]) +5 other tests skip
   [177]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-435/igt@kms_plane_scaling@plane-downscale-factor-0-25-with-modifiers@pipe-a-hdmi-a-6.html
   [178]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-432/igt@kms_plane_scaling@plane-downscale-factor-0-25-with-modifiers@pipe-a-hdmi-a-6.html

  * igt@kms_plane_scaling@plane-downscale-factor-0-25-with-rotation:
    - shard-dg2-set2:     [SKIP][179] ([Intel XE#455] / [Intel XE#498]) -> [SKIP][180] ([Intel XE#1201] / [Intel XE#455] / [Intel XE#498]) +1 other test skip
   [179]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-432/igt@kms_plane_scaling@plane-downscale-factor-0-25-with-rotation.html
   [180]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-433/igt@kms_plane_scaling@plane-downscale-factor-0-25-with-rotation.html

  * igt@kms_plane_scaling@plane-downscale-factor-0-25-with-rotation@pipe-b-hdmi-a-6:
    - shard-dg2-set2:     [SKIP][181] ([Intel XE#498]) -> [SKIP][182] ([Intel XE#1201] / [Intel XE#498]) +2 other tests skip
   [181]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-432/igt@kms_plane_scaling@plane-downscale-factor-0-25-with-rotation@pipe-b-hdmi-a-6.html
   [182]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-433/igt@kms_plane_scaling@plane-downscale-factor-0-25-with-rotation@pipe-b-hdmi-a-6.html

  * igt@kms_plane_scaling@planes-upscale-20x20-downscale-factor-0-25:
    - shard-dg2-set2:     [SKIP][183] ([Intel XE#1201] / [Intel XE#305] / [Intel XE#455]) -> [SKIP][184] ([Intel XE#305] / [Intel XE#455])
   [183]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-435/igt@kms_plane_scaling@planes-upscale-20x20-downscale-factor-0-25.html
   [184]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-432/igt@kms_plane_scaling@planes-upscale-20x20-downscale-factor-0-25.html

  * igt@kms_plane_scaling@planes-upscale-20x20-downscale-factor-0-25@pipe-c-hdmi-a-6:
    - shard-dg2-set2:     [SKIP][185] ([Intel XE#1201] / [Intel XE#305]) -> [SKIP][186] ([Intel XE#305]) +2 other tests skip
   [185]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-435/igt@kms_plane_scaling@planes-upscale-20x20-downscale-factor-0-25@pipe-c-hdmi-a-6.html
   [186]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-432/igt@kms_plane_scaling@planes-upscale-20x20-downscale-factor-0-25@pipe-c-hdmi-a-6.html

  * igt@kms_plane_scaling@planes-upscale-20x20-downscale-factor-0-25@pipe-d-hdmi-a-6:
    - shard-dg2-set2:     [SKIP][187] ([Intel XE#1201] / [Intel XE#455]) -> [SKIP][188] ([Intel XE#455]) +14 other tests skip
   [187]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-435/igt@kms_plane_scaling@planes-upscale-20x20-downscale-factor-0-25@pipe-d-hdmi-a-6.html
   [188]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-432/igt@kms_plane_scaling@planes-upscale-20x20-downscale-factor-0-25@pipe-d-hdmi-a-6.html

  * igt@kms_pm_backlight@fade-with-dpms:
    - shard-dg2-set2:     [SKIP][189] ([Intel XE#870]) -> [SKIP][190] ([Intel XE#1201] / [Intel XE#870])
   [189]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-432/igt@kms_pm_backlight@fade-with-dpms.html
   [190]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-433/igt@kms_pm_backlight@fade-with-dpms.html

  * igt@kms_pm_backlight@fade-with-suspend:
    - shard-dg2-set2:     [SKIP][191] ([Intel XE#1201] / [Intel XE#870]) -> [SKIP][192] ([Intel XE#870])
   [191]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-435/igt@kms_pm_backlight@fade-with-suspend.html
   [192]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-432/igt@kms_pm_backlight@fade-with-suspend.html

  * igt@kms_pm_dc@dc3co-vpb-simulation:
    - shard-dg2-set2:     [SKIP][193] ([Intel XE#1122]) -> [SKIP][194] ([Intel XE#1122] / [Intel XE#1201])
   [193]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-432/igt@kms_pm_dc@dc3co-vpb-simulation.html
   [194]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-463/igt@kms_pm_dc@dc3co-vpb-simulation.html

  * igt@kms_pm_dc@dc6-dpms:
    - shard-dg2-set2:     [SKIP][195] ([Intel XE#1201] / [Intel XE#908]) -> [SKIP][196] ([Intel XE#908])
   [195]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-436/igt@kms_pm_dc@dc6-dpms.html
   [196]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-432/igt@kms_pm_dc@dc6-dpms.html

  * igt@kms_psr2_su@page_flip-p010:
    - shard-dg2-set2:     [SKIP][197] ([Intel XE#1122] / [Intel XE#1201]) -> [SKIP][198] ([Intel XE#1122]) +1 other test skip
   [197]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-435/igt@kms_psr2_su@page_flip-p010.html
   [198]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-432/igt@kms_psr2_su@page_flip-p010.html

  * igt@kms_psr@fbc-pr-cursor-plane-move:
    - shard-dg2-set2:     [SKIP][199] ([Intel XE#929]) -> [SKIP][200] ([Intel XE#1201] / [Intel XE#929]) +11 other tests skip
   [199]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-432/igt@kms_psr@fbc-pr-cursor-plane-move.html
   [200]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-433/igt@kms_psr@fbc-pr-cursor-plane-move.html

  * igt@kms_psr@fbc-psr2-sprite-plane-move:
    - shard-dg2-set2:     [SKIP][201] ([Intel XE#1201] / [Intel XE#929]) -> [SKIP][202] ([Intel XE#929]) +14 other tests skip
   [201]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-436/igt@kms_psr@fbc-psr2-sprite-plane-move.html
   [202]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-432/igt@kms_psr@fbc-psr2-sprite-plane-move.html

  * igt@kms_psr_stress_test@invalidate-primary-flip-overlay:
    - shard-dg2-set2:     [SKIP][203] ([Intel XE#1149] / [Intel XE#1201]) -> [SKIP][204] ([Intel XE#1149])
   [203]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-436/igt@kms_psr_stress_test@invalidate-primary-flip-overlay.html
   [204]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-432/igt@kms_psr_stress_test@invalidate-primary-flip-overlay.html

  * igt@kms_rotation_crc@primary-y-tiled-reflect-x-180:
    - shard-dg2-set2:     [SKIP][205] ([Intel XE#1127] / [Intel XE#1201]) -> [SKIP][206] ([Intel XE#1127])
   [205]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-435/igt@kms_rotation_crc@primary-y-tiled-reflect-x-180.html
   [206]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-432/igt@kms_rotation_crc@primary-y-tiled-reflect-x-180.html

  * igt@kms_tiled_display@basic-test-pattern:
    - shard-dg2-set2:     [FAIL][207] ([Intel XE#1729]) -> [SKIP][208] ([Intel XE#1201] / [Intel XE#362])
   [207]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-435/igt@kms_tiled_display@basic-test-pattern.html
   [208]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-463/igt@kms_tiled_display@basic-test-pattern.html

  * igt@kms_writeback@writeback-check-output-xrgb2101010:
    - shard-dg2-set2:     [SKIP][209] ([Intel XE#1201] / [Intel XE#756]) -> [SKIP][210] ([Intel XE#756])
   [209]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-435/igt@kms_writeback@writeback-check-output-xrgb2101010.html
   [210]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-432/igt@kms_writeback@writeback-check-output-xrgb2101010.html

  * igt@kms_writeback@writeback-pixel-formats:
    - shard-dg2-set2:     [SKIP][211] ([Intel XE#756]) -> [SKIP][212] ([Intel XE#1201] / [Intel XE#756])
   [211]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-432/igt@kms_writeback@writeback-pixel-formats.html
   [212]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-433/igt@kms_writeback@writeback-pixel-formats.html

  * igt@sriov_basic@enable-vfs-autoprobe-off:
    - shard-dg2-set2:     [SKIP][213] ([Intel XE#1091]) -> [SKIP][214] ([Intel XE#1091] / [Intel XE#1201])
   [213]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-432/igt@sriov_basic@enable-vfs-autoprobe-off.html
   [214]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-463/igt@sriov_basic@enable-vfs-autoprobe-off.html

  * igt@xe_compute_preempt@compute-preempt-many:
    - shard-dg2-set2:     [SKIP][215] ([Intel XE#1201] / [Intel XE#1280] / [Intel XE#455]) -> [SKIP][216] ([Intel XE#1280] / [Intel XE#455]) +1 other test skip
   [215]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-435/igt@xe_compute_preempt@compute-preempt-many.html
   [216]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-432/igt@xe_compute_preempt@compute-preempt-many.html

  * igt@xe_copy_basic@mem-copy-linear-0xfffe:
    - shard-dg2-set2:     [SKIP][217] ([Intel XE#1123]) -> [SKIP][218] ([Intel XE#1123] / [Intel XE#1201])
   [217]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-432/igt@xe_copy_basic@mem-copy-linear-0xfffe.html
   [218]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-433/igt@xe_copy_basic@mem-copy-linear-0xfffe.html

  * igt@xe_evict@evict-beng-mixed-many-threads-large:
    - shard-dg2-set2:     [INCOMPLETE][219] ([Intel XE#1473] / [Intel XE#392]) -> [INCOMPLETE][220] ([Intel XE#1195] / [Intel XE#1473] / [Intel XE#392])
   [219]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-432/igt@xe_evict@evict-beng-mixed-many-threads-large.html
   [220]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-463/igt@xe_evict@evict-beng-mixed-many-threads-large.html

  * igt@xe_exec_fault_mode@once-bindexecqueue-userptr-invalidate-prefetch:
    - shard-dg2-set2:     [SKIP][221] ([Intel XE#288]) -> [SKIP][222] ([Intel XE#1201] / [Intel XE#288]) +18 other tests skip
   [221]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-432/igt@xe_exec_fault_mode@once-bindexecqueue-userptr-invalidate-prefetch.html
   [222]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-433/igt@xe_exec_fault_mode@once-bindexecqueue-userptr-invalidate-prefetch.html

  * igt@xe_exec_fault_mode@twice-bindexecqueue-rebind-imm:
    - shard-dg2-set2:     [SKIP][223] ([Intel XE#1201] / [Intel XE#288]) -> [SKIP][224] ([Intel XE#288]) +17 other tests skip
   [223]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-435/igt@xe_exec_fault_mode@twice-bindexecqueue-rebind-imm.html
   [224]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-432/igt@xe_exec_fault_mode@twice-bindexecqueue-rebind-imm.html

  * igt@xe_huc_copy@huc_copy:
    - shard-dg2-set2:     [SKIP][225] ([Intel XE#255]) -> [SKIP][226] ([Intel XE#1201] / [Intel XE#255])
   [225]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-432/igt@xe_huc_copy@huc_copy.html
   [226]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-463/igt@xe_huc_copy@huc_copy.html

  * igt@xe_media_fill@media-fill:
    - shard-dg2-set2:     [SKIP][227] ([Intel XE#1201] / [Intel XE#560]) -> [SKIP][228] ([Intel XE#560])
   [227]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-435/igt@xe_media_fill@media-fill.html
   [228]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-432/igt@xe_media_fill@media-fill.html

  * igt@xe_pat@display-vs-wb-transient:
    - shard-dg2-set2:     [SKIP][229] ([Intel XE#1201] / [Intel XE#1337]) -> [SKIP][230] ([Intel XE#1337])
   [229]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-436/igt@xe_pat@display-vs-wb-transient.html
   [230]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-432/igt@xe_pat@display-vs-wb-transient.html

  * igt@xe_pat@pat-index-xelpg:
    - shard-dg2-set2:     [SKIP][231] ([Intel XE#979]) -> [SKIP][232] ([Intel XE#1201] / [Intel XE#979])
   [231]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-432/igt@xe_pat@pat-index-xelpg.html
   [232]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-463/igt@xe_pat@pat-index-xelpg.html

  * igt@xe_pm@d3cold-basic-exec:
    - shard-dg2-set2:     [SKIP][233] ([Intel XE#1201] / [Intel XE#366]) -> [SKIP][234] ([Intel XE#366])
   [233]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-435/igt@xe_pm@d3cold-basic-exec.html
   [234]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-432/igt@xe_pm@d3cold-basic-exec.html

  * igt@xe_pm@s3-d3hot-basic-exec:
    - shard-adlp:         [DMESG-WARN][235] ([Intel XE#1162] / [Intel XE#1191] / [Intel XE#1214]) -> [INCOMPLETE][236] ([Intel XE#1044] / [Intel XE#1195] / [Intel XE#1358])
   [235]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-adlp-8/igt@xe_pm@s3-d3hot-basic-exec.html
   [236]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-adlp-1/igt@xe_pm@s3-d3hot-basic-exec.html

  * igt@xe_pm@s3-vm-bind-unbind-all:
    - shard-dg2-set2:     [DMESG-WARN][237] ([Intel XE#1162] / [Intel XE#1214] / [Intel XE#1941]) -> [DMESG-WARN][238] ([Intel XE#1162] / [Intel XE#1941])
   [237]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-435/igt@xe_pm@s3-vm-bind-unbind-all.html
   [238]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-432/igt@xe_pm@s3-vm-bind-unbind-all.html

  * igt@xe_query@multigpu-query-config:
    - shard-dg2-set2:     [SKIP][239] ([Intel XE#944]) -> [SKIP][240] ([Intel XE#1201] / [Intel XE#944]) +2 other tests skip
   [239]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-432/igt@xe_query@multigpu-query-config.html
   [240]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-463/igt@xe_query@multigpu-query-config.html

  * igt@xe_query@multigpu-query-invalid-uc-fw-version-mbz:
    - shard-dg2-set2:     [SKIP][241] ([Intel XE#1201] / [Intel XE#944]) -> [SKIP][242] ([Intel XE#944]) +1 other test skip
   [241]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-436/igt@xe_query@multigpu-query-invalid-uc-fw-version-mbz.html
   [242]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-432/igt@xe_query@multigpu-query-invalid-uc-fw-version-mbz.html

  * igt@xe_wedged@basic-wedged:
    - shard-dg2-set2:     [DMESG-WARN][243] ([Intel XE#1760]) -> [DMESG-WARN][244] ([Intel XE#1214] / [Intel XE#1760])
   [243]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-432/igt@xe_wedged@basic-wedged.html
   [244]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-433/igt@xe_wedged@basic-wedged.html

  * igt@xe_wedged@wedged-at-any-timeout:
    - shard-dg2-set2:     [DMESG-WARN][245] ([Intel XE#1214] / [Intel XE#1760]) -> [DMESG-FAIL][246] ([Intel XE#1760])
   [245]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1/shard-dg2-464/igt@xe_wedged@wedged-at-any-timeout.html
   [246]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/shard-dg2-433/igt@xe_wedged@wedged-at-any-timeout.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [Intel XE#1044]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1044
  [Intel XE#1062]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1062
  [Intel XE#1068]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1068
  [Intel XE#1091]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1091
  [Intel XE#1122]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1122
  [Intel XE#1123]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1123
  [Intel XE#1124]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1124
  [Intel XE#1127]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1127
  [Intel XE#1128]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1128
  [Intel XE#1149]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1149
  [Intel XE#1151]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1151
  [Intel XE#1162]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1162
  [Intel XE#1191]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1191
  [Intel XE#1195]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1195
  [Intel XE#1201]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1201
  [Intel XE#1214]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1214
  [Intel XE#1231]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1231
  [Intel XE#1252]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1252
  [Intel XE#1280]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1280
  [Intel XE#1337]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1337
  [Intel XE#1358]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1358
  [Intel XE#1392]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1392
  [Intel XE#1397]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1397
  [Intel XE#1399]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1399
  [Intel XE#1401]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1401
  [Intel XE#1406]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1406
  [Intel XE#1407]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1407
  [Intel XE#1421]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1421
  [Intel XE#1424]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1424
  [Intel XE#1437]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1437
  [Intel XE#1439]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1439
  [Intel XE#1446]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1446
  [Intel XE#1447]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1447
  [Intel XE#1450]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1450
  [Intel XE#1468]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1468
  [Intel XE#1469]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1469
  [Intel XE#1473]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1473
  [Intel XE#1504]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1504
  [Intel XE#1512]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1512
  [Intel XE#1607]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1607
  [Intel XE#1608]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1608
  [Intel XE#1659]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1659
  [Intel XE#1694]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1694
  [Intel XE#1725]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1725
  [Intel XE#1729]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1729
  [Intel XE#1745]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1745
  [Intel XE#1760]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1760
  [Intel XE#1761]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1761
  [Intel XE#1794]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1794
  [Intel XE#1830]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1830
  [Intel XE#1861]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1861
  [Intel XE#1874]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1874
  [Intel XE#1941]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1941
  [Intel XE#1960]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1960
  [Intel XE#255]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/255
  [Intel XE#261]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/261
  [Intel XE#282]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/282
  [Intel XE#288]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/288
  [Intel XE#305]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/305
  [Intel XE#306]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/306
  [Intel XE#307]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/307
  [Intel XE#308]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/308
  [Intel XE#309]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/309
  [Intel XE#310]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/310
  [Intel XE#316]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/316
  [Intel XE#324]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/324
  [Intel XE#327]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/327
  [Intel XE#330]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/330
  [Intel XE#352]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/352
  [Intel XE#362]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/362
  [Intel XE#366]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/366
  [Intel XE#367]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/367
  [Intel XE#373]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/373
  [Intel XE#392]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/392
  [Intel XE#455]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/455
  [Intel XE#480]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/480
  [Intel XE#488]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/488
  [Intel XE#498]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/498
  [Intel XE#560]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/560
  [Intel XE#584]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/584
  [Intel XE#605]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/605
  [Intel XE#610]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/610
  [Intel XE#616]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/616
  [Intel XE#651]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/651
  [Intel XE#653]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/653
  [Intel XE#656]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/656
  [Intel XE#658]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/658
  [Intel XE#664]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/664
  [Intel XE#688]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/688
  [Intel XE#702]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/702
  [Intel XE#703]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/703
  [Intel XE#734]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/734
  [Intel XE#736]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/736
  [Intel XE#756]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/756
  [Intel XE#771]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/771
  [Intel XE#787]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/787
  [Intel XE#870]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/870
  [Intel XE#886]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/886
  [Intel XE#899]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/899
  [Intel XE#908]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/908
  [Intel XE#910]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/910
  [Intel XE#929]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/929
  [Intel XE#944]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/944
  [Intel XE#979]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/979


Build changes
-------------

  * Linux: xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1 -> xe-pw-133034v3

  IGT_7873: b9bcded9123ac56ce05748de6c4870fb49451b87 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  xe-1374-55d6179b96e0390025f2ba101c03b94b50cab7a1: 55d6179b96e0390025f2ba101c03b94b50cab7a1
  xe-pw-133034v3: 133034v3

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-133034v3/index.html


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v3 3/5] drm/xe: Convert multiple bind ops into single job
  2024-05-29 18:31 ` [PATCH v3 3/5] drm/xe: Convert multiple bind ops into single job Matthew Brost
@ 2024-05-30  0:32   ` Zanoni, Paulo R
  2024-05-30  0:49     ` Matthew Brost
  0 siblings, 1 reply; 16+ messages in thread
From: Zanoni, Paulo R @ 2024-05-30  0:32 UTC (permalink / raw)
  To: intel-xe@lists.freedesktop.org, Brost,  Matthew
  Cc: Zeng, Oak, thomas.hellstrom@linux.intel.com

On Wed, 2024-05-29 at 11:31 -0700, Matthew Brost wrote:
> This aligns with the uAPI: an array of binds, or a single bind that
> results in multiple GPUVA ops, is considered a single atomic
> operation.
> 
> The implementation is roughly:
> - xe_vma_ops is a list of xe_vma_op (GPUVA op)
> - each xe_vma_op resolves to 0-3 PT ops
> - xe_vma_ops creates a single job
> - if at any point during binding a failure occurs, xe_vma_ops contains
>   the information necessary to unwind the PT and VMA (GPUVA) state
> 
> v2:
>  - add missing dma-resv slot reservation (CI, testing)
> 
> Cc: Oak Zeng <oak.zeng@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> ---
>  drivers/gpu/drm/xe/xe_bo_types.h |    2 +
>  drivers/gpu/drm/xe/xe_migrate.c  |  296 ++++----
>  drivers/gpu/drm/xe/xe_migrate.h  |   32 +-
>  drivers/gpu/drm/xe/xe_pt.c       | 1108 +++++++++++++++++++-----------
>  drivers/gpu/drm/xe/xe_pt.h       |   14 +-
>  drivers/gpu/drm/xe/xe_pt_types.h |   36 +
>  drivers/gpu/drm/xe/xe_vm.c       |  519 +++-----------
>  drivers/gpu/drm/xe/xe_vm.h       |    2 +
>  drivers/gpu/drm/xe/xe_vm_types.h |   45 +-
>  9 files changed, 1032 insertions(+), 1022 deletions(-)
> 

(snip)

> 
> -/**
> - * __xe_pt_bind_vma() - Build and connect a page-table tree for the vma
> - * address range.
> - * @tile: The tile to bind for.
> - * @vma: The vma to bind.
> - * @q: The exec_queue with which to do pipelined page-table updates.
> - * @syncs: Entries to sync on before binding the built tree to the live vm tree.
> - * @num_syncs: Number of @sync entries.
> - * @rebind: Whether we're rebinding this vma to the same address range without
> - * an unbind in-between.
> - *
> - * This function builds a page-table tree (see xe_pt_stage_bind() for more
> - * information on page-table building), and the xe_vm_pgtable_update entries
> - * abstracting the operations needed to attach it to the main vm tree. It
> - * then takes the relevant locks and updates the metadata side of the main
> - * vm tree and submits the operations for pipelined attachment of the
> - * gpu page-table to the vm main tree, (which can be done either by the
> - * cpu and the GPU).
> - *
> - * Return: A valid dma-fence representing the pipelined attachment operation
> - * on success, an error pointer on error.
> - */
> -struct dma_fence *
> -__xe_pt_bind_vma(struct xe_tile *tile, struct xe_vma *vma, struct xe_exec_queue *q,
> -		 struct xe_sync_entry *syncs, u32 num_syncs,
> -		 bool rebind)
> -{
> -	struct xe_vm_pgtable_update entries[XE_VM_MAX_LEVEL * 2 + 1];
> -	struct xe_pt_migrate_pt_update bind_pt_update = {
> -		.base = {
> -			.ops = xe_vma_is_userptr(vma) ? &userptr_bind_ops : &bind_ops,
> -			.vma = vma,
> -			.tile_id = tile->id,
> -		},
> -		.bind = true,
> -	};
> -	struct xe_vm *vm = xe_vma_vm(vma);
> -	u32 num_entries;
> -	struct dma_fence *fence;
> -	struct invalidation_fence *ifence = NULL;
> -	struct xe_range_fence *rfence;
> -	int err;
> -
> -	bind_pt_update.locked = false;
> -	xe_bo_assert_held(xe_vma_bo(vma));
> -	xe_vm_assert_held(vm);
> -
> -	vm_dbg(&xe_vma_vm(vma)->xe->drm,
> -	       "Preparing bind, with range [%llx...%llx) engine %p.\n",
> -	       xe_vma_start(vma), xe_vma_end(vma), q);
> -
> -	err = xe_pt_prepare_bind(tile, vma, entries, &num_entries);
> -	if (err)
> -		goto err;
> -
> -	err = dma_resv_reserve_fences(xe_vm_resv(vm), 1);
> -	if (!err && !xe_vma_has_no_bo(vma) && !xe_vma_bo(vma)->vm)
> -		err = dma_resv_reserve_fences(xe_vma_bo(vma)->ttm.base.resv, 1);
> -	if (err)
> -		goto err;
> -
> -	xe_tile_assert(tile, num_entries <= ARRAY_SIZE(entries));
> -
> -	xe_vm_dbg_print_entries(tile_to_xe(tile), entries, num_entries);
> -	xe_pt_calc_rfence_interval(vma, &bind_pt_update, entries,
> -				   num_entries);
> -
> -	/*
> -	 * If rebind, we have to invalidate TLB on !LR vms to invalidate
> -	 * cached PTEs point to freed memory. on LR vms this is done
> -	 * automatically when the context is re-enabled by the rebind worker,
> -	 * or in fault mode it was invalidated on PTE zapping.
> -	 *
> -	 * If !rebind, and scratch enabled VMs, there is a chance the scratch
> -	 * PTE is already cached in the TLB so it needs to be invalidated.
> -	 * on !LR VMs this is done in the ring ops preceding a batch, but on
> -	 * non-faulting LR, in particular on user-space batch buffer chaining,
> -	 * it needs to be done here.
> -	 */
> -	if ((!rebind && xe_vm_has_scratch(vm) && xe_vm_in_preempt_fence_mode(vm))) {
> -		ifence = kzalloc(sizeof(*ifence), GFP_KERNEL);
> -		if (!ifence)
> -			return ERR_PTR(-ENOMEM);
> -	} else if (rebind && !xe_vm_in_lr_mode(vm)) {
> -		/* We bump also if batch_invalidate_tlb is true */
> -		vm->tlb_flush_seqno++;

This was the only place in the driver that actually assigned a value to
"tlb_flush_seqno". The only assignment that remains is in
xe_sched_job_arm(), where we have:

q->tlb_flush_seqno = vm->tlb_flush_seqno;

(but there appears to be no line ever initializing vm->tlb_flush_seqno)

Something may be wrong here: nothing sets an initial value, yet we keep
copying the values around and comparing them in an "if" statement.

The following compiles:

diff --git a/drivers/gpu/drm/xe/xe_exec_queue_types.h b/drivers/gpu/drm/xe/xe_exec_queue_types.h
index e81704c7c030a..dd51b59e4433f 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue_types.h
+++ b/drivers/gpu/drm/xe/xe_exec_queue_types.h
@@ -136,11 +136,6 @@ struct xe_exec_queue {
        const struct xe_ring_ops *ring_ops;
        /** @entity: DRM sched entity for this exec queue (1 to 1 relationship) */
        struct drm_sched_entity *entity;
-       /**
-        * @tlb_flush_seqno: The seqno of the last rebind tlb flush performed
-        * Protected by @vm's resv. Unused if @vm == NULL.
-        */
-       u64 tlb_flush_seqno;
        /** @old_run_ticks: prior hw engine class run time in ticks for this exec queue */
        u64 old_run_ticks;
        /** @run_ticks: hw engine class run time in ticks for this exec queue */
diff --git a/drivers/gpu/drm/xe/xe_sched_job.c b/drivers/gpu/drm/xe/xe_sched_job.c
index 29f3201d7dfac..6c3cb7e295ac7 100644
--- a/drivers/gpu/drm/xe/xe_sched_job.c
+++ b/drivers/gpu/drm/xe/xe_sched_job.c
@@ -254,9 +254,8 @@ void xe_sched_job_arm(struct xe_sched_job *job)
        }
 
        if (vm && !xe_sched_job_is_migration(q) && !xe_vm_in_lr_mode(vm) &&
-           (vm->batch_invalidate_tlb || vm->tlb_flush_seqno != q->tlb_flush_seqno)) {
+           vm->batch_invalidate_tlb) {
                xe_vm_assert_held(vm);
-               q->tlb_flush_seqno = vm->tlb_flush_seqno;
                job->ring_ops_flush_tlb = true;
        }
 
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index 27d651093d307..03e839efb234e 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -263,11 +263,6 @@ struct xe_vm {
                bool capture_once;
        } error_capture;
 
-       /**
-        * @tlb_flush_seqno: Required TLB flush seqno for the next exec.
-        * protected by the vm resv.
-        */
-       u64 tlb_flush_seqno;
        /** @batch_invalidate_tlb: Always invalidate TLB before batch start */
        bool batch_invalidate_tlb;
        /** @xef: XE file handle for tracking this VM's drm client */




^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [PATCH v3 3/5] drm/xe: Convert multiple bind ops into single job
  2024-05-30  0:32   ` Zanoni, Paulo R
@ 2024-05-30  0:49     ` Matthew Brost
  0 siblings, 0 replies; 16+ messages in thread
From: Matthew Brost @ 2024-05-30  0:49 UTC (permalink / raw)
  To: Zanoni, Paulo R
  Cc: intel-xe@lists.freedesktop.org, Zeng,  Oak,
	thomas.hellstrom@linux.intel.com

On Wed, May 29, 2024 at 06:32:12PM -0600, Zanoni, Paulo R wrote:
> On Wed, 2024-05-29 at 11:31 -0700, Matthew Brost wrote:
> > This aligns with the uAPI: an array of binds, or a single bind that
> > results in multiple GPUVA ops, is considered a single atomic
> > operation.
> > 
> > The implementation is roughly:
> > - xe_vma_ops is a list of xe_vma_op (GPUVA op)
> > - each xe_vma_op resolves to 0-3 PT ops
> > - xe_vma_ops creates a single job
> > - if at any point during binding a failure occurs, xe_vma_ops contains
> >   the information necessary to unwind the PT and VMA (GPUVA) state
> > 
> > v2:
> >  - add missing dma-resv slot reservation (CI, testing)
> > 
> > Cc: Oak Zeng <oak.zeng@intel.com>
> > Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > ---
> >  drivers/gpu/drm/xe/xe_bo_types.h |    2 +
> >  drivers/gpu/drm/xe/xe_migrate.c  |  296 ++++----
> >  drivers/gpu/drm/xe/xe_migrate.h  |   32 +-
> >  drivers/gpu/drm/xe/xe_pt.c       | 1108 +++++++++++++++++++-----------
> >  drivers/gpu/drm/xe/xe_pt.h       |   14 +-
> >  drivers/gpu/drm/xe/xe_pt_types.h |   36 +
> >  drivers/gpu/drm/xe/xe_vm.c       |  519 +++-----------
> >  drivers/gpu/drm/xe/xe_vm.h       |    2 +
> >  drivers/gpu/drm/xe/xe_vm_types.h |   45 +-
> >  9 files changed, 1032 insertions(+), 1022 deletions(-)
> > 
> 
> (snip)
> 
> > 
> > -/**
> > - * __xe_pt_bind_vma() - Build and connect a page-table tree for the vma
> > - * address range.
> > - * @tile: The tile to bind for.
> > - * @vma: The vma to bind.
> > - * @q: The exec_queue with which to do pipelined page-table updates.
> > - * @syncs: Entries to sync on before binding the built tree to the live vm tree.
> > - * @num_syncs: Number of @sync entries.
> > - * @rebind: Whether we're rebinding this vma to the same address range without
> > - * an unbind in-between.
> > - *
> > - * This function builds a page-table tree (see xe_pt_stage_bind() for more
> > - * information on page-table building), and the xe_vm_pgtable_update entries
> > - * abstracting the operations needed to attach it to the main vm tree. It
> > - * then takes the relevant locks and updates the metadata side of the main
> > - * vm tree and submits the operations for pipelined attachment of the
> > - * gpu page-table to the vm main tree, (which can be done either by the
> > - * cpu and the GPU).
> > - *
> > - * Return: A valid dma-fence representing the pipelined attachment operation
> > - * on success, an error pointer on error.
> > - */
> > -struct dma_fence *
> > -__xe_pt_bind_vma(struct xe_tile *tile, struct xe_vma *vma, struct xe_exec_queue *q,
> > -		 struct xe_sync_entry *syncs, u32 num_syncs,
> > -		 bool rebind)
> > -{
> > -	struct xe_vm_pgtable_update entries[XE_VM_MAX_LEVEL * 2 + 1];
> > -	struct xe_pt_migrate_pt_update bind_pt_update = {
> > -		.base = {
> > -			.ops = xe_vma_is_userptr(vma) ? &userptr_bind_ops : &bind_ops,
> > -			.vma = vma,
> > -			.tile_id = tile->id,
> > -		},
> > -		.bind = true,
> > -	};
> > -	struct xe_vm *vm = xe_vma_vm(vma);
> > -	u32 num_entries;
> > -	struct dma_fence *fence;
> > -	struct invalidation_fence *ifence = NULL;
> > -	struct xe_range_fence *rfence;
> > -	int err;
> > -
> > -	bind_pt_update.locked = false;
> > -	xe_bo_assert_held(xe_vma_bo(vma));
> > -	xe_vm_assert_held(vm);
> > -
> > -	vm_dbg(&xe_vma_vm(vma)->xe->drm,
> > -	       "Preparing bind, with range [%llx...%llx) engine %p.\n",
> > -	       xe_vma_start(vma), xe_vma_end(vma), q);
> > -
> > -	err = xe_pt_prepare_bind(tile, vma, entries, &num_entries);
> > -	if (err)
> > -		goto err;
> > -
> > -	err = dma_resv_reserve_fences(xe_vm_resv(vm), 1);
> > -	if (!err && !xe_vma_has_no_bo(vma) && !xe_vma_bo(vma)->vm)
> > -		err = dma_resv_reserve_fences(xe_vma_bo(vma)->ttm.base.resv, 1);
> > -	if (err)
> > -		goto err;
> > -
> > -	xe_tile_assert(tile, num_entries <= ARRAY_SIZE(entries));
> > -
> > -	xe_vm_dbg_print_entries(tile_to_xe(tile), entries, num_entries);
> > -	xe_pt_calc_rfence_interval(vma, &bind_pt_update, entries,
> > -				   num_entries);
> > -
> > -	/*
> > -	 * If rebind, we have to invalidate TLB on !LR vms to invalidate
> > -	 * cached PTEs point to freed memory. on LR vms this is done
> > -	 * automatically when the context is re-enabled by the rebind worker,
> > -	 * or in fault mode it was invalidated on PTE zapping.
> > -	 *
> > -	 * If !rebind, and scratch enabled VMs, there is a chance the scratch
> > -	 * PTE is already cached in the TLB so it needs to be invalidated.
> > -	 * on !LR VMs this is done in the ring ops preceding a batch, but on
> > -	 * non-faulting LR, in particular on user-space batch buffer chaining,
> > -	 * it needs to be done here.
> > -	 */
> > -	if ((!rebind && xe_vm_has_scratch(vm) && xe_vm_in_preempt_fence_mode(vm))) {
> > -		ifence = kzalloc(sizeof(*ifence), GFP_KERNEL);
> > -		if (!ifence)
> > -			return ERR_PTR(-ENOMEM);
> > -	} else if (rebind && !xe_vm_in_lr_mode(vm)) {
> > -		/* We bump also if batch_invalidate_tlb is true */
> > -		vm->tlb_flush_seqno++;
> 
> This was the only place in the driver actually assigning a new value
> to "tlb_flush_seqno". Now the only remaining write to tlb_flush_seqno
> is in xe_sched_job_arm(), where we have:
> 
> q->tlb_flush_seqno = vm->tlb_flush_seqno;
> 
> (but there seems to be no line ever initializing vm->tlb_flush_seqno)
> 

This is initialized to zero on the zalloc of the xe_vm structure.

> Something may be wrong here: we never set an initial value, yet we
> copy the value around and check it in an "if" statement.
> 
> The following compiles:
>

Sure, but this doesn't correctly set job->ring_ops_flush_tlb now. The
idea is that, when we can defer the TLB flush to the next job rather
than issuing a GuC command after changing the page tables, we do so.
Roughly: on binds in dma-fence mode we can always defer, in some bind
cases in LR mode we can defer, and on unbinds we always use the GuC.

This was broken by the rebase in this patch. The fix in
bind_op_prepare should look like:

                /*
                 * If rebind, we have to invalidate TLB on !LR vms to invalidate
                 * cached PTEs point to freed memory. on LR vms this is done
-                * automatically when the context is re-enabled by the rebind
-                * worker, or in fault mode it was invalidated on PTE zapping.
+                * automatically when the context is re-enabled by the rebind worker,
+                * or in fault mode it was invalidated on PTE zapping.
                 *
-                * If !rebind, and scratch enabled VMs, there is a chance the
-                * scratch PTE is already cached in the TLB so it needs to be
-                * invalidated. on !LR VMs this is done in the ring ops
-                * preceding a batch, but on non-faulting LR, in particular on
-                * user-space batch buffer chaining, it needs to be done here.
+                * If !rebind, and scratch enabled VMs, there is a chance the scratch
+                * PTE is already cached in the TLB so it needs to be invalidated.
+                * on !LR VMs this is done in the ring ops preceding a batch, but on
+                * non-faulting LR, in particular on user-space batch buffer chaining,
+                * it needs to be done here.
                 */
-               pt_update_ops->needs_invalidation |=
-                       (pt_op->rebind && !xe_vm_in_lr_mode(vm) &&
-                       !vm->batch_invalidate_tlb) ||
-                       (!pt_op->rebind && vm->scratch_pt[tile->id] &&
-                        xe_vm_in_preempt_fence_mode(vm));
+               if ((!pt_op->rebind && xe_vm_has_scratch(vm) &&
+                    xe_vm_in_preempt_fence_mode(vm)))
+                       pt_update_ops->needs_invalidation = true;
+               else if (pt_op->rebind && !xe_vm_in_lr_mode(vm))
+                       /* We bump also if batch_invalidate_tlb is true */
+                       vm->tlb_flush_seqno++;

Will fix in next rev.

Matt
 
> diff --git a/drivers/gpu/drm/xe/xe_exec_queue_types.h b/drivers/gpu/drm/xe/xe_exec_queue_types.h
> index e81704c7c030a..dd51b59e4433f 100644
> --- a/drivers/gpu/drm/xe/xe_exec_queue_types.h
> +++ b/drivers/gpu/drm/xe/xe_exec_queue_types.h
> @@ -136,11 +136,6 @@ struct xe_exec_queue {
>         const struct xe_ring_ops *ring_ops;
>         /** @entity: DRM sched entity for this exec queue (1 to 1 relationship) */
>         struct drm_sched_entity *entity;
> -       /**
> -        * @tlb_flush_seqno: The seqno of the last rebind tlb flush performed
> -        * Protected by @vm's resv. Unused if @vm == NULL.
> -        */
> -       u64 tlb_flush_seqno;
>         /** @old_run_ticks: prior hw engine class run time in ticks for this exec queue */
>         u64 old_run_ticks;
>         /** @run_ticks: hw engine class run time in ticks for this exec queue */
> diff --git a/drivers/gpu/drm/xe/xe_sched_job.c b/drivers/gpu/drm/xe/xe_sched_job.c
> index 29f3201d7dfac..6c3cb7e295ac7 100644
> --- a/drivers/gpu/drm/xe/xe_sched_job.c
> +++ b/drivers/gpu/drm/xe/xe_sched_job.c
> @@ -254,9 +254,8 @@ void xe_sched_job_arm(struct xe_sched_job *job)
>         }
>  
>         if (vm && !xe_sched_job_is_migration(q) && !xe_vm_in_lr_mode(vm) &&
> -           (vm->batch_invalidate_tlb || vm->tlb_flush_seqno != q->tlb_flush_seqno)) {
> +           vm->batch_invalidate_tlb) {
>                 xe_vm_assert_held(vm);
> -               q->tlb_flush_seqno = vm->tlb_flush_seqno;
>                 job->ring_ops_flush_tlb = true;
>         }
>  
> diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
> index 27d651093d307..03e839efb234e 100644
> --- a/drivers/gpu/drm/xe/xe_vm_types.h
> +++ b/drivers/gpu/drm/xe/xe_vm_types.h
> @@ -263,11 +263,6 @@ struct xe_vm {
>                 bool capture_once;
>         } error_capture;
>  
> -       /**
> -        * @tlb_flush_seqno: Required TLB flush seqno for the next exec.
> -        * protected by the vm resv.
> -        */
> -       u64 tlb_flush_seqno;
>         /** @batch_invalidate_tlb: Always invalidate TLB before batch start */
>         bool batch_invalidate_tlb;
>         /** @xef: XE file handle for tracking this VM's drm client */
> 
> 
> 


Thread overview: 16+ messages
2024-05-29 18:31 [PATCH v3 0/5] Convert multiple bind ops to 1 job Matthew Brost
2024-05-29 18:31 ` [PATCH v3 1/5] drm/xe: s/xe_tile_migrate_engine/xe_tile_migrate_exec_queue Matthew Brost
2024-05-29 18:31 ` [PATCH v3 2/5] drm/xe: Add xe_vm_pgtable_update_op to xe_vma_ops Matthew Brost
2024-05-29 18:31 ` [PATCH v3 3/5] drm/xe: Convert multiple bind ops into single job Matthew Brost
2024-05-30  0:32   ` Zanoni, Paulo R
2024-05-30  0:49     ` Matthew Brost
2024-05-29 18:31 ` [PATCH v3 4/5] drm/xe: Update VM trace events Matthew Brost
2024-05-29 18:31 ` [PATCH v3 5/5] drm/xe: Update PT layer with better error handling Matthew Brost
2024-05-29 19:26 ` ✓ CI.Patch_applied: success for Convert multiple bind ops to 1 job (rev3) Patchwork
2024-05-29 19:26 ` ✗ CI.checkpatch: warning " Patchwork
2024-05-29 19:27 ` ✓ CI.KUnit: success " Patchwork
2024-05-29 19:39 ` ✓ CI.Build: " Patchwork
2024-05-29 19:39 ` ✗ CI.Hooks: failure " Patchwork
2024-05-29 19:41 ` ✓ CI.checksparse: success " Patchwork
2024-05-29 20:10 ` ✗ CI.BAT: failure " Patchwork
2024-05-29 23:15 ` ✗ CI.FULL: " Patchwork
