intel-xe.lists.freedesktop.org archive mirror
* [PATCH] drm/xe: MADVISE SQUASH for CI-v7_with_comments_addressed_gpuvm_changes
@ 2025-08-13 12:38 Himal Prasad Ghimiray
  2025-08-13 13:26 ` ✗ CI.checkpatch: warning for " Patchwork
                   ` (3 more replies)
  0 siblings, 4 replies; 5+ messages in thread
From: Himal Prasad Ghimiray @ 2025-08-13 12:38 UTC (permalink / raw)
  To: intel-xe; +Cc: Himal Prasad Ghimiray

DO NOT REVIEW
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
 drivers/gpu/drm/drm_gpusvm.c           | 122 ++-----
 drivers/gpu/drm/drm_gpuvm.c            | 287 ++++++++++++----
 drivers/gpu/drm/imagination/pvr_vm.c   |  15 +-
 drivers/gpu/drm/msm/msm_gem_vma.c      |  33 +-
 drivers/gpu/drm/nouveau/nouveau_uvmm.c |  11 +-
 drivers/gpu/drm/panthor/panthor_mmu.c  |  13 +-
 drivers/gpu/drm/xe/Makefile            |   1 +
 drivers/gpu/drm/xe/xe_bo.c             |  29 +-
 drivers/gpu/drm/xe/xe_bo_types.h       |   8 +
 drivers/gpu/drm/xe/xe_device.c         |   4 +
 drivers/gpu/drm/xe/xe_gt_pagefault.c   |  35 +-
 drivers/gpu/drm/xe/xe_pt.c             |  39 ++-
 drivers/gpu/drm/xe/xe_svm.c            | 254 ++++++++++++--
 drivers/gpu/drm/xe/xe_svm.h            |  23 ++
 drivers/gpu/drm/xe/xe_vm.c             | 437 ++++++++++++++++++++++--
 drivers/gpu/drm/xe/xe_vm.h             |  10 +-
 drivers/gpu/drm/xe/xe_vm_madvise.c     | 445 +++++++++++++++++++++++++
 drivers/gpu/drm/xe/xe_vm_madvise.h     |  15 +
 drivers/gpu/drm/xe/xe_vm_types.h       |  57 +++-
 include/drm/drm_gpusvm.h               |  70 ++++
 include/drm/drm_gpuvm.h                |  38 ++-
 include/uapi/drm/xe_drm.h              | 274 +++++++++++++++
 22 files changed, 1921 insertions(+), 299 deletions(-)
 create mode 100644 drivers/gpu/drm/xe/xe_vm_madvise.c
 create mode 100644 drivers/gpu/drm/xe/xe_vm_madvise.h

diff --git a/drivers/gpu/drm/drm_gpusvm.c b/drivers/gpu/drm/drm_gpusvm.c
index 661306da6b2d..e2a9a6ae1d54 100644
--- a/drivers/gpu/drm/drm_gpusvm.c
+++ b/drivers/gpu/drm/drm_gpusvm.c
@@ -271,107 +271,50 @@ npages_in_range(unsigned long start, unsigned long end)
 }
 
 /**
- * drm_gpusvm_range_find() - Find GPU SVM range from GPU SVM notifier
- * @notifier: Pointer to the GPU SVM notifier structure.
- * @start: Start address of the range
- * @end: End address of the range
+ * drm_gpusvm_notifier_find() - Find GPU SVM notifier from GPU SVM
+ * @gpusvm: Pointer to the GPU SVM structure.
+ * @start: Start address of the notifier
+ * @end: End address of the notifier
  *
- * Return: A pointer to the drm_gpusvm_range if found or NULL
+ * Return: A pointer to the drm_gpusvm_notifier if found or NULL
  */
-struct drm_gpusvm_range *
-drm_gpusvm_range_find(struct drm_gpusvm_notifier *notifier, unsigned long start,
-		      unsigned long end)
+struct drm_gpusvm_notifier *
+drm_gpusvm_notifier_find(struct drm_gpusvm *gpusvm, unsigned long start,
+			 unsigned long end)
 {
 	struct interval_tree_node *itree;
 
-	itree = interval_tree_iter_first(&notifier->root, start, end - 1);
+	itree = interval_tree_iter_first(&gpusvm->root, start, end - 1);
 
 	if (itree)
-		return container_of(itree, struct drm_gpusvm_range, itree);
+		return container_of(itree, struct drm_gpusvm_notifier, itree);
 	else
 		return NULL;
 }
-EXPORT_SYMBOL_GPL(drm_gpusvm_range_find);
+EXPORT_SYMBOL_GPL(drm_gpusvm_notifier_find);
 
 /**
- * drm_gpusvm_for_each_range_safe() - Safely iterate over GPU SVM ranges in a notifier
- * @range__: Iterator variable for the ranges
- * @next__: Iterator variable for the ranges temporay storage
- * @notifier__: Pointer to the GPU SVM notifier
- * @start__: Start address of the range
- * @end__: End address of the range
- *
- * This macro is used to iterate over GPU SVM ranges in a notifier while
- * removing ranges from it.
- */
-#define drm_gpusvm_for_each_range_safe(range__, next__, notifier__, start__, end__)	\
-	for ((range__) = drm_gpusvm_range_find((notifier__), (start__), (end__)),	\
-	     (next__) = __drm_gpusvm_range_next(range__);				\
-	     (range__) && (drm_gpusvm_range_start(range__) < (end__));			\
-	     (range__) = (next__), (next__) = __drm_gpusvm_range_next(range__))
-
-/**
- * __drm_gpusvm_notifier_next() - get the next drm_gpusvm_notifier in the list
- * @notifier: a pointer to the current drm_gpusvm_notifier
+ * drm_gpusvm_range_find() - Find GPU SVM range from GPU SVM notifier
+ * @notifier: Pointer to the GPU SVM notifier structure.
+ * @start: Start address of the range
+ * @end: End address of the range
  *
- * Return: A pointer to the next drm_gpusvm_notifier if available, or NULL if
- *         the current notifier is the last one or if the input notifier is
- *         NULL.
+ * Return: A pointer to the drm_gpusvm_range if found or NULL
  */
-static struct drm_gpusvm_notifier *
-__drm_gpusvm_notifier_next(struct drm_gpusvm_notifier *notifier)
-{
-	if (notifier && !list_is_last(&notifier->entry,
-				      &notifier->gpusvm->notifier_list))
-		return list_next_entry(notifier, entry);
-
-	return NULL;
-}
-
-static struct drm_gpusvm_notifier *
-notifier_iter_first(struct rb_root_cached *root, unsigned long start,
-		    unsigned long last)
+struct drm_gpusvm_range *
+drm_gpusvm_range_find(struct drm_gpusvm_notifier *notifier, unsigned long start,
+		      unsigned long end)
 {
 	struct interval_tree_node *itree;
 
-	itree = interval_tree_iter_first(root, start, last);
+	itree = interval_tree_iter_first(&notifier->root, start, end - 1);
 
 	if (itree)
-		return container_of(itree, struct drm_gpusvm_notifier, itree);
+		return container_of(itree, struct drm_gpusvm_range, itree);
 	else
 		return NULL;
 }
-
-/**
- * drm_gpusvm_for_each_notifier() - Iterate over GPU SVM notifiers in a gpusvm
- * @notifier__: Iterator variable for the notifiers
- * @notifier__: Pointer to the GPU SVM notifier
- * @start__: Start address of the notifier
- * @end__: End address of the notifier
- *
- * This macro is used to iterate over GPU SVM notifiers in a gpusvm.
- */
-#define drm_gpusvm_for_each_notifier(notifier__, gpusvm__, start__, end__)		\
-	for ((notifier__) = notifier_iter_first(&(gpusvm__)->root, (start__), (end__) - 1);	\
-	     (notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__));		\
-	     (notifier__) = __drm_gpusvm_notifier_next(notifier__))
-
-/**
- * drm_gpusvm_for_each_notifier_safe() - Safely iterate over GPU SVM notifiers in a gpusvm
- * @notifier__: Iterator variable for the notifiers
- * @next__: Iterator variable for the notifiers temporay storage
- * @notifier__: Pointer to the GPU SVM notifier
- * @start__: Start address of the notifier
- * @end__: End address of the notifier
- *
- * This macro is used to iterate over GPU SVM notifiers in a gpusvm while
- * removing notifiers from it.
- */
-#define drm_gpusvm_for_each_notifier_safe(notifier__, next__, gpusvm__, start__, end__)	\
-	for ((notifier__) = notifier_iter_first(&(gpusvm__)->root, (start__), (end__) - 1),	\
-	     (next__) = __drm_gpusvm_notifier_next(notifier__);				\
-	     (notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__));		\
-	     (notifier__) = (next__), (next__) = __drm_gpusvm_notifier_next(notifier__))
+EXPORT_SYMBOL_GPL(drm_gpusvm_range_find);
 
 /**
  * drm_gpusvm_notifier_invalidate() - Invalidate a GPU SVM notifier.
@@ -472,22 +415,6 @@ int drm_gpusvm_init(struct drm_gpusvm *gpusvm,
 }
 EXPORT_SYMBOL_GPL(drm_gpusvm_init);
 
-/**
- * drm_gpusvm_notifier_find() - Find GPU SVM notifier
- * @gpusvm: Pointer to the GPU SVM structure
- * @fault_addr: Fault address
- *
- * This function finds the GPU SVM notifier associated with the fault address.
- *
- * Return: Pointer to the GPU SVM notifier on success, NULL otherwise.
- */
-static struct drm_gpusvm_notifier *
-drm_gpusvm_notifier_find(struct drm_gpusvm *gpusvm,
-			 unsigned long fault_addr)
-{
-	return notifier_iter_first(&gpusvm->root, fault_addr, fault_addr + 1);
-}
-
 /**
  * to_drm_gpusvm_notifier() - retrieve the container struct for a given rbtree node
  * @node: a pointer to the rbtree node embedded within a drm_gpusvm_notifier struct
@@ -943,7 +870,7 @@ drm_gpusvm_range_find_or_insert(struct drm_gpusvm *gpusvm,
 	if (!mmget_not_zero(mm))
 		return ERR_PTR(-EFAULT);
 
-	notifier = drm_gpusvm_notifier_find(gpusvm, fault_addr);
+	notifier = drm_gpusvm_notifier_find(gpusvm, fault_addr, fault_addr + 1);
 	if (!notifier) {
 		notifier = drm_gpusvm_notifier_alloc(gpusvm, fault_addr);
 		if (IS_ERR(notifier)) {
@@ -1107,7 +1034,8 @@ void drm_gpusvm_range_remove(struct drm_gpusvm *gpusvm,
 	drm_gpusvm_driver_lock_held(gpusvm);
 
 	notifier = drm_gpusvm_notifier_find(gpusvm,
-					    drm_gpusvm_range_start(range));
+					    drm_gpusvm_range_start(range),
+					    drm_gpusvm_range_start(range) + 1);
 	if (WARN_ON_ONCE(!notifier))
 		return;
 
diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index bbc7fecb6f4a..d6bea8a4fffd 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -420,6 +420,71 @@
  *	 new: |-----------|-----| (b.bo_offset=m,a.bo_offset=n+2)
  */
 
+/**
+ * DOC: Madvise Logic - Splitting and Traversal
+ *
+ * This logic handles GPU VA range updates by generating remap and map operations
+ * without performing unmaps or merging existing mappings.
+ *
+ * 1) The requested range lies entirely within a single drm_gpuva. The logic splits
+ * the existing mapping at the start and end boundaries and inserts a new map.
+ *
+ * ::
+ *              a      start    end     b
+ *         pre: |-----------------------|
+ *                     drm_gpuva1
+ *
+ *              a      start    end     b
+ *         new: |-----|=========|-------|
+ *               remap   map      remap
+ *
+ * one REMAP and one MAP: same behaviour as SPLIT and MERGE
+ *
+ * 2) The requested range spans multiple drm_gpuva regions. The logic traverses
+ * across boundaries, remapping the start and end segments, and inserting two
+ * map operations to cover the full range.
+ *
+ * ::
+ *              a       start      b              c        end       d
+ *         pre: |------------------|--------------|------------------|
+ *                    drm_gpuva1      drm_gpuva2         drm_gpuva3
+ *
+ *              a       start      b              c        end       d
+ *         new: |-------|==========|--------------|========|---------|
+ *                remap1   map1       drm_gpuva2    map2     remap2
+ *
+ * two REMAPS and two MAPS
+ *
+ * 3) Either start or end lies within a drm_gpuva. A single remap and a single
+ * map operation are generated to update the affected portion.
+ *
+ * ::
+ *              a/start            b              c        end       d
+ *         pre: |------------------|--------------|------------------|
+ *                    drm_gpuva1      drm_gpuva2         drm_gpuva3
+ *
+ *              a/start            b              c        end       d
+ *         new: |------------------|--------------|========|---------|
+ *                drm_gpuva1         drm_gpuva2     map1     remap1
+ *
+ * ::
+ *              a       start      b              c/end              d
+ *         pre: |------------------|--------------|------------------|
+ *                    drm_gpuva1      drm_gpuva2         drm_gpuva3
+ *
+ *              a       start      b              c/end              d
+ *         new: |-------|==========|--------------|------------------|
+ *                remap1   map1       drm_gpuva2        drm_gpuva3
+ *
+ * one REMAP and one MAP
+ *
+ * 4) Both start and end align with existing drm_gpuva boundaries. No operations
+ * are needed as the range is already covered.
+ *
+ * 5) No existing drm_gpuvas. No operations.
+ *
+ * Unlike drm_gpuvm_sm_map_ops_create, this logic avoids unmaps and merging,
+ * focusing solely on remap and map operations for efficient traversal and update.
+ */
+
 /**
  * DOC: Locking
  *
@@ -486,13 +551,18 @@
  *				  u64 addr, u64 range,
  *				  struct drm_gem_object *obj, u64 offset)
  *	{
+ *		struct drm_gpuvm_map_req map_req = {
+ *			.map.va.addr = addr,
+ *			.map.va.range = range,
+ *			.map.gem.obj = obj,
+ *			.map.gem.offset = offset,
+ *		};
  *		struct drm_gpuva_ops *ops;
  *		struct drm_gpuva_op *op
  *		struct drm_gpuvm_bo *vm_bo;
  *
  *		driver_lock_va_space();
- *		ops = drm_gpuvm_sm_map_ops_create(gpuvm, addr, range,
- *						  obj, offset);
+ *		ops = drm_gpuvm_sm_map_ops_create(gpuvm, &map_req);
  *		if (IS_ERR(ops))
  *			return PTR_ERR(ops);
  *
@@ -2054,16 +2124,18 @@ EXPORT_SYMBOL_GPL(drm_gpuva_unmap);
 
 static int
 op_map_cb(const struct drm_gpuvm_ops *fn, void *priv,
-	  u64 addr, u64 range,
-	  struct drm_gem_object *obj, u64 offset)
+	  const struct drm_gpuvm_map_req *req)
 {
 	struct drm_gpuva_op op = {};
 
+	if (!req)
+		return 0;
+
 	op.op = DRM_GPUVA_OP_MAP;
-	op.map.va.addr = addr;
-	op.map.va.range = range;
-	op.map.gem.obj = obj;
-	op.map.gem.offset = offset;
+	op.map.va.addr = req->map.va.addr;
+	op.map.va.range = req->map.va.range;
+	op.map.gem.obj = req->map.gem.obj;
+	op.map.gem.offset = req->map.gem.offset;
 
 	return fn->sm_step_map(&op, priv);
 }
@@ -2088,10 +2160,13 @@ op_remap_cb(const struct drm_gpuvm_ops *fn, void *priv,
 
 static int
 op_unmap_cb(const struct drm_gpuvm_ops *fn, void *priv,
-	    struct drm_gpuva *va, bool merge)
+	    struct drm_gpuva *va, bool merge, bool madvise)
 {
 	struct drm_gpuva_op op = {};
 
+	if (madvise)
+		return 0;
+
 	op.op = DRM_GPUVA_OP_UNMAP;
 	op.unmap.va = va;
 	op.unmap.keep = merge;
@@ -2102,10 +2177,15 @@ op_unmap_cb(const struct drm_gpuvm_ops *fn, void *priv,
 static int
 __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
 		   const struct drm_gpuvm_ops *ops, void *priv,
-		   u64 req_addr, u64 req_range,
-		   struct drm_gem_object *req_obj, u64 req_offset)
+		   const struct drm_gpuvm_map_req *req,
+		   bool madvise)
 {
+	struct drm_gem_object *req_obj = req->map.gem.obj;
+	const struct drm_gpuvm_map_req *op_map = madvise ? NULL : req;
 	struct drm_gpuva *va, *next;
+	u64 req_offset = req->map.gem.offset;
+	u64 req_range = req->map.va.range;
+	u64 req_addr = req->map.va.addr;
 	u64 req_end = req_addr + req_range;
 	int ret;
 
@@ -2120,19 +2200,22 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
 		u64 end = addr + range;
 		bool merge = !!va->gem.obj;
 
+		if (madvise && obj)
+			continue;
+
 		if (addr == req_addr) {
 			merge &= obj == req_obj &&
 				 offset == req_offset;
 
 			if (end == req_end) {
-				ret = op_unmap_cb(ops, priv, va, merge);
+				ret = op_unmap_cb(ops, priv, va, merge, madvise);
 				if (ret)
 					return ret;
 				break;
 			}
 
 			if (end < req_end) {
-				ret = op_unmap_cb(ops, priv, va, merge);
+				ret = op_unmap_cb(ops, priv, va, merge, madvise);
 				if (ret)
 					return ret;
 				continue;
@@ -2153,6 +2236,9 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
 				ret = op_remap_cb(ops, priv, NULL, &n, &u);
 				if (ret)
 					return ret;
+
+				if (madvise)
+					op_map = req;
 				break;
 			}
 		} else if (addr < req_addr) {
@@ -2173,6 +2259,9 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
 				ret = op_remap_cb(ops, priv, &p, NULL, &u);
 				if (ret)
 					return ret;
+
+				if (madvise)
+					op_map = req;
 				break;
 			}
 
@@ -2180,6 +2269,18 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
 				ret = op_remap_cb(ops, priv, &p, NULL, &u);
 				if (ret)
 					return ret;
+
+				if (madvise) {
+					struct drm_gpuvm_map_req map_req = {
+						.map.va.addr = req_addr,
+						.map.va.range = end - req_addr,
+					};
+
+					ret = op_map_cb(ops, priv, &map_req);
+					if (ret)
+						return ret;
+				}
+
 				continue;
 			}
 
@@ -2195,6 +2296,9 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
 				ret = op_remap_cb(ops, priv, &p, &n, &u);
 				if (ret)
 					return ret;
+
+				if (madvise)
+					op_map = req;
 				break;
 			}
 		} else if (addr > req_addr) {
@@ -2203,16 +2307,18 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
 					   (addr - req_addr);
 
 			if (end == req_end) {
-				ret = op_unmap_cb(ops, priv, va, merge);
+				ret = op_unmap_cb(ops, priv, va, merge, madvise);
 				if (ret)
 					return ret;
 				break;
 			}
 
 			if (end < req_end) {
-				ret = op_unmap_cb(ops, priv, va, merge);
+				ret = op_unmap_cb(ops, priv, va, merge, madvise);
 				if (ret)
 					return ret;
 				continue;
 			}
 
@@ -2231,14 +2337,20 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
 				ret = op_remap_cb(ops, priv, NULL, &n, &u);
 				if (ret)
 					return ret;
+
+				if (madvise) {
+					struct drm_gpuvm_map_req map_req = {
+						.map.va.addr = addr,
+						.map.va.range = req_end - addr,
+					};
+
+					return op_map_cb(ops, priv, &map_req);
+				}
 				break;
 			}
 		}
 	}
-
-	return op_map_cb(ops, priv,
-			 req_addr, req_range,
-			 req_obj, req_offset);
+	return op_map_cb(ops, priv, op_map);
 }
 
 static int
@@ -2290,7 +2402,7 @@ __drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm,
 			if (ret)
 				return ret;
 		} else {
-			ret = op_unmap_cb(ops, priv, va, false);
+			ret = op_unmap_cb(ops, priv, va, false, false);
 			if (ret)
 				return ret;
 		}
@@ -2303,10 +2415,7 @@ __drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm,
  * drm_gpuvm_sm_map() - calls the &drm_gpuva_op split/merge steps
  * @gpuvm: the &drm_gpuvm representing the GPU VA space
  * @priv: pointer to a driver private data structure
- * @req_addr: the start address of the new mapping
- * @req_range: the range of the new mapping
- * @req_obj: the &drm_gem_object to map
- * @req_offset: the offset within the &drm_gem_object
+ * @req: pointer to struct drm_gpuvm_map_req
  *
  * This function iterates the given range of the GPU VA space. It utilizes the
  * &drm_gpuvm_ops to call back into the driver providing the split and merge
@@ -2333,8 +2442,7 @@ __drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm,
  */
 int
 drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm, void *priv,
-		 u64 req_addr, u64 req_range,
-		 struct drm_gem_object *req_obj, u64 req_offset)
+		 const struct drm_gpuvm_map_req *req)
 {
 	const struct drm_gpuvm_ops *ops = gpuvm->ops;
 
@@ -2343,9 +2451,7 @@ drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm, void *priv,
 		       ops->sm_step_unmap)))
 		return -EINVAL;
 
-	return __drm_gpuvm_sm_map(gpuvm, ops, priv,
-				  req_addr, req_range,
-				  req_obj, req_offset);
+	return __drm_gpuvm_sm_map(gpuvm, ops, priv, req, false);
 }
 EXPORT_SYMBOL_GPL(drm_gpuvm_sm_map);
 
@@ -2421,10 +2527,7 @@ static const struct drm_gpuvm_ops lock_ops = {
  * @gpuvm: the &drm_gpuvm representing the GPU VA space
  * @exec: the &drm_exec locking context
  * @num_fences: for newly mapped objects, the # of fences to reserve
- * @req_addr: the start address of the range to unmap
- * @req_range: the range of the mappings to unmap
- * @req_obj: the &drm_gem_object to map
- * @req_offset: the offset within the &drm_gem_object
+ * @req: pointer to struct drm_gpuvm_map_req
  *
  * This function locks (drm_exec_lock_obj()) objects that will be unmapped/
  * remapped, and locks+prepares (drm_exec_prepare_object()) objects that
@@ -2445,9 +2548,7 @@ static const struct drm_gpuvm_ops lock_ops = {
  *                    ret = drm_gpuvm_sm_unmap_exec_lock(gpuvm, &exec, op->addr, op->range);
  *                    break;
  *                case DRIVER_OP_MAP:
- *                    ret = drm_gpuvm_sm_map_exec_lock(gpuvm, &exec, num_fences,
- *                                                     op->addr, op->range,
- *                                                     obj, op->obj_offset);
+ *                    ret = drm_gpuvm_sm_map_exec_lock(gpuvm, &exec, num_fences, &req);
  *                    break;
  *                }
  *
@@ -2478,18 +2579,17 @@ static const struct drm_gpuvm_ops lock_ops = {
 int
 drm_gpuvm_sm_map_exec_lock(struct drm_gpuvm *gpuvm,
 			   struct drm_exec *exec, unsigned int num_fences,
-			   u64 req_addr, u64 req_range,
-			   struct drm_gem_object *req_obj, u64 req_offset)
+			   struct drm_gpuvm_map_req *req)
 {
+	struct drm_gem_object *req_obj = req->map.gem.obj;
+
 	if (req_obj) {
 		int ret = drm_exec_prepare_obj(exec, req_obj, num_fences);
 		if (ret)
 			return ret;
 	}
 
-	return __drm_gpuvm_sm_map(gpuvm, &lock_ops, exec,
-				  req_addr, req_range,
-				  req_obj, req_offset);
+	return __drm_gpuvm_sm_map(gpuvm, &lock_ops, exec, req, false);
 
 }
 EXPORT_SYMBOL_GPL(drm_gpuvm_sm_map_exec_lock);
@@ -2608,13 +2708,42 @@ static const struct drm_gpuvm_ops gpuvm_list_ops = {
 	.sm_step_unmap = drm_gpuva_sm_step,
 };
 
+static struct drm_gpuva_ops *
+__drm_gpuvm_sm_map_ops_create(struct drm_gpuvm *gpuvm,
+			      const struct drm_gpuvm_map_req *req,
+			      bool madvise)
+{
+	struct drm_gpuva_ops *ops;
+	struct {
+		struct drm_gpuvm *vm;
+		struct drm_gpuva_ops *ops;
+	} args;
+	int ret;
+
+	ops = kzalloc(sizeof(*ops), GFP_KERNEL);
+	if (unlikely(!ops))
+		return ERR_PTR(-ENOMEM);
+
+	INIT_LIST_HEAD(&ops->list);
+
+	args.vm = gpuvm;
+	args.ops = ops;
+
+	ret = __drm_gpuvm_sm_map(gpuvm, &gpuvm_list_ops, &args, req, madvise);
+	if (ret)
+		goto err_free_ops;
+
+	return ops;
+
+err_free_ops:
+	drm_gpuva_ops_free(gpuvm, ops);
+	return ERR_PTR(ret);
+}
+
 /**
  * drm_gpuvm_sm_map_ops_create() - creates the &drm_gpuva_ops to split and merge
  * @gpuvm: the &drm_gpuvm representing the GPU VA space
- * @req_addr: the start address of the new mapping
- * @req_range: the range of the new mapping
- * @req_obj: the &drm_gem_object to map
- * @req_offset: the offset within the &drm_gem_object
+ * @req: map request arguments
  *
  * This function creates a list of operations to perform splitting and merging
  * of existent mapping(s) with the newly requested one.
@@ -2642,39 +2771,49 @@ static const struct drm_gpuvm_ops gpuvm_list_ops = {
  */
 struct drm_gpuva_ops *
 drm_gpuvm_sm_map_ops_create(struct drm_gpuvm *gpuvm,
-			    u64 req_addr, u64 req_range,
-			    struct drm_gem_object *req_obj, u64 req_offset)
+			    const struct drm_gpuvm_map_req *req)
 {
-	struct drm_gpuva_ops *ops;
-	struct {
-		struct drm_gpuvm *vm;
-		struct drm_gpuva_ops *ops;
-	} args;
-	int ret;
-
-	ops = kzalloc(sizeof(*ops), GFP_KERNEL);
-	if (unlikely(!ops))
-		return ERR_PTR(-ENOMEM);
-
-	INIT_LIST_HEAD(&ops->list);
-
-	args.vm = gpuvm;
-	args.ops = ops;
-
-	ret = __drm_gpuvm_sm_map(gpuvm, &gpuvm_list_ops, &args,
-				 req_addr, req_range,
-				 req_obj, req_offset);
-	if (ret)
-		goto err_free_ops;
-
-	return ops;
-
-err_free_ops:
-	drm_gpuva_ops_free(gpuvm, ops);
-	return ERR_PTR(ret);
+	return __drm_gpuvm_sm_map_ops_create(gpuvm, req, false);
 }
 EXPORT_SYMBOL_GPL(drm_gpuvm_sm_map_ops_create);
 
+/**
+ * drm_gpuvm_madvise_ops_create() - creates the &drm_gpuva_ops to split
+ * @gpuvm: the &drm_gpuvm representing the GPU VA space
+ * @req: map request arguments
+ *
+ * This function creates a list of operations to perform splitting
+ * of existent mapping(s) at start or end, based on the requested map.
+ *
+ * The list can be iterated with &drm_gpuva_for_each_op and must be processed
+ * in the given order. It can contain map and remap operations, but it
+ * also can be empty if no operation is required, e.g. if the requested mapping
+ * already exists in the exact same way.
+ *
+ * There will be no unmap operations, at most two remap operations and at most
+ * two map operations. The two map operations cover the split-off portions: one
+ * from the requested start to the end of the drm_gpuva containing it, and one
+ * from the start of the drm_gpuva containing the requested end to that end.
+ *
+ * Note that before calling this function again with another mapping request it
+ * is necessary to update the &drm_gpuvm's view of the GPU VA space. The
+ * previously obtained operations must be either processed or abandoned. To
+ * update the &drm_gpuvm's view of the GPU VA space drm_gpuva_insert(),
+ * drm_gpuva_destroy_locked() and/or drm_gpuva_destroy_unlocked() should be
+ * used.
+ *
+ * After the caller finished processing the returned &drm_gpuva_ops, they must
+ * be freed with &drm_gpuva_ops_free.
+ *
+ * Returns: a pointer to the &drm_gpuva_ops on success, an ERR_PTR on failure
+ */
+struct drm_gpuva_ops *
+drm_gpuvm_madvise_ops_create(struct drm_gpuvm *gpuvm,
+			     const struct drm_gpuvm_map_req *req)
+{
+	return __drm_gpuvm_sm_map_ops_create(gpuvm, req, true);
+}
+EXPORT_SYMBOL_GPL(drm_gpuvm_madvise_ops_create);
+
 /**
  * drm_gpuvm_sm_unmap_ops_create() - creates the &drm_gpuva_ops to split on
  * unmap
diff --git a/drivers/gpu/drm/imagination/pvr_vm.c b/drivers/gpu/drm/imagination/pvr_vm.c
index 2896fa7501b1..3d97990170bf 100644
--- a/drivers/gpu/drm/imagination/pvr_vm.c
+++ b/drivers/gpu/drm/imagination/pvr_vm.c
@@ -185,12 +185,17 @@ struct pvr_vm_bind_op {
 static int pvr_vm_bind_op_exec(struct pvr_vm_bind_op *bind_op)
 {
 	switch (bind_op->type) {
-	case PVR_VM_BIND_TYPE_MAP:
+	case PVR_VM_BIND_TYPE_MAP: {
+		const struct drm_gpuvm_map_req map_req = {
+			.map.va.addr = bind_op->device_addr,
+			.map.va.range = bind_op->size,
+			.map.gem.obj = gem_from_pvr_gem(bind_op->pvr_obj),
+			.map.gem.offset = bind_op->offset,
+		};
+
 		return drm_gpuvm_sm_map(&bind_op->vm_ctx->gpuvm_mgr,
-					bind_op, bind_op->device_addr,
-					bind_op->size,
-					gem_from_pvr_gem(bind_op->pvr_obj),
-					bind_op->offset);
+					bind_op, &map_req);
+	}
 
 	case PVR_VM_BIND_TYPE_UNMAP:
 		return drm_gpuvm_sm_unmap(&bind_op->vm_ctx->gpuvm_mgr,
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index 3cd8562a5109..3e97d3d61430 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -371,6 +371,12 @@ struct drm_gpuva *
 msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj,
 		u64 offset, u64 range_start, u64 range_end)
 {
+	struct drm_gpuva_op_map op_map = {
+		.va.addr = range_start,
+		.va.range = range_end - range_start,
+		.gem.obj = obj,
+		.gem.offset = offset,
+	};
 	struct msm_gem_vm *vm = to_msm_vm(gpuvm);
 	struct drm_gpuvm_bo *vm_bo;
 	struct msm_gem_vma *vma;
@@ -399,7 +405,7 @@ msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj,
 	if (obj)
 		GEM_WARN_ON((range_end - range_start) > obj->size);
 
-	drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, offset);
+	drm_gpuva_init_from_op(&vma->base, &op_map);
 	vma->mapped = false;
 
 	ret = drm_gpuva_insert(&vm->base, &vma->base);
@@ -1172,10 +1178,17 @@ vm_bind_job_lock_objects(struct msm_vm_bind_job *job, struct drm_exec *exec)
 				break;
 			case MSM_VM_BIND_OP_MAP:
 			case MSM_VM_BIND_OP_MAP_NULL:
-				ret = drm_gpuvm_sm_map_exec_lock(job->vm, exec, 1,
-							    op->iova, op->range,
-							    op->obj, op->obj_offset);
+			{
+				struct drm_gpuvm_map_req map_req = {
+					.map.va.addr = op->iova,
+					.map.va.range = op->range,
+					.map.gem.obj = op->obj,
+					.map.gem.offset = op->obj_offset,
+				};
+
+				ret = drm_gpuvm_sm_map_exec_lock(job->vm, exec, 1, &map_req);
 				break;
+			}
 			default:
 				/*
 				 * lookup_op() should have already thrown an error for
@@ -1283,9 +1296,17 @@ vm_bind_job_prepare(struct msm_vm_bind_job *job)
 				arg.flags |= MSM_VMA_DUMP;
 			fallthrough;
 		case MSM_VM_BIND_OP_MAP_NULL:
-			ret = drm_gpuvm_sm_map(job->vm, &arg, op->iova,
-					       op->range, op->obj, op->obj_offset);
+		{
+			struct drm_gpuvm_map_req map_req = {
+				.map.va.addr = op->iova,
+				.map.va.range = op->range,
+				.map.gem.obj = op->obj,
+				.map.gem.offset = op->obj_offset,
+			};
+
+			ret = drm_gpuvm_sm_map(job->vm, &arg, &map_req);
 			break;
+		}
 		default:
 			/*
 			 * lookup_op() should have already thrown an error for
diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
index ddfc46bc1b3e..d94a85509176 100644
--- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
@@ -1276,6 +1276,12 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job,
 			break;
 		case OP_MAP: {
 			struct nouveau_uvma_region *reg;
+			struct drm_gpuvm_map_req map_req = {
+				.map.va.addr = op->va.addr,
+				.map.va.range = op->va.range,
+				.map.gem.obj = op->gem.obj,
+				.map.gem.offset = op->gem.offset,
+			};
 
 			reg = nouveau_uvma_region_find_first(uvmm,
 							     op->va.addr,
@@ -1301,10 +1307,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job,
 			}
 
 			op->ops = drm_gpuvm_sm_map_ops_create(&uvmm->base,
-							      op->va.addr,
-							      op->va.range,
-							      op->gem.obj,
-							      op->gem.offset);
+							      &map_req);
 			if (IS_ERR(op->ops)) {
 				ret = PTR_ERR(op->ops);
 				goto unwind_continue;
diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
index 4140f697ba5a..e3cdaa73fd38 100644
--- a/drivers/gpu/drm/panthor/panthor_mmu.c
+++ b/drivers/gpu/drm/panthor/panthor_mmu.c
@@ -2169,15 +2169,22 @@ panthor_vm_exec_op(struct panthor_vm *vm, struct panthor_vm_op_ctx *op,
 	mutex_lock(&vm->op_lock);
 	vm->op_ctx = op;
 	switch (op_type) {
-	case DRM_PANTHOR_VM_BIND_OP_TYPE_MAP:
+	case DRM_PANTHOR_VM_BIND_OP_TYPE_MAP: {
+		const struct drm_gpuvm_map_req map_req = {
+			.map.va.addr = op->va.addr,
+			.map.va.range = op->va.range,
+			.map.gem.obj = op->map.vm_bo->obj,
+			.map.gem.offset = op->map.bo_offset,
+		};
+
 		if (vm->unusable) {
 			ret = -EINVAL;
 			break;
 		}
 
-		ret = drm_gpuvm_sm_map(&vm->base, vm, op->va.addr, op->va.range,
-				       op->map.vm_bo->obj, op->map.bo_offset);
+		ret = drm_gpuvm_sm_map(&vm->base, vm, &map_req);
 		break;
+	}
 
 	case DRM_PANTHOR_VM_BIND_OP_TYPE_UNMAP:
 		ret = drm_gpuvm_sm_unmap(&vm->base, vm, op->va.addr, op->va.range);
diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
index 8e0c3412a757..d0ea869fcd24 100644
--- a/drivers/gpu/drm/xe/Makefile
+++ b/drivers/gpu/drm/xe/Makefile
@@ -128,6 +128,7 @@ xe-y += xe_bb.o \
 	xe_uc.o \
 	xe_uc_fw.o \
 	xe_vm.o \
+	xe_vm_madvise.o \
 	xe_vram.o \
 	xe_vram_freq.o \
 	xe_vsec.o \
diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
index 6fea39842e1e..72396d358a00 100644
--- a/drivers/gpu/drm/xe/xe_bo.c
+++ b/drivers/gpu/drm/xe/xe_bo.c
@@ -1711,6 +1711,18 @@ static void xe_gem_object_close(struct drm_gem_object *obj,
 	}
 }
 
+static bool should_migrate_to_smem(struct xe_bo *bo)
+{
+	/*
+	 * NOTE: The following atomic checks are platform-specific. For example,
+	 * if a device supports CXL atomics, these may not be necessary or
+	 * may behave differently.
+	 */
+
+	return bo->attr.atomic_access == DRM_XE_ATOMIC_GLOBAL ||
+	       bo->attr.atomic_access == DRM_XE_ATOMIC_CPU;
+}
+
 static vm_fault_t xe_gem_fault(struct vm_fault *vmf)
 {
 	struct ttm_buffer_object *tbo = vmf->vma->vm_private_data;
@@ -1719,7 +1731,7 @@ static vm_fault_t xe_gem_fault(struct vm_fault *vmf)
 	struct xe_bo *bo = ttm_to_xe_bo(tbo);
 	bool needs_rpm = bo->flags & XE_BO_FLAG_VRAM_MASK;
 	vm_fault_t ret;
-	int idx;
+	int idx, r = 0;
 
 	if (needs_rpm)
 		xe_pm_runtime_get(xe);
@@ -1731,8 +1743,19 @@ static vm_fault_t xe_gem_fault(struct vm_fault *vmf)
 	if (drm_dev_enter(ddev, &idx)) {
 		trace_xe_bo_cpu_fault(bo);
 
-		ret = ttm_bo_vm_fault_reserved(vmf, vmf->vma->vm_page_prot,
-					       TTM_BO_VM_NUM_PREFAULT);
+		if (should_migrate_to_smem(bo)) {
+			xe_assert(xe, bo->flags & XE_BO_FLAG_SYSTEM);
+
+			r = xe_bo_migrate(bo, XE_PL_TT);
+			if (r == -EBUSY || r == -ERESTARTSYS || r == -EINTR)
+				ret = VM_FAULT_NOPAGE;
+			else if (r)
+				ret = VM_FAULT_SIGBUS;
+		}
+		if (!ret)
+			ret = ttm_bo_vm_fault_reserved(vmf,
+						       vmf->vma->vm_page_prot,
+						       TTM_BO_VM_NUM_PREFAULT);
 		drm_dev_exit(idx);
 
 		if (ret == VM_FAULT_RETRY &&
diff --git a/drivers/gpu/drm/xe/xe_bo_types.h b/drivers/gpu/drm/xe/xe_bo_types.h
index cf604adc13a3..314652afdca7 100644
--- a/drivers/gpu/drm/xe/xe_bo_types.h
+++ b/drivers/gpu/drm/xe/xe_bo_types.h
@@ -61,6 +61,14 @@ struct xe_bo {
 	 */
 	struct list_head client_link;
 #endif
+	/** @attr: User-controlled attributes for the bo */
+	struct {
+		/**
+		 * @atomic_access: type of atomic access the bo needs;
+		 * protected by the bo's dma-resv lock
+		 */
+		u32 atomic_access;
+	} attr;
 	/**
 	 * @pxp_key_instance: PXP key instance this BO was created against. A
 	 * 0 in this variable indicates that the BO does not use PXP encryption.
diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
index 3e0402dff423..a9455c05f706 100644
--- a/drivers/gpu/drm/xe/xe_device.c
+++ b/drivers/gpu/drm/xe/xe_device.c
@@ -63,6 +63,7 @@
 #include "xe_ttm_stolen_mgr.h"
 #include "xe_ttm_sys_mgr.h"
 #include "xe_vm.h"
+#include "xe_vm_madvise.h"
 #include "xe_vram.h"
 #include "xe_vram_types.h"
 #include "xe_vsec.h"
@@ -201,6 +202,9 @@ static const struct drm_ioctl_desc xe_ioctls[] = {
 	DRM_IOCTL_DEF_DRV(XE_WAIT_USER_FENCE, xe_wait_user_fence_ioctl,
 			  DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(XE_OBSERVATION, xe_observation_ioctl, DRM_RENDER_ALLOW),
+	DRM_IOCTL_DEF_DRV(XE_MADVISE, xe_vm_madvise_ioctl, DRM_RENDER_ALLOW),
+	DRM_IOCTL_DEF_DRV(XE_VM_QUERY_MEM_RANGE_ATTRS, xe_vm_query_vmas_attrs_ioctl,
+			  DRM_RENDER_ALLOW),
 };
 
 static long xe_drm_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
diff --git a/drivers/gpu/drm/xe/xe_gt_pagefault.c b/drivers/gpu/drm/xe/xe_gt_pagefault.c
index ab43dec52776..4ea30fbce9bd 100644
--- a/drivers/gpu/drm/xe/xe_gt_pagefault.c
+++ b/drivers/gpu/drm/xe/xe_gt_pagefault.c
@@ -75,7 +75,7 @@ static bool vma_is_valid(struct xe_tile *tile, struct xe_vma *vma)
 }
 
 static int xe_pf_begin(struct drm_exec *exec, struct xe_vma *vma,
-		       bool atomic, struct xe_vram_region *vram)
+		       bool need_vram_move, struct xe_vram_region *vram)
 {
 	struct xe_bo *bo = xe_vma_bo(vma);
 	struct xe_vm *vm = xe_vma_vm(vma);
@@ -85,26 +85,13 @@ static int xe_pf_begin(struct drm_exec *exec, struct xe_vma *vma,
 	if (err)
 		return err;
 
-	if (atomic && vram) {
-		xe_assert(vm->xe, IS_DGFX(vm->xe));
+	if (!bo)
+		return 0;
 
-		if (xe_vma_is_userptr(vma)) {
-			err = -EACCES;
-			return err;
-		}
+	err = need_vram_move ? xe_bo_migrate(bo, vram->placement) :
+			       xe_bo_validate(bo, vm, true);
 
-		/* Migrate to VRAM, move should invalidate the VMA first */
-		err = xe_bo_migrate(bo, vram->placement);
-		if (err)
-			return err;
-	} else if (bo) {
-		/* Create backing store if needed */
-		err = xe_bo_validate(bo, vm, true);
-		if (err)
-			return err;
-	}
-
-	return 0;
+	return err;
 }
 
 static int handle_vma_pagefault(struct xe_gt *gt, struct xe_vma *vma,
@@ -115,10 +102,14 @@ static int handle_vma_pagefault(struct xe_gt *gt, struct xe_vma *vma,
 	struct drm_exec exec;
 	struct dma_fence *fence;
 	ktime_t end = 0;
-	int err;
+	int err, needs_vram;
 
 	lockdep_assert_held_write(&vm->lock);
 
+	needs_vram = xe_vma_need_vram_for_atomic(vm->xe, vma, atomic);
+	if (needs_vram < 0 || (needs_vram && xe_vma_is_userptr(vma)))
+		return needs_vram < 0 ? needs_vram : -EACCES;
+
 	xe_gt_stats_incr(gt, XE_GT_STATS_ID_VMA_PAGEFAULT_COUNT, 1);
 	xe_gt_stats_incr(gt, XE_GT_STATS_ID_VMA_PAGEFAULT_KB, xe_vma_size(vma) / 1024);
 
@@ -141,7 +132,7 @@ static int handle_vma_pagefault(struct xe_gt *gt, struct xe_vma *vma,
 	/* Lock VM and BOs dma-resv */
 	drm_exec_init(&exec, 0, 0);
 	drm_exec_until_all_locked(&exec) {
-		err = xe_pf_begin(&exec, vma, atomic, tile->mem.vram);
+		err = xe_pf_begin(&exec, vma, needs_vram == 1, tile->mem.vram);
 		drm_exec_retry_on_contention(&exec);
 		if (xe_vm_validate_should_retry(&exec, err, &end))
 			err = -EAGAIN;
@@ -576,7 +567,7 @@ static int handle_acc(struct xe_gt *gt, struct acc *acc)
 	/* Lock VM and BOs dma-resv */
 	drm_exec_init(&exec, 0, 0);
 	drm_exec_until_all_locked(&exec) {
-		ret = xe_pf_begin(&exec, vma, true, tile->mem.vram);
+		ret = xe_pf_begin(&exec, vma, IS_DGFX(vm->xe), tile->mem.vram);
 		drm_exec_retry_on_contention(&exec);
 		if (ret)
 			break;
diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
index f3a39e734a90..c0a70c80dff9 100644
--- a/drivers/gpu/drm/xe/xe_pt.c
+++ b/drivers/gpu/drm/xe/xe_pt.c
@@ -518,7 +518,7 @@ xe_pt_stage_bind_entry(struct xe_ptw *parent, pgoff_t offset,
 {
 	struct xe_pt_stage_bind_walk *xe_walk =
 		container_of(walk, typeof(*xe_walk), base);
-	u16 pat_index = xe_walk->vma->pat_index;
+	u16 pat_index = xe_walk->vma->attr.pat_index;
 	struct xe_pt *xe_parent = container_of(parent, typeof(*xe_parent), base);
 	struct xe_vm *vm = xe_walk->vm;
 	struct xe_pt *xe_child;
@@ -640,28 +640,31 @@ static const struct xe_pt_walk_ops xe_pt_stage_bind_ops = {
  *    - In all other cases device atomics will be disabled with AE=0 until an application
  *      request differently using a ioctl like madvise.
  */
-static bool xe_atomic_for_vram(struct xe_vm *vm)
+static bool xe_atomic_for_vram(struct xe_vm *vm, struct xe_vma *vma)
 {
+	if (vma->attr.atomic_access == DRM_XE_ATOMIC_CPU)
+		return false;
+
 	return true;
 }
 
-static bool xe_atomic_for_system(struct xe_vm *vm, struct xe_bo *bo)
+static bool xe_atomic_for_system(struct xe_vm *vm, struct xe_vma *vma)
 {
 	struct xe_device *xe = vm->xe;
+	struct xe_bo *bo = xe_vma_bo(vma);
 
-	if (!xe->info.has_device_atomics_on_smem)
+	if (!xe->info.has_device_atomics_on_smem ||
+	    vma->attr.atomic_access == DRM_XE_ATOMIC_CPU)
 		return false;
 
+	if (vma->attr.atomic_access == DRM_XE_ATOMIC_DEVICE)
+		return true;
+
 	/*
 	 * If a SMEM+LMEM allocation is backed by SMEM, a device
 	 * atomics will cause a gpu page fault and which then
 	 * gets migrated to LMEM, bind such allocations with
 	 * device atomics enabled.
-	 *
-	 * TODO: Revisit this. Perhaps add something like a
-	 * fault_on_atomics_in_system UAPI flag.
-	 * Note that this also prohibits GPU atomics in LR mode for
-	 * userptr and system memory on DGFX.
 	 */
 	return (!IS_DGFX(xe) || (!xe_vm_in_lr_mode(vm) ||
 				 (bo && xe_bo_has_single_placement(bo))));
@@ -744,8 +747,8 @@ xe_pt_stage_bind(struct xe_tile *tile, struct xe_vma *vma,
 		goto walk_pt;
 
 	if (vma->gpuva.flags & XE_VMA_ATOMIC_PTE_BIT) {
-		xe_walk.default_vram_pte = xe_atomic_for_vram(vm) ? XE_USM_PPGTT_PTE_AE : 0;
-		xe_walk.default_system_pte = xe_atomic_for_system(vm, bo) ?
+		xe_walk.default_vram_pte = xe_atomic_for_vram(vm, vma) ? XE_USM_PPGTT_PTE_AE : 0;
+		xe_walk.default_system_pte = xe_atomic_for_system(vm, vma) ?
 			XE_USM_PPGTT_PTE_AE : 0;
 	}
 
@@ -950,7 +953,19 @@ bool xe_pt_zap_ptes_range(struct xe_tile *tile, struct xe_vm *vm,
 	struct xe_pt *pt = vm->pt_root[tile->id];
 	u8 pt_mask = (range->tile_present & ~range->tile_invalidated);
 
-	xe_svm_assert_in_notifier(vm);
+	/*
+	 * Locking rules:
+	 *
+	 * - notifier_lock (write): full protection against page table changes
+	 *   and MMU notifier invalidations.
+	 *
+	 * - notifier_lock (read) + vm_lock (write): combined protection against
+	 *   invalidations and concurrent page table modifications (e.g. madvise).
+	 *
+	 */
+	lockdep_assert(lockdep_is_held_type(&vm->svm.gpusvm.notifier_lock, 0) ||
+		       (lockdep_is_held_type(&vm->svm.gpusvm.notifier_lock, 1) &&
+		       lockdep_is_held_type(&vm->lock, 0)));
 
 	if (!(pt_mask & BIT(tile->id)))
 		return false;
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index e35c6d4def20..0596039ef0a1 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -253,10 +253,56 @@ static int __xe_svm_garbage_collector(struct xe_vm *vm,
 	return 0;
 }
 
+static int xe_svm_range_set_default_attr(struct xe_vm *vm, u64 range_start, u64 range_end)
+{
+	struct xe_vma *vma;
+	struct xe_vma_mem_attr default_attr = {
+		.preferred_loc = {
+			.devmem_fd = DRM_XE_PREFERRED_LOC_DEFAULT_DEVICE,
+			.migration_policy = DRM_XE_MIGRATE_ALL_PAGES,
+		},
+		.atomic_access = DRM_XE_ATOMIC_UNDEFINED,
+	};
+	int err = 0;
+
+	vma = xe_vm_find_vma_by_addr(vm, range_start);
+	if (!vma)
+		return -EINVAL;
+
+	if (xe_vma_has_default_mem_attrs(vma))
+		return 0;
+
+	vm_dbg(&vm->xe->drm, "Existing VMA vma_start=0x%016llx, vma_end=0x%016llx",
+	       xe_vma_start(vma), xe_vma_end(vma));
+
+	if (xe_vma_start(vma) == range_start && xe_vma_end(vma) == range_end) {
+		default_attr.pat_index = vma->attr.default_pat_index;
+		default_attr.default_pat_index = vma->attr.default_pat_index;
+		vma->attr = default_attr;
+	} else {
+		vm_dbg(&vm->xe->drm, "Split VMA start=0x%016llx, vma_end=0x%016llx",
+		       range_start, range_end);
+		err = xe_vm_alloc_cpu_addr_mirror_vma(vm, range_start, range_end - range_start);
+		if (err) {
+			drm_warn(&vm->xe->drm, "VMA SPLIT failed: %pe\n", ERR_PTR(err));
+			xe_vm_kill(vm, true);
+			return err;
+		}
+	}
+
+	/*
+	 * When called from xe_svm_handle_pagefault the original VMA might have
+	 * changed; signal the caller to look up the VMA again.
+	 */
+	return -EAGAIN;
+}
+
 static int xe_svm_garbage_collector(struct xe_vm *vm)
 {
 	struct xe_svm_range *range;
-	int err;
+	u64 range_start;
+	u64 range_end;
+	int err, ret = 0;
 
 	lockdep_assert_held_write(&vm->lock);
 
@@ -271,6 +317,9 @@ static int xe_svm_garbage_collector(struct xe_vm *vm)
 		if (!range)
 			break;
 
+		range_start = xe_svm_range_start(range);
+		range_end = xe_svm_range_end(range);
+
 		list_del(&range->garbage_collector_link);
 		spin_unlock(&vm->svm.garbage_collector.lock);
 
@@ -283,11 +332,19 @@ static int xe_svm_garbage_collector(struct xe_vm *vm)
 			return err;
 		}
 
+		err = xe_svm_range_set_default_attr(vm, range_start, range_end);
+		if (err) {
+			if (err == -EAGAIN)
+				ret = -EAGAIN;
+			else
+				return err;
+		}
+
 		spin_lock(&vm->svm.garbage_collector.lock);
 	}
 	spin_unlock(&vm->svm.garbage_collector.lock);
 
-	return 0;
+	return ret;
 }
 
 static void xe_svm_garbage_collector_work_func(struct work_struct *w)
@@ -789,22 +846,9 @@ bool xe_svm_range_needs_migrate_to_vram(struct xe_svm_range *range, struct xe_vm
 	return true;
 }
 
-/**
- * xe_svm_handle_pagefault() - SVM handle page fault
- * @vm: The VM.
- * @vma: The CPU address mirror VMA.
- * @gt: The gt upon the fault occurred.
- * @fault_addr: The GPU fault address.
- * @atomic: The fault atomic access bit.
- *
- * Create GPU bindings for a SVM page fault. Optionally migrate to device
- * memory.
- *
- * Return: 0 on success, negative error code on error.
- */
-int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
-			    struct xe_gt *gt, u64 fault_addr,
-			    bool atomic)
+static int __xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
+				     struct xe_gt *gt, u64 fault_addr,
+				     bool need_vram)
 {
 	struct drm_gpusvm_ctx ctx = {
 		.read_only = xe_vma_read_only(vma),
@@ -812,14 +856,14 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
 			IS_ENABLED(CONFIG_DRM_XE_PAGEMAP),
 		.check_pages_threshold = IS_DGFX(vm->xe) &&
 			IS_ENABLED(CONFIG_DRM_XE_PAGEMAP) ? SZ_64K : 0,
-		.devmem_only = atomic && IS_DGFX(vm->xe) &&
-			IS_ENABLED(CONFIG_DRM_XE_PAGEMAP),
-		.timeslice_ms = atomic && IS_DGFX(vm->xe) &&
+		.devmem_only = need_vram && IS_ENABLED(CONFIG_DRM_XE_PAGEMAP),
+		.timeslice_ms = need_vram && IS_DGFX(vm->xe) &&
 			IS_ENABLED(CONFIG_DRM_XE_PAGEMAP) ?
 			vm->xe->atomic_svm_timeslice_ms : 0,
 	};
 	struct xe_svm_range *range;
 	struct dma_fence *fence;
+	struct drm_pagemap *dpagemap;
 	struct xe_tile *tile = gt_to_tile(gt);
 	int migrate_try_count = ctx.devmem_only ? 3 : 1;
 	ktime_t end = 0;
@@ -849,8 +893,14 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
 
 	range_debug(range, "PAGE FAULT");
 
+	dpagemap = xe_vma_resolve_pagemap(vma, tile);
 	if (--migrate_try_count >= 0 &&
-	    xe_svm_range_needs_migrate_to_vram(range, vma, IS_DGFX(vm->xe))) {
+	    xe_svm_range_needs_migrate_to_vram(range, vma, !!dpagemap || ctx.devmem_only)) {
+		/* TODO: For multi-device, dpagemap will be used to find the
+		 * remote tile and remote device. xe_svm_alloc_vram will need
+		 * to be modified to use dpagemap for future multi-device
+		 * support.
+		 */
 		err = xe_svm_alloc_vram(tile, range, &ctx);
 		ctx.timeslice_ms <<= 1;	/* Double timeslice if we have to retry */
 		if (err) {
@@ -917,6 +967,45 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
 	return err;
 }
 
+/**
+ * xe_svm_handle_pagefault() - SVM handle page fault
+ * @vm: The VM.
+ * @vma: The CPU address mirror VMA.
+ * @gt: The gt upon which the fault occurred.
+ * @fault_addr: The GPU fault address.
+ * @atomic: The fault atomic access bit.
+ *
+ * Create GPU bindings for an SVM page fault. Optionally migrate to device
+ * memory.
+ *
+ * Return: 0 on success, negative error code on error.
+ */
+int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
+			    struct xe_gt *gt, u64 fault_addr,
+			    bool atomic)
+{
+	int need_vram, ret;
+retry:
+	need_vram = xe_vma_need_vram_for_atomic(vm->xe, vma, atomic);
+	if (need_vram < 0)
+		return need_vram;
+
+	ret = __xe_svm_handle_pagefault(vm, vma, gt, fault_addr,
+					need_vram ? true : false);
+	if (ret == -EAGAIN) {
+		/*
+		 * Retry once on -EAGAIN to re-lookup the VMA, as the original VMA
+		 * may have been split by xe_svm_range_set_default_attr.
+		 */
+		vma = xe_vm_find_vma_by_addr(vm, fault_addr);
+		if (!vma)
+			return -EINVAL;
+
+		goto retry;
+	}
+	return ret;
+}
+
 /**
  * xe_svm_has_mapping() - SVM has mappings
  * @vm: The VM.
@@ -932,6 +1021,41 @@ bool xe_svm_has_mapping(struct xe_vm *vm, u64 start, u64 end)
 	return drm_gpusvm_has_mapping(&vm->svm.gpusvm, start, end);
 }
 
+/**
+ * xe_svm_unmap_address_range - Unmap SVM mappings and ranges
+ * @vm: The VM
+ * @start: start address
+ * @end: end address
+ *
+ * This function unmaps SVM ranges if the start or end address lies inside them.
+ */
+void xe_svm_unmap_address_range(struct xe_vm *vm, u64 start, u64 end)
+{
+	struct drm_gpusvm_notifier *notifier, *next;
+
+	lockdep_assert_held_write(&vm->lock);
+
+	drm_gpusvm_for_each_notifier_safe(notifier, next, &vm->svm.gpusvm, start, end) {
+		struct drm_gpusvm_range *range, *__next;
+
+		drm_gpusvm_for_each_range_safe(range, __next, notifier, start, end) {
+			if (start > drm_gpusvm_range_start(range) ||
+			    end < drm_gpusvm_range_end(range)) {
+				if (IS_DGFX(vm->xe) && xe_svm_range_in_vram(to_xe_range(range)))
+					drm_gpusvm_range_evict(&vm->svm.gpusvm, range);
+				drm_gpusvm_range_get(range);
+				__xe_svm_garbage_collector(vm, to_xe_range(range));
+				if (!list_empty(&to_xe_range(range)->garbage_collector_link)) {
+					spin_lock(&vm->svm.garbage_collector.lock);
+					list_del(&to_xe_range(range)->garbage_collector_link);
+					spin_unlock(&vm->svm.garbage_collector.lock);
+				}
+				drm_gpusvm_range_put(range);
+			}
+		}
+	}
+}
+
 /**
  * xe_svm_bo_evict() - SVM evict BO to system memory
  * @bo: BO to evict
@@ -996,6 +1120,56 @@ int xe_svm_range_get_pages(struct xe_vm *vm, struct xe_svm_range *range,
 	return err;
 }
 
+/**
+ * xe_svm_ranges_zap_ptes_in_range - clear ptes of svm ranges in input range
+ * @vm: Pointer to the xe_vm structure
+ * @start: Start of the input range
+ * @end: End of the input range
+ *
+ * This function removes the page table entries (PTEs) associated
+ * with the SVM ranges within the given input start and end.
+ *
+ * Return: tile_mask indicating the tiles that need TLB invalidation.
+ */
+u8 xe_svm_ranges_zap_ptes_in_range(struct xe_vm *vm, u64 start, u64 end)
+{
+	struct drm_gpusvm_notifier *notifier;
+	struct xe_svm_range *range;
+	u64 adj_start, adj_end;
+	struct xe_tile *tile;
+	u8 tile_mask = 0;
+	u8 id;
+
+	lockdep_assert(lockdep_is_held_type(&vm->svm.gpusvm.notifier_lock, 1) &&
+		       lockdep_is_held_type(&vm->lock, 0));
+
+	drm_gpusvm_for_each_notifier(notifier, &vm->svm.gpusvm, start, end) {
+		struct drm_gpusvm_range *r = NULL;
+
+		adj_start = max(start, drm_gpusvm_notifier_start(notifier));
+		adj_end = min(end, drm_gpusvm_notifier_end(notifier));
+		drm_gpusvm_for_each_range(r, notifier, adj_start, adj_end) {
+			range = to_xe_range(r);
+			for_each_tile(tile, vm->xe, id) {
+				if (xe_pt_zap_ptes_range(tile, vm, range)) {
+					tile_mask |= BIT(id);
+					/*
+					 * WRITE_ONCE pairs with READ_ONCE in
+					 * xe_vm_has_valid_gpu_mapping().
+					 * Must not fail after setting
+					 * tile_invalidated and before
+					 * TLB invalidation.
+					 */
+					WRITE_ONCE(range->tile_invalidated,
+						   range->tile_invalidated | BIT(id));
+				}
+			}
+		}
+	}
+
+	return tile_mask;
+}
+
 #if IS_ENABLED(CONFIG_DRM_XE_PAGEMAP)
 
 static struct drm_pagemap *tile_local_pagemap(struct xe_tile *tile)
@@ -1003,6 +1177,37 @@ static struct drm_pagemap *tile_local_pagemap(struct xe_tile *tile)
 	return &tile->mem.vram->dpagemap;
 }
 
+/**
+ * xe_vma_resolve_pagemap - Resolve the appropriate DRM pagemap for a VMA
+ * @vma: Pointer to the xe_vma structure containing memory attributes
+ * @tile: Pointer to the xe_tile structure used as fallback for VRAM mapping
+ *
+ * This function determines the correct DRM pagemap to use for a given VMA.
+ * It first checks if a valid devmem_fd is provided in the VMA's preferred
+ * location. If the devmem_fd is negative, it returns NULL, indicating no
+ * pagemap is available and smem to be used as preferred location.
+ * If the devmem_fd is equal to the default faulting
+ * GT identifier, it returns the VRAM pagemap associated with the tile.
+ *
+ * Future support for multi-device configurations may use drm_pagemap_from_fd()
+ * to resolve pagemaps from arbitrary file descriptors.
+ *
+ * Return: A pointer to the resolved drm_pagemap, or NULL if none is applicable.
+ */
+struct drm_pagemap *xe_vma_resolve_pagemap(struct xe_vma *vma, struct xe_tile *tile)
+{
+	s32 fd = (s32)vma->attr.preferred_loc.devmem_fd;
+
+	if (fd == DRM_XE_PREFERRED_LOC_DEFAULT_SYSTEM)
+		return NULL;
+
+	if (fd == DRM_XE_PREFERRED_LOC_DEFAULT_DEVICE)
+		return IS_DGFX(tile_to_xe(tile)) ? tile_local_pagemap(tile) : NULL;
+
+	/* TODO: Support multi-device with drm_pagemap_from_fd(fd) */
+	return NULL;
+}
+
 /**
  * xe_svm_alloc_vram()- Allocate device memory pages for range,
  * migrating existing data.
@@ -1115,6 +1320,11 @@ int xe_devm_add(struct xe_tile *tile, struct xe_vram_region *vr)
 {
 	return 0;
 }
+
+struct drm_pagemap *xe_vma_resolve_pagemap(struct xe_vma *vma, struct xe_tile *tile)
+{
+	return NULL;
+}
 #endif
 
 /**
diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
index 4bdccb56d25f..9d6a8840a8b7 100644
--- a/drivers/gpu/drm/xe/xe_svm.h
+++ b/drivers/gpu/drm/xe/xe_svm.h
@@ -90,6 +90,12 @@ bool xe_svm_range_validate(struct xe_vm *vm,
 
 u64 xe_svm_find_vma_start(struct xe_vm *vm, u64 addr, u64 end,  struct xe_vma *vma);
 
+void xe_svm_unmap_address_range(struct xe_vm *vm, u64 start, u64 end);
+
+u8 xe_svm_ranges_zap_ptes_in_range(struct xe_vm *vm, u64 start, u64 end);
+
+struct drm_pagemap *xe_vma_resolve_pagemap(struct xe_vma *vma, struct xe_tile *tile);
+
 /**
  * xe_svm_range_has_dma_mapping() - SVM range has DMA mapping
  * @range: SVM range
@@ -303,6 +309,23 @@ u64 xe_svm_find_vma_start(struct xe_vm *vm, u64 addr, u64 end, struct xe_vma *vm
 	return ULONG_MAX;
 }
 
+static inline
+void xe_svm_unmap_address_range(struct xe_vm *vm, u64 start, u64 end)
+{
+}
+
+static inline
+u8 xe_svm_ranges_zap_ptes_in_range(struct xe_vm *vm, u64 start, u64 end)
+{
+	return 0;
+}
+
+static inline
+struct drm_pagemap *xe_vma_resolve_pagemap(struct xe_vma *vma, struct xe_tile *tile)
+{
+	return NULL;
+}
+
 #define xe_svm_assert_in_notifier(...) do {} while (0)
 #define xe_svm_range_has_dma_mapping(...) false
 
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index d40d2d43c041..82e6b97c2723 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -38,6 +38,7 @@
 #include "xe_res_cursor.h"
 #include "xe_svm.h"
 #include "xe_sync.h"
+#include "xe_tile.h"
 #include "xe_trace_bo.h"
 #include "xe_wa.h"
 #include "xe_hmm.h"
@@ -1168,7 +1169,8 @@ static struct xe_vma *xe_vma_create(struct xe_vm *vm,
 				    struct xe_bo *bo,
 				    u64 bo_offset_or_userptr,
 				    u64 start, u64 end,
-				    u16 pat_index, unsigned int flags)
+				    struct xe_vma_mem_attr *attr,
+				    unsigned int flags)
 {
 	struct xe_vma *vma;
 	struct xe_tile *tile;
@@ -1223,7 +1225,7 @@ static struct xe_vma *xe_vma_create(struct xe_vm *vm,
 	if (vm->xe->info.has_atomic_enable_pte_bit)
 		vma->gpuva.flags |= XE_VMA_ATOMIC_PTE_BIT;
 
-	vma->pat_index = pat_index;
+	vma->attr = *attr;
 
 	if (bo) {
 		struct drm_gpuvm_bo *vm_bo;
@@ -2190,6 +2192,108 @@ int xe_vm_destroy_ioctl(struct drm_device *dev, void *data,
 	return err;
 }
 
+static int xe_vm_query_vmas(struct xe_vm *vm, u64 start, u64 end)
+{
+	struct drm_gpuva *gpuva;
+	u32 num_vmas = 0;
+
+	lockdep_assert_held(&vm->lock);
+	drm_gpuvm_for_each_va_range(gpuva, &vm->gpuvm, start, end)
+		num_vmas++;
+
+	return num_vmas;
+}
+
+static int get_mem_attrs(struct xe_vm *vm, u32 *num_vmas, u64 start,
+			 u64 end, struct drm_xe_mem_range_attr *attrs)
+{
+	struct drm_gpuva *gpuva;
+	int i = 0;
+
+	lockdep_assert_held(&vm->lock);
+
+	drm_gpuvm_for_each_va_range(gpuva, &vm->gpuvm, start, end) {
+		struct xe_vma *vma = gpuva_to_vma(gpuva);
+
+		if (i == *num_vmas)
+			return -ENOSPC;
+
+		attrs[i].start = xe_vma_start(vma);
+		attrs[i].end = xe_vma_end(vma);
+		attrs[i].atomic.val = vma->attr.atomic_access;
+		attrs[i].pat_index.val = vma->attr.pat_index;
+		attrs[i].preferred_mem_loc.devmem_fd = vma->attr.preferred_loc.devmem_fd;
+		attrs[i].preferred_mem_loc.migration_policy =
+		vma->attr.preferred_loc.migration_policy;
+
+		i++;
+	}
+
+	*num_vmas = i;
+	return 0;
+}
+
+int xe_vm_query_vmas_attrs_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
+{
+	struct xe_device *xe = to_xe_device(dev);
+	struct xe_file *xef = to_xe_file(file);
+	struct drm_xe_mem_range_attr *mem_attrs;
+	struct drm_xe_vm_query_mem_range_attr *args = data;
+	u64 __user *attrs_user = u64_to_user_ptr(args->vector_of_mem_attr);
+	struct xe_vm *vm;
+	int err = 0;
+
+	if (XE_IOCTL_DBG(xe,
+			 ((args->num_mem_ranges == 0 &&
+			  (attrs_user || args->sizeof_mem_range_attr != 0)) ||
+			 (args->num_mem_ranges > 0 &&
+			  (!attrs_user ||
+			   args->sizeof_mem_range_attr !=
+			   sizeof(struct drm_xe_mem_range_attr))))))
+		return -EINVAL;
+
+	vm = xe_vm_lookup(xef, args->vm_id);
+	if (XE_IOCTL_DBG(xe, !vm))
+		return -EINVAL;
+
+	err = down_read_interruptible(&vm->lock);
+	if (err)
+		goto put_vm;
+
+	attrs_user = u64_to_user_ptr(args->vector_of_mem_attr);
+
+	if (args->num_mem_ranges == 0 && !attrs_user) {
+		args->num_mem_ranges = xe_vm_query_vmas(vm, args->start, args->start + args->range);
+		args->sizeof_mem_range_attr = sizeof(struct drm_xe_mem_range_attr);
+		goto unlock_vm;
+	}
+
+	mem_attrs = kvmalloc_array(args->num_mem_ranges, args->sizeof_mem_range_attr,
+				   GFP_KERNEL | __GFP_ACCOUNT |
+				   __GFP_RETRY_MAYFAIL | __GFP_NOWARN);
+	if (!mem_attrs) {
+		err = args->num_mem_ranges > 1 ? -ENOBUFS : -ENOMEM;
+		goto unlock_vm;
+	}
+
+	memset(mem_attrs, 0, args->num_mem_ranges * args->sizeof_mem_range_attr);
+	err = get_mem_attrs(vm, &args->num_mem_ranges, args->start,
+			    args->start + args->range, mem_attrs);
+	if (err)
+		goto free_mem_attrs;
+
+	err = copy_to_user(attrs_user, mem_attrs,
+			   args->sizeof_mem_range_attr * args->num_mem_ranges);
+
+free_mem_attrs:
+	kvfree(mem_attrs);
+unlock_vm:
+	up_read(&vm->lock);
+put_vm:
+	xe_vm_put(vm);
+	return err;
+}
+
 static bool vma_matches(struct xe_vma *vma, u64 page_addr)
 {
 	if (page_addr > xe_vma_end(vma) - 1 ||
@@ -2337,10 +2441,17 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_vma_ops *vops,
 
 	switch (operation) {
 	case DRM_XE_VM_BIND_OP_MAP:
-	case DRM_XE_VM_BIND_OP_MAP_USERPTR:
-		ops = drm_gpuvm_sm_map_ops_create(&vm->gpuvm, addr, range,
-						  obj, bo_offset_or_userptr);
+	case DRM_XE_VM_BIND_OP_MAP_USERPTR: {
+		struct drm_gpuvm_map_req map_req = {
+			.map.va.addr = addr,
+			.map.va.range = range,
+			.map.gem.obj = obj,
+			.map.gem.offset = bo_offset_or_userptr,
+		};
+
+		ops = drm_gpuvm_sm_map_ops_create(&vm->gpuvm, &map_req);
 		break;
+	}
 	case DRM_XE_VM_BIND_OP_UNMAP:
 		ops = drm_gpuvm_sm_unmap_ops_create(&vm->gpuvm, addr, range);
 		break;
@@ -2388,9 +2499,10 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_vma_ops *vops,
 				__xe_vm_needs_clear_scratch_pages(vm, flags);
 		} else if (__op->op == DRM_GPUVA_OP_PREFETCH) {
 			struct xe_vma *vma = gpuva_to_vma(op->base.prefetch.va);
+			struct xe_tile *tile;
 			struct xe_svm_range *svm_range;
 			struct drm_gpusvm_ctx ctx = {};
-			struct xe_tile *tile;
+			struct drm_pagemap *dpagemap;
 			u8 id, tile_mask = 0;
 			u32 i;
 
@@ -2407,8 +2519,24 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_vma_ops *vops,
 				tile_mask |= 0x1 << id;
 
 			xa_init_flags(&op->prefetch_range.range, XA_FLAGS_ALLOC);
-			op->prefetch_range.region = prefetch_region;
 			op->prefetch_range.ranges_count = 0;
+			tile = NULL;
+
+			if (prefetch_region == DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC) {
+				dpagemap = xe_vma_resolve_pagemap(vma,
+								  xe_device_get_root_tile(vm->xe));
+				/*
+				 * TODO: Once multigpu support is enabled will need
+				 * something to dereference tile from dpagemap.
+				 */
+				if (dpagemap)
+					tile = xe_device_get_root_tile(vm->xe);
+			} else if (prefetch_region) {
+				tile = &vm->xe->tiles[region_to_mem_type[prefetch_region] -
+						      XE_PL_VRAM0];
+			}
+
+			op->prefetch_range.tile = tile;
 alloc_next_range:
 			svm_range = xe_svm_range_find_or_insert(vm, addr, vma, &ctx);
 
@@ -2427,7 +2555,7 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_vma_ops *vops,
 				goto unwind_prefetch_ops;
 			}
 
-			if (xe_svm_range_validate(vm, svm_range, tile_mask, !!prefetch_region)) {
+			if (xe_svm_range_validate(vm, svm_range, tile_mask, !!tile)) {
 				xe_svm_range_debug(svm_range, "PREFETCH - RANGE IS VALID");
 				goto check_next_range;
 			}
@@ -2464,7 +2592,7 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_vma_ops *vops,
 ALLOW_ERROR_INJECTION(vm_bind_ioctl_ops_create, ERRNO);
 
 static struct xe_vma *new_vma(struct xe_vm *vm, struct drm_gpuva_op_map *op,
-			      u16 pat_index, unsigned int flags)
+			      struct xe_vma_mem_attr *attr, unsigned int flags)
 {
 	struct xe_bo *bo = op->gem.obj ? gem_to_xe_bo(op->gem.obj) : NULL;
 	struct drm_exec exec;
@@ -2493,7 +2621,7 @@ static struct xe_vma *new_vma(struct xe_vm *vm, struct drm_gpuva_op_map *op,
 	}
 	vma = xe_vma_create(vm, bo, op->gem.offset,
 			    op->va.addr, op->va.addr +
-			    op->va.range - 1, pat_index, flags);
+			    op->va.range - 1, attr, flags);
 	if (IS_ERR(vma))
 		goto err_unlock;
 
@@ -2610,6 +2738,29 @@ static int xe_vma_op_commit(struct xe_vm *vm, struct xe_vma_op *op)
 	return err;
 }
 
+/**
+ * xe_vma_has_default_mem_attrs - Check if a VMA has default memory attributes
+ * @vma: Pointer to the xe_vma structure to check
+ *
+ * This function determines whether the given VMA (Virtual Memory Area)
+ * has its memory attributes set to their default values. Specifically,
+ * it checks the following conditions:
+ *
+ * - `atomic_access` is `DRM_XE_ATOMIC_UNDEFINED`
+ * - `pat_index` is equal to `default_pat_index`
+ * - `preferred_loc.devmem_fd` is `DRM_XE_PREFERRED_LOC_DEFAULT_DEVICE`
+ * - `preferred_loc.migration_policy` is `DRM_XE_MIGRATE_ALL_PAGES`
+ *
+ * Return: true if all attributes are at their default values, false otherwise.
+ */
+bool xe_vma_has_default_mem_attrs(struct xe_vma *vma)
+{
+	return (vma->attr.atomic_access == DRM_XE_ATOMIC_UNDEFINED &&
+		vma->attr.pat_index == vma->attr.default_pat_index &&
+		vma->attr.preferred_loc.devmem_fd == DRM_XE_PREFERRED_LOC_DEFAULT_DEVICE &&
+		vma->attr.preferred_loc.migration_policy == DRM_XE_MIGRATE_ALL_PAGES);
+}
+
 static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
 				   struct xe_vma_ops *vops)
 {
@@ -2636,6 +2787,16 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
 		switch (op->base.op) {
 		case DRM_GPUVA_OP_MAP:
 		{
+			struct xe_vma_mem_attr default_attr = {
+				.preferred_loc = {
+					.devmem_fd = DRM_XE_PREFERRED_LOC_DEFAULT_DEVICE,
+					.migration_policy = DRM_XE_MIGRATE_ALL_PAGES,
+				},
+				.atomic_access = DRM_XE_ATOMIC_UNDEFINED,
+				.default_pat_index = op->map.pat_index,
+				.pat_index = op->map.pat_index,
+			};
+
 			flags |= op->map.read_only ?
 				VMA_CREATE_FLAG_READ_ONLY : 0;
 			flags |= op->map.is_null ?
@@ -2645,7 +2806,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
 			flags |= op->map.is_cpu_addr_mirror ?
 				VMA_CREATE_FLAG_IS_SYSTEM_ALLOCATOR : 0;
 
-			vma = new_vma(vm, &op->base.map, op->map.pat_index,
+			vma = new_vma(vm, &op->base.map, &default_attr,
 				      flags);
 			if (IS_ERR(vma))
 				return PTR_ERR(vma);
@@ -2673,8 +2834,12 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
 				end = op->base.remap.next->va.addr;
 
 			if (xe_vma_is_cpu_addr_mirror(old) &&
-			    xe_svm_has_mapping(vm, start, end))
-				return -EBUSY;
+			    xe_svm_has_mapping(vm, start, end)) {
+				if (vops->flags & XE_VMA_OPS_FLAG_MADVISE)
+					xe_svm_unmap_address_range(vm, start, end);
+				else
+					return -EBUSY;
+			}
 
 			op->remap.start = xe_vma_start(old);
 			op->remap.range = xe_vma_size(old);
@@ -2693,7 +2858,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
 
 			if (op->base.remap.prev) {
 				vma = new_vma(vm, op->base.remap.prev,
-					      old->pat_index, flags);
+					      &old->attr, flags);
 				if (IS_ERR(vma))
 					return PTR_ERR(vma);
 
@@ -2723,7 +2888,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
 
 			if (op->base.remap.next) {
 				vma = new_vma(vm, op->base.remap.next,
-					      old->pat_index, flags);
+					      &old->attr, flags);
 				if (IS_ERR(vma))
 					return PTR_ERR(vma);
 
@@ -2910,30 +3075,26 @@ static int prefetch_ranges(struct xe_vm *vm, struct xe_vma_op *op)
 {
 	bool devmem_possible = IS_DGFX(vm->xe) && IS_ENABLED(CONFIG_DRM_XE_PAGEMAP);
 	struct xe_vma *vma = gpuva_to_vma(op->base.prefetch.va);
+	struct xe_tile *tile = op->prefetch_range.tile;
 	int err = 0;
 
 	struct xe_svm_range *svm_range;
 	struct drm_gpusvm_ctx ctx = {};
-	struct xe_tile *tile;
 	unsigned long i;
-	u32 region;
 
 	if (!xe_vma_is_cpu_addr_mirror(vma))
 		return 0;
 
-	region = op->prefetch_range.region;
-
 	ctx.read_only = xe_vma_read_only(vma);
 	ctx.devmem_possible = devmem_possible;
 	ctx.check_pages_threshold = devmem_possible ? SZ_64K : 0;
 
 	/* TODO: Threading the migration */
 	xa_for_each(&op->prefetch_range.range, i, svm_range) {
-		if (!region)
+		if (!tile)
 			xe_svm_range_migrate_to_smem(vm, svm_range);
 
-		if (xe_svm_range_needs_migrate_to_vram(svm_range, vma, region)) {
-			tile = &vm->xe->tiles[region_to_mem_type[region] - XE_PL_VRAM0];
+		if (xe_svm_range_needs_migrate_to_vram(svm_range, vma, !!tile)) {
 			err = xe_svm_alloc_vram(tile, svm_range, &ctx);
 			if (err) {
 				drm_dbg(&vm->xe->drm, "VRAM allocation failed, retry from userspace, asid=%u, gpusvm=%p, errno=%pe\n",
@@ -2996,12 +3157,11 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
 		struct xe_vma *vma = gpuva_to_vma(op->base.prefetch.va);
 		u32 region;
 
-		if (xe_vma_is_cpu_addr_mirror(vma))
-			region = op->prefetch_range.region;
-		else
+		if (!xe_vma_is_cpu_addr_mirror(vma)) {
 			region = op->prefetch.region;
-
-		xe_assert(vm->xe, region <= ARRAY_SIZE(region_to_mem_type));
+			xe_assert(vm->xe, region == DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC ||
+				  region <= ARRAY_SIZE(region_to_mem_type));
+		}
 
 		err = vma_lock_and_validate(exec,
 					    gpuva_to_vma(op->base.prefetch.va),
@@ -3419,8 +3579,8 @@ static int vm_bind_ioctl_check_args(struct xe_device *xe, struct xe_vm *vm,
 				 op == DRM_XE_VM_BIND_OP_PREFETCH) ||
 		    XE_IOCTL_DBG(xe, prefetch_region &&
 				 op != DRM_XE_VM_BIND_OP_PREFETCH) ||
-		    XE_IOCTL_DBG(xe, !(BIT(prefetch_region) &
-				       xe->info.mem_region_mask)) ||
+		    XE_IOCTL_DBG(xe, (prefetch_region != DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC &&
+				       !(BIT(prefetch_region) & xe->info.mem_region_mask))) ||
 		    XE_IOCTL_DBG(xe, obj &&
 				 op == DRM_XE_VM_BIND_OP_UNMAP)) {
 			err = -EINVAL;
@@ -4182,3 +4342,222 @@ void xe_vm_snapshot_free(struct xe_vm_snapshot *snap)
 	}
 	kvfree(snap);
 }
+
+/**
+ * xe_vma_need_vram_for_atomic - Check if VMA needs VRAM migration for atomic operations
+ * @xe: Pointer to the XE device structure
+ * @vma: Pointer to the virtual memory area (VMA) structure
+ * @is_atomic: True if called from the pagefault path for an atomic operation
+ *
+ * This function determines whether the given VMA needs to be migrated to
+ * VRAM in order to perform atomic GPU operations.
+ *
+ * Return:
+ *   1        - Migration to VRAM is required
+ *   0        - Migration is not required
+ *   -EACCES  - Invalid access for atomic memory attr
+ *
+ */
+int xe_vma_need_vram_for_atomic(struct xe_device *xe, struct xe_vma *vma, bool is_atomic)
+{
+	u32 atomic_access = xe_vma_bo(vma) ? xe_vma_bo(vma)->attr.atomic_access :
+					     vma->attr.atomic_access;
+
+	if (!IS_DGFX(xe) || !is_atomic)
+		return 0;
+
+	/*
+	 * NOTE: The checks implemented here are platform-specific. For
+	 * instance, on a device supporting CXL atomics, these would ideally
+	 * work universally without additional handling.
+	 */
+	switch (atomic_access) {
+	case DRM_XE_ATOMIC_DEVICE:
+		return !xe->info.has_device_atomics_on_smem;
+
+	case DRM_XE_ATOMIC_CPU:
+		return -EACCES;
+
+	case DRM_XE_ATOMIC_UNDEFINED:
+	case DRM_XE_ATOMIC_GLOBAL:
+	default:
+		return 1;
+	}
+}
+
+static int xe_vm_alloc_vma(struct xe_vm *vm,
+			   struct drm_gpuvm_map_req *map_req,
+			   bool is_madvise)
+{
+	struct xe_vma_ops vops;
+	struct drm_gpuva_ops *ops = NULL;
+	struct drm_gpuva_op *__op;
+	bool is_cpu_addr_mirror = false;
+	bool remap_op = false;
+	struct xe_vma_mem_attr tmp_attr;
+	u16 default_pat;
+	int err;
+
+	lockdep_assert_held_write(&vm->lock);
+
+	if (is_madvise)
+		ops = drm_gpuvm_madvise_ops_create(&vm->gpuvm, map_req);
+	else
+		ops = drm_gpuvm_sm_map_ops_create(&vm->gpuvm, map_req);
+
+	if (IS_ERR(ops))
+		return PTR_ERR(ops);
+
+	if (list_empty(&ops->list)) {
+		err = 0;
+		goto free_ops;
+	}
+
+	drm_gpuva_for_each_op(__op, ops) {
+		struct xe_vma_op *op = gpuva_op_to_vma_op(__op);
+		struct xe_vma *vma = NULL;
+
+		if (!is_madvise) {
+			if (__op->op == DRM_GPUVA_OP_UNMAP) {
+				vma = gpuva_to_vma(op->base.unmap.va);
+				XE_WARN_ON(!xe_vma_has_default_mem_attrs(vma));
+				default_pat = vma->attr.default_pat_index;
+			}
+
+			if (__op->op == DRM_GPUVA_OP_REMAP) {
+				vma = gpuva_to_vma(op->base.remap.unmap->va);
+				default_pat = vma->attr.default_pat_index;
+			}
+
+			if (__op->op == DRM_GPUVA_OP_MAP) {
+				op->map.is_cpu_addr_mirror = true;
+				op->map.pat_index = default_pat;
+			}
+		} else {
+			if (__op->op == DRM_GPUVA_OP_REMAP) {
+				vma = gpuva_to_vma(op->base.remap.unmap->va);
+				xe_assert(vm->xe, !remap_op);
+				remap_op = true;
+
+				if (xe_vma_is_cpu_addr_mirror(vma))
+					is_cpu_addr_mirror = true;
+				else
+					is_cpu_addr_mirror = false;
+			}
+
+			if (__op->op == DRM_GPUVA_OP_MAP) {
+				xe_assert(vm->xe, remap_op);
+				remap_op = false;
+				/*
+				 * In case of madvise ops DRM_GPUVA_OP_MAP is
+				 * always after DRM_GPUVA_OP_REMAP, so ensure
+				 * we assign op->map.is_cpu_addr_mirror true
+				 * if REMAP is for xe_vma_is_cpu_addr_mirror vma
+				 */
+				op->map.is_cpu_addr_mirror = is_cpu_addr_mirror;
+			}
+		}
+		print_op(vm->xe, __op);
+	}
+
+	xe_vma_ops_init(&vops, vm, NULL, NULL, 0);
+
+	if (is_madvise)
+		vops.flags |= XE_VMA_OPS_FLAG_MADVISE;
+
+	err = vm_bind_ioctl_ops_parse(vm, ops, &vops);
+	if (err)
+		goto unwind_ops;
+
+	xe_vm_lock(vm, false);
+
+	drm_gpuva_for_each_op(__op, ops) {
+		struct xe_vma_op *op = gpuva_op_to_vma_op(__op);
+		struct xe_vma *vma;
+
+		if (__op->op == DRM_GPUVA_OP_UNMAP) {
+			vma = gpuva_to_vma(op->base.unmap.va);
+			/* There should be no unmap for madvise */
+			if (is_madvise)
+				XE_WARN_ON("UNEXPECTED UNMAP");
+
+			xe_vma_destroy(vma, NULL);
+		} else if (__op->op == DRM_GPUVA_OP_REMAP) {
+			vma = gpuva_to_vma(op->base.remap.unmap->va);
+			/* In case of madvise ops, store attributes of the REMAP
+			 * unmapped VMA so they can be assigned to the newly
+			 * created MAP vma.
+			 */
+			if (is_madvise)
+				tmp_attr = vma->attr;
+
+			xe_vma_destroy(gpuva_to_vma(op->base.remap.unmap->va), NULL);
+		} else if (__op->op == DRM_GPUVA_OP_MAP) {
+			vma = op->map.vma;
+			/* In case of madvise call, MAP is always preceded by REMAP.
+			 * Therefore tmp_attr will always have sane values, making it
+			 * safe to copy them to the new vma.
+			 */
+			if (is_madvise)
+				vma->attr = tmp_attr;
+		}
+	}
+
+	xe_vm_unlock(vm);
+	drm_gpuva_ops_free(&vm->gpuvm, ops);
+	return 0;
+
+unwind_ops:
+	vm_bind_ioctl_ops_unwind(vm, &ops, 1);
+free_ops:
+	drm_gpuva_ops_free(&vm->gpuvm, ops);
+	return err;
+}
+
+/**
+ * xe_vm_alloc_madvise_vma - Allocate VMA's with madvise ops
+ * @vm: Pointer to the xe_vm structure
+ * @start: Starting input address
+ * @range: Size of the input range
+ *
+ * This function splits existing VMAs to create new VMAs for the user-provided input range.
+ *
+ * Return: 0 if success
+ */
+int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t start, uint64_t range)
+{
+	struct drm_gpuvm_map_req map_req = {
+		.map.va.addr = start,
+		.map.va.range = range,
+	};
+
+	lockdep_assert_held_write(&vm->lock);
+
+	vm_dbg(&vm->xe->drm, "MADVISE_OPS_CREATE: addr=0x%016llx, size=0x%016llx", start, range);
+
+	return xe_vm_alloc_vma(vm, &map_req, true);
+}
+
+/**
+ * xe_vm_alloc_cpu_addr_mirror_vma - Allocate CPU addr mirror vma
+ * @vm: Pointer to the xe_vm structure
+ * @start: Starting input address
+ * @range: Size of the input range
+ *
+ * This function splits/merges existing VMAs to create new VMAs for the user-provided input range.
+ *
+ * Return: 0 if success
+ */
+int xe_vm_alloc_cpu_addr_mirror_vma(struct xe_vm *vm, uint64_t start, uint64_t range)
+{
+	struct drm_gpuvm_map_req map_req = {
+		.map.va.addr = start,
+		.map.va.range = range,
+	};
+
+	lockdep_assert_held_write(&vm->lock);
+
+	vm_dbg(&vm->xe->drm, "CPU_ADDR_MIRROR_VMA_OPS_CREATE: addr=0x%016llx, size=0x%016llx",
+	       start, range);
+
+	return xe_vm_alloc_vma(vm, &map_req, false);
+}
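The REMAP + MAP sequence that xe_vm_alloc_madvise_vma() relies on boils down to simple interval arithmetic: an advise range landing inside an existing VMA carves it into up to three pieces (untouched head, advised middle, untouched tail). A minimal, self-contained sketch of just that arithmetic — illustrative names only (split_vma and struct piece are not driver code):

```c
#include <assert.h>
#include <stdint.h>

struct piece { uint64_t start, end; };

/*
 * Sketch of how an advise range [req_start, req_end) carves up an
 * existing VMA [vma_start, vma_end): untouched head, advised middle,
 * untouched tail. Illustrative only; the driver expresses the same
 * split as DRM_GPUVA_OP_REMAP followed by DRM_GPUVA_OP_MAP.
 */
static int split_vma(uint64_t vma_start, uint64_t vma_end,
		     uint64_t req_start, uint64_t req_end,
		     struct piece out[3])
{
	uint64_t mid_start = req_start > vma_start ? req_start : vma_start;
	uint64_t mid_end = req_end < vma_end ? req_end : vma_end;
	int n = 0;

	if (req_start > vma_start)
		out[n++] = (struct piece){ vma_start, req_start };	/* head */
	out[n++] = (struct piece){ mid_start, mid_end };		/* advised */
	if (req_end < vma_end)
		out[n++] = (struct piece){ req_end, vma_end };		/* tail */
	return n;
}
```

With a range covering the whole VMA only the middle piece survives, which is consistent with the empty-ops early return in xe_vm_alloc_vma() when no split is needed.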
diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
index 2f213737c7e5..57f77c8430d6 100644
--- a/drivers/gpu/drm/xe/xe_vm.h
+++ b/drivers/gpu/drm/xe/xe_vm.h
@@ -66,6 +66,8 @@ static inline bool xe_vm_is_closed_or_banned(struct xe_vm *vm)
 struct xe_vma *
 xe_vm_find_overlapping_vma(struct xe_vm *vm, u64 start, u64 range);
 
+bool xe_vma_has_default_mem_attrs(struct xe_vma *vma);
+
 /**
  * xe_vm_has_scratch() - Whether the vm is configured for scratch PTEs
  * @vm: The vm
@@ -171,6 +173,12 @@ static inline bool xe_vma_is_userptr(struct xe_vma *vma)
 
 struct xe_vma *xe_vm_find_vma_by_addr(struct xe_vm *vm, u64 page_addr);
 
+int xe_vma_need_vram_for_atomic(struct xe_device *xe, struct xe_vma *vma, bool is_atomic);
+
+int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t addr, uint64_t size);
+
+int xe_vm_alloc_cpu_addr_mirror_vma(struct xe_vm *vm, uint64_t addr, uint64_t size);
+
 /**
  * to_userptr_vma() - Return a pointer to an embedding userptr vma
  * @vma: Pointer to the embedded struct xe_vma
@@ -191,7 +199,7 @@ int xe_vm_destroy_ioctl(struct drm_device *dev, void *data,
 			struct drm_file *file);
 int xe_vm_bind_ioctl(struct drm_device *dev, void *data,
 		     struct drm_file *file);
-
+int xe_vm_query_vmas_attrs_ioctl(struct drm_device *dev, void *data, struct drm_file *file);
 void xe_vm_close_and_put(struct xe_vm *vm);
 
 static inline bool xe_vm_in_fault_mode(struct xe_vm *vm)
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
new file mode 100644
index 000000000000..7813bdedacaa
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
@@ -0,0 +1,445 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2025 Intel Corporation
+ */
+
+#include "xe_vm_madvise.h"
+
+#include <linux/nospec.h>
+#include <drm/xe_drm.h>
+
+#include "xe_bo.h"
+#include "xe_pat.h"
+#include "xe_pt.h"
+#include "xe_svm.h"
+
+struct xe_vmas_in_madvise_range {
+	u64 addr;
+	u64 range;
+	struct xe_vma **vmas;
+	int num_vmas;
+	bool has_svm_vmas;
+	bool has_bo_vmas;
+	bool has_userptr_vmas;
+};
+
+static int get_vmas(struct xe_vm *vm, struct xe_vmas_in_madvise_range *madvise_range)
+{
+	u64 addr = madvise_range->addr;
+	u64 range = madvise_range->range;
+
+	struct xe_vma **__vmas;
+	struct drm_gpuva *gpuva;
+	int max_vmas = 8;
+
+	lockdep_assert_held(&vm->lock);
+
+	madvise_range->num_vmas = 0;
+	madvise_range->vmas = kmalloc_array(max_vmas, sizeof(*madvise_range->vmas), GFP_KERNEL);
+	if (!madvise_range->vmas)
+		return -ENOMEM;
+
+	vm_dbg(&vm->xe->drm, "VMA's in range: start=0x%016llx, end=0x%016llx", addr, addr + range);
+
+	drm_gpuvm_for_each_va_range(gpuva, &vm->gpuvm, addr, addr + range) {
+		struct xe_vma *vma = gpuva_to_vma(gpuva);
+
+		if (xe_vma_bo(vma))
+			madvise_range->has_bo_vmas = true;
+		else if (xe_vma_is_cpu_addr_mirror(vma))
+			madvise_range->has_svm_vmas = true;
+		else if (xe_vma_is_userptr(vma))
+			madvise_range->has_userptr_vmas = true;
+
+		if (madvise_range->num_vmas == max_vmas) {
+			max_vmas <<= 1;
+			__vmas = krealloc(madvise_range->vmas,
+					  max_vmas * sizeof(*madvise_range->vmas),
+					  GFP_KERNEL);
+			if (!__vmas) {
+				kfree(madvise_range->vmas);
+				return -ENOMEM;
+			}
+			madvise_range->vmas = __vmas;
+		}
+
+		madvise_range->vmas[madvise_range->num_vmas] = vma;
+		(madvise_range->num_vmas)++;
+	}
+
+	if (!madvise_range->num_vmas)
+		kfree(madvise_range->vmas);
+
+	vm_dbg(&vm->xe->drm, "madvise_range->num_vmas = %d\n", madvise_range->num_vmas);
+
+	return 0;
+}
+
+static void madvise_preferred_mem_loc(struct xe_device *xe, struct xe_vm *vm,
+				      struct xe_vma **vmas, int num_vmas,
+				      struct drm_xe_madvise *op)
+{
+	int i;
+
+	xe_assert(vm->xe, op->type == DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC);
+
+	for (i = 0; i < num_vmas; i++) {
+		/*TODO: Extend attributes to bo based vmas */
+		if ((vmas[i]->attr.preferred_loc.devmem_fd == op->preferred_mem_loc.devmem_fd &&
+		     vmas[i]->attr.preferred_loc.migration_policy ==
+		     op->preferred_mem_loc.migration_policy) ||
+		    !xe_vma_is_cpu_addr_mirror(vmas[i])) {
+			vmas[i]->skip_invalidation = true;
+		} else {
+			vmas[i]->skip_invalidation = false;
+			vmas[i]->attr.preferred_loc.devmem_fd = op->preferred_mem_loc.devmem_fd;
+			/* Until multi-device support is added, migration_policy
+			 * is unused and can be ignored.
+			 */
+			vmas[i]->attr.preferred_loc.migration_policy =
+						op->preferred_mem_loc.migration_policy;
+		}
+	}
+}
+
+static void madvise_atomic(struct xe_device *xe, struct xe_vm *vm,
+			   struct xe_vma **vmas, int num_vmas,
+			   struct drm_xe_madvise *op)
+{
+	struct xe_bo *bo;
+	int i;
+
+	xe_assert(vm->xe, op->type == DRM_XE_MEM_RANGE_ATTR_ATOMIC);
+	xe_assert(vm->xe, op->atomic.val <= DRM_XE_ATOMIC_CPU);
+
+	for (i = 0; i < num_vmas; i++) {
+		if (xe_vma_is_userptr(vmas[i]) &&
+		    !(op->atomic.val == DRM_XE_ATOMIC_DEVICE &&
+		      xe->info.has_device_atomics_on_smem)) {
+			vmas[i]->skip_invalidation = true;
+			continue;
+		}
+
+		if (vmas[i]->attr.atomic_access == op->atomic.val) {
+			vmas[i]->skip_invalidation = true;
+		} else {
+			vmas[i]->skip_invalidation = false;
+			vmas[i]->attr.atomic_access = op->atomic.val;
+		}
+
+		bo = xe_vma_bo(vmas[i]);
+		if (!bo || bo->attr.atomic_access == op->atomic.val)
+			continue;
+
+		vmas[i]->skip_invalidation = false;
+		xe_bo_assert_held(bo);
+		bo->attr.atomic_access = op->atomic.val;
+
+		/* Invalidate cpu page table, so bo can migrate to smem in next access */
+		if (xe_bo_is_vram(bo) &&
+		    (bo->attr.atomic_access == DRM_XE_ATOMIC_CPU ||
+		     bo->attr.atomic_access == DRM_XE_ATOMIC_GLOBAL))
+			ttm_bo_unmap_virtual(&bo->ttm);
+	}
+}
+
+static void madvise_pat_index(struct xe_device *xe, struct xe_vm *vm,
+			      struct xe_vma **vmas, int num_vmas,
+			      struct drm_xe_madvise *op)
+{
+	int i;
+
+	xe_assert(vm->xe, op->type == DRM_XE_MEM_RANGE_ATTR_PAT);
+
+	for (i = 0; i < num_vmas; i++) {
+		if (vmas[i]->attr.pat_index == op->pat_index.val) {
+			vmas[i]->skip_invalidation = true;
+		} else {
+			vmas[i]->skip_invalidation = false;
+			vmas[i]->attr.pat_index = op->pat_index.val;
+		}
+	}
+}
+
+typedef void (*madvise_func)(struct xe_device *xe, struct xe_vm *vm,
+			     struct xe_vma **vmas, int num_vmas,
+			     struct drm_xe_madvise *op);
+
+static const madvise_func madvise_funcs[] = {
+	[DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC] = madvise_preferred_mem_loc,
+	[DRM_XE_MEM_RANGE_ATTR_ATOMIC] = madvise_atomic,
+	[DRM_XE_MEM_RANGE_ATTR_PAT] = madvise_pat_index,
+};
+
+static u8 xe_zap_ptes_in_madvise_range(struct xe_vm *vm, u64 start, u64 end)
+{
+	struct drm_gpuva *gpuva;
+	struct xe_tile *tile;
+	u8 id, tile_mask = 0;
+
+	lockdep_assert_held_write(&vm->lock);
+
+	/* Wait for pending binds */
+	if (dma_resv_wait_timeout(xe_vm_resv(vm), DMA_RESV_USAGE_BOOKKEEP,
+				  false, MAX_SCHEDULE_TIMEOUT) <= 0)
+		XE_WARN_ON(1);
+
+	drm_gpuvm_for_each_va_range(gpuva, &vm->gpuvm, start, end) {
+		struct xe_vma *vma = gpuva_to_vma(gpuva);
+
+		if (vma->skip_invalidation || xe_vma_is_null(vma))
+			continue;
+
+		if (xe_vma_is_cpu_addr_mirror(vma)) {
+			tile_mask |= xe_svm_ranges_zap_ptes_in_range(vm,
+								      xe_vma_start(vma),
+								      xe_vma_end(vma));
+		} else {
+			for_each_tile(tile, vm->xe, id) {
+				if (xe_pt_zap_ptes(tile, vma)) {
+					tile_mask |= BIT(id);
+
+					/*
+					 * WRITE_ONCE pairs with READ_ONCE
+					 * in xe_vm_has_valid_gpu_mapping()
+					 */
+					WRITE_ONCE(vma->tile_invalidated,
+						   vma->tile_invalidated | BIT(id));
+				}
+			}
+		}
+	}
+
+	return tile_mask;
+}
+
+static int xe_vm_invalidate_madvise_range(struct xe_vm *vm, u64 start, u64 end)
+{
+	u8 tile_mask = xe_zap_ptes_in_madvise_range(vm, start, end);
+
+	if (!tile_mask)
+		return 0;
+
+	xe_device_wmb(vm->xe);
+
+	return xe_vm_range_tilemask_tlb_invalidation(vm, start, end, tile_mask);
+}
+
+static bool madvise_args_are_sane(struct xe_device *xe, const struct drm_xe_madvise *args)
+{
+	if (XE_IOCTL_DBG(xe, !args))
+		return false;
+
+	if (XE_IOCTL_DBG(xe, !IS_ALIGNED(args->start, SZ_4K)))
+		return false;
+
+	if (XE_IOCTL_DBG(xe, !IS_ALIGNED(args->range, SZ_4K)))
+		return false;
+
+	if (XE_IOCTL_DBG(xe, args->range < SZ_4K))
+		return false;
+
+	switch (args->type) {
+	case DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC:
+	{
+		s32 fd = (s32)args->preferred_mem_loc.devmem_fd;
+
+		if (XE_IOCTL_DBG(xe, fd < DRM_XE_PREFERRED_LOC_DEFAULT_SYSTEM))
+			return false;
+
+		if (XE_IOCTL_DBG(xe, args->preferred_mem_loc.migration_policy >
+				     DRM_XE_MIGRATE_ONLY_SYSTEM_PAGES))
+			return false;
+
+		if (XE_IOCTL_DBG(xe, args->preferred_mem_loc.pad))
+			return false;
+
+		if (XE_IOCTL_DBG(xe, args->preferred_mem_loc.reserved))
+			return false;
+		break;
+	}
+	case DRM_XE_MEM_RANGE_ATTR_ATOMIC:
+		if (XE_IOCTL_DBG(xe, args->atomic.val > DRM_XE_ATOMIC_CPU))
+			return false;
+
+		if (XE_IOCTL_DBG(xe, args->atomic.pad))
+			return false;
+
+		if (XE_IOCTL_DBG(xe, args->atomic.reserved))
+			return false;
+
+		break;
+	case DRM_XE_MEM_RANGE_ATTR_PAT:
+	{
+		u16 coh_mode = xe_pat_index_get_coh_mode(xe, args->pat_index.val);
+
+		if (XE_IOCTL_DBG(xe, !coh_mode))
+			return false;
+
+		if (XE_WARN_ON(coh_mode > XE_COH_AT_LEAST_1WAY))
+			return false;
+
+		if (XE_IOCTL_DBG(xe, args->pat_index.pad))
+			return false;
+
+		if (XE_IOCTL_DBG(xe, args->pat_index.reserved))
+			return false;
+		break;
+	}
+	default:
+		if (XE_IOCTL_DBG(xe, 1))
+			return false;
+	}
+
+	if (XE_IOCTL_DBG(xe, args->reserved[0] || args->reserved[1]))
+		return false;
+
+	return true;
+}
+
+static bool check_bo_args_are_sane(struct xe_vm *vm, struct xe_vma **vmas,
+				   int num_vmas, u32 atomic_val)
+{
+	struct xe_device *xe = vm->xe;
+	struct xe_bo *bo;
+	int i;
+
+	for (i = 0; i < num_vmas; i++) {
+		bo = xe_vma_bo(vmas[i]);
+		if (!bo)
+			continue;
+		/*
+		 * NOTE: The following atomic checks are platform-specific. For example,
+		 * if a device supports CXL atomics, these may not be necessary or
+		 * may behave differently.
+		 */
+		if (XE_IOCTL_DBG(xe, atomic_val == DRM_XE_ATOMIC_CPU &&
+				 !(bo->flags & XE_BO_FLAG_SYSTEM)))
+			return false;
+
+		if (XE_IOCTL_DBG(xe, atomic_val == DRM_XE_ATOMIC_DEVICE &&
+				 !(bo->flags & XE_BO_FLAG_VRAM0) &&
+				 !(bo->flags & XE_BO_FLAG_VRAM1) &&
+				 !(bo->flags & XE_BO_FLAG_SYSTEM &&
+				   xe->info.has_device_atomics_on_smem)))
+			return false;
+
+		if (XE_IOCTL_DBG(xe, atomic_val == DRM_XE_ATOMIC_GLOBAL &&
+				 (!(bo->flags & XE_BO_FLAG_SYSTEM) ||
+				  (!(bo->flags & XE_BO_FLAG_VRAM0) &&
+				   !(bo->flags & XE_BO_FLAG_VRAM1)))))
+			return false;
+	}
+	return true;
+}
+
+/**
+ * xe_vm_madvise_ioctl - Handle MADVISE ioctl for a VM
+ * @dev: DRM device pointer
+ * @data: Pointer to ioctl data (drm_xe_madvise*)
+ * @file: DRM file pointer
+ *
+ * Handles the MADVISE ioctl to provide memory advice for VMAs within the
+ * input range.
+ *
+ * Return: 0 on success or a negative error code on failure.
+ */
+int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
+{
+	struct xe_device *xe = to_xe_device(dev);
+	struct xe_file *xef = to_xe_file(file);
+	struct drm_xe_madvise *args = data;
+	struct xe_vmas_in_madvise_range madvise_range = {.addr = args->start,
+							 .range = args->range, };
+	struct xe_vm *vm;
+	struct drm_exec exec;
+	int err, attr_type;
+
+	vm = xe_vm_lookup(xef, args->vm_id);
+	if (XE_IOCTL_DBG(xe, !vm))
+		return -EINVAL;
+
+	if (!madvise_args_are_sane(vm->xe, args)) {
+		err = -EINVAL;
+		goto put_vm;
+	}
+
+	xe_svm_flush(vm);
+
+	err = down_write_killable(&vm->lock);
+	if (err)
+		goto put_vm;
+
+	if (XE_IOCTL_DBG(xe, xe_vm_is_closed_or_banned(vm))) {
+		err = -ENOENT;
+		goto unlock_vm;
+	}
+
+	err = xe_vm_alloc_madvise_vma(vm, args->start, args->range);
+	if (err)
+		goto unlock_vm;
+
+	err = get_vmas(vm, &madvise_range);
+	if (err || !madvise_range.num_vmas)
+		goto unlock_vm;
+
+	if (madvise_range.has_bo_vmas) {
+		if (args->type == DRM_XE_MEM_RANGE_ATTR_ATOMIC) {
+			if (!check_bo_args_are_sane(vm, madvise_range.vmas,
+						    madvise_range.num_vmas,
+						    args->atomic.val)) {
+				err = -EINVAL;
+				goto unlock_vm;
+			}
+		}
+
+		drm_exec_init(&exec, DRM_EXEC_IGNORE_DUPLICATES | DRM_EXEC_INTERRUPTIBLE_WAIT, 0);
+		drm_exec_until_all_locked(&exec) {
+			for (int i = 0; i < madvise_range.num_vmas; i++) {
+				struct xe_bo *bo = xe_vma_bo(madvise_range.vmas[i]);
+
+				if (!bo)
+					continue;
+				err = drm_exec_lock_obj(&exec, &bo->ttm.base);
+				drm_exec_retry_on_contention(&exec);
+				if (err)
+					goto err_fini;
+			}
+		}
+	}
+
+	if (madvise_range.has_userptr_vmas) {
+		err = down_read_interruptible(&vm->userptr.notifier_lock);
+		if (err)
+			goto err_fini;
+	}
+
+	if (madvise_range.has_svm_vmas) {
+		err = down_read_interruptible(&vm->svm.gpusvm.notifier_lock);
+		if (err)
+			goto unlock_userptr;
+	}
+
+	attr_type = array_index_nospec(args->type, ARRAY_SIZE(madvise_funcs));
+	madvise_funcs[attr_type](xe, vm, madvise_range.vmas, madvise_range.num_vmas, args);
+
+	err = xe_vm_invalidate_madvise_range(vm, args->start, args->start + args->range);
+
+	if (madvise_range.has_svm_vmas)
+		xe_svm_notifier_unlock(vm);
+
+unlock_userptr:
+	if (madvise_range.has_userptr_vmas)
+		up_read(&vm->userptr.notifier_lock);
+err_fini:
+	if (madvise_range.has_bo_vmas)
+		drm_exec_fini(&exec);
+	kfree(madvise_range.vmas);
+	madvise_range.vmas = NULL;
+unlock_vm:
+	up_write(&vm->lock);
+put_vm:
+	xe_vm_put(vm);
+	return err;
+}
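The argument checks in madvise_args_are_sane() are easy to mirror in userspace to fail fast before entering the ioctl. A small sketch of just the start/range validation — the helper name madvise_range_ok is made up, and the kernel additionally validates type-specific fields, pad, and reserved bits:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Userspace mirror of the basic range checks in madvise_args_are_sane():
 * start and range must be 4 KiB aligned, and range must be at least
 * 4 KiB. Illustrative helper, not part of the uapi.
 */
static int madvise_range_ok(uint64_t start, uint64_t range)
{
	const uint64_t sz_4k = 0x1000;

	if (start & (sz_4k - 1))
		return 0;	/* unaligned start */
	if (range & (sz_4k - 1))
		return 0;	/* unaligned range */
	if (range < sz_4k)
		return 0;	/* range too small */
	return 1;
}
```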
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.h b/drivers/gpu/drm/xe/xe_vm_madvise.h
new file mode 100644
index 000000000000..b0e1fc445f23
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2025 Intel Corporation
+ */
+
+#ifndef _XE_VM_MADVISE_H_
+#define _XE_VM_MADVISE_H_
+
+struct drm_device;
+struct drm_file;
+
+int xe_vm_madvise_ioctl(struct drm_device *dev, void *data,
+			struct drm_file *file);
+
+#endif
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index 8a07feef503b..b5108d010786 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -77,6 +77,44 @@ struct xe_userptr {
 #endif
 };
 
+/**
+ * struct xe_vma_mem_attr - memory attributes associated with vma
+ */
+struct xe_vma_mem_attr {
+	/** @preferred_loc: preferred memory location */
+	struct {
+		/** @preferred_loc.migration_policy: Pages migration policy */
+		u32 migration_policy;
+
+		/**
+		 * @preferred_loc.devmem_fd: used for determining the pagemap_fd
+		 * requested by the user. DRM_XE_PREFERRED_LOC_DEFAULT_SYSTEM and
+		 * DRM_XE_PREFERRED_LOC_DEFAULT_DEVICE mean system memory or the
+		 * closest device memory, respectively.
+		 */
+		u32 devmem_fd;
+	} preferred_loc;
+
+	/**
+	 * @atomic_access: The atomic access type for the vma
+	 * See %DRM_XE_ATOMIC_UNDEFINED, %DRM_XE_ATOMIC_DEVICE,
+	 * %DRM_XE_ATOMIC_GLOBAL, and %DRM_XE_ATOMIC_CPU for possible
+	 * values. These are defined in uapi/drm/xe_drm.h.
+	 */
+	u32 atomic_access;
+
+	/**
+	 * @default_pat_index: The pat index for VMA set during first bind by user.
+	 */
+	u16 default_pat_index;
+
+	/**
+	 * @pat_index: The pat index to use when encoding the PTEs for this vma.
+	 * same as default_pat_index unless overwritten by madvise.
+	 */
+	u16 pat_index;
+};
+
 struct xe_vma {
 	/** @gpuva: Base GPUVA object */
 	struct drm_gpuva gpuva;
@@ -126,15 +164,22 @@ struct xe_vma {
 	u8 tile_staged;
 
 	/**
-	 * @pat_index: The pat index to use when encoding the PTEs for this vma.
+	 * @skip_invalidation: Used in madvise to avoid invalidation
+	 * if mem attributes don't change
 	 */
-	u16 pat_index;
+	bool skip_invalidation;
 
 	/**
 	 * @ufence: The user fence that was provided with MAP.
 	 * Needs to be signalled before UNMAP can be processed.
 	 */
 	struct xe_user_fence *ufence;
+
+	/**
+	 * @attr: The attributes of vma which determines the migration policy
+	 * and encoding of the PTEs for this vma.
+	 */
+	struct xe_vma_mem_attr attr;
 };
 
 /**
@@ -395,8 +440,11 @@ struct xe_vma_op_prefetch_range {
 	struct xarray range;
 	/** @ranges_count: number of svm ranges to map */
 	u32 ranges_count;
-	/** @region: memory region to prefetch to */
-	u32 region;
+	/**
+	 * @tile: Pointer to the tile structure containing memory to prefetch.
+	 *        NULL if the requested prefetch region is smem.
+	 */
+	struct xe_tile *tile;
 };
 
 /** enum xe_vma_op_flags - flags for VMA operation */
@@ -462,6 +510,7 @@ struct xe_vma_ops {
 	struct xe_vm_pgtable_update_ops pt_update_ops[XE_MAX_TILES_PER_DEVICE];
 	/** @flag: signify the properties within xe_vma_ops*/
 #define XE_VMA_OPS_FLAG_HAS_SVM_PREFETCH BIT(0)
+#define XE_VMA_OPS_FLAG_MADVISE          BIT(1)
 	u32 flags;
 #ifdef TEST_VM_OPS_ERROR
 	/** @inject_error: inject error to test error handling */
diff --git a/include/drm/drm_gpusvm.h b/include/drm/drm_gpusvm.h
index 8d613e9b2690..0e336148309d 100644
--- a/include/drm/drm_gpusvm.h
+++ b/include/drm/drm_gpusvm.h
@@ -282,6 +282,10 @@ void drm_gpusvm_range_unmap_pages(struct drm_gpusvm *gpusvm,
 bool drm_gpusvm_has_mapping(struct drm_gpusvm *gpusvm, unsigned long start,
 			    unsigned long end);
 
+struct drm_gpusvm_notifier *
+drm_gpusvm_notifier_find(struct drm_gpusvm *gpusvm, unsigned long start,
+			 unsigned long end);
+
 struct drm_gpusvm_range *
 drm_gpusvm_range_find(struct drm_gpusvm_notifier *notifier, unsigned long start,
 		      unsigned long end);
@@ -434,4 +438,70 @@ __drm_gpusvm_range_next(struct drm_gpusvm_range *range)
 	     (range__) && (drm_gpusvm_range_start(range__) < (end__));	\
 	     (range__) = __drm_gpusvm_range_next(range__))
 
+/**
+ * drm_gpusvm_for_each_range_safe() - Safely iterate over GPU SVM ranges in a notifier
+ * @range__: Iterator variable for the ranges
+ * @next__: Iterator variable for the ranges' temporary storage
+ * @notifier__: Pointer to the GPU SVM notifier
+ * @start__: Start address of the range
+ * @end__: End address of the range
+ *
+ * This macro is used to iterate over GPU SVM ranges in a notifier while
+ * removing ranges from it.
+ */
+#define drm_gpusvm_for_each_range_safe(range__, next__, notifier__, start__, end__)	\
+	for ((range__) = drm_gpusvm_range_find((notifier__), (start__), (end__)),	\
+	     (next__) = __drm_gpusvm_range_next(range__);				\
+	     (range__) && (drm_gpusvm_range_start(range__) < (end__));			\
+	     (range__) = (next__), (next__) = __drm_gpusvm_range_next(range__))
+
+/**
+ * __drm_gpusvm_notifier_next() - get the next drm_gpusvm_notifier in the list
+ * @notifier: a pointer to the current drm_gpusvm_notifier
+ *
+ * Return: A pointer to the next drm_gpusvm_notifier if available, or NULL if
+ *         the current notifier is the last one or if the input notifier is
+ *         NULL.
+ */
+static inline struct drm_gpusvm_notifier *
+__drm_gpusvm_notifier_next(struct drm_gpusvm_notifier *notifier)
+{
+	if (notifier && !list_is_last(&notifier->entry,
+				      &notifier->gpusvm->notifier_list))
+		return list_next_entry(notifier, entry);
+
+	return NULL;
+}
+
+/**
+ * drm_gpusvm_for_each_notifier() - Iterate over GPU SVM notifiers in a gpusvm
+ * @notifier__: Iterator variable for the notifiers
+ * @gpusvm__: Pointer to the GPU SVM structure
+ * @start__: Start address of the notifier
+ * @end__: End address of the notifier
+ *
+ * This macro is used to iterate over GPU SVM notifiers in a gpusvm.
+ */
+#define drm_gpusvm_for_each_notifier(notifier__, gpusvm__, start__, end__)		\
+	for ((notifier__) = drm_gpusvm_notifier_find((gpusvm__), (start__), (end__));	\
+	     (notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__));		\
+	     (notifier__) = __drm_gpusvm_notifier_next(notifier__))
+
+/**
+ * drm_gpusvm_for_each_notifier_safe() - Safely iterate over GPU SVM notifiers in a gpusvm
+ * @notifier__: Iterator variable for the notifiers
+ * @next__: Iterator variable for the notifiers' temporary storage
+ * @gpusvm__: Pointer to the GPU SVM structure
+ * @start__: Start address of the notifier
+ * @end__: End address of the notifier
+ *
+ * This macro is used to iterate over GPU SVM notifiers in a gpusvm while
+ * removing notifiers from it.
+ */
+#define drm_gpusvm_for_each_notifier_safe(notifier__, next__, gpusvm__, start__, end__)	\
+	for ((notifier__) = drm_gpusvm_notifier_find((gpusvm__), (start__), (end__)),	\
+	     (next__) = __drm_gpusvm_notifier_next(notifier__);				\
+	     (notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__));		\
+	     (notifier__) = (next__), (next__) = __drm_gpusvm_notifier_next(notifier__))
+
 #endif /* __DRM_GPUSVM_H__ */
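The _safe iterator variants added above follow the usual kernel pattern: capture the successor before visiting the current element, so the loop body may unlink or free the element being visited. A generic sketch of the same idea on a plain singly linked list — for_each_node_safe and struct node are illustrative, not DRM code:

```c
#include <assert.h>
#include <stddef.h>

struct node {
	int v;
	struct node *next;
};

/*
 * Same shape as drm_gpusvm_for_each_notifier_safe(): fetch the next
 * element up front so the loop body may safely unlink the current one.
 */
#define for_each_node_safe(n, tmp, head)			\
	for ((n) = (head), (tmp) = (n) ? (n)->next : NULL;	\
	     (n);						\
	     (n) = (tmp), (tmp) = (n) ? (n)->next : NULL)
```

Because `tmp` already points at the successor when the body runs, clearing or freeing `n->next` mid-walk does not break the iteration.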
diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
index 274532facfd6..4a22b9d848f7 100644
--- a/include/drm/drm_gpuvm.h
+++ b/include/drm/drm_gpuvm.h
@@ -160,15 +160,6 @@ struct drm_gpuva *drm_gpuva_find_first(struct drm_gpuvm *gpuvm,
 struct drm_gpuva *drm_gpuva_find_prev(struct drm_gpuvm *gpuvm, u64 start);
 struct drm_gpuva *drm_gpuva_find_next(struct drm_gpuvm *gpuvm, u64 end);
 
-static inline void drm_gpuva_init(struct drm_gpuva *va, u64 addr, u64 range,
-				  struct drm_gem_object *obj, u64 offset)
-{
-	va->va.addr = addr;
-	va->va.range = range;
-	va->gem.obj = obj;
-	va->gem.offset = offset;
-}
-
 /**
  * drm_gpuva_invalidate() - sets whether the backing GEM of this &drm_gpuva is
  * invalidated
@@ -1058,10 +1049,23 @@ struct drm_gpuva_ops {
  */
 #define drm_gpuva_next_op(op) list_next_entry(op, entry)
 
+/**
+ * struct drm_gpuvm_map_req - arguments passed to drm_gpuvm_sm_map[_ops_create]()
+ */
+struct drm_gpuvm_map_req {
+	/**
+	 * @map: struct drm_gpuva_op_map
+	 */
+	struct drm_gpuva_op_map map;
+};
+
 struct drm_gpuva_ops *
 drm_gpuvm_sm_map_ops_create(struct drm_gpuvm *gpuvm,
-			    u64 addr, u64 range,
-			    struct drm_gem_object *obj, u64 offset);
+			    const struct drm_gpuvm_map_req *req);
+struct drm_gpuva_ops *
+drm_gpuvm_madvise_ops_create(struct drm_gpuvm *gpuvm,
+			     const struct drm_gpuvm_map_req *req);
+
 struct drm_gpuva_ops *
 drm_gpuvm_sm_unmap_ops_create(struct drm_gpuvm *gpuvm,
 			      u64 addr, u64 range);
@@ -1079,8 +1083,10 @@ void drm_gpuva_ops_free(struct drm_gpuvm *gpuvm,
 static inline void drm_gpuva_init_from_op(struct drm_gpuva *va,
 					  struct drm_gpuva_op_map *op)
 {
-	drm_gpuva_init(va, op->va.addr, op->va.range,
-		       op->gem.obj, op->gem.offset);
+	va->va.addr = op->va.addr;
+	va->va.range = op->va.range;
+	va->gem.obj = op->gem.obj;
+	va->gem.offset = op->gem.offset;
 }
 
 /**
@@ -1205,16 +1211,14 @@ struct drm_gpuvm_ops {
 };
 
 int drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm, void *priv,
-		     u64 addr, u64 range,
-		     struct drm_gem_object *obj, u64 offset);
+		     const struct drm_gpuvm_map_req *req);
 
 int drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm, void *priv,
 		       u64 addr, u64 range);
 
 int drm_gpuvm_sm_map_exec_lock(struct drm_gpuvm *gpuvm,
 			  struct drm_exec *exec, unsigned int num_fences,
-			  u64 req_addr, u64 req_range,
-			  struct drm_gem_object *obj, u64 offset);
+			  struct drm_gpuvm_map_req *req);
 
 int drm_gpuvm_sm_unmap_exec_lock(struct drm_gpuvm *gpuvm, struct drm_exec *exec,
 				 u64 req_addr, u64 req_range);
diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
index c721e130c1d2..eaf713706387 100644
--- a/include/uapi/drm/xe_drm.h
+++ b/include/uapi/drm/xe_drm.h
@@ -81,6 +81,8 @@ extern "C" {
  *  - &DRM_IOCTL_XE_EXEC
  *  - &DRM_IOCTL_XE_WAIT_USER_FENCE
  *  - &DRM_IOCTL_XE_OBSERVATION
+ *  - &DRM_IOCTL_XE_MADVISE
+ *  - &DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS
  */
 
 /*
@@ -102,6 +104,8 @@ extern "C" {
 #define DRM_XE_EXEC			0x09
 #define DRM_XE_WAIT_USER_FENCE		0x0a
 #define DRM_XE_OBSERVATION		0x0b
+#define DRM_XE_MADVISE			0x0c
+#define DRM_XE_VM_QUERY_MEM_RANGE_ATTRS	0x0d
 
 /* Must be kept compact -- no holes */
 
@@ -117,6 +121,8 @@ extern "C" {
 #define DRM_IOCTL_XE_EXEC			DRM_IOW(DRM_COMMAND_BASE + DRM_XE_EXEC, struct drm_xe_exec)
 #define DRM_IOCTL_XE_WAIT_USER_FENCE		DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_WAIT_USER_FENCE, struct drm_xe_wait_user_fence)
 #define DRM_IOCTL_XE_OBSERVATION		DRM_IOW(DRM_COMMAND_BASE + DRM_XE_OBSERVATION, struct drm_xe_observation_param)
+#define DRM_IOCTL_XE_MADVISE			DRM_IOW(DRM_COMMAND_BASE + DRM_XE_MADVISE, struct drm_xe_madvise)
+#define DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS	DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_VM_QUERY_MEM_RANGE_ATTRS, struct drm_xe_vm_query_mem_range_attr)
 
 /**
  * DOC: Xe IOCTL Extensions
@@ -1007,6 +1013,10 @@ struct drm_xe_vm_destroy {
  *    valid on VMs with DRM_XE_VM_CREATE_FLAG_FAULT_MODE set. The CPU address
  *    mirror flag are only valid for DRM_XE_VM_BIND_OP_MAP operations, the BO
  *    handle MBZ, and the BO offset MBZ.
+ *
+ * The @prefetch_mem_region_instance for %DRM_XE_VM_BIND_OP_PREFETCH can also be:
+ *  - %DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC, which ensures prefetching occurs in
+ *    the memory region advised by madvise.
  */
 struct drm_xe_vm_bind_op {
 	/** @extensions: Pointer to the first extension struct, if any */
@@ -1112,6 +1122,7 @@ struct drm_xe_vm_bind_op {
 	/** @flags: Bind flags */
 	__u32 flags;
 
+#define DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC	-1
 	/**
 	 * @prefetch_mem_region_instance: Memory region to prefetch VMA to.
 	 * It is a region instance, not a mask.
@@ -1978,6 +1989,269 @@ struct drm_xe_query_eu_stall {
 	__u64 sampling_rates[];
 };
 
+/**
+ * struct drm_xe_madvise - Input of &DRM_IOCTL_XE_MADVISE
+ *
+ * This structure is used to set memory attributes for a virtual address range
+ * in a VM. The type of attribute is specified by @type, and the corresponding
+ * union member is used to provide additional parameters for @type.
+ *
+ * Supported attribute types:
+ * - DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC: Set preferred memory location.
+ * - DRM_XE_MEM_RANGE_ATTR_ATOMIC: Set atomic access policy.
+ * - DRM_XE_MEM_RANGE_ATTR_PAT: Set page attribute table index.
+ *
+ * Example:
+ *
+ * .. code-block:: C
+ *
+ *    struct drm_xe_madvise madvise = {
+ *        .vm_id = vm_id,
+ *        .start = 0x100000,
+ *        .range = 0x2000,
+ *        .type = DRM_XE_MEM_RANGE_ATTR_ATOMIC,
+ *        .atomic = { .val = DRM_XE_ATOMIC_DEVICE },
+ *    };
+ *
+ *    ioctl(fd, DRM_IOCTL_XE_MADVISE, &madvise);
+ *
+ */
+struct drm_xe_madvise {
+	/** @extensions: Pointer to the first extension struct, if any */
+	__u64 extensions;
+
+	/** @start: start of the virtual address range */
+	__u64 start;
+
+	/** @range: size of the virtual address range */
+	__u64 range;
+
+	/** @vm_id: vm_id of the virtual range */
+	__u32 vm_id;
+
+#define DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC	0
+#define DRM_XE_MEM_RANGE_ATTR_ATOMIC		1
+#define DRM_XE_MEM_RANGE_ATTR_PAT		2
+	/** @type: type of attribute */
+	__u32 type;
+
+	union {
+		/**
+		 * @preferred_mem_loc: preferred memory location
+		 *
+		 * Used when @type == DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC
+		 *
+		 * Supported values for @preferred_mem_loc.devmem_fd:
+		 * - DRM_XE_PREFERRED_LOC_DEFAULT_DEVICE: use the VRAM of the
+		 *   faulting tile as the preferred location
+		 * - DRM_XE_PREFERRED_LOC_DEFAULT_SYSTEM: use system memory as
+		 *   the preferred location
+		 *
+		 * Supported values for @preferred_mem_loc.migration_policy:
+		 * - DRM_XE_MIGRATE_ALL_PAGES
+		 * - DRM_XE_MIGRATE_ONLY_SYSTEM_PAGES
+		 */
+		struct {
+#define DRM_XE_PREFERRED_LOC_DEFAULT_DEVICE	0
+#define DRM_XE_PREFERRED_LOC_DEFAULT_SYSTEM	-1
+			/** @preferred_mem_loc.devmem_fd: fd for preferred loc */
+			__u32 devmem_fd;
+
+#define DRM_XE_MIGRATE_ALL_PAGES		0
+#define DRM_XE_MIGRATE_ONLY_SYSTEM_PAGES	1
+			/** @preferred_mem_loc.migration_policy: Page migration policy */
+			__u16 migration_policy;
+
+			/** @preferred_mem_loc.pad: MBZ */
+			__u16 pad;
+
+			/** @preferred_mem_loc.reserved: Reserved */
+			__u64 reserved;
+		} preferred_mem_loc;
+
+		/**
+		 * @atomic: Atomic access policy
+		 *
+		 * Used when @type == DRM_XE_MEM_RANGE_ATTR_ATOMIC.
+		 *
+		 * Supported values for @atomic.val:
+		 * - DRM_XE_ATOMIC_UNDEFINED: Undefined or default behaviour.
+		 *   Supports both GPU and CPU atomic operations for the system
+		 *   allocator, and GPU atomics only for the normal (BO) allocator.
+		 * - DRM_XE_ATOMIC_DEVICE: Supports GPU atomic operations
+		 * - DRM_XE_ATOMIC_GLOBAL: Supports both GPU and CPU atomic operations
+		 * - DRM_XE_ATOMIC_CPU: Supports CPU atomic operations
+		 */
+		struct {
+#define DRM_XE_ATOMIC_UNDEFINED	0
+#define DRM_XE_ATOMIC_DEVICE	1
+#define DRM_XE_ATOMIC_GLOBAL	2
+#define DRM_XE_ATOMIC_CPU	3
+			/** @atomic.val: value of atomic operation */
+			__u32 val;
+
+			/** @atomic.pad: MBZ */
+			__u32 pad;
+
+			/** @atomic.reserved: Reserved */
+			__u64 reserved;
+		} atomic;
+
+		/**
+		 * @pat_index: Page attribute table index
+		 *
+		 * Used when @type == DRM_XE_MEM_RANGE_ATTR_PAT.
+		 */
+		struct {
+			/** @pat_index.val: PAT index value */
+			__u32 val;
+
+			/** @pat_index.pad: MBZ */
+			__u32 pad;
+
+			/** @pat_index.reserved: Reserved */
+			__u64 reserved;
+		} pat_index;
+	};
+
+	/** @reserved: Reserved */
+	__u64 reserved[2];
+};
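For the preferred-location attribute, the analogous setup can be sketched as follows. The struct below is a simplified, hypothetical stand-in for drm_xe_madvise with the atomic/pat union members elided; real code would include <drm/xe_drm.h> and pass the result to ioctl(fd, DRM_IOCTL_XE_MADVISE, &m):

```c
#include <stdint.h>
#include <string.h>

/* Stand-ins mirroring the values defined above (sketch only). */
#define XE_MEM_RANGE_ATTR_PREFERRED_LOC	0
#define XE_PREFERRED_LOC_DEFAULT_SYSTEM	((uint32_t)-1)
#define XE_MIGRATE_ALL_PAGES		0

struct xe_madvise_sketch {
	uint64_t extensions;
	uint64_t start;
	uint64_t range;
	uint32_t vm_id;
	uint32_t type;
	struct {
		uint32_t devmem_fd;
		uint16_t migration_policy;
		uint16_t pad;
		uint64_t reserved;
	} preferred_mem_loc;
	uint64_t reserved[2];
};

/* Prepare an advise that prefers system memory for [start, start + range),
 * migrating all pages to the preferred location. */
static struct xe_madvise_sketch prefer_system(uint32_t vm_id, uint64_t start,
					      uint64_t range)
{
	struct xe_madvise_sketch m;

	memset(&m, 0, sizeof(m));
	m.vm_id = vm_id;
	m.start = start;
	m.range = range;
	m.type = XE_MEM_RANGE_ATTR_PREFERRED_LOC;
	m.preferred_mem_loc.devmem_fd = XE_PREFERRED_LOC_DEFAULT_SYSTEM;
	m.preferred_mem_loc.migration_policy = XE_MIGRATE_ALL_PAGES;
	return m;
}
```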
+
+/**
+ * struct drm_xe_mem_range_attr - Output of &DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS
+ *
+ * This structure is provided by userspace and filled by the KMD in response to
+ * the DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS ioctl. It describes the memory
+ * attributes of the memory ranges within a user-specified address range in a
+ * VM.
+ *
+ * The structure includes information such as the atomic access policy,
+ * page attribute table (PAT) index, and preferred memory location.
+ * Userspace allocates an array of these structures and passes a pointer to the
+ * ioctl to retrieve the attributes of each memory range.
+ */
+struct drm_xe_mem_range_attr {
+	 /** @extensions: Pointer to the first extension struct, if any */
+	__u64 extensions;
+
+	/** @start: start of the memory range */
+	__u64 start;
+
+	/** @end: end of the memory range */
+	__u64 end;
+
+	/** @preferred_mem_loc: preferred memory location */
+	struct {
+		/** @preferred_mem_loc.devmem_fd: fd for preferred loc */
+		__u32 devmem_fd;
+
+		/** @preferred_mem_loc.migration_policy: Page migration policy */
+		__u32 migration_policy;
+	} preferred_mem_loc;
+
+	/** @atomic: Atomic access policy */
+	struct {
+		/** @atomic.val: atomic attribute */
+		__u32 val;
+
+		/** @atomic.reserved: Reserved */
+		__u32 reserved;
+	} atomic;
+
+	 /** @pat_index: Page attribute table index */
+	struct {
+		/** @pat_index.val: PAT index */
+		__u32 val;
+
+		/** @pat_index.reserved: Reserved */
+		__u32 reserved;
+	} pat_index;
+
+	/** @reserved: Reserved */
+	__u64 reserved[2];
+};
+
+/**
+ * struct drm_xe_vm_query_mem_range_attr - Input of &DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS
+ *
+ * This structure is used to query the memory attributes of memory ranges
+ * within a user-specified address range in a VM. It provides detailed
+ * information about each memory range, including the atomic access policy,
+ * page attribute table (PAT) index, and preferred memory location.
+ *
+ * Userspace first calls the ioctl with @num_mem_ranges = 0,
+ * @sizeof_mem_range_attr = 0 and @vector_of_mem_attr = NULL to retrieve
+ * the number of memory ranges and the size of each memory range attribute.
+ * It then allocates a buffer of @num_mem_ranges * @sizeof_mem_range_attr
+ * bytes and calls the ioctl again to fill the buffer.
+ *
+ * If the second call fails with -ENOSPC, the memory ranges changed between
+ * the two calls; retry with @num_mem_ranges = 0, @sizeof_mem_range_attr = 0
+ * and @vector_of_mem_attr = NULL to re-query the sizes, then repeat the
+ * second call with a re-sized buffer.
+ *
+ * Example:
+ *
+ * .. code-block:: C
+ *
+ *    struct drm_xe_vm_query_mem_range_attr query = {
+ *         .vm_id = vm_id,
+ *         .start = 0x100000,
+ *         .range = 0x2000,
+ *    };
+ *
+ *    // First call: get the number of memory ranges and the size of each
+ *    // attribute entry
+ *    ioctl(fd, DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS, &query);
+ *
+ *    // Allocate a buffer for the memory range attributes
+ *    char *buf = malloc(query.num_mem_ranges * query.sizeof_mem_range_attr);
+ *
+ *    query.vector_of_mem_attr = (uintptr_t)buf;
+ *
+ *    // Second call: actually fill the memory attributes
+ *    ioctl(fd, DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS, &query);
+ *
+ *    // Iterate over the returned attributes, stepping by the kernel-reported
+ *    // entry size rather than sizeof(struct drm_xe_mem_range_attr)
+ *    for (unsigned int i = 0; i < query.num_mem_ranges; ++i) {
+ *        struct drm_xe_mem_range_attr *attr = (struct drm_xe_mem_range_attr *)
+ *             (buf + i * query.sizeof_mem_range_attr);
+ *
+ *        // Do something with attr
+ *    }
+ *
+ *    free(buf);
+ */
+struct drm_xe_vm_query_mem_range_attr {
+	/** @extensions: Pointer to the first extension struct, if any */
+	__u64 extensions;
+
+	/** @vm_id: vm_id of the virtual range */
+	__u32 vm_id;
+
+	/** @num_mem_ranges: number of memory ranges in the queried range, filled by the KMD */
+	__u32 num_mem_ranges;
+
+	/** @start: start of the virtual address range */
+	__u64 start;
+
+	/** @range: size of the virtual address range */
+	__u64 range;
+
+	/** @sizeof_mem_range_attr: size of struct drm_xe_mem_range_attr */
+	__u64 sizeof_mem_range_attr;
+
+	/** @vector_of_mem_attr: userptr to array of struct drm_xe_mem_range_attr */
+	__u64 vector_of_mem_attr;
+
+	/** @reserved: Reserved */
+	__u64 reserved[2];
+};
+
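Because the kernel reports @sizeof_mem_range_attr itself, and a newer kernel may return entries larger than the struct userspace was compiled against, the returned array must be walked with that stride rather than sizeof(struct drm_xe_mem_range_attr). A sketch of such a walker, assuming only that each entry begins with the @extensions/@start/@end fields laid out as above:

```c
#include <stdint.h>

/* Simplified stand-in for the leading fields of drm_xe_mem_range_attr;
 * a real build would include <drm/xe_drm.h> instead. */
struct mem_range_hdr {
	uint64_t extensions;
	uint64_t start;
	uint64_t end;
};

/* Walk 'num' entries of 'stride' bytes each and return the total number of
 * bytes covered by the reported ranges. The stride comes from the
 * kernel-filled sizeof_mem_range_attr, so entries from a newer kernel
 * (with a larger struct) are still stepped over correctly. */
static uint64_t total_range_bytes(const void *buf, uint32_t num, uint64_t stride)
{
	const unsigned char *p = buf;
	uint64_t total = 0;

	for (uint32_t i = 0; i < num; i++) {
		const struct mem_range_hdr *attr =
			(const struct mem_range_hdr *)(p + i * stride);

		total += attr->end - attr->start;
	}
	return total;
}
```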
 #if defined(__cplusplus)
 }
 #endif
-- 
2.34.1



* ✗ CI.checkpatch: warning for drm/xe: MADVISE SQUASH for CI-v7_with_comments_addressed_gpuvm_changes
  2025-08-13 12:38 [PATCH] drm/xe: MADVISE SQUASH for CI-v7_with_comments_addressed_gpuvm_changes Himal Prasad Ghimiray
@ 2025-08-13 13:26 ` Patchwork
  2025-08-13 13:27 ` ✓ CI.KUnit: success " Patchwork
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 5+ messages in thread
From: Patchwork @ 2025-08-13 13:26 UTC (permalink / raw)
  To: Himal Prasad Ghimiray; +Cc: intel-xe

== Series Details ==

Series: drm/xe: MADVISE SQUASH for CI-v7_with_comments_addressed_gpuvm_changes
URL   : https://patchwork.freedesktop.org/series/152884/
State : warning

== Summary ==

+ KERNEL=/kernel
+ git clone https://gitlab.freedesktop.org/drm/maintainer-tools mt
Cloning into 'mt'...
warning: redirecting to https://gitlab.freedesktop.org/drm/maintainer-tools.git/
+ git -C mt rev-list -n1 origin/master
6f9293a391ff3c575bc021f454be5d0a0c076f57
+ cd /kernel
+ git config --global --add safe.directory /kernel
+ git log -n1
commit 068ae4881ebb02634d51864d4c336a4a54db4bc8
Author: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Date:   Wed Aug 13 18:08:55 2025 +0530

    drm/xe: MADVISE SQUASH for CI-v7_with_comments_addressed_gpuvm_changes
    
    DONOT REVIEW
    Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
+ /mt/dim checkpatch ec8aa890d544a1acecf63c1a23e659bb7fc7abe6 drm-intel
068ae4881ebb drm/xe: MADVISE SQUASH for CI-v7_with_comments_addressed_gpuvm_changes
-:2124: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#2124: 
new file mode 100644

-:2465: CHECK:LINE_SPACING: Please use a blank line after function/struct/union/enum declarations
#2465: FILE: drivers/gpu/drm/xe/xe_vm_madvise.c:337:
+}
+/**

-:2721: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'range__' - possible side-effects?
#2721: FILE: include/drm/drm_gpusvm.h:452:
+#define drm_gpusvm_for_each_range_safe(range__, next__, notifier__, start__, end__)	\
+	for ((range__) = drm_gpusvm_range_find((notifier__), (start__), (end__)),	\
+	     (next__) = __drm_gpusvm_range_next(range__);				\
+	     (range__) && (drm_gpusvm_range_start(range__) < (end__));			\
+	     (range__) = (next__), (next__) = __drm_gpusvm_range_next(range__))

-:2721: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'next__' - possible side-effects?
#2721: FILE: include/drm/drm_gpusvm.h:452:
+#define drm_gpusvm_for_each_range_safe(range__, next__, notifier__, start__, end__)	\
+	for ((range__) = drm_gpusvm_range_find((notifier__), (start__), (end__)),	\
+	     (next__) = __drm_gpusvm_range_next(range__);				\
+	     (range__) && (drm_gpusvm_range_start(range__) < (end__));			\
+	     (range__) = (next__), (next__) = __drm_gpusvm_range_next(range__))

-:2721: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'end__' - possible side-effects?
#2721: FILE: include/drm/drm_gpusvm.h:452:
+#define drm_gpusvm_for_each_range_safe(range__, next__, notifier__, start__, end__)	\
+	for ((range__) = drm_gpusvm_range_find((notifier__), (start__), (end__)),	\
+	     (next__) = __drm_gpusvm_range_next(range__);				\
+	     (range__) && (drm_gpusvm_range_start(range__) < (end__));			\
+	     (range__) = (next__), (next__) = __drm_gpusvm_range_next(range__))

-:2754: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'notifier__' - possible side-effects?
#2754: FILE: include/drm/drm_gpusvm.h:485:
+#define drm_gpusvm_for_each_notifier(notifier__, gpusvm__, start__, end__)		\
+	for ((notifier__) = drm_gpusvm_notifier_find((gpusvm__), (start__), (end__));	\
+	     (notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__));		\
+	     (notifier__) = __drm_gpusvm_notifier_next(notifier__))

-:2754: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'end__' - possible side-effects?
#2754: FILE: include/drm/drm_gpusvm.h:485:
+#define drm_gpusvm_for_each_notifier(notifier__, gpusvm__, start__, end__)		\
+	for ((notifier__) = drm_gpusvm_notifier_find((gpusvm__), (start__), (end__));	\
+	     (notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__));		\
+	     (notifier__) = __drm_gpusvm_notifier_next(notifier__))

-:2770: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'notifier__' - possible side-effects?
#2770: FILE: include/drm/drm_gpusvm.h:501:
+#define drm_gpusvm_for_each_notifier_safe(notifier__, next__, gpusvm__, start__, end__)	\
+	for ((notifier__) = drm_gpusvm_notifier_find((gpusvm__), (start__), (end__)),	\
+	     (next__) = __drm_gpusvm_notifier_next(notifier__);				\
+	     (notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__));		\
+	     (notifier__) = (next__), (next__) = __drm_gpusvm_notifier_next(notifier__))

-:2770: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'next__' - possible side-effects?
#2770: FILE: include/drm/drm_gpusvm.h:501:
+#define drm_gpusvm_for_each_notifier_safe(notifier__, next__, gpusvm__, start__, end__)	\
+	for ((notifier__) = drm_gpusvm_notifier_find((gpusvm__), (start__), (end__)),	\
+	     (next__) = __drm_gpusvm_notifier_next(notifier__);				\
+	     (notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__));		\
+	     (notifier__) = (next__), (next__) = __drm_gpusvm_notifier_next(notifier__))

-:2770: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'end__' - possible side-effects?
#2770: FILE: include/drm/drm_gpusvm.h:501:
+#define drm_gpusvm_for_each_notifier_safe(notifier__, next__, gpusvm__, start__, end__)	\
+	for ((notifier__) = drm_gpusvm_notifier_find((gpusvm__), (start__), (end__)),	\
+	     (next__) = __drm_gpusvm_notifier_next(notifier__);				\
+	     (notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__));		\
+	     (notifier__) = (next__), (next__) = __drm_gpusvm_notifier_next(notifier__))

-:2881: WARNING:LONG_LINE: line length of 113 exceeds 100 columns
#2881: FILE: include/uapi/drm/xe_drm.h:124:
+#define DRM_IOCTL_XE_MADVISE			DRM_IOW(DRM_COMMAND_BASE + DRM_XE_MADVISE, struct drm_xe_madvise)

-:2882: WARNING:LONG_LINE: line length of 147 exceeds 100 columns
#2882: FILE: include/uapi/drm/xe_drm.h:125:
+#define DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS	DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_VM_QUERY_MEM_RANGE_ATTRS, struct drm_xe_vm_query_mem_range_attr)

total: 0 errors, 3 warnings, 9 checks, 2972 lines checked




* ✓ CI.KUnit: success for drm/xe: MADVISE SQUASH for CI-v7_with_comments_addressed_gpuvm_changes
  2025-08-13 12:38 [PATCH] drm/xe: MADVISE SQUASH for CI-v7_with_comments_addressed_gpuvm_changes Himal Prasad Ghimiray
  2025-08-13 13:26 ` ✗ CI.checkpatch: warning for " Patchwork
@ 2025-08-13 13:27 ` Patchwork
  2025-08-13 14:30 ` ✓ Xe.CI.BAT: " Patchwork
  2025-08-13 15:37 ` ✗ Xe.CI.Full: failure " Patchwork
  3 siblings, 0 replies; 5+ messages in thread
From: Patchwork @ 2025-08-13 13:27 UTC (permalink / raw)
  To: Himal Prasad Ghimiray; +Cc: intel-xe

== Series Details ==

Series: drm/xe: MADVISE SQUASH for CI-v7_with_comments_addressed_gpuvm_changes
URL   : https://patchwork.freedesktop.org/series/152884/
State : success

== Summary ==

+ trap cleanup EXIT
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/xe/.kunitconfig
[13:26:21] Configuring KUnit Kernel ...
Generating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[13:26:25] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[13:26:54] Starting KUnit Kernel (1/1)...
[13:26:54] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[13:26:55] ================== guc_buf (11 subtests) ===================
[13:26:55] [PASSED] test_smallest
[13:26:55] [PASSED] test_largest
[13:26:55] [PASSED] test_granular
[13:26:55] [PASSED] test_unique
[13:26:55] [PASSED] test_overlap
[13:26:55] [PASSED] test_reusable
[13:26:55] [PASSED] test_too_big
[13:26:55] [PASSED] test_flush
[13:26:55] [PASSED] test_lookup
[13:26:55] [PASSED] test_data
[13:26:55] [PASSED] test_class
[13:26:55] ===================== [PASSED] guc_buf =====================
[13:26:55] =================== guc_dbm (7 subtests) ===================
[13:26:55] [PASSED] test_empty
[13:26:55] [PASSED] test_default
[13:26:55] ======================== test_size  ========================
[13:26:55] [PASSED] 4
[13:26:55] [PASSED] 8
[13:26:55] [PASSED] 32
[13:26:55] [PASSED] 256
[13:26:55] ==================== [PASSED] test_size ====================
[13:26:55] ======================= test_reuse  ========================
[13:26:55] [PASSED] 4
[13:26:55] [PASSED] 8
[13:26:55] [PASSED] 32
[13:26:55] [PASSED] 256
[13:26:55] =================== [PASSED] test_reuse ====================
[13:26:55] =================== test_range_overlap  ====================
[13:26:55] [PASSED] 4
[13:26:55] [PASSED] 8
[13:26:55] [PASSED] 32
[13:26:55] [PASSED] 256
[13:26:55] =============== [PASSED] test_range_overlap ================
[13:26:55] =================== test_range_compact  ====================
[13:26:55] [PASSED] 4
[13:26:55] [PASSED] 8
[13:26:55] [PASSED] 32
[13:26:55] [PASSED] 256
[13:26:55] =============== [PASSED] test_range_compact ================
[13:26:55] ==================== test_range_spare  =====================
[13:26:55] [PASSED] 4
[13:26:55] [PASSED] 8
[13:26:55] [PASSED] 32
[13:26:55] [PASSED] 256
[13:26:55] ================ [PASSED] test_range_spare =================
[13:26:55] ===================== [PASSED] guc_dbm =====================
[13:26:55] =================== guc_idm (6 subtests) ===================
[13:26:55] [PASSED] bad_init
[13:26:55] [PASSED] no_init
[13:26:55] [PASSED] init_fini
[13:26:55] [PASSED] check_used
[13:26:55] [PASSED] check_quota
[13:26:55] [PASSED] check_all
[13:26:55] ===================== [PASSED] guc_idm =====================
[13:26:55] ================== no_relay (3 subtests) ===================
[13:26:55] [PASSED] xe_drops_guc2pf_if_not_ready
[13:26:55] [PASSED] xe_drops_guc2vf_if_not_ready
[13:26:55] [PASSED] xe_rejects_send_if_not_ready
[13:26:55] ==================== [PASSED] no_relay =====================
[13:26:55] ================== pf_relay (14 subtests) ==================
[13:26:55] [PASSED] pf_rejects_guc2pf_too_short
[13:26:55] [PASSED] pf_rejects_guc2pf_too_long
[13:26:55] [PASSED] pf_rejects_guc2pf_no_payload
[13:26:55] [PASSED] pf_fails_no_payload
[13:26:55] [PASSED] pf_fails_bad_origin
[13:26:55] [PASSED] pf_fails_bad_type
[13:26:55] [PASSED] pf_txn_reports_error
[13:26:55] [PASSED] pf_txn_sends_pf2guc
[13:26:55] [PASSED] pf_sends_pf2guc
[13:26:55] [SKIPPED] pf_loopback_nop
[13:26:55] [SKIPPED] pf_loopback_echo
[13:26:55] [SKIPPED] pf_loopback_fail
[13:26:55] [SKIPPED] pf_loopback_busy
[13:26:55] [SKIPPED] pf_loopback_retry
[13:26:55] ==================== [PASSED] pf_relay =====================
[13:26:55] ================== vf_relay (3 subtests) ===================
[13:26:55] [PASSED] vf_rejects_guc2vf_too_short
[13:26:55] [PASSED] vf_rejects_guc2vf_too_long
[13:26:55] [PASSED] vf_rejects_guc2vf_no_payload
[13:26:55] ==================== [PASSED] vf_relay =====================
[13:26:55] ===================== lmtt (1 subtest) =====================
[13:26:55] ======================== test_ops  =========================
[13:26:55] [PASSED] 2-level
[13:26:55] [PASSED] multi-level
[13:26:55] ==================== [PASSED] test_ops =====================
[13:26:55] ====================== [PASSED] lmtt =======================
[13:26:55] ================= pf_service (11 subtests) =================
[13:26:55] [PASSED] pf_negotiate_any
[13:26:55] [PASSED] pf_negotiate_base_match
[13:26:55] [PASSED] pf_negotiate_base_newer
[13:26:55] [PASSED] pf_negotiate_base_next
[13:26:55] [SKIPPED] pf_negotiate_base_older
[13:26:55] [PASSED] pf_negotiate_base_prev
[13:26:55] [PASSED] pf_negotiate_latest_match
[13:26:55] [PASSED] pf_negotiate_latest_newer
[13:26:55] [PASSED] pf_negotiate_latest_next
[13:26:55] [SKIPPED] pf_negotiate_latest_older
[13:26:55] [SKIPPED] pf_negotiate_latest_prev
[13:26:55] =================== [PASSED] pf_service ====================
[13:26:55] =================== xe_mocs (2 subtests) ===================
[13:26:55] ================ xe_live_mocs_kernel_kunit  ================
[13:26:55] =========== [SKIPPED] xe_live_mocs_kernel_kunit ============
[13:26:55] ================ xe_live_mocs_reset_kunit  =================
[13:26:55] ============ [SKIPPED] xe_live_mocs_reset_kunit ============
[13:26:55] ==================== [SKIPPED] xe_mocs =====================
[13:26:55] ================= xe_migrate (2 subtests) ==================
[13:26:55] ================= xe_migrate_sanity_kunit  =================
[13:26:55] ============ [SKIPPED] xe_migrate_sanity_kunit =============
[13:26:55] ================== xe_validate_ccs_kunit  ==================
[13:26:55] ============= [SKIPPED] xe_validate_ccs_kunit ==============
[13:26:55] =================== [SKIPPED] xe_migrate ===================
[13:26:55] ================== xe_dma_buf (1 subtest) ==================
[13:26:55] ==================== xe_dma_buf_kunit  =====================
[13:26:55] ================ [SKIPPED] xe_dma_buf_kunit ================
[13:26:55] =================== [SKIPPED] xe_dma_buf ===================
[13:26:55] ================= xe_bo_shrink (1 subtest) =================
[13:26:55] =================== xe_bo_shrink_kunit  ====================
[13:26:55] =============== [SKIPPED] xe_bo_shrink_kunit ===============
[13:26:55] ================== [SKIPPED] xe_bo_shrink ==================
[13:26:55] ==================== xe_bo (2 subtests) ====================
[13:26:55] ================== xe_ccs_migrate_kunit  ===================
[13:26:55] ============== [SKIPPED] xe_ccs_migrate_kunit ==============
[13:26:55] ==================== xe_bo_evict_kunit  ====================
[13:26:55] =============== [SKIPPED] xe_bo_evict_kunit ================
[13:26:55] ===================== [SKIPPED] xe_bo ======================
[13:26:55] ==================== args (11 subtests) ====================
[13:26:55] [PASSED] count_args_test
[13:26:55] [PASSED] call_args_example
[13:26:55] [PASSED] call_args_test
[13:26:55] [PASSED] drop_first_arg_example
[13:26:55] [PASSED] drop_first_arg_test
[13:26:55] [PASSED] first_arg_example
[13:26:55] [PASSED] first_arg_test
[13:26:55] [PASSED] last_arg_example
[13:26:55] [PASSED] last_arg_test
[13:26:55] [PASSED] pick_arg_example
[13:26:55] [PASSED] sep_comma_example
[13:26:55] ====================== [PASSED] args =======================
[13:26:55] =================== xe_pci (3 subtests) ====================
[13:26:55] ==================== check_graphics_ip  ====================
[13:26:55] [PASSED] 12.70 Xe_LPG
[13:26:55] [PASSED] 12.71 Xe_LPG
[13:26:55] [PASSED] 12.74 Xe_LPG+
[13:26:55] [PASSED] 20.01 Xe2_HPG
[13:26:55] [PASSED] 20.02 Xe2_HPG
[13:26:55] [PASSED] 20.04 Xe2_LPG
[13:26:55] [PASSED] 30.00 Xe3_LPG
[13:26:55] [PASSED] 30.01 Xe3_LPG
[13:26:55] [PASSED] 30.03 Xe3_LPG
[13:26:55] ================ [PASSED] check_graphics_ip ================
[13:26:55] ===================== check_media_ip  ======================
[13:26:55] [PASSED] 13.00 Xe_LPM+
[13:26:55] [PASSED] 13.01 Xe2_HPM
[13:26:55] [PASSED] 20.00 Xe2_LPM
[13:26:55] [PASSED] 30.00 Xe3_LPM
[13:26:55] [PASSED] 30.02 Xe3_LPM
[13:26:55] ================= [PASSED] check_media_ip ==================
[13:26:55] ================= check_platform_gt_count  =================
[13:26:55] [PASSED] 0x9A60 (TIGERLAKE)
[13:26:55] [PASSED] 0x9A68 (TIGERLAKE)
[13:26:55] [PASSED] 0x9A70 (TIGERLAKE)
[13:26:55] [PASSED] 0x9A40 (TIGERLAKE)
[13:26:55] [PASSED] 0x9A49 (TIGERLAKE)
[13:26:55] [PASSED] 0x9A59 (TIGERLAKE)
[13:26:55] [PASSED] 0x9A78 (TIGERLAKE)
[13:26:55] [PASSED] 0x9AC0 (TIGERLAKE)
[13:26:55] [PASSED] 0x9AC9 (TIGERLAKE)
[13:26:55] [PASSED] 0x9AD9 (TIGERLAKE)
[13:26:55] [PASSED] 0x9AF8 (TIGERLAKE)
[13:26:55] [PASSED] 0x4C80 (ROCKETLAKE)
[13:26:55] [PASSED] 0x4C8A (ROCKETLAKE)
[13:26:55] [PASSED] 0x4C8B (ROCKETLAKE)
[13:26:55] [PASSED] 0x4C8C (ROCKETLAKE)
[13:26:55] [PASSED] 0x4C90 (ROCKETLAKE)
[13:26:55] [PASSED] 0x4C9A (ROCKETLAKE)
[13:26:55] [PASSED] 0x4680 (ALDERLAKE_S)
[13:26:55] [PASSED] 0x4682 (ALDERLAKE_S)
[13:26:55] [PASSED] 0x4688 (ALDERLAKE_S)
[13:26:55] [PASSED] 0x468A (ALDERLAKE_S)
[13:26:55] [PASSED] 0x468B (ALDERLAKE_S)
[13:26:55] [PASSED] 0x4690 (ALDERLAKE_S)
[13:26:55] [PASSED] 0x4692 (ALDERLAKE_S)
[13:26:55] [PASSED] 0x4693 (ALDERLAKE_S)
[13:26:55] [PASSED] 0x46A0 (ALDERLAKE_P)
[13:26:55] [PASSED] 0x46A1 (ALDERLAKE_P)
[13:26:55] [PASSED] 0x46A2 (ALDERLAKE_P)
[13:26:55] [PASSED] 0x46A3 (ALDERLAKE_P)
[13:26:55] [PASSED] 0x46A6 (ALDERLAKE_P)
[13:26:55] [PASSED] 0x46A8 (ALDERLAKE_P)
[13:26:55] [PASSED] 0x46AA (ALDERLAKE_P)
[13:26:55] [PASSED] 0x462A (ALDERLAKE_P)
[13:26:55] [PASSED] 0x4626 (ALDERLAKE_P)
[13:26:55] [PASSED] 0x4628 (ALDERLAKE_P)
[13:26:55] [PASSED] 0x46B0 (ALDERLAKE_P)
[13:26:55] [PASSED] 0x46B1 (ALDERLAKE_P)
[13:26:55] [PASSED] 0x46B2 (ALDERLAKE_P)
[13:26:55] [PASSED] 0x46B3 (ALDERLAKE_P)
[13:26:55] [PASSED] 0x46C0 (ALDERLAKE_P)
[13:26:55] [PASSED] 0x46C1 (ALDERLAKE_P)
[13:26:55] [PASSED] 0x46C2 (ALDERLAKE_P)
[13:26:55] [PASSED] 0x46C3 (ALDERLAKE_P)
[13:26:55] [PASSED] 0x46D0 (ALDERLAKE_N)
[13:26:55] [PASSED] 0x46D1 (ALDERLAKE_N)
[13:26:55] [PASSED] 0x46D2 (ALDERLAKE_N)
[13:26:55] [PASSED] 0x46D3 (ALDERLAKE_N)
[13:26:55] [PASSED] 0x46D4 (ALDERLAKE_N)
[13:26:55] [PASSED] 0xA721 (ALDERLAKE_P)
[13:26:55] [PASSED] 0xA7A1 (ALDERLAKE_P)
[13:26:55] [PASSED] 0xA7A9 (ALDERLAKE_P)
[13:26:55] [PASSED] 0xA7AC (ALDERLAKE_P)
[13:26:55] [PASSED] 0xA7AD (ALDERLAKE_P)
[13:26:55] [PASSED] 0xA720 (ALDERLAKE_P)
[13:26:55] [PASSED] 0xA7A0 (ALDERLAKE_P)
[13:26:55] [PASSED] 0xA7A8 (ALDERLAKE_P)
[13:26:55] [PASSED] 0xA7AA (ALDERLAKE_P)
[13:26:55] [PASSED] 0xA7AB (ALDERLAKE_P)
[13:26:55] [PASSED] 0xA780 (ALDERLAKE_S)
[13:26:55] [PASSED] 0xA781 (ALDERLAKE_S)
[13:26:55] [PASSED] 0xA782 (ALDERLAKE_S)
[13:26:55] [PASSED] 0xA783 (ALDERLAKE_S)
[13:26:55] [PASSED] 0xA788 (ALDERLAKE_S)
[13:26:55] [PASSED] 0xA789 (ALDERLAKE_S)
[13:26:55] [PASSED] 0xA78A (ALDERLAKE_S)
[13:26:55] [PASSED] 0xA78B (ALDERLAKE_S)
[13:26:55] [PASSED] 0x4905 (DG1)
[13:26:55] [PASSED] 0x4906 (DG1)
[13:26:55] [PASSED] 0x4907 (DG1)
[13:26:55] [PASSED] 0x4908 (DG1)
[13:26:55] [PASSED] 0x4909 (DG1)
[13:26:55] [PASSED] 0x56C0 (DG2)
[13:26:55] [PASSED] 0x56C2 (DG2)
[13:26:55] [PASSED] 0x56C1 (DG2)
[13:26:55] [PASSED] 0x7D51 (METEORLAKE)
[13:26:55] [PASSED] 0x7DD1 (METEORLAKE)
[13:26:55] [PASSED] 0x7D41 (METEORLAKE)
[13:26:55] [PASSED] 0x7D67 (METEORLAKE)
[13:26:55] [PASSED] 0xB640 (METEORLAKE)
[13:26:55] [PASSED] 0x56A0 (DG2)
[13:26:55] [PASSED] 0x56A1 (DG2)
[13:26:55] [PASSED] 0x56A2 (DG2)
[13:26:55] [PASSED] 0x56BE (DG2)
[13:26:55] [PASSED] 0x56BF (DG2)
[13:26:55] [PASSED] 0x5690 (DG2)
[13:26:55] [PASSED] 0x5691 (DG2)
[13:26:55] [PASSED] 0x5692 (DG2)
[13:26:55] [PASSED] 0x56A5 (DG2)
[13:26:55] [PASSED] 0x56A6 (DG2)
[13:26:55] [PASSED] 0x56B0 (DG2)
[13:26:55] [PASSED] 0x56B1 (DG2)
[13:26:55] [PASSED] 0x56BA (DG2)
[13:26:55] [PASSED] 0x56BB (DG2)
[13:26:55] [PASSED] 0x56BC (DG2)
[13:26:55] [PASSED] 0x56BD (DG2)
[13:26:55] [PASSED] 0x5693 (DG2)
[13:26:55] [PASSED] 0x5694 (DG2)
[13:26:55] [PASSED] 0x5695 (DG2)
[13:26:55] [PASSED] 0x56A3 (DG2)
[13:26:55] [PASSED] 0x56A4 (DG2)
[13:26:55] [PASSED] 0x56B2 (DG2)
[13:26:55] [PASSED] 0x56B3 (DG2)
[13:26:55] [PASSED] 0x5696 (DG2)
[13:26:55] [PASSED] 0x5697 (DG2)
[13:26:55] [PASSED] 0xB69 (PVC)
[13:26:55] [PASSED] 0xB6E (PVC)
[13:26:55] [PASSED] 0xBD4 (PVC)
[13:26:55] [PASSED] 0xBD5 (PVC)
[13:26:55] [PASSED] 0xBD6 (PVC)
[13:26:55] [PASSED] 0xBD7 (PVC)
[13:26:55] [PASSED] 0xBD8 (PVC)
[13:26:55] [PASSED] 0xBD9 (PVC)
[13:26:55] [PASSED] 0xBDA (PVC)
[13:26:55] [PASSED] 0xBDB (PVC)
[13:26:55] [PASSED] 0xBE0 (PVC)
[13:26:55] [PASSED] 0xBE1 (PVC)
[13:26:55] [PASSED] 0xBE5 (PVC)
[13:26:55] [PASSED] 0x7D40 (METEORLAKE)
[13:26:55] [PASSED] 0x7D45 (METEORLAKE)
[13:26:55] [PASSED] 0x7D55 (METEORLAKE)
[13:26:55] [PASSED] 0x7D60 (METEORLAKE)
[13:26:55] [PASSED] 0x7DD5 (METEORLAKE)
[13:26:55] [PASSED] 0x6420 (LUNARLAKE)
[13:26:55] [PASSED] 0x64A0 (LUNARLAKE)
[13:26:55] [PASSED] 0x64B0 (LUNARLAKE)
[13:26:55] [PASSED] 0xE202 (BATTLEMAGE)
[13:26:55] [PASSED] 0xE209 (BATTLEMAGE)
[13:26:55] [PASSED] 0xE20B (BATTLEMAGE)
[13:26:55] [PASSED] 0xE20C (BATTLEMAGE)
[13:26:55] [PASSED] 0xE20D (BATTLEMAGE)
[13:26:55] [PASSED] 0xE210 (BATTLEMAGE)
[13:26:55] [PASSED] 0xE211 (BATTLEMAGE)
[13:26:55] [PASSED] 0xE212 (BATTLEMAGE)
[13:26:55] [PASSED] 0xE216 (BATTLEMAGE)
[13:26:55] [PASSED] 0xE220 (BATTLEMAGE)
[13:26:55] [PASSED] 0xE221 (BATTLEMAGE)
[13:26:55] [PASSED] 0xE222 (BATTLEMAGE)
[13:26:55] [PASSED] 0xE223 (BATTLEMAGE)
[13:26:55] [PASSED] 0xB080 (PANTHERLAKE)
[13:26:55] [PASSED] 0xB081 (PANTHERLAKE)
[13:26:55] [PASSED] 0xB082 (PANTHERLAKE)
[13:26:55] [PASSED] 0xB083 (PANTHERLAKE)
[13:26:55] [PASSED] 0xB084 (PANTHERLAKE)
[13:26:55] [PASSED] 0xB085 (PANTHERLAKE)
[13:26:55] [PASSED] 0xB086 (PANTHERLAKE)
[13:26:55] [PASSED] 0xB087 (PANTHERLAKE)
[13:26:55] [PASSED] 0xB08F (PANTHERLAKE)
[13:26:55] [PASSED] 0xB090 (PANTHERLAKE)
[13:26:55] [PASSED] 0xB0A0 (PANTHERLAKE)
[13:26:55] [PASSED] 0xB0B0 (PANTHERLAKE)
[13:26:55] [PASSED] 0xFD80 (PANTHERLAKE)
[13:26:55] [PASSED] 0xFD81 (PANTHERLAKE)
[13:26:55] ============= [PASSED] check_platform_gt_count =============
[13:26:55] ===================== [PASSED] xe_pci ======================
[13:26:55] =================== xe_rtp (2 subtests) ====================
[13:26:55] =============== xe_rtp_process_to_sr_tests  ================
[13:26:55] [PASSED] coalesce-same-reg
[13:26:55] [PASSED] no-match-no-add
[13:26:55] [PASSED] match-or
[13:26:55] [PASSED] match-or-xfail
[13:26:55] [PASSED] no-match-no-add-multiple-rules
[13:26:55] [PASSED] two-regs-two-entries
[13:26:55] [PASSED] clr-one-set-other
[13:26:55] [PASSED] set-field
[13:26:55] [PASSED] conflict-duplicate
[13:26:55] [PASSED] conflict-not-disjoint
[13:26:55] [PASSED] conflict-reg-type
[13:26:55] =========== [PASSED] xe_rtp_process_to_sr_tests ============
[13:26:55] ================== xe_rtp_process_tests  ===================
[13:26:55] [PASSED] active1
[13:26:55] [PASSED] active2
[13:26:55] [PASSED] active-inactive
[13:26:55] [PASSED] inactive-active
[13:26:55] [PASSED] inactive-1st_or_active-inactive
[13:26:55] [PASSED] inactive-2nd_or_active-inactive
[13:26:55] [PASSED] inactive-last_or_active-inactive
[13:26:55] [PASSED] inactive-no_or_active-inactive
[13:26:55] ============== [PASSED] xe_rtp_process_tests ===============
[13:26:55] ===================== [PASSED] xe_rtp ======================
[13:26:55] ==================== xe_wa (1 subtest) =====================
[13:26:55] ======================== xe_wa_gt  =========================
[13:26:55] [PASSED] TIGERLAKE (B0)
[13:26:55] [PASSED] DG1 (A0)
[13:26:55] [PASSED] DG1 (B0)
[13:26:55] [PASSED] ALDERLAKE_S (A0)
[13:26:55] [PASSED] ALDERLAKE_S (B0)
[13:26:55] [PASSED] ALDERLAKE_S (C0)
[13:26:55] [PASSED] ALDERLAKE_S (D0)
[13:26:55] [PASSED] ALDERLAKE_P (A0)
[13:26:55] [PASSED] ALDERLAKE_P (B0)
[13:26:55] [PASSED] ALDERLAKE_P (C0)
[13:26:55] [PASSED] ALDERLAKE_S_RPLS (D0)
[13:26:55] [PASSED] ALDERLAKE_P_RPLU (E0)
[13:26:55] [PASSED] DG2_G10 (C0)
[13:26:55] [PASSED] DG2_G11 (B1)
[13:26:55] [PASSED] DG2_G12 (A1)
[13:26:55] [PASSED] METEORLAKE (g:A0, m:A0)
[13:26:55] [PASSED] METEORLAKE (g:A0, m:A0)
[13:26:55] [PASSED] METEORLAKE (g:A0, m:A0)
[13:26:55] [PASSED] LUNARLAKE (g:A0, m:A0)
[13:26:55] [PASSED] LUNARLAKE (g:B0, m:A0)
stty: 'standard input': Inappropriate ioctl for device
[13:26:55] [PASSED] BATTLEMAGE (g:A0, m:A1)
[13:26:55] ==================== [PASSED] xe_wa_gt =====================
[13:26:55] ====================== [PASSED] xe_wa ======================
[13:26:55] ============================================================
[13:26:55] Testing complete. Ran 297 tests: passed: 281, skipped: 16
[13:26:55] Elapsed time: 33.627s total, 4.312s configuring, 28.948s building, 0.326s running

+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/tests/.kunitconfig
[13:26:55] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[13:26:57] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[13:27:19] Starting KUnit Kernel (1/1)...
[13:27:19] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[13:27:19] == drm_test_atomic_get_connector_for_encoder (1 subtest) ===
[13:27:19] [PASSED] drm_test_drm_atomic_get_connector_for_encoder
[13:27:19] ==== [PASSED] drm_test_atomic_get_connector_for_encoder ====
[13:27:19] =========== drm_validate_clone_mode (2 subtests) ===========
[13:27:19] ============== drm_test_check_in_clone_mode  ===============
[13:27:19] [PASSED] in_clone_mode
[13:27:19] [PASSED] not_in_clone_mode
[13:27:19] ========== [PASSED] drm_test_check_in_clone_mode ===========
[13:27:19] =============== drm_test_check_valid_clones  ===============
[13:27:19] [PASSED] not_in_clone_mode
[13:27:19] [PASSED] valid_clone
[13:27:19] [PASSED] invalid_clone
[13:27:19] =========== [PASSED] drm_test_check_valid_clones ===========
[13:27:19] ============= [PASSED] drm_validate_clone_mode =============
[13:27:19] ============= drm_validate_modeset (1 subtest) =============
[13:27:19] [PASSED] drm_test_check_connector_changed_modeset
[13:27:19] ============== [PASSED] drm_validate_modeset ===============
[13:27:19] ====== drm_test_bridge_get_current_state (2 subtests) ======
[13:27:19] [PASSED] drm_test_drm_bridge_get_current_state_atomic
[13:27:19] [PASSED] drm_test_drm_bridge_get_current_state_legacy
[13:27:19] ======== [PASSED] drm_test_bridge_get_current_state ========
[13:27:19] ====== drm_test_bridge_helper_reset_crtc (3 subtests) ======
[13:27:19] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic
[13:27:19] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic_disabled
[13:27:19] [PASSED] drm_test_drm_bridge_helper_reset_crtc_legacy
[13:27:19] ======== [PASSED] drm_test_bridge_helper_reset_crtc ========
[13:27:19] ============== drm_bridge_alloc (2 subtests) ===============
[13:27:19] [PASSED] drm_test_drm_bridge_alloc_basic
[13:27:19] [PASSED] drm_test_drm_bridge_alloc_get_put
[13:27:19] ================ [PASSED] drm_bridge_alloc =================
[13:27:19] ================== drm_buddy (7 subtests) ==================
[13:27:19] [PASSED] drm_test_buddy_alloc_limit
[13:27:19] [PASSED] drm_test_buddy_alloc_optimistic
[13:27:19] [PASSED] drm_test_buddy_alloc_pessimistic
[13:27:19] [PASSED] drm_test_buddy_alloc_pathological
[13:27:19] [PASSED] drm_test_buddy_alloc_contiguous
[13:27:19] [PASSED] drm_test_buddy_alloc_clear
[13:27:19] [PASSED] drm_test_buddy_alloc_range_bias
[13:27:19] ==================== [PASSED] drm_buddy ====================
[13:27:19] ============= drm_cmdline_parser (40 subtests) =============
[13:27:19] [PASSED] drm_test_cmdline_force_d_only
[13:27:19] [PASSED] drm_test_cmdline_force_D_only_dvi
[13:27:19] [PASSED] drm_test_cmdline_force_D_only_hdmi
[13:27:19] [PASSED] drm_test_cmdline_force_D_only_not_digital
[13:27:19] [PASSED] drm_test_cmdline_force_e_only
[13:27:19] [PASSED] drm_test_cmdline_res
[13:27:19] [PASSED] drm_test_cmdline_res_vesa
[13:27:19] [PASSED] drm_test_cmdline_res_vesa_rblank
[13:27:19] [PASSED] drm_test_cmdline_res_rblank
[13:27:19] [PASSED] drm_test_cmdline_res_bpp
[13:27:19] [PASSED] drm_test_cmdline_res_refresh
[13:27:19] [PASSED] drm_test_cmdline_res_bpp_refresh
[13:27:19] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced
[13:27:19] [PASSED] drm_test_cmdline_res_bpp_refresh_margins
[13:27:19] [PASSED] drm_test_cmdline_res_bpp_refresh_force_off
[13:27:19] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on
[13:27:19] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_analog
[13:27:19] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_digital
[13:27:19] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced_margins_force_on
[13:27:19] [PASSED] drm_test_cmdline_res_margins_force_on
[13:27:19] [PASSED] drm_test_cmdline_res_vesa_margins
[13:27:19] [PASSED] drm_test_cmdline_name
[13:27:19] [PASSED] drm_test_cmdline_name_bpp
[13:27:19] [PASSED] drm_test_cmdline_name_option
[13:27:19] [PASSED] drm_test_cmdline_name_bpp_option
[13:27:19] [PASSED] drm_test_cmdline_rotate_0
[13:27:19] [PASSED] drm_test_cmdline_rotate_90
[13:27:19] [PASSED] drm_test_cmdline_rotate_180
[13:27:19] [PASSED] drm_test_cmdline_rotate_270
[13:27:19] [PASSED] drm_test_cmdline_hmirror
[13:27:19] [PASSED] drm_test_cmdline_vmirror
[13:27:19] [PASSED] drm_test_cmdline_margin_options
[13:27:19] [PASSED] drm_test_cmdline_multiple_options
[13:27:19] [PASSED] drm_test_cmdline_bpp_extra_and_option
[13:27:19] [PASSED] drm_test_cmdline_extra_and_option
[13:27:19] [PASSED] drm_test_cmdline_freestanding_options
[13:27:19] [PASSED] drm_test_cmdline_freestanding_force_e_and_options
[13:27:19] [PASSED] drm_test_cmdline_panel_orientation
[13:27:19] ================ drm_test_cmdline_invalid  =================
[13:27:19] [PASSED] margin_only
[13:27:19] [PASSED] interlace_only
[13:27:19] [PASSED] res_missing_x
[13:27:19] [PASSED] res_missing_y
[13:27:19] [PASSED] res_bad_y
[13:27:19] [PASSED] res_missing_y_bpp
[13:27:19] [PASSED] res_bad_bpp
[13:27:19] [PASSED] res_bad_refresh
[13:27:19] [PASSED] res_bpp_refresh_force_on_off
[13:27:19] [PASSED] res_invalid_mode
[13:27:19] [PASSED] res_bpp_wrong_place_mode
[13:27:19] [PASSED] name_bpp_refresh
[13:27:19] [PASSED] name_refresh
[13:27:19] [PASSED] name_refresh_wrong_mode
[13:27:19] [PASSED] name_refresh_invalid_mode
[13:27:19] [PASSED] rotate_multiple
[13:27:19] [PASSED] rotate_invalid_val
[13:27:19] [PASSED] rotate_truncated
[13:27:19] [PASSED] invalid_option
[13:27:19] [PASSED] invalid_tv_option
[13:27:19] [PASSED] truncated_tv_option
[13:27:19] ============ [PASSED] drm_test_cmdline_invalid =============
[13:27:19] =============== drm_test_cmdline_tv_options  ===============
[13:27:19] [PASSED] NTSC
[13:27:19] [PASSED] NTSC_443
[13:27:19] [PASSED] NTSC_J
[13:27:19] [PASSED] PAL
[13:27:19] [PASSED] PAL_M
[13:27:19] [PASSED] PAL_N
[13:27:19] [PASSED] SECAM
[13:27:19] [PASSED] MONO_525
[13:27:19] [PASSED] MONO_625
[13:27:19] =========== [PASSED] drm_test_cmdline_tv_options ===========
[13:27:19] =============== [PASSED] drm_cmdline_parser ================
[13:27:19] ========== drmm_connector_hdmi_init (20 subtests) ==========
[13:27:19] [PASSED] drm_test_connector_hdmi_init_valid
[13:27:19] [PASSED] drm_test_connector_hdmi_init_bpc_8
[13:27:19] [PASSED] drm_test_connector_hdmi_init_bpc_10
[13:27:19] [PASSED] drm_test_connector_hdmi_init_bpc_12
[13:27:19] [PASSED] drm_test_connector_hdmi_init_bpc_invalid
[13:27:19] [PASSED] drm_test_connector_hdmi_init_bpc_null
[13:27:19] [PASSED] drm_test_connector_hdmi_init_formats_empty
[13:27:19] [PASSED] drm_test_connector_hdmi_init_formats_no_rgb
[13:27:19] === drm_test_connector_hdmi_init_formats_yuv420_allowed  ===
[13:27:19] [PASSED] supported_formats=0x9 yuv420_allowed=1
[13:27:19] [PASSED] supported_formats=0x9 yuv420_allowed=0
[13:27:19] [PASSED] supported_formats=0x3 yuv420_allowed=1
[13:27:19] [PASSED] supported_formats=0x3 yuv420_allowed=0
[13:27:19] === [PASSED] drm_test_connector_hdmi_init_formats_yuv420_allowed ===
[13:27:19] [PASSED] drm_test_connector_hdmi_init_null_ddc
[13:27:19] [PASSED] drm_test_connector_hdmi_init_null_product
[13:27:19] [PASSED] drm_test_connector_hdmi_init_null_vendor
[13:27:19] [PASSED] drm_test_connector_hdmi_init_product_length_exact
[13:27:19] [PASSED] drm_test_connector_hdmi_init_product_length_too_long
[13:27:19] [PASSED] drm_test_connector_hdmi_init_product_valid
[13:27:19] [PASSED] drm_test_connector_hdmi_init_vendor_length_exact
[13:27:19] [PASSED] drm_test_connector_hdmi_init_vendor_length_too_long
[13:27:19] [PASSED] drm_test_connector_hdmi_init_vendor_valid
[13:27:19] ========= drm_test_connector_hdmi_init_type_valid  =========
[13:27:19] [PASSED] HDMI-A
[13:27:19] [PASSED] HDMI-B
[13:27:19] ===== [PASSED] drm_test_connector_hdmi_init_type_valid =====
[13:27:19] ======== drm_test_connector_hdmi_init_type_invalid  ========
[13:27:19] [PASSED] Unknown
[13:27:19] [PASSED] VGA
[13:27:19] [PASSED] DVI-I
[13:27:19] [PASSED] DVI-D
[13:27:19] [PASSED] DVI-A
[13:27:19] [PASSED] Composite
[13:27:19] [PASSED] SVIDEO
[13:27:19] [PASSED] LVDS
[13:27:19] [PASSED] Component
[13:27:19] [PASSED] DIN
[13:27:19] [PASSED] DP
[13:27:19] [PASSED] TV
[13:27:19] [PASSED] eDP
[13:27:19] [PASSED] Virtual
[13:27:19] [PASSED] DSI
[13:27:19] [PASSED] DPI
[13:27:19] [PASSED] Writeback
[13:27:19] [PASSED] SPI
[13:27:19] [PASSED] USB
[13:27:19] ==== [PASSED] drm_test_connector_hdmi_init_type_invalid ====
[13:27:19] ============ [PASSED] drmm_connector_hdmi_init =============
[13:27:19] ============= drmm_connector_init (3 subtests) =============
[13:27:19] [PASSED] drm_test_drmm_connector_init
[13:27:19] [PASSED] drm_test_drmm_connector_init_null_ddc
[13:27:19] ========= drm_test_drmm_connector_init_type_valid  =========
[13:27:19] [PASSED] Unknown
[13:27:19] [PASSED] VGA
[13:27:19] [PASSED] DVI-I
[13:27:19] [PASSED] DVI-D
[13:27:19] [PASSED] DVI-A
[13:27:19] [PASSED] Composite
[13:27:19] [PASSED] SVIDEO
[13:27:19] [PASSED] LVDS
[13:27:19] [PASSED] Component
[13:27:19] [PASSED] DIN
[13:27:19] [PASSED] DP
[13:27:19] [PASSED] HDMI-A
[13:27:19] [PASSED] HDMI-B
[13:27:19] [PASSED] TV
[13:27:19] [PASSED] eDP
[13:27:19] [PASSED] Virtual
[13:27:19] [PASSED] DSI
[13:27:19] [PASSED] DPI
[13:27:19] [PASSED] Writeback
[13:27:19] [PASSED] SPI
[13:27:19] [PASSED] USB
[13:27:19] ===== [PASSED] drm_test_drmm_connector_init_type_valid =====
[13:27:19] =============== [PASSED] drmm_connector_init ===============
[13:27:19] ========= drm_connector_dynamic_init (6 subtests) ==========
[13:27:19] [PASSED] drm_test_drm_connector_dynamic_init
[13:27:19] [PASSED] drm_test_drm_connector_dynamic_init_null_ddc
[13:27:19] [PASSED] drm_test_drm_connector_dynamic_init_not_added
[13:27:19] [PASSED] drm_test_drm_connector_dynamic_init_properties
[13:27:19] ===== drm_test_drm_connector_dynamic_init_type_valid  ======
[13:27:19] [PASSED] Unknown
[13:27:19] [PASSED] VGA
[13:27:19] [PASSED] DVI-I
[13:27:19] [PASSED] DVI-D
[13:27:19] [PASSED] DVI-A
[13:27:19] [PASSED] Composite
[13:27:19] [PASSED] SVIDEO
[13:27:19] [PASSED] LVDS
[13:27:19] [PASSED] Component
[13:27:19] [PASSED] DIN
[13:27:19] [PASSED] DP
[13:27:19] [PASSED] HDMI-A
[13:27:19] [PASSED] HDMI-B
[13:27:19] [PASSED] TV
[13:27:19] [PASSED] eDP
[13:27:19] [PASSED] Virtual
[13:27:19] [PASSED] DSI
[13:27:19] [PASSED] DPI
[13:27:19] [PASSED] Writeback
[13:27:19] [PASSED] SPI
[13:27:19] [PASSED] USB
[13:27:19] = [PASSED] drm_test_drm_connector_dynamic_init_type_valid ==
[13:27:19] ======== drm_test_drm_connector_dynamic_init_name  =========
[13:27:19] [PASSED] Unknown
[13:27:19] [PASSED] VGA
[13:27:19] [PASSED] DVI-I
[13:27:19] [PASSED] DVI-D
[13:27:19] [PASSED] DVI-A
[13:27:19] [PASSED] Composite
[13:27:19] [PASSED] SVIDEO
[13:27:19] [PASSED] LVDS
[13:27:19] [PASSED] Component
[13:27:19] [PASSED] DIN
[13:27:19] [PASSED] DP
[13:27:19] [PASSED] HDMI-A
[13:27:19] [PASSED] HDMI-B
[13:27:19] [PASSED] TV
[13:27:19] [PASSED] eDP
[13:27:19] [PASSED] Virtual
[13:27:19] [PASSED] DSI
[13:27:19] [PASSED] DPI
[13:27:19] [PASSED] Writeback
[13:27:19] [PASSED] SPI
[13:27:19] [PASSED] USB
[13:27:19] ==== [PASSED] drm_test_drm_connector_dynamic_init_name =====
[13:27:19] =========== [PASSED] drm_connector_dynamic_init ============
[13:27:19] ==== drm_connector_dynamic_register_early (4 subtests) =====
[13:27:19] [PASSED] drm_test_drm_connector_dynamic_register_early_on_list
[13:27:19] [PASSED] drm_test_drm_connector_dynamic_register_early_defer
[13:27:19] [PASSED] drm_test_drm_connector_dynamic_register_early_no_init
[13:27:19] [PASSED] drm_test_drm_connector_dynamic_register_early_no_mode_object
[13:27:19] ====== [PASSED] drm_connector_dynamic_register_early =======
[13:27:19] ======= drm_connector_dynamic_register (7 subtests) ========
[13:27:19] [PASSED] drm_test_drm_connector_dynamic_register_on_list
[13:27:19] [PASSED] drm_test_drm_connector_dynamic_register_no_defer
[13:27:19] [PASSED] drm_test_drm_connector_dynamic_register_no_init
[13:27:19] [PASSED] drm_test_drm_connector_dynamic_register_mode_object
[13:27:19] [PASSED] drm_test_drm_connector_dynamic_register_sysfs
[13:27:19] [PASSED] drm_test_drm_connector_dynamic_register_sysfs_name
[13:27:19] [PASSED] drm_test_drm_connector_dynamic_register_debugfs
[13:27:19] ========= [PASSED] drm_connector_dynamic_register ==========
[13:27:19] = drm_connector_attach_broadcast_rgb_property (2 subtests) =
[13:27:19] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property
[13:27:19] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property_hdmi_connector
[13:27:19] === [PASSED] drm_connector_attach_broadcast_rgb_property ===
[13:27:19] ========== drm_get_tv_mode_from_name (2 subtests) ==========
[13:27:19] ========== drm_test_get_tv_mode_from_name_valid  ===========
[13:27:19] [PASSED] NTSC
[13:27:19] [PASSED] NTSC-443
[13:27:19] [PASSED] NTSC-J
[13:27:19] [PASSED] PAL
[13:27:19] [PASSED] PAL-M
[13:27:19] [PASSED] PAL-N
[13:27:19] [PASSED] SECAM
[13:27:19] [PASSED] Mono
[13:27:19] ====== [PASSED] drm_test_get_tv_mode_from_name_valid =======
[13:27:19] [PASSED] drm_test_get_tv_mode_from_name_truncated
[13:27:19] ============ [PASSED] drm_get_tv_mode_from_name ============
[13:27:19] = drm_test_connector_hdmi_compute_mode_clock (12 subtests) =
[13:27:19] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb
[13:27:19] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc
[13:27:19] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc_vic_1
[13:27:19] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc
[13:27:19] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc_vic_1
[13:27:19] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_double
[13:27:19] = drm_test_connector_hdmi_compute_mode_clock_yuv420_valid  =
[13:27:19] [PASSED] VIC 96
[13:27:19] [PASSED] VIC 97
[13:27:19] [PASSED] VIC 101
[13:27:19] [PASSED] VIC 102
[13:27:19] [PASSED] VIC 106
[13:27:19] [PASSED] VIC 107
[13:27:19] === [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_valid ===
[13:27:19] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_10_bpc
[13:27:19] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_12_bpc
[13:27:19] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_8_bpc
[13:27:19] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_10_bpc
[13:27:19] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_12_bpc
[13:27:19] === [PASSED] drm_test_connector_hdmi_compute_mode_clock ====
[13:27:19] == drm_hdmi_connector_get_broadcast_rgb_name (2 subtests) ==
[13:27:19] === drm_test_drm_hdmi_connector_get_broadcast_rgb_name  ====
[13:27:19] [PASSED] Automatic
[13:27:19] [PASSED] Full
[13:27:19] [PASSED] Limited 16:235
[13:27:19] === [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name ===
[13:27:19] [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name_invalid
[13:27:19] ==== [PASSED] drm_hdmi_connector_get_broadcast_rgb_name ====
[13:27:19] == drm_hdmi_connector_get_output_format_name (2 subtests) ==
[13:27:19] === drm_test_drm_hdmi_connector_get_output_format_name  ====
[13:27:19] [PASSED] RGB
[13:27:19] [PASSED] YUV 4:2:0
[13:27:19] [PASSED] YUV 4:2:2
[13:27:19] [PASSED] YUV 4:4:4
[13:27:19] === [PASSED] drm_test_drm_hdmi_connector_get_output_format_name ===
[13:27:19] [PASSED] drm_test_drm_hdmi_connector_get_output_format_name_invalid
[13:27:19] ==== [PASSED] drm_hdmi_connector_get_output_format_name ====
[13:27:19] ============= drm_damage_helper (21 subtests) ==============
[13:27:19] [PASSED] drm_test_damage_iter_no_damage
[13:27:19] [PASSED] drm_test_damage_iter_no_damage_fractional_src
[13:27:19] [PASSED] drm_test_damage_iter_no_damage_src_moved
[13:27:19] [PASSED] drm_test_damage_iter_no_damage_fractional_src_moved
[13:27:19] [PASSED] drm_test_damage_iter_no_damage_not_visible
[13:27:19] [PASSED] drm_test_damage_iter_no_damage_no_crtc
[13:27:19] [PASSED] drm_test_damage_iter_no_damage_no_fb
[13:27:19] [PASSED] drm_test_damage_iter_simple_damage
[13:27:19] [PASSED] drm_test_damage_iter_single_damage
[13:27:19] [PASSED] drm_test_damage_iter_single_damage_intersect_src
[13:27:19] [PASSED] drm_test_damage_iter_single_damage_outside_src
[13:27:19] [PASSED] drm_test_damage_iter_single_damage_fractional_src
[13:27:19] [PASSED] drm_test_damage_iter_single_damage_intersect_fractional_src
[13:27:19] [PASSED] drm_test_damage_iter_single_damage_outside_fractional_src
[13:27:19] [PASSED] drm_test_damage_iter_single_damage_src_moved
[13:27:19] [PASSED] drm_test_damage_iter_single_damage_fractional_src_moved
[13:27:19] [PASSED] drm_test_damage_iter_damage
[13:27:19] [PASSED] drm_test_damage_iter_damage_one_intersect
[13:27:19] [PASSED] drm_test_damage_iter_damage_one_outside
[13:27:19] [PASSED] drm_test_damage_iter_damage_src_moved
[13:27:19] [PASSED] drm_test_damage_iter_damage_not_visible
[13:27:19] ================ [PASSED] drm_damage_helper ================
[13:27:19] ============== drm_dp_mst_helper (3 subtests) ==============
[13:27:19] ============== drm_test_dp_mst_calc_pbn_mode  ==============
[13:27:19] [PASSED] Clock 154000 BPP 30 DSC disabled
[13:27:19] [PASSED] Clock 234000 BPP 30 DSC disabled
[13:27:19] [PASSED] Clock 297000 BPP 24 DSC disabled
[13:27:19] [PASSED] Clock 332880 BPP 24 DSC enabled
[13:27:19] [PASSED] Clock 324540 BPP 24 DSC enabled
[13:27:19] ========== [PASSED] drm_test_dp_mst_calc_pbn_mode ==========
[13:27:19] ============== drm_test_dp_mst_calc_pbn_div  ===============
[13:27:19] [PASSED] Link rate 2000000 lane count 4
[13:27:19] [PASSED] Link rate 2000000 lane count 2
[13:27:19] [PASSED] Link rate 2000000 lane count 1
[13:27:19] [PASSED] Link rate 1350000 lane count 4
[13:27:19] [PASSED] Link rate 1350000 lane count 2
[13:27:19] [PASSED] Link rate 1350000 lane count 1
[13:27:19] [PASSED] Link rate 1000000 lane count 4
[13:27:19] [PASSED] Link rate 1000000 lane count 2
[13:27:19] [PASSED] Link rate 1000000 lane count 1
[13:27:19] [PASSED] Link rate 810000 lane count 4
[13:27:19] [PASSED] Link rate 810000 lane count 2
[13:27:19] [PASSED] Link rate 810000 lane count 1
[13:27:19] [PASSED] Link rate 540000 lane count 4
[13:27:19] [PASSED] Link rate 540000 lane count 2
[13:27:19] [PASSED] Link rate 540000 lane count 1
[13:27:19] [PASSED] Link rate 270000 lane count 4
[13:27:19] [PASSED] Link rate 270000 lane count 2
[13:27:19] [PASSED] Link rate 270000 lane count 1
[13:27:19] [PASSED] Link rate 162000 lane count 4
[13:27:19] [PASSED] Link rate 162000 lane count 2
[13:27:19] [PASSED] Link rate 162000 lane count 1
[13:27:19] ========== [PASSED] drm_test_dp_mst_calc_pbn_div ===========
[13:27:19] ========= drm_test_dp_mst_sideband_msg_req_decode  =========
[13:27:19] [PASSED] DP_ENUM_PATH_RESOURCES with port number
[13:27:19] [PASSED] DP_POWER_UP_PHY with port number
[13:27:19] [PASSED] DP_POWER_DOWN_PHY with port number
[13:27:19] [PASSED] DP_ALLOCATE_PAYLOAD with SDP stream sinks
[13:27:19] [PASSED] DP_ALLOCATE_PAYLOAD with port number
[13:27:19] [PASSED] DP_ALLOCATE_PAYLOAD with VCPI
[13:27:19] [PASSED] DP_ALLOCATE_PAYLOAD with PBN
[13:27:19] [PASSED] DP_QUERY_PAYLOAD with port number
[13:27:19] [PASSED] DP_QUERY_PAYLOAD with VCPI
[13:27:19] [PASSED] DP_REMOTE_DPCD_READ with port number
[13:27:19] [PASSED] DP_REMOTE_DPCD_READ with DPCD address
[13:27:19] [PASSED] DP_REMOTE_DPCD_READ with max number of bytes
[13:27:19] [PASSED] DP_REMOTE_DPCD_WRITE with port number
[13:27:19] [PASSED] DP_REMOTE_DPCD_WRITE with DPCD address
[13:27:19] [PASSED] DP_REMOTE_DPCD_WRITE with data array
[13:27:19] [PASSED] DP_REMOTE_I2C_READ with port number
[13:27:19] [PASSED] DP_REMOTE_I2C_READ with I2C device ID
[13:27:19] [PASSED] DP_REMOTE_I2C_READ with transactions array
[13:27:19] [PASSED] DP_REMOTE_I2C_WRITE with port number
[13:27:19] [PASSED] DP_REMOTE_I2C_WRITE with I2C device ID
[13:27:19] [PASSED] DP_REMOTE_I2C_WRITE with data array
[13:27:19] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream ID
[13:27:19] [PASSED] DP_QUERY_STREAM_ENC_STATUS with client ID
[13:27:19] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream event
[13:27:19] [PASSED] DP_QUERY_STREAM_ENC_STATUS with valid stream event
[13:27:19] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream behavior
[13:27:19] [PASSED] DP_QUERY_STREAM_ENC_STATUS with a valid stream behavior
[13:27:19] ===== [PASSED] drm_test_dp_mst_sideband_msg_req_decode =====
[13:27:19] ================ [PASSED] drm_dp_mst_helper ================
[13:27:19] ================== drm_exec (7 subtests) ===================
[13:27:19] [PASSED] sanitycheck
[13:27:19] [PASSED] test_lock
[13:27:19] [PASSED] test_lock_unlock
[13:27:19] [PASSED] test_duplicates
[13:27:19] [PASSED] test_prepare
[13:27:19] [PASSED] test_prepare_array
[13:27:19] [PASSED] test_multiple_loops
[13:27:19] ==================== [PASSED] drm_exec =====================
[13:27:19] =========== drm_format_helper_test (17 subtests) ===========
[13:27:19] ============== drm_test_fb_xrgb8888_to_gray8  ==============
[13:27:19] [PASSED] single_pixel_source_buffer
[13:27:19] [PASSED] single_pixel_clip_rectangle
[13:27:19] [PASSED] well_known_colors
[13:27:19] [PASSED] destination_pitch
[13:27:19] ========== [PASSED] drm_test_fb_xrgb8888_to_gray8 ==========
[13:27:19] ============= drm_test_fb_xrgb8888_to_rgb332  ==============
[13:27:19] [PASSED] single_pixel_source_buffer
[13:27:19] [PASSED] single_pixel_clip_rectangle
[13:27:19] [PASSED] well_known_colors
[13:27:19] [PASSED] destination_pitch
[13:27:19] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb332 ==========
[13:27:19] ============= drm_test_fb_xrgb8888_to_rgb565  ==============
[13:27:19] [PASSED] single_pixel_source_buffer
[13:27:19] [PASSED] single_pixel_clip_rectangle
[13:27:19] [PASSED] well_known_colors
[13:27:19] [PASSED] destination_pitch
[13:27:19] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb565 ==========
[13:27:19] ============ drm_test_fb_xrgb8888_to_xrgb1555  =============
[13:27:19] [PASSED] single_pixel_source_buffer
[13:27:19] [PASSED] single_pixel_clip_rectangle
[13:27:19] [PASSED] well_known_colors
[13:27:19] [PASSED] destination_pitch
[13:27:19] ======== [PASSED] drm_test_fb_xrgb8888_to_xrgb1555 =========
[13:27:19] ============ drm_test_fb_xrgb8888_to_argb1555  =============
[13:27:19] [PASSED] single_pixel_source_buffer
[13:27:19] [PASSED] single_pixel_clip_rectangle
[13:27:19] [PASSED] well_known_colors
[13:27:19] [PASSED] destination_pitch
[13:27:19] ======== [PASSED] drm_test_fb_xrgb8888_to_argb1555 =========
[13:27:19] ============ drm_test_fb_xrgb8888_to_rgba5551  =============
[13:27:19] [PASSED] single_pixel_source_buffer
[13:27:19] [PASSED] single_pixel_clip_rectangle
[13:27:19] [PASSED] well_known_colors
[13:27:19] [PASSED] destination_pitch
[13:27:19] ======== [PASSED] drm_test_fb_xrgb8888_to_rgba5551 =========
[13:27:19] ============= drm_test_fb_xrgb8888_to_rgb888  ==============
[13:27:19] [PASSED] single_pixel_source_buffer
[13:27:19] [PASSED] single_pixel_clip_rectangle
[13:27:19] [PASSED] well_known_colors
[13:27:19] [PASSED] destination_pitch
[13:27:19] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb888 ==========
[13:27:19] ============= drm_test_fb_xrgb8888_to_bgr888  ==============
[13:27:19] [PASSED] single_pixel_source_buffer
[13:27:19] [PASSED] single_pixel_clip_rectangle
[13:27:19] [PASSED] well_known_colors
[13:27:19] [PASSED] destination_pitch
[13:27:19] ========= [PASSED] drm_test_fb_xrgb8888_to_bgr888 ==========
[13:27:19] ============ drm_test_fb_xrgb8888_to_argb8888  =============
[13:27:19] [PASSED] single_pixel_source_buffer
[13:27:19] [PASSED] single_pixel_clip_rectangle
[13:27:19] [PASSED] well_known_colors
[13:27:19] [PASSED] destination_pitch
[13:27:19] ======== [PASSED] drm_test_fb_xrgb8888_to_argb8888 =========
[13:27:19] =========== drm_test_fb_xrgb8888_to_xrgb2101010  ===========
[13:27:19] [PASSED] single_pixel_source_buffer
[13:27:19] [PASSED] single_pixel_clip_rectangle
[13:27:19] [PASSED] well_known_colors
[13:27:19] [PASSED] destination_pitch
[13:27:19] ======= [PASSED] drm_test_fb_xrgb8888_to_xrgb2101010 =======
[13:27:19] =========== drm_test_fb_xrgb8888_to_argb2101010  ===========
[13:27:19] [PASSED] single_pixel_source_buffer
[13:27:19] [PASSED] single_pixel_clip_rectangle
[13:27:19] [PASSED] well_known_colors
[13:27:19] [PASSED] destination_pitch
[13:27:19] ======= [PASSED] drm_test_fb_xrgb8888_to_argb2101010 =======
[13:27:19] ============== drm_test_fb_xrgb8888_to_mono  ===============
[13:27:19] [PASSED] single_pixel_source_buffer
[13:27:19] [PASSED] single_pixel_clip_rectangle
[13:27:19] [PASSED] well_known_colors
[13:27:19] [PASSED] destination_pitch
[13:27:19] ========== [PASSED] drm_test_fb_xrgb8888_to_mono ===========
[13:27:19] ==================== drm_test_fb_swab  =====================
[13:27:19] [PASSED] single_pixel_source_buffer
[13:27:19] [PASSED] single_pixel_clip_rectangle
[13:27:19] [PASSED] well_known_colors
[13:27:19] [PASSED] destination_pitch
[13:27:19] ================ [PASSED] drm_test_fb_swab =================
[13:27:19] ============ drm_test_fb_xrgb8888_to_xbgr8888  =============
[13:27:19] [PASSED] single_pixel_source_buffer
[13:27:19] [PASSED] single_pixel_clip_rectangle
[13:27:19] [PASSED] well_known_colors
[13:27:19] [PASSED] destination_pitch
[13:27:19] ======== [PASSED] drm_test_fb_xrgb8888_to_xbgr8888 =========
[13:27:19] ============ drm_test_fb_xrgb8888_to_abgr8888  =============
[13:27:19] [PASSED] single_pixel_source_buffer
[13:27:19] [PASSED] single_pixel_clip_rectangle
[13:27:19] [PASSED] well_known_colors
[13:27:19] [PASSED] destination_pitch
[13:27:19] ======== [PASSED] drm_test_fb_xrgb8888_to_abgr8888 =========
[13:27:19] ================= drm_test_fb_clip_offset  =================
[13:27:19] [PASSED] pass through
[13:27:19] [PASSED] horizontal offset
[13:27:19] [PASSED] vertical offset
[13:27:19] [PASSED] horizontal and vertical offset
[13:27:19] [PASSED] horizontal offset (custom pitch)
[13:27:19] [PASSED] vertical offset (custom pitch)
[13:27:19] [PASSED] horizontal and vertical offset (custom pitch)
[13:27:19] ============= [PASSED] drm_test_fb_clip_offset =============
[13:27:19] =================== drm_test_fb_memcpy  ====================
[13:27:19] [PASSED] single_pixel_source_buffer: XR24 little-endian (0x34325258)
[13:27:19] [PASSED] single_pixel_source_buffer: XRA8 little-endian (0x38415258)
[13:27:19] [PASSED] single_pixel_source_buffer: YU24 little-endian (0x34325559)
[13:27:19] [PASSED] single_pixel_clip_rectangle: XB24 little-endian (0x34324258)
[13:27:19] [PASSED] single_pixel_clip_rectangle: XRA8 little-endian (0x38415258)
[13:27:19] [PASSED] single_pixel_clip_rectangle: YU24 little-endian (0x34325559)
[13:27:19] [PASSED] well_known_colors: XB24 little-endian (0x34324258)
[13:27:19] [PASSED] well_known_colors: XRA8 little-endian (0x38415258)
[13:27:19] [PASSED] well_known_colors: YU24 little-endian (0x34325559)
[13:27:19] [PASSED] destination_pitch: XB24 little-endian (0x34324258)
[13:27:19] [PASSED] destination_pitch: XRA8 little-endian (0x38415258)
[13:27:19] [PASSED] destination_pitch: YU24 little-endian (0x34325559)
[13:27:19] =============== [PASSED] drm_test_fb_memcpy ================
[13:27:19] ============= [PASSED] drm_format_helper_test ==============
[13:27:19] ================= drm_format (18 subtests) =================
[13:27:19] [PASSED] drm_test_format_block_width_invalid
[13:27:19] [PASSED] drm_test_format_block_width_one_plane
[13:27:19] [PASSED] drm_test_format_block_width_two_plane
[13:27:19] [PASSED] drm_test_format_block_width_three_plane
[13:27:19] [PASSED] drm_test_format_block_width_tiled
[13:27:19] [PASSED] drm_test_format_block_height_invalid
[13:27:19] [PASSED] drm_test_format_block_height_one_plane
[13:27:19] [PASSED] drm_test_format_block_height_two_plane
[13:27:19] [PASSED] drm_test_format_block_height_three_plane
[13:27:19] [PASSED] drm_test_format_block_height_tiled
[13:27:19] [PASSED] drm_test_format_min_pitch_invalid
[13:27:19] [PASSED] drm_test_format_min_pitch_one_plane_8bpp
[13:27:19] [PASSED] drm_test_format_min_pitch_one_plane_16bpp
[13:27:19] [PASSED] drm_test_format_min_pitch_one_plane_24bpp
[13:27:19] [PASSED] drm_test_format_min_pitch_one_plane_32bpp
[13:27:19] [PASSED] drm_test_format_min_pitch_two_plane
[13:27:19] [PASSED] drm_test_format_min_pitch_three_plane_8bpp
[13:27:19] [PASSED] drm_test_format_min_pitch_tiled
[13:27:19] =================== [PASSED] drm_format ====================
[13:27:19] ============== drm_framebuffer (10 subtests) ===============
[13:27:19] ========== drm_test_framebuffer_check_src_coords  ==========
[13:27:19] [PASSED] Success: source fits into fb
[13:27:19] [PASSED] Fail: overflowing fb with x-axis coordinate
[13:27:19] [PASSED] Fail: overflowing fb with y-axis coordinate
[13:27:19] [PASSED] Fail: overflowing fb with source width
[13:27:19] [PASSED] Fail: overflowing fb with source height
[13:27:19] ====== [PASSED] drm_test_framebuffer_check_src_coords ======
[13:27:19] [PASSED] drm_test_framebuffer_cleanup
[13:27:19] =============== drm_test_framebuffer_create  ===============
[13:27:19] [PASSED] ABGR8888 normal sizes
[13:27:19] [PASSED] ABGR8888 max sizes
[13:27:19] [PASSED] ABGR8888 pitch greater than min required
[13:27:19] [PASSED] ABGR8888 pitch less than min required
[13:27:19] [PASSED] ABGR8888 Invalid width
[13:27:19] [PASSED] ABGR8888 Invalid buffer handle
[13:27:19] [PASSED] No pixel format
[13:27:19] [PASSED] ABGR8888 Width 0
[13:27:19] [PASSED] ABGR8888 Height 0
[13:27:19] [PASSED] ABGR8888 Out of bound height * pitch combination
[13:27:19] [PASSED] ABGR8888 Large buffer offset
[13:27:19] [PASSED] ABGR8888 Buffer offset for inexistent plane
[13:27:19] [PASSED] ABGR8888 Invalid flag
[13:27:19] [PASSED] ABGR8888 Set DRM_MODE_FB_MODIFIERS without modifiers
[13:27:19] [PASSED] ABGR8888 Valid buffer modifier
[13:27:19] [PASSED] ABGR8888 Invalid buffer modifier(DRM_FORMAT_MOD_SAMSUNG_64_32_TILE)
[13:27:19] [PASSED] ABGR8888 Extra pitches without DRM_MODE_FB_MODIFIERS
[13:27:19] [PASSED] ABGR8888 Extra pitches with DRM_MODE_FB_MODIFIERS
[13:27:19] [PASSED] NV12 Normal sizes
[13:27:19] [PASSED] NV12 Max sizes
[13:27:19] [PASSED] NV12 Invalid pitch
[13:27:19] [PASSED] NV12 Invalid modifier/missing DRM_MODE_FB_MODIFIERS flag
[13:27:19] [PASSED] NV12 different  modifier per-plane
[13:27:19] [PASSED] NV12 with DRM_FORMAT_MOD_SAMSUNG_64_32_TILE
[13:27:19] [PASSED] NV12 Valid modifiers without DRM_MODE_FB_MODIFIERS
[13:27:19] [PASSED] NV12 Modifier for inexistent plane
[13:27:19] [PASSED] NV12 Handle for inexistent plane
[13:27:19] [PASSED] NV12 Handle for inexistent plane without DRM_MODE_FB_MODIFIERS
[13:27:19] [PASSED] YVU420 DRM_MODE_FB_MODIFIERS set without modifier
[13:27:19] [PASSED] YVU420 Normal sizes
[13:27:19] [PASSED] YVU420 Max sizes
[13:27:19] [PASSED] YVU420 Invalid pitch
[13:27:19] [PASSED] YVU420 Different pitches
[13:27:19] [PASSED] YVU420 Different buffer offsets/pitches
[13:27:19] [PASSED] YVU420 Modifier set just for plane 0, without DRM_MODE_FB_MODIFIERS
[13:27:19] [PASSED] YVU420 Modifier set just for planes 0, 1, without DRM_MODE_FB_MODIFIERS
[13:27:19] [PASSED] YVU420 Modifier set just for plane 0, 1, with DRM_MODE_FB_MODIFIERS
[13:27:19] [PASSED] YVU420 Valid modifier
[13:27:19] [PASSED] YVU420 Different modifiers per plane
[13:27:19] [PASSED] YVU420 Modifier for inexistent plane
[13:27:19] [PASSED] YUV420_10BIT Invalid modifier(DRM_FORMAT_MOD_LINEAR)
[13:27:19] [PASSED] X0L2 Normal sizes
[13:27:19] [PASSED] X0L2 Max sizes
[13:27:19] [PASSED] X0L2 Invalid pitch
[13:27:19] [PASSED] X0L2 Pitch greater than minimum required
[13:27:19] [PASSED] X0L2 Handle for inexistent plane
[13:27:19] [PASSED] X0L2 Offset for inexistent plane, without DRM_MODE_FB_MODIFIERS set
[13:27:19] [PASSED] X0L2 Modifier without DRM_MODE_FB_MODIFIERS set
[13:27:19] [PASSED] X0L2 Valid modifier
[13:27:19] [PASSED] X0L2 Modifier for inexistent plane
[13:27:19] =========== [PASSED] drm_test_framebuffer_create ===========
[13:27:19] [PASSED] drm_test_framebuffer_free
[13:27:19] [PASSED] drm_test_framebuffer_init
[13:27:19] [PASSED] drm_test_framebuffer_init_bad_format
[13:27:19] [PASSED] drm_test_framebuffer_init_dev_mismatch
[13:27:19] [PASSED] drm_test_framebuffer_lookup
[13:27:19] [PASSED] drm_test_framebuffer_lookup_inexistent
[13:27:19] [PASSED] drm_test_framebuffer_modifiers_not_supported
[13:27:19] ================= [PASSED] drm_framebuffer =================
[13:27:19] ================ drm_gem_shmem (8 subtests) ================
[13:27:19] [PASSED] drm_gem_shmem_test_obj_create
[13:27:19] [PASSED] drm_gem_shmem_test_obj_create_private
[13:27:19] [PASSED] drm_gem_shmem_test_pin_pages
[13:27:19] [PASSED] drm_gem_shmem_test_vmap
[13:27:19] [PASSED] drm_gem_shmem_test_get_pages_sgt
[13:27:19] [PASSED] drm_gem_shmem_test_get_sg_table
[13:27:19] [PASSED] drm_gem_shmem_test_madvise
[13:27:19] [PASSED] drm_gem_shmem_test_purge
[13:27:19] ================== [PASSED] drm_gem_shmem ==================
[13:27:19] === drm_atomic_helper_connector_hdmi_check (27 subtests) ===
[13:27:19] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode
[13:27:19] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode_vic_1
[13:27:19] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode
[13:27:19] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode_vic_1
[13:27:19] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode
[13:27:19] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode_vic_1
[13:27:19] ====== drm_test_check_broadcast_rgb_cea_mode_yuv420  =======
[13:27:19] [PASSED] Automatic
[13:27:19] [PASSED] Full
[13:27:19] [PASSED] Limited 16:235
[13:27:19] == [PASSED] drm_test_check_broadcast_rgb_cea_mode_yuv420 ===
[13:27:19] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_changed
[13:27:19] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_not_changed
[13:27:19] [PASSED] drm_test_check_disable_connector
[13:27:19] [PASSED] drm_test_check_hdmi_funcs_reject_rate
[13:27:19] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_rgb
[13:27:19] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_yuv420
[13:27:19] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv422
[13:27:19] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv420
[13:27:19] [PASSED] drm_test_check_driver_unsupported_fallback_yuv420
[13:27:19] [PASSED] drm_test_check_output_bpc_crtc_mode_changed
[13:27:19] [PASSED] drm_test_check_output_bpc_crtc_mode_not_changed
[13:27:19] [PASSED] drm_test_check_output_bpc_dvi
[13:27:19] [PASSED] drm_test_check_output_bpc_format_vic_1
[13:27:19] [PASSED] drm_test_check_output_bpc_format_display_8bpc_only
[13:27:19] [PASSED] drm_test_check_output_bpc_format_display_rgb_only
[13:27:19] [PASSED] drm_test_check_output_bpc_format_driver_8bpc_only
[13:27:19] [PASSED] drm_test_check_output_bpc_format_driver_rgb_only
[13:27:19] [PASSED] drm_test_check_tmds_char_rate_rgb_8bpc
[13:27:19] [PASSED] drm_test_check_tmds_char_rate_rgb_10bpc
[13:27:19] [PASSED] drm_test_check_tmds_char_rate_rgb_12bpc
[13:27:19] ===== [PASSED] drm_atomic_helper_connector_hdmi_check ======
[13:27:19] === drm_atomic_helper_connector_hdmi_reset (6 subtests) ====
[13:27:19] [PASSED] drm_test_check_broadcast_rgb_value
[13:27:19] [PASSED] drm_test_check_bpc_8_value
[13:27:19] [PASSED] drm_test_check_bpc_10_value
[13:27:19] [PASSED] drm_test_check_bpc_12_value
[13:27:19] [PASSED] drm_test_check_format_value
[13:27:19] [PASSED] drm_test_check_tmds_char_value
[13:27:19] ===== [PASSED] drm_atomic_helper_connector_hdmi_reset ======
[13:27:19] = drm_atomic_helper_connector_hdmi_mode_valid (4 subtests) =
[13:27:19] [PASSED] drm_test_check_mode_valid
[13:27:19] [PASSED] drm_test_check_mode_valid_reject
[13:27:19] [PASSED] drm_test_check_mode_valid_reject_rate
[13:27:19] [PASSED] drm_test_check_mode_valid_reject_max_clock
[13:27:19] === [PASSED] drm_atomic_helper_connector_hdmi_mode_valid ===
[13:27:19] ================= drm_managed (2 subtests) =================
[13:27:19] [PASSED] drm_test_managed_release_action
[13:27:19] [PASSED] drm_test_managed_run_action
[13:27:19] =================== [PASSED] drm_managed ===================
[13:27:19] =================== drm_mm (6 subtests) ====================
[13:27:19] [PASSED] drm_test_mm_init
[13:27:19] [PASSED] drm_test_mm_debug
[13:27:19] [PASSED] drm_test_mm_align32
[13:27:19] [PASSED] drm_test_mm_align64
[13:27:19] [PASSED] drm_test_mm_lowest
[13:27:19] [PASSED] drm_test_mm_highest
[13:27:19] ===================== [PASSED] drm_mm ======================
[13:27:19] ============= drm_modes_analog_tv (5 subtests) =============
[13:27:19] [PASSED] drm_test_modes_analog_tv_mono_576i
[13:27:19] [PASSED] drm_test_modes_analog_tv_ntsc_480i
[13:27:19] [PASSED] drm_test_modes_analog_tv_ntsc_480i_inlined
[13:27:19] [PASSED] drm_test_modes_analog_tv_pal_576i
[13:27:19] [PASSED] drm_test_modes_analog_tv_pal_576i_inlined
[13:27:19] =============== [PASSED] drm_modes_analog_tv ===============
[13:27:19] ============== drm_plane_helper (2 subtests) ===============
[13:27:19] =============== drm_test_check_plane_state  ================
[13:27:19] [PASSED] clipping_simple
[13:27:19] [PASSED] clipping_rotate_reflect
[13:27:19] [PASSED] positioning_simple
[13:27:19] [PASSED] upscaling
[13:27:19] [PASSED] downscaling
[13:27:19] [PASSED] rounding1
[13:27:19] [PASSED] rounding2
[13:27:19] [PASSED] rounding3
[13:27:19] [PASSED] rounding4
[13:27:19] =========== [PASSED] drm_test_check_plane_state ============
[13:27:19] =========== drm_test_check_invalid_plane_state  ============
[13:27:19] [PASSED] positioning_invalid
[13:27:19] [PASSED] upscaling_invalid
[13:27:19] [PASSED] downscaling_invalid
[13:27:19] ======= [PASSED] drm_test_check_invalid_plane_state ========
[13:27:19] ================ [PASSED] drm_plane_helper =================
[13:27:19] ====== drm_connector_helper_tv_get_modes (1 subtest) =======
[13:27:19] ====== drm_test_connector_helper_tv_get_modes_check  =======
[13:27:19] [PASSED] None
[13:27:19] [PASSED] PAL
[13:27:19] [PASSED] NTSC
[13:27:19] [PASSED] Both, NTSC Default
[13:27:19] [PASSED] Both, PAL Default
[13:27:19] [PASSED] Both, NTSC Default, with PAL on command-line
[13:27:19] [PASSED] Both, PAL Default, with NTSC on command-line
[13:27:19] == [PASSED] drm_test_connector_helper_tv_get_modes_check ===
[13:27:19] ======== [PASSED] drm_connector_helper_tv_get_modes ========
[13:27:19] ================== drm_rect (9 subtests) ===================
[13:27:19] [PASSED] drm_test_rect_clip_scaled_div_by_zero
[13:27:19] [PASSED] drm_test_rect_clip_scaled_not_clipped
[13:27:19] [PASSED] drm_test_rect_clip_scaled_clipped
[13:27:19] [PASSED] drm_test_rect_clip_scaled_signed_vs_unsigned
[13:27:19] ================= drm_test_rect_intersect  =================
[13:27:19] [PASSED] top-left x bottom-right: 2x2+1+1 x 2x2+0+0
[13:27:19] [PASSED] top-right x bottom-left: 2x2+0+0 x 2x2+1-1
[13:27:19] [PASSED] bottom-left x top-right: 2x2+1-1 x 2x2+0+0
[13:27:19] [PASSED] bottom-right x top-left: 2x2+0+0 x 2x2+1+1
[13:27:19] [PASSED] right x left: 2x1+0+0 x 3x1+1+0
[13:27:19] [PASSED] left x right: 3x1+1+0 x 2x1+0+0
[13:27:19] [PASSED] up x bottom: 1x2+0+0 x 1x3+0-1
[13:27:19] [PASSED] bottom x up: 1x3+0-1 x 1x2+0+0
[13:27:19] [PASSED] touching corner: 1x1+0+0 x 2x2+1+1
[13:27:19] [PASSED] touching side: 1x1+0+0 x 1x1+1+0
[13:27:19] [PASSED] equal rects: 2x2+0+0 x 2x2+0+0
[13:27:19] [PASSED] inside another: 2x2+0+0 x 1x1+1+1
[13:27:19] [PASSED] far away: 1x1+0+0 x 1x1+3+6
[13:27:19] [PASSED] points intersecting: 0x0+5+10 x 0x0+5+10
[13:27:19] [PASSED] points not intersecting: 0x0+0+0 x 0x0+5+10
[13:27:19] ============= [PASSED] drm_test_rect_intersect =============
[13:27:19] ================ drm_test_rect_calc_hscale  ================
[13:27:19] [PASSED] normal use
[13:27:19] [PASSED] out of max range
[13:27:19] [PASSED] out of min range
[13:27:19] [PASSED] zero dst
[13:27:19] [PASSED] negative src
[13:27:19] [PASSED] negative dst
[13:27:19] ============ [PASSED] drm_test_rect_calc_hscale ============
[13:27:19] ================ drm_test_rect_calc_vscale  ================
[13:27:19] [PASSED] normal use
[13:27:19] [PASSED] out of max range
[13:27:19] [PASSED] out of min range
[13:27:19] [PASSED] zero dst
[13:27:19] [PASSED] negative src
[13:27:19] [PASSED] negative dst
[13:27:19] ============ [PASSED] drm_test_rect_calc_vscale ============
[13:27:19] ================== drm_test_rect_rotate  ===================
[13:27:19] [PASSED] reflect-x
[13:27:19] [PASSED] reflect-y
[13:27:19] [PASSED] rotate-0
[13:27:19] [PASSED] rotate-90
[13:27:19] [PASSED] rotate-180
[13:27:19] [PASSED] rotate-270
[13:27:19] ============== [PASSED] drm_test_rect_rotate ===============
[13:27:19] ================ drm_test_rect_rotate_inv  =================
[13:27:19] [PASSED] reflect-x
[13:27:19] [PASSED] reflect-y
[13:27:19] [PASSED] rotate-0
[13:27:19] [PASSED] rotate-90
[13:27:19] [PASSED] rotate-180
[13:27:19] [PASSED] rotate-270
[13:27:19] ============ [PASSED] drm_test_rect_rotate_inv =============
[13:27:19] ==================== [PASSED] drm_rect =====================
[13:27:19] ============ drm_sysfb_modeset_test (1 subtest) ============
[13:27:19] ============ drm_test_sysfb_build_fourcc_list  =============
[13:27:19] [PASSED] no native formats
[13:27:19] [PASSED] XRGB8888 as native format
[13:27:19] [PASSED] remove duplicates
[13:27:19] [PASSED] convert alpha formats
[13:27:19] [PASSED] random formats
[13:27:19] ======== [PASSED] drm_test_sysfb_build_fourcc_list =========
[13:27:19] ============= [PASSED] drm_sysfb_modeset_test ==============
[13:27:19] ============================================================
[13:27:19] Testing complete. Ran 616 tests: passed: 616
[13:27:19] Elapsed time: 24.558s total, 1.703s configuring, 22.689s building, 0.149s running

+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/ttm/tests/.kunitconfig
[13:27:19] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[13:27:21] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[13:27:29] Starting KUnit Kernel (1/1)...
[13:27:29] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[13:27:29] ================= ttm_device (5 subtests) ==================
[13:27:29] [PASSED] ttm_device_init_basic
[13:27:29] [PASSED] ttm_device_init_multiple
[13:27:29] [PASSED] ttm_device_fini_basic
[13:27:29] [PASSED] ttm_device_init_no_vma_man
[13:27:29] ================== ttm_device_init_pools  ==================
[13:27:29] [PASSED] No DMA allocations, no DMA32 required
[13:27:29] [PASSED] DMA allocations, DMA32 required
[13:27:29] [PASSED] No DMA allocations, DMA32 required
[13:27:29] [PASSED] DMA allocations, no DMA32 required
[13:27:29] ============== [PASSED] ttm_device_init_pools ==============
[13:27:29] =================== [PASSED] ttm_device ====================
[13:27:29] ================== ttm_pool (8 subtests) ===================
[13:27:29] ================== ttm_pool_alloc_basic  ===================
[13:27:29] [PASSED] One page
[13:27:29] [PASSED] More than one page
[13:27:29] [PASSED] Above the allocation limit
[13:27:29] [PASSED] One page, with coherent DMA mappings enabled
[13:27:29] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[13:27:29] ============== [PASSED] ttm_pool_alloc_basic ===============
[13:27:29] ============== ttm_pool_alloc_basic_dma_addr  ==============
[13:27:29] [PASSED] One page
[13:27:29] [PASSED] More than one page
[13:27:29] [PASSED] Above the allocation limit
[13:27:29] [PASSED] One page, with coherent DMA mappings enabled
[13:27:29] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[13:27:29] ========== [PASSED] ttm_pool_alloc_basic_dma_addr ==========
[13:27:29] [PASSED] ttm_pool_alloc_order_caching_match
[13:27:29] [PASSED] ttm_pool_alloc_caching_mismatch
[13:27:29] [PASSED] ttm_pool_alloc_order_mismatch
[13:27:29] [PASSED] ttm_pool_free_dma_alloc
[13:27:29] [PASSED] ttm_pool_free_no_dma_alloc
[13:27:29] [PASSED] ttm_pool_fini_basic
[13:27:29] ==================== [PASSED] ttm_pool =====================
[13:27:29] ================ ttm_resource (8 subtests) =================
[13:27:29] ================= ttm_resource_init_basic  =================
[13:27:29] [PASSED] Init resource in TTM_PL_SYSTEM
[13:27:29] [PASSED] Init resource in TTM_PL_VRAM
[13:27:29] [PASSED] Init resource in a private placement
[13:27:29] [PASSED] Init resource in TTM_PL_SYSTEM, set placement flags
[13:27:29] ============= [PASSED] ttm_resource_init_basic =============
[13:27:29] [PASSED] ttm_resource_init_pinned
[13:27:29] [PASSED] ttm_resource_fini_basic
[13:27:29] [PASSED] ttm_resource_manager_init_basic
[13:27:29] [PASSED] ttm_resource_manager_usage_basic
[13:27:29] [PASSED] ttm_resource_manager_set_used_basic
[13:27:29] [PASSED] ttm_sys_man_alloc_basic
[13:27:29] [PASSED] ttm_sys_man_free_basic
[13:27:29] ================== [PASSED] ttm_resource ===================
[13:27:29] =================== ttm_tt (15 subtests) ===================
[13:27:29] ==================== ttm_tt_init_basic  ====================
[13:27:29] [PASSED] Page-aligned size
[13:27:29] [PASSED] Extra pages requested
[13:27:29] ================ [PASSED] ttm_tt_init_basic ================
[13:27:29] [PASSED] ttm_tt_init_misaligned
[13:27:29] [PASSED] ttm_tt_fini_basic
[13:27:29] [PASSED] ttm_tt_fini_sg
[13:27:29] [PASSED] ttm_tt_fini_shmem
[13:27:29] [PASSED] ttm_tt_create_basic
[13:27:29] [PASSED] ttm_tt_create_invalid_bo_type
[13:27:29] [PASSED] ttm_tt_create_ttm_exists
[13:27:29] [PASSED] ttm_tt_create_failed
[13:27:29] [PASSED] ttm_tt_destroy_basic
[13:27:29] [PASSED] ttm_tt_populate_null_ttm
[13:27:29] [PASSED] ttm_tt_populate_populated_ttm
[13:27:29] [PASSED] ttm_tt_unpopulate_basic
[13:27:29] [PASSED] ttm_tt_unpopulate_empty_ttm
[13:27:29] [PASSED] ttm_tt_swapin_basic
[13:27:29] ===================== [PASSED] ttm_tt ======================
[13:27:29] =================== ttm_bo (14 subtests) ===================
[13:27:29] =========== ttm_bo_reserve_optimistic_no_ticket  ===========
[13:27:29] [PASSED] Cannot be interrupted and sleeps
[13:27:29] [PASSED] Cannot be interrupted, locks straight away
[13:27:29] [PASSED] Can be interrupted, sleeps
[13:27:29] ======= [PASSED] ttm_bo_reserve_optimistic_no_ticket =======
[13:27:29] [PASSED] ttm_bo_reserve_locked_no_sleep
[13:27:29] [PASSED] ttm_bo_reserve_no_wait_ticket
[13:27:29] [PASSED] ttm_bo_reserve_double_resv
[13:27:29] [PASSED] ttm_bo_reserve_interrupted
[13:27:29] [PASSED] ttm_bo_reserve_deadlock
[13:27:29] [PASSED] ttm_bo_unreserve_basic
[13:27:29] [PASSED] ttm_bo_unreserve_pinned
[13:27:29] [PASSED] ttm_bo_unreserve_bulk
[13:27:29] [PASSED] ttm_bo_put_basic
[13:27:29] [PASSED] ttm_bo_put_shared_resv
[13:27:29] [PASSED] ttm_bo_pin_basic
[13:27:29] [PASSED] ttm_bo_pin_unpin_resource
[13:27:29] [PASSED] ttm_bo_multiple_pin_one_unpin
[13:27:29] ===================== [PASSED] ttm_bo ======================
[13:27:29] ============== ttm_bo_validate (21 subtests) ===============
[13:27:29] ============== ttm_bo_init_reserved_sys_man  ===============
[13:27:29] [PASSED] Buffer object for userspace
[13:27:29] [PASSED] Kernel buffer object
[13:27:29] [PASSED] Shared buffer object
[13:27:29] ========== [PASSED] ttm_bo_init_reserved_sys_man ===========
[13:27:29] ============== ttm_bo_init_reserved_mock_man  ==============
[13:27:29] [PASSED] Buffer object for userspace
[13:27:29] [PASSED] Kernel buffer object
[13:27:29] [PASSED] Shared buffer object
[13:27:29] ========== [PASSED] ttm_bo_init_reserved_mock_man ==========
[13:27:29] [PASSED] ttm_bo_init_reserved_resv
[13:27:29] ================== ttm_bo_validate_basic  ==================
[13:27:29] [PASSED] Buffer object for userspace
[13:27:29] [PASSED] Kernel buffer object
[13:27:29] [PASSED] Shared buffer object
[13:27:29] ============== [PASSED] ttm_bo_validate_basic ==============
[13:27:29] [PASSED] ttm_bo_validate_invalid_placement
[13:27:29] ============= ttm_bo_validate_same_placement  ==============
[13:27:29] [PASSED] System manager
[13:27:29] [PASSED] VRAM manager
[13:27:29] ========= [PASSED] ttm_bo_validate_same_placement ==========
[13:27:29] [PASSED] ttm_bo_validate_failed_alloc
[13:27:29] [PASSED] ttm_bo_validate_pinned
[13:27:29] [PASSED] ttm_bo_validate_busy_placement
[13:27:29] ================ ttm_bo_validate_multihop  =================
[13:27:29] [PASSED] Buffer object for userspace
[13:27:29] [PASSED] Kernel buffer object
[13:27:29] [PASSED] Shared buffer object
[13:27:29] ============ [PASSED] ttm_bo_validate_multihop =============
[13:27:29] ========== ttm_bo_validate_no_placement_signaled  ==========
[13:27:29] [PASSED] Buffer object in system domain, no page vector
[13:27:29] [PASSED] Buffer object in system domain with an existing page vector
[13:27:29] ====== [PASSED] ttm_bo_validate_no_placement_signaled ======
[13:27:29] ======== ttm_bo_validate_no_placement_not_signaled  ========
[13:27:29] [PASSED] Buffer object for userspace
[13:27:29] [PASSED] Kernel buffer object
[13:27:29] [PASSED] Shared buffer object
[13:27:29] ==== [PASSED] ttm_bo_validate_no_placement_not_signaled ====
[13:27:29] [PASSED] ttm_bo_validate_move_fence_signaled
[13:27:29] ========= ttm_bo_validate_move_fence_not_signaled  =========
[13:27:29] [PASSED] Waits for GPU
[13:27:29] [PASSED] Tries to lock straight away
[13:27:29] ===== [PASSED] ttm_bo_validate_move_fence_not_signaled =====
[13:27:29] [PASSED] ttm_bo_validate_happy_evict
[13:27:29] [PASSED] ttm_bo_validate_all_pinned_evict
[13:27:29] [PASSED] ttm_bo_validate_allowed_only_evict
[13:27:29] [PASSED] ttm_bo_validate_deleted_evict
[13:27:29] [PASSED] ttm_bo_validate_busy_domain_evict
[13:27:29] [PASSED] ttm_bo_validate_evict_gutting
[13:27:29] [PASSED] ttm_bo_validate_recrusive_evict
[13:27:29] ================= [PASSED] ttm_bo_validate =================
[13:27:29] ============================================================
[13:27:29] Testing complete. Ran 101 tests: passed: 101
[13:27:29] Elapsed time: 9.874s total, 1.715s configuring, 7.943s building, 0.181s running
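The run above is driven by `kunit.py run --kunitconfig /kernel/drivers/gpu/drm/ttm/tests/.kunitconfig`; for readers wanting to reproduce it locally, a minimal sketch of such a config fragment is shown below (the exact option set lives in the in-tree `drivers/gpu/drm/ttm/tests/.kunitconfig`; the option names here are assumptions, not copied from this series):

```
# Hypothetical minimal .kunitconfig for the TTM KUnit suites
# (verify against the in-tree drivers/gpu/drm/ttm/tests/.kunitconfig)
CONFIG_KUNIT=y
CONFIG_DRM=y
CONFIG_DRM_KUNIT_TEST_HELPERS=y
CONFIG_DRM_TTM_KUNIT_TEST=y
```

Passed via `--kunitconfig`, kunit_tool merges this fragment into a UM kernel config (`make ARCH=um O=.kunit olddefconfig`, as the transcript shows) and boots the result to run the selected suites.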

+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel

^ permalink raw reply	[flat|nested] 5+ messages in thread

* ✓ Xe.CI.BAT: success for drm/xe: MADVISE SQUASH for CI-v7_with_comments_addressed_gpuvm_changes
  2025-08-13 12:38 [PATCH] drm/xe: MADVISE SQUASH for CI-v7_with_comments_addressed_gpuvm_changes Himal Prasad Ghimiray
  2025-08-13 13:26 ` ✗ CI.checkpatch: warning for " Patchwork
  2025-08-13 13:27 ` ✓ CI.KUnit: success " Patchwork
@ 2025-08-13 14:30 ` Patchwork
  2025-08-13 15:37 ` ✗ Xe.CI.Full: failure " Patchwork
  3 siblings, 0 replies; 5+ messages in thread
From: Patchwork @ 2025-08-13 14:30 UTC (permalink / raw)
  To: Himal Prasad Ghimiray; +Cc: intel-xe

[-- Attachment #1: Type: text/plain, Size: 1216 bytes --]

== Series Details ==

Series: drm/xe: MADVISE SQUASH for CI-v7_with_comments_addressed_gpuvm_changes
URL   : https://patchwork.freedesktop.org/series/152884/
State : success

== Summary ==

CI Bug Log - changes from xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13_BAT -> xe-pw-152884v1_BAT
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  

Participating hosts (11 -> 9)
------------------------------

  Missing    (2): bat-adlp-vm bat-ptl-vm 

Known issues
------------

  Here are the changes found in xe-pw-152884v1_BAT that come from known issues:

### IGT changes ###

  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [Intel XE#5783]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5783


Build changes
-------------

  * Linux: xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13 -> xe-pw-152884v1

  IGT_8493: 8493
  xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13: 546fc742f08b8dbd3fa1486933c9b15085e11d13
  xe-pw-152884v1: 152884v1

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/index.html

[-- Attachment #2: Type: text/html, Size: 1693 bytes --]


* ✗ Xe.CI.Full: failure for drm/xe: MADVISE SQUASH for CI-v7_with_comments_addressed_gpuvm_changes
  2025-08-13 12:38 [PATCH] drm/xe: MADVISE SQUASH for CI-v7_with_comments_addressed_gpuvm_changes Himal Prasad Ghimiray
                   ` (2 preceding siblings ...)
  2025-08-13 14:30 ` ✓ Xe.CI.BAT: " Patchwork
@ 2025-08-13 15:37 ` Patchwork
  3 siblings, 0 replies; 5+ messages in thread
From: Patchwork @ 2025-08-13 15:37 UTC (permalink / raw)
  To: Himal Prasad Ghimiray; +Cc: intel-xe

[-- Attachment #1: Type: text/plain, Size: 63202 bytes --]

== Series Details ==

Series: drm/xe: MADVISE SQUASH for CI-v7_with_comments_addressed_gpuvm_changes
URL   : https://patchwork.freedesktop.org/series/152884/
State : failure

== Summary ==

CI Bug Log - changes from xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13_FULL -> xe-pw-152884v1_FULL
====================================================

Summary
-------

  **FAILURE**

  Serious unknown changes coming with xe-pw-152884v1_FULL absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in xe-pw-152884v1_FULL, please notify your bug team (I915-ci-infra@lists.freedesktop.org) to allow them
  to document this new failure mode, which will reduce false positives in CI.

  

Participating hosts (4 -> 4)
------------------------------

  No changes in participating hosts

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in xe-pw-152884v1_FULL:

### IGT changes ###

#### Possible regressions ####

  * igt@xe_exec_compute_mode@once-bindexecqueue-userptr-rebind:
    - shard-adlp:         [PASS][1] -> [FAIL][2]
   [1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-adlp-2/igt@xe_exec_compute_mode@once-bindexecqueue-userptr-rebind.html
   [2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-adlp-6/igt@xe_exec_compute_mode@once-bindexecqueue-userptr-rebind.html

  * igt@xe_exec_system_allocator@many-large-free:
    - shard-bmg:          [PASS][3] -> [INCOMPLETE][4]
   [3]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-bmg-6/igt@xe_exec_system_allocator@many-large-free.html
   [4]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-bmg-5/igt@xe_exec_system_allocator@many-large-free.html

  * igt@xe_exec_system_allocator@process-many-new-bo-map:
    - shard-lnl:          [PASS][5] -> [FAIL][6]
   [5]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-lnl-1/igt@xe_exec_system_allocator@process-many-new-bo-map.html
   [6]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-2/igt@xe_exec_system_allocator@process-many-new-bo-map.html

  
Known issues
------------

  Here are the changes found in xe-pw-152884v1_FULL that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@kms_addfb_basic@addfb25-y-tiled-small-legacy:
    - shard-dg2-set2:     NOTRUN -> [SKIP][7] ([Intel XE#623])
   [7]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-dg2-436/igt@kms_addfb_basic@addfb25-y-tiled-small-legacy.html

  * igt@kms_async_flips@invalid-async-flip-atomic:
    - shard-dg2-set2:     NOTRUN -> [SKIP][8] ([Intel XE#3768])
   [8]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-dg2-436/igt@kms_async_flips@invalid-async-flip-atomic.html

  * igt@kms_atomic_transition@plane-all-modeset-transition:
    - shard-lnl:          NOTRUN -> [SKIP][9] ([Intel XE#3279])
   [9]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-8/igt@kms_atomic_transition@plane-all-modeset-transition.html

  * igt@kms_big_fb@4-tiled-addfb-size-offset-overflow:
    - shard-adlp:         NOTRUN -> [SKIP][10] ([Intel XE#607])
   [10]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-adlp-4/igt@kms_big_fb@4-tiled-addfb-size-offset-overflow.html

  * igt@kms_big_fb@x-tiled-16bpp-rotate-90:
    - shard-dg2-set2:     NOTRUN -> [SKIP][11] ([Intel XE#316])
   [11]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-dg2-436/igt@kms_big_fb@x-tiled-16bpp-rotate-90.html

  * igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-180-async-flip:
    - shard-adlp:         [PASS][12] -> [DMESG-FAIL][13] ([Intel XE#4543])
   [12]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-adlp-2/igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-180-async-flip.html
   [13]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-adlp-4/igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-180-async-flip.html

  * igt@kms_big_fb@y-tiled-16bpp-rotate-180:
    - shard-lnl:          NOTRUN -> [SKIP][14] ([Intel XE#1124])
   [14]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-8/igt@kms_big_fb@y-tiled-16bpp-rotate-180.html

  * igt@kms_big_fb@y-tiled-16bpp-rotate-270:
    - shard-adlp:         NOTRUN -> [SKIP][15] ([Intel XE#316]) +1 other test skip
   [15]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-adlp-4/igt@kms_big_fb@y-tiled-16bpp-rotate-270.html

  * igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-0:
    - shard-dg2-set2:     NOTRUN -> [SKIP][16] ([Intel XE#1124]) +3 other tests skip
   [16]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-dg2-436/igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-0.html

  * igt@kms_big_fb@yf-tiled-8bpp-rotate-90:
    - shard-bmg:          NOTRUN -> [SKIP][17] ([Intel XE#1124]) +1 other test skip
   [17]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-bmg-3/igt@kms_big_fb@yf-tiled-8bpp-rotate-90.html

  * igt@kms_big_fb@yf-tiled-addfb-size-offset-overflow:
    - shard-dg2-set2:     NOTRUN -> [SKIP][18] ([Intel XE#607])
   [18]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-dg2-466/igt@kms_big_fb@yf-tiled-addfb-size-offset-overflow.html

  * igt@kms_big_fb@yf-tiled-addfb-size-overflow:
    - shard-lnl:          NOTRUN -> [SKIP][19] ([Intel XE#1428])
   [19]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-8/igt@kms_big_fb@yf-tiled-addfb-size-overflow.html

  * igt@kms_big_fb@yf-tiled-max-hw-stride-32bpp-rotate-0-hflip:
    - shard-adlp:         NOTRUN -> [SKIP][20] ([Intel XE#1124]) +1 other test skip
   [20]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-adlp-4/igt@kms_big_fb@yf-tiled-max-hw-stride-32bpp-rotate-0-hflip.html

  * igt@kms_bw@connected-linear-tiling-2-displays-3840x2160p:
    - shard-bmg:          [PASS][21] -> [SKIP][22] ([Intel XE#2314] / [Intel XE#2894])
   [21]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-bmg-7/igt@kms_bw@connected-linear-tiling-2-displays-3840x2160p.html
   [22]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-bmg-6/igt@kms_bw@connected-linear-tiling-2-displays-3840x2160p.html

  * igt@kms_bw@linear-tiling-2-displays-1920x1080p:
    - shard-dg2-set2:     NOTRUN -> [SKIP][23] ([Intel XE#367]) +1 other test skip
   [23]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-dg2-463/igt@kms_bw@linear-tiling-2-displays-1920x1080p.html

  * igt@kms_bw@linear-tiling-2-displays-2160x1440p:
    - shard-lnl:          NOTRUN -> [SKIP][24] ([Intel XE#367])
   [24]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-8/igt@kms_bw@linear-tiling-2-displays-2160x1440p.html

  * igt@kms_ccs@ccs-on-another-bo-4-tiled-mtl-mc-ccs:
    - shard-lnl:          NOTRUN -> [SKIP][25] ([Intel XE#2887]) +2 other tests skip
   [25]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-8/igt@kms_ccs@ccs-on-another-bo-4-tiled-mtl-mc-ccs.html

  * igt@kms_ccs@crc-primary-basic-4-tiled-lnl-ccs:
    - shard-adlp:         NOTRUN -> [SKIP][26] ([Intel XE#2907])
   [26]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-adlp-4/igt@kms_ccs@crc-primary-basic-4-tiled-lnl-ccs.html

  * igt@kms_ccs@crc-primary-basic-4-tiled-mtl-rc-ccs@pipe-b-hdmi-a-6:
    - shard-dg2-set2:     NOTRUN -> [SKIP][27] ([Intel XE#787]) +209 other tests skip
   [27]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-dg2-435/igt@kms_ccs@crc-primary-basic-4-tiled-mtl-rc-ccs@pipe-b-hdmi-a-6.html

  * igt@kms_ccs@crc-primary-suspend-4-tiled-lnl-ccs@pipe-a-dp-2:
    - shard-bmg:          NOTRUN -> [SKIP][28] ([Intel XE#2652] / [Intel XE#787]) +3 other tests skip
   [28]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-bmg-2/igt@kms_ccs@crc-primary-suspend-4-tiled-lnl-ccs@pipe-a-dp-2.html

  * igt@kms_ccs@crc-sprite-planes-basic-4-tiled-lnl-ccs:
    - shard-dg2-set2:     NOTRUN -> [SKIP][29] ([Intel XE#2907]) +1 other test skip
   [29]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-dg2-436/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-lnl-ccs.html

  * igt@kms_ccs@crc-sprite-planes-basic-y-tiled-gen12-rc-ccs@pipe-d-dp-4:
    - shard-dg2-set2:     NOTRUN -> [SKIP][30] ([Intel XE#455] / [Intel XE#787]) +36 other tests skip
   [30]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-dg2-435/igt@kms_ccs@crc-sprite-planes-basic-y-tiled-gen12-rc-ccs@pipe-d-dp-4.html

  * igt@kms_ccs@missing-ccs-buffer-y-tiled-ccs:
    - shard-bmg:          NOTRUN -> [SKIP][31] ([Intel XE#2887]) +3 other tests skip
   [31]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-bmg-3/igt@kms_ccs@missing-ccs-buffer-y-tiled-ccs.html

  * igt@kms_ccs@random-ccs-data-y-tiled-ccs:
    - shard-adlp:         NOTRUN -> [SKIP][32] ([Intel XE#455] / [Intel XE#787]) +5 other tests skip
   [32]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-adlp-4/igt@kms_ccs@random-ccs-data-y-tiled-ccs.html

  * igt@kms_ccs@random-ccs-data-y-tiled-ccs@pipe-b-hdmi-a-1:
    - shard-adlp:         NOTRUN -> [SKIP][33] ([Intel XE#787]) +8 other tests skip
   [33]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-adlp-4/igt@kms_ccs@random-ccs-data-y-tiled-ccs@pipe-b-hdmi-a-1.html

  * igt@kms_chamelium_color@ctm-max:
    - shard-lnl:          NOTRUN -> [SKIP][34] ([Intel XE#306])
   [34]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-8/igt@kms_chamelium_color@ctm-max.html

  * igt@kms_chamelium_edid@dp-mode-timings:
    - shard-adlp:         NOTRUN -> [SKIP][35] ([Intel XE#373])
   [35]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-adlp-4/igt@kms_chamelium_edid@dp-mode-timings.html

  * igt@kms_chamelium_hpd@dp-hpd-enable-disable-mode:
    - shard-bmg:          NOTRUN -> [SKIP][36] ([Intel XE#2252])
   [36]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-bmg-3/igt@kms_chamelium_hpd@dp-hpd-enable-disable-mode.html

  * igt@kms_chamelium_hpd@dp-hpd-fast:
    - shard-dg2-set2:     NOTRUN -> [SKIP][37] ([Intel XE#373]) +5 other tests skip
   [37]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-dg2-436/igt@kms_chamelium_hpd@dp-hpd-fast.html

  * igt@kms_concurrent@multi-plane-atomic-lowres:
    - shard-bmg:          NOTRUN -> [ABORT][38] ([Intel XE#5826])
   [38]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-bmg-3/igt@kms_concurrent@multi-plane-atomic-lowres.html

  * igt@kms_concurrent@multi-plane-atomic-lowres@pipe-a-dp-2:
    - shard-bmg:          NOTRUN -> [ABORT][39] ([Intel XE#5898])
   [39]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-bmg-3/igt@kms_concurrent@multi-plane-atomic-lowres@pipe-a-dp-2.html

  * igt@kms_concurrent@multi-plane-atomic-lowres@pipe-a-hdmi-a-3:
    - shard-bmg:          NOTRUN -> [DMESG-WARN][40] ([Intel XE#5826])
   [40]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-bmg-3/igt@kms_concurrent@multi-plane-atomic-lowres@pipe-a-hdmi-a-3.html

  * igt@kms_content_protection@dp-mst-lic-type-0:
    - shard-lnl:          NOTRUN -> [SKIP][41] ([Intel XE#307])
   [41]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-8/igt@kms_content_protection@dp-mst-lic-type-0.html

  * igt@kms_content_protection@dp-mst-lic-type-1:
    - shard-dg2-set2:     NOTRUN -> [SKIP][42] ([Intel XE#307])
   [42]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-dg2-466/igt@kms_content_protection@dp-mst-lic-type-1.html

  * igt@kms_content_protection@legacy@pipe-a-dp-2:
    - shard-dg2-set2:     NOTRUN -> [FAIL][43] ([Intel XE#1178]) +1 other test fail
   [43]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-dg2-432/igt@kms_content_protection@legacy@pipe-a-dp-2.html

  * igt@kms_content_protection@lic-type-0@pipe-a-dp-4:
    - shard-dg2-set2:     NOTRUN -> [FAIL][44] ([Intel XE#3304])
   [44]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-dg2-435/igt@kms_content_protection@lic-type-0@pipe-a-dp-4.html

  * igt@kms_content_protection@uevent@pipe-a-dp-2:
    - shard-bmg:          NOTRUN -> [FAIL][45] ([Intel XE#1188])
   [45]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-bmg-2/igt@kms_content_protection@uevent@pipe-a-dp-2.html

  * igt@kms_cursor_crc@cursor-onscreen-256x85:
    - shard-bmg:          NOTRUN -> [SKIP][46] ([Intel XE#2320]) +1 other test skip
   [46]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-bmg-3/igt@kms_cursor_crc@cursor-onscreen-256x85.html

  * igt@kms_cursor_crc@cursor-onscreen-32x32:
    - shard-adlp:         NOTRUN -> [SKIP][47] ([Intel XE#455])
   [47]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-adlp-4/igt@kms_cursor_crc@cursor-onscreen-32x32.html

  * igt@kms_cursor_crc@cursor-random-32x32:
    - shard-lnl:          NOTRUN -> [SKIP][48] ([Intel XE#1424])
   [48]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-8/igt@kms_cursor_crc@cursor-random-32x32.html

  * igt@kms_cursor_crc@cursor-sliding-512x512:
    - shard-lnl:          NOTRUN -> [SKIP][49] ([Intel XE#2321])
   [49]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-8/igt@kms_cursor_crc@cursor-sliding-512x512.html

  * igt@kms_cursor_legacy@cursora-vs-flipb-atomic:
    - shard-adlp:         NOTRUN -> [SKIP][50] ([Intel XE#309])
   [50]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-adlp-4/igt@kms_cursor_legacy@cursora-vs-flipb-atomic.html

  * igt@kms_cursor_legacy@cursorb-vs-flipa-legacy:
    - shard-bmg:          [PASS][51] -> [SKIP][52] ([Intel XE#2291]) +2 other tests skip
   [51]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-bmg-8/igt@kms_cursor_legacy@cursorb-vs-flipa-legacy.html
   [52]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-bmg-6/igt@kms_cursor_legacy@cursorb-vs-flipa-legacy.html

  * igt@kms_cursor_legacy@short-busy-flip-before-cursor-toggle:
    - shard-dg2-set2:     NOTRUN -> [SKIP][53] ([Intel XE#323])
   [53]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-dg2-466/igt@kms_cursor_legacy@short-busy-flip-before-cursor-toggle.html

  * igt@kms_dsc@dsc-with-bpc:
    - shard-dg2-set2:     NOTRUN -> [SKIP][54] ([Intel XE#455]) +5 other tests skip
   [54]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-dg2-463/igt@kms_dsc@dsc-with-bpc.html

  * igt@kms_flip@2x-flip-vs-rmfb-interruptible:
    - shard-lnl:          NOTRUN -> [SKIP][55] ([Intel XE#1421]) +4 other tests skip
   [55]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-8/igt@kms_flip@2x-flip-vs-rmfb-interruptible.html

  * igt@kms_flip@2x-nonexisting-fb:
    - shard-bmg:          [PASS][56] -> [SKIP][57] ([Intel XE#2316]) +5 other tests skip
   [56]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-bmg-8/igt@kms_flip@2x-nonexisting-fb.html
   [57]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-bmg-6/igt@kms_flip@2x-nonexisting-fb.html

  * igt@kms_flip@flip-vs-expired-vblank@b-hdmi-a1:
    - shard-adlp:         [PASS][58] -> [DMESG-WARN][59] ([Intel XE#4543]) +5 other tests dmesg-warn
   [58]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-adlp-9/igt@kms_flip@flip-vs-expired-vblank@b-hdmi-a1.html
   [59]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-adlp-2/igt@kms_flip@flip-vs-expired-vblank@b-hdmi-a1.html

  * igt@kms_flip@flip-vs-suspend:
    - shard-bmg:          [PASS][60] -> [INCOMPLETE][61] ([Intel XE#2049] / [Intel XE#2597]) +1 other test incomplete
   [60]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-bmg-8/igt@kms_flip@flip-vs-suspend.html
   [61]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-bmg-5/igt@kms_flip@flip-vs-suspend.html

  * igt@kms_flip@flip-vs-suspend@c-dp4:
    - shard-dg2-set2:     [PASS][62] -> [INCOMPLETE][63] ([Intel XE#2049] / [Intel XE#2597]) +1 other test incomplete
   [62]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-dg2-433/igt@kms_flip@flip-vs-suspend@c-dp4.html
   [63]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-dg2-466/igt@kms_flip@flip-vs-suspend@c-dp4.html

  * igt@kms_flip_scaled_crc@flip-32bpp-yftileccs-to-64bpp-yftile-downscaling:
    - shard-bmg:          NOTRUN -> [SKIP][64] ([Intel XE#2293] / [Intel XE#2380])
   [64]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-bmg-3/igt@kms_flip_scaled_crc@flip-32bpp-yftileccs-to-64bpp-yftile-downscaling.html

  * igt@kms_flip_scaled_crc@flip-32bpp-yftileccs-to-64bpp-yftile-downscaling@pipe-a-valid-mode:
    - shard-bmg:          NOTRUN -> [SKIP][65] ([Intel XE#2293])
   [65]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-bmg-3/igt@kms_flip_scaled_crc@flip-32bpp-yftileccs-to-64bpp-yftile-downscaling@pipe-a-valid-mode.html

  * igt@kms_flip_scaled_crc@flip-64bpp-yftile-to-16bpp-yftile-downscaling:
    - shard-lnl:          NOTRUN -> [SKIP][66] ([Intel XE#1401] / [Intel XE#1745])
   [66]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-8/igt@kms_flip_scaled_crc@flip-64bpp-yftile-to-16bpp-yftile-downscaling.html

  * igt@kms_flip_scaled_crc@flip-64bpp-yftile-to-16bpp-yftile-downscaling@pipe-a-default-mode:
    - shard-lnl:          NOTRUN -> [SKIP][67] ([Intel XE#1401])
   [67]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-8/igt@kms_flip_scaled_crc@flip-64bpp-yftile-to-16bpp-yftile-downscaling@pipe-a-default-mode.html

  * igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-indfb-pgflip-blt:
    - shard-bmg:          NOTRUN -> [SKIP][68] ([Intel XE#2311]) +4 other tests skip
   [68]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-bmg-3/igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-indfb-pgflip-blt.html

  * igt@kms_frontbuffer_tracking@drrs-rgb565-draw-render:
    - shard-adlp:         NOTRUN -> [SKIP][69] ([Intel XE#651]) +1 other test skip
   [69]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-adlp-4/igt@kms_frontbuffer_tracking@drrs-rgb565-draw-render.html

  * igt@kms_frontbuffer_tracking@drrs-suspend:
    - shard-dg2-set2:     NOTRUN -> [SKIP][70] ([Intel XE#651]) +12 other tests skip
   [70]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-dg2-466/igt@kms_frontbuffer_tracking@drrs-suspend.html

  * igt@kms_frontbuffer_tracking@fbc-2p-primscrn-spr-indfb-fullscreen:
    - shard-adlp:         NOTRUN -> [SKIP][71] ([Intel XE#656]) +7 other tests skip
   [71]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-adlp-4/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-spr-indfb-fullscreen.html

  * igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-move:
    - shard-bmg:          NOTRUN -> [SKIP][72] ([Intel XE#5390]) +1 other test skip
   [72]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-bmg-3/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-move.html

  * igt@kms_frontbuffer_tracking@fbcdrrs-1p-offscren-pri-shrfb-draw-render:
    - shard-lnl:          NOTRUN -> [SKIP][73] ([Intel XE#651]) +1 other test skip
   [73]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-8/igt@kms_frontbuffer_tracking@fbcdrrs-1p-offscren-pri-shrfb-draw-render.html

  * igt@kms_frontbuffer_tracking@fbcdrrs-2p-primscrn-pri-shrfb-draw-render:
    - shard-lnl:          NOTRUN -> [SKIP][74] ([Intel XE#656]) +6 other tests skip
   [74]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-8/igt@kms_frontbuffer_tracking@fbcdrrs-2p-primscrn-pri-shrfb-draw-render.html

  * igt@kms_frontbuffer_tracking@fbcdrrs-tiling-y:
    - shard-dg2-set2:     NOTRUN -> [SKIP][75] ([Intel XE#658])
   [75]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-dg2-436/igt@kms_frontbuffer_tracking@fbcdrrs-tiling-y.html

  * igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-shrfb-plflip-blt:
    - shard-adlp:         NOTRUN -> [SKIP][76] ([Intel XE#653]) +3 other tests skip
   [76]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-adlp-1/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-shrfb-plflip-blt.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-cur-indfb-onoff:
    - shard-dg2-set2:     NOTRUN -> [SKIP][77] ([Intel XE#653]) +13 other tests skip
   [77]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-dg2-466/igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-cur-indfb-onoff.html

  * igt@kms_frontbuffer_tracking@psr-modesetfrombusy:
    - shard-bmg:          NOTRUN -> [SKIP][78] ([Intel XE#2313]) +6 other tests skip
   [78]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-bmg-3/igt@kms_frontbuffer_tracking@psr-modesetfrombusy.html

  * igt@kms_hdr@brightness-with-hdr:
    - shard-lnl:          NOTRUN -> [SKIP][79] ([Intel XE#3374] / [Intel XE#3544])
   [79]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-8/igt@kms_hdr@brightness-with-hdr.html

  * igt@kms_joiner@invalid-modeset-force-ultra-joiner:
    - shard-dg2-set2:     NOTRUN -> [SKIP][80] ([Intel XE#2925])
   [80]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-dg2-436/igt@kms_joiner@invalid-modeset-force-ultra-joiner.html

  * igt@kms_plane_multiple@2x-tiling-4:
    - shard-bmg:          [PASS][81] -> [SKIP][82] ([Intel XE#4596])
   [81]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-bmg-8/igt@kms_plane_multiple@2x-tiling-4.html
   [82]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-bmg-6/igt@kms_plane_multiple@2x-tiling-4.html

  * igt@kms_plane_multiple@tiling-x@pipe-b-edp-1:
    - shard-lnl:          NOTRUN -> [FAIL][83] ([Intel XE#4658]) +3 other tests fail
   [83]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-8/igt@kms_plane_multiple@tiling-x@pipe-b-edp-1.html

  * igt@kms_pm_backlight@brightness-with-dpms:
    - shard-bmg:          NOTRUN -> [SKIP][84] ([Intel XE#2938])
   [84]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-bmg-3/igt@kms_pm_backlight@brightness-with-dpms.html

  * igt@kms_pm_rpm@modeset-non-lpsp-stress:
    - shard-adlp:         NOTRUN -> [SKIP][85] ([Intel XE#836])
   [85]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-adlp-4/igt@kms_pm_rpm@modeset-non-lpsp-stress.html

  * igt@kms_psr2_sf@fbc-pr-cursor-plane-move-continuous-sf:
    - shard-bmg:          NOTRUN -> [SKIP][86] ([Intel XE#1489] / [Intel XE#5899]) +1 other test skip
   [86]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-bmg-3/igt@kms_psr2_sf@fbc-pr-cursor-plane-move-continuous-sf.html

  * igt@kms_psr2_sf@psr2-overlay-plane-move-continuous-sf:
    - shard-adlp:         NOTRUN -> [SKIP][87] ([Intel XE#1489] / [Intel XE#5899])
   [87]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-adlp-4/igt@kms_psr2_sf@psr2-overlay-plane-move-continuous-sf.html
    - shard-dg2-set2:     NOTRUN -> [SKIP][88] ([Intel XE#1489] / [Intel XE#5899]) +3 other tests skip
   [88]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-dg2-436/igt@kms_psr2_sf@psr2-overlay-plane-move-continuous-sf.html

  * igt@kms_psr@fbc-pr-cursor-blt:
    - shard-bmg:          NOTRUN -> [SKIP][89] ([Intel XE#2234] / [Intel XE#2850] / [Intel XE#5899]) +2 other tests skip
   [89]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-bmg-3/igt@kms_psr@fbc-pr-cursor-blt.html

  * igt@kms_psr@pr-dpms:
    - shard-lnl:          NOTRUN -> [SKIP][90] ([Intel XE#1406] / [Intel XE#5899])
   [90]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-8/igt@kms_psr@pr-dpms.html

  * igt@kms_psr@pr-sprite-render:
    - shard-adlp:         NOTRUN -> [SKIP][91] ([Intel XE#2850] / [Intel XE#5899] / [Intel XE#929]) +1 other test skip
   [91]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-adlp-4/igt@kms_psr@pr-sprite-render.html

  * igt@kms_psr@psr2-primary-render:
    - shard-dg2-set2:     NOTRUN -> [SKIP][92] ([Intel XE#2850] / [Intel XE#5899] / [Intel XE#929]) +5 other tests skip
   [92]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-dg2-463/igt@kms_psr@psr2-primary-render.html

  * igt@kms_rotation_crc@multiplane-rotation-cropping-top:
    - shard-adlp:         NOTRUN -> [FAIL][93] ([Intel XE#1874])
   [93]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-adlp-4/igt@kms_rotation_crc@multiplane-rotation-cropping-top.html

  * igt@kms_rotation_crc@primary-x-tiled-reflect-x-0:
    - shard-lnl:          NOTRUN -> [FAIL][94] ([Intel XE#4689])
   [94]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-8/igt@kms_rotation_crc@primary-x-tiled-reflect-x-0.html

  * igt@kms_rotation_crc@primary-y-tiled-reflect-x-90:
    - shard-dg2-set2:     NOTRUN -> [SKIP][95] ([Intel XE#3414])
   [95]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-dg2-466/igt@kms_rotation_crc@primary-y-tiled-reflect-x-90.html

  * igt@kms_vblank@ts-continuation-suspend:
    - shard-adlp:         [PASS][96] -> [DMESG-WARN][97] ([Intel XE#2953] / [Intel XE#4173]) +4 other tests dmesg-warn
   [96]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-adlp-6/igt@kms_vblank@ts-continuation-suspend.html
   [97]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-adlp-2/igt@kms_vblank@ts-continuation-suspend.html

  * igt@kms_vrr@flip-dpms:
    - shard-bmg:          NOTRUN -> [SKIP][98] ([Intel XE#1499])
   [98]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-bmg-3/igt@kms_vrr@flip-dpms.html

  * igt@xe_ccs@block-multicopy-inplace:
    - shard-adlp:         NOTRUN -> [SKIP][99] ([Intel XE#455] / [Intel XE#488] / [Intel XE#5607])
   [99]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-adlp-4/igt@xe_ccs@block-multicopy-inplace.html

  * igt@xe_create@multigpu-create-massive-size:
    - shard-dg2-set2:     NOTRUN -> [SKIP][100] ([Intel XE#944]) +1 other test skip
   [100]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-dg2-466/igt@xe_create@multigpu-create-massive-size.html

  * igt@xe_eudebug@basic-connect:
    - shard-lnl:          NOTRUN -> [SKIP][101] ([Intel XE#4837]) +2 other tests skip
   [101]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-8/igt@xe_eudebug@basic-connect.html

  * igt@xe_eudebug@basic-vm-bind-discovery:
    - shard-dg2-set2:     NOTRUN -> [SKIP][102] ([Intel XE#4837]) +5 other tests skip
   [102]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-dg2-466/igt@xe_eudebug@basic-vm-bind-discovery.html

  * igt@xe_eudebug_online@interrupt-all-set-breakpoint-faultable:
    - shard-adlp:         NOTRUN -> [SKIP][103] ([Intel XE#4837] / [Intel XE#5565])
   [103]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-adlp-4/igt@xe_eudebug_online@interrupt-all-set-breakpoint-faultable.html

  * igt@xe_eudebug_online@single-step:
    - shard-bmg:          NOTRUN -> [SKIP][104] ([Intel XE#4837]) +1 other test skip
   [104]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-bmg-3/igt@xe_eudebug_online@single-step.html

  * igt@xe_evict@evict-beng-small-external-cm:
    - shard-adlp:         NOTRUN -> [SKIP][105] ([Intel XE#261] / [Intel XE#5564] / [Intel XE#688]) +1 other test skip
   [105]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-adlp-1/igt@xe_evict@evict-beng-small-external-cm.html

  * igt@xe_evict@evict-large-external:
    - shard-adlp:         NOTRUN -> [SKIP][106] ([Intel XE#261] / [Intel XE#5564])
   [106]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-adlp-1/igt@xe_evict@evict-large-external.html

  * igt@xe_exec_basic@multigpu-many-execqueues-many-vm-rebind:
    - shard-bmg:          NOTRUN -> [SKIP][107] ([Intel XE#2322]) +1 other test skip
   [107]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-bmg-3/igt@xe_exec_basic@multigpu-many-execqueues-many-vm-rebind.html

  * igt@xe_exec_basic@multigpu-once-basic-defer-mmap:
    - shard-adlp:         NOTRUN -> [SKIP][108] ([Intel XE#1392] / [Intel XE#5575]) +1 other test skip
   [108]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-adlp-4/igt@xe_exec_basic@multigpu-once-basic-defer-mmap.html

  * igt@xe_exec_basic@multigpu-once-bindexecqueue:
    - shard-lnl:          NOTRUN -> [SKIP][109] ([Intel XE#1392]) +2 other tests skip
   [109]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-8/igt@xe_exec_basic@multigpu-once-bindexecqueue.html

  * igt@xe_exec_basic@multigpu-once-null-rebind:
    - shard-dg2-set2:     [PASS][110] -> [SKIP][111] ([Intel XE#1392]) +5 other tests skip
   [110]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-dg2-435/igt@xe_exec_basic@multigpu-once-null-rebind.html
   [111]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-dg2-432/igt@xe_exec_basic@multigpu-once-null-rebind.html

  * igt@xe_exec_basic@twice-bindexecqueue:
    - shard-adlp:         [PASS][112] -> [DMESG-FAIL][113] ([Intel XE#3876]) +1 other test dmesg-fail
   [112]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-adlp-2/igt@xe_exec_basic@twice-bindexecqueue.html
   [113]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-adlp-6/igt@xe_exec_basic@twice-bindexecqueue.html

  * igt@xe_exec_compute_mode@many-userptr-rebind:
    - shard-lnl:          NOTRUN -> [FAIL][114] ([Intel XE#5817])
   [114]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-8/igt@xe_exec_compute_mode@many-userptr-rebind.html

  * igt@xe_exec_fault_mode@many-execqueues-userptr-invalidate-imm:
    - shard-dg2-set2:     NOTRUN -> [SKIP][115] ([Intel XE#288]) +10 other tests skip
   [115]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-dg2-436/igt@xe_exec_fault_mode@many-execqueues-userptr-invalidate-imm.html

  * igt@xe_exec_fault_mode@twice-userptr-rebind-prefetch:
    - shard-adlp:         NOTRUN -> [SKIP][116] ([Intel XE#288] / [Intel XE#5561]) +5 other tests skip
   [116]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-adlp-4/igt@xe_exec_fault_mode@twice-userptr-rebind-prefetch.html

  * igt@xe_exec_reset@parallel-gt-reset:
    - shard-adlp:         [PASS][117] -> [DMESG-WARN][118] ([Intel XE#3876])
   [117]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-adlp-2/igt@xe_exec_reset@parallel-gt-reset.html
   [118]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-adlp-6/igt@xe_exec_reset@parallel-gt-reset.html

  * igt@xe_exec_system_allocator@once-malloc-bo-unmap:
    - shard-adlp:         NOTRUN -> [SKIP][119] ([Intel XE#4915]) +36 other tests skip
   [119]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-adlp-4/igt@xe_exec_system_allocator@once-malloc-bo-unmap.html

  * igt@xe_exec_system_allocator@threads-many-execqueues-mmap-new-huge:
    - shard-bmg:          NOTRUN -> [SKIP][120] ([Intel XE#4943]) +5 other tests skip
   [120]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-bmg-3/igt@xe_exec_system_allocator@threads-many-execqueues-mmap-new-huge.html

  * igt@xe_exec_system_allocator@threads-many-stride-mmap-remap-eocheck:
    - shard-dg2-set2:     NOTRUN -> [SKIP][121] ([Intel XE#4915]) +111 other tests skip
   [121]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-dg2-436/igt@xe_exec_system_allocator@threads-many-stride-mmap-remap-eocheck.html

  * igt@xe_exec_system_allocator@threads-shared-vm-many-stride-mmap-free-huge-nomemset:
    - shard-lnl:          NOTRUN -> [SKIP][122] ([Intel XE#4943]) +5 other tests skip
   [122]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-8/igt@xe_exec_system_allocator@threads-shared-vm-many-stride-mmap-free-huge-nomemset.html

  * igt@xe_media_fill@media-fill:
    - shard-bmg:          NOTRUN -> [SKIP][123] ([Intel XE#2459] / [Intel XE#2596])
   [123]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-bmg-3/igt@xe_media_fill@media-fill.html

  * igt@xe_mmap@vram:
    - shard-adlp:         NOTRUN -> [SKIP][124] ([Intel XE#1008] / [Intel XE#5591])
   [124]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-adlp-4/igt@xe_mmap@vram.html

  * igt@xe_module_load@load:
    - shard-lnl:          ([PASS][125], [PASS][126], [PASS][127], [PASS][128], [PASS][129], [PASS][130], [PASS][131], [PASS][132], [PASS][133], [PASS][134], [PASS][135], [PASS][136], [PASS][137], [PASS][138], [PASS][139], [PASS][140], [PASS][141], [PASS][142], [PASS][143], [PASS][144], [PASS][145], [PASS][146], [PASS][147], [PASS][148], [PASS][149]) -> ([PASS][150], [PASS][151], [PASS][152], [PASS][153], [PASS][154], [PASS][155], [PASS][156], [PASS][157], [PASS][158], [PASS][159], [PASS][160], [PASS][161], [PASS][162], [PASS][163], [PASS][164], [PASS][165], [SKIP][166], [PASS][167], [PASS][168], [PASS][169], [PASS][170], [PASS][171], [PASS][172], [PASS][173], [PASS][174], [PASS][175]) ([Intel XE#378])
   [125]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-lnl-2/igt@xe_module_load@load.html
   [126]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-lnl-1/igt@xe_module_load@load.html
   [127]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-lnl-3/igt@xe_module_load@load.html
   [128]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-lnl-1/igt@xe_module_load@load.html
   [129]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-lnl-7/igt@xe_module_load@load.html
   [130]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-lnl-7/igt@xe_module_load@load.html
   [131]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-lnl-2/igt@xe_module_load@load.html
   [132]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-lnl-1/igt@xe_module_load@load.html
   [133]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-lnl-8/igt@xe_module_load@load.html
   [134]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-lnl-3/igt@xe_module_load@load.html
   [135]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-lnl-3/igt@xe_module_load@load.html
   [136]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-lnl-3/igt@xe_module_load@load.html
   [137]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-lnl-5/igt@xe_module_load@load.html
   [138]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-lnl-5/igt@xe_module_load@load.html
   [139]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-lnl-1/igt@xe_module_load@load.html
   [140]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-lnl-4/igt@xe_module_load@load.html
   [141]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-lnl-2/igt@xe_module_load@load.html
   [142]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-lnl-4/igt@xe_module_load@load.html
   [143]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-lnl-4/igt@xe_module_load@load.html
   [144]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-lnl-5/igt@xe_module_load@load.html
   [145]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-lnl-7/igt@xe_module_load@load.html
   [146]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-lnl-8/igt@xe_module_load@load.html
   [147]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-lnl-8/igt@xe_module_load@load.html
   [148]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-lnl-8/igt@xe_module_load@load.html
   [149]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-lnl-2/igt@xe_module_load@load.html
   [150]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-1/igt@xe_module_load@load.html
   [151]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-3/igt@xe_module_load@load.html
   [152]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-5/igt@xe_module_load@load.html
   [153]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-5/igt@xe_module_load@load.html
   [154]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-5/igt@xe_module_load@load.html
   [155]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-5/igt@xe_module_load@load.html
   [156]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-1/igt@xe_module_load@load.html
   [157]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-2/igt@xe_module_load@load.html
   [158]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-4/igt@xe_module_load@load.html
   [159]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-8/igt@xe_module_load@load.html
   [160]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-1/igt@xe_module_load@load.html
   [161]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-7/igt@xe_module_load@load.html
   [162]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-3/igt@xe_module_load@load.html
   [163]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-3/igt@xe_module_load@load.html
   [164]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-4/igt@xe_module_load@load.html
   [165]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-2/igt@xe_module_load@load.html
   [166]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-8/igt@xe_module_load@load.html
   [167]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-8/igt@xe_module_load@load.html
   [168]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-7/igt@xe_module_load@load.html
   [169]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-7/igt@xe_module_load@load.html
   [170]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-4/igt@xe_module_load@load.html
   [171]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-3/igt@xe_module_load@load.html
   [172]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-2/igt@xe_module_load@load.html
   [173]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-2/igt@xe_module_load@load.html
   [174]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-8/igt@xe_module_load@load.html
   [175]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-8/igt@xe_module_load@load.html

  * igt@xe_oa@mmio-triggered-reports-read:
    - shard-dg2-set2:     NOTRUN -> [SKIP][176] ([Intel XE#5103])
   [176]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-dg2-466/igt@xe_oa@mmio-triggered-reports-read.html

  * igt@xe_oa@polling:
    - shard-adlp:         NOTRUN -> [SKIP][177] ([Intel XE#3573])
   [177]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-adlp-4/igt@xe_oa@polling.html

  * igt@xe_oa@whitelisted-registers-userspace-config:
    - shard-dg2-set2:     NOTRUN -> [SKIP][178] ([Intel XE#3573]) +1 other test skip
   [178]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-dg2-436/igt@xe_oa@whitelisted-registers-userspace-config.html

  * igt@xe_peer2peer@read@read-gpua-vram01-gpub-system-p2p:
    - shard-dg2-set2:     NOTRUN -> [FAIL][179] ([Intel XE#1173]) +1 other test fail
   [179]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-dg2-463/igt@xe_peer2peer@read@read-gpua-vram01-gpub-system-p2p.html

  * igt@xe_pm@s2idle-d3cold-basic-exec:
    - shard-bmg:          NOTRUN -> [SKIP][180] ([Intel XE#2284])
   [180]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-bmg-3/igt@xe_pm@s2idle-d3cold-basic-exec.html

  * igt@xe_pm@s2idle-mocs:
    - shard-adlp:         [PASS][181] -> [DMESG-WARN][182] ([Intel XE#2953] / [Intel XE#4173] / [Intel XE#4504])
   [181]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-adlp-1/igt@xe_pm@s2idle-mocs.html
   [182]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-adlp-8/igt@xe_pm@s2idle-mocs.html

  * igt@xe_query@multigpu-query-invalid-uc-fw-version-mbz:
    - shard-lnl:          NOTRUN -> [SKIP][183] ([Intel XE#944])
   [183]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-8/igt@xe_query@multigpu-query-invalid-uc-fw-version-mbz.html

  * igt@xe_query@multigpu-query-pxp-status:
    - shard-adlp:         NOTRUN -> [SKIP][184] ([Intel XE#944])
   [184]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-adlp-4/igt@xe_query@multigpu-query-pxp-status.html

#### Possible fixes ####

  * igt@kms_bw@connected-linear-tiling-2-displays-2560x1440p:
    - shard-bmg:          [SKIP][185] ([Intel XE#2314] / [Intel XE#2894]) -> [PASS][186]
   [185]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-bmg-6/igt@kms_bw@connected-linear-tiling-2-displays-2560x1440p.html
   [186]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-bmg-2/igt@kms_bw@connected-linear-tiling-2-displays-2560x1440p.html

  * igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs:
    - shard-dg2-set2:     [INCOMPLETE][187] ([Intel XE#1727] / [Intel XE#2705] / [Intel XE#3113] / [Intel XE#4212] / [Intel XE#4345] / [Intel XE#4522]) -> [PASS][188]
   [187]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-dg2-433/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs.html
   [188]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-dg2-466/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs.html

  * igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs@pipe-c-dp-4:
    - shard-dg2-set2:     [INCOMPLETE][189] ([Intel XE#1727] / [Intel XE#2705] / [Intel XE#3113] / [Intel XE#4212] / [Intel XE#4522]) -> [PASS][190]
   [189]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-dg2-433/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs@pipe-c-dp-4.html
   [190]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-dg2-466/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs@pipe-c-dp-4.html

  * igt@kms_concurrent@multi-plane-atomic-lowres:
    - shard-dg2-set2:     [ABORT][191] ([Intel XE#5826]) -> [PASS][192]
   [191]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-dg2-436/igt@kms_concurrent@multi-plane-atomic-lowres.html
   [192]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-dg2-463/igt@kms_concurrent@multi-plane-atomic-lowres.html

  * igt@kms_concurrent@multi-plane-atomic-lowres@pipe-a-hdmi-a-6:
    - shard-dg2-set2:     [ABORT][193] ([Intel XE#5826] / [Intel XE#5898]) -> [PASS][194]
   [193]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-dg2-436/igt@kms_concurrent@multi-plane-atomic-lowres@pipe-a-hdmi-a-6.html
   [194]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-dg2-463/igt@kms_concurrent@multi-plane-atomic-lowres@pipe-a-hdmi-a-6.html

  * igt@kms_cursor_crc@cursor-random-256x85:
    - shard-adlp:         [ABORT][195] ([Intel XE#5826]) -> [PASS][196] +1 other test pass
   [195]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-adlp-4/igt@kms_cursor_crc@cursor-random-256x85.html
   [196]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-adlp-1/igt@kms_cursor_crc@cursor-random-256x85.html

  * igt@kms_cursor_legacy@2x-flip-vs-cursor-legacy:
    - shard-bmg:          [SKIP][197] ([Intel XE#2291]) -> [PASS][198] +2 other tests pass
   [197]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-bmg-6/igt@kms_cursor_legacy@2x-flip-vs-cursor-legacy.html
   [198]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-bmg-5/igt@kms_cursor_legacy@2x-flip-vs-cursor-legacy.html

  * igt@kms_display_modes@extended-mode-basic:
    - shard-bmg:          [SKIP][199] ([Intel XE#4302]) -> [PASS][200]
   [199]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-bmg-6/igt@kms_display_modes@extended-mode-basic.html
   [200]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-bmg-2/igt@kms_display_modes@extended-mode-basic.html

  * igt@kms_flip@2x-flip-vs-dpms-on-nop-interruptible:
    - shard-bmg:          [SKIP][201] ([Intel XE#2316]) -> [PASS][202] +5 other tests pass
   [201]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-bmg-6/igt@kms_flip@2x-flip-vs-dpms-on-nop-interruptible.html
   [202]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-bmg-2/igt@kms_flip@2x-flip-vs-dpms-on-nop-interruptible.html

  * igt@kms_flip@basic-plain-flip@b-hdmi-a1:
    - shard-adlp:         [DMESG-WARN][203] ([Intel XE#4543]) -> [PASS][204] +2 other tests pass
   [203]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-adlp-9/igt@kms_flip@basic-plain-flip@b-hdmi-a1.html
   [204]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-adlp-2/igt@kms_flip@basic-plain-flip@b-hdmi-a1.html

  * igt@kms_flip@flip-vs-expired-vblank@b-edp1:
    - shard-lnl:          [FAIL][205] ([Intel XE#301]) -> [PASS][206] +1 other test pass
   [205]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-lnl-3/igt@kms_flip@flip-vs-expired-vblank@b-edp1.html
   [206]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-5/igt@kms_flip@flip-vs-expired-vblank@b-edp1.html

  * igt@kms_flip@flip-vs-suspend-interruptible:
    - shard-dg2-set2:     [TIMEOUT][207] ([Intel XE#1504] / [Intel XE#5737]) -> [PASS][208]
   [207]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-dg2-464/igt@kms_flip@flip-vs-suspend-interruptible.html
   [208]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-dg2-436/igt@kms_flip@flip-vs-suspend-interruptible.html

  * igt@kms_flip@flip-vs-suspend-interruptible@a-hdmi-a6:
    - shard-dg2-set2:     [TIMEOUT][209] ([Intel XE#5737]) -> [PASS][210]
   [209]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-dg2-464/igt@kms_flip@flip-vs-suspend-interruptible@a-hdmi-a6.html
   [210]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-dg2-436/igt@kms_flip@flip-vs-suspend-interruptible@a-hdmi-a6.html

  * igt@kms_pipe_crc_basic@suspend-read-crc@pipe-a-hdmi-a-1:
    - shard-adlp:         [DMESG-WARN][211] ([Intel XE#2953] / [Intel XE#4173]) -> [PASS][212] +2 other tests pass
   [211]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-adlp-8/igt@kms_pipe_crc_basic@suspend-read-crc@pipe-a-hdmi-a-1.html
   [212]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-adlp-4/igt@kms_pipe_crc_basic@suspend-read-crc@pipe-a-hdmi-a-1.html

  * igt@xe_exec_basic@multigpu-no-exec-null-defer-bind:
    - shard-dg2-set2:     [SKIP][213] ([Intel XE#1392]) -> [PASS][214] +5 other tests pass
   [213]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-dg2-432/igt@xe_exec_basic@multigpu-no-exec-null-defer-bind.html
   [214]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-dg2-435/igt@xe_exec_basic@multigpu-no-exec-null-defer-bind.html

  * igt@xe_fault_injection@probe-fail-guc-xe_guc_mmio_send_recv:
    - shard-dg2-set2:     [DMESG-WARN][215] ([Intel XE#5893]) -> [PASS][216]
   [215]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-dg2-464/igt@xe_fault_injection@probe-fail-guc-xe_guc_mmio_send_recv.html
   [216]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-dg2-436/igt@xe_fault_injection@probe-fail-guc-xe_guc_mmio_send_recv.html

  
#### Warnings ####

  * igt@kms_chamelium_edid@hdmi-edid-change-during-hibernate:
    - shard-lnl:          [ABORT][217] -> [SKIP][218] ([Intel XE#373])
   [217]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-lnl-1/igt@kms_chamelium_edid@hdmi-edid-change-during-hibernate.html
   [218]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-lnl-8/igt@kms_chamelium_edid@hdmi-edid-change-during-hibernate.html

  * igt@kms_content_protection@uevent:
    - shard-bmg:          [SKIP][219] ([Intel XE#2341]) -> [FAIL][220] ([Intel XE#1188])
   [219]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-bmg-6/igt@kms_content_protection@uevent.html
   [220]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-bmg-2/igt@kms_content_protection@uevent.html

  * igt@kms_fbcon_fbt@fbc-suspend:
    - shard-adlp:         [ABORT][221] ([Intel XE#4847]) -> [ABORT][222] ([Intel XE#4847] / [Intel XE#5545])
   [221]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-adlp-2/igt@kms_fbcon_fbt@fbc-suspend.html
   [222]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-adlp-6/igt@kms_fbcon_fbt@fbc-suspend.html

  * igt@kms_frontbuffer_tracking@drrs-2p-primscrn-cur-indfb-draw-render:
    - shard-bmg:          [SKIP][223] ([Intel XE#2312]) -> [SKIP][224] ([Intel XE#2311]) +9 other tests skip
   [223]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-bmg-6/igt@kms_frontbuffer_tracking@drrs-2p-primscrn-cur-indfb-draw-render.html
   [224]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-bmg-2/igt@kms_frontbuffer_tracking@drrs-2p-primscrn-cur-indfb-draw-render.html

  * igt@kms_frontbuffer_tracking@fbc-2p-primscrn-cur-indfb-draw-mmap-wc:
    - shard-bmg:          [SKIP][225] ([Intel XE#5390]) -> [SKIP][226] ([Intel XE#2312])
   [225]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-bmg-8/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-cur-indfb-draw-mmap-wc.html
   [226]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-bmg-6/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-cur-indfb-draw-mmap-wc.html

  * igt@kms_frontbuffer_tracking@fbc-2p-primscrn-indfb-msflip-blt:
    - shard-bmg:          [SKIP][227] ([Intel XE#2312]) -> [SKIP][228] ([Intel XE#5390]) +2 other tests skip
   [227]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-bmg-6/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-indfb-msflip-blt.html
   [228]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-bmg-2/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-indfb-msflip-blt.html

  * igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-cur-indfb-draw-mmap-wc:
    - shard-bmg:          [SKIP][229] ([Intel XE#2311]) -> [SKIP][230] ([Intel XE#2312]) +9 other tests skip
   [229]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-bmg-7/igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-cur-indfb-draw-mmap-wc.html
   [230]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-bmg-6/igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-cur-indfb-draw-mmap-wc.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-pri-shrfb-draw-blt:
    - shard-bmg:          [SKIP][231] ([Intel XE#2312]) -> [SKIP][232] ([Intel XE#2313]) +9 other tests skip
   [231]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-bmg-6/igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-pri-shrfb-draw-blt.html
   [232]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-bmg-2/igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-pri-shrfb-draw-blt.html

  * igt@kms_frontbuffer_tracking@psr-2p-primscrn-indfb-plflip-blt:
    - shard-bmg:          [SKIP][233] ([Intel XE#2313]) -> [SKIP][234] ([Intel XE#2312]) +6 other tests skip
   [233]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-bmg-7/igt@kms_frontbuffer_tracking@psr-2p-primscrn-indfb-plflip-blt.html
   [234]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-bmg-6/igt@kms_frontbuffer_tracking@psr-2p-primscrn-indfb-plflip-blt.html

  * igt@kms_tiled_display@basic-test-pattern-with-chamelium:
    - shard-dg2-set2:     [SKIP][235] ([Intel XE#362]) -> [SKIP][236] ([Intel XE#1500])
   [235]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-dg2-463/igt@kms_tiled_display@basic-test-pattern-with-chamelium.html
   [236]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-dg2-464/igt@kms_tiled_display@basic-test-pattern-with-chamelium.html

  * igt@xe_peer2peer@write:
    - shard-dg2-set2:     [FAIL][237] ([Intel XE#1173]) -> [SKIP][238] ([Intel XE#1061])
   [237]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13/shard-dg2-435/igt@xe_peer2peer@write.html
   [238]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/shard-dg2-432/igt@xe_peer2peer@write.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [Intel XE#1008]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1008
  [Intel XE#1061]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1061
  [Intel XE#1124]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1124
  [Intel XE#1173]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1173
  [Intel XE#1178]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1178
  [Intel XE#1188]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1188
  [Intel XE#1392]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1392
  [Intel XE#1401]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1401
  [Intel XE#1406]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1406
  [Intel XE#1421]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1421
  [Intel XE#1424]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1424
  [Intel XE#1428]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1428
  [Intel XE#1489]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1489
  [Intel XE#1499]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1499
  [Intel XE#1500]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1500
  [Intel XE#1504]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1504
  [Intel XE#1727]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1727
  [Intel XE#1745]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1745
  [Intel XE#1874]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1874
  [Intel XE#2049]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2049
  [Intel XE#2234]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2234
  [Intel XE#2252]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2252
  [Intel XE#2284]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2284
  [Intel XE#2291]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2291
  [Intel XE#2293]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2293
  [Intel XE#2311]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2311
  [Intel XE#2312]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2312
  [Intel XE#2313]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2313
  [Intel XE#2314]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2314
  [Intel XE#2316]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2316
  [Intel XE#2320]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2320
  [Intel XE#2321]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2321
  [Intel XE#2322]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2322
  [Intel XE#2341]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2341
  [Intel XE#2380]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2380
  [Intel XE#2459]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2459
  [Intel XE#2596]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2596
  [Intel XE#2597]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2597
  [Intel XE#261]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/261
  [Intel XE#2652]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2652
  [Intel XE#2705]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2705
  [Intel XE#2850]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2850
  [Intel XE#288]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/288
  [Intel XE#2887]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2887
  [Intel XE#2894]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2894
  [Intel XE#2907]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2907
  [Intel XE#2925]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2925
  [Intel XE#2938]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2938
  [Intel XE#2953]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2953
  [Intel XE#301]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/301
  [Intel XE#306]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/306
  [Intel XE#307]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/307
  [Intel XE#309]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/309
  [Intel XE#3113]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3113
  [Intel XE#316]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/316
  [Intel XE#323]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/323
  [Intel XE#3279]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3279
  [Intel XE#3304]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3304
  [Intel XE#3374]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3374
  [Intel XE#3414]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3414
  [Intel XE#3544]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3544
  [Intel XE#3573]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3573
  [Intel XE#362]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/362
  [Intel XE#367]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/367
  [Intel XE#373]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/373
  [Intel XE#3768]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3768
  [Intel XE#378]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/378
  [Intel XE#3876]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3876
  [Intel XE#4173]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4173
  [Intel XE#4212]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4212
  [Intel XE#4302]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4302
  [Intel XE#4345]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4345
  [Intel XE#4504]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4504
  [Intel XE#4522]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4522
  [Intel XE#4543]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4543
  [Intel XE#455]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/455
  [Intel XE#4596]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4596
  [Intel XE#4658]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4658
  [Intel XE#4689]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4689
  [Intel XE#4837]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4837
  [Intel XE#4847]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4847
  [Intel XE#488]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/488
  [Intel XE#4915]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4915
  [Intel XE#4943]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4943
  [Intel XE#5103]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5103
  [Intel XE#5390]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5390
  [Intel XE#5545]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5545
  [Intel XE#5561]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5561
  [Intel XE#5564]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5564
  [Intel XE#5565]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5565
  [Intel XE#5575]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5575
  [Intel XE#5591]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5591
  [Intel XE#5607]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5607
  [Intel XE#5737]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5737
  [Intel XE#5817]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5817
  [Intel XE#5826]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5826
  [Intel XE#5893]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5893
  [Intel XE#5898]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5898
  [Intel XE#5899]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5899
  [Intel XE#607]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/607
  [Intel XE#623]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/623
  [Intel XE#651]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/651
  [Intel XE#653]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/653
  [Intel XE#656]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/656
  [Intel XE#658]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/658
  [Intel XE#688]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/688
  [Intel XE#787]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/787
  [Intel XE#836]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/836
  [Intel XE#929]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/929
  [Intel XE#944]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/944


Build changes
-------------

  * Linux: xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13 -> xe-pw-152884v1

  IGT_8493: 8493
  xe-3539-546fc742f08b8dbd3fa1486933c9b15085e11d13: 546fc742f08b8dbd3fa1486933c9b15085e11d13
  xe-pw-152884v1: 152884v1

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152884v1/index.html




Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-08-13 12:38 [PATCH] drm/xe: MADVISE SQUASH for CI-v7_with_comments_addressed_gpuvm_changes Himal Prasad Ghimiray
2025-08-13 13:26 ` ✗ CI.checkpatch: warning for " Patchwork
2025-08-13 13:27 ` ✓ CI.KUnit: success " Patchwork
2025-08-13 14:30 ` ✓ Xe.CI.BAT: " Patchwork
2025-08-13 15:37 ` ✗ Xe.CI.Full: failure " Patchwork
