Intel-XE Archive on lore.kernel.org
* [PATCH v5 0/8] Fix DRM scheduler layering violations in Xe
@ 2025-11-26 20:19 Matthew Brost
  2025-11-26 20:19 ` [PATCH v5 1/8] drm/sched: Add several job helpers to avoid drivers touching scheduler state Matthew Brost
                   ` (11 more replies)
  0 siblings, 12 replies; 14+ messages in thread
From: Matthew Brost @ 2025-11-26 20:19 UTC (permalink / raw)
  To: intel-xe; +Cc: dri-devel

At XDC, we agreed that drivers should avoid accessing DRM scheduler
internals, stop misusing DRM scheduler locks, and adopt a well-defined
pending job list iterator. This series proposes the necessary changes to
the DRM scheduler to bring Xe in line with that agreement and updates Xe
to use the new DRM scheduler API.

While here, clean up LR queue handling and simplify the GuC state
machine in Xe.

v2:
 - Fix checkpatch / naming issues
v3:
 - Only allow pending job list iterator to be called on stopped schedulers
 - Clean up LR queue handling / fix a few miscellaneous Xe scheduler issues
v4:
 - Address Niranjana's feedback
 - Add patch to avoid toggling scheduler state in the TDR
v5:
 - Rebase
 - Fixup LRC timeout check (Umesh)

Matt

Matthew Brost (8):
  drm/sched: Add several job helpers to avoid drivers touching scheduler
    state
  drm/sched: Add pending job list iterator
  drm/xe: Add dedicated message lock
  drm/xe: Stop abusing DRM scheduler internals
  drm/xe: Only toggle scheduling in TDR if GuC is running
  drm/xe: Do not deregister queues in TDR
  drm/xe: Remove special casing for LR queues in submission
  drm/xe: Avoid toggling schedule state to check LRC timestamp in TDR

 drivers/gpu/drm/scheduler/sched_main.c       |   4 +-
 drivers/gpu/drm/xe/xe_gpu_scheduler.c        |   9 +-
 drivers/gpu/drm/xe/xe_gpu_scheduler.h        |  37 +-
 drivers/gpu/drm/xe/xe_gpu_scheduler_types.h  |   2 +
 drivers/gpu/drm/xe/xe_guc_exec_queue_types.h |   2 -
 drivers/gpu/drm/xe/xe_guc_submit.c           | 362 +++----------------
 drivers/gpu/drm/xe/xe_guc_submit_types.h     |  11 -
 drivers/gpu/drm/xe/xe_hw_fence.c             |  16 -
 drivers/gpu/drm/xe/xe_hw_fence.h             |   2 -
 drivers/gpu/drm/xe/xe_lrc.c                  |  42 ++-
 drivers/gpu/drm/xe/xe_lrc.h                  |   3 +-
 drivers/gpu/drm/xe/xe_sched_job.c            |   1 +
 drivers/gpu/drm/xe/xe_sched_job_types.h      |   2 +
 drivers/gpu/drm/xe/xe_trace.h                |   5 -
 include/drm/gpu_scheduler.h                  |  82 +++++
 15 files changed, 188 insertions(+), 392 deletions(-)

-- 
2.34.1


^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH v5 1/8] drm/sched: Add several job helpers to avoid drivers touching scheduler state
  2025-11-26 20:19 [PATCH v5 0/8] Fix DRM scheduler layering violations in Xe Matthew Brost
@ 2025-11-26 20:19 ` Matthew Brost
  2025-11-26 20:19 ` [PATCH v5 2/8] drm/sched: Add pending job list iterator Matthew Brost
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 14+ messages in thread
From: Matthew Brost @ 2025-11-26 20:19 UTC (permalink / raw)
  To: intel-xe; +Cc: dri-devel

Add helpers to check whether a scheduler is stopped and whether a job is
signaled. Expected to be used driver-side in recovery and debug flows.

v4:
 - Reorder patch to first in series (Niranjana)
 - Also check parent fence for signaling (Niranjana)

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
---
 drivers/gpu/drm/scheduler/sched_main.c |  4 ++--
 include/drm/gpu_scheduler.h            | 32 ++++++++++++++++++++++++++
 2 files changed, 34 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 1d4f1b822e7b..cf40c18ab433 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -344,7 +344,7 @@ drm_sched_rq_select_entity_fifo(struct drm_gpu_scheduler *sched,
  */
 static void drm_sched_run_job_queue(struct drm_gpu_scheduler *sched)
 {
-	if (!READ_ONCE(sched->pause_submit))
+	if (!drm_sched_is_stopped(sched))
 		queue_work(sched->submit_wq, &sched->work_run_job);
 }
 
@@ -354,7 +354,7 @@ static void drm_sched_run_job_queue(struct drm_gpu_scheduler *sched)
  */
 static void drm_sched_run_free_queue(struct drm_gpu_scheduler *sched)
 {
-	if (!READ_ONCE(sched->pause_submit))
+	if (!drm_sched_is_stopped(sched))
 		queue_work(sched->submit_wq, &sched->work_free_job);
 }
 
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index fb88301b3c45..385bf34e76fe 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -698,4 +698,36 @@ void drm_sched_entity_modify_sched(struct drm_sched_entity *entity,
 				   struct drm_gpu_scheduler **sched_list,
 				   unsigned int num_sched_list);
 
+/* Inlines */
+
+/**
+ * drm_sched_is_stopped() - Check whether the DRM scheduler is stopped
+ * @sched: DRM scheduler
+ *
+ * Return: True if sched is stopped, False otherwise
+ */
+static inline bool drm_sched_is_stopped(struct drm_gpu_scheduler *sched)
+{
+	return READ_ONCE(sched->pause_submit);
+}
+
+/**
+ * drm_sched_job_is_signaled() - DRM scheduler job is signaled
+ * @job: DRM scheduler job
+ *
+ * Determine if DRM scheduler job is signaled. DRM scheduler should be stopped
+ * to obtain a stable snapshot of state. Both parent fence (hardware fence) and
+ * finished fence (software fence) are checked to determine signaling state.
+ *
+ * Return: True if job is signaled, False otherwise
+ */
+static inline bool drm_sched_job_is_signaled(struct drm_sched_job *job)
+{
+	struct drm_sched_fence *s_fence = job->s_fence;
+
+	WARN_ON(!drm_sched_is_stopped(job->sched));
+	return (s_fence->parent && dma_fence_is_signaled(s_fence->parent)) ||
+		dma_fence_is_signaled(&s_fence->finished);
+}
+
 #endif
-- 
2.34.1



* [PATCH v5 2/8] drm/sched: Add pending job list iterator
  2025-11-26 20:19 [PATCH v5 0/8] Fix DRM scheduler layering violations in Xe Matthew Brost
  2025-11-26 20:19 ` [PATCH v5 1/8] drm/sched: Add several job helpers to avoid drivers touching scheduler state Matthew Brost
@ 2025-11-26 20:19 ` Matthew Brost
  2025-11-26 20:19 ` [PATCH v5 3/8] drm/xe: Add dedicated message lock Matthew Brost
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 14+ messages in thread
From: Matthew Brost @ 2025-11-26 20:19 UTC (permalink / raw)
  To: intel-xe; +Cc: dri-devel

Stop open-coding pending job list walks in drivers. Add a pending job
list iterator which safely walks the DRM scheduler's pending list,
asserting that the scheduler is stopped.

v2:
 - Fix checkpatch (CI)
v3:
 - Drop locked version (Christian)
v4:
 - Reorder patch (Niranjana)

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
---
 include/drm/gpu_scheduler.h | 50 +++++++++++++++++++++++++++++++++++++
 1 file changed, 50 insertions(+)

diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 385bf34e76fe..9d228513d06c 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -730,4 +730,54 @@ static inline bool drm_sched_job_is_signaled(struct drm_sched_job *job)
 		dma_fence_is_signaled(&s_fence->finished);
 }
 
+/**
+ * struct drm_sched_pending_job_iter - DRM scheduler pending job iterator state
+ * @sched: DRM scheduler associated with pending job iterator
+ */
+struct drm_sched_pending_job_iter {
+	struct drm_gpu_scheduler *sched;
+};
+
+/* Drivers should never call this directly */
+static inline struct drm_sched_pending_job_iter
+__drm_sched_pending_job_iter_begin(struct drm_gpu_scheduler *sched)
+{
+	struct drm_sched_pending_job_iter iter = {
+		.sched = sched,
+	};
+
+	WARN_ON(!drm_sched_is_stopped(sched));
+	return iter;
+}
+
+/* Drivers should never call this directly */
+static inline void
+__drm_sched_pending_job_iter_end(const struct drm_sched_pending_job_iter iter)
+{
+	WARN_ON(!drm_sched_is_stopped(iter.sched));
+}
+
+DEFINE_CLASS(drm_sched_pending_job_iter, struct drm_sched_pending_job_iter,
+	     __drm_sched_pending_job_iter_end(_T),
+	     __drm_sched_pending_job_iter_begin(__sched),
+	     struct drm_gpu_scheduler *__sched);
+static inline void *
+class_drm_sched_pending_job_iter_lock_ptr(class_drm_sched_pending_job_iter_t *_T)
+{ return _T; }
+#define class_drm_sched_pending_job_iter_is_conditional false
+
+/**
+ * drm_sched_for_each_pending_job() - Iterator for each pending job in scheduler
+ * @__job: Current pending job being iterated over
+ * @__sched: DRM scheduler to iterate over pending jobs
+ * @__entity: DRM scheduler entity to filter jobs, NULL indicates no filter
+ *
+ * Iterator for each pending job in scheduler, filtering on an entity, and
+ * enforcing scheduler is fully stopped
+ */
+#define drm_sched_for_each_pending_job(__job, __sched, __entity)		\
+	scoped_guard(drm_sched_pending_job_iter, (__sched))			\
+		list_for_each_entry((__job), &(__sched)->pending_list, list)	\
+			for_each_if(!(__entity) || (__job)->entity == (__entity))
+
 #endif
-- 
2.34.1



* [PATCH v5 3/8] drm/xe: Add dedicated message lock
  2025-11-26 20:19 [PATCH v5 0/8] Fix DRM scheduler layering violations in Xe Matthew Brost
  2025-11-26 20:19 ` [PATCH v5 1/8] drm/sched: Add several job helpers to avoid drivers touching scheduler state Matthew Brost
  2025-11-26 20:19 ` [PATCH v5 2/8] drm/sched: Add pending job list iterator Matthew Brost
@ 2025-11-26 20:19 ` Matthew Brost
  2025-11-26 20:19 ` [PATCH v5 4/8] drm/xe: Stop abusing DRM scheduler internals Matthew Brost
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 14+ messages in thread
From: Matthew Brost @ 2025-11-26 20:19 UTC (permalink / raw)
  To: intel-xe; +Cc: dri-devel

Stop abusing the DRM scheduler job list lock for messages; add a
dedicated message lock.

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
---
 drivers/gpu/drm/xe/xe_gpu_scheduler.c       | 5 +++--
 drivers/gpu/drm/xe/xe_gpu_scheduler.h       | 4 ++--
 drivers/gpu/drm/xe/xe_gpu_scheduler_types.h | 2 ++
 3 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_gpu_scheduler.c b/drivers/gpu/drm/xe/xe_gpu_scheduler.c
index f91e06d03511..f4f23317191f 100644
--- a/drivers/gpu/drm/xe/xe_gpu_scheduler.c
+++ b/drivers/gpu/drm/xe/xe_gpu_scheduler.c
@@ -77,6 +77,7 @@ int xe_sched_init(struct xe_gpu_scheduler *sched,
 	};
 
 	sched->ops = xe_ops;
+	spin_lock_init(&sched->msg_lock);
 	INIT_LIST_HEAD(&sched->msgs);
 	INIT_WORK(&sched->work_process_msg, xe_sched_process_msg_work);
 
@@ -117,7 +118,7 @@ void xe_sched_add_msg(struct xe_gpu_scheduler *sched,
 void xe_sched_add_msg_locked(struct xe_gpu_scheduler *sched,
 			     struct xe_sched_msg *msg)
 {
-	lockdep_assert_held(&sched->base.job_list_lock);
+	lockdep_assert_held(&sched->msg_lock);
 
 	list_add_tail(&msg->link, &sched->msgs);
 	xe_sched_process_msg_queue(sched);
@@ -131,7 +132,7 @@ void xe_sched_add_msg_locked(struct xe_gpu_scheduler *sched,
 void xe_sched_add_msg_head(struct xe_gpu_scheduler *sched,
 			   struct xe_sched_msg *msg)
 {
-	lockdep_assert_held(&sched->base.job_list_lock);
+	lockdep_assert_held(&sched->msg_lock);
 
 	list_add(&msg->link, &sched->msgs);
 	xe_sched_process_msg_queue(sched);
diff --git a/drivers/gpu/drm/xe/xe_gpu_scheduler.h b/drivers/gpu/drm/xe/xe_gpu_scheduler.h
index c7a77a3a9681..dceb2cd0ee5b 100644
--- a/drivers/gpu/drm/xe/xe_gpu_scheduler.h
+++ b/drivers/gpu/drm/xe/xe_gpu_scheduler.h
@@ -33,12 +33,12 @@ void xe_sched_add_msg_head(struct xe_gpu_scheduler *sched,
 
 static inline void xe_sched_msg_lock(struct xe_gpu_scheduler *sched)
 {
-	spin_lock(&sched->base.job_list_lock);
+	spin_lock(&sched->msg_lock);
 }
 
 static inline void xe_sched_msg_unlock(struct xe_gpu_scheduler *sched)
 {
-	spin_unlock(&sched->base.job_list_lock);
+	spin_unlock(&sched->msg_lock);
 }
 
 static inline void xe_sched_stop(struct xe_gpu_scheduler *sched)
diff --git a/drivers/gpu/drm/xe/xe_gpu_scheduler_types.h b/drivers/gpu/drm/xe/xe_gpu_scheduler_types.h
index 6731b13da8bb..63d9bf92583c 100644
--- a/drivers/gpu/drm/xe/xe_gpu_scheduler_types.h
+++ b/drivers/gpu/drm/xe/xe_gpu_scheduler_types.h
@@ -47,6 +47,8 @@ struct xe_gpu_scheduler {
 	const struct xe_sched_backend_ops	*ops;
 	/** @msgs: list of messages to be processed in @work_process_msg */
 	struct list_head			msgs;
+	/** @msg_lock: Message lock */
+	spinlock_t				msg_lock;
 	/** @work_process_msg: processes messages */
 	struct work_struct		work_process_msg;
 };
-- 
2.34.1



* [PATCH v5 4/8] drm/xe: Stop abusing DRM scheduler internals
  2025-11-26 20:19 [PATCH v5 0/8] Fix DRM scheduler layering violations in Xe Matthew Brost
                   ` (2 preceding siblings ...)
  2025-11-26 20:19 ` [PATCH v5 3/8] drm/xe: Add dedicated message lock Matthew Brost
@ 2025-11-26 20:19 ` Matthew Brost
  2025-11-26 20:19 ` [PATCH v5 5/8] drm/xe: Only toggle scheduling in TDR if GuC is running Matthew Brost
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 14+ messages in thread
From: Matthew Brost @ 2025-11-26 20:19 UTC (permalink / raw)
  To: intel-xe; +Cc: dri-devel

Use new pending job list iterator and new helper functions in Xe to
avoid reaching into DRM scheduler internals.

Part of this change involves removing pending jobs debug information
from debugfs and devcoredump. As agreed, the pending job list should
only be accessed when the scheduler is stopped. However, it's not
straightforward to determine whether the scheduler is stopped from the
shared debugfs/devcoredump code path. Additionally, the pending job list
provides little useful information, as pending jobs can be inferred from
seqnos and ring head/tail positions. Therefore, this debug information
is being removed.

v4:
 - Add comment around DRM_GPU_SCHED_STAT_NO_HANG (Niranjana)

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
---
 drivers/gpu/drm/xe/xe_gpu_scheduler.c    |  4 +-
 drivers/gpu/drm/xe/xe_gpu_scheduler.h    | 33 ++--------
 drivers/gpu/drm/xe/xe_guc_submit.c       | 81 ++++++------------------
 drivers/gpu/drm/xe/xe_guc_submit_types.h | 11 ----
 drivers/gpu/drm/xe/xe_hw_fence.c         | 16 -----
 drivers/gpu/drm/xe/xe_hw_fence.h         |  2 -
 6 files changed, 27 insertions(+), 120 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_gpu_scheduler.c b/drivers/gpu/drm/xe/xe_gpu_scheduler.c
index f4f23317191f..9c8004d5dd91 100644
--- a/drivers/gpu/drm/xe/xe_gpu_scheduler.c
+++ b/drivers/gpu/drm/xe/xe_gpu_scheduler.c
@@ -7,7 +7,7 @@
 
 static void xe_sched_process_msg_queue(struct xe_gpu_scheduler *sched)
 {
-	if (!READ_ONCE(sched->base.pause_submit))
+	if (!drm_sched_is_stopped(&sched->base))
 		queue_work(sched->base.submit_wq, &sched->work_process_msg);
 }
 
@@ -43,7 +43,7 @@ static void xe_sched_process_msg_work(struct work_struct *w)
 		container_of(w, struct xe_gpu_scheduler, work_process_msg);
 	struct xe_sched_msg *msg;
 
-	if (READ_ONCE(sched->base.pause_submit))
+	if (drm_sched_is_stopped(&sched->base))
 		return;
 
 	msg = xe_sched_get_msg(sched);
diff --git a/drivers/gpu/drm/xe/xe_gpu_scheduler.h b/drivers/gpu/drm/xe/xe_gpu_scheduler.h
index dceb2cd0ee5b..664c2db56af3 100644
--- a/drivers/gpu/drm/xe/xe_gpu_scheduler.h
+++ b/drivers/gpu/drm/xe/xe_gpu_scheduler.h
@@ -56,12 +56,9 @@ static inline void xe_sched_resubmit_jobs(struct xe_gpu_scheduler *sched)
 	struct drm_sched_job *s_job;
 	bool restore_replay = false;
 
-	list_for_each_entry(s_job, &sched->base.pending_list, list) {
-		struct drm_sched_fence *s_fence = s_job->s_fence;
-		struct dma_fence *hw_fence = s_fence->parent;
-
+	drm_sched_for_each_pending_job(s_job, &sched->base, NULL) {
 		restore_replay |= to_xe_sched_job(s_job)->restore_replay;
-		if (restore_replay || (hw_fence && !dma_fence_is_signaled(hw_fence)))
+		if (restore_replay || !drm_sched_job_is_signaled(s_job))
 			sched->base.ops->run_job(s_job);
 	}
 }
@@ -72,14 +69,6 @@ xe_sched_invalidate_job(struct xe_sched_job *job, int threshold)
 	return drm_sched_invalidate_job(&job->drm, threshold);
 }
 
-static inline void xe_sched_add_pending_job(struct xe_gpu_scheduler *sched,
-					    struct xe_sched_job *job)
-{
-	spin_lock(&sched->base.job_list_lock);
-	list_add(&job->drm.list, &sched->base.pending_list);
-	spin_unlock(&sched->base.job_list_lock);
-}
-
 /**
  * xe_sched_first_pending_job() - Find first pending job which is unsignaled
  * @sched: Xe GPU scheduler
@@ -89,21 +78,13 @@ static inline void xe_sched_add_pending_job(struct xe_gpu_scheduler *sched,
 static inline
 struct xe_sched_job *xe_sched_first_pending_job(struct xe_gpu_scheduler *sched)
 {
-	struct xe_sched_job *job, *r_job = NULL;
-
-	spin_lock(&sched->base.job_list_lock);
-	list_for_each_entry(job, &sched->base.pending_list, drm.list) {
-		struct drm_sched_fence *s_fence = job->drm.s_fence;
-		struct dma_fence *hw_fence = s_fence->parent;
+	struct drm_sched_job *job;
 
-		if (hw_fence && !dma_fence_is_signaled(hw_fence)) {
-			r_job = job;
-			break;
-		}
-	}
-	spin_unlock(&sched->base.job_list_lock);
+	drm_sched_for_each_pending_job(job, &sched->base, NULL)
+		if (!drm_sched_job_is_signaled(job))
+			return to_xe_sched_job(job);
 
-	return r_job;
+	return NULL;
 }
 
 static inline int
diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
index 9a0842398e95..4166b4ec6a67 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit.c
+++ b/drivers/gpu/drm/xe/xe_guc_submit.c
@@ -1032,7 +1032,7 @@ static void xe_guc_exec_queue_lr_cleanup(struct work_struct *w)
 	struct xe_exec_queue *q = ge->q;
 	struct xe_guc *guc = exec_queue_to_guc(q);
 	struct xe_gpu_scheduler *sched = &ge->sched;
-	struct xe_sched_job *job;
+	struct drm_sched_job *job;
 	bool wedged = false;
 
 	xe_gt_assert(guc_to_gt(guc), xe_exec_queue_is_lr(q));
@@ -1091,16 +1091,10 @@ static void xe_guc_exec_queue_lr_cleanup(struct work_struct *w)
 	if (!exec_queue_killed(q) && !xe_lrc_ring_is_idle(q->lrc[0]))
 		xe_devcoredump(q, NULL, "LR job cleanup, guc_id=%d", q->guc->id);
 
-	xe_hw_fence_irq_stop(q->fence_irq);
+	drm_sched_for_each_pending_job(job, &sched->base, NULL)
+		xe_sched_job_set_error(to_xe_sched_job(job), -ECANCELED);
 
 	xe_sched_submission_start(sched);
-
-	spin_lock(&sched->base.job_list_lock);
-	list_for_each_entry(job, &sched->base.pending_list, drm.list)
-		xe_sched_job_set_error(job, -ECANCELED);
-	spin_unlock(&sched->base.job_list_lock);
-
-	xe_hw_fence_irq_start(q->fence_irq);
 }
 
 #define ADJUST_FIVE_PERCENT(__t)	mul_u64_u32_div(__t, 105, 100)
@@ -1219,7 +1213,7 @@ static enum drm_gpu_sched_stat
 guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
 {
 	struct xe_sched_job *job = to_xe_sched_job(drm_job);
-	struct xe_sched_job *tmp_job;
+	struct drm_sched_job *tmp_job;
 	struct xe_exec_queue *q = job->q;
 	struct xe_gpu_scheduler *sched = &q->guc->sched;
 	struct xe_guc *guc = exec_queue_to_guc(q);
@@ -1227,7 +1221,6 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
 	struct xe_device *xe = guc_to_xe(guc);
 	int err = -ETIME;
 	pid_t pid = -1;
-	int i = 0;
 	bool wedged = false, skip_timeout_check;
 
 	xe_gt_assert(guc_to_gt(guc), !xe_exec_queue_is_lr(q));
@@ -1392,28 +1385,19 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
 		__deregister_exec_queue(guc, q);
 	}
 
-	/* Stop fence signaling */
-	xe_hw_fence_irq_stop(q->fence_irq);
+	/* Mark all outstanding jobs as bad, thus completing them */
+	xe_sched_job_set_error(job, err);
+	drm_sched_for_each_pending_job(tmp_job, &sched->base, NULL)
+		xe_sched_job_set_error(to_xe_sched_job(tmp_job), -ECANCELED);
 
-	/*
-	 * Fence state now stable, stop / start scheduler which cleans up any
-	 * fences that are complete
-	 */
-	xe_sched_add_pending_job(sched, job);
 	xe_sched_submission_start(sched);
-
 	xe_guc_exec_queue_trigger_cleanup(q);
 
-	/* Mark all outstanding jobs as bad, thus completing them */
-	spin_lock(&sched->base.job_list_lock);
-	list_for_each_entry(tmp_job, &sched->base.pending_list, drm.list)
-		xe_sched_job_set_error(tmp_job, !i++ ? err : -ECANCELED);
-	spin_unlock(&sched->base.job_list_lock);
-
-	/* Start fence signaling */
-	xe_hw_fence_irq_start(q->fence_irq);
-
-	return DRM_GPU_SCHED_STAT_RESET;
+	/*
+	 * We want the job added back to the pending list so it gets freed; this
+	 * is what DRM_GPU_SCHED_STAT_NO_HANG does.
+	 */
+	return DRM_GPU_SCHED_STAT_NO_HANG;
 
 sched_enable:
 	set_exec_queue_pending_tdr_exit(q);
@@ -2249,9 +2233,12 @@ static void guc_exec_queue_unpause_prepare(struct xe_guc *guc,
 {
 	struct xe_gpu_scheduler *sched = &q->guc->sched;
 	struct xe_sched_job *job = NULL;
+	struct drm_sched_job *s_job;
 	bool restore_replay = false;
 
-	list_for_each_entry(job, &sched->base.pending_list, drm.list) {
+	drm_sched_for_each_pending_job(s_job, &sched->base, NULL) {
+		job = to_xe_sched_job(s_job);
+
 		restore_replay |= job->restore_replay;
 		if (restore_replay) {
 			xe_gt_dbg(guc_to_gt(guc), "Replay JOB - guc_id=%d, seqno=%d",
@@ -2357,7 +2344,7 @@ void xe_guc_submit_unpause(struct xe_guc *guc)
 		 * created after resfix done.
 		 */
 		if (q->guc->id != index ||
-		    !READ_ONCE(q->guc->sched.base.pause_submit))
+		    !drm_sched_is_stopped(&q->guc->sched.base))
 			continue;
 
 		guc_exec_queue_unpause(guc, q);
@@ -2779,30 +2766,6 @@ xe_guc_exec_queue_snapshot_capture(struct xe_exec_queue *q)
 	if (snapshot->parallel_execution)
 		guc_exec_queue_wq_snapshot_capture(q, snapshot);
 
-	spin_lock(&sched->base.job_list_lock);
-	snapshot->pending_list_size = list_count_nodes(&sched->base.pending_list);
-	snapshot->pending_list = kmalloc_array(snapshot->pending_list_size,
-					       sizeof(struct pending_list_snapshot),
-					       GFP_ATOMIC);
-
-	if (snapshot->pending_list) {
-		struct xe_sched_job *job_iter;
-
-		i = 0;
-		list_for_each_entry(job_iter, &sched->base.pending_list, drm.list) {
-			snapshot->pending_list[i].seqno =
-				xe_sched_job_seqno(job_iter);
-			snapshot->pending_list[i].fence =
-				dma_fence_is_signaled(job_iter->fence) ? 1 : 0;
-			snapshot->pending_list[i].finished =
-				dma_fence_is_signaled(&job_iter->drm.s_fence->finished)
-				? 1 : 0;
-			i++;
-		}
-	}
-
-	spin_unlock(&sched->base.job_list_lock);
-
 	return snapshot;
 }
 
@@ -2860,13 +2823,6 @@ xe_guc_exec_queue_snapshot_print(struct xe_guc_submit_exec_queue_snapshot *snaps
 
 	if (snapshot->parallel_execution)
 		guc_exec_queue_wq_snapshot_print(snapshot, p);
-
-	for (i = 0; snapshot->pending_list && i < snapshot->pending_list_size;
-	     i++)
-		drm_printf(p, "\tJob: seqno=%d, fence=%d, finished=%d\n",
-			   snapshot->pending_list[i].seqno,
-			   snapshot->pending_list[i].fence,
-			   snapshot->pending_list[i].finished);
 }
 
 /**
@@ -2889,7 +2845,6 @@ void xe_guc_exec_queue_snapshot_free(struct xe_guc_submit_exec_queue_snapshot *s
 			xe_lrc_snapshot_free(snapshot->lrc[i]);
 		kfree(snapshot->lrc);
 	}
-	kfree(snapshot->pending_list);
 	kfree(snapshot);
 }
 
diff --git a/drivers/gpu/drm/xe/xe_guc_submit_types.h b/drivers/gpu/drm/xe/xe_guc_submit_types.h
index dc7456c34583..0b08c79cf3b9 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit_types.h
+++ b/drivers/gpu/drm/xe/xe_guc_submit_types.h
@@ -61,12 +61,6 @@ struct guc_submit_parallel_scratch {
 	u32 wq[WQ_SIZE / sizeof(u32)];
 };
 
-struct pending_list_snapshot {
-	u32 seqno;
-	bool fence;
-	bool finished;
-};
-
 /**
  * struct xe_guc_submit_exec_queue_snapshot - Snapshot for devcoredump
  */
@@ -134,11 +128,6 @@ struct xe_guc_submit_exec_queue_snapshot {
 		/** @wq: Workqueue Items */
 		u32 wq[WQ_SIZE / sizeof(u32)];
 	} parallel;
-
-	/** @pending_list_size: Size of the pending list snapshot array */
-	int pending_list_size;
-	/** @pending_list: snapshot of the pending list info */
-	struct pending_list_snapshot *pending_list;
 };
 
 #endif
diff --git a/drivers/gpu/drm/xe/xe_hw_fence.c b/drivers/gpu/drm/xe/xe_hw_fence.c
index b2a0c46dfcd4..e65dfcdfdbc5 100644
--- a/drivers/gpu/drm/xe/xe_hw_fence.c
+++ b/drivers/gpu/drm/xe/xe_hw_fence.c
@@ -110,22 +110,6 @@ void xe_hw_fence_irq_run(struct xe_hw_fence_irq *irq)
 	irq_work_queue(&irq->work);
 }
 
-void xe_hw_fence_irq_stop(struct xe_hw_fence_irq *irq)
-{
-	spin_lock_irq(&irq->lock);
-	irq->enabled = false;
-	spin_unlock_irq(&irq->lock);
-}
-
-void xe_hw_fence_irq_start(struct xe_hw_fence_irq *irq)
-{
-	spin_lock_irq(&irq->lock);
-	irq->enabled = true;
-	spin_unlock_irq(&irq->lock);
-
-	irq_work_queue(&irq->work);
-}
-
 void xe_hw_fence_ctx_init(struct xe_hw_fence_ctx *ctx, struct xe_gt *gt,
 			  struct xe_hw_fence_irq *irq, const char *name)
 {
diff --git a/drivers/gpu/drm/xe/xe_hw_fence.h b/drivers/gpu/drm/xe/xe_hw_fence.h
index f13a1c4982c7..599492c13f80 100644
--- a/drivers/gpu/drm/xe/xe_hw_fence.h
+++ b/drivers/gpu/drm/xe/xe_hw_fence.h
@@ -17,8 +17,6 @@ void xe_hw_fence_module_exit(void);
 void xe_hw_fence_irq_init(struct xe_hw_fence_irq *irq);
 void xe_hw_fence_irq_finish(struct xe_hw_fence_irq *irq);
 void xe_hw_fence_irq_run(struct xe_hw_fence_irq *irq);
-void xe_hw_fence_irq_stop(struct xe_hw_fence_irq *irq);
-void xe_hw_fence_irq_start(struct xe_hw_fence_irq *irq);
 
 void xe_hw_fence_ctx_init(struct xe_hw_fence_ctx *ctx, struct xe_gt *gt,
 			  struct xe_hw_fence_irq *irq, const char *name);
-- 
2.34.1



* [PATCH v5 5/8] drm/xe: Only toggle scheduling in TDR if GuC is running
  2025-11-26 20:19 [PATCH v5 0/8] Fix DRM scheduler layering violations in Xe Matthew Brost
                   ` (3 preceding siblings ...)
  2025-11-26 20:19 ` [PATCH v5 4/8] drm/xe: Stop abusing DRM scheduler internals Matthew Brost
@ 2025-11-26 20:19 ` Matthew Brost
  2025-11-26 20:19 ` [PATCH v5 6/8] drm/xe: Do not deregister queues in TDR Matthew Brost
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 14+ messages in thread
From: Matthew Brost @ 2025-11-26 20:19 UTC (permalink / raw)
  To: intel-xe; +Cc: dri-devel

If the firmware is not running during TDR (e.g., when the driver is
unloading), there's no need to toggle scheduling in the GuC. In such
cases, skip this step.

v4:
 - Bail on wait if UC is not running (Niranjana)

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
---
 drivers/gpu/drm/xe/xe_guc_submit.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
index 4166b4ec6a67..693e3a892639 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit.c
+++ b/drivers/gpu/drm/xe/xe_guc_submit.c
@@ -1274,7 +1274,7 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
 		if (exec_queue_reset(q))
 			err = -EIO;
 
-		if (!exec_queue_destroyed(q)) {
+		if (!exec_queue_destroyed(q) && xe_uc_fw_is_running(&guc->fw)) {
 			/*
 			 * Wait for any pending G2H to flush out before
 			 * modifying state
@@ -1309,6 +1309,7 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
 		 */
 		smp_rmb();
 		ret = wait_event_timeout(guc->ct.wq,
+					 !xe_uc_fw_is_running(&guc->fw) ||
 					 !exec_queue_pending_disable(q) ||
 					 xe_guc_read_stopped(guc) ||
 					 vf_recovery(guc), HZ * 5);
-- 
2.34.1



* [PATCH v5 6/8] drm/xe: Do not deregister queues in TDR
  2025-11-26 20:19 [PATCH v5 0/8] Fix DRM scheduler layering violations in Xe Matthew Brost
                   ` (4 preceding siblings ...)
  2025-11-26 20:19 ` [PATCH v5 5/8] drm/xe: Only toggle scheduling in TDR if GuC is running Matthew Brost
@ 2025-11-26 20:19 ` Matthew Brost
  2025-11-26 20:19 ` [PATCH v5 7/8] drm/xe: Remove special casing for LR queues in submission Matthew Brost
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 14+ messages in thread
From: Matthew Brost @ 2025-11-26 20:19 UTC (permalink / raw)
  To: intel-xe; +Cc: dri-devel

Deregistering queues in the TDR introduces unnecessary complexity,
requiring reference-counting techniques to function correctly,
particularly to prevent use-after-free (UAF) issues while a
deregistration initiated from the TDR is in progress.

All that's needed in the TDR is to kick the queue off the hardware,
which is achieved by disabling scheduling. Queue deregistration should
be handled in a single, well-defined point in the cleanup path, tied to
the queue's reference count.

v4:
 - Explain why extra refs were needed prior to this patch (Niranjana)

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
---
 drivers/gpu/drm/xe/xe_guc_submit.c | 65 +++++-------------------------
 1 file changed, 9 insertions(+), 56 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
index 693e3a892639..8ae1afb90e62 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit.c
+++ b/drivers/gpu/drm/xe/xe_guc_submit.c
@@ -69,9 +69,8 @@ exec_queue_to_guc(struct xe_exec_queue *q)
 #define EXEC_QUEUE_STATE_WEDGED			(1 << 8)
 #define EXEC_QUEUE_STATE_BANNED			(1 << 9)
 #define EXEC_QUEUE_STATE_CHECK_TIMEOUT		(1 << 10)
-#define EXEC_QUEUE_STATE_EXTRA_REF		(1 << 11)
-#define EXEC_QUEUE_STATE_PENDING_RESUME		(1 << 12)
-#define EXEC_QUEUE_STATE_PENDING_TDR_EXIT	(1 << 13)
+#define EXEC_QUEUE_STATE_PENDING_RESUME		(1 << 11)
+#define EXEC_QUEUE_STATE_PENDING_TDR_EXIT	(1 << 12)
 
 static bool exec_queue_registered(struct xe_exec_queue *q)
 {
@@ -218,21 +217,6 @@ static void clear_exec_queue_check_timeout(struct xe_exec_queue *q)
 	atomic_and(~EXEC_QUEUE_STATE_CHECK_TIMEOUT, &q->guc->state);
 }
 
-static bool exec_queue_extra_ref(struct xe_exec_queue *q)
-{
-	return atomic_read(&q->guc->state) & EXEC_QUEUE_STATE_EXTRA_REF;
-}
-
-static void set_exec_queue_extra_ref(struct xe_exec_queue *q)
-{
-	atomic_or(EXEC_QUEUE_STATE_EXTRA_REF, &q->guc->state);
-}
-
-static void clear_exec_queue_extra_ref(struct xe_exec_queue *q)
-{
-	atomic_and(~EXEC_QUEUE_STATE_EXTRA_REF, &q->guc->state);
-}
-
 static bool exec_queue_pending_resume(struct xe_exec_queue *q)
 {
 	return atomic_read(&q->guc->state) & EXEC_QUEUE_STATE_PENDING_RESUME;
@@ -1190,25 +1174,6 @@ static void disable_scheduling(struct xe_exec_queue *q, bool immediate)
 		       G2H_LEN_DW_SCHED_CONTEXT_MODE_SET, 1);
 }
 
-static void __deregister_exec_queue(struct xe_guc *guc, struct xe_exec_queue *q)
-{
-	u32 action[] = {
-		XE_GUC_ACTION_DEREGISTER_CONTEXT,
-		q->guc->id,
-	};
-
-	xe_gt_assert(guc_to_gt(guc), !exec_queue_destroyed(q));
-	xe_gt_assert(guc_to_gt(guc), exec_queue_registered(q));
-	xe_gt_assert(guc_to_gt(guc), !exec_queue_pending_enable(q));
-	xe_gt_assert(guc_to_gt(guc), !exec_queue_pending_disable(q));
-
-	set_exec_queue_destroyed(q);
-	trace_xe_exec_queue_deregister(q);
-
-	xe_guc_ct_send(&guc->ct, action, ARRAY_SIZE(action),
-		       G2H_LEN_DW_DEREGISTER_CONTEXT, 1);
-}
-
 static enum drm_gpu_sched_stat
 guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
 {
@@ -1224,6 +1189,7 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
 	bool wedged = false, skip_timeout_check;
 
 	xe_gt_assert(guc_to_gt(guc), !xe_exec_queue_is_lr(q));
+	xe_gt_assert(guc_to_gt(guc), !exec_queue_destroyed(q));
 
 	/*
 	 * TDR has fired before free job worker. Common if exec queue
@@ -1240,8 +1206,7 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
 
 	/* Must check all state after stopping scheduler */
 	skip_timeout_check = exec_queue_reset(q) ||
-		exec_queue_killed_or_banned_or_wedged(q) ||
-		exec_queue_destroyed(q);
+		exec_queue_killed_or_banned_or_wedged(q);
 
 	/*
 	 * If devcoredump not captured and GuC capture for the job is not ready
@@ -1268,13 +1233,13 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
 		wedged = guc_submit_hint_wedged(exec_queue_to_guc(q));
 
 	/* Engine state now stable, disable scheduling to check timestamp */
-	if (!wedged && exec_queue_registered(q)) {
+	if (!wedged && (exec_queue_enabled(q) || exec_queue_pending_disable(q))) {
 		int ret;
 
 		if (exec_queue_reset(q))
 			err = -EIO;
 
-		if (!exec_queue_destroyed(q) && xe_uc_fw_is_running(&guc->fw)) {
+		if (xe_uc_fw_is_running(&guc->fw)) {
 			/*
 			 * Wait for any pending G2H to flush out before
 			 * modifying state
@@ -1324,8 +1289,6 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
 			xe_devcoredump(q, job,
 				       "Schedule disable failed to respond, guc_id=%d, ret=%d, guc_read=%d",
 				       q->guc->id, ret, xe_guc_read_stopped(guc));
-			set_exec_queue_extra_ref(q);
-			xe_exec_queue_get(q);	/* GT reset owns this */
 			set_exec_queue_banned(q);
 			xe_gt_reset_async(q->gt);
 			xe_sched_tdr_queue_imm(sched);
@@ -1378,13 +1341,7 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
 		}
 	}
 
-	/* Finish cleaning up exec queue via deregister */
 	set_exec_queue_banned(q);
-	if (!wedged && exec_queue_registered(q) && !exec_queue_destroyed(q)) {
-		set_exec_queue_extra_ref(q);
-		xe_exec_queue_get(q);
-		__deregister_exec_queue(guc, q);
-	}
 
 	/* Mark all outstanding jobs as bad, thus completing them */
 	xe_sched_job_set_error(job, err);
@@ -1928,7 +1885,7 @@ static void guc_exec_queue_stop(struct xe_guc *guc, struct xe_exec_queue *q)
 
 	/* Clean up lost G2H + reset engine state */
 	if (exec_queue_registered(q)) {
-		if (exec_queue_extra_ref(q) || xe_exec_queue_is_lr(q))
+		if (xe_exec_queue_is_lr(q))
 			xe_exec_queue_put(q);
 		else if (exec_queue_destroyed(q))
 			__guc_exec_queue_destroy(guc, q);
@@ -2062,11 +2019,7 @@ static void guc_exec_queue_revert_pending_state_change(struct xe_guc *guc,
 
 	if (exec_queue_destroyed(q) && exec_queue_registered(q)) {
 		clear_exec_queue_destroyed(q);
-		if (exec_queue_extra_ref(q))
-			xe_exec_queue_put(q);
-		else
-			q->guc->needs_cleanup = true;
-		clear_exec_queue_extra_ref(q);
+		q->guc->needs_cleanup = true;
 		xe_gt_dbg(guc_to_gt(guc), "Replay CLEANUP - guc_id=%d",
 			  q->guc->id);
 	}
@@ -2499,7 +2452,7 @@ static void handle_deregister_done(struct xe_guc *guc, struct xe_exec_queue *q)
 
 	clear_exec_queue_registered(q);
 
-	if (exec_queue_extra_ref(q) || xe_exec_queue_is_lr(q))
+	if (xe_exec_queue_is_lr(q))
 		xe_exec_queue_put(q);
 	else
 		__guc_exec_queue_destroy(guc, q);
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH v5 7/8] drm/xe: Remove special casing for LR queues in submission
  2025-11-26 20:19 [PATCH v5 0/8] Fix DRM scheduler layering violations in Xe Matthew Brost
                   ` (5 preceding siblings ...)
  2025-11-26 20:19 ` [PATCH v5 6/8] drm/xe: Do not deregister queues in TDR Matthew Brost
@ 2025-11-26 20:19 ` Matthew Brost
  2025-11-26 20:19 ` [PATCH v5 8/8] drm/xe: Avoid toggling schedule state to check LRC timestamp in TDR Matthew Brost
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 14+ messages in thread
From: Matthew Brost @ 2025-11-26 20:19 UTC (permalink / raw)
  To: intel-xe; +Cc: dri-devel

Now that LR jobs are tracked by the DRM scheduler, there's no longer a
need to special-case LR queues. This change removes all LR
queue-specific handling, including dedicated TDR logic, reference
counting schemes, and other related mechanisms.

v4:
 - Remove xe_exec_queue_lr_cleanup tracepoint (Niranjana)

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
---
 drivers/gpu/drm/xe/xe_guc_exec_queue_types.h |   2 -
 drivers/gpu/drm/xe/xe_guc_submit.c           | 132 ++-----------------
 drivers/gpu/drm/xe/xe_trace.h                |   5 -
 3 files changed, 11 insertions(+), 128 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_guc_exec_queue_types.h b/drivers/gpu/drm/xe/xe_guc_exec_queue_types.h
index a3b034e4b205..fd0915ed8eb1 100644
--- a/drivers/gpu/drm/xe/xe_guc_exec_queue_types.h
+++ b/drivers/gpu/drm/xe/xe_guc_exec_queue_types.h
@@ -33,8 +33,6 @@ struct xe_guc_exec_queue {
 	 */
 #define MAX_STATIC_MSG_TYPE	3
 	struct xe_sched_msg static_msgs[MAX_STATIC_MSG_TYPE];
-	/** @lr_tdr: long running TDR worker */
-	struct work_struct lr_tdr;
 	/** @destroy_async: do final destroy async from this worker */
 	struct work_struct destroy_async;
 	/** @resume_time: time of last resume */
diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
index 8ae1afb90e62..db3c57d758c6 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit.c
+++ b/drivers/gpu/drm/xe/xe_guc_submit.c
@@ -674,14 +674,6 @@ static void register_exec_queue(struct xe_exec_queue *q, int ctx_type)
 		parallel_write(xe, map, wq_desc.wq_status, WQ_STATUS_ACTIVE);
 	}
 
-	/*
-	 * We must keep a reference for LR engines if engine is registered with
-	 * the GuC as jobs signal immediately and can't destroy an engine if the
-	 * GuC has a reference to it.
-	 */
-	if (xe_exec_queue_is_lr(q))
-		xe_exec_queue_get(q);
-
 	set_exec_queue_registered(q);
 	trace_xe_exec_queue_register(q);
 	if (xe_exec_queue_is_parallel(q))
@@ -854,7 +846,7 @@ guc_exec_queue_run_job(struct drm_sched_job *drm_job)
 	struct xe_sched_job *job = to_xe_sched_job(drm_job);
 	struct xe_exec_queue *q = job->q;
 	struct xe_guc *guc = exec_queue_to_guc(q);
-	bool lr = xe_exec_queue_is_lr(q), killed_or_banned_or_wedged =
+	bool killed_or_banned_or_wedged =
 		exec_queue_killed_or_banned_or_wedged(q);
 
 	xe_gt_assert(guc_to_gt(guc), !(exec_queue_destroyed(q) || exec_queue_pending_disable(q)) ||
@@ -871,15 +863,6 @@ guc_exec_queue_run_job(struct drm_sched_job *drm_job)
 		job->restore_replay = false;
 	}
 
-	/*
-	 * We don't care about job-fence ordering in LR VMs because these fences
-	 * are never exported; they are used solely to keep jobs on the pending
-	 * list. Once a queue enters an error state, there's no need to track
-	 * them.
-	 */
-	if (killed_or_banned_or_wedged && lr)
-		xe_sched_job_set_error(job, -ECANCELED);
-
 	return job->fence;
 }
 
@@ -923,8 +906,7 @@ static void disable_scheduling_deregister(struct xe_guc *guc,
 		xe_gt_warn(q->gt, "Pending enable/disable failed to respond\n");
 		xe_sched_submission_start(sched);
 		xe_gt_reset_async(q->gt);
-		if (!xe_exec_queue_is_lr(q))
-			xe_sched_tdr_queue_imm(sched);
+		xe_sched_tdr_queue_imm(sched);
 		return;
 	}
 
@@ -950,10 +932,7 @@ static void xe_guc_exec_queue_trigger_cleanup(struct xe_exec_queue *q)
 	/** to wakeup xe_wait_user_fence ioctl if exec queue is reset */
 	wake_up_all(&xe->ufence_wq);
 
-	if (xe_exec_queue_is_lr(q))
-		queue_work(guc_to_gt(guc)->ordered_wq, &q->guc->lr_tdr);
-	else
-		xe_sched_tdr_queue_imm(&q->guc->sched);
+	xe_sched_tdr_queue_imm(&q->guc->sched);
 }
 
 /**
@@ -1009,78 +988,6 @@ static bool guc_submit_hint_wedged(struct xe_guc *guc)
 	return true;
 }
 
-static void xe_guc_exec_queue_lr_cleanup(struct work_struct *w)
-{
-	struct xe_guc_exec_queue *ge =
-		container_of(w, struct xe_guc_exec_queue, lr_tdr);
-	struct xe_exec_queue *q = ge->q;
-	struct xe_guc *guc = exec_queue_to_guc(q);
-	struct xe_gpu_scheduler *sched = &ge->sched;
-	struct drm_sched_job *job;
-	bool wedged = false;
-
-	xe_gt_assert(guc_to_gt(guc), xe_exec_queue_is_lr(q));
-
-	if (vf_recovery(guc))
-		return;
-
-	trace_xe_exec_queue_lr_cleanup(q);
-
-	if (!exec_queue_killed(q))
-		wedged = guc_submit_hint_wedged(exec_queue_to_guc(q));
-
-	/* Kill the run_job / process_msg entry points */
-	xe_sched_submission_stop(sched);
-
-	/*
-	 * Engine state now mostly stable, disable scheduling / deregister if
-	 * needed. This cleanup routine might be called multiple times, where
-	 * the actual async engine deregister drops the final engine ref.
-	 * Calling disable_scheduling_deregister will mark the engine as
-	 * destroyed and fire off the CT requests to disable scheduling /
-	 * deregister, which we only want to do once. We also don't want to mark
-	 * the engine as pending_disable again as this may race with the
-	 * xe_guc_deregister_done_handler() which treats it as an unexpected
-	 * state.
-	 */
-	if (!wedged && exec_queue_registered(q) && !exec_queue_destroyed(q)) {
-		struct xe_guc *guc = exec_queue_to_guc(q);
-		int ret;
-
-		set_exec_queue_banned(q);
-		disable_scheduling_deregister(guc, q);
-
-		/*
-		 * Must wait for scheduling to be disabled before signalling
-		 * any fences, if GT broken the GT reset code should signal us.
-		 */
-		ret = wait_event_timeout(guc->ct.wq,
-					 !exec_queue_pending_disable(q) ||
-					 xe_guc_read_stopped(guc) ||
-					 vf_recovery(guc), HZ * 5);
-		if (vf_recovery(guc))
-			return;
-
-		if (!ret) {
-			xe_gt_warn(q->gt, "Schedule disable failed to respond, guc_id=%d\n",
-				   q->guc->id);
-			xe_devcoredump(q, NULL, "Schedule disable failed to respond, guc_id=%d\n",
-				       q->guc->id);
-			xe_sched_submission_start(sched);
-			xe_gt_reset_async(q->gt);
-			return;
-		}
-	}
-
-	if (!exec_queue_killed(q) && !xe_lrc_ring_is_idle(q->lrc[0]))
-		xe_devcoredump(q, NULL, "LR job cleanup, guc_id=%d", q->guc->id);
-
-	drm_sched_for_each_pending_job(job, &sched->base, NULL)
-		xe_sched_job_set_error(to_xe_sched_job(job), -ECANCELED);
-
-	xe_sched_submission_start(sched);
-}
-
 #define ADJUST_FIVE_PERCENT(__t)	mul_u64_u32_div(__t, 105, 100)
 
 static bool check_timeout(struct xe_exec_queue *q, struct xe_sched_job *job)
@@ -1150,8 +1057,7 @@ static void enable_scheduling(struct xe_exec_queue *q)
 		xe_gt_warn(guc_to_gt(guc), "Schedule enable failed to respond");
 		set_exec_queue_banned(q);
 		xe_gt_reset_async(q->gt);
-		if (!xe_exec_queue_is_lr(q))
-			xe_sched_tdr_queue_imm(&q->guc->sched);
+		xe_sched_tdr_queue_imm(&q->guc->sched);
 	}
 }
 
@@ -1188,7 +1094,6 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
 	pid_t pid = -1;
 	bool wedged = false, skip_timeout_check;
 
-	xe_gt_assert(guc_to_gt(guc), !xe_exec_queue_is_lr(q));
 	xe_gt_assert(guc_to_gt(guc), !exec_queue_destroyed(q));
 
 	/*
@@ -1208,6 +1113,10 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
 	skip_timeout_check = exec_queue_reset(q) ||
 		exec_queue_killed_or_banned_or_wedged(q);
 
+	/* LR jobs can only get here if queue has been killed or hit an error */
+	if (xe_exec_queue_is_lr(q))
+		xe_gt_assert(guc_to_gt(guc), skip_timeout_check);
+
 	/*
 	 * If devcoredump not captured and GuC capture for the job is not ready
 	 * do manual capture first and decide later if we need to use it
@@ -1397,8 +1306,6 @@ static void __guc_exec_queue_destroy_async(struct work_struct *w)
 	guard(xe_pm_runtime)(guc_to_xe(guc));
 	trace_xe_exec_queue_destroy(q);
 
-	if (xe_exec_queue_is_lr(q))
-		cancel_work_sync(&ge->lr_tdr);
 	/* Confirm no work left behind accessing device structures */
 	cancel_delayed_work_sync(&ge->sched.base.work_tdr);
 
@@ -1629,9 +1536,6 @@ static int guc_exec_queue_init(struct xe_exec_queue *q)
 	if (err)
 		goto err_sched;
 
-	if (xe_exec_queue_is_lr(q))
-		INIT_WORK(&q->guc->lr_tdr, xe_guc_exec_queue_lr_cleanup);
-
 	mutex_lock(&guc->submission_state.lock);
 
 	err = alloc_guc_id(guc, q);
@@ -1885,9 +1789,7 @@ static void guc_exec_queue_stop(struct xe_guc *guc, struct xe_exec_queue *q)
 
 	/* Clean up lost G2H + reset engine state */
 	if (exec_queue_registered(q)) {
-		if (xe_exec_queue_is_lr(q))
-			xe_exec_queue_put(q);
-		else if (exec_queue_destroyed(q))
+		if (exec_queue_destroyed(q))
 			__guc_exec_queue_destroy(guc, q);
 	}
 	if (q->guc->suspend_pending) {
@@ -1917,9 +1819,6 @@ static void guc_exec_queue_stop(struct xe_guc *guc, struct xe_exec_queue *q)
 				trace_xe_sched_job_ban(job);
 				ban = true;
 			}
-		} else if (xe_exec_queue_is_lr(q) &&
-			   !xe_lrc_ring_is_idle(q->lrc[0])) {
-			ban = true;
 		}
 
 		if (ban) {
@@ -2002,8 +1901,6 @@ static void guc_exec_queue_revert_pending_state_change(struct xe_guc *guc,
 	if (pending_enable && !pending_resume &&
 	    !exec_queue_pending_tdr_exit(q)) {
 		clear_exec_queue_registered(q);
-		if (xe_exec_queue_is_lr(q))
-			xe_exec_queue_put(q);
 		xe_gt_dbg(guc_to_gt(guc), "Replay REGISTER - guc_id=%d",
 			  q->guc->id);
 	}
@@ -2072,10 +1969,7 @@ static void guc_exec_queue_pause(struct xe_guc *guc, struct xe_exec_queue *q)
 
 	/* Stop scheduling + flush any DRM scheduler operations */
 	xe_sched_submission_stop(sched);
-	if (xe_exec_queue_is_lr(q))
-		cancel_work_sync(&q->guc->lr_tdr);
-	else
-		cancel_delayed_work_sync(&sched->base.work_tdr);
+	cancel_delayed_work_sync(&sched->base.work_tdr);
 
 	guc_exec_queue_revert_pending_state_change(guc, q);
 
@@ -2451,11 +2345,7 @@ static void handle_deregister_done(struct xe_guc *guc, struct xe_exec_queue *q)
 	trace_xe_exec_queue_deregister_done(q);
 
 	clear_exec_queue_registered(q);
-
-	if (xe_exec_queue_is_lr(q))
-		xe_exec_queue_put(q);
-	else
-		__guc_exec_queue_destroy(guc, q);
+	__guc_exec_queue_destroy(guc, q);
 }
 
 int xe_guc_deregister_done_handler(struct xe_guc *guc, u32 *msg, u32 len)
diff --git a/drivers/gpu/drm/xe/xe_trace.h b/drivers/gpu/drm/xe/xe_trace.h
index 79a97b086cb2..cf2ef70fb7ce 100644
--- a/drivers/gpu/drm/xe/xe_trace.h
+++ b/drivers/gpu/drm/xe/xe_trace.h
@@ -182,11 +182,6 @@ DEFINE_EVENT(xe_exec_queue, xe_exec_queue_resubmit,
 	     TP_ARGS(q)
 );
 
-DEFINE_EVENT(xe_exec_queue, xe_exec_queue_lr_cleanup,
-	     TP_PROTO(struct xe_exec_queue *q),
-	     TP_ARGS(q)
-);
-
 DECLARE_EVENT_CLASS(xe_sched_job,
 		    TP_PROTO(struct xe_sched_job *job),
 		    TP_ARGS(job),
-- 
2.34.1



* [PATCH v5 8/8] drm/xe: Avoid toggling schedule state to check LRC timestamp in TDR
  2025-11-26 20:19 [PATCH v5 0/8] Fix DRM scheduler layering violations in Xe Matthew Brost
                   ` (6 preceding siblings ...)
  2025-11-26 20:19 ` [PATCH v5 7/8] drm/xe: Remove special casing for LR queues in submission Matthew Brost
@ 2025-11-26 20:19 ` Matthew Brost
  2025-11-26 21:21   ` Matthew Brost
  2025-11-26 20:25 ` ✗ CI.checkpatch: warning for Fix DRM scheduler layering violations in Xe (rev5) Patchwork
                   ` (3 subsequent siblings)
  11 siblings, 1 reply; 14+ messages in thread
From: Matthew Brost @ 2025-11-26 20:19 UTC (permalink / raw)
  To: intel-xe; +Cc: dri-devel

For non-VFs, we now have proper infrastructure to accurately check the
LRC timestamp without toggling the scheduling state. For VFs, it is still
possible to get an inaccurate view if the context is on hardware. We
guard against free-running contexts on VFs by banning jobs whose
timestamps are not moving. In addition, VFs have a timeslice quantum
that naturally triggers context switches when more than one VF is
running, thus updating the LRC timestamp.

For multi-queue, it is desirable to avoid toggling scheduling state in
the TDR because that state is shared among many queues. Furthermore,
this change simplifies the GuC state machine. The trade-off for VF cases
seems worthwhile.

v5:
 - Add xe_lrc_timestamp helper (Umesh)

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/xe/xe_guc_submit.c      | 97 ++++++-------------------
 drivers/gpu/drm/xe/xe_lrc.c             | 42 +++++++----
 drivers/gpu/drm/xe/xe_lrc.h             |  3 +-
 drivers/gpu/drm/xe/xe_sched_job.c       |  1 +
 drivers/gpu/drm/xe/xe_sched_job_types.h |  2 +
 5 files changed, 56 insertions(+), 89 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
index db3c57d758c6..b8022826795b 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit.c
+++ b/drivers/gpu/drm/xe/xe_guc_submit.c
@@ -68,9 +68,7 @@ exec_queue_to_guc(struct xe_exec_queue *q)
 #define EXEC_QUEUE_STATE_KILLED			(1 << 7)
 #define EXEC_QUEUE_STATE_WEDGED			(1 << 8)
 #define EXEC_QUEUE_STATE_BANNED			(1 << 9)
-#define EXEC_QUEUE_STATE_CHECK_TIMEOUT		(1 << 10)
-#define EXEC_QUEUE_STATE_PENDING_RESUME		(1 << 11)
-#define EXEC_QUEUE_STATE_PENDING_TDR_EXIT	(1 << 12)
+#define EXEC_QUEUE_STATE_PENDING_RESUME		(1 << 10)
 
 static bool exec_queue_registered(struct xe_exec_queue *q)
 {
@@ -202,21 +200,6 @@ static void set_exec_queue_wedged(struct xe_exec_queue *q)
 	atomic_or(EXEC_QUEUE_STATE_WEDGED, &q->guc->state);
 }
 
-static bool exec_queue_check_timeout(struct xe_exec_queue *q)
-{
-	return atomic_read(&q->guc->state) & EXEC_QUEUE_STATE_CHECK_TIMEOUT;
-}
-
-static void set_exec_queue_check_timeout(struct xe_exec_queue *q)
-{
-	atomic_or(EXEC_QUEUE_STATE_CHECK_TIMEOUT, &q->guc->state);
-}
-
-static void clear_exec_queue_check_timeout(struct xe_exec_queue *q)
-{
-	atomic_and(~EXEC_QUEUE_STATE_CHECK_TIMEOUT, &q->guc->state);
-}
-
 static bool exec_queue_pending_resume(struct xe_exec_queue *q)
 {
 	return atomic_read(&q->guc->state) & EXEC_QUEUE_STATE_PENDING_RESUME;
@@ -232,21 +215,6 @@ static void clear_exec_queue_pending_resume(struct xe_exec_queue *q)
 	atomic_and(~EXEC_QUEUE_STATE_PENDING_RESUME, &q->guc->state);
 }
 
-static bool exec_queue_pending_tdr_exit(struct xe_exec_queue *q)
-{
-	return atomic_read(&q->guc->state) & EXEC_QUEUE_STATE_PENDING_TDR_EXIT;
-}
-
-static void set_exec_queue_pending_tdr_exit(struct xe_exec_queue *q)
-{
-	atomic_or(EXEC_QUEUE_STATE_PENDING_TDR_EXIT, &q->guc->state);
-}
-
-static void clear_exec_queue_pending_tdr_exit(struct xe_exec_queue *q)
-{
-	atomic_and(~EXEC_QUEUE_STATE_PENDING_TDR_EXIT, &q->guc->state);
-}
-
 static bool exec_queue_killed_or_banned_or_wedged(struct xe_exec_queue *q)
 {
 	return (atomic_read(&q->guc->state) &
@@ -1006,7 +974,16 @@ static bool check_timeout(struct xe_exec_queue *q, struct xe_sched_job *job)
 		return xe_sched_invalidate_job(job, 2);
 	}
 
-	ctx_timestamp = lower_32_bits(xe_lrc_ctx_timestamp(q->lrc[0]));
+	ctx_timestamp = lower_32_bits(xe_lrc_timestamp(q->lrc[0]));
+	if (ctx_timestamp == job->sample_timestamp) {
+		xe_gt_warn(gt, "Check job timeout: seqno=%u, lrc_seqno=%u, guc_id=%d, timestamp stuck",
+			   xe_sched_job_seqno(job), xe_sched_job_lrc_seqno(job),
+			   q->guc->id);
+
+		return xe_sched_invalidate_job(job, 2);
+	}
+
+	job->sample_timestamp = ctx_timestamp;
 	ctx_job_timestamp = xe_lrc_ctx_job_timestamp(q->lrc[0]);
 
 	/*
@@ -1132,16 +1109,17 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
 	}
 
 	/*
-	 * XXX: Sampling timeout doesn't work in wedged mode as we have to
-	 * modify scheduling state to read timestamp. We could read the
-	 * timestamp from a register to accumulate current running time but this
-	 * doesn't work for SRIOV. For now assuming timeouts in wedged mode are
-	 * genuine timeouts.
+	 * Check if job is actually timed out, if so restart job execution and TDR
 	 */
+	if (!skip_timeout_check && !check_timeout(q, job))
+		goto rearm;
+
 	if (!exec_queue_killed(q))
 		wedged = guc_submit_hint_wedged(exec_queue_to_guc(q));
 
-	/* Engine state now stable, disable scheduling to check timestamp */
+	set_exec_queue_banned(q);
+
+	/* Kick job / queue off hardware */
 	if (!wedged && (exec_queue_enabled(q) || exec_queue_pending_disable(q))) {
 		int ret;
 
@@ -1163,13 +1141,6 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
 			if (!ret || xe_guc_read_stopped(guc))
 				goto trigger_reset;
 
-			/*
-			 * Flag communicates to G2H handler that schedule
-			 * disable originated from a timeout check. The G2H then
-			 * avoid triggering cleanup or deregistering the exec
-			 * queue.
-			 */
-			set_exec_queue_check_timeout(q);
 			disable_scheduling(q, skip_timeout_check);
 		}
 
@@ -1198,22 +1169,12 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
 			xe_devcoredump(q, job,
 				       "Schedule disable failed to respond, guc_id=%d, ret=%d, guc_read=%d",
 				       q->guc->id, ret, xe_guc_read_stopped(guc));
-			set_exec_queue_banned(q);
 			xe_gt_reset_async(q->gt);
 			xe_sched_tdr_queue_imm(sched);
 			goto rearm;
 		}
 	}
 
-	/*
-	 * Check if job is actually timed out, if so restart job execution and TDR
-	 */
-	if (!wedged && !skip_timeout_check && !check_timeout(q, job) &&
-	    !exec_queue_reset(q) && exec_queue_registered(q)) {
-		clear_exec_queue_check_timeout(q);
-		goto sched_enable;
-	}
-
 	if (q->vm && q->vm->xef) {
 		process_name = q->vm->xef->process_name;
 		pid = q->vm->xef->pid;
@@ -1244,14 +1205,11 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
 	if (!wedged && (q->flags & EXEC_QUEUE_FLAG_KERNEL ||
 			(q->flags & EXEC_QUEUE_FLAG_VM && !exec_queue_killed(q)))) {
 		if (!xe_sched_invalidate_job(job, 2)) {
-			clear_exec_queue_check_timeout(q);
 			xe_gt_reset_async(q->gt);
 			goto rearm;
 		}
 	}
 
-	set_exec_queue_banned(q);
-
 	/* Mark all outstanding jobs as bad, thus completing them */
 	xe_sched_job_set_error(job, err);
 	drm_sched_for_each_pending_job(tmp_job, &sched->base, NULL)
@@ -1266,9 +1224,6 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
 	 */
 	return DRM_GPU_SCHED_STAT_NO_HANG;
 
-sched_enable:
-	set_exec_queue_pending_tdr_exit(q);
-	enable_scheduling(q);
 rearm:
 	/*
 	 * XXX: Ideally want to adjust timeout based on current execution time
@@ -1898,8 +1853,7 @@ static void guc_exec_queue_revert_pending_state_change(struct xe_guc *guc,
 			  q->guc->id);
 	}
 
-	if (pending_enable && !pending_resume &&
-	    !exec_queue_pending_tdr_exit(q)) {
+	if (pending_enable && !pending_resume) {
 		clear_exec_queue_registered(q);
 		xe_gt_dbg(guc_to_gt(guc), "Replay REGISTER - guc_id=%d",
 			  q->guc->id);
@@ -1908,7 +1862,6 @@ static void guc_exec_queue_revert_pending_state_change(struct xe_guc *guc,
 	if (pending_enable) {
 		clear_exec_queue_enabled(q);
 		clear_exec_queue_pending_resume(q);
-		clear_exec_queue_pending_tdr_exit(q);
 		clear_exec_queue_pending_enable(q);
 		xe_gt_dbg(guc_to_gt(guc), "Replay ENABLE - guc_id=%d",
 			  q->guc->id);
@@ -1934,7 +1887,6 @@ static void guc_exec_queue_revert_pending_state_change(struct xe_guc *guc,
 		if (!pending_enable)
 			set_exec_queue_enabled(q);
 		clear_exec_queue_pending_disable(q);
-		clear_exec_queue_check_timeout(q);
 		xe_gt_dbg(guc_to_gt(guc), "Replay DISABLE - guc_id=%d",
 			  q->guc->id);
 	}
@@ -2274,13 +2226,10 @@ static void handle_sched_done(struct xe_guc *guc, struct xe_exec_queue *q,
 
 		q->guc->resume_time = ktime_get();
 		clear_exec_queue_pending_resume(q);
-		clear_exec_queue_pending_tdr_exit(q);
 		clear_exec_queue_pending_enable(q);
 		smp_wmb();
 		wake_up_all(&guc->ct.wq);
 	} else {
-		bool check_timeout = exec_queue_check_timeout(q);
-
 		xe_gt_assert(guc_to_gt(guc), runnable_state == 0);
 		xe_gt_assert(guc_to_gt(guc), exec_queue_pending_disable(q));
 
@@ -2288,11 +2237,11 @@ static void handle_sched_done(struct xe_guc *guc, struct xe_exec_queue *q,
 			suspend_fence_signal(q);
 			clear_exec_queue_pending_disable(q);
 		} else {
-			if (exec_queue_banned(q) || check_timeout) {
+			if (exec_queue_banned(q)) {
 				smp_wmb();
 				wake_up_all(&guc->ct.wq);
 			}
-			if (!check_timeout && exec_queue_destroyed(q)) {
+			if (exec_queue_destroyed(q)) {
 				/*
 				 * Make sure to clear the pending_disable only
 				 * after sampling the destroyed state. We want
@@ -2402,7 +2351,7 @@ int xe_guc_exec_queue_reset_handler(struct xe_guc *guc, u32 *msg, u32 len)
 	 * guc_exec_queue_timedout_job.
 	 */
 	set_exec_queue_reset(q);
-	if (!exec_queue_banned(q) && !exec_queue_check_timeout(q))
+	if (!exec_queue_banned(q))
 		xe_guc_exec_queue_trigger_cleanup(q);
 
 	return 0;
@@ -2483,7 +2432,7 @@ int xe_guc_exec_queue_memory_cat_error_handler(struct xe_guc *guc, u32 *msg,
 
 	/* Treat the same as engine reset */
 	set_exec_queue_reset(q);
-	if (!exec_queue_banned(q) && !exec_queue_check_timeout(q))
+	if (!exec_queue_banned(q))
 		xe_guc_exec_queue_trigger_cleanup(q);
 
 	return 0;
diff --git a/drivers/gpu/drm/xe/xe_lrc.c b/drivers/gpu/drm/xe/xe_lrc.c
index b5083c99dd50..c9bfd11a8d5e 100644
--- a/drivers/gpu/drm/xe/xe_lrc.c
+++ b/drivers/gpu/drm/xe/xe_lrc.c
@@ -839,7 +839,7 @@ u32 xe_lrc_ctx_timestamp_udw_ggtt_addr(struct xe_lrc *lrc)
  *
  * Returns: ctx timestamp value
  */
-u64 xe_lrc_ctx_timestamp(struct xe_lrc *lrc)
+static u64 xe_lrc_ctx_timestamp(struct xe_lrc *lrc)
 {
 	struct xe_device *xe = lrc_to_xe(lrc);
 	struct iosys_map map;
@@ -2353,35 +2353,29 @@ static int get_ctx_timestamp(struct xe_lrc *lrc, u32 engine_id, u64 *reg_ctx_ts)
 }
 
 /**
- * xe_lrc_update_timestamp() - Update ctx timestamp
+ * xe_lrc_timestamp() - Current ctx timestamp
  * @lrc: Pointer to the lrc.
- * @old_ts: Old timestamp value
  *
- * Populate @old_ts current saved ctx timestamp, read new ctx timestamp and
- * update saved value. With support for active contexts, the calculation may be
- * slightly racy, so follow a read-again logic to ensure that the context is
- * still active before returning the right timestamp.
+ * Return latest ctx timestamp.
  *
  * Returns: New ctx timestamp value
  */
-u64 xe_lrc_update_timestamp(struct xe_lrc *lrc, u64 *old_ts)
+u64 xe_lrc_timestamp(struct xe_lrc *lrc)
 {
-	u64 lrc_ts, reg_ts;
+	u64 lrc_ts, reg_ts, new_ts;
 	u32 engine_id;
 
-	*old_ts = lrc->ctx_timestamp;
-
 	lrc_ts = xe_lrc_ctx_timestamp(lrc);
 	/* CTX_TIMESTAMP mmio read is invalid on VF, so return the LRC value */
 	if (IS_SRIOV_VF(lrc_to_xe(lrc))) {
-		lrc->ctx_timestamp = lrc_ts;
+		new_ts = lrc_ts;
 		goto done;
 	}
 
 	if (lrc_ts == CONTEXT_ACTIVE) {
 		engine_id = xe_lrc_engine_id(lrc);
 		if (!get_ctx_timestamp(lrc, engine_id, &reg_ts))
-			lrc->ctx_timestamp = reg_ts;
+			new_ts = reg_ts;
 
 		/* read lrc again to ensure context is still active */
 		lrc_ts = xe_lrc_ctx_timestamp(lrc);
@@ -2392,9 +2386,29 @@ u64 xe_lrc_update_timestamp(struct xe_lrc *lrc, u64 *old_ts)
 	 * be a separate if condition.
 	 */
 	if (lrc_ts != CONTEXT_ACTIVE)
-		lrc->ctx_timestamp = lrc_ts;
+		new_ts = lrc_ts;
 
 done:
+	return new_ts;
+}
+
+/**
+ * xe_lrc_update_timestamp() - Update ctx timestamp
+ * @lrc: Pointer to the lrc.
+ * @old_ts: Old timestamp value
+ *
+ * Populate @old_ts current saved ctx timestamp, read new ctx timestamp and
+ * update saved value. With support for active contexts, the calculation may be
+ * slightly racy, so follow a read-again logic to ensure that the context is
+ * still active before returning the right timestamp.
+ *
+ * Returns: New ctx timestamp value
+ */
+u64 xe_lrc_update_timestamp(struct xe_lrc *lrc, u64 *old_ts)
+{
+	*old_ts = lrc->ctx_timestamp;
+	lrc->ctx_timestamp = xe_lrc_timestamp(lrc);
+
 	trace_xe_lrc_update_timestamp(lrc, *old_ts);
 
 	return lrc->ctx_timestamp;
diff --git a/drivers/gpu/drm/xe/xe_lrc.h b/drivers/gpu/drm/xe/xe_lrc.h
index 2fb628da5c43..86b7174f424a 100644
--- a/drivers/gpu/drm/xe/xe_lrc.h
+++ b/drivers/gpu/drm/xe/xe_lrc.h
@@ -140,7 +140,6 @@ void xe_lrc_snapshot_free(struct xe_lrc_snapshot *snapshot);
 
 u32 xe_lrc_ctx_timestamp_ggtt_addr(struct xe_lrc *lrc);
 u32 xe_lrc_ctx_timestamp_udw_ggtt_addr(struct xe_lrc *lrc);
-u64 xe_lrc_ctx_timestamp(struct xe_lrc *lrc);
 u32 xe_lrc_ctx_job_timestamp_ggtt_addr(struct xe_lrc *lrc);
 u32 xe_lrc_ctx_job_timestamp(struct xe_lrc *lrc);
 int xe_lrc_setup_wa_bb_with_scratch(struct xe_lrc *lrc, struct xe_hw_engine *hwe,
@@ -160,4 +159,6 @@ int xe_lrc_setup_wa_bb_with_scratch(struct xe_lrc *lrc, struct xe_hw_engine *hwe
  */
 u64 xe_lrc_update_timestamp(struct xe_lrc *lrc, u64 *old_ts);
 
+u64 xe_lrc_timestamp(struct xe_lrc *lrc);
+
 #endif
diff --git a/drivers/gpu/drm/xe/xe_sched_job.c b/drivers/gpu/drm/xe/xe_sched_job.c
index cb674a322113..39aec7f6d86d 100644
--- a/drivers/gpu/drm/xe/xe_sched_job.c
+++ b/drivers/gpu/drm/xe/xe_sched_job.c
@@ -110,6 +110,7 @@ struct xe_sched_job *xe_sched_job_create(struct xe_exec_queue *q,
 		return ERR_PTR(-ENOMEM);
 
 	job->q = q;
+	job->sample_timestamp = U64_MAX;
 	kref_init(&job->refcount);
 	xe_exec_queue_get(job->q);
 
diff --git a/drivers/gpu/drm/xe/xe_sched_job_types.h b/drivers/gpu/drm/xe/xe_sched_job_types.h
index 7c4c54fe920a..13c2970e81a8 100644
--- a/drivers/gpu/drm/xe/xe_sched_job_types.h
+++ b/drivers/gpu/drm/xe/xe_sched_job_types.h
@@ -59,6 +59,8 @@ struct xe_sched_job {
 	u32 lrc_seqno;
 	/** @migrate_flush_flags: Additional flush flags for migration jobs */
 	u32 migrate_flush_flags;
+	/** @sample_timestamp: Sampling of job timestamp in TDR */
+	u64 sample_timestamp;
 	/** @ring_ops_flush_tlb: The ring ops need to flush TLB before payload. */
 	bool ring_ops_flush_tlb;
 	/** @ggtt: mapped in ggtt. */
-- 
2.34.1



* ✗ CI.checkpatch: warning for Fix DRM scheduler layering violations in Xe (rev5)
  2025-11-26 20:19 [PATCH v5 0/8] Fix DRM scheduler layering violations in Xe Matthew Brost
                   ` (7 preceding siblings ...)
  2025-11-26 20:19 ` [PATCH v5 8/8] drm/xe: Avoid toggling schedule state to check LRC timestamp in TDR Matthew Brost
@ 2025-11-26 20:25 ` Patchwork
  2025-11-26 20:26 ` ✓ CI.KUnit: success " Patchwork
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 14+ messages in thread
From: Patchwork @ 2025-11-26 20:25 UTC (permalink / raw)
  To: Matthew Brost; +Cc: intel-xe

== Series Details ==

Series: Fix DRM scheduler layering violations in Xe (rev5)
URL   : https://patchwork.freedesktop.org/series/155314/
State : warning

== Summary ==

+ KERNEL=/kernel
+ git clone https://gitlab.freedesktop.org/drm/maintainer-tools mt
Cloning into 'mt'...
warning: redirecting to https://gitlab.freedesktop.org/drm/maintainer-tools.git/
+ git -C mt rev-list -n1 origin/master
2de9a3901bc28757c7906b454717b64e2a214021
+ cd /kernel
+ git config --global --add safe.directory /kernel
+ git log -n1
commit 1585a14a4f0a6a9ed398e33c210af67d9736df77
Author: Matthew Brost <matthew.brost@intel.com>
Date:   Wed Nov 26 12:19:16 2025 -0800

    drm/xe: Avoid toggling schedule state to check LRC timestamp in TDR
    
    We now have proper infrastructure to accurately check the LRC timestamp
    without toggling the scheduling state for non-VFs. For VFs, it is still
    possible to get an inaccurate view if the context is on hardware. We
    guard against free-running contexts on VFs by banning jobs whose
    timestamps are not moving. In addition, VFs have a timeslice quantum
    that naturally triggers context switches when more than one VF is
    running, thus updating the LRC timestamp.
    
    For multi-queue, it is desirable to avoid scheduling toggling in the TDR
    because this scheduling state is shared among many queues. Furthermore,
    this change simplifies the GuC state machine. The trade-off for VF cases
    seems worthwhile.
    
    v5:
     - Add xe_lrc_timestamp helper (Umesh)
    
    Signed-off-by: Matthew Brost <matthew.brost@intel.com>
+ /mt/dim checkpatch e41f42483c6f784fce3deb5eca0931bcbffb01df drm-intel
53c4e1d9dae5 drm/sched: Add several job helpers to avoid drivers touching scheduler state
b4dced1d5a9b drm/sched: Add pending job list iterator
-:73: ERROR:COMPLEX_MACRO: Macros with complex values should be enclosed in parentheses
#73: FILE: include/drm/gpu_scheduler.h:778:
+#define drm_sched_for_each_pending_job(__job, __sched, __entity)		\
+	scoped_guard(drm_sched_pending_job_iter, (__sched))			\
+		list_for_each_entry((__job), &(__sched)->pending_list, list)	\
+			for_each_if(!(__entity) || (__job)->entity == (__entity))

BUT SEE:

   do {} while (0) advice is over-stated in a few situations:

   The more obvious case is macros, like MODULE_PARM_DESC, invoked at
   file-scope, where C disallows code (it must be in functions).  See
   $exceptions if you have one to add by name.

   More troublesome is declarative macros used at top of new scope,
   like DECLARE_PER_CPU.  These might just compile with a do-while-0
   wrapper, but would be incorrect.  Most of these are handled by
   detecting struct,union,etc declaration primitives in $exceptions.

   Theres also macros called inside an if (block), which "return" an
   expression.  These cannot do-while, and need a ({}) wrapper.

   Enjoy this qualification while we work to improve our heuristics.

-:73: CHECK:MACRO_ARG_REUSE: Macro argument reuse '__job' - possible side-effects?
#73: FILE: include/drm/gpu_scheduler.h:778:
+#define drm_sched_for_each_pending_job(__job, __sched, __entity)		\
+	scoped_guard(drm_sched_pending_job_iter, (__sched))			\
+		list_for_each_entry((__job), &(__sched)->pending_list, list)	\
+			for_each_if(!(__entity) || (__job)->entity == (__entity))

-:73: CHECK:MACRO_ARG_REUSE: Macro argument reuse '__sched' - possible side-effects?
#73: FILE: include/drm/gpu_scheduler.h:778:
+#define drm_sched_for_each_pending_job(__job, __sched, __entity)		\
+	scoped_guard(drm_sched_pending_job_iter, (__sched))			\
+		list_for_each_entry((__job), &(__sched)->pending_list, list)	\
+			for_each_if(!(__entity) || (__job)->entity == (__entity))

-:73: CHECK:MACRO_ARG_REUSE: Macro argument reuse '__entity' - possible side-effects?
#73: FILE: include/drm/gpu_scheduler.h:778:
+#define drm_sched_for_each_pending_job(__job, __sched, __entity)		\
+	scoped_guard(drm_sched_pending_job_iter, (__sched))			\
+		list_for_each_entry((__job), &(__sched)->pending_list, list)	\
+			for_each_if(!(__entity) || (__job)->entity == (__entity))

total: 1 errors, 0 warnings, 3 checks, 54 lines checked
7e1105b2e2f7 drm/xe: Add dedicated message lock
f1c4fea65269 drm/xe: Stop abusing DRM scheduler internals
ffa72fa13c6a drm/xe: Only toggle scheduling in TDR if GuC is running
288805e801da drm/xe: Do not deregister queues in TDR
db33301b9292 drm/xe: Remove special casing for LR queues in submission
1585a14a4f0a drm/xe: Avoid toggling schedule state to check LRC timestamp in TDR



^ permalink raw reply	[flat|nested] 14+ messages in thread

* ✓ CI.KUnit: success for Fix DRM scheduler layering violations in Xe (rev5)
  2025-11-26 20:19 [PATCH v5 0/8] Fix DRM scheduler layering violations in Xe Matthew Brost
                   ` (8 preceding siblings ...)
  2025-11-26 20:25 ` ✗ CI.checkpatch: warning for Fix DRM scheduler layering violations in Xe (rev5) Patchwork
@ 2025-11-26 20:26 ` Patchwork
  2025-11-26 21:40 ` ✓ Xe.CI.BAT: " Patchwork
  2025-11-26 22:18 ` ✗ Xe.CI.Full: failure " Patchwork
  11 siblings, 0 replies; 14+ messages in thread
From: Patchwork @ 2025-11-26 20:26 UTC (permalink / raw)
  To: Matthew Brost; +Cc: intel-xe

== Series Details ==

Series: Fix DRM scheduler layering violations in Xe (rev5)
URL   : https://patchwork.freedesktop.org/series/155314/
State : success

== Summary ==

+ trap cleanup EXIT
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/xe/.kunitconfig
[20:25:13] Configuring KUnit Kernel ...
Generating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[20:25:17] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[20:25:48] Starting KUnit Kernel (1/1)...
[20:25:48] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[20:25:48] ================== guc_buf (11 subtests) ===================
[20:25:48] [PASSED] test_smallest
[20:25:48] [PASSED] test_largest
[20:25:48] [PASSED] test_granular
[20:25:48] [PASSED] test_unique
[20:25:48] [PASSED] test_overlap
[20:25:48] [PASSED] test_reusable
[20:25:48] [PASSED] test_too_big
[20:25:48] [PASSED] test_flush
[20:25:48] [PASSED] test_lookup
[20:25:48] [PASSED] test_data
[20:25:48] [PASSED] test_class
[20:25:48] ===================== [PASSED] guc_buf =====================
[20:25:48] =================== guc_dbm (7 subtests) ===================
[20:25:48] [PASSED] test_empty
[20:25:48] [PASSED] test_default
[20:25:48] ======================== test_size  ========================
[20:25:48] [PASSED] 4
[20:25:48] [PASSED] 8
[20:25:48] [PASSED] 32
[20:25:48] [PASSED] 256
[20:25:48] ==================== [PASSED] test_size ====================
[20:25:48] ======================= test_reuse  ========================
[20:25:48] [PASSED] 4
[20:25:48] [PASSED] 8
[20:25:48] [PASSED] 32
[20:25:48] [PASSED] 256
[20:25:48] =================== [PASSED] test_reuse ====================
[20:25:48] =================== test_range_overlap  ====================
[20:25:48] [PASSED] 4
[20:25:48] [PASSED] 8
[20:25:48] [PASSED] 32
[20:25:48] [PASSED] 256
[20:25:48] =============== [PASSED] test_range_overlap ================
[20:25:48] =================== test_range_compact  ====================
[20:25:48] [PASSED] 4
[20:25:48] [PASSED] 8
[20:25:48] [PASSED] 32
[20:25:48] [PASSED] 256
[20:25:48] =============== [PASSED] test_range_compact ================
[20:25:48] ==================== test_range_spare  =====================
[20:25:48] [PASSED] 4
[20:25:48] [PASSED] 8
[20:25:48] [PASSED] 32
[20:25:48] [PASSED] 256
[20:25:48] ================ [PASSED] test_range_spare =================
[20:25:48] ===================== [PASSED] guc_dbm =====================
[20:25:48] =================== guc_idm (6 subtests) ===================
[20:25:48] [PASSED] bad_init
[20:25:48] [PASSED] no_init
[20:25:48] [PASSED] init_fini
[20:25:48] [PASSED] check_used
[20:25:49] [PASSED] check_quota
[20:25:49] [PASSED] check_all
[20:25:49] ===================== [PASSED] guc_idm =====================
[20:25:49] ================== no_relay (3 subtests) ===================
[20:25:49] [PASSED] xe_drops_guc2pf_if_not_ready
[20:25:49] [PASSED] xe_drops_guc2vf_if_not_ready
[20:25:49] [PASSED] xe_rejects_send_if_not_ready
[20:25:49] ==================== [PASSED] no_relay =====================
[20:25:49] ================== pf_relay (14 subtests) ==================
[20:25:49] [PASSED] pf_rejects_guc2pf_too_short
[20:25:49] [PASSED] pf_rejects_guc2pf_too_long
[20:25:49] [PASSED] pf_rejects_guc2pf_no_payload
[20:25:49] [PASSED] pf_fails_no_payload
[20:25:49] [PASSED] pf_fails_bad_origin
[20:25:49] [PASSED] pf_fails_bad_type
[20:25:49] [PASSED] pf_txn_reports_error
[20:25:49] [PASSED] pf_txn_sends_pf2guc
[20:25:49] [PASSED] pf_sends_pf2guc
[20:25:49] [SKIPPED] pf_loopback_nop
[20:25:49] [SKIPPED] pf_loopback_echo
[20:25:49] [SKIPPED] pf_loopback_fail
[20:25:49] [SKIPPED] pf_loopback_busy
[20:25:49] [SKIPPED] pf_loopback_retry
[20:25:49] ==================== [PASSED] pf_relay =====================
[20:25:49] ================== vf_relay (3 subtests) ===================
[20:25:49] [PASSED] vf_rejects_guc2vf_too_short
[20:25:49] [PASSED] vf_rejects_guc2vf_too_long
[20:25:49] [PASSED] vf_rejects_guc2vf_no_payload
[20:25:49] ==================== [PASSED] vf_relay =====================
[20:25:49] ================ pf_gt_config (6 subtests) =================
[20:25:49] [PASSED] fair_contexts_1vf
[20:25:49] [PASSED] fair_doorbells_1vf
[20:25:49] [PASSED] fair_ggtt_1vf
[20:25:49] ====================== fair_contexts  ======================
[20:25:49] [PASSED] 1 VF
[20:25:49] [PASSED] 2 VFs
[20:25:49] [PASSED] 3 VFs
[20:25:49] [PASSED] 4 VFs
[20:25:49] [PASSED] 5 VFs
[20:25:49] [PASSED] 6 VFs
[20:25:49] [PASSED] 7 VFs
[20:25:49] [PASSED] 8 VFs
[20:25:49] [PASSED] 9 VFs
[20:25:49] [PASSED] 10 VFs
[20:25:49] [PASSED] 11 VFs
[20:25:49] [PASSED] 12 VFs
[20:25:49] [PASSED] 13 VFs
[20:25:49] [PASSED] 14 VFs
[20:25:49] [PASSED] 15 VFs
[20:25:49] [PASSED] 16 VFs
[20:25:49] [PASSED] 17 VFs
[20:25:49] [PASSED] 18 VFs
[20:25:49] [PASSED] 19 VFs
[20:25:49] [PASSED] 20 VFs
[20:25:49] [PASSED] 21 VFs
[20:25:49] [PASSED] 22 VFs
[20:25:49] [PASSED] 23 VFs
[20:25:49] [PASSED] 24 VFs
[20:25:49] [PASSED] 25 VFs
[20:25:49] [PASSED] 26 VFs
[20:25:49] [PASSED] 27 VFs
[20:25:49] [PASSED] 28 VFs
[20:25:49] [PASSED] 29 VFs
[20:25:49] [PASSED] 30 VFs
[20:25:49] [PASSED] 31 VFs
[20:25:49] [PASSED] 32 VFs
[20:25:49] [PASSED] 33 VFs
[20:25:49] [PASSED] 34 VFs
[20:25:49] [PASSED] 35 VFs
[20:25:49] [PASSED] 36 VFs
[20:25:49] [PASSED] 37 VFs
[20:25:49] [PASSED] 38 VFs
[20:25:49] [PASSED] 39 VFs
[20:25:49] [PASSED] 40 VFs
[20:25:49] [PASSED] 41 VFs
[20:25:49] [PASSED] 42 VFs
[20:25:49] [PASSED] 43 VFs
[20:25:49] [PASSED] 44 VFs
[20:25:49] [PASSED] 45 VFs
[20:25:49] [PASSED] 46 VFs
[20:25:49] [PASSED] 47 VFs
[20:25:49] [PASSED] 48 VFs
[20:25:49] [PASSED] 49 VFs
[20:25:49] [PASSED] 50 VFs
[20:25:49] [PASSED] 51 VFs
[20:25:49] [PASSED] 52 VFs
[20:25:49] [PASSED] 53 VFs
[20:25:49] [PASSED] 54 VFs
[20:25:49] [PASSED] 55 VFs
[20:25:49] [PASSED] 56 VFs
[20:25:49] [PASSED] 57 VFs
[20:25:49] [PASSED] 58 VFs
[20:25:49] [PASSED] 59 VFs
[20:25:49] [PASSED] 60 VFs
[20:25:49] [PASSED] 61 VFs
[20:25:49] [PASSED] 62 VFs
[20:25:49] [PASSED] 63 VFs
[20:25:49] ================== [PASSED] fair_contexts ==================
[20:25:49] ===================== fair_doorbells  ======================
[20:25:49] [PASSED] 1 VF
[20:25:49] [PASSED] 2 VFs
[20:25:49] [PASSED] 3 VFs
[20:25:49] [PASSED] 4 VFs
[20:25:49] [PASSED] 5 VFs
[20:25:49] [PASSED] 6 VFs
[20:25:49] [PASSED] 7 VFs
[20:25:49] [PASSED] 8 VFs
[20:25:49] [PASSED] 9 VFs
[20:25:49] [PASSED] 10 VFs
[20:25:49] [PASSED] 11 VFs
[20:25:49] [PASSED] 12 VFs
[20:25:49] [PASSED] 13 VFs
[20:25:49] [PASSED] 14 VFs
[20:25:49] [PASSED] 15 VFs
[20:25:49] [PASSED] 16 VFs
[20:25:49] [PASSED] 17 VFs
[20:25:49] [PASSED] 18 VFs
[20:25:49] [PASSED] 19 VFs
[20:25:49] [PASSED] 20 VFs
[20:25:49] [PASSED] 21 VFs
[20:25:49] [PASSED] 22 VFs
[20:25:49] [PASSED] 23 VFs
[20:25:49] [PASSED] 24 VFs
[20:25:49] [PASSED] 25 VFs
[20:25:49] [PASSED] 26 VFs
[20:25:49] [PASSED] 27 VFs
[20:25:49] [PASSED] 28 VFs
[20:25:49] [PASSED] 29 VFs
[20:25:49] [PASSED] 30 VFs
[20:25:49] [PASSED] 31 VFs
[20:25:49] [PASSED] 32 VFs
[20:25:49] [PASSED] 33 VFs
[20:25:49] [PASSED] 34 VFs
[20:25:49] [PASSED] 35 VFs
[20:25:49] [PASSED] 36 VFs
[20:25:49] [PASSED] 37 VFs
[20:25:49] [PASSED] 38 VFs
[20:25:49] [PASSED] 39 VFs
[20:25:49] [PASSED] 40 VFs
[20:25:49] [PASSED] 41 VFs
[20:25:49] [PASSED] 42 VFs
[20:25:49] [PASSED] 43 VFs
[20:25:49] [PASSED] 44 VFs
[20:25:49] [PASSED] 45 VFs
[20:25:49] [PASSED] 46 VFs
[20:25:49] [PASSED] 47 VFs
[20:25:49] [PASSED] 48 VFs
[20:25:49] [PASSED] 49 VFs
[20:25:49] [PASSED] 50 VFs
[20:25:49] [PASSED] 51 VFs
[20:25:49] [PASSED] 52 VFs
[20:25:49] [PASSED] 53 VFs
[20:25:49] [PASSED] 54 VFs
[20:25:49] [PASSED] 55 VFs
[20:25:49] [PASSED] 56 VFs
[20:25:49] [PASSED] 57 VFs
[20:25:49] [PASSED] 58 VFs
[20:25:49] [PASSED] 59 VFs
[20:25:49] [PASSED] 60 VFs
[20:25:49] [PASSED] 61 VFs
[20:25:49] [PASSED] 62 VFs
[20:25:49] [PASSED] 63 VFs
[20:25:49] ================= [PASSED] fair_doorbells ==================
[20:25:49] ======================== fair_ggtt  ========================
[20:25:49] [PASSED] 1 VF
[20:25:49] [PASSED] 2 VFs
[20:25:49] [PASSED] 3 VFs
[20:25:49] [PASSED] 4 VFs
[20:25:49] [PASSED] 5 VFs
[20:25:49] [PASSED] 6 VFs
[20:25:49] [PASSED] 7 VFs
[20:25:49] [PASSED] 8 VFs
[20:25:49] [PASSED] 9 VFs
[20:25:49] [PASSED] 10 VFs
[20:25:49] [PASSED] 11 VFs
[20:25:49] [PASSED] 12 VFs
[20:25:49] [PASSED] 13 VFs
[20:25:49] [PASSED] 14 VFs
[20:25:49] [PASSED] 15 VFs
[20:25:49] [PASSED] 16 VFs
[20:25:49] [PASSED] 17 VFs
[20:25:49] [PASSED] 18 VFs
[20:25:49] [PASSED] 19 VFs
[20:25:49] [PASSED] 20 VFs
[20:25:49] [PASSED] 21 VFs
[20:25:49] [PASSED] 22 VFs
[20:25:49] [PASSED] 23 VFs
[20:25:49] [PASSED] 24 VFs
[20:25:49] [PASSED] 25 VFs
[20:25:49] [PASSED] 26 VFs
[20:25:49] [PASSED] 27 VFs
[20:25:49] [PASSED] 28 VFs
[20:25:49] [PASSED] 29 VFs
[20:25:49] [PASSED] 30 VFs
[20:25:49] [PASSED] 31 VFs
[20:25:49] [PASSED] 32 VFs
[20:25:49] [PASSED] 33 VFs
[20:25:49] [PASSED] 34 VFs
[20:25:49] [PASSED] 35 VFs
[20:25:49] [PASSED] 36 VFs
[20:25:49] [PASSED] 37 VFs
[20:25:49] [PASSED] 38 VFs
[20:25:49] [PASSED] 39 VFs
[20:25:49] [PASSED] 40 VFs
[20:25:49] [PASSED] 41 VFs
[20:25:49] [PASSED] 42 VFs
[20:25:49] [PASSED] 43 VFs
[20:25:49] [PASSED] 44 VFs
[20:25:49] [PASSED] 45 VFs
[20:25:49] [PASSED] 46 VFs
[20:25:49] [PASSED] 47 VFs
[20:25:49] [PASSED] 48 VFs
[20:25:49] [PASSED] 49 VFs
[20:25:49] [PASSED] 50 VFs
[20:25:49] [PASSED] 51 VFs
[20:25:49] [PASSED] 52 VFs
[20:25:49] [PASSED] 53 VFs
[20:25:49] [PASSED] 54 VFs
[20:25:49] [PASSED] 55 VFs
[20:25:49] [PASSED] 56 VFs
[20:25:49] [PASSED] 57 VFs
[20:25:49] [PASSED] 58 VFs
[20:25:49] [PASSED] 59 VFs
[20:25:49] [PASSED] 60 VFs
[20:25:49] [PASSED] 61 VFs
[20:25:49] [PASSED] 62 VFs
[20:25:49] [PASSED] 63 VFs
[20:25:49] ==================== [PASSED] fair_ggtt ====================
[20:25:49] ================== [PASSED] pf_gt_config ===================
[20:25:49] ===================== lmtt (1 subtest) =====================
[20:25:49] ======================== test_ops  =========================
[20:25:49] [PASSED] 2-level
[20:25:49] [PASSED] multi-level
[20:25:49] ==================== [PASSED] test_ops =====================
[20:25:49] ====================== [PASSED] lmtt =======================
[20:25:49] ================= pf_service (11 subtests) =================
[20:25:49] [PASSED] pf_negotiate_any
[20:25:49] [PASSED] pf_negotiate_base_match
[20:25:49] [PASSED] pf_negotiate_base_newer
[20:25:49] [PASSED] pf_negotiate_base_next
[20:25:49] [SKIPPED] pf_negotiate_base_older
[20:25:49] [PASSED] pf_negotiate_base_prev
[20:25:49] [PASSED] pf_negotiate_latest_match
[20:25:49] [PASSED] pf_negotiate_latest_newer
[20:25:49] [PASSED] pf_negotiate_latest_next
[20:25:49] [SKIPPED] pf_negotiate_latest_older
[20:25:49] [SKIPPED] pf_negotiate_latest_prev
[20:25:49] =================== [PASSED] pf_service ====================
[20:25:49] ================= xe_guc_g2g (2 subtests) ==================
[20:25:49] ============== xe_live_guc_g2g_kunit_default  ==============
[20:25:49] ========= [SKIPPED] xe_live_guc_g2g_kunit_default ==========
[20:25:49] ============== xe_live_guc_g2g_kunit_allmem  ===============
[20:25:49] ========== [SKIPPED] xe_live_guc_g2g_kunit_allmem ==========
[20:25:49] =================== [SKIPPED] xe_guc_g2g ===================
[20:25:49] =================== xe_mocs (2 subtests) ===================
[20:25:49] ================ xe_live_mocs_kernel_kunit  ================
[20:25:49] =========== [SKIPPED] xe_live_mocs_kernel_kunit ============
[20:25:49] ================ xe_live_mocs_reset_kunit  =================
[20:25:49] ============ [SKIPPED] xe_live_mocs_reset_kunit ============
[20:25:49] ==================== [SKIPPED] xe_mocs =====================
[20:25:49] ================= xe_migrate (2 subtests) ==================
[20:25:49] ================= xe_migrate_sanity_kunit  =================
[20:25:49] ============ [SKIPPED] xe_migrate_sanity_kunit =============
[20:25:49] ================== xe_validate_ccs_kunit  ==================
[20:25:49] ============= [SKIPPED] xe_validate_ccs_kunit ==============
[20:25:49] =================== [SKIPPED] xe_migrate ===================
[20:25:49] ================== xe_dma_buf (1 subtest) ==================
[20:25:49] ==================== xe_dma_buf_kunit  =====================
[20:25:49] ================ [SKIPPED] xe_dma_buf_kunit ================
[20:25:49] =================== [SKIPPED] xe_dma_buf ===================
[20:25:49] ================= xe_bo_shrink (1 subtest) =================
[20:25:49] =================== xe_bo_shrink_kunit  ====================
[20:25:49] =============== [SKIPPED] xe_bo_shrink_kunit ===============
[20:25:49] ================== [SKIPPED] xe_bo_shrink ==================
[20:25:49] ==================== xe_bo (2 subtests) ====================
[20:25:49] ================== xe_ccs_migrate_kunit  ===================
[20:25:49] ============== [SKIPPED] xe_ccs_migrate_kunit ==============
[20:25:49] ==================== xe_bo_evict_kunit  ====================
[20:25:49] =============== [SKIPPED] xe_bo_evict_kunit ================
[20:25:49] ===================== [SKIPPED] xe_bo ======================
[20:25:49] ==================== args (11 subtests) ====================
[20:25:49] [PASSED] count_args_test
[20:25:49] [PASSED] call_args_example
[20:25:49] [PASSED] call_args_test
[20:25:49] [PASSED] drop_first_arg_example
[20:25:49] [PASSED] drop_first_arg_test
[20:25:49] [PASSED] first_arg_example
[20:25:49] [PASSED] first_arg_test
[20:25:49] [PASSED] last_arg_example
[20:25:49] [PASSED] last_arg_test
[20:25:49] [PASSED] pick_arg_example
[20:25:49] [PASSED] sep_comma_example
[20:25:49] ====================== [PASSED] args =======================
[20:25:49] =================== xe_pci (3 subtests) ====================
[20:25:49] ==================== check_graphics_ip  ====================
[20:25:49] [PASSED] 12.00 Xe_LP
[20:25:49] [PASSED] 12.10 Xe_LP+
[20:25:49] [PASSED] 12.55 Xe_HPG
[20:25:49] [PASSED] 12.60 Xe_HPC
[20:25:49] [PASSED] 12.70 Xe_LPG
[20:25:49] [PASSED] 12.71 Xe_LPG
[20:25:49] [PASSED] 12.74 Xe_LPG+
[20:25:49] [PASSED] 20.01 Xe2_HPG
[20:25:49] [PASSED] 20.02 Xe2_HPG
[20:25:49] [PASSED] 20.04 Xe2_LPG
[20:25:49] [PASSED] 30.00 Xe3_LPG
[20:25:49] [PASSED] 30.01 Xe3_LPG
[20:25:49] [PASSED] 30.03 Xe3_LPG
[20:25:49] [PASSED] 30.04 Xe3_LPG
[20:25:49] [PASSED] 30.05 Xe3_LPG
[20:25:49] [PASSED] 35.11 Xe3p_XPC
[20:25:49] ================ [PASSED] check_graphics_ip ================
[20:25:49] ===================== check_media_ip  ======================
[20:25:49] [PASSED] 12.00 Xe_M
[20:25:49] [PASSED] 12.55 Xe_HPM
[20:25:49] [PASSED] 13.00 Xe_LPM+
[20:25:49] [PASSED] 13.01 Xe2_HPM
[20:25:49] [PASSED] 20.00 Xe2_LPM
[20:25:49] [PASSED] 30.00 Xe3_LPM
[20:25:49] [PASSED] 30.02 Xe3_LPM
[20:25:49] [PASSED] 35.00 Xe3p_LPM
[20:25:49] [PASSED] 35.03 Xe3p_HPM
[20:25:49] ================= [PASSED] check_media_ip ==================
[20:25:49] =================== check_platform_desc  ===================
[20:25:49] [PASSED] 0x9A60 (TIGERLAKE)
[20:25:49] [PASSED] 0x9A68 (TIGERLAKE)
[20:25:49] [PASSED] 0x9A70 (TIGERLAKE)
[20:25:49] [PASSED] 0x9A40 (TIGERLAKE)
[20:25:49] [PASSED] 0x9A49 (TIGERLAKE)
[20:25:49] [PASSED] 0x9A59 (TIGERLAKE)
[20:25:49] [PASSED] 0x9A78 (TIGERLAKE)
[20:25:49] [PASSED] 0x9AC0 (TIGERLAKE)
[20:25:49] [PASSED] 0x9AC9 (TIGERLAKE)
[20:25:49] [PASSED] 0x9AD9 (TIGERLAKE)
[20:25:49] [PASSED] 0x9AF8 (TIGERLAKE)
[20:25:49] [PASSED] 0x4C80 (ROCKETLAKE)
[20:25:49] [PASSED] 0x4C8A (ROCKETLAKE)
[20:25:49] [PASSED] 0x4C8B (ROCKETLAKE)
[20:25:49] [PASSED] 0x4C8C (ROCKETLAKE)
[20:25:49] [PASSED] 0x4C90 (ROCKETLAKE)
[20:25:49] [PASSED] 0x4C9A (ROCKETLAKE)
[20:25:49] [PASSED] 0x4680 (ALDERLAKE_S)
[20:25:49] [PASSED] 0x4682 (ALDERLAKE_S)
[20:25:49] [PASSED] 0x4688 (ALDERLAKE_S)
[20:25:49] [PASSED] 0x468A (ALDERLAKE_S)
[20:25:49] [PASSED] 0x468B (ALDERLAKE_S)
[20:25:49] [PASSED] 0x4690 (ALDERLAKE_S)
[20:25:49] [PASSED] 0x4692 (ALDERLAKE_S)
[20:25:49] [PASSED] 0x4693 (ALDERLAKE_S)
[20:25:49] [PASSED] 0x46A0 (ALDERLAKE_P)
[20:25:49] [PASSED] 0x46A1 (ALDERLAKE_P)
[20:25:49] [PASSED] 0x46A2 (ALDERLAKE_P)
[20:25:49] [PASSED] 0x46A3 (ALDERLAKE_P)
[20:25:49] [PASSED] 0x46A6 (ALDERLAKE_P)
[20:25:49] [PASSED] 0x46A8 (ALDERLAKE_P)
[20:25:49] [PASSED] 0x46AA (ALDERLAKE_P)
[20:25:49] [PASSED] 0x462A (ALDERLAKE_P)
[20:25:49] [PASSED] 0x4626 (ALDERLAKE_P)
[20:25:49] [PASSED] 0x4628 (ALDERLAKE_P)
[20:25:49] [PASSED] 0x46B0 (ALDERLAKE_P)
stty: 'standard input': Inappropriate ioctl for device
[20:25:49] [PASSED] 0x46B1 (ALDERLAKE_P)
[20:25:49] [PASSED] 0x46B2 (ALDERLAKE_P)
[20:25:49] [PASSED] 0x46B3 (ALDERLAKE_P)
[20:25:49] [PASSED] 0x46C0 (ALDERLAKE_P)
[20:25:49] [PASSED] 0x46C1 (ALDERLAKE_P)
[20:25:49] [PASSED] 0x46C2 (ALDERLAKE_P)
[20:25:49] [PASSED] 0x46C3 (ALDERLAKE_P)
[20:25:49] [PASSED] 0x46D0 (ALDERLAKE_N)
[20:25:49] [PASSED] 0x46D1 (ALDERLAKE_N)
[20:25:49] [PASSED] 0x46D2 (ALDERLAKE_N)
[20:25:49] [PASSED] 0x46D3 (ALDERLAKE_N)
[20:25:49] [PASSED] 0x46D4 (ALDERLAKE_N)
[20:25:49] [PASSED] 0xA721 (ALDERLAKE_P)
[20:25:49] [PASSED] 0xA7A1 (ALDERLAKE_P)
[20:25:49] [PASSED] 0xA7A9 (ALDERLAKE_P)
[20:25:49] [PASSED] 0xA7AC (ALDERLAKE_P)
[20:25:49] [PASSED] 0xA7AD (ALDERLAKE_P)
[20:25:49] [PASSED] 0xA720 (ALDERLAKE_P)
[20:25:49] [PASSED] 0xA7A0 (ALDERLAKE_P)
[20:25:49] [PASSED] 0xA7A8 (ALDERLAKE_P)
[20:25:49] [PASSED] 0xA7AA (ALDERLAKE_P)
[20:25:49] [PASSED] 0xA7AB (ALDERLAKE_P)
[20:25:49] [PASSED] 0xA780 (ALDERLAKE_S)
[20:25:49] [PASSED] 0xA781 (ALDERLAKE_S)
[20:25:49] [PASSED] 0xA782 (ALDERLAKE_S)
[20:25:49] [PASSED] 0xA783 (ALDERLAKE_S)
[20:25:49] [PASSED] 0xA788 (ALDERLAKE_S)
[20:25:49] [PASSED] 0xA789 (ALDERLAKE_S)
[20:25:49] [PASSED] 0xA78A (ALDERLAKE_S)
[20:25:49] [PASSED] 0xA78B (ALDERLAKE_S)
[20:25:49] [PASSED] 0x4905 (DG1)
[20:25:49] [PASSED] 0x4906 (DG1)
[20:25:49] [PASSED] 0x4907 (DG1)
[20:25:49] [PASSED] 0x4908 (DG1)
[20:25:49] [PASSED] 0x4909 (DG1)
[20:25:49] [PASSED] 0x56C0 (DG2)
[20:25:49] [PASSED] 0x56C2 (DG2)
[20:25:49] [PASSED] 0x56C1 (DG2)
[20:25:49] [PASSED] 0x7D51 (METEORLAKE)
[20:25:49] [PASSED] 0x7DD1 (METEORLAKE)
[20:25:49] [PASSED] 0x7D41 (METEORLAKE)
[20:25:49] [PASSED] 0x7D67 (METEORLAKE)
[20:25:49] [PASSED] 0xB640 (METEORLAKE)
[20:25:49] [PASSED] 0x56A0 (DG2)
[20:25:49] [PASSED] 0x56A1 (DG2)
[20:25:49] [PASSED] 0x56A2 (DG2)
[20:25:49] [PASSED] 0x56BE (DG2)
[20:25:49] [PASSED] 0x56BF (DG2)
[20:25:49] [PASSED] 0x5690 (DG2)
[20:25:49] [PASSED] 0x5691 (DG2)
[20:25:49] [PASSED] 0x5692 (DG2)
[20:25:49] [PASSED] 0x56A5 (DG2)
[20:25:49] [PASSED] 0x56A6 (DG2)
[20:25:49] [PASSED] 0x56B0 (DG2)
[20:25:49] [PASSED] 0x56B1 (DG2)
[20:25:49] [PASSED] 0x56BA (DG2)
[20:25:49] [PASSED] 0x56BB (DG2)
[20:25:49] [PASSED] 0x56BC (DG2)
[20:25:49] [PASSED] 0x56BD (DG2)
[20:25:49] [PASSED] 0x5693 (DG2)
[20:25:49] [PASSED] 0x5694 (DG2)
[20:25:49] [PASSED] 0x5695 (DG2)
[20:25:49] [PASSED] 0x56A3 (DG2)
[20:25:49] [PASSED] 0x56A4 (DG2)
[20:25:49] [PASSED] 0x56B2 (DG2)
[20:25:49] [PASSED] 0x56B3 (DG2)
[20:25:49] [PASSED] 0x5696 (DG2)
[20:25:49] [PASSED] 0x5697 (DG2)
[20:25:49] [PASSED] 0xB69 (PVC)
[20:25:49] [PASSED] 0xB6E (PVC)
[20:25:49] [PASSED] 0xBD4 (PVC)
[20:25:49] [PASSED] 0xBD5 (PVC)
[20:25:49] [PASSED] 0xBD6 (PVC)
[20:25:49] [PASSED] 0xBD7 (PVC)
[20:25:49] [PASSED] 0xBD8 (PVC)
[20:25:49] [PASSED] 0xBD9 (PVC)
[20:25:49] [PASSED] 0xBDA (PVC)
[20:25:49] [PASSED] 0xBDB (PVC)
[20:25:49] [PASSED] 0xBE0 (PVC)
[20:25:49] [PASSED] 0xBE1 (PVC)
[20:25:49] [PASSED] 0xBE5 (PVC)
[20:25:49] [PASSED] 0x7D40 (METEORLAKE)
[20:25:49] [PASSED] 0x7D45 (METEORLAKE)
[20:25:49] [PASSED] 0x7D55 (METEORLAKE)
[20:25:49] [PASSED] 0x7D60 (METEORLAKE)
[20:25:49] [PASSED] 0x7DD5 (METEORLAKE)
[20:25:49] [PASSED] 0x6420 (LUNARLAKE)
[20:25:49] [PASSED] 0x64A0 (LUNARLAKE)
[20:25:49] [PASSED] 0x64B0 (LUNARLAKE)
[20:25:49] [PASSED] 0xE202 (BATTLEMAGE)
[20:25:49] [PASSED] 0xE209 (BATTLEMAGE)
[20:25:49] [PASSED] 0xE20B (BATTLEMAGE)
[20:25:49] [PASSED] 0xE20C (BATTLEMAGE)
[20:25:49] [PASSED] 0xE20D (BATTLEMAGE)
[20:25:49] [PASSED] 0xE210 (BATTLEMAGE)
[20:25:49] [PASSED] 0xE211 (BATTLEMAGE)
[20:25:49] [PASSED] 0xE212 (BATTLEMAGE)
[20:25:49] [PASSED] 0xE216 (BATTLEMAGE)
[20:25:49] [PASSED] 0xE220 (BATTLEMAGE)
[20:25:49] [PASSED] 0xE221 (BATTLEMAGE)
[20:25:49] [PASSED] 0xE222 (BATTLEMAGE)
[20:25:49] [PASSED] 0xE223 (BATTLEMAGE)
[20:25:49] [PASSED] 0xB080 (PANTHERLAKE)
[20:25:49] [PASSED] 0xB081 (PANTHERLAKE)
[20:25:49] [PASSED] 0xB082 (PANTHERLAKE)
[20:25:49] [PASSED] 0xB083 (PANTHERLAKE)
[20:25:49] [PASSED] 0xB084 (PANTHERLAKE)
[20:25:49] [PASSED] 0xB085 (PANTHERLAKE)
[20:25:49] [PASSED] 0xB086 (PANTHERLAKE)
[20:25:49] [PASSED] 0xB087 (PANTHERLAKE)
[20:25:49] [PASSED] 0xB08F (PANTHERLAKE)
[20:25:49] [PASSED] 0xB090 (PANTHERLAKE)
[20:25:49] [PASSED] 0xB0A0 (PANTHERLAKE)
[20:25:49] [PASSED] 0xB0B0 (PANTHERLAKE)
[20:25:49] [PASSED] 0xD740 (NOVALAKE_S)
[20:25:49] [PASSED] 0xD741 (NOVALAKE_S)
[20:25:49] [PASSED] 0xD742 (NOVALAKE_S)
[20:25:49] [PASSED] 0xD743 (NOVALAKE_S)
[20:25:49] [PASSED] 0xD744 (NOVALAKE_S)
[20:25:49] [PASSED] 0xD745 (NOVALAKE_S)
[20:25:49] [PASSED] 0x674C (CRESCENTISLAND)
[20:25:49] [PASSED] 0xFD80 (PANTHERLAKE)
[20:25:49] [PASSED] 0xFD81 (PANTHERLAKE)
[20:25:49] =============== [PASSED] check_platform_desc ===============
[20:25:49] ===================== [PASSED] xe_pci ======================
[20:25:49] =================== xe_rtp (2 subtests) ====================
[20:25:49] =============== xe_rtp_process_to_sr_tests  ================
[20:25:49] [PASSED] coalesce-same-reg
[20:25:49] [PASSED] no-match-no-add
[20:25:49] [PASSED] match-or
[20:25:49] [PASSED] match-or-xfail
[20:25:49] [PASSED] no-match-no-add-multiple-rules
[20:25:49] [PASSED] two-regs-two-entries
[20:25:49] [PASSED] clr-one-set-other
[20:25:49] [PASSED] set-field
[20:25:49] [PASSED] conflict-duplicate
[20:25:49] [PASSED] conflict-not-disjoint
[20:25:49] [PASSED] conflict-reg-type
[20:25:49] =========== [PASSED] xe_rtp_process_to_sr_tests ============
[20:25:49] ================== xe_rtp_process_tests  ===================
[20:25:49] [PASSED] active1
[20:25:49] [PASSED] active2
[20:25:49] [PASSED] active-inactive
[20:25:49] [PASSED] inactive-active
[20:25:49] [PASSED] inactive-1st_or_active-inactive
[20:25:49] [PASSED] inactive-2nd_or_active-inactive
[20:25:49] [PASSED] inactive-last_or_active-inactive
[20:25:49] [PASSED] inactive-no_or_active-inactive
[20:25:49] ============== [PASSED] xe_rtp_process_tests ===============
[20:25:49] ===================== [PASSED] xe_rtp ======================
[20:25:49] ==================== xe_wa (1 subtest) =====================
[20:25:49] ======================== xe_wa_gt  =========================
[20:25:49] [PASSED] TIGERLAKE B0
[20:25:49] [PASSED] DG1 A0
[20:25:49] [PASSED] DG1 B0
[20:25:49] [PASSED] ALDERLAKE_S A0
[20:25:49] [PASSED] ALDERLAKE_S B0
[20:25:49] [PASSED] ALDERLAKE_S C0
[20:25:49] [PASSED] ALDERLAKE_S D0
[20:25:49] [PASSED] ALDERLAKE_P A0
[20:25:49] [PASSED] ALDERLAKE_P B0
[20:25:49] [PASSED] ALDERLAKE_P C0
[20:25:49] [PASSED] ALDERLAKE_S RPLS D0
[20:25:49] [PASSED] ALDERLAKE_P RPLU E0
[20:25:49] [PASSED] DG2 G10 C0
[20:25:49] [PASSED] DG2 G11 B1
[20:25:49] [PASSED] DG2 G12 A1
[20:25:49] [PASSED] METEORLAKE 12.70(Xe_LPG) A0 13.00(Xe_LPM+) A0
[20:25:49] [PASSED] METEORLAKE 12.71(Xe_LPG) A0 13.00(Xe_LPM+) A0
[20:25:49] [PASSED] METEORLAKE 12.74(Xe_LPG+) A0 13.00(Xe_LPM+) A0
[20:25:49] [PASSED] LUNARLAKE 20.04(Xe2_LPG) A0 20.00(Xe2_LPM) A0
[20:25:49] [PASSED] LUNARLAKE 20.04(Xe2_LPG) B0 20.00(Xe2_LPM) A0
[20:25:49] [PASSED] BATTLEMAGE 20.01(Xe2_HPG) A0 13.01(Xe2_HPM) A1
[20:25:49] [PASSED] PANTHERLAKE 30.00(Xe3_LPG) A0 30.00(Xe3_LPM) A0
[20:25:49] ==================== [PASSED] xe_wa_gt =====================
[20:25:49] ====================== [PASSED] xe_wa ======================
[20:25:49] ============================================================
[20:25:49] Testing complete. Ran 510 tests: passed: 492, skipped: 18
[20:25:49] Elapsed time: 35.591s total, 4.203s configuring, 30.920s building, 0.453s running

+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/tests/.kunitconfig
[20:25:49] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[20:25:51] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[20:26:16] Starting KUnit Kernel (1/1)...
[20:26:16] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[20:26:16] ============ drm_test_pick_cmdline (2 subtests) ============
[20:26:16] [PASSED] drm_test_pick_cmdline_res_1920_1080_60
[20:26:16] =============== drm_test_pick_cmdline_named  ===============
[20:26:16] [PASSED] NTSC
[20:26:16] [PASSED] NTSC-J
[20:26:16] [PASSED] PAL
[20:26:16] [PASSED] PAL-M
[20:26:16] =========== [PASSED] drm_test_pick_cmdline_named ===========
[20:26:16] ============== [PASSED] drm_test_pick_cmdline ==============
[20:26:16] == drm_test_atomic_get_connector_for_encoder (1 subtest) ===
[20:26:16] [PASSED] drm_test_drm_atomic_get_connector_for_encoder
[20:26:16] ==== [PASSED] drm_test_atomic_get_connector_for_encoder ====
[20:26:16] =========== drm_validate_clone_mode (2 subtests) ===========
[20:26:16] ============== drm_test_check_in_clone_mode  ===============
[20:26:16] [PASSED] in_clone_mode
[20:26:16] [PASSED] not_in_clone_mode
[20:26:16] ========== [PASSED] drm_test_check_in_clone_mode ===========
[20:26:16] =============== drm_test_check_valid_clones  ===============
[20:26:16] [PASSED] not_in_clone_mode
[20:26:16] [PASSED] valid_clone
[20:26:16] [PASSED] invalid_clone
[20:26:16] =========== [PASSED] drm_test_check_valid_clones ===========
[20:26:16] ============= [PASSED] drm_validate_clone_mode =============
[20:26:16] ============= drm_validate_modeset (1 subtest) =============
[20:26:16] [PASSED] drm_test_check_connector_changed_modeset
[20:26:16] ============== [PASSED] drm_validate_modeset ===============
[20:26:16] ====== drm_test_bridge_get_current_state (2 subtests) ======
[20:26:16] [PASSED] drm_test_drm_bridge_get_current_state_atomic
[20:26:16] [PASSED] drm_test_drm_bridge_get_current_state_legacy
[20:26:16] ======== [PASSED] drm_test_bridge_get_current_state ========
[20:26:16] ====== drm_test_bridge_helper_reset_crtc (3 subtests) ======
[20:26:16] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic
[20:26:16] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic_disabled
[20:26:16] [PASSED] drm_test_drm_bridge_helper_reset_crtc_legacy
[20:26:16] ======== [PASSED] drm_test_bridge_helper_reset_crtc ========
[20:26:16] ============== drm_bridge_alloc (2 subtests) ===============
[20:26:16] [PASSED] drm_test_drm_bridge_alloc_basic
[20:26:16] [PASSED] drm_test_drm_bridge_alloc_get_put
[20:26:16] ================ [PASSED] drm_bridge_alloc =================
[20:26:16] ================== drm_buddy (8 subtests) ==================
[20:26:16] [PASSED] drm_test_buddy_alloc_limit
[20:26:16] [PASSED] drm_test_buddy_alloc_optimistic
[20:26:16] [PASSED] drm_test_buddy_alloc_pessimistic
[20:26:16] [PASSED] drm_test_buddy_alloc_pathological
[20:26:16] [PASSED] drm_test_buddy_alloc_contiguous
[20:26:16] [PASSED] drm_test_buddy_alloc_clear
[20:26:16] [PASSED] drm_test_buddy_alloc_range_bias
[20:26:16] [PASSED] drm_test_buddy_fragmentation_performance
[20:26:16] ==================== [PASSED] drm_buddy ====================
[20:26:16] ============= drm_cmdline_parser (40 subtests) =============
[20:26:16] [PASSED] drm_test_cmdline_force_d_only
[20:26:16] [PASSED] drm_test_cmdline_force_D_only_dvi
[20:26:16] [PASSED] drm_test_cmdline_force_D_only_hdmi
[20:26:16] [PASSED] drm_test_cmdline_force_D_only_not_digital
[20:26:16] [PASSED] drm_test_cmdline_force_e_only
[20:26:16] [PASSED] drm_test_cmdline_res
[20:26:16] [PASSED] drm_test_cmdline_res_vesa
[20:26:16] [PASSED] drm_test_cmdline_res_vesa_rblank
[20:26:16] [PASSED] drm_test_cmdline_res_rblank
[20:26:16] [PASSED] drm_test_cmdline_res_bpp
[20:26:16] [PASSED] drm_test_cmdline_res_refresh
[20:26:16] [PASSED] drm_test_cmdline_res_bpp_refresh
[20:26:16] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced
[20:26:16] [PASSED] drm_test_cmdline_res_bpp_refresh_margins
[20:26:16] [PASSED] drm_test_cmdline_res_bpp_refresh_force_off
[20:26:16] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on
[20:26:16] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_analog
[20:26:16] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_digital
[20:26:16] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced_margins_force_on
[20:26:16] [PASSED] drm_test_cmdline_res_margins_force_on
[20:26:16] [PASSED] drm_test_cmdline_res_vesa_margins
[20:26:16] [PASSED] drm_test_cmdline_name
[20:26:16] [PASSED] drm_test_cmdline_name_bpp
[20:26:16] [PASSED] drm_test_cmdline_name_option
[20:26:16] [PASSED] drm_test_cmdline_name_bpp_option
[20:26:16] [PASSED] drm_test_cmdline_rotate_0
[20:26:16] [PASSED] drm_test_cmdline_rotate_90
[20:26:16] [PASSED] drm_test_cmdline_rotate_180
[20:26:16] [PASSED] drm_test_cmdline_rotate_270
[20:26:16] [PASSED] drm_test_cmdline_hmirror
[20:26:16] [PASSED] drm_test_cmdline_vmirror
[20:26:16] [PASSED] drm_test_cmdline_margin_options
[20:26:16] [PASSED] drm_test_cmdline_multiple_options
[20:26:16] [PASSED] drm_test_cmdline_bpp_extra_and_option
[20:26:16] [PASSED] drm_test_cmdline_extra_and_option
[20:26:16] [PASSED] drm_test_cmdline_freestanding_options
[20:26:16] [PASSED] drm_test_cmdline_freestanding_force_e_and_options
[20:26:16] [PASSED] drm_test_cmdline_panel_orientation
[20:26:16] ================ drm_test_cmdline_invalid  =================
[20:26:16] [PASSED] margin_only
[20:26:16] [PASSED] interlace_only
[20:26:16] [PASSED] res_missing_x
[20:26:16] [PASSED] res_missing_y
[20:26:16] [PASSED] res_bad_y
[20:26:16] [PASSED] res_missing_y_bpp
[20:26:16] [PASSED] res_bad_bpp
[20:26:16] [PASSED] res_bad_refresh
[20:26:16] [PASSED] res_bpp_refresh_force_on_off
[20:26:16] [PASSED] res_invalid_mode
[20:26:16] [PASSED] res_bpp_wrong_place_mode
[20:26:16] [PASSED] name_bpp_refresh
[20:26:16] [PASSED] name_refresh
[20:26:16] [PASSED] name_refresh_wrong_mode
[20:26:16] [PASSED] name_refresh_invalid_mode
[20:26:16] [PASSED] rotate_multiple
[20:26:16] [PASSED] rotate_invalid_val
[20:26:16] [PASSED] rotate_truncated
[20:26:16] [PASSED] invalid_option
[20:26:16] [PASSED] invalid_tv_option
[20:26:16] [PASSED] truncated_tv_option
[20:26:16] ============ [PASSED] drm_test_cmdline_invalid =============
[20:26:16] =============== drm_test_cmdline_tv_options  ===============
[20:26:16] [PASSED] NTSC
[20:26:16] [PASSED] NTSC_443
[20:26:16] [PASSED] NTSC_J
[20:26:16] [PASSED] PAL
[20:26:16] [PASSED] PAL_M
[20:26:16] [PASSED] PAL_N
[20:26:16] [PASSED] SECAM
[20:26:16] [PASSED] MONO_525
[20:26:16] [PASSED] MONO_625
[20:26:16] =========== [PASSED] drm_test_cmdline_tv_options ===========
[20:26:16] =============== [PASSED] drm_cmdline_parser ================
[20:26:16] ========== drmm_connector_hdmi_init (20 subtests) ==========
[20:26:16] [PASSED] drm_test_connector_hdmi_init_valid
[20:26:16] [PASSED] drm_test_connector_hdmi_init_bpc_8
[20:26:16] [PASSED] drm_test_connector_hdmi_init_bpc_10
[20:26:16] [PASSED] drm_test_connector_hdmi_init_bpc_12
[20:26:16] [PASSED] drm_test_connector_hdmi_init_bpc_invalid
[20:26:16] [PASSED] drm_test_connector_hdmi_init_bpc_null
[20:26:16] [PASSED] drm_test_connector_hdmi_init_formats_empty
[20:26:16] [PASSED] drm_test_connector_hdmi_init_formats_no_rgb
[20:26:16] === drm_test_connector_hdmi_init_formats_yuv420_allowed  ===
[20:26:16] [PASSED] supported_formats=0x9 yuv420_allowed=1
[20:26:16] [PASSED] supported_formats=0x9 yuv420_allowed=0
[20:26:16] [PASSED] supported_formats=0x3 yuv420_allowed=1
[20:26:16] [PASSED] supported_formats=0x3 yuv420_allowed=0
[20:26:16] === [PASSED] drm_test_connector_hdmi_init_formats_yuv420_allowed ===
[20:26:16] [PASSED] drm_test_connector_hdmi_init_null_ddc
[20:26:16] [PASSED] drm_test_connector_hdmi_init_null_product
[20:26:16] [PASSED] drm_test_connector_hdmi_init_null_vendor
[20:26:16] [PASSED] drm_test_connector_hdmi_init_product_length_exact
[20:26:16] [PASSED] drm_test_connector_hdmi_init_product_length_too_long
[20:26:16] [PASSED] drm_test_connector_hdmi_init_product_valid
[20:26:16] [PASSED] drm_test_connector_hdmi_init_vendor_length_exact
[20:26:16] [PASSED] drm_test_connector_hdmi_init_vendor_length_too_long
[20:26:16] [PASSED] drm_test_connector_hdmi_init_vendor_valid
[20:26:16] ========= drm_test_connector_hdmi_init_type_valid  =========
[20:26:16] [PASSED] HDMI-A
[20:26:16] [PASSED] HDMI-B
[20:26:16] ===== [PASSED] drm_test_connector_hdmi_init_type_valid =====
[20:26:16] ======== drm_test_connector_hdmi_init_type_invalid  ========
[20:26:16] [PASSED] Unknown
[20:26:16] [PASSED] VGA
[20:26:16] [PASSED] DVI-I
[20:26:16] [PASSED] DVI-D
[20:26:16] [PASSED] DVI-A
[20:26:16] [PASSED] Composite
[20:26:16] [PASSED] SVIDEO
[20:26:16] [PASSED] LVDS
[20:26:16] [PASSED] Component
[20:26:16] [PASSED] DIN
[20:26:16] [PASSED] DP
[20:26:16] [PASSED] TV
[20:26:16] [PASSED] eDP
[20:26:16] [PASSED] Virtual
[20:26:16] [PASSED] DSI
[20:26:16] [PASSED] DPI
[20:26:16] [PASSED] Writeback
[20:26:16] [PASSED] SPI
[20:26:16] [PASSED] USB
[20:26:16] ==== [PASSED] drm_test_connector_hdmi_init_type_invalid ====
[20:26:16] ============ [PASSED] drmm_connector_hdmi_init =============
[20:26:16] ============= drmm_connector_init (3 subtests) =============
[20:26:16] [PASSED] drm_test_drmm_connector_init
[20:26:16] [PASSED] drm_test_drmm_connector_init_null_ddc
[20:26:16] ========= drm_test_drmm_connector_init_type_valid  =========
[20:26:16] [PASSED] Unknown
[20:26:16] [PASSED] VGA
[20:26:16] [PASSED] DVI-I
[20:26:16] [PASSED] DVI-D
[20:26:16] [PASSED] DVI-A
[20:26:16] [PASSED] Composite
[20:26:16] [PASSED] SVIDEO
[20:26:16] [PASSED] LVDS
[20:26:16] [PASSED] Component
[20:26:16] [PASSED] DIN
[20:26:16] [PASSED] DP
[20:26:16] [PASSED] HDMI-A
[20:26:16] [PASSED] HDMI-B
[20:26:16] [PASSED] TV
[20:26:16] [PASSED] eDP
[20:26:16] [PASSED] Virtual
[20:26:16] [PASSED] DSI
[20:26:16] [PASSED] DPI
[20:26:16] [PASSED] Writeback
[20:26:16] [PASSED] SPI
[20:26:16] [PASSED] USB
[20:26:16] ===== [PASSED] drm_test_drmm_connector_init_type_valid =====
[20:26:16] =============== [PASSED] drmm_connector_init ===============
[20:26:16] ========= drm_connector_dynamic_init (6 subtests) ==========
[20:26:16] [PASSED] drm_test_drm_connector_dynamic_init
[20:26:16] [PASSED] drm_test_drm_connector_dynamic_init_null_ddc
[20:26:16] [PASSED] drm_test_drm_connector_dynamic_init_not_added
[20:26:16] [PASSED] drm_test_drm_connector_dynamic_init_properties
[20:26:16] ===== drm_test_drm_connector_dynamic_init_type_valid  ======
[20:26:16] [PASSED] Unknown
[20:26:16] [PASSED] VGA
[20:26:16] [PASSED] DVI-I
[20:26:16] [PASSED] DVI-D
[20:26:16] [PASSED] DVI-A
[20:26:16] [PASSED] Composite
[20:26:16] [PASSED] SVIDEO
[20:26:16] [PASSED] LVDS
[20:26:16] [PASSED] Component
[20:26:16] [PASSED] DIN
[20:26:16] [PASSED] DP
[20:26:16] [PASSED] HDMI-A
[20:26:16] [PASSED] HDMI-B
[20:26:16] [PASSED] TV
[20:26:16] [PASSED] eDP
[20:26:16] [PASSED] Virtual
[20:26:16] [PASSED] DSI
[20:26:16] [PASSED] DPI
[20:26:16] [PASSED] Writeback
[20:26:16] [PASSED] SPI
[20:26:16] [PASSED] USB
[20:26:16] = [PASSED] drm_test_drm_connector_dynamic_init_type_valid ==
[20:26:16] ======== drm_test_drm_connector_dynamic_init_name  =========
[20:26:16] [PASSED] Unknown
[20:26:16] [PASSED] VGA
[20:26:16] [PASSED] DVI-I
[20:26:16] [PASSED] DVI-D
[20:26:16] [PASSED] DVI-A
[20:26:16] [PASSED] Composite
[20:26:16] [PASSED] SVIDEO
[20:26:16] [PASSED] LVDS
[20:26:16] [PASSED] Component
[20:26:16] [PASSED] DIN
[20:26:16] [PASSED] DP
[20:26:16] [PASSED] HDMI-A
[20:26:16] [PASSED] HDMI-B
[20:26:16] [PASSED] TV
[20:26:16] [PASSED] eDP
[20:26:16] [PASSED] Virtual
[20:26:16] [PASSED] DSI
[20:26:16] [PASSED] DPI
[20:26:16] [PASSED] Writeback
[20:26:16] [PASSED] SPI
[20:26:16] [PASSED] USB
[20:26:16] ==== [PASSED] drm_test_drm_connector_dynamic_init_name =====
[20:26:16] =========== [PASSED] drm_connector_dynamic_init ============
[20:26:16] ==== drm_connector_dynamic_register_early (4 subtests) =====
[20:26:16] [PASSED] drm_test_drm_connector_dynamic_register_early_on_list
[20:26:16] [PASSED] drm_test_drm_connector_dynamic_register_early_defer
[20:26:16] [PASSED] drm_test_drm_connector_dynamic_register_early_no_init
[20:26:16] [PASSED] drm_test_drm_connector_dynamic_register_early_no_mode_object
[20:26:16] ====== [PASSED] drm_connector_dynamic_register_early =======
[20:26:16] ======= drm_connector_dynamic_register (7 subtests) ========
[20:26:16] [PASSED] drm_test_drm_connector_dynamic_register_on_list
[20:26:16] [PASSED] drm_test_drm_connector_dynamic_register_no_defer
[20:26:16] [PASSED] drm_test_drm_connector_dynamic_register_no_init
[20:26:16] [PASSED] drm_test_drm_connector_dynamic_register_mode_object
[20:26:16] [PASSED] drm_test_drm_connector_dynamic_register_sysfs
[20:26:16] [PASSED] drm_test_drm_connector_dynamic_register_sysfs_name
[20:26:16] [PASSED] drm_test_drm_connector_dynamic_register_debugfs
[20:26:16] ========= [PASSED] drm_connector_dynamic_register ==========
[20:26:16] = drm_connector_attach_broadcast_rgb_property (2 subtests) =
[20:26:16] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property
[20:26:16] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property_hdmi_connector
[20:26:16] === [PASSED] drm_connector_attach_broadcast_rgb_property ===
[20:26:16] ========== drm_get_tv_mode_from_name (2 subtests) ==========
[20:26:16] ========== drm_test_get_tv_mode_from_name_valid  ===========
[20:26:16] [PASSED] NTSC
[20:26:16] [PASSED] NTSC-443
[20:26:16] [PASSED] NTSC-J
[20:26:16] [PASSED] PAL
[20:26:16] [PASSED] PAL-M
[20:26:16] [PASSED] PAL-N
[20:26:16] [PASSED] SECAM
[20:26:16] [PASSED] Mono
[20:26:16] ====== [PASSED] drm_test_get_tv_mode_from_name_valid =======
[20:26:16] [PASSED] drm_test_get_tv_mode_from_name_truncated
[20:26:16] ============ [PASSED] drm_get_tv_mode_from_name ============
[20:26:16] = drm_test_connector_hdmi_compute_mode_clock (12 subtests) =
[20:26:16] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb
[20:26:16] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc
[20:26:16] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc_vic_1
[20:26:16] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc
[20:26:16] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc_vic_1
[20:26:16] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_double
[20:26:16] = drm_test_connector_hdmi_compute_mode_clock_yuv420_valid  =
[20:26:16] [PASSED] VIC 96
[20:26:16] [PASSED] VIC 97
[20:26:16] [PASSED] VIC 101
[20:26:16] [PASSED] VIC 102
[20:26:16] [PASSED] VIC 106
[20:26:16] [PASSED] VIC 107
[20:26:16] === [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_valid ===
[20:26:16] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_10_bpc
[20:26:16] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_12_bpc
[20:26:16] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_8_bpc
[20:26:16] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_10_bpc
[20:26:16] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_12_bpc
[20:26:16] === [PASSED] drm_test_connector_hdmi_compute_mode_clock ====
[20:26:16] == drm_hdmi_connector_get_broadcast_rgb_name (2 subtests) ==
[20:26:16] === drm_test_drm_hdmi_connector_get_broadcast_rgb_name  ====
[20:26:16] [PASSED] Automatic
[20:26:16] [PASSED] Full
[20:26:16] [PASSED] Limited 16:235
[20:26:16] === [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name ===
[20:26:16] [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name_invalid
[20:26:16] ==== [PASSED] drm_hdmi_connector_get_broadcast_rgb_name ====
[20:26:16] == drm_hdmi_connector_get_output_format_name (2 subtests) ==
[20:26:16] === drm_test_drm_hdmi_connector_get_output_format_name  ====
[20:26:16] [PASSED] RGB
[20:26:16] [PASSED] YUV 4:2:0
[20:26:16] [PASSED] YUV 4:2:2
[20:26:16] [PASSED] YUV 4:4:4
[20:26:16] === [PASSED] drm_test_drm_hdmi_connector_get_output_format_name ===
[20:26:16] [PASSED] drm_test_drm_hdmi_connector_get_output_format_name_invalid
[20:26:16] ==== [PASSED] drm_hdmi_connector_get_output_format_name ====
[20:26:16] ============= drm_damage_helper (21 subtests) ==============
[20:26:16] [PASSED] drm_test_damage_iter_no_damage
[20:26:16] [PASSED] drm_test_damage_iter_no_damage_fractional_src
[20:26:16] [PASSED] drm_test_damage_iter_no_damage_src_moved
[20:26:16] [PASSED] drm_test_damage_iter_no_damage_fractional_src_moved
[20:26:16] [PASSED] drm_test_damage_iter_no_damage_not_visible
[20:26:16] [PASSED] drm_test_damage_iter_no_damage_no_crtc
[20:26:16] [PASSED] drm_test_damage_iter_no_damage_no_fb
[20:26:16] [PASSED] drm_test_damage_iter_simple_damage
[20:26:16] [PASSED] drm_test_damage_iter_single_damage
[20:26:16] [PASSED] drm_test_damage_iter_single_damage_intersect_src
[20:26:16] [PASSED] drm_test_damage_iter_single_damage_outside_src
[20:26:16] [PASSED] drm_test_damage_iter_single_damage_fractional_src
[20:26:16] [PASSED] drm_test_damage_iter_single_damage_intersect_fractional_src
[20:26:16] [PASSED] drm_test_damage_iter_single_damage_outside_fractional_src
[20:26:16] [PASSED] drm_test_damage_iter_single_damage_src_moved
[20:26:16] [PASSED] drm_test_damage_iter_single_damage_fractional_src_moved
[20:26:16] [PASSED] drm_test_damage_iter_damage
[20:26:16] [PASSED] drm_test_damage_iter_damage_one_intersect
[20:26:16] [PASSED] drm_test_damage_iter_damage_one_outside
[20:26:16] [PASSED] drm_test_damage_iter_damage_src_moved
[20:26:16] [PASSED] drm_test_damage_iter_damage_not_visible
[20:26:16] ================ [PASSED] drm_damage_helper ================
[20:26:16] ============== drm_dp_mst_helper (3 subtests) ==============
[20:26:16] ============== drm_test_dp_mst_calc_pbn_mode  ==============
[20:26:16] [PASSED] Clock 154000 BPP 30 DSC disabled
[20:26:16] [PASSED] Clock 234000 BPP 30 DSC disabled
[20:26:16] [PASSED] Clock 297000 BPP 24 DSC disabled
[20:26:16] [PASSED] Clock 332880 BPP 24 DSC enabled
[20:26:16] [PASSED] Clock 324540 BPP 24 DSC enabled
[20:26:16] ========== [PASSED] drm_test_dp_mst_calc_pbn_mode ==========
[20:26:16] ============== drm_test_dp_mst_calc_pbn_div  ===============
[20:26:16] [PASSED] Link rate 2000000 lane count 4
[20:26:16] [PASSED] Link rate 2000000 lane count 2
[20:26:16] [PASSED] Link rate 2000000 lane count 1
[20:26:16] [PASSED] Link rate 1350000 lane count 4
[20:26:16] [PASSED] Link rate 1350000 lane count 2
[20:26:16] [PASSED] Link rate 1350000 lane count 1
[20:26:16] [PASSED] Link rate 1000000 lane count 4
[20:26:16] [PASSED] Link rate 1000000 lane count 2
[20:26:16] [PASSED] Link rate 1000000 lane count 1
[20:26:16] [PASSED] Link rate 810000 lane count 4
[20:26:16] [PASSED] Link rate 810000 lane count 2
[20:26:16] [PASSED] Link rate 810000 lane count 1
[20:26:16] [PASSED] Link rate 540000 lane count 4
[20:26:16] [PASSED] Link rate 540000 lane count 2
[20:26:16] [PASSED] Link rate 540000 lane count 1
[20:26:16] [PASSED] Link rate 270000 lane count 4
[20:26:16] [PASSED] Link rate 270000 lane count 2
[20:26:16] [PASSED] Link rate 270000 lane count 1
[20:26:16] [PASSED] Link rate 162000 lane count 4
[20:26:16] [PASSED] Link rate 162000 lane count 2
[20:26:16] [PASSED] Link rate 162000 lane count 1
[20:26:16] ========== [PASSED] drm_test_dp_mst_calc_pbn_div ===========
[20:26:16] ========= drm_test_dp_mst_sideband_msg_req_decode  =========
[20:26:16] [PASSED] DP_ENUM_PATH_RESOURCES with port number
[20:26:16] [PASSED] DP_POWER_UP_PHY with port number
[20:26:16] [PASSED] DP_POWER_DOWN_PHY with port number
[20:26:16] [PASSED] DP_ALLOCATE_PAYLOAD with SDP stream sinks
[20:26:16] [PASSED] DP_ALLOCATE_PAYLOAD with port number
[20:26:16] [PASSED] DP_ALLOCATE_PAYLOAD with VCPI
[20:26:16] [PASSED] DP_ALLOCATE_PAYLOAD with PBN
[20:26:16] [PASSED] DP_QUERY_PAYLOAD with port number
[20:26:16] [PASSED] DP_QUERY_PAYLOAD with VCPI
[20:26:16] [PASSED] DP_REMOTE_DPCD_READ with port number
[20:26:16] [PASSED] DP_REMOTE_DPCD_READ with DPCD address
[20:26:16] [PASSED] DP_REMOTE_DPCD_READ with max number of bytes
[20:26:16] [PASSED] DP_REMOTE_DPCD_WRITE with port number
[20:26:16] [PASSED] DP_REMOTE_DPCD_WRITE with DPCD address
[20:26:16] [PASSED] DP_REMOTE_DPCD_WRITE with data array
[20:26:16] [PASSED] DP_REMOTE_I2C_READ with port number
[20:26:16] [PASSED] DP_REMOTE_I2C_READ with I2C device ID
[20:26:16] [PASSED] DP_REMOTE_I2C_READ with transactions array
[20:26:16] [PASSED] DP_REMOTE_I2C_WRITE with port number
[20:26:16] [PASSED] DP_REMOTE_I2C_WRITE with I2C device ID
[20:26:16] [PASSED] DP_REMOTE_I2C_WRITE with data array
[20:26:16] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream ID
[20:26:16] [PASSED] DP_QUERY_STREAM_ENC_STATUS with client ID
[20:26:16] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream event
[20:26:16] [PASSED] DP_QUERY_STREAM_ENC_STATUS with valid stream event
[20:26:16] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream behavior
[20:26:16] [PASSED] DP_QUERY_STREAM_ENC_STATUS with a valid stream behavior
[20:26:16] ===== [PASSED] drm_test_dp_mst_sideband_msg_req_decode =====
[20:26:16] ================ [PASSED] drm_dp_mst_helper ================
[20:26:16] ================== drm_exec (7 subtests) ===================
[20:26:16] [PASSED] sanitycheck
[20:26:16] [PASSED] test_lock
[20:26:16] [PASSED] test_lock_unlock
[20:26:16] [PASSED] test_duplicates
[20:26:16] [PASSED] test_prepare
[20:26:16] [PASSED] test_prepare_array
[20:26:16] [PASSED] test_multiple_loops
[20:26:16] ==================== [PASSED] drm_exec =====================
[20:26:16] =========== drm_format_helper_test (17 subtests) ===========
[20:26:16] ============== drm_test_fb_xrgb8888_to_gray8  ==============
[20:26:16] [PASSED] single_pixel_source_buffer
[20:26:16] [PASSED] single_pixel_clip_rectangle
[20:26:16] [PASSED] well_known_colors
[20:26:16] [PASSED] destination_pitch
[20:26:16] ========== [PASSED] drm_test_fb_xrgb8888_to_gray8 ==========
[20:26:16] ============= drm_test_fb_xrgb8888_to_rgb332  ==============
[20:26:16] [PASSED] single_pixel_source_buffer
[20:26:16] [PASSED] single_pixel_clip_rectangle
[20:26:16] [PASSED] well_known_colors
[20:26:16] [PASSED] destination_pitch
[20:26:16] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb332 ==========
[20:26:16] ============= drm_test_fb_xrgb8888_to_rgb565  ==============
[20:26:16] [PASSED] single_pixel_source_buffer
[20:26:16] [PASSED] single_pixel_clip_rectangle
[20:26:16] [PASSED] well_known_colors
[20:26:16] [PASSED] destination_pitch
[20:26:16] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb565 ==========
[20:26:16] ============ drm_test_fb_xrgb8888_to_xrgb1555  =============
[20:26:16] [PASSED] single_pixel_source_buffer
[20:26:16] [PASSED] single_pixel_clip_rectangle
[20:26:16] [PASSED] well_known_colors
[20:26:16] [PASSED] destination_pitch
[20:26:16] ======== [PASSED] drm_test_fb_xrgb8888_to_xrgb1555 =========
[20:26:16] ============ drm_test_fb_xrgb8888_to_argb1555  =============
[20:26:16] [PASSED] single_pixel_source_buffer
[20:26:16] [PASSED] single_pixel_clip_rectangle
[20:26:16] [PASSED] well_known_colors
[20:26:16] [PASSED] destination_pitch
[20:26:16] ======== [PASSED] drm_test_fb_xrgb8888_to_argb1555 =========
[20:26:16] ============ drm_test_fb_xrgb8888_to_rgba5551  =============
[20:26:16] [PASSED] single_pixel_source_buffer
[20:26:16] [PASSED] single_pixel_clip_rectangle
[20:26:16] [PASSED] well_known_colors
[20:26:16] [PASSED] destination_pitch
[20:26:16] ======== [PASSED] drm_test_fb_xrgb8888_to_rgba5551 =========
[20:26:16] ============= drm_test_fb_xrgb8888_to_rgb888  ==============
[20:26:16] [PASSED] single_pixel_source_buffer
[20:26:16] [PASSED] single_pixel_clip_rectangle
[20:26:16] [PASSED] well_known_colors
[20:26:16] [PASSED] destination_pitch
[20:26:16] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb888 ==========
[20:26:16] ============= drm_test_fb_xrgb8888_to_bgr888  ==============
[20:26:16] [PASSED] single_pixel_source_buffer
[20:26:16] [PASSED] single_pixel_clip_rectangle
[20:26:16] [PASSED] well_known_colors
[20:26:16] [PASSED] destination_pitch
[20:26:16] ========= [PASSED] drm_test_fb_xrgb8888_to_bgr888 ==========
[20:26:16] ============ drm_test_fb_xrgb8888_to_argb8888  =============
[20:26:16] [PASSED] single_pixel_source_buffer
[20:26:16] [PASSED] single_pixel_clip_rectangle
[20:26:16] [PASSED] well_known_colors
[20:26:16] [PASSED] destination_pitch
[20:26:16] ======== [PASSED] drm_test_fb_xrgb8888_to_argb8888 =========
[20:26:16] =========== drm_test_fb_xrgb8888_to_xrgb2101010  ===========
[20:26:16] [PASSED] single_pixel_source_buffer
[20:26:16] [PASSED] single_pixel_clip_rectangle
[20:26:16] [PASSED] well_known_colors
[20:26:16] [PASSED] destination_pitch
[20:26:16] ======= [PASSED] drm_test_fb_xrgb8888_to_xrgb2101010 =======
[20:26:16] =========== drm_test_fb_xrgb8888_to_argb2101010  ===========
[20:26:16] [PASSED] single_pixel_source_buffer
[20:26:16] [PASSED] single_pixel_clip_rectangle
[20:26:16] [PASSED] well_known_colors
[20:26:16] [PASSED] destination_pitch
[20:26:16] ======= [PASSED] drm_test_fb_xrgb8888_to_argb2101010 =======
[20:26:16] ============== drm_test_fb_xrgb8888_to_mono  ===============
[20:26:16] [PASSED] single_pixel_source_buffer
[20:26:16] [PASSED] single_pixel_clip_rectangle
[20:26:16] [PASSED] well_known_colors
[20:26:16] [PASSED] destination_pitch
[20:26:16] ========== [PASSED] drm_test_fb_xrgb8888_to_mono ===========
[20:26:16] ==================== drm_test_fb_swab  =====================
[20:26:16] [PASSED] single_pixel_source_buffer
[20:26:16] [PASSED] single_pixel_clip_rectangle
[20:26:16] [PASSED] well_known_colors
[20:26:16] [PASSED] destination_pitch
[20:26:16] ================ [PASSED] drm_test_fb_swab =================
[20:26:16] ============ drm_test_fb_xrgb8888_to_xbgr8888  =============
[20:26:16] [PASSED] single_pixel_source_buffer
[20:26:16] [PASSED] single_pixel_clip_rectangle
[20:26:16] [PASSED] well_known_colors
[20:26:16] [PASSED] destination_pitch
[20:26:16] ======== [PASSED] drm_test_fb_xrgb8888_to_xbgr8888 =========
[20:26:16] ============ drm_test_fb_xrgb8888_to_abgr8888  =============
[20:26:16] [PASSED] single_pixel_source_buffer
[20:26:16] [PASSED] single_pixel_clip_rectangle
[20:26:16] [PASSED] well_known_colors
[20:26:16] [PASSED] destination_pitch
[20:26:16] ======== [PASSED] drm_test_fb_xrgb8888_to_abgr8888 =========
[20:26:16] ================= drm_test_fb_clip_offset  =================
[20:26:16] [PASSED] pass through
[20:26:16] [PASSED] horizontal offset
[20:26:16] [PASSED] vertical offset
[20:26:16] [PASSED] horizontal and vertical offset
[20:26:16] [PASSED] horizontal offset (custom pitch)
[20:26:16] [PASSED] vertical offset (custom pitch)
[20:26:16] [PASSED] horizontal and vertical offset (custom pitch)
[20:26:16] ============= [PASSED] drm_test_fb_clip_offset =============
[20:26:16] =================== drm_test_fb_memcpy  ====================
[20:26:16] [PASSED] single_pixel_source_buffer: XR24 little-endian (0x34325258)
[20:26:16] [PASSED] single_pixel_source_buffer: XRA8 little-endian (0x38415258)
[20:26:16] [PASSED] single_pixel_source_buffer: YU24 little-endian (0x34325559)
[20:26:16] [PASSED] single_pixel_clip_rectangle: XB24 little-endian (0x34324258)
[20:26:16] [PASSED] single_pixel_clip_rectangle: XRA8 little-endian (0x38415258)
[20:26:16] [PASSED] single_pixel_clip_rectangle: YU24 little-endian (0x34325559)
[20:26:16] [PASSED] well_known_colors: XB24 little-endian (0x34324258)
[20:26:16] [PASSED] well_known_colors: XRA8 little-endian (0x38415258)
[20:26:16] [PASSED] well_known_colors: YU24 little-endian (0x34325559)
[20:26:16] [PASSED] destination_pitch: XB24 little-endian (0x34324258)
[20:26:16] [PASSED] destination_pitch: XRA8 little-endian (0x38415258)
[20:26:16] [PASSED] destination_pitch: YU24 little-endian (0x34325559)
[20:26:16] =============== [PASSED] drm_test_fb_memcpy ================
[20:26:16] ============= [PASSED] drm_format_helper_test ==============
[20:26:16] ================= drm_format (18 subtests) =================
[20:26:16] [PASSED] drm_test_format_block_width_invalid
[20:26:16] [PASSED] drm_test_format_block_width_one_plane
[20:26:16] [PASSED] drm_test_format_block_width_two_plane
[20:26:16] [PASSED] drm_test_format_block_width_three_plane
[20:26:16] [PASSED] drm_test_format_block_width_tiled
[20:26:16] [PASSED] drm_test_format_block_height_invalid
[20:26:16] [PASSED] drm_test_format_block_height_one_plane
[20:26:16] [PASSED] drm_test_format_block_height_two_plane
[20:26:16] [PASSED] drm_test_format_block_height_three_plane
[20:26:16] [PASSED] drm_test_format_block_height_tiled
[20:26:16] [PASSED] drm_test_format_min_pitch_invalid
[20:26:16] [PASSED] drm_test_format_min_pitch_one_plane_8bpp
[20:26:16] [PASSED] drm_test_format_min_pitch_one_plane_16bpp
[20:26:16] [PASSED] drm_test_format_min_pitch_one_plane_24bpp
[20:26:16] [PASSED] drm_test_format_min_pitch_one_plane_32bpp
[20:26:16] [PASSED] drm_test_format_min_pitch_two_plane
[20:26:16] [PASSED] drm_test_format_min_pitch_three_plane_8bpp
[20:26:16] [PASSED] drm_test_format_min_pitch_tiled
[20:26:16] =================== [PASSED] drm_format ====================
[20:26:16] ============== drm_framebuffer (10 subtests) ===============
[20:26:16] ========== drm_test_framebuffer_check_src_coords  ==========
[20:26:16] [PASSED] Success: source fits into fb
[20:26:16] [PASSED] Fail: overflowing fb with x-axis coordinate
[20:26:16] [PASSED] Fail: overflowing fb with y-axis coordinate
[20:26:16] [PASSED] Fail: overflowing fb with source width
[20:26:16] [PASSED] Fail: overflowing fb with source height
[20:26:16] ====== [PASSED] drm_test_framebuffer_check_src_coords ======
[20:26:16] [PASSED] drm_test_framebuffer_cleanup
[20:26:16] =============== drm_test_framebuffer_create  ===============
[20:26:16] [PASSED] ABGR8888 normal sizes
[20:26:16] [PASSED] ABGR8888 max sizes
[20:26:16] [PASSED] ABGR8888 pitch greater than min required
[20:26:16] [PASSED] ABGR8888 pitch less than min required
[20:26:16] [PASSED] ABGR8888 Invalid width
[20:26:16] [PASSED] ABGR8888 Invalid buffer handle
[20:26:16] [PASSED] No pixel format
[20:26:16] [PASSED] ABGR8888 Width 0
[20:26:16] [PASSED] ABGR8888 Height 0
[20:26:16] [PASSED] ABGR8888 Out of bound height * pitch combination
[20:26:16] [PASSED] ABGR8888 Large buffer offset
[20:26:16] [PASSED] ABGR8888 Buffer offset for inexistent plane
[20:26:16] [PASSED] ABGR8888 Invalid flag
[20:26:16] [PASSED] ABGR8888 Set DRM_MODE_FB_MODIFIERS without modifiers
[20:26:16] [PASSED] ABGR8888 Valid buffer modifier
[20:26:16] [PASSED] ABGR8888 Invalid buffer modifier(DRM_FORMAT_MOD_SAMSUNG_64_32_TILE)
[20:26:16] [PASSED] ABGR8888 Extra pitches without DRM_MODE_FB_MODIFIERS
[20:26:16] [PASSED] ABGR8888 Extra pitches with DRM_MODE_FB_MODIFIERS
[20:26:16] [PASSED] NV12 Normal sizes
[20:26:16] [PASSED] NV12 Max sizes
[20:26:16] [PASSED] NV12 Invalid pitch
[20:26:16] [PASSED] NV12 Invalid modifier/missing DRM_MODE_FB_MODIFIERS flag
[20:26:16] [PASSED] NV12 different  modifier per-plane
[20:26:16] [PASSED] NV12 with DRM_FORMAT_MOD_SAMSUNG_64_32_TILE
[20:26:16] [PASSED] NV12 Valid modifiers without DRM_MODE_FB_MODIFIERS
[20:26:16] [PASSED] NV12 Modifier for inexistent plane
[20:26:16] [PASSED] NV12 Handle for inexistent plane
[20:26:16] [PASSED] NV12 Handle for inexistent plane without DRM_MODE_FB_MODIFIERS
[20:26:16] [PASSED] YVU420 DRM_MODE_FB_MODIFIERS set without modifier
[20:26:16] [PASSED] YVU420 Normal sizes
[20:26:16] [PASSED] YVU420 Max sizes
[20:26:16] [PASSED] YVU420 Invalid pitch
[20:26:16] [PASSED] YVU420 Different pitches
[20:26:16] [PASSED] YVU420 Different buffer offsets/pitches
[20:26:16] [PASSED] YVU420 Modifier set just for plane 0, without DRM_MODE_FB_MODIFIERS
[20:26:16] [PASSED] YVU420 Modifier set just for planes 0, 1, without DRM_MODE_FB_MODIFIERS
[20:26:16] [PASSED] YVU420 Modifier set just for plane 0, 1, with DRM_MODE_FB_MODIFIERS
[20:26:16] [PASSED] YVU420 Valid modifier
[20:26:16] [PASSED] YVU420 Different modifiers per plane
[20:26:16] [PASSED] YVU420 Modifier for inexistent plane
[20:26:16] [PASSED] YUV420_10BIT Invalid modifier(DRM_FORMAT_MOD_LINEAR)
[20:26:16] [PASSED] X0L2 Normal sizes
[20:26:16] [PASSED] X0L2 Max sizes
[20:26:16] [PASSED] X0L2 Invalid pitch
[20:26:16] [PASSED] X0L2 Pitch greater than minimum required
[20:26:16] [PASSED] X0L2 Handle for inexistent plane
[20:26:16] [PASSED] X0L2 Offset for inexistent plane, without DRM_MODE_FB_MODIFIERS set
[20:26:16] [PASSED] X0L2 Modifier without DRM_MODE_FB_MODIFIERS set
[20:26:16] [PASSED] X0L2 Valid modifier
[20:26:16] [PASSED] X0L2 Modifier for inexistent plane
[20:26:16] =========== [PASSED] drm_test_framebuffer_create ===========
[20:26:16] [PASSED] drm_test_framebuffer_free
[20:26:16] [PASSED] drm_test_framebuffer_init
[20:26:16] [PASSED] drm_test_framebuffer_init_bad_format
[20:26:16] [PASSED] drm_test_framebuffer_init_dev_mismatch
[20:26:16] [PASSED] drm_test_framebuffer_lookup
[20:26:16] [PASSED] drm_test_framebuffer_lookup_inexistent
[20:26:16] [PASSED] drm_test_framebuffer_modifiers_not_supported
[20:26:16] ================= [PASSED] drm_framebuffer =================
[20:26:16] ================ drm_gem_shmem (8 subtests) ================
[20:26:16] [PASSED] drm_gem_shmem_test_obj_create
[20:26:16] [PASSED] drm_gem_shmem_test_obj_create_private
[20:26:16] [PASSED] drm_gem_shmem_test_pin_pages
[20:26:16] [PASSED] drm_gem_shmem_test_vmap
[20:26:16] [PASSED] drm_gem_shmem_test_get_pages_sgt
[20:26:16] [PASSED] drm_gem_shmem_test_get_sg_table
[20:26:16] [PASSED] drm_gem_shmem_test_madvise
[20:26:16] [PASSED] drm_gem_shmem_test_purge
[20:26:16] ================== [PASSED] drm_gem_shmem ==================
[20:26:16] === drm_atomic_helper_connector_hdmi_check (27 subtests) ===
[20:26:16] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode
[20:26:16] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode_vic_1
[20:26:16] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode
[20:26:16] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode_vic_1
[20:26:16] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode
[20:26:16] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode_vic_1
[20:26:16] ====== drm_test_check_broadcast_rgb_cea_mode_yuv420  =======
[20:26:16] [PASSED] Automatic
[20:26:16] [PASSED] Full
[20:26:16] [PASSED] Limited 16:235
[20:26:16] == [PASSED] drm_test_check_broadcast_rgb_cea_mode_yuv420 ===
[20:26:16] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_changed
[20:26:16] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_not_changed
[20:26:16] [PASSED] drm_test_check_disable_connector
[20:26:16] [PASSED] drm_test_check_hdmi_funcs_reject_rate
[20:26:16] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_rgb
[20:26:16] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_yuv420
[20:26:16] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv422
[20:26:16] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv420
[20:26:16] [PASSED] drm_test_check_driver_unsupported_fallback_yuv420
[20:26:16] [PASSED] drm_test_check_output_bpc_crtc_mode_changed
[20:26:16] [PASSED] drm_test_check_output_bpc_crtc_mode_not_changed
[20:26:16] [PASSED] drm_test_check_output_bpc_dvi
[20:26:16] [PASSED] drm_test_check_output_bpc_format_vic_1
[20:26:16] [PASSED] drm_test_check_output_bpc_format_display_8bpc_only
[20:26:16] [PASSED] drm_test_check_output_bpc_format_display_rgb_only
[20:26:16] [PASSED] drm_test_check_output_bpc_format_driver_8bpc_only
[20:26:16] [PASSED] drm_test_check_output_bpc_format_driver_rgb_only
[20:26:16] [PASSED] drm_test_check_tmds_char_rate_rgb_8bpc
[20:26:16] [PASSED] drm_test_check_tmds_char_rate_rgb_10bpc
[20:26:16] [PASSED] drm_test_check_tmds_char_rate_rgb_12bpc
[20:26:16] ===== [PASSED] drm_atomic_helper_connector_hdmi_check ======
[20:26:16] === drm_atomic_helper_connector_hdmi_reset (6 subtests) ====
[20:26:16] [PASSED] drm_test_check_broadcast_rgb_value
[20:26:16] [PASSED] drm_test_check_bpc_8_value
[20:26:16] [PASSED] drm_test_check_bpc_10_value
[20:26:16] [PASSED] drm_test_check_bpc_12_value
[20:26:16] [PASSED] drm_test_check_format_value
[20:26:16] [PASSED] drm_test_check_tmds_char_value
[20:26:16] ===== [PASSED] drm_atomic_helper_connector_hdmi_reset ======
[20:26:16] = drm_atomic_helper_connector_hdmi_mode_valid (4 subtests) =
[20:26:16] [PASSED] drm_test_check_mode_valid
[20:26:16] [PASSED] drm_test_check_mode_valid_reject
[20:26:16] [PASSED] drm_test_check_mode_valid_reject_rate
[20:26:16] [PASSED] drm_test_check_mode_valid_reject_max_clock
[20:26:16] === [PASSED] drm_atomic_helper_connector_hdmi_mode_valid ===
[20:26:16] ================= drm_managed (2 subtests) =================
[20:26:16] [PASSED] drm_test_managed_release_action
[20:26:16] [PASSED] drm_test_managed_run_action
[20:26:16] =================== [PASSED] drm_managed ===================
[20:26:16] =================== drm_mm (6 subtests) ====================
[20:26:16] [PASSED] drm_test_mm_init
[20:26:16] [PASSED] drm_test_mm_debug
[20:26:16] [PASSED] drm_test_mm_align32
[20:26:16] [PASSED] drm_test_mm_align64
[20:26:16] [PASSED] drm_test_mm_lowest
[20:26:16] [PASSED] drm_test_mm_highest
[20:26:16] ===================== [PASSED] drm_mm ======================
[20:26:16] ============= drm_modes_analog_tv (5 subtests) =============
[20:26:16] [PASSED] drm_test_modes_analog_tv_mono_576i
[20:26:16] [PASSED] drm_test_modes_analog_tv_ntsc_480i
[20:26:16] [PASSED] drm_test_modes_analog_tv_ntsc_480i_inlined
[20:26:16] [PASSED] drm_test_modes_analog_tv_pal_576i
[20:26:16] [PASSED] drm_test_modes_analog_tv_pal_576i_inlined
[20:26:16] =============== [PASSED] drm_modes_analog_tv ===============
[20:26:16] ============== drm_plane_helper (2 subtests) ===============
[20:26:16] =============== drm_test_check_plane_state  ================
[20:26:16] [PASSED] clipping_simple
[20:26:16] [PASSED] clipping_rotate_reflect
[20:26:16] [PASSED] positioning_simple
[20:26:16] [PASSED] upscaling
[20:26:16] [PASSED] downscaling
[20:26:16] [PASSED] rounding1
[20:26:16] [PASSED] rounding2
[20:26:16] [PASSED] rounding3
[20:26:16] [PASSED] rounding4
[20:26:16] =========== [PASSED] drm_test_check_plane_state ============
[20:26:16] =========== drm_test_check_invalid_plane_state  ============
[20:26:16] [PASSED] positioning_invalid
[20:26:16] [PASSED] upscaling_invalid
[20:26:16] [PASSED] downscaling_invalid
[20:26:16] ======= [PASSED] drm_test_check_invalid_plane_state ========
[20:26:16] ================ [PASSED] drm_plane_helper =================
[20:26:16] ====== drm_connector_helper_tv_get_modes (1 subtest) =======
[20:26:16] ====== drm_test_connector_helper_tv_get_modes_check  =======
[20:26:16] [PASSED] None
[20:26:16] [PASSED] PAL
[20:26:16] [PASSED] NTSC
[20:26:16] [PASSED] Both, NTSC Default
[20:26:16] [PASSED] Both, PAL Default
[20:26:16] [PASSED] Both, NTSC Default, with PAL on command-line
[20:26:16] [PASSED] Both, PAL Default, with NTSC on command-line
[20:26:16] == [PASSED] drm_test_connector_helper_tv_get_modes_check ===
[20:26:16] ======== [PASSED] drm_connector_helper_tv_get_modes ========
[20:26:16] ================== drm_rect (9 subtests) ===================
[20:26:16] [PASSED] drm_test_rect_clip_scaled_div_by_zero
[20:26:16] [PASSED] drm_test_rect_clip_scaled_not_clipped
[20:26:16] [PASSED] drm_test_rect_clip_scaled_clipped
[20:26:16] [PASSED] drm_test_rect_clip_scaled_signed_vs_unsigned
[20:26:16] ================= drm_test_rect_intersect  =================
[20:26:16] [PASSED] top-left x bottom-right: 2x2+1+1 x 2x2+0+0
[20:26:16] [PASSED] top-right x bottom-left: 2x2+0+0 x 2x2+1-1
[20:26:16] [PASSED] bottom-left x top-right: 2x2+1-1 x 2x2+0+0
[20:26:16] [PASSED] bottom-right x top-left: 2x2+0+0 x 2x2+1+1
[20:26:16] [PASSED] right x left: 2x1+0+0 x 3x1+1+0
[20:26:16] [PASSED] left x right: 3x1+1+0 x 2x1+0+0
[20:26:16] [PASSED] up x bottom: 1x2+0+0 x 1x3+0-1
[20:26:16] [PASSED] bottom x up: 1x3+0-1 x 1x2+0+0
[20:26:16] [PASSED] touching corner: 1x1+0+0 x 2x2+1+1
[20:26:16] [PASSED] touching side: 1x1+0+0 x 1x1+1+0
[20:26:16] [PASSED] equal rects: 2x2+0+0 x 2x2+0+0
[20:26:16] [PASSED] inside another: 2x2+0+0 x 1x1+1+1
[20:26:16] [PASSED] far away: 1x1+0+0 x 1x1+3+6
[20:26:16] [PASSED] points intersecting: 0x0+5+10 x 0x0+5+10
[20:26:16] [PASSED] points not intersecting: 0x0+0+0 x 0x0+5+10
[20:26:16] ============= [PASSED] drm_test_rect_intersect =============
[20:26:16] ================ drm_test_rect_calc_hscale  ================
[20:26:16] [PASSED] normal use
[20:26:16] [PASSED] out of max range
[20:26:16] [PASSED] out of min range
[20:26:16] [PASSED] zero dst
[20:26:16] [PASSED] negative src
[20:26:16] [PASSED] negative dst
[20:26:16] ============ [PASSED] drm_test_rect_calc_hscale ============
[20:26:16] ================ drm_test_rect_calc_vscale  ================
[20:26:16] [PASSED] normal use
[20:26:16] [PASSED] out of max range
[20:26:16] [PASSED] out of min range
[20:26:16] [PASSED] zero dst
[20:26:16] [PASSED] negative src
[20:26:16] [PASSED] negative dst
[20:26:16] ============ [PASSED] drm_test_rect_calc_vscale ============
[20:26:16] ================== drm_test_rect_rotate  ===================
[20:26:16] [PASSED] reflect-x
[20:26:16] [PASSED] reflect-y
[20:26:16] [PASSED] rotate-0
[20:26:16] [PASSED] rotate-90
[20:26:16] [PASSED] rotate-180
[20:26:16] [PASSED] rotate-270
[20:26:16] ============== [PASSED] drm_test_rect_rotate ===============
[20:26:16] ================ drm_test_rect_rotate_inv  =================
[20:26:16] [PASSED] reflect-x
[20:26:16] [PASSED] reflect-y
[20:26:16] [PASSED] rotate-0
[20:26:16] [PASSED] rotate-90
[20:26:16] [PASSED] rotate-180
[20:26:16] [PASSED] rotate-270
[20:26:16] ============ [PASSED] drm_test_rect_rotate_inv =============
[20:26:16] ==================== [PASSED] drm_rect =====================
[20:26:16] ============ drm_sysfb_modeset_test (1 subtest) ============
[20:26:16] ============ drm_test_sysfb_build_fourcc_list  =============
[20:26:16] [PASSED] no native formats
[20:26:16] [PASSED] XRGB8888 as native format
[20:26:16] [PASSED] remove duplicates
[20:26:16] [PASSED] convert alpha formats
[20:26:16] [PASSED] random formats
[20:26:16] ======== [PASSED] drm_test_sysfb_build_fourcc_list =========
[20:26:16] ============= [PASSED] drm_sysfb_modeset_test ==============
[20:26:16] ============================================================
[20:26:16] Testing complete. Ran 622 tests: passed: 622
[20:26:16] Elapsed time: 27.146s total, 1.710s configuring, 25.012s building, 0.423s running

+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/ttm/tests/.kunitconfig
[20:26:16] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[20:26:18] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[20:26:27] Starting KUnit Kernel (1/1)...
[20:26:27] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[20:26:27] ================= ttm_device (5 subtests) ==================
[20:26:27] [PASSED] ttm_device_init_basic
[20:26:27] [PASSED] ttm_device_init_multiple
[20:26:27] [PASSED] ttm_device_fini_basic
[20:26:27] [PASSED] ttm_device_init_no_vma_man
[20:26:27] ================== ttm_device_init_pools  ==================
[20:26:27] [PASSED] No DMA allocations, no DMA32 required
[20:26:27] [PASSED] DMA allocations, DMA32 required
[20:26:27] [PASSED] No DMA allocations, DMA32 required
[20:26:27] [PASSED] DMA allocations, no DMA32 required
[20:26:27] ============== [PASSED] ttm_device_init_pools ==============
[20:26:27] =================== [PASSED] ttm_device ====================
[20:26:27] ================== ttm_pool (8 subtests) ===================
[20:26:27] ================== ttm_pool_alloc_basic  ===================
[20:26:27] [PASSED] One page
[20:26:27] [PASSED] More than one page
[20:26:27] [PASSED] Above the allocation limit
[20:26:27] [PASSED] One page, with coherent DMA mappings enabled
[20:26:27] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[20:26:27] ============== [PASSED] ttm_pool_alloc_basic ===============
[20:26:27] ============== ttm_pool_alloc_basic_dma_addr  ==============
[20:26:27] [PASSED] One page
[20:26:27] [PASSED] More than one page
[20:26:27] [PASSED] Above the allocation limit
[20:26:27] [PASSED] One page, with coherent DMA mappings enabled
[20:26:27] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[20:26:27] ========== [PASSED] ttm_pool_alloc_basic_dma_addr ==========
[20:26:27] [PASSED] ttm_pool_alloc_order_caching_match
[20:26:27] [PASSED] ttm_pool_alloc_caching_mismatch
[20:26:27] [PASSED] ttm_pool_alloc_order_mismatch
[20:26:27] [PASSED] ttm_pool_free_dma_alloc
[20:26:27] [PASSED] ttm_pool_free_no_dma_alloc
[20:26:27] [PASSED] ttm_pool_fini_basic
[20:26:27] ==================== [PASSED] ttm_pool =====================
[20:26:27] ================ ttm_resource (8 subtests) =================
[20:26:27] ================= ttm_resource_init_basic  =================
[20:26:27] [PASSED] Init resource in TTM_PL_SYSTEM
[20:26:27] [PASSED] Init resource in TTM_PL_VRAM
[20:26:27] [PASSED] Init resource in a private placement
[20:26:27] [PASSED] Init resource in TTM_PL_SYSTEM, set placement flags
[20:26:27] ============= [PASSED] ttm_resource_init_basic =============
[20:26:27] [PASSED] ttm_resource_init_pinned
[20:26:27] [PASSED] ttm_resource_fini_basic
[20:26:27] [PASSED] ttm_resource_manager_init_basic
[20:26:27] [PASSED] ttm_resource_manager_usage_basic
[20:26:27] [PASSED] ttm_resource_manager_set_used_basic
[20:26:27] [PASSED] ttm_sys_man_alloc_basic
[20:26:27] [PASSED] ttm_sys_man_free_basic
[20:26:27] ================== [PASSED] ttm_resource ===================
[20:26:27] =================== ttm_tt (15 subtests) ===================
[20:26:27] ==================== ttm_tt_init_basic  ====================
[20:26:27] [PASSED] Page-aligned size
[20:26:27] [PASSED] Extra pages requested
[20:26:27] ================ [PASSED] ttm_tt_init_basic ================
[20:26:27] [PASSED] ttm_tt_init_misaligned
[20:26:27] [PASSED] ttm_tt_fini_basic
[20:26:27] [PASSED] ttm_tt_fini_sg
[20:26:27] [PASSED] ttm_tt_fini_shmem
[20:26:27] [PASSED] ttm_tt_create_basic
[20:26:27] [PASSED] ttm_tt_create_invalid_bo_type
[20:26:27] [PASSED] ttm_tt_create_ttm_exists
[20:26:27] [PASSED] ttm_tt_create_failed
[20:26:27] [PASSED] ttm_tt_destroy_basic
[20:26:27] [PASSED] ttm_tt_populate_null_ttm
[20:26:27] [PASSED] ttm_tt_populate_populated_ttm
[20:26:27] [PASSED] ttm_tt_unpopulate_basic
[20:26:27] [PASSED] ttm_tt_unpopulate_empty_ttm
[20:26:27] [PASSED] ttm_tt_swapin_basic
[20:26:27] ===================== [PASSED] ttm_tt ======================
[20:26:27] =================== ttm_bo (14 subtests) ===================
[20:26:27] =========== ttm_bo_reserve_optimistic_no_ticket  ===========
[20:26:27] [PASSED] Cannot be interrupted and sleeps
[20:26:27] [PASSED] Cannot be interrupted, locks straight away
[20:26:27] [PASSED] Can be interrupted, sleeps
[20:26:27] ======= [PASSED] ttm_bo_reserve_optimistic_no_ticket =======
[20:26:27] [PASSED] ttm_bo_reserve_locked_no_sleep
[20:26:27] [PASSED] ttm_bo_reserve_no_wait_ticket
[20:26:27] [PASSED] ttm_bo_reserve_double_resv
[20:26:27] [PASSED] ttm_bo_reserve_interrupted
[20:26:27] [PASSED] ttm_bo_reserve_deadlock
[20:26:27] [PASSED] ttm_bo_unreserve_basic
[20:26:27] [PASSED] ttm_bo_unreserve_pinned
[20:26:27] [PASSED] ttm_bo_unreserve_bulk
[20:26:27] [PASSED] ttm_bo_fini_basic
[20:26:27] [PASSED] ttm_bo_fini_shared_resv
[20:26:27] [PASSED] ttm_bo_pin_basic
[20:26:27] [PASSED] ttm_bo_pin_unpin_resource
[20:26:27] [PASSED] ttm_bo_multiple_pin_one_unpin
[20:26:27] ===================== [PASSED] ttm_bo ======================
[20:26:27] ============== ttm_bo_validate (21 subtests) ===============
[20:26:27] ============== ttm_bo_init_reserved_sys_man  ===============
[20:26:27] [PASSED] Buffer object for userspace
[20:26:27] [PASSED] Kernel buffer object
[20:26:27] [PASSED] Shared buffer object
[20:26:27] ========== [PASSED] ttm_bo_init_reserved_sys_man ===========
[20:26:27] ============== ttm_bo_init_reserved_mock_man  ==============
[20:26:27] [PASSED] Buffer object for userspace
[20:26:27] [PASSED] Kernel buffer object
[20:26:27] [PASSED] Shared buffer object
[20:26:27] ========== [PASSED] ttm_bo_init_reserved_mock_man ==========
[20:26:27] [PASSED] ttm_bo_init_reserved_resv
[20:26:27] ================== ttm_bo_validate_basic  ==================
[20:26:27] [PASSED] Buffer object for userspace
[20:26:27] [PASSED] Kernel buffer object
[20:26:27] [PASSED] Shared buffer object
[20:26:27] ============== [PASSED] ttm_bo_validate_basic ==============
[20:26:27] [PASSED] ttm_bo_validate_invalid_placement
[20:26:27] ============= ttm_bo_validate_same_placement  ==============
[20:26:27] [PASSED] System manager
[20:26:27] [PASSED] VRAM manager
[20:26:27] ========= [PASSED] ttm_bo_validate_same_placement ==========
[20:26:27] [PASSED] ttm_bo_validate_failed_alloc
[20:26:27] [PASSED] ttm_bo_validate_pinned
[20:26:27] [PASSED] ttm_bo_validate_busy_placement
[20:26:27] ================ ttm_bo_validate_multihop  =================
[20:26:27] [PASSED] Buffer object for userspace
[20:26:27] [PASSED] Kernel buffer object
[20:26:27] [PASSED] Shared buffer object
[20:26:27] ============ [PASSED] ttm_bo_validate_multihop =============
[20:26:27] ========== ttm_bo_validate_no_placement_signaled  ==========
[20:26:27] [PASSED] Buffer object in system domain, no page vector
[20:26:27] [PASSED] Buffer object in system domain with an existing page vector
[20:26:27] ====== [PASSED] ttm_bo_validate_no_placement_signaled ======
[20:26:27] ======== ttm_bo_validate_no_placement_not_signaled  ========
[20:26:27] [PASSED] Buffer object for userspace
[20:26:27] [PASSED] Kernel buffer object
[20:26:27] [PASSED] Shared buffer object
[20:26:27] ==== [PASSED] ttm_bo_validate_no_placement_not_signaled ====
[20:26:27] [PASSED] ttm_bo_validate_move_fence_signaled
[20:26:27] ========= ttm_bo_validate_move_fence_not_signaled  =========
[20:26:27] [PASSED] Waits for GPU
[20:26:27] [PASSED] Tries to lock straight away
[20:26:27] ===== [PASSED] ttm_bo_validate_move_fence_not_signaled =====
[20:26:27] [PASSED] ttm_bo_validate_happy_evict
[20:26:27] [PASSED] ttm_bo_validate_all_pinned_evict
[20:26:27] [PASSED] ttm_bo_validate_allowed_only_evict
[20:26:27] [PASSED] ttm_bo_validate_deleted_evict
[20:26:27] [PASSED] ttm_bo_validate_busy_domain_evict
[20:26:27] [PASSED] ttm_bo_validate_evict_gutting
[20:26:27] [PASSED] ttm_bo_validate_recrusive_evict
[20:26:27] ================= [PASSED] ttm_bo_validate =================
[20:26:27] ============================================================
[20:26:27] Testing complete. Ran 101 tests: passed: 101
[20:26:27] Elapsed time: 11.257s total, 1.582s configuring, 9.459s building, 0.183s running

+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel



^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v5 8/8] drm/xe: Avoid toggling schedule state to check LRC timestamp in TDR
  2025-11-26 20:19 ` [PATCH v5 8/8] drm/xe: Avoid toggling schedule state to check LRC timestamp in TDR Matthew Brost
@ 2025-11-26 21:21   ` Matthew Brost
  0 siblings, 0 replies; 14+ messages in thread
From: Matthew Brost @ 2025-11-26 21:21 UTC (permalink / raw)
  To: intel-xe; +Cc: dri-devel

On Wed, Nov 26, 2025 at 12:19:16PM -0800, Matthew Brost wrote:
> We now have proper infrastructure to accurately check the LRC timestamp
> without toggling the scheduling state for non-VFs. For VFs, it is still
> possible to get an inaccurate view if the context is on hardware. We
> guard against free-running contexts on VFs by banning jobs whose
> timestamps are not moving. In addition, VFs have a timeslice quantum
> that naturally triggers context switches when more than one VF is
> running, thus updating the LRC timestamp.
> 
> For multi-queue, it is desirable to avoid scheduling toggling in the TDR
> because this scheduling state is shared among many queues. Furthermore,
> this change simplifies the GuC state machine. The trade-off for VF cases
> seems worthwhile.
> 
> v5:
>  - Add xe_lrc_timestamp helper (Umesh)
> 

Ignore this patch; it is broken on a VF. I believe I have a fix, but
that doesn't appear to be working either... I'll have to dig in after
the break.

Matt

> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> ---
>  drivers/gpu/drm/xe/xe_guc_submit.c      | 97 ++++++-------------------
>  drivers/gpu/drm/xe/xe_lrc.c             | 42 +++++++----
>  drivers/gpu/drm/xe/xe_lrc.h             |  3 +-
>  drivers/gpu/drm/xe/xe_sched_job.c       |  1 +
>  drivers/gpu/drm/xe/xe_sched_job_types.h |  2 +
>  5 files changed, 56 insertions(+), 89 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
> index db3c57d758c6..b8022826795b 100644
> --- a/drivers/gpu/drm/xe/xe_guc_submit.c
> +++ b/drivers/gpu/drm/xe/xe_guc_submit.c
> @@ -68,9 +68,7 @@ exec_queue_to_guc(struct xe_exec_queue *q)
>  #define EXEC_QUEUE_STATE_KILLED			(1 << 7)
>  #define EXEC_QUEUE_STATE_WEDGED			(1 << 8)
>  #define EXEC_QUEUE_STATE_BANNED			(1 << 9)
> -#define EXEC_QUEUE_STATE_CHECK_TIMEOUT		(1 << 10)
> -#define EXEC_QUEUE_STATE_PENDING_RESUME		(1 << 11)
> -#define EXEC_QUEUE_STATE_PENDING_TDR_EXIT	(1 << 12)
> +#define EXEC_QUEUE_STATE_PENDING_RESUME		(1 << 10)
>  
>  static bool exec_queue_registered(struct xe_exec_queue *q)
>  {
> @@ -202,21 +200,6 @@ static void set_exec_queue_wedged(struct xe_exec_queue *q)
>  	atomic_or(EXEC_QUEUE_STATE_WEDGED, &q->guc->state);
>  }
>  
> -static bool exec_queue_check_timeout(struct xe_exec_queue *q)
> -{
> -	return atomic_read(&q->guc->state) & EXEC_QUEUE_STATE_CHECK_TIMEOUT;
> -}
> -
> -static void set_exec_queue_check_timeout(struct xe_exec_queue *q)
> -{
> -	atomic_or(EXEC_QUEUE_STATE_CHECK_TIMEOUT, &q->guc->state);
> -}
> -
> -static void clear_exec_queue_check_timeout(struct xe_exec_queue *q)
> -{
> -	atomic_and(~EXEC_QUEUE_STATE_CHECK_TIMEOUT, &q->guc->state);
> -}
> -
>  static bool exec_queue_pending_resume(struct xe_exec_queue *q)
>  {
>  	return atomic_read(&q->guc->state) & EXEC_QUEUE_STATE_PENDING_RESUME;
> @@ -232,21 +215,6 @@ static void clear_exec_queue_pending_resume(struct xe_exec_queue *q)
>  	atomic_and(~EXEC_QUEUE_STATE_PENDING_RESUME, &q->guc->state);
>  }
>  
> -static bool exec_queue_pending_tdr_exit(struct xe_exec_queue *q)
> -{
> -	return atomic_read(&q->guc->state) & EXEC_QUEUE_STATE_PENDING_TDR_EXIT;
> -}
> -
> -static void set_exec_queue_pending_tdr_exit(struct xe_exec_queue *q)
> -{
> -	atomic_or(EXEC_QUEUE_STATE_PENDING_TDR_EXIT, &q->guc->state);
> -}
> -
> -static void clear_exec_queue_pending_tdr_exit(struct xe_exec_queue *q)
> -{
> -	atomic_and(~EXEC_QUEUE_STATE_PENDING_TDR_EXIT, &q->guc->state);
> -}
> -
>  static bool exec_queue_killed_or_banned_or_wedged(struct xe_exec_queue *q)
>  {
>  	return (atomic_read(&q->guc->state) &
> @@ -1006,7 +974,16 @@ static bool check_timeout(struct xe_exec_queue *q, struct xe_sched_job *job)
>  		return xe_sched_invalidate_job(job, 2);
>  	}
>  
> -	ctx_timestamp = lower_32_bits(xe_lrc_ctx_timestamp(q->lrc[0]));
> +	ctx_timestamp = lower_32_bits(xe_lrc_timestamp(q->lrc[0]));
> +	if (ctx_timestamp == job->sample_timestamp) {
> +		xe_gt_warn(gt, "Check job timeout: seqno=%u, lrc_seqno=%u, guc_id=%d, timestamp stuck",
> +			   xe_sched_job_seqno(job), xe_sched_job_lrc_seqno(job),
> +			   q->guc->id);
> +
> +		return xe_sched_invalidate_job(job, 2);
> +	}
> +
> +	job->sample_timestamp = ctx_timestamp;
>  	ctx_job_timestamp = xe_lrc_ctx_job_timestamp(q->lrc[0]);
>  
>  	/*
> @@ -1132,16 +1109,17 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
>  	}
>  
>  	/*
> -	 * XXX: Sampling timeout doesn't work in wedged mode as we have to
> -	 * modify scheduling state to read timestamp. We could read the
> -	 * timestamp from a register to accumulate current running time but this
> -	 * doesn't work for SRIOV. For now assuming timeouts in wedged mode are
> -	 * genuine timeouts.
> +	 * Check if job is actually timed out, if so restart job execution and TDR
>  	 */
> +	if (!skip_timeout_check && !check_timeout(q, job))
> +		goto rearm;
> +
>  	if (!exec_queue_killed(q))
>  		wedged = guc_submit_hint_wedged(exec_queue_to_guc(q));
>  
> -	/* Engine state now stable, disable scheduling to check timestamp */
> +	set_exec_queue_banned(q);
> +
> +	/* Kick job / queue off hardware */
>  	if (!wedged && (exec_queue_enabled(q) || exec_queue_pending_disable(q))) {
>  		int ret;
>  
> @@ -1163,13 +1141,6 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
>  			if (!ret || xe_guc_read_stopped(guc))
>  				goto trigger_reset;
>  
> -			/*
> -			 * Flag communicates to G2H handler that schedule
> -			 * disable originated from a timeout check. The G2H then
> -			 * avoid triggering cleanup or deregistering the exec
> -			 * queue.
> -			 */
> -			set_exec_queue_check_timeout(q);
>  			disable_scheduling(q, skip_timeout_check);
>  		}
>  
> @@ -1198,22 +1169,12 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
>  			xe_devcoredump(q, job,
>  				       "Schedule disable failed to respond, guc_id=%d, ret=%d, guc_read=%d",
>  				       q->guc->id, ret, xe_guc_read_stopped(guc));
> -			set_exec_queue_banned(q);
>  			xe_gt_reset_async(q->gt);
>  			xe_sched_tdr_queue_imm(sched);
>  			goto rearm;
>  		}
>  	}
>  
> -	/*
> -	 * Check if job is actually timed out, if so restart job execution and TDR
> -	 */
> -	if (!wedged && !skip_timeout_check && !check_timeout(q, job) &&
> -	    !exec_queue_reset(q) && exec_queue_registered(q)) {
> -		clear_exec_queue_check_timeout(q);
> -		goto sched_enable;
> -	}
> -
>  	if (q->vm && q->vm->xef) {
>  		process_name = q->vm->xef->process_name;
>  		pid = q->vm->xef->pid;
> @@ -1244,14 +1205,11 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
>  	if (!wedged && (q->flags & EXEC_QUEUE_FLAG_KERNEL ||
>  			(q->flags & EXEC_QUEUE_FLAG_VM && !exec_queue_killed(q)))) {
>  		if (!xe_sched_invalidate_job(job, 2)) {
> -			clear_exec_queue_check_timeout(q);
>  			xe_gt_reset_async(q->gt);
>  			goto rearm;
>  		}
>  	}
>  
> -	set_exec_queue_banned(q);
> -
>  	/* Mark all outstanding jobs as bad, thus completing them */
>  	xe_sched_job_set_error(job, err);
>  	drm_sched_for_each_pending_job(tmp_job, &sched->base, NULL)
> @@ -1266,9 +1224,6 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
>  	 */
>  	return DRM_GPU_SCHED_STAT_NO_HANG;
>  
> -sched_enable:
> -	set_exec_queue_pending_tdr_exit(q);
> -	enable_scheduling(q);
>  rearm:
>  	/*
>  	 * XXX: Ideally want to adjust timeout based on current execution time
> @@ -1898,8 +1853,7 @@ static void guc_exec_queue_revert_pending_state_change(struct xe_guc *guc,
>  			  q->guc->id);
>  	}
>  
> -	if (pending_enable && !pending_resume &&
> -	    !exec_queue_pending_tdr_exit(q)) {
> +	if (pending_enable && !pending_resume) {
>  		clear_exec_queue_registered(q);
>  		xe_gt_dbg(guc_to_gt(guc), "Replay REGISTER - guc_id=%d",
>  			  q->guc->id);
> @@ -1908,7 +1862,6 @@ static void guc_exec_queue_revert_pending_state_change(struct xe_guc *guc,
>  	if (pending_enable) {
>  		clear_exec_queue_enabled(q);
>  		clear_exec_queue_pending_resume(q);
> -		clear_exec_queue_pending_tdr_exit(q);
>  		clear_exec_queue_pending_enable(q);
>  		xe_gt_dbg(guc_to_gt(guc), "Replay ENABLE - guc_id=%d",
>  			  q->guc->id);
> @@ -1934,7 +1887,6 @@ static void guc_exec_queue_revert_pending_state_change(struct xe_guc *guc,
>  		if (!pending_enable)
>  			set_exec_queue_enabled(q);
>  		clear_exec_queue_pending_disable(q);
> -		clear_exec_queue_check_timeout(q);
>  		xe_gt_dbg(guc_to_gt(guc), "Replay DISABLE - guc_id=%d",
>  			  q->guc->id);
>  	}
> @@ -2274,13 +2226,10 @@ static void handle_sched_done(struct xe_guc *guc, struct xe_exec_queue *q,
>  
>  		q->guc->resume_time = ktime_get();
>  		clear_exec_queue_pending_resume(q);
> -		clear_exec_queue_pending_tdr_exit(q);
>  		clear_exec_queue_pending_enable(q);
>  		smp_wmb();
>  		wake_up_all(&guc->ct.wq);
>  	} else {
> -		bool check_timeout = exec_queue_check_timeout(q);
> -
>  		xe_gt_assert(guc_to_gt(guc), runnable_state == 0);
>  		xe_gt_assert(guc_to_gt(guc), exec_queue_pending_disable(q));
>  
> @@ -2288,11 +2237,11 @@ static void handle_sched_done(struct xe_guc *guc, struct xe_exec_queue *q,
>  			suspend_fence_signal(q);
>  			clear_exec_queue_pending_disable(q);
>  		} else {
> -			if (exec_queue_banned(q) || check_timeout) {
> +			if (exec_queue_banned(q)) {
>  				smp_wmb();
>  				wake_up_all(&guc->ct.wq);
>  			}
> -			if (!check_timeout && exec_queue_destroyed(q)) {
> +			if (exec_queue_destroyed(q)) {
>  				/*
>  				 * Make sure to clear the pending_disable only
>  				 * after sampling the destroyed state. We want
> @@ -2402,7 +2351,7 @@ int xe_guc_exec_queue_reset_handler(struct xe_guc *guc, u32 *msg, u32 len)
>  	 * guc_exec_queue_timedout_job.
>  	 */
>  	set_exec_queue_reset(q);
> -	if (!exec_queue_banned(q) && !exec_queue_check_timeout(q))
> +	if (!exec_queue_banned(q))
>  		xe_guc_exec_queue_trigger_cleanup(q);
>  
>  	return 0;
> @@ -2483,7 +2432,7 @@ int xe_guc_exec_queue_memory_cat_error_handler(struct xe_guc *guc, u32 *msg,
>  
>  	/* Treat the same as engine reset */
>  	set_exec_queue_reset(q);
> -	if (!exec_queue_banned(q) && !exec_queue_check_timeout(q))
> +	if (!exec_queue_banned(q))
>  		xe_guc_exec_queue_trigger_cleanup(q);
>  
>  	return 0;
> diff --git a/drivers/gpu/drm/xe/xe_lrc.c b/drivers/gpu/drm/xe/xe_lrc.c
> index b5083c99dd50..c9bfd11a8d5e 100644
> --- a/drivers/gpu/drm/xe/xe_lrc.c
> +++ b/drivers/gpu/drm/xe/xe_lrc.c
> @@ -839,7 +839,7 @@ u32 xe_lrc_ctx_timestamp_udw_ggtt_addr(struct xe_lrc *lrc)
>   *
>   * Returns: ctx timestamp value
>   */
> -u64 xe_lrc_ctx_timestamp(struct xe_lrc *lrc)
> +static u64 xe_lrc_ctx_timestamp(struct xe_lrc *lrc)
>  {
>  	struct xe_device *xe = lrc_to_xe(lrc);
>  	struct iosys_map map;
> @@ -2353,35 +2353,29 @@ static int get_ctx_timestamp(struct xe_lrc *lrc, u32 engine_id, u64 *reg_ctx_ts)
>  }
>  
>  /**
> - * xe_lrc_update_timestamp() - Update ctx timestamp
> + * xe_lrc_timestamp() - Current ctx timestamp
>   * @lrc: Pointer to the lrc.
> - * @old_ts: Old timestamp value
>   *
> - * Populate @old_ts current saved ctx timestamp, read new ctx timestamp and
> - * update saved value. With support for active contexts, the calculation may be
> - * slightly racy, so follow a read-again logic to ensure that the context is
> - * still active before returning the right timestamp.
> + * Return latest ctx timestamp.
>   *
>   * Returns: New ctx timestamp value
>   */
> -u64 xe_lrc_update_timestamp(struct xe_lrc *lrc, u64 *old_ts)
> +u64 xe_lrc_timestamp(struct xe_lrc *lrc)
>  {
> -	u64 lrc_ts, reg_ts;
> +	u64 lrc_ts, reg_ts, new_ts;
>  	u32 engine_id;
>  
> -	*old_ts = lrc->ctx_timestamp;
> -
>  	lrc_ts = xe_lrc_ctx_timestamp(lrc);
>  	/* CTX_TIMESTAMP mmio read is invalid on VF, so return the LRC value */
>  	if (IS_SRIOV_VF(lrc_to_xe(lrc))) {
> -		lrc->ctx_timestamp = lrc_ts;
> +		new_ts = lrc_ts;
>  		goto done;
>  	}
>  
>  	if (lrc_ts == CONTEXT_ACTIVE) {
>  		engine_id = xe_lrc_engine_id(lrc);
>  		if (!get_ctx_timestamp(lrc, engine_id, &reg_ts))
> -			lrc->ctx_timestamp = reg_ts;
> +			new_ts = reg_ts;
>  
>  		/* read lrc again to ensure context is still active */
>  		lrc_ts = xe_lrc_ctx_timestamp(lrc);
> @@ -2392,9 +2386,29 @@ u64 xe_lrc_update_timestamp(struct xe_lrc *lrc, u64 *old_ts)
>  	 * be a separate if condition.
>  	 */
>  	if (lrc_ts != CONTEXT_ACTIVE)
> -		lrc->ctx_timestamp = lrc_ts;
> +		new_ts = lrc_ts;
>  
>  done:
> +	return new_ts;
> +}
> +
> +/**
> + * xe_lrc_update_timestamp() - Update ctx timestamp
> + * @lrc: Pointer to the lrc.
> + * @old_ts: Old timestamp value
> + *
> + * Populate @old_ts current saved ctx timestamp, read new ctx timestamp and
> + * update saved value. With support for active contexts, the calculation may be
> + * slightly racy, so follow a read-again logic to ensure that the context is
> + * still active before returning the right timestamp.
> + *
> + * Returns: New ctx timestamp value
> + */
> +u64 xe_lrc_update_timestamp(struct xe_lrc *lrc, u64 *old_ts)
> +{
> +	*old_ts = lrc->ctx_timestamp;
> +	lrc->ctx_timestamp = xe_lrc_timestamp(lrc);
> +
>  	trace_xe_lrc_update_timestamp(lrc, *old_ts);
>  
>  	return lrc->ctx_timestamp;
> diff --git a/drivers/gpu/drm/xe/xe_lrc.h b/drivers/gpu/drm/xe/xe_lrc.h
> index 2fb628da5c43..86b7174f424a 100644
> --- a/drivers/gpu/drm/xe/xe_lrc.h
> +++ b/drivers/gpu/drm/xe/xe_lrc.h
> @@ -140,7 +140,6 @@ void xe_lrc_snapshot_free(struct xe_lrc_snapshot *snapshot);
>  
>  u32 xe_lrc_ctx_timestamp_ggtt_addr(struct xe_lrc *lrc);
>  u32 xe_lrc_ctx_timestamp_udw_ggtt_addr(struct xe_lrc *lrc);
> -u64 xe_lrc_ctx_timestamp(struct xe_lrc *lrc);
>  u32 xe_lrc_ctx_job_timestamp_ggtt_addr(struct xe_lrc *lrc);
>  u32 xe_lrc_ctx_job_timestamp(struct xe_lrc *lrc);
>  int xe_lrc_setup_wa_bb_with_scratch(struct xe_lrc *lrc, struct xe_hw_engine *hwe,
> @@ -160,4 +159,6 @@ int xe_lrc_setup_wa_bb_with_scratch(struct xe_lrc *lrc, struct xe_hw_engine *hwe
>   */
>  u64 xe_lrc_update_timestamp(struct xe_lrc *lrc, u64 *old_ts);
>  
> +u64 xe_lrc_timestamp(struct xe_lrc *lrc);
> +
>  #endif
> diff --git a/drivers/gpu/drm/xe/xe_sched_job.c b/drivers/gpu/drm/xe/xe_sched_job.c
> index cb674a322113..39aec7f6d86d 100644
> --- a/drivers/gpu/drm/xe/xe_sched_job.c
> +++ b/drivers/gpu/drm/xe/xe_sched_job.c
> @@ -110,6 +110,7 @@ struct xe_sched_job *xe_sched_job_create(struct xe_exec_queue *q,
>  		return ERR_PTR(-ENOMEM);
>  
>  	job->q = q;
> +	job->sample_timestamp = U64_MAX;
>  	kref_init(&job->refcount);
>  	xe_exec_queue_get(job->q);
>  
> diff --git a/drivers/gpu/drm/xe/xe_sched_job_types.h b/drivers/gpu/drm/xe/xe_sched_job_types.h
> index 7c4c54fe920a..13c2970e81a8 100644
> --- a/drivers/gpu/drm/xe/xe_sched_job_types.h
> +++ b/drivers/gpu/drm/xe/xe_sched_job_types.h
> @@ -59,6 +59,8 @@ struct xe_sched_job {
>  	u32 lrc_seqno;
>  	/** @migrate_flush_flags: Additional flush flags for migration jobs */
>  	u32 migrate_flush_flags;
> +	/** @sample_timestamp: Sampling of job timestamp in TDR */
> +	u64 sample_timestamp;
>  	/** @ring_ops_flush_tlb: The ring ops need to flush TLB before payload. */
>  	bool ring_ops_flush_tlb;
>  	/** @ggtt: mapped in ggtt. */
> -- 
> 2.34.1
> 

^ permalink raw reply	[flat|nested] 14+ messages in thread

* ✓ Xe.CI.BAT: success for Fix DRM scheduler layering violations in Xe (rev5)
  2025-11-26 20:19 [PATCH v5 0/8] Fix DRM scheduler layering violations in Xe Matthew Brost
                   ` (9 preceding siblings ...)
  2025-11-26 20:26 ` ✓ CI.KUnit: success " Patchwork
@ 2025-11-26 21:40 ` Patchwork
  2025-11-26 22:18 ` ✗ Xe.CI.Full: failure " Patchwork
  11 siblings, 0 replies; 14+ messages in thread
From: Patchwork @ 2025-11-26 21:40 UTC (permalink / raw)
  To: Matthew Brost; +Cc: intel-xe

[-- Attachment #1: Type: text/plain, Size: 1933 bytes --]

== Series Details ==

Series: Fix DRM scheduler layering violations in Xe (rev5)
URL   : https://patchwork.freedesktop.org/series/155314/
State : success

== Summary ==

CI Bug Log - changes from xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df_BAT -> xe-pw-155314v5_BAT
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  

Participating hosts (12 -> 12)
------------------------------

  No changes in participating hosts

Known issues
------------

  Here are the changes found in xe-pw-155314v5_BAT that come from known issues:

### IGT changes ###

#### Possible fixes ####

  * igt@xe_waitfence@abstime:
    - bat-dg2-oem2:       [TIMEOUT][1] ([Intel XE#6506]) -> [PASS][2]
   [1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/bat-dg2-oem2/igt@xe_waitfence@abstime.html
   [2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/bat-dg2-oem2/igt@xe_waitfence@abstime.html

  * igt@xe_waitfence@engine:
    - bat-dg2-oem2:       [FAIL][3] ([Intel XE#6519]) -> [PASS][4]
   [3]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/bat-dg2-oem2/igt@xe_waitfence@engine.html
   [4]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/bat-dg2-oem2/igt@xe_waitfence@engine.html

  
  [Intel XE#6506]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6506
  [Intel XE#6519]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6519


Build changes
-------------

  * Linux: xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df -> xe-pw-155314v5

  IGT_8639: 2ce563031e6b2ec91479f6af8c326d25c15bdb26 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df: e41f42483c6f784fce3deb5eca0931bcbffb01df
  xe-pw-155314v5: 155314v5

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/index.html

[-- Attachment #2: Type: text/html, Size: 2532 bytes --]

^ permalink raw reply	[flat|nested] 14+ messages in thread

* ✗ Xe.CI.Full: failure for Fix DRM scheduler layering violations in Xe (rev5)
  2025-11-26 20:19 [PATCH v5 0/8] Fix DRM scheduler layering violations in Xe Matthew Brost
                   ` (10 preceding siblings ...)
  2025-11-26 21:40 ` ✓ Xe.CI.BAT: " Patchwork
@ 2025-11-26 22:18 ` Patchwork
  11 siblings, 0 replies; 14+ messages in thread
From: Patchwork @ 2025-11-26 22:18 UTC (permalink / raw)
  To: Matthew Brost; +Cc: intel-xe


== Series Details ==

Series: Fix DRM scheduler layering violations in Xe (rev5)
URL   : https://patchwork.freedesktop.org/series/155314/
State : failure

== Summary ==

CI Bug Log - changes from xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df_FULL -> xe-pw-155314v5_FULL
====================================================

Summary
-------

  **FAILURE**

  Serious unknown changes introduced with xe-pw-155314v5_FULL need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in xe-pw-155314v5_FULL, please notify your bug team (I915-ci-infra@lists.freedesktop.org) to allow them
  to document this new failure mode, which will reduce false positives in CI.

  

Participating hosts (4 -> 4)
------------------------------

  No changes in participating hosts

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in xe-pw-155314v5_FULL:

### IGT changes ###

#### Possible regressions ####

  * igt@xe_exec_reset@gt-reset-stress:
    - shard-lnl:          [PASS][1] -> [DMESG-WARN][2]
   [1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-lnl-4/igt@xe_exec_reset@gt-reset-stress.html
   [2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-8/igt@xe_exec_reset@gt-reset-stress.html

  
#### Warnings ####

  * igt@kms_content_protection@suspend-resume:
    - shard-bmg:          [FAIL][3] ([Intel XE#1178]) -> [SKIP][4]
   [3]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-bmg-1/igt@kms_content_protection@suspend-resume.html
   [4]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-bmg-2/igt@kms_content_protection@suspend-resume.html

  
Known issues
------------

  Here are the changes found in xe-pw-155314v5_FULL that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@intel_hwmon@hwmon-read:
    - shard-adlp:         NOTRUN -> [SKIP][5] ([Intel XE#1125] / [Intel XE#5574])
   [5]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-3/igt@intel_hwmon@hwmon-read.html

  * igt@kms_async_flips@test-time-stamp:
    - shard-lnl:          NOTRUN -> [FAIL][6] ([Intel XE#6677]) +2 other tests fail
   [6]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-8/igt@kms_async_flips@test-time-stamp.html

  * igt@kms_atomic_transition@plane-all-modeset-transition:
    - shard-lnl:          NOTRUN -> [SKIP][7] ([Intel XE#3279])
   [7]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-8/igt@kms_atomic_transition@plane-all-modeset-transition.html

  * igt@kms_big_fb@linear-64bpp-rotate-270:
    - shard-lnl:          NOTRUN -> [SKIP][8] ([Intel XE#1407])
   [8]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-8/igt@kms_big_fb@linear-64bpp-rotate-270.html

  * igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-0-async-flip:
    - shard-adlp:         NOTRUN -> [FAIL][9] ([Intel XE#1231])
   [9]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-2/igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-0-async-flip.html

  * igt@kms_big_fb@y-tiled-8bpp-rotate-90:
    - shard-adlp:         NOTRUN -> [SKIP][10] ([Intel XE#316]) +2 other tests skip
   [10]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-2/igt@kms_big_fb@y-tiled-8bpp-rotate-90.html

  * igt@kms_big_fb@yf-tiled-64bpp-rotate-180:
    - shard-lnl:          NOTRUN -> [SKIP][11] ([Intel XE#1124]) +1 other test skip
   [11]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-8/igt@kms_big_fb@yf-tiled-64bpp-rotate-180.html

  * igt@kms_big_fb@yf-tiled-8bpp-rotate-90:
    - shard-adlp:         NOTRUN -> [SKIP][12] ([Intel XE#1124])
   [12]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-3/igt@kms_big_fb@yf-tiled-8bpp-rotate-90.html

  * igt@kms_bw@connected-linear-tiling-3-displays-2160x1440p:
    - shard-adlp:         NOTRUN -> [SKIP][13] ([Intel XE#2191]) +1 other test skip
   [13]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-2/igt@kms_bw@connected-linear-tiling-3-displays-2160x1440p.html

  * igt@kms_bw@linear-tiling-4-displays-1920x1080p:
    - shard-lnl:          NOTRUN -> [SKIP][14] ([Intel XE#1512])
   [14]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-8/igt@kms_bw@linear-tiling-4-displays-1920x1080p.html

  * igt@kms_bw@linear-tiling-4-displays-3840x2160p:
    - shard-adlp:         NOTRUN -> [SKIP][15] ([Intel XE#367])
   [15]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-3/igt@kms_bw@linear-tiling-4-displays-3840x2160p.html

  * igt@kms_ccs@bad-rotation-90-4-tiled-mtl-rc-ccs-cc:
    - shard-bmg:          NOTRUN -> [SKIP][16] ([Intel XE#2887]) +1 other test skip
   [16]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-bmg-2/igt@kms_ccs@bad-rotation-90-4-tiled-mtl-rc-ccs-cc.html

  * igt@kms_ccs@crc-sprite-planes-basic-4-tiled-dg2-mc-ccs@pipe-b-hdmi-a-1:
    - shard-adlp:         NOTRUN -> [SKIP][17] ([Intel XE#787]) +29 other tests skip
   [17]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-2/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-dg2-mc-ccs@pipe-b-hdmi-a-1.html

  * igt@kms_ccs@crc-sprite-planes-basic-4-tiled-mtl-rc-ccs-cc:
    - shard-lnl:          NOTRUN -> [SKIP][18] ([Intel XE#2887]) +3 other tests skip
   [18]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-8/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-mtl-rc-ccs-cc.html

  * igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs-cc:
    - shard-adlp:         NOTRUN -> [SKIP][19] ([Intel XE#455] / [Intel XE#787]) +19 other tests skip
   [19]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-3/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs-cc.html

  * igt@kms_chamelium_color@ctm-0-75:
    - shard-lnl:          NOTRUN -> [SKIP][20] ([Intel XE#306])
   [20]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-8/igt@kms_chamelium_color@ctm-0-75.html

  * igt@kms_chamelium_color@gamma:
    - shard-adlp:         NOTRUN -> [SKIP][21] ([Intel XE#306])
   [21]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-4/igt@kms_chamelium_color@gamma.html

  * igt@kms_chamelium_edid@dp-edid-change-during-hibernate:
    - shard-bmg:          NOTRUN -> [SKIP][22] ([Intel XE#2252]) +1 other test skip
   [22]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-bmg-2/igt@kms_chamelium_edid@dp-edid-change-during-hibernate.html

  * igt@kms_chamelium_frames@hdmi-frame-dump:
    - shard-adlp:         NOTRUN -> [SKIP][23] ([Intel XE#373]) +2 other tests skip
   [23]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-1/igt@kms_chamelium_frames@hdmi-frame-dump.html

  * igt@kms_chamelium_hpd@dp-hpd-for-each-pipe:
    - shard-lnl:          NOTRUN -> [SKIP][24] ([Intel XE#373])
   [24]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-8/igt@kms_chamelium_hpd@dp-hpd-for-each-pipe.html

  * igt@kms_cursor_crc@cursor-sliding-32x10:
    - shard-lnl:          NOTRUN -> [SKIP][25] ([Intel XE#1424])
   [25]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-8/igt@kms_cursor_crc@cursor-sliding-32x10.html

  * igt@kms_cursor_legacy@2x-cursor-vs-flip-atomic:
    - shard-bmg:          NOTRUN -> [SKIP][26] ([Intel XE#2291]) +1 other test skip
   [26]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-bmg-2/igt@kms_cursor_legacy@2x-cursor-vs-flip-atomic.html

  * igt@kms_cursor_legacy@2x-flip-vs-cursor-legacy:
    - shard-bmg:          [PASS][27] -> [SKIP][28] ([Intel XE#2291]) +1 other test skip
   [27]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-bmg-4/igt@kms_cursor_legacy@2x-flip-vs-cursor-legacy.html
   [28]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-bmg-6/igt@kms_cursor_legacy@2x-flip-vs-cursor-legacy.html

  * igt@kms_cursor_legacy@cursora-vs-flipb-varying-size:
    - shard-adlp:         NOTRUN -> [SKIP][29] ([Intel XE#309]) +1 other test skip
   [29]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-2/igt@kms_cursor_legacy@cursora-vs-flipb-varying-size.html

  * igt@kms_cursor_legacy@short-busy-flip-before-cursor-atomic-transitions-varying-size:
    - shard-adlp:         NOTRUN -> [SKIP][30] ([Intel XE#323])
   [30]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-1/igt@kms_cursor_legacy@short-busy-flip-before-cursor-atomic-transitions-varying-size.html

  * igt@kms_dirtyfb@psr-dirtyfb-ioctl:
    - shard-adlp:         NOTRUN -> [SKIP][31] ([Intel XE#455]) +10 other tests skip
   [31]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-2/igt@kms_dirtyfb@psr-dirtyfb-ioctl.html

  * igt@kms_dp_link_training@non-uhbr-sst:
    - shard-adlp:         NOTRUN -> [SKIP][32] ([Intel XE#4354])
   [32]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-4/igt@kms_dp_link_training@non-uhbr-sst.html

  * igt@kms_dp_linktrain_fallback@dsc-fallback:
    - shard-lnl:          NOTRUN -> [SKIP][33] ([Intel XE#4331])
   [33]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-8/igt@kms_dp_linktrain_fallback@dsc-fallback.html

  * igt@kms_fbcon_fbt@psr-suspend:
    - shard-bmg:          NOTRUN -> [SKIP][34] ([Intel XE#776])
   [34]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-bmg-2/igt@kms_fbcon_fbt@psr-suspend.html

  * igt@kms_flip@2x-blocking-wf_vblank:
    - shard-bmg:          [PASS][35] -> [SKIP][36] ([Intel XE#2316])
   [35]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-bmg-4/igt@kms_flip@2x-blocking-wf_vblank.html
   [36]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-bmg-6/igt@kms_flip@2x-blocking-wf_vblank.html

  * igt@kms_flip@2x-flip-vs-suspend-interruptible:
    - shard-lnl:          NOTRUN -> [SKIP][37] ([Intel XE#1421])
   [37]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-8/igt@kms_flip@2x-flip-vs-suspend-interruptible.html

  * igt@kms_flip@2x-nonexisting-fb-interruptible:
    - shard-adlp:         NOTRUN -> [SKIP][38] ([Intel XE#310]) +2 other tests skip
   [38]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-2/igt@kms_flip@2x-nonexisting-fb-interruptible.html

  * igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-64bpp-yftile-downscaling:
    - shard-lnl:          NOTRUN -> [SKIP][39] ([Intel XE#1401] / [Intel XE#1745])
   [39]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-8/igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-64bpp-yftile-downscaling.html

  * igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-64bpp-yftile-downscaling@pipe-a-default-mode:
    - shard-lnl:          NOTRUN -> [SKIP][40] ([Intel XE#1401])
   [40]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-8/igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-64bpp-yftile-downscaling@pipe-a-default-mode.html

  * igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile-downscaling:
    - shard-bmg:          NOTRUN -> [SKIP][41] ([Intel XE#2293] / [Intel XE#2380])
   [41]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-bmg-2/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile-downscaling.html

  * igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile-downscaling@pipe-a-valid-mode:
    - shard-bmg:          NOTRUN -> [SKIP][42] ([Intel XE#2293])
   [42]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-bmg-2/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile-downscaling@pipe-a-valid-mode.html

  * igt@kms_flip_tiling@flip-change-tiling@pipe-b-hdmi-a-1-x-to-y:
    - shard-adlp:         [PASS][43] -> [FAIL][44] ([Intel XE#1874])
   [43]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-adlp-2/igt@kms_flip_tiling@flip-change-tiling@pipe-b-hdmi-a-1-x-to-y.html
   [44]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-6/igt@kms_flip_tiling@flip-change-tiling@pipe-b-hdmi-a-1-x-to-y.html

  * igt@kms_frontbuffer_tracking@drrs-1p-primscrn-shrfb-msflip-blt:
    - shard-bmg:          NOTRUN -> [SKIP][45] ([Intel XE#2311]) +1 other test skip
   [45]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-bmg-2/igt@kms_frontbuffer_tracking@drrs-1p-primscrn-shrfb-msflip-blt.html

  * igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-indfb-pgflip-blt:
    - shard-adlp:         NOTRUN -> [SKIP][46] ([Intel XE#656]) +18 other tests skip
   [46]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-1/igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-indfb-pgflip-blt.html

  * igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-spr-indfb-draw-blt:
    - shard-bmg:          NOTRUN -> [SKIP][47] ([Intel XE#2312]) +2 other tests skip
   [47]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-bmg-2/igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-spr-indfb-draw-blt.html

  * igt@kms_frontbuffer_tracking@fbcdrrs-rgb101010-draw-mmap-wc:
    - shard-lnl:          NOTRUN -> [SKIP][48] ([Intel XE#651])
   [48]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-8/igt@kms_frontbuffer_tracking@fbcdrrs-rgb101010-draw-mmap-wc.html

  * igt@kms_frontbuffer_tracking@fbcdrrs-rgb565-draw-render:
    - shard-adlp:         NOTRUN -> [SKIP][49] ([Intel XE#651]) +8 other tests skip
   [49]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-2/igt@kms_frontbuffer_tracking@fbcdrrs-rgb565-draw-render.html

  * igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-indfb-plflip-blt:
    - shard-adlp:         NOTRUN -> [SKIP][50] ([Intel XE#653]) +5 other tests skip
   [50]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-2/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-indfb-plflip-blt.html

  * igt@kms_frontbuffer_tracking@psr-1p-offscreen-pri-indfb-draw-mmap-wc:
    - shard-adlp:         NOTRUN -> [SKIP][51] ([Intel XE#6312]) +1 other test skip
   [51]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-2/igt@kms_frontbuffer_tracking@psr-1p-offscreen-pri-indfb-draw-mmap-wc.html

  * igt@kms_frontbuffer_tracking@psr-1p-primscrn-indfb-pgflip-blt:
    - shard-bmg:          NOTRUN -> [SKIP][52] ([Intel XE#2313]) +1 other test skip
   [52]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-bmg-2/igt@kms_frontbuffer_tracking@psr-1p-primscrn-indfb-pgflip-blt.html

  * igt@kms_frontbuffer_tracking@psr-2p-primscrn-shrfb-plflip-blt:
    - shard-lnl:          NOTRUN -> [SKIP][53] ([Intel XE#656]) +4 other tests skip
   [53]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-8/igt@kms_frontbuffer_tracking@psr-2p-primscrn-shrfb-plflip-blt.html

  * igt@kms_hdr@bpc-switch:
    - shard-bmg:          [PASS][54] -> [ABORT][55] ([Intel XE#6662])
   [54]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-bmg-6/igt@kms_hdr@bpc-switch.html
   [55]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-bmg-8/igt@kms_hdr@bpc-switch.html

  * igt@kms_hdr@bpc-switch@pipe-a-dp-2:
    - shard-bmg:          NOTRUN -> [ABORT][56] ([Intel XE#6662])
   [56]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-bmg-8/igt@kms_hdr@bpc-switch@pipe-a-dp-2.html

  * igt@kms_joiner@basic-max-non-joiner:
    - shard-lnl:          NOTRUN -> [SKIP][57] ([Intel XE#2925])
   [57]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-8/igt@kms_joiner@basic-max-non-joiner.html

  * igt@kms_joiner@invalid-modeset-big-joiner:
    - shard-adlp:         NOTRUN -> [SKIP][58] ([Intel XE#2925] / [Intel XE#346])
   [58]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-4/igt@kms_joiner@invalid-modeset-big-joiner.html

  * igt@kms_pipe_crc_basic@suspend-read-crc@pipe-a-hdmi-a-1:
    - shard-adlp:         NOTRUN -> [ABORT][59] ([Intel XE#6675]) +4 other tests abort
   [59]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-4/igt@kms_pipe_crc_basic@suspend-read-crc@pipe-a-hdmi-a-1.html

  * igt@kms_pipe_stress@stress-xrgb8888-ytiled:
    - shard-lnl:          NOTRUN -> [SKIP][60] ([Intel XE#4329])
   [60]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-8/igt@kms_pipe_stress@stress-xrgb8888-ytiled.html

  * igt@kms_plane_lowres@tiling-y:
    - shard-lnl:          NOTRUN -> [SKIP][61] ([Intel XE#599])
   [61]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-8/igt@kms_plane_lowres@tiling-y.html

  * igt@kms_plane_multiple@2x-tiling-none:
    - shard-adlp:         NOTRUN -> [SKIP][62] ([Intel XE#4596])
   [62]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-4/igt@kms_plane_multiple@2x-tiling-none.html

  * igt@kms_psr2_sf@fbc-pr-plane-move-sf-dmg-area:
    - shard-lnl:          NOTRUN -> [SKIP][63] ([Intel XE#1406] / [Intel XE#2893])
   [63]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-8/igt@kms_psr2_sf@fbc-pr-plane-move-sf-dmg-area.html

  * igt@kms_psr2_sf@fbc-psr2-cursor-plane-move-continuous-sf:
    - shard-adlp:         NOTRUN -> [SKIP][64] ([Intel XE#1406] / [Intel XE#1489]) +2 other tests skip
   [64]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-3/igt@kms_psr2_sf@fbc-psr2-cursor-plane-move-continuous-sf.html

  * igt@kms_psr2_su@page_flip-p010:
    - shard-adlp:         NOTRUN -> [SKIP][65] ([Intel XE#1122] / [Intel XE#1406] / [Intel XE#5580])
   [65]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-1/igt@kms_psr2_su@page_flip-p010.html

  * igt@kms_psr@fbc-pr-no-drrs:
    - shard-adlp:         NOTRUN -> [SKIP][66] ([Intel XE#1406] / [Intel XE#2850] / [Intel XE#929]) +6 other tests skip
   [66]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-4/igt@kms_psr@fbc-pr-no-drrs.html

  * igt@kms_psr@psr-no-drrs:
    - shard-bmg:          NOTRUN -> [SKIP][67] ([Intel XE#1406] / [Intel XE#2234] / [Intel XE#2850])
   [67]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-bmg-2/igt@kms_psr@psr-no-drrs.html

  * igt@kms_psr_stress_test@invalidate-primary-flip-overlay:
    - shard-adlp:         NOTRUN -> [SKIP][68] ([Intel XE#1406] / [Intel XE#2939] / [Intel XE#5585])
   [68]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-2/igt@kms_psr_stress_test@invalidate-primary-flip-overlay.html

  * igt@kms_rotation_crc@primary-rotation-270:
    - shard-adlp:         NOTRUN -> [SKIP][69] ([Intel XE#3414])
   [69]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-2/igt@kms_rotation_crc@primary-rotation-270.html

  * igt@kms_tv_load_detect@load-detect:
    - shard-adlp:         NOTRUN -> [SKIP][70] ([Intel XE#330])
   [70]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-4/igt@kms_tv_load_detect@load-detect.html

  * igt@xe_ccs@block-multicopy-compressed:
    - shard-adlp:         NOTRUN -> [SKIP][71] ([Intel XE#455] / [Intel XE#488] / [Intel XE#5607]) +1 other test skip
   [71]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-2/igt@xe_ccs@block-multicopy-compressed.html

  * igt@xe_compute_preempt@compute-preempt:
    - shard-adlp:         NOTRUN -> [SKIP][72] ([Intel XE#6360])
   [72]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-3/igt@xe_compute_preempt@compute-preempt.html

  * igt@xe_copy_basic@mem-copy-linear-0x8fffe:
    - shard-adlp:         NOTRUN -> [SKIP][73] ([Intel XE#5300])
   [73]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-1/igt@xe_copy_basic@mem-copy-linear-0x8fffe.html

  * igt@xe_copy_basic@mem-set-linear-0x369:
    - shard-adlp:         NOTRUN -> [SKIP][74] ([Intel XE#1126])
   [74]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-1/igt@xe_copy_basic@mem-set-linear-0x369.html

  * igt@xe_eu_stall@blocking-re-enable:
    - shard-adlp:         NOTRUN -> [SKIP][75] ([Intel XE#5626]) +1 other test skip
   [75]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-3/igt@xe_eu_stall@blocking-re-enable.html

  * igt@xe_eudebug@basic-exec-queues:
    - shard-adlp:         NOTRUN -> [SKIP][76] ([Intel XE#4837] / [Intel XE#5565]) +3 other tests skip
   [76]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-2/igt@xe_eudebug@basic-exec-queues.html

  * igt@xe_eudebug@basic-vm-bind-ufence:
    - shard-bmg:          NOTRUN -> [SKIP][77] ([Intel XE#4837]) +1 other test skip
   [77]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-bmg-2/igt@xe_eudebug@basic-vm-bind-ufence.html

  * igt@xe_eudebug_online@writes-caching-vram-bb-vram-target-vram:
    - shard-lnl:          NOTRUN -> [SKIP][78] ([Intel XE#4837]) +3 other tests skip
   [78]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-8/igt@xe_eudebug_online@writes-caching-vram-bb-vram-target-vram.html

  * igt@xe_evict@evict-beng-small-multi-vm-cm:
    - shard-lnl:          NOTRUN -> [SKIP][79] ([Intel XE#688])
   [79]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-8/igt@xe_evict@evict-beng-small-multi-vm-cm.html

  * igt@xe_evict@evict-cm-threads-large:
    - shard-adlp:         NOTRUN -> [SKIP][80] ([Intel XE#261]) +2 other tests skip
   [80]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-2/igt@xe_evict@evict-cm-threads-large.html

  * igt@xe_evict@evict-mixed-many-threads-small:
    - shard-bmg:          [PASS][81] -> [INCOMPLETE][82] ([Intel XE#6321] / [Intel XE#6606])
   [81]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-bmg-1/igt@xe_evict@evict-mixed-many-threads-small.html
   [82]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-bmg-8/igt@xe_evict@evict-mixed-many-threads-small.html

  * igt@xe_evict@evict-small-multi-vm-cm:
    - shard-adlp:         NOTRUN -> [SKIP][83] ([Intel XE#261] / [Intel XE#688])
   [83]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-3/igt@xe_evict@evict-small-multi-vm-cm.html

  * igt@xe_evict_ccs@evict-overcommit-simple:
    - shard-adlp:         NOTRUN -> [SKIP][84] ([Intel XE#688])
   [84]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-3/igt@xe_evict_ccs@evict-overcommit-simple.html

  * igt@xe_exec_basic@multigpu-once-bindexecqueue-userptr-invalidate:
    - shard-adlp:         NOTRUN -> [SKIP][85] ([Intel XE#1392] / [Intel XE#5575]) +4 other tests skip
   [85]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-2/igt@xe_exec_basic@multigpu-once-bindexecqueue-userptr-invalidate.html

  * igt@xe_exec_basic@multigpu-once-userptr:
    - shard-lnl:          NOTRUN -> [SKIP][86] ([Intel XE#1392]) +1 other test skip
   [86]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-8/igt@xe_exec_basic@multigpu-once-userptr.html

  * igt@xe_exec_fault_mode@many-execqueues-bindexecqueue:
    - shard-adlp:         NOTRUN -> [SKIP][87] ([Intel XE#288] / [Intel XE#5561]) +9 other tests skip
   [87]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-4/igt@xe_exec_fault_mode@many-execqueues-bindexecqueue.html

  * igt@xe_exec_system_allocator@process-many-stride-mmap-new-huge-nomemset:
    - shard-bmg:          NOTRUN -> [SKIP][88] ([Intel XE#4943]) +4 other tests skip
   [88]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-bmg-2/igt@xe_exec_system_allocator@process-many-stride-mmap-new-huge-nomemset.html

  * igt@xe_exec_system_allocator@threads-many-large-execqueues-mmap-huge:
    - shard-lnl:          NOTRUN -> [SKIP][89] ([Intel XE#4943]) +5 other tests skip
   [89]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-8/igt@xe_exec_system_allocator@threads-many-large-execqueues-mmap-huge.html

  * igt@xe_exec_system_allocator@threads-many-large-mmap-shared-remap:
    - shard-adlp:         NOTRUN -> [SKIP][90] ([Intel XE#4915]) +152 other tests skip
   [90]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-3/igt@xe_exec_system_allocator@threads-many-large-mmap-shared-remap.html

  * igt@xe_live_ktest@xe_bo:
    - shard-adlp:         NOTRUN -> [SKIP][91] ([Intel XE#2229] / [Intel XE#455]) +1 other test skip
   [91]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-2/igt@xe_live_ktest@xe_bo.html

  * igt@xe_live_ktest@xe_bo@xe_ccs_migrate_kunit:
    - shard-adlp:         NOTRUN -> [SKIP][92] ([Intel XE#2229])
   [92]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-2/igt@xe_live_ktest@xe_bo@xe_ccs_migrate_kunit.html

  * igt@xe_live_ktest@xe_eudebug:
    - shard-lnl:          NOTRUN -> [SKIP][93] ([Intel XE#2833])
   [93]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-8/igt@xe_live_ktest@xe_eudebug.html

  * igt@xe_mmap@pci-membarrier-bad-object:
    - shard-adlp:         NOTRUN -> [SKIP][94] ([Intel XE#5100]) +1 other test skip
   [94]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-2/igt@xe_mmap@pci-membarrier-bad-object.html

  * igt@xe_oa@invalid-remove-userspace-config:
    - shard-adlp:         NOTRUN -> [SKIP][95] ([Intel XE#3573]) +4 other tests skip
   [95]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-2/igt@xe_oa@invalid-remove-userspace-config.html

  * igt@xe_pat@pat-index-xe2:
    - shard-adlp:         NOTRUN -> [SKIP][96] ([Intel XE#977])
   [96]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-2/igt@xe_pat@pat-index-xe2.html

  * igt@xe_pat@pat-index-xelpg:
    - shard-adlp:         NOTRUN -> [SKIP][97] ([Intel XE#979])
   [97]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-2/igt@xe_pat@pat-index-xelpg.html

  * igt@xe_pm@d3cold-i2c:
    - shard-adlp:         NOTRUN -> [SKIP][98] ([Intel XE#5694])
   [98]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-4/igt@xe_pm@d3cold-i2c.html

  * igt@xe_pm@s2idle-exec-after:
    - shard-lnl:          [PASS][99] -> [ABORT][100] ([Intel XE#6675])
   [99]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-lnl-5/igt@xe_pm@s2idle-exec-after.html
   [100]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-1/igt@xe_pm@s2idle-exec-after.html

  * igt@xe_pm@s2idle-vm-bind-prefetch:
    - shard-bmg:          [PASS][101] -> [ABORT][102] ([Intel XE#6675])
   [101]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-bmg-7/igt@xe_pm@s2idle-vm-bind-prefetch.html
   [102]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-bmg-3/igt@xe_pm@s2idle-vm-bind-prefetch.html

  * igt@xe_pm@s3-exec-after:
    - shard-bmg:          NOTRUN -> [ABORT][103] ([Intel XE#6675]) +1 other test abort
   [103]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-bmg-2/igt@xe_pm@s3-exec-after.html

  * igt@xe_pm@s3-multiple-execs:
    - shard-adlp:         [PASS][104] -> [ABORT][105] ([Intel XE#6675]) +2 other tests abort
   [104]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-adlp-1/igt@xe_pm@s3-multiple-execs.html
   [105]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-6/igt@xe_pm@s3-multiple-execs.html

  * igt@xe_pm@vram-d3cold-threshold:
    - shard-adlp:         NOTRUN -> [SKIP][106] ([Intel XE#5611] / [Intel XE#579])
   [106]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-2/igt@xe_pm@vram-d3cold-threshold.html

  * igt@xe_render_copy@render-stress-0-copies:
    - shard-adlp:         NOTRUN -> [SKIP][107] ([Intel XE#4814] / [Intel XE#5614])
   [107]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-2/igt@xe_render_copy@render-stress-0-copies.html

  * igt@xe_sriov_vram@vf-access-provisioned:
    - shard-adlp:         NOTRUN -> [SKIP][108] ([Intel XE#6376])
   [108]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-2/igt@xe_sriov_vram@vf-access-provisioned.html

  * igt@xe_survivability@i2c-functionality:
    - shard-lnl:          NOTRUN -> [SKIP][109] ([Intel XE#6529])
   [109]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-8/igt@xe_survivability@i2c-functionality.html

  
#### Possible fixes ####

  * igt@kms_cursor_legacy@flip-vs-cursor-legacy:
    - shard-bmg:          [FAIL][110] ([Intel XE#5299]) -> [PASS][111]
   [110]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-bmg-1/igt@kms_cursor_legacy@flip-vs-cursor-legacy.html
   [111]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-bmg-2/igt@kms_cursor_legacy@flip-vs-cursor-legacy.html

  * igt@kms_flip@2x-modeset-vs-vblank-race:
    - shard-bmg:          [SKIP][112] ([Intel XE#2316]) -> [PASS][113] +4 other tests pass
   [112]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-bmg-2/igt@kms_flip@2x-modeset-vs-vblank-race.html
   [113]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-bmg-5/igt@kms_flip@2x-modeset-vs-vblank-race.html

  * igt@kms_flip@dpms-off-confusion@a-hdmi-a1:
    - shard-adlp:         [DMESG-WARN][114] ([Intel XE#2953] / [Intel XE#4173]) -> [PASS][115] +2 other tests pass
   [114]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-adlp-4/igt@kms_flip@dpms-off-confusion@a-hdmi-a1.html
   [115]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-4/igt@kms_flip@dpms-off-confusion@a-hdmi-a1.html

  * igt@kms_flip@flip-vs-expired-vblank-interruptible@b-edp1:
    - shard-lnl:          [FAIL][116] ([Intel XE#301]) -> [PASS][117]
   [116]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-lnl-4/igt@kms_flip@flip-vs-expired-vblank-interruptible@b-edp1.html
   [117]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-2/igt@kms_flip@flip-vs-expired-vblank-interruptible@b-edp1.html

  * igt@kms_flip@flip-vs-expired-vblank-interruptible@c-edp1:
    - shard-lnl:          [FAIL][118] ([Intel XE#301] / [Intel XE#3149]) -> [PASS][119] +1 other test pass
   [118]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-lnl-4/igt@kms_flip@flip-vs-expired-vblank-interruptible@c-edp1.html
   [119]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-2/igt@kms_flip@flip-vs-expired-vblank-interruptible@c-edp1.html

  * igt@kms_flip_tiling@flip-change-tiling@pipe-b-hdmi-a-1-y-to-x:
    - shard-adlp:         [FAIL][120] ([Intel XE#1874]) -> [PASS][121]
   [120]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-adlp-2/igt@kms_flip_tiling@flip-change-tiling@pipe-b-hdmi-a-1-y-to-x.html
   [121]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-6/igt@kms_flip_tiling@flip-change-tiling@pipe-b-hdmi-a-1-y-to-x.html

  * igt@kms_plane@plane-panning-bottom-right-suspend:
    - shard-adlp:         [ABORT][122] ([Intel XE#6675]) -> [PASS][123] +6 other tests pass
   [122]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-adlp-1/igt@kms_plane@plane-panning-bottom-right-suspend.html
   [123]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-2/igt@kms_plane@plane-panning-bottom-right-suspend.html

  * igt@kms_plane_multiple@2x-tiling-4:
    - shard-bmg:          [SKIP][124] ([Intel XE#4596]) -> [PASS][125]
   [124]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-bmg-2/igt@kms_plane_multiple@2x-tiling-4.html
   [125]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-bmg-5/igt@kms_plane_multiple@2x-tiling-4.html

  * igt@kms_setmode@clone-exclusive-crtc:
    - shard-bmg:          [SKIP][126] ([Intel XE#1435]) -> [PASS][127]
   [126]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-bmg-2/igt@kms_setmode@clone-exclusive-crtc.html
   [127]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-bmg-5/igt@kms_setmode@clone-exclusive-crtc.html

  * igt@kms_vrr@cmrr@pipe-a-edp-1:
    - shard-lnl:          [FAIL][128] ([Intel XE#4459]) -> [PASS][129] +1 other test pass
   [128]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-lnl-7/igt@kms_vrr@cmrr@pipe-a-edp-1.html
   [129]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-7/igt@kms_vrr@cmrr@pipe-a-edp-1.html

  * igt@xe_module_load@load:
    - shard-lnl:          ([PASS][130], [PASS][131], [PASS][132], [PASS][133], [PASS][134], [PASS][135], [PASS][136], [PASS][137], [PASS][138], [PASS][139], [PASS][140], [PASS][141], [PASS][142], [PASS][143], [PASS][144], [PASS][145], [PASS][146], [PASS][147], [PASS][148], [PASS][149], [PASS][150], [SKIP][151], [PASS][152], [PASS][153], [PASS][154], [PASS][155]) ([Intel XE#378]) -> ([PASS][156], [PASS][157], [PASS][158], [PASS][159], [PASS][160], [PASS][161], [PASS][162], [PASS][163], [PASS][164], [PASS][165], [PASS][166], [PASS][167], [PASS][168], [PASS][169], [PASS][170], [PASS][171], [PASS][172], [PASS][173], [PASS][174], [PASS][175], [PASS][176], [PASS][177], [PASS][178], [PASS][179], [PASS][180])
   [130]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-lnl-8/igt@xe_module_load@load.html
   [131]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-lnl-7/igt@xe_module_load@load.html
   [132]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-lnl-7/igt@xe_module_load@load.html
   [133]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-lnl-3/igt@xe_module_load@load.html
   [134]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-lnl-5/igt@xe_module_load@load.html
   [135]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-lnl-7/igt@xe_module_load@load.html
   [136]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-lnl-3/igt@xe_module_load@load.html
   [137]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-lnl-3/igt@xe_module_load@load.html
   [138]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-lnl-1/igt@xe_module_load@load.html
   [139]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-lnl-2/igt@xe_module_load@load.html
   [140]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-lnl-8/igt@xe_module_load@load.html
   [141]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-lnl-8/igt@xe_module_load@load.html
   [142]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-lnl-3/igt@xe_module_load@load.html
   [143]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-lnl-2/igt@xe_module_load@load.html
   [144]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-lnl-7/igt@xe_module_load@load.html
   [145]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-lnl-2/igt@xe_module_load@load.html
   [146]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-lnl-5/igt@xe_module_load@load.html
   [147]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-lnl-5/igt@xe_module_load@load.html
   [148]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-lnl-1/igt@xe_module_load@load.html
   [149]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-lnl-5/igt@xe_module_load@load.html
   [150]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-lnl-1/igt@xe_module_load@load.html
   [151]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-lnl-5/igt@xe_module_load@load.html
   [152]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-lnl-8/igt@xe_module_load@load.html
   [153]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-lnl-4/igt@xe_module_load@load.html
   [154]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-lnl-4/igt@xe_module_load@load.html
   [155]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-lnl-4/igt@xe_module_load@load.html
   [156]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-8/igt@xe_module_load@load.html
   [157]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-5/igt@xe_module_load@load.html
   [158]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-2/igt@xe_module_load@load.html
   [159]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-3/igt@xe_module_load@load.html
   [160]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-3/igt@xe_module_load@load.html
   [161]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-2/igt@xe_module_load@load.html
   [162]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-7/igt@xe_module_load@load.html
   [163]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-2/igt@xe_module_load@load.html
   [164]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-8/igt@xe_module_load@load.html
   [165]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-8/igt@xe_module_load@load.html
   [166]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-8/igt@xe_module_load@load.html
   [167]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-3/igt@xe_module_load@load.html
   [168]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-3/igt@xe_module_load@load.html
   [169]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-5/igt@xe_module_load@load.html
   [170]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-4/igt@xe_module_load@load.html
   [171]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-7/igt@xe_module_load@load.html
   [172]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-4/igt@xe_module_load@load.html
   [173]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-5/igt@xe_module_load@load.html
   [174]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-5/igt@xe_module_load@load.html
   [175]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-4/igt@xe_module_load@load.html
   [176]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-7/igt@xe_module_load@load.html
   [177]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-7/igt@xe_module_load@load.html
   [178]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-1/igt@xe_module_load@load.html
   [179]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-1/igt@xe_module_load@load.html
   [180]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-1/igt@xe_module_load@load.html
    - shard-adlp:         ([PASS][181], [PASS][182], [PASS][183], [SKIP][184], [PASS][185], [PASS][186], [PASS][187], [PASS][188], [PASS][189], [PASS][190], [PASS][191], [PASS][192], [PASS][193], [PASS][194], [PASS][195], [PASS][196], [PASS][197], [PASS][198], [PASS][199], [PASS][200], [PASS][201], [PASS][202], [PASS][203], [PASS][204], [PASS][205], [PASS][206]) ([Intel XE#378] / [Intel XE#5612]) -> ([PASS][207], [PASS][208], [PASS][209], [PASS][210], [PASS][211], [PASS][212], [PASS][213], [PASS][214], [PASS][215], [PASS][216], [PASS][217], [PASS][218], [PASS][219], [PASS][220], [PASS][221], [PASS][222], [PASS][223], [PASS][224], [PASS][225], [PASS][226], [PASS][227], [PASS][228], [PASS][229], [PASS][230], [PASS][231])
   [181]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-adlp-9/igt@xe_module_load@load.html
   [182]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-adlp-9/igt@xe_module_load@load.html
   [183]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-adlp-9/igt@xe_module_load@load.html
   [184]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-adlp-9/igt@xe_module_load@load.html
   [185]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-adlp-8/igt@xe_module_load@load.html
   [186]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-adlp-6/igt@xe_module_load@load.html
   [187]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-adlp-8/igt@xe_module_load@load.html
   [188]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-adlp-9/igt@xe_module_load@load.html
   [189]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-adlp-6/igt@xe_module_load@load.html
   [190]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-adlp-3/igt@xe_module_load@load.html
   [191]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-adlp-2/igt@xe_module_load@load.html
   [192]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-adlp-2/igt@xe_module_load@load.html
   [193]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-adlp-2/igt@xe_module_load@load.html
   [194]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-adlp-4/igt@xe_module_load@load.html
   [195]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-adlp-4/igt@xe_module_load@load.html
   [196]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-adlp-4/igt@xe_module_load@load.html
   [197]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-adlp-4/igt@xe_module_load@load.html
   [198]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-adlp-6/igt@xe_module_load@load.html
   [199]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-adlp-1/igt@xe_module_load@load.html
   [200]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-adlp-1/igt@xe_module_load@load.html
   [201]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-adlp-1/igt@xe_module_load@load.html
   [202]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-adlp-8/igt@xe_module_load@load.html
   [203]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-adlp-3/igt@xe_module_load@load.html
   [204]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-adlp-8/igt@xe_module_load@load.html
   [205]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-adlp-3/igt@xe_module_load@load.html
   [206]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-adlp-6/igt@xe_module_load@load.html
   [207]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-4/igt@xe_module_load@load.html
   [208]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-2/igt@xe_module_load@load.html
   [209]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-3/igt@xe_module_load@load.html
   [210]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-1/igt@xe_module_load@load.html
   [211]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-8/igt@xe_module_load@load.html
   [212]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-8/igt@xe_module_load@load.html
   [213]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-8/igt@xe_module_load@load.html
   [214]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-4/igt@xe_module_load@load.html
   [215]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-3/igt@xe_module_load@load.html
   [216]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-3/igt@xe_module_load@load.html
   [217]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-6/igt@xe_module_load@load.html
   [218]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-4/igt@xe_module_load@load.html
   [219]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-4/igt@xe_module_load@load.html
   [220]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-2/igt@xe_module_load@load.html
   [221]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-2/igt@xe_module_load@load.html
   [222]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-1/igt@xe_module_load@load.html
   [223]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-9/igt@xe_module_load@load.html
   [224]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-9/igt@xe_module_load@load.html
   [225]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-6/igt@xe_module_load@load.html
   [226]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-1/igt@xe_module_load@load.html
   [227]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-9/igt@xe_module_load@load.html
   [228]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-9/igt@xe_module_load@load.html
   [229]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-2/igt@xe_module_load@load.html
   [230]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-1/igt@xe_module_load@load.html
   [231]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-adlp-6/igt@xe_module_load@load.html

  * igt@xe_pm@s4-basic:
    - shard-lnl:          [ABORT][232] ([Intel XE#6675]) -> [PASS][233]
   [232]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-lnl-4/igt@xe_pm@s4-basic.html
   [233]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-lnl-8/igt@xe_pm@s4-basic.html

  
#### Warnings ####

  * igt@kms_frontbuffer_tracking@drrs-2p-rte:
    - shard-bmg:          [SKIP][234] ([Intel XE#2311]) -> [SKIP][235] ([Intel XE#2312]) +2 other tests skip
   [234]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-bmg-4/igt@kms_frontbuffer_tracking@drrs-2p-rte.html
   [235]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-bmg-6/igt@kms_frontbuffer_tracking@drrs-2p-rte.html

  * igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-cur-indfb-draw-blt:
    - shard-bmg:          [SKIP][236] ([Intel XE#2312]) -> [SKIP][237] ([Intel XE#2311]) +2 other tests skip
   [236]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-bmg-2/igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-cur-indfb-draw-blt.html
   [237]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-bmg-5/igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-cur-indfb-draw-blt.html

  * igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-draw-render:
    - shard-bmg:          [SKIP][238] ([Intel XE#4141]) -> [SKIP][239] ([Intel XE#2312]) +2 other tests skip
   [238]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-bmg-1/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-draw-render.html
   [239]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-bmg-2/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-draw-render.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-pri-indfb-multidraw:
    - shard-bmg:          [SKIP][240] ([Intel XE#2313]) -> [SKIP][241] ([Intel XE#2312]) +4 other tests skip
   [240]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-bmg-1/igt@kms_frontbuffer_tracking@fbcpsr-2p-pri-indfb-multidraw.html
   [241]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-bmg-2/igt@kms_frontbuffer_tracking@fbcpsr-2p-pri-indfb-multidraw.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-indfb-plflip-blt:
    - shard-bmg:          [SKIP][242] ([Intel XE#2312]) -> [SKIP][243] ([Intel XE#2313]) +4 other tests skip
   [242]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df/shard-bmg-2/igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-indfb-plflip-blt.html
   [243]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/shard-bmg-5/igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-indfb-plflip-blt.html

  
  [Intel XE#1122]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1122
  [Intel XE#1124]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1124
  [Intel XE#1125]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1125
  [Intel XE#1126]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1126
  [Intel XE#1178]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1178
  [Intel XE#1231]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1231
  [Intel XE#1392]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1392
  [Intel XE#1401]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1401
  [Intel XE#1406]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1406
  [Intel XE#1407]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1407
  [Intel XE#1421]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1421
  [Intel XE#1424]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1424
  [Intel XE#1435]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1435
  [Intel XE#1489]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1489
  [Intel XE#1512]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1512
  [Intel XE#1745]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1745
  [Intel XE#1874]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1874
  [Intel XE#2191]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2191
  [Intel XE#2229]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2229
  [Intel XE#2234]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2234
  [Intel XE#2252]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2252
  [Intel XE#2291]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2291
  [Intel XE#2293]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2293
  [Intel XE#2311]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2311
  [Intel XE#2312]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2312
  [Intel XE#2313]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2313
  [Intel XE#2316]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2316
  [Intel XE#2380]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2380
  [Intel XE#261]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/261
  [Intel XE#2833]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2833
  [Intel XE#2850]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2850
  [Intel XE#288]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/288
  [Intel XE#2887]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2887
  [Intel XE#2893]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2893
  [Intel XE#2925]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2925
  [Intel XE#2939]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2939
  [Intel XE#2953]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2953
  [Intel XE#301]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/301
  [Intel XE#306]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/306
  [Intel XE#309]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/309
  [Intel XE#310]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/310
  [Intel XE#3149]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3149
  [Intel XE#316]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/316
  [Intel XE#323]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/323
  [Intel XE#3279]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3279
  [Intel XE#330]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/330
  [Intel XE#3414]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3414
  [Intel XE#346]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/346
  [Intel XE#3573]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3573
  [Intel XE#367]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/367
  [Intel XE#373]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/373
  [Intel XE#378]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/378
  [Intel XE#4141]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4141
  [Intel XE#4173]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4173
  [Intel XE#4329]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4329
  [Intel XE#4331]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4331
  [Intel XE#4354]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4354
  [Intel XE#4459]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4459
  [Intel XE#455]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/455
  [Intel XE#4596]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4596
  [Intel XE#4814]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4814
  [Intel XE#4837]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4837
  [Intel XE#488]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/488
  [Intel XE#4915]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4915
  [Intel XE#4943]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4943
  [Intel XE#5100]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5100
  [Intel XE#5299]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5299
  [Intel XE#5300]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5300
  [Intel XE#5561]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5561
  [Intel XE#5565]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5565
  [Intel XE#5574]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5574
  [Intel XE#5575]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5575
  [Intel XE#5580]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5580
  [Intel XE#5585]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5585
  [Intel XE#5607]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5607
  [Intel XE#5611]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5611
  [Intel XE#5612]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5612
  [Intel XE#5614]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5614
  [Intel XE#5626]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5626
  [Intel XE#5694]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5694
  [Intel XE#579]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/579
  [Intel XE#599]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/599
  [Intel XE#6312]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6312
  [Intel XE#6321]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6321
  [Intel XE#6360]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6360
  [Intel XE#6376]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6376
  [Intel XE#651]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/651
  [Intel XE#6529]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6529
  [Intel XE#653]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/653
  [Intel XE#656]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/656
  [Intel XE#6606]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6606
  [Intel XE#6662]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6662
  [Intel XE#6675]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6675
  [Intel XE#6677]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6677
  [Intel XE#688]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/688
  [Intel XE#776]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/776
  [Intel XE#787]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/787
  [Intel XE#929]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/929
  [Intel XE#977]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/977
  [Intel XE#979]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/979


Build changes
-------------

  * Linux: xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df -> xe-pw-155314v5

  IGT_8639: 2ce563031e6b2ec91479f6af8c326d25c15bdb26 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  xe-4155-e41f42483c6f784fce3deb5eca0931bcbffb01df: e41f42483c6f784fce3deb5eca0931bcbffb01df
  xe-pw-155314v5: 155314v5

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-155314v5/index.html

[-- Attachment #2: Type: text/html, Size: 64811 bytes --]

^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2025-11-26 22:18 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-11-26 20:19 [PATCH v5 0/8] Fix DRM scheduler layering violations in Xe Matthew Brost
2025-11-26 20:19 ` [PATCH v5 1/8] drm/sched: Add several job helpers to avoid drivers touching scheduler state Matthew Brost
2025-11-26 20:19 ` [PATCH v5 2/8] drm/sched: Add pending job list iterator Matthew Brost
2025-11-26 20:19 ` [PATCH v5 3/8] drm/xe: Add dedicated message lock Matthew Brost
2025-11-26 20:19 ` [PATCH v5 4/8] drm/xe: Stop abusing DRM scheduler internals Matthew Brost
2025-11-26 20:19 ` [PATCH v5 5/8] drm/xe: Only toggle scheduling in TDR if GuC is running Matthew Brost
2025-11-26 20:19 ` [PATCH v5 6/8] drm/xe: Do not deregister queues in TDR Matthew Brost
2025-11-26 20:19 ` [PATCH v5 7/8] drm/xe: Remove special casing for LR queues in submission Matthew Brost
2025-11-26 20:19 ` [PATCH v5 8/8] drm/xe: Avoid toggling schedule state to check LRC timestamp in TDR Matthew Brost
2025-11-26 21:21   ` Matthew Brost
2025-11-26 20:25 ` ✗ CI.checkpatch: warning for Fix DRM scheduler layering violations in Xe (rev5) Patchwork
2025-11-26 20:26 ` ✓ CI.KUnit: success " Patchwork
2025-11-26 21:40 ` ✓ Xe.CI.BAT: " Patchwork
2025-11-26 22:18 ` ✗ Xe.CI.Full: failure " Patchwork

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox