Intel-XE Archive on lore.kernel.org
* [PATCH v2 00/22] Fence deadlines in Xe
@ 2026-01-05  4:02 Matthew Brost
  2026-01-05  4:02 ` [PATCH v2 01/22] drm/xe: Add dedicated message lock Matthew Brost
                   ` (26 more replies)
  0 siblings, 27 replies; 33+ messages in thread
From: Matthew Brost @ 2026-01-05  4:02 UTC (permalink / raw)
  To: intel-xe; +Cc: daniele.ceraolospurio, carlos.santa

This series introduces deadline support for exported scheduler fences.
When a deadline is set on a fence, the driver attempts to complete the
work within a configurable window (default 3 ms, via Kconfig) ahead of
the deadline.

As the deadline approaches, the exec queue first receives a frequency
boost. If the queue was created by a process with CAP_SYS_NICE, a later
stage also applies a priority boost. This tiered approach allows work to
run at the
same baseline priority as the application, only increasing priority when
a deadline is at risk of being missed.

The primary use case is compositors that target a stable frame cadence
while avoiding a permanently elevated scheduling priority.

Lightly tested with an IGT and running display; behavior appears
correct.

VLK-82801

IGT: https://patchwork.freedesktop.org/series/159616/

v2:
 - Fully implemented deadline manager
 - Separate frequency and priority boost windows
 - Enable deadlines in Intel display

Matt

Matthew Brost (22):
  drm/xe: Add dedicated message lock
  drm/xe: Add EXEC_QUEUE_FLAG_CAP_SYS_NICE
  drm/xe: Store exec queue in hardware fence
  drm/xe: Add deadline exec queue vfuncs
  drm/xe: Export to_xe_hw_fence
  drm/xe: Export xe_hw_fence_signaled
  drm/xe: Implement deadline manager
  drm/xe: Initialize deadline manager on exec queues
  drm/xe: Stub out execlists deadline vfuncs as NOPs
  drm/xe: Make scheduler message lock IRQ-safe
  drm/xe: Support unstable opcodes for static scheduler messages
  drm/xe: Implement GuC submission backend ops for deadlines
  drm/xe: Enable deadlines on hardware fences
  drm/xe: Fix Kconfig.profile newlines
  drm/xe: Add deadline Kconfig options
  drm/xe: Add exec queue deadline trace points
  drm/xe: Add hw fence deadline trace points
  drm/xe: Add timestamp_ms to LRC snapshot
  drm/xe: Enforce GuC static message defines
  drm/xe: Document the deadline manager
  drm/atomic: Export fence deadline helper for atomic commits
  drm/i915/display: Use atomic helper to set plane fence deadlines

 drivers/gpu/drm/drm_atomic_helper.c          |  11 +-
 drivers/gpu/drm/i915/display/intel_display.c |   2 +
 drivers/gpu/drm/xe/Kconfig.profile           |  29 ++
 drivers/gpu/drm/xe/Makefile                  |   1 +
 drivers/gpu/drm/xe/xe_deadline_mgr.c         | 463 +++++++++++++++++++
 drivers/gpu/drm/xe/xe_deadline_mgr.h         |  26 ++
 drivers/gpu/drm/xe/xe_deadline_mgr_types.h   |  52 +++
 drivers/gpu/drm/xe/xe_exec_queue.c           |  11 +
 drivers/gpu/drm/xe/xe_exec_queue_types.h     |  15 +
 drivers/gpu/drm/xe/xe_execlist.c             |  15 +
 drivers/gpu/drm/xe/xe_gpu_scheduler.c        |  43 +-
 drivers/gpu/drm/xe/xe_gpu_scheduler.h        |  17 +-
 drivers/gpu/drm/xe/xe_gpu_scheduler_types.h  |   4 +-
 drivers/gpu/drm/xe/xe_guc_exec_queue_types.h |   2 +-
 drivers/gpu/drm/xe/xe_guc_submit.c           | 170 +++++--
 drivers/gpu/drm/xe/xe_hw_fence.c             |  38 +-
 drivers/gpu/drm/xe/xe_hw_fence.h             |   7 +-
 drivers/gpu/drm/xe/xe_hw_fence_types.h       |  19 +
 drivers/gpu/drm/xe/xe_lrc.c                  |  10 +-
 drivers/gpu/drm/xe/xe_lrc.h                  |   4 +-
 drivers/gpu/drm/xe/xe_sched_job.c            |   5 +-
 drivers/gpu/drm/xe/xe_trace.h                |  42 +-
 include/drm/drm_atomic_helper.h              |   3 +
 23 files changed, 918 insertions(+), 71 deletions(-)
 create mode 100644 drivers/gpu/drm/xe/xe_deadline_mgr.c
 create mode 100644 drivers/gpu/drm/xe/xe_deadline_mgr.h
 create mode 100644 drivers/gpu/drm/xe/xe_deadline_mgr_types.h

-- 
2.34.1


^ permalink raw reply	[flat|nested] 33+ messages in thread

* [PATCH v2 01/22] drm/xe: Add dedicated message lock
  2026-01-05  4:02 [PATCH v2 00/22] Fence deadlines in Xe Matthew Brost
@ 2026-01-05  4:02 ` Matthew Brost
  2026-01-05  4:02 ` [PATCH v2 02/22] drm/xe: Add EXEC_QUEUE_FLAG_CAP_SYS_NICE Matthew Brost
                   ` (25 subsequent siblings)
  26 siblings, 0 replies; 33+ messages in thread
From: Matthew Brost @ 2026-01-05  4:02 UTC (permalink / raw)
  To: intel-xe
  Cc: daniele.ceraolospurio, carlos.santa, Niranjana Vishwanathapura,
	Philipp Stanner

Stop abusing the DRM scheduler job list lock for messages; add a
dedicated message lock.

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Acked-by: Philipp Stanner <phasta@kernel.org>
---
 drivers/gpu/drm/xe/xe_gpu_scheduler.c       | 5 +++--
 drivers/gpu/drm/xe/xe_gpu_scheduler.h       | 4 ++--
 drivers/gpu/drm/xe/xe_gpu_scheduler_types.h | 2 ++
 3 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_gpu_scheduler.c b/drivers/gpu/drm/xe/xe_gpu_scheduler.c
index f91e06d03511..f4f23317191f 100644
--- a/drivers/gpu/drm/xe/xe_gpu_scheduler.c
+++ b/drivers/gpu/drm/xe/xe_gpu_scheduler.c
@@ -77,6 +77,7 @@ int xe_sched_init(struct xe_gpu_scheduler *sched,
 	};
 
 	sched->ops = xe_ops;
+	spin_lock_init(&sched->msg_lock);
 	INIT_LIST_HEAD(&sched->msgs);
 	INIT_WORK(&sched->work_process_msg, xe_sched_process_msg_work);
 
@@ -117,7 +118,7 @@ void xe_sched_add_msg(struct xe_gpu_scheduler *sched,
 void xe_sched_add_msg_locked(struct xe_gpu_scheduler *sched,
 			     struct xe_sched_msg *msg)
 {
-	lockdep_assert_held(&sched->base.job_list_lock);
+	lockdep_assert_held(&sched->msg_lock);
 
 	list_add_tail(&msg->link, &sched->msgs);
 	xe_sched_process_msg_queue(sched);
@@ -131,7 +132,7 @@ void xe_sched_add_msg_locked(struct xe_gpu_scheduler *sched,
 void xe_sched_add_msg_head(struct xe_gpu_scheduler *sched,
 			   struct xe_sched_msg *msg)
 {
-	lockdep_assert_held(&sched->base.job_list_lock);
+	lockdep_assert_held(&sched->msg_lock);
 
 	list_add(&msg->link, &sched->msgs);
 	xe_sched_process_msg_queue(sched);
diff --git a/drivers/gpu/drm/xe/xe_gpu_scheduler.h b/drivers/gpu/drm/xe/xe_gpu_scheduler.h
index c7a77a3a9681..dceb2cd0ee5b 100644
--- a/drivers/gpu/drm/xe/xe_gpu_scheduler.h
+++ b/drivers/gpu/drm/xe/xe_gpu_scheduler.h
@@ -33,12 +33,12 @@ void xe_sched_add_msg_head(struct xe_gpu_scheduler *sched,
 
 static inline void xe_sched_msg_lock(struct xe_gpu_scheduler *sched)
 {
-	spin_lock(&sched->base.job_list_lock);
+	spin_lock(&sched->msg_lock);
 }
 
 static inline void xe_sched_msg_unlock(struct xe_gpu_scheduler *sched)
 {
-	spin_unlock(&sched->base.job_list_lock);
+	spin_unlock(&sched->msg_lock);
 }
 
 static inline void xe_sched_stop(struct xe_gpu_scheduler *sched)
diff --git a/drivers/gpu/drm/xe/xe_gpu_scheduler_types.h b/drivers/gpu/drm/xe/xe_gpu_scheduler_types.h
index 6731b13da8bb..63d9bf92583c 100644
--- a/drivers/gpu/drm/xe/xe_gpu_scheduler_types.h
+++ b/drivers/gpu/drm/xe/xe_gpu_scheduler_types.h
@@ -47,6 +47,8 @@ struct xe_gpu_scheduler {
 	const struct xe_sched_backend_ops	*ops;
 	/** @msgs: list of messages to be processed in @work_process_msg */
 	struct list_head			msgs;
+	/** @msg_lock: Message lock */
+	spinlock_t				msg_lock;
 	/** @work_process_msg: processes messages */
 	struct work_struct		work_process_msg;
 };
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v2 02/22] drm/xe: Add EXEC_QUEUE_FLAG_CAP_SYS_NICE
  2026-01-05  4:02 [PATCH v2 00/22] Fence deadlines in Xe Matthew Brost
  2026-01-05  4:02 ` [PATCH v2 01/22] drm/xe: Add dedicated message lock Matthew Brost
@ 2026-01-05  4:02 ` Matthew Brost
  2026-02-05 16:00   ` Rodrigo Vivi
  2026-01-05  4:02 ` [PATCH v2 03/22] drm/xe: Store exec queue in hardware fence Matthew Brost
                   ` (24 subsequent siblings)
  26 siblings, 1 reply; 33+ messages in thread
From: Matthew Brost @ 2026-01-05  4:02 UTC (permalink / raw)
  To: intel-xe; +Cc: daniele.ceraolospurio, carlos.santa

Store whether CAP_SYS_NICE is set on the user process that creates an
exec queue. This will indicate if the exec queue is eligible for higher
priority levels under deadline pressure.

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/xe/xe_exec_queue.c       | 3 +++
 drivers/gpu/drm/xe/xe_exec_queue_types.h | 2 ++
 2 files changed, 5 insertions(+)

diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
index 0b9e074b022f..a9b981591773 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue.c
+++ b/drivers/gpu/drm/xe/xe_exec_queue.c
@@ -1158,6 +1158,9 @@ int xe_exec_queue_create_ioctl(struct drm_device *dev, void *data,
 	if (args->flags & DRM_XE_EXEC_QUEUE_LOW_LATENCY_HINT)
 		flags |= EXEC_QUEUE_FLAG_LOW_LATENCY;
 
+	if (capable(CAP_SYS_NICE))
+		flags |= EXEC_QUEUE_FLAG_CAP_SYS_NICE;
+
 	if (eci[0].engine_class == DRM_XE_ENGINE_CLASS_VM_BIND) {
 		if (XE_IOCTL_DBG(xe, args->width != 1) ||
 		    XE_IOCTL_DBG(xe, args->num_placements != 1) ||
diff --git a/drivers/gpu/drm/xe/xe_exec_queue_types.h b/drivers/gpu/drm/xe/xe_exec_queue_types.h
index 67ea5eebf70b..cd7a6571f5c6 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue_types.h
+++ b/drivers/gpu/drm/xe/xe_exec_queue_types.h
@@ -128,6 +128,8 @@ struct xe_exec_queue {
 #define EXEC_QUEUE_FLAG_LOW_LATENCY		BIT(5)
 /* for migration (kernel copy, clear, bind) jobs */
 #define EXEC_QUEUE_FLAG_MIGRATE			BIT(6)
+/* for user queues, created in CAP_SYS_NICE context */
+#define EXEC_QUEUE_FLAG_CAP_SYS_NICE		BIT(7)
 
 	/**
 	 * @flags: flags for this exec queue, should statically setup aside from ban
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v2 03/22] drm/xe: Store exec queue in hardware fence
  2026-01-05  4:02 [PATCH v2 00/22] Fence deadlines in Xe Matthew Brost
  2026-01-05  4:02 ` [PATCH v2 01/22] drm/xe: Add dedicated message lock Matthew Brost
  2026-01-05  4:02 ` [PATCH v2 02/22] drm/xe: Add EXEC_QUEUE_FLAG_CAP_SYS_NICE Matthew Brost
@ 2026-01-05  4:02 ` Matthew Brost
  2026-02-05 16:02   ` Rodrigo Vivi
  2026-01-05  4:02 ` [PATCH v2 04/22] drm/xe: Add deadline exec queue vfuncs Matthew Brost
                   ` (23 subsequent siblings)
  26 siblings, 1 reply; 33+ messages in thread
From: Matthew Brost @ 2026-01-05  4:02 UTC (permalink / raw)
  To: intel-xe; +Cc: daniele.ceraolospurio, carlos.santa

Enable hardware fences to set deadlines for exec queues.

v2:
 - Fix kernel doc (CI)

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/xe/xe_hw_fence.c       | 4 +++-
 drivers/gpu/drm/xe/xe_hw_fence.h       | 2 +-
 drivers/gpu/drm/xe/xe_hw_fence_types.h | 6 ++++++
 drivers/gpu/drm/xe/xe_lrc.c            | 6 ++++--
 drivers/gpu/drm/xe/xe_lrc.h            | 3 ++-
 drivers/gpu/drm/xe/xe_sched_job.c      | 2 +-
 6 files changed, 17 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_hw_fence.c b/drivers/gpu/drm/xe/xe_hw_fence.c
index f6057456e460..5995bf095843 100644
--- a/drivers/gpu/drm/xe/xe_hw_fence.c
+++ b/drivers/gpu/drm/xe/xe_hw_fence.c
@@ -242,6 +242,7 @@ void xe_hw_fence_free(struct dma_fence *fence)
  * xe_hw_fence_init() - Initialize an hw fence.
  * @fence: Pointer to the fence to initialize.
  * @ctx: Pointer to the struct xe_hw_fence_ctx fence context.
+ * @q: Pointer to exec queue tied to the fence.
  * @seqno_map: Pointer to the map into where the seqno is blitted.
  *
  * Initializes a pre-allocated hw fence.
@@ -249,12 +250,13 @@ void xe_hw_fence_free(struct dma_fence *fence)
  * dma-fence refcounting.
  */
 void xe_hw_fence_init(struct dma_fence *fence, struct xe_hw_fence_ctx *ctx,
-		      struct iosys_map seqno_map)
+		      struct xe_exec_queue *q, struct iosys_map seqno_map)
 {
 	struct  xe_hw_fence *hw_fence =
 		container_of(fence, typeof(*hw_fence), dma);
 
 	hw_fence->xe = gt_to_xe(ctx->gt);
+	hw_fence->q = q;
 	snprintf(hw_fence->name, sizeof(hw_fence->name), "%s", ctx->name);
 	hw_fence->seqno_map = seqno_map;
 	INIT_LIST_HEAD(&hw_fence->irq_link);
diff --git a/drivers/gpu/drm/xe/xe_hw_fence.h b/drivers/gpu/drm/xe/xe_hw_fence.h
index f13a1c4982c7..7a8678c881d8 100644
--- a/drivers/gpu/drm/xe/xe_hw_fence.h
+++ b/drivers/gpu/drm/xe/xe_hw_fence.h
@@ -29,5 +29,5 @@ struct dma_fence *xe_hw_fence_alloc(void);
 void xe_hw_fence_free(struct dma_fence *fence);
 
 void xe_hw_fence_init(struct dma_fence *fence, struct xe_hw_fence_ctx *ctx,
-		      struct iosys_map seqno_map);
+		      struct xe_exec_queue *q, struct iosys_map seqno_map);
 #endif
diff --git a/drivers/gpu/drm/xe/xe_hw_fence_types.h b/drivers/gpu/drm/xe/xe_hw_fence_types.h
index 58a8d09afe5c..052bbab1fad6 100644
--- a/drivers/gpu/drm/xe/xe_hw_fence_types.h
+++ b/drivers/gpu/drm/xe/xe_hw_fence_types.h
@@ -13,6 +13,7 @@
 #include <linux/spinlock.h>
 
 struct xe_device;
+struct xe_exec_queue;
 struct xe_gt;
 
 /**
@@ -64,6 +65,11 @@ struct xe_hw_fence {
 	struct dma_fence dma;
 	/** @xe: Xe device for hw fence driver name */
 	struct xe_device *xe;
+	/**
+	 * @q: Exec queue to which the fence is tied; not ref counted, lookup
+	 * protected by fence lock.
+	 */
+	struct xe_exec_queue *q;
 	/** @name: name of hardware fence context */
 	char name[MAX_FENCE_NAME_LEN];
 	/** @seqno_map: I/O map for seqno */
diff --git a/drivers/gpu/drm/xe/xe_lrc.c b/drivers/gpu/drm/xe/xe_lrc.c
index 70eae7d03a27..eccc7f2642bf 100644
--- a/drivers/gpu/drm/xe/xe_lrc.c
+++ b/drivers/gpu/drm/xe/xe_lrc.c
@@ -1783,15 +1783,17 @@ void xe_lrc_free_seqno_fence(struct dma_fence *fence)
 /**
  * xe_lrc_init_seqno_fence() - Initialize an lrc seqno fence.
  * @lrc: Pointer to the lrc.
+ * @q: Pointer to the exec queue.
  * @fence: Pointer to the fence to initialize.
  *
  * Initializes a pre-allocated lrc seqno fence.
  * After initialization, the fence is subject to normal
  * dma-fence refcounting.
  */
-void xe_lrc_init_seqno_fence(struct xe_lrc *lrc, struct dma_fence *fence)
+void xe_lrc_init_seqno_fence(struct xe_lrc *lrc, struct xe_exec_queue *q,
+			     struct dma_fence *fence)
 {
-	xe_hw_fence_init(fence, &lrc->fence_ctx, __xe_lrc_seqno_map(lrc));
+	xe_hw_fence_init(fence, &lrc->fence_ctx, q, __xe_lrc_seqno_map(lrc));
 }
 
 s32 xe_lrc_seqno(struct xe_lrc *lrc)
diff --git a/drivers/gpu/drm/xe/xe_lrc.h b/drivers/gpu/drm/xe/xe_lrc.h
index 8acf85273c1a..3d72b4c0da8e 100644
--- a/drivers/gpu/drm/xe/xe_lrc.h
+++ b/drivers/gpu/drm/xe/xe_lrc.h
@@ -118,7 +118,8 @@ u64 xe_lrc_descriptor(struct xe_lrc *lrc);
 u32 xe_lrc_seqno_ggtt_addr(struct xe_lrc *lrc);
 struct dma_fence *xe_lrc_alloc_seqno_fence(void);
 void xe_lrc_free_seqno_fence(struct dma_fence *fence);
-void xe_lrc_init_seqno_fence(struct xe_lrc *lrc, struct dma_fence *fence);
+void xe_lrc_init_seqno_fence(struct xe_lrc *lrc, struct xe_exec_queue *q,
+			     struct dma_fence *fence);
 s32 xe_lrc_seqno(struct xe_lrc *lrc);
 
 u32 xe_lrc_start_seqno_ggtt_addr(struct xe_lrc *lrc);
diff --git a/drivers/gpu/drm/xe/xe_sched_job.c b/drivers/gpu/drm/xe/xe_sched_job.c
index cb674a322113..6099b4445835 100644
--- a/drivers/gpu/drm/xe/xe_sched_job.c
+++ b/drivers/gpu/drm/xe/xe_sched_job.c
@@ -270,7 +270,7 @@ void xe_sched_job_arm(struct xe_sched_job *job)
 		struct dma_fence_chain *chain;
 
 		fence = job->ptrs[i].lrc_fence;
-		xe_lrc_init_seqno_fence(q->lrc[i], fence);
+		xe_lrc_init_seqno_fence(q->lrc[i], q, fence);
 		job->ptrs[i].lrc_fence = NULL;
 		if (!i) {
 			job->lrc_seqno = fence->seqno;
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v2 04/22] drm/xe: Add deadline exec queue vfuncs
  2026-01-05  4:02 [PATCH v2 00/22] Fence deadlines in Xe Matthew Brost
                   ` (2 preceding siblings ...)
  2026-01-05  4:02 ` [PATCH v2 03/22] drm/xe: Store exec queue in hardware fence Matthew Brost
@ 2026-01-05  4:02 ` Matthew Brost
  2026-02-05 16:03   ` Rodrigo Vivi
  2026-01-05  4:02 ` [PATCH v2 05/22] drm/xe: Export to_xe_hw_fence Matthew Brost
                   ` (22 subsequent siblings)
  26 siblings, 1 reply; 33+ messages in thread
From: Matthew Brost @ 2026-01-05  4:02 UTC (permalink / raw)
  To: intel-xe; +Cc: daniele.ceraolospurio, carlos.santa

Add set_deadline and set_deadline_state exec queue vfuncs for deadline
control.

v2:
 - Fix kernel doc
 - Remove exit_deadline, rather use an enum for control

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/xe/xe_exec_queue_types.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/drivers/gpu/drm/xe/xe_exec_queue_types.h b/drivers/gpu/drm/xe/xe_exec_queue_types.h
index cd7a6571f5c6..ac860f3f042e 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue_types.h
+++ b/drivers/gpu/drm/xe/xe_exec_queue_types.h
@@ -15,6 +15,7 @@
 #include "xe_hw_fence_types.h"
 #include "xe_lrc_types.h"
 
+enum xe_deadline_mgr_state;
 struct drm_syncobj;
 struct xe_execlist_exec_queue;
 struct xe_gt;
@@ -301,6 +302,14 @@ struct xe_exec_queue_ops {
 	void (*resume)(struct xe_exec_queue *q);
 	/** @reset_status: check exec queue reset status */
 	bool (*reset_status)(struct xe_exec_queue *q);
+	/**
+	 * @set_deadline: Set deadline on a queue for a fence.
+	 */
+	void (*set_deadline)(struct xe_exec_queue *q, struct dma_fence *fence,
+			     ktime_t deadline);
+	/** @set_deadline_state: Set deadline state for a queue */
+	void (*set_deadline_state)(struct xe_exec_queue *q,
+				   enum xe_deadline_mgr_state state);
 };
 
 #endif
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v2 05/22] drm/xe: Export to_xe_hw_fence
  2026-01-05  4:02 [PATCH v2 00/22] Fence deadlines in Xe Matthew Brost
                   ` (3 preceding siblings ...)
  2026-01-05  4:02 ` [PATCH v2 04/22] drm/xe: Add deadline exec queue vfuncs Matthew Brost
@ 2026-01-05  4:02 ` Matthew Brost
  2026-01-05  4:02 ` [PATCH v2 06/22] drm/xe: Export xe_hw_fence_signaled Matthew Brost
                   ` (21 subsequent siblings)
  26 siblings, 0 replies; 33+ messages in thread
From: Matthew Brost @ 2026-01-05  4:02 UTC (permalink / raw)
  To: intel-xe; +Cc: daniele.ceraolospurio, carlos.santa

Allow other layers to operate on hardware fences.

v2:
 - Fix kernel doc (CI)

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/xe/xe_hw_fence.c | 10 +++++++---
 drivers/gpu/drm/xe/xe_hw_fence.h |  3 +++
 2 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_hw_fence.c b/drivers/gpu/drm/xe/xe_hw_fence.c
index 5995bf095843..2e7f52bac980 100644
--- a/drivers/gpu/drm/xe/xe_hw_fence.c
+++ b/drivers/gpu/drm/xe/xe_hw_fence.c
@@ -138,8 +138,6 @@ void xe_hw_fence_ctx_finish(struct xe_hw_fence_ctx *ctx)
 {
 }
 
-static struct xe_hw_fence *to_xe_hw_fence(struct dma_fence *fence);
-
 static struct xe_hw_fence_irq *xe_hw_fence_irq(struct xe_hw_fence *fence)
 {
 	return container_of(fence->dma.lock, struct xe_hw_fence_irq, lock);
@@ -200,7 +198,13 @@ static const struct dma_fence_ops xe_hw_fence_ops = {
 	.release = xe_hw_fence_release,
 };
 
-static struct xe_hw_fence *to_xe_hw_fence(struct dma_fence *fence)
+/**
+ * to_xe_hw_fence() - Convert dma-fence to Xe hardware fence
+ * @fence: dma-fence object
+ *
+ * Return: struct xe_hw_fence or NULL
+ */
+struct xe_hw_fence *to_xe_hw_fence(struct dma_fence *fence)
 {
 	if (XE_WARN_ON(fence->ops != &xe_hw_fence_ops))
 		return NULL;
diff --git a/drivers/gpu/drm/xe/xe_hw_fence.h b/drivers/gpu/drm/xe/xe_hw_fence.h
index 7a8678c881d8..4d5756681279 100644
--- a/drivers/gpu/drm/xe/xe_hw_fence.h
+++ b/drivers/gpu/drm/xe/xe_hw_fence.h
@@ -30,4 +30,7 @@ void xe_hw_fence_free(struct dma_fence *fence);
 
 void xe_hw_fence_init(struct dma_fence *fence, struct xe_hw_fence_ctx *ctx,
 		      struct xe_exec_queue *q, struct iosys_map seqno_map);
+
+struct xe_hw_fence *to_xe_hw_fence(struct dma_fence *fence);
+
 #endif
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v2 06/22] drm/xe: Export xe_hw_fence_signaled
  2026-01-05  4:02 [PATCH v2 00/22] Fence deadlines in Xe Matthew Brost
                   ` (4 preceding siblings ...)
  2026-01-05  4:02 ` [PATCH v2 05/22] drm/xe: Export to_xe_hw_fence Matthew Brost
@ 2026-01-05  4:02 ` Matthew Brost
  2026-01-05  4:02 ` [PATCH v2 07/22] drm/xe: Implement deadline manager Matthew Brost
                   ` (20 subsequent siblings)
  26 siblings, 0 replies; 33+ messages in thread
From: Matthew Brost @ 2026-01-05  4:02 UTC (permalink / raw)
  To: intel-xe; +Cc: daniele.ceraolospurio, carlos.santa

Allow other layers to determine if a Xe hardware fence is signaled.

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/xe/xe_hw_fence.c | 8 +++++++-
 drivers/gpu/drm/xe/xe_hw_fence.h | 2 ++
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/xe/xe_hw_fence.c b/drivers/gpu/drm/xe/xe_hw_fence.c
index 2e7f52bac980..265e29e92c48 100644
--- a/drivers/gpu/drm/xe/xe_hw_fence.c
+++ b/drivers/gpu/drm/xe/xe_hw_fence.c
@@ -157,7 +157,13 @@ static const char *xe_hw_fence_get_timeline_name(struct dma_fence *dma_fence)
 	return fence->name;
 }
 
-static bool xe_hw_fence_signaled(struct dma_fence *dma_fence)
+/**
+ * xe_hw_fence_signaled() - Check whether a Xe hardware fence is signaled
+ * @dma_fence: dma-fence object
+ *
+ * Return: True if Xe hardware fence is signaled, False otherwise
+ */
+bool xe_hw_fence_signaled(struct dma_fence *dma_fence)
 {
 	struct xe_hw_fence *fence = to_xe_hw_fence(dma_fence);
 	struct xe_device *xe = fence->xe;
diff --git a/drivers/gpu/drm/xe/xe_hw_fence.h b/drivers/gpu/drm/xe/xe_hw_fence.h
index 4d5756681279..cf2f19127105 100644
--- a/drivers/gpu/drm/xe/xe_hw_fence.h
+++ b/drivers/gpu/drm/xe/xe_hw_fence.h
@@ -33,4 +33,6 @@ void xe_hw_fence_init(struct dma_fence *fence, struct xe_hw_fence_ctx *ctx,
 
 struct xe_hw_fence *to_xe_hw_fence(struct dma_fence *fence);
 
+bool xe_hw_fence_signaled(struct dma_fence *dma_fence);
+
 #endif
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v2 07/22] drm/xe: Implement deadline manager
  2026-01-05  4:02 [PATCH v2 00/22] Fence deadlines in Xe Matthew Brost
                   ` (5 preceding siblings ...)
  2026-01-05  4:02 ` [PATCH v2 06/22] drm/xe: Export xe_hw_fence_signaled Matthew Brost
@ 2026-01-05  4:02 ` Matthew Brost
  2026-01-05  4:02 ` [PATCH v2 08/22] drm/xe: Initialize deadline manager on exec queues Matthew Brost
                   ` (19 subsequent siblings)
  26 siblings, 0 replies; 33+ messages in thread
From: Matthew Brost @ 2026-01-05  4:02 UTC (permalink / raw)
  To: intel-xe; +Cc: daniele.ceraolospurio, carlos.santa

Implement a deadline manager that toggles an exec queue’s deadline state
based on upcoming fence deadlines. The manager tracks deadlines on
hardware fences and uses an hrtimer to enter or exit a boosted state when
a deadline is within a configurable window (default 3 ms).

As the deadline approaches, the manager first applies a frequency boost
and, at a later stage, also boosts priority. The primary use case is to
help compositors avoid missing pageflip deadlines.

v2:
 - Remove extra newlines (CI)
 - Fix xe_deadline_mgr.h ifdef
 - More robust asserts
 - Disallow parallel, multi-q, boosted queues
 - Do not enter deadline on a signaled fence
 - Add freq / prio states
 - Fix potential deadlock when canceling timer

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/xe/Makefile                |   1 +
 drivers/gpu/drm/xe/xe_deadline_mgr.c       | 356 +++++++++++++++++++++
 drivers/gpu/drm/xe/xe_deadline_mgr.h       |  26 ++
 drivers/gpu/drm/xe/xe_deadline_mgr_types.h |  52 +++
 drivers/gpu/drm/xe/xe_hw_fence.c           |   3 +
 drivers/gpu/drm/xe/xe_hw_fence_types.h     |  13 +
 6 files changed, 451 insertions(+)
 create mode 100644 drivers/gpu/drm/xe/xe_deadline_mgr.c
 create mode 100644 drivers/gpu/drm/xe/xe_deadline_mgr.h
 create mode 100644 drivers/gpu/drm/xe/xe_deadline_mgr_types.h

diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
index 2b20c79d7ec9..54f266ae48ba 100644
--- a/drivers/gpu/drm/xe/Makefile
+++ b/drivers/gpu/drm/xe/Makefile
@@ -35,6 +35,7 @@ $(obj)/generated/%_device_wa_oob.c $(obj)/generated/%_device_wa_oob.h: $(obj)/xe
 xe-y += xe_bb.o \
 	xe_bo.o \
 	xe_bo_evict.o \
+	xe_deadline_mgr.o \
 	xe_dep_scheduler.o \
 	xe_devcoredump.o \
 	xe_device.o \
diff --git a/drivers/gpu/drm/xe/xe_deadline_mgr.c b/drivers/gpu/drm/xe/xe_deadline_mgr.c
new file mode 100644
index 000000000000..061664ed24e3
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_deadline_mgr.c
@@ -0,0 +1,356 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2025 Intel Corporation
+ */
+
+#include <linux/dma-fence-chain.h>
+
+#include "xe_deadline_mgr.h"
+#include "xe_deadline_mgr_types.h"
+#include "xe_exec_queue.h"
+#include "xe_gt.h"
+#include "xe_hw_fence.h"
+
+#define XE_DEADLINE_WINDOW_US			3000
+#define XE_DEADLINE_PRIO_BOOST_WINDOW_PERCENT	60
+#define XE_DEADLINE_EXIT_DELAY_MS		100
+
+static ktime_t __xe_deadline_mgr_freq_boost_window(void)
+{
+	return us_to_ktime(XE_DEADLINE_WINDOW_US);
+}
+
+static ktime_t __xe_deadline_mgr_prio_boost_window(void)
+{
+	u64 usec = DIV_ROUND_UP_ULL(XE_DEADLINE_WINDOW_US *
+				    XE_DEADLINE_PRIO_BOOST_WINDOW_PERCENT, 100);
+
+	return us_to_ktime(usec);
+}
+
+static ktime_t __xe_deadline_mgr_prio_boost_restart(void)
+{
+	return ktime_sub(__xe_deadline_mgr_freq_boost_window(),
+			 __xe_deadline_mgr_prio_boost_window());
+}
+
+static bool __xe_deadline_mgr_deadline_signaled(struct xe_deadline_mgr *mgr)
+{
+	struct xe_hw_fence *hw_fence;
+
+	lockdep_assert_held(&mgr->lock);
+
+	hw_fence = list_first_entry_or_null(&mgr->deadlines, typeof(*hw_fence),
+					    deadline.link);
+	if (!hw_fence)
+		return true;
+
+	return xe_hw_fence_signaled(&hw_fence->dma);
+}
+
+static bool __xe_deadline_mgr_enter_deadline(struct xe_deadline_mgr *mgr,
+					     enum xe_deadline_mgr_state state)
+{
+	lockdep_assert_held(&mgr->lock);
+
+	if (XE_DEADLINE_EXIT_DELAY_MS &&
+	    mgr->state != XE_DEADLINE_MGR_STATE_NO_BOOST)
+		cancel_delayed_work(&mgr->exit_delay);
+
+	if (mgr->state != state && !__xe_deadline_mgr_deadline_signaled(mgr)) {
+		mgr->state = state;
+		mgr->q->ops->set_deadline_state(mgr->q, state);
+
+		return true;
+	}
+
+	return false;
+}
+
+static void __xe_deadline_mgr_exit_deadline_work(struct work_struct *work)
+{
+	struct xe_deadline_mgr *mgr = container_of(work, typeof(*mgr),
+						   exit_delay.work);
+
+	guard(spinlock_irqsave)(&mgr->lock);
+
+	if (mgr->state != XE_DEADLINE_MGR_STATE_NO_BOOST) {
+		mgr->state = XE_DEADLINE_MGR_STATE_NO_BOOST;
+		mgr->q->ops->set_deadline_state(mgr->q, mgr->state);
+	}
+}
+
+static void __xe_deadline_mgr_exit_deadline(struct xe_deadline_mgr *mgr)
+{
+	lockdep_assert_held(&mgr->lock);
+
+	if (mgr->state == XE_DEADLINE_MGR_STATE_NO_BOOST)
+		return;
+
+	if (!XE_DEADLINE_EXIT_DELAY_MS) {
+		mgr->state = XE_DEADLINE_MGR_STATE_NO_BOOST;
+		mgr->q->ops->set_deadline_state(mgr->q, mgr->state);
+		return;
+	}
+
+	if (!delayed_work_pending(&mgr->exit_delay))
+		mod_delayed_work(system_percpu_wq, &mgr->exit_delay,
+				 msecs_to_jiffies(XE_DEADLINE_EXIT_DELAY_MS));
+}
+
+static enum hrtimer_restart __xe_deadline_mgr_timer(struct hrtimer *t)
+{
+	struct xe_deadline_mgr *mgr = container_of(t, typeof(*mgr), timer);
+	enum xe_deadline_mgr_state state;
+	bool boosted;
+
+	guard(spinlock_irqsave)(&mgr->lock);
+
+	xe_assert(gt_to_xe(mgr->q->gt),
+		  mgr->state != XE_DEADLINE_MGR_STATE_PRIO_BOOST ||
+		  XE_DEADLINE_EXIT_DELAY_MS);
+
+	if (mgr->state == XE_DEADLINE_MGR_STATE_NO_BOOST &&
+	    XE_DEADLINE_PRIO_BOOST_WINDOW_PERCENT != 100)
+		state = XE_DEADLINE_MGR_STATE_FREQ_BOOST;
+	else
+		state = XE_DEADLINE_MGR_STATE_PRIO_BOOST;
+
+	boosted = __xe_deadline_mgr_enter_deadline(mgr, state);
+
+	if (boosted && state == XE_DEADLINE_MGR_STATE_FREQ_BOOST &&
+	    XE_DEADLINE_PRIO_BOOST_WINDOW_PERCENT != 0) {
+		ktime_t sub = __xe_deadline_mgr_freq_boost_window();
+
+		hrtimer_forward(t, ktime_sub(mgr->deadline, sub),
+				__xe_deadline_mgr_prio_boost_restart());
+		return HRTIMER_RESTART;
+	}
+
+	return HRTIMER_NORESTART;
+}
+
+/**
+ * xe_deadline_mgr_init() - Deadline manager initialize
+ * @mgr: Deadline manager object
+ * @q: Exec queue associated with deadline
+ */
+void xe_deadline_mgr_init(struct xe_deadline_mgr *mgr, struct xe_exec_queue *q)
+{
+	mgr->q = q;
+	INIT_LIST_HEAD(&mgr->deadlines);
+	spin_lock_init(&mgr->lock);
+	hrtimer_setup(&mgr->timer, __xe_deadline_mgr_timer, CLOCK_MONOTONIC,
+		      HRTIMER_MODE_ABS);
+	mgr->deadline = XE_DEADLINE_NONE;
+	mgr->state = XE_DEADLINE_MGR_STATE_NO_BOOST;
+	INIT_DELAYED_WORK(&mgr->exit_delay,
+			  __xe_deadline_mgr_exit_deadline_work);
+
+	/*
+	 * Parallel queues are not supported because the job's fence is a
+	 * dma-fence chain, which is memory-unsafe as individual hardware fences
+	 * can be freed at arbitrary points in time while remaining in the
+	 * manager. Multi-queue is also not supported because we need individual
+	 * per-queue control of priority and frequency, which multi-queue does
+	 * not have. In either case, the target use case (compositors) does not
+	 * use these types of queues.
+	 *
+	 * Also disable the deadline logic if the feature is disabled via
+	 * Kconfig or if the queue is created in a boosted state.
+	 */
+	if (xe_exec_queue_is_parallel(q) || xe_exec_queue_is_multi_queue(q) ||
+	    !XE_DEADLINE_WINDOW_US ||
+	    (q->sched_props.priority >= XE_EXEC_QUEUE_PRIORITY_HIGH &&
+	     q->flags & EXEC_QUEUE_FLAG_LOW_LATENCY))
+		mgr->state = XE_DEADLINE_MGR_STATE_UNSUPPORTED;
+}
+
+/**
+ * xe_deadline_mgr_fini() - Deadline manager finalize
+ * @mgr: Deadline manager object
+ */
+void xe_deadline_mgr_fini(struct xe_deadline_mgr *mgr)
+{
+	cancel_delayed_work_sync(&mgr->exit_delay);
+	xe_assert(gt_to_xe(mgr->q->gt),
+		  mgr->state == XE_DEADLINE_MGR_STATE_NO_BOOST ||
+		  mgr->state == XE_DEADLINE_MGR_STATE_UNSUPPORTED);
+	xe_assert(gt_to_xe(mgr->q->gt), !hrtimer_cancel(&mgr->timer));
+	xe_assert(gt_to_xe(mgr->q->gt), list_empty(&mgr->deadlines));
+}
+
+static ktime_t __xe_deadline_mgr_new_deadline(struct xe_deadline_mgr *mgr)
+{
+	struct xe_hw_fence *hw_fence;
+
+	lockdep_assert_held(&mgr->lock);
+
+	hw_fence = list_first_entry_or_null(&mgr->deadlines, typeof(*hw_fence),
+					    deadline.link);
+	if (!hw_fence)
+		return XE_DEADLINE_NONE;
+
+	return hw_fence->deadline.time;
+}
+
+static void __xe_deadline_mgr_update_deadline(struct xe_deadline_mgr *mgr)
+{
+	ktime_t old_deadline = mgr->deadline, sub, deadline, now;
+
+again:
+	lockdep_assert_held(&mgr->lock);
+
+	mgr->deadline = __xe_deadline_mgr_new_deadline(mgr);
+
+	if (!ktime_compare(old_deadline, mgr->deadline))
+		return;
+
+	if (hrtimer_try_to_cancel(&mgr->timer) < 0) {
+		/*
+		 * Corner case where hrtimer is running but waiting on
+		 * &mgr->lock; we need to drop the lock, cancel the timer,
+		 * reacquire the lock and retry.
+		 */
+		spin_unlock(&mgr->lock);
+		hrtimer_cancel(&mgr->timer);
+		spin_lock(&mgr->lock);
+		goto again;
+	}
+
+	if (mgr->deadline == XE_DEADLINE_NONE) {
+		__xe_deadline_mgr_exit_deadline(mgr);
+		return;
+	}
+
+	sub = __xe_deadline_mgr_freq_boost_window();
+	deadline = ktime_sub(mgr->deadline, sub);
+	now = ktime_get();
+
+	if (ktime_after(now, deadline)) {
+		enum xe_deadline_mgr_state state =
+			XE_DEADLINE_MGR_STATE_FREQ_BOOST;
+
+		if (mgr->state == XE_DEADLINE_MGR_STATE_PRIO_BOOST) {
+			state = XE_DEADLINE_MGR_STATE_PRIO_BOOST;
+		} else {
+			sub = __xe_deadline_mgr_prio_boost_window();
+			if (sub) {
+				deadline = ktime_sub(mgr->deadline, sub);
+
+				if (ktime_after(now, deadline))
+					state = XE_DEADLINE_MGR_STATE_PRIO_BOOST;
+				else
+					hrtimer_start(&mgr->timer, deadline,
+						      HRTIMER_MODE_ABS);
+			}
+		}
+
+		__xe_deadline_mgr_enter_deadline(mgr, state);
+	} else {
+		__xe_deadline_mgr_exit_deadline(mgr);
+		hrtimer_start(&mgr->timer, deadline,
+			      HRTIMER_MODE_ABS);
+	}
+}
+
+static void __xe_deadline_mgr_remove_deadline(struct xe_deadline_mgr *mgr,
+					      struct xe_hw_fence *hw_fence)
+{
+	ktime_t old_deadline = hw_fence->deadline.time;
+
+	lockdep_assert_held(&mgr->lock);
+
+	hw_fence->deadline.time = XE_DEADLINE_DONE;
+	if (old_deadline == XE_DEADLINE_NONE)
+		return;
+
+	list_del_init(&hw_fence->deadline.link);
+	__xe_deadline_mgr_update_deadline(mgr);
+}
+
+static void __xe_deadline_mgr_add_deadline(struct xe_deadline_mgr *mgr,
+					   struct xe_hw_fence *hw_fence,
+					   ktime_t deadline)
+{
+	struct xe_hw_fence *pos;
+
+	lockdep_assert_held(&mgr->lock);
+
+	hw_fence->deadline.time = deadline;
+
+	list_for_each_entry(pos, &mgr->deadlines, deadline.link) {
+		if (ktime_before(hw_fence->deadline.time, pos->deadline.time)) {
+			/*
+			 * A bit confusing, but this inserts 'hw_fence'
+			 * before 'pos': list_add_tail() on a node's link
+			 * effectively means insert before that node.
+			 */
+			list_add_tail(&hw_fence->deadline.link,
+				      &pos->deadline.link);
+			return;
+		}
+	}
+
+	list_add_tail(&hw_fence->deadline.link, &mgr->deadlines);
+}
+
+/**
+ * xe_deadline_mgr_add_deadline() - Add deadline
+ * @mgr: Deadline manager object
+ * @fence: Fence with deadline (must be struct xe_hw_fence)
+ * @deadline: Deadline for the fence
+ *
+ * Add a deadline for a fence. This may be called multiple times on a given
+ * fence. It assumes upper layers only call this function multiple times if the
+ * deadline is being reduced. If called after xe_deadline_mgr_remove_deadline,
+ * this function is a NOP.
+ */
+void xe_deadline_mgr_add_deadline(struct xe_deadline_mgr *mgr,
+				  struct dma_fence *fence,
+				  ktime_t deadline)
+{
+	struct xe_hw_fence *hw_fence = to_xe_hw_fence(fence);
+
+	if (mgr->state == XE_DEADLINE_MGR_STATE_UNSUPPORTED)
+		return;
+
+	guard(spinlock_irqsave)(&mgr->lock);
+
+	if (hw_fence->deadline.time == XE_DEADLINE_DONE ||
+	    deadline == XE_DEADLINE_DONE)
+		return;
+
+	xe_assert(gt_to_xe(mgr->q->gt),
+		  hw_fence->deadline.time == XE_DEADLINE_NONE ||
+		  deadline <= hw_fence->deadline.time);
+
+	__xe_deadline_mgr_remove_deadline(mgr, hw_fence);
+	__xe_deadline_mgr_add_deadline(mgr, hw_fence, deadline);
+	__xe_deadline_mgr_update_deadline(mgr);
+}
+
+/**
+ * xe_deadline_mgr_remove_deadline() - Remove deadline
+ * @mgr: Deadline manager object
+ * @fence: Fence with deadline (must be struct xe_hw_fence)
+ *
+ * Remove the deadline for a fence. This should be called exactly once after the
+ * fence is signaled. After this function is called, future
+ * xe_deadline_mgr_add_deadline calls are NOPs.
+ */
+void xe_deadline_mgr_remove_deadline(struct xe_deadline_mgr *mgr,
+				     struct dma_fence *fence)
+{
+	if (mgr->state == XE_DEADLINE_MGR_STATE_UNSUPPORTED)
+		return;
+
+	guard(spinlock_irqsave)(&mgr->lock);
+
+	xe_assert(gt_to_xe(mgr->q->gt), !dma_fence_is_container(fence));
+	xe_assert(gt_to_xe(mgr->q->gt), dma_fence_is_signaled(fence));
+	xe_assert(gt_to_xe(mgr->q->gt),
+		  to_xe_hw_fence(fence)->deadline.time != XE_DEADLINE_DONE);
+
+	__xe_deadline_mgr_remove_deadline(mgr, to_xe_hw_fence(fence));
+}
diff --git a/drivers/gpu/drm/xe/xe_deadline_mgr.h b/drivers/gpu/drm/xe/xe_deadline_mgr.h
new file mode 100644
index 000000000000..56f632fce792
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_deadline_mgr.h
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2025 Intel Corporation
+ */
+
+#ifndef _XE_DEADLINE_MGR_H_
+#define _XE_DEADLINE_MGR_H_
+
+#include <linux/types.h>
+
+struct dma_fence;
+struct xe_deadline_mgr;
+struct xe_exec_queue;
+
+void xe_deadline_mgr_init(struct xe_deadline_mgr *mgr, struct xe_exec_queue *q);
+
+void xe_deadline_mgr_fini(struct xe_deadline_mgr *mgr);
+
+void xe_deadline_mgr_add_deadline(struct xe_deadline_mgr *mgr,
+				  struct dma_fence *fence,
+				  ktime_t deadline);
+
+void xe_deadline_mgr_remove_deadline(struct xe_deadline_mgr *mgr,
+				     struct dma_fence *fence);
+
+#endif
diff --git a/drivers/gpu/drm/xe/xe_deadline_mgr_types.h b/drivers/gpu/drm/xe/xe_deadline_mgr_types.h
new file mode 100644
index 000000000000..5a53a79fcfc4
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_deadline_mgr_types.h
@@ -0,0 +1,52 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2025 Intel Corporation
+ */
+
+#ifndef _XE_DEADLINE_MGR_TYPES_H_
+#define _XE_DEADLINE_MGR_TYPES_H_
+
+#include <linux/hrtimer_types.h>
+#include <linux/spinlock.h>
+#include <linux/types.h>
+#include <linux/workqueue.h>
+
+struct xe_exec_queue;
+
+#define XE_DEADLINE_NONE	(-1)
+#define XE_DEADLINE_DONE	(-2)
+
+/** enum xe_deadline_mgr_state - Deadline manager state */
+enum xe_deadline_mgr_state {
+	/** @XE_DEADLINE_MGR_STATE_UNSUPPORTED: Unsupported (disabled) */
+	XE_DEADLINE_MGR_STATE_UNSUPPORTED,
+	/** @XE_DEADLINE_MGR_STATE_NO_BOOST: No boosted state */
+	XE_DEADLINE_MGR_STATE_NO_BOOST,
+	/** @XE_DEADLINE_MGR_STATE_FREQ_BOOST: Frequency boosted state */
+	XE_DEADLINE_MGR_STATE_FREQ_BOOST,
+	/** @XE_DEADLINE_MGR_STATE_PRIO_BOOST: Priority boosted state */
+	XE_DEADLINE_MGR_STATE_PRIO_BOOST,
+};
+
+/** struct xe_deadline_mgr - Xe deadline manager */
+struct xe_deadline_mgr {
+	/** @q: Pointer to queue associated with deadline */
+	struct xe_exec_queue *q;
+	/** @deadlines: List storing deadline fences, protected by @lock */
+	struct list_head deadlines;
+	/** @timer: Timer to enter deadline mode, protected by @lock */
+	struct hrtimer timer;
+	/**
+	 * @exit_delay: Delayed worker to exit deadline mode, protected by
+	 * @lock
+	 */
+	struct delayed_work exit_delay;
+	/** @lock: Lock to protect deadlines */
+	spinlock_t lock;
+	/** @deadline: Current deadline, protected by @lock */
+	ktime_t deadline;
+	/** @state: Deadline state, protected by @lock */
+	enum xe_deadline_mgr_state state;
+};
+
+#endif
diff --git a/drivers/gpu/drm/xe/xe_hw_fence.c b/drivers/gpu/drm/xe/xe_hw_fence.c
index 265e29e92c48..37ba1d9612ba 100644
--- a/drivers/gpu/drm/xe/xe_hw_fence.c
+++ b/drivers/gpu/drm/xe/xe_hw_fence.c
@@ -9,6 +9,7 @@
 #include <linux/slab.h>
 
 #include "xe_bo.h"
+#include "xe_deadline_mgr_types.h"
 #include "xe_device.h"
 #include "xe_gt.h"
 #include "xe_hw_engine.h"
@@ -267,6 +268,8 @@ void xe_hw_fence_init(struct dma_fence *fence, struct xe_hw_fence_ctx *ctx,
 
 	hw_fence->xe = gt_to_xe(ctx->gt);
 	hw_fence->q = q;
+	hw_fence->deadline.time = XE_DEADLINE_NONE;
+	INIT_LIST_HEAD(&hw_fence->deadline.link);
 	snprintf(hw_fence->name, sizeof(hw_fence->name), "%s", ctx->name);
 	hw_fence->seqno_map = seqno_map;
 	INIT_LIST_HEAD(&hw_fence->irq_link);
diff --git a/drivers/gpu/drm/xe/xe_hw_fence_types.h b/drivers/gpu/drm/xe/xe_hw_fence_types.h
index 052bbab1fad6..687b2f55cd02 100644
--- a/drivers/gpu/drm/xe/xe_hw_fence_types.h
+++ b/drivers/gpu/drm/xe/xe_hw_fence_types.h
@@ -76,6 +76,19 @@ struct xe_hw_fence {
 	struct iosys_map seqno_map;
 	/** @irq_link: Link in struct xe_hw_fence_irq.pending */
 	struct list_head irq_link;
+	/** @deadline: Deadline info */
+	struct {
+		/**
+		 * @deadline.time: Deadline time, protected by deadline manager
+		 * lock
+		 */
+		ktime_t time;
+		/**
+		 * @deadline.link: Deadline link, protected by deadline manager
+		 * lock
+		 */
+		struct list_head link;
+	} deadline;
 };
 
 #endif
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v2 08/22] drm/xe: Initialize deadline manager on exec queues
  2026-01-05  4:02 [PATCH v2 00/22] Fence deadlines in Xe Matthew Brost
                   ` (6 preceding siblings ...)
  2026-01-05  4:02 ` [PATCH v2 07/22] drm/xe: Implement deadline manager Matthew Brost
@ 2026-01-05  4:02 ` Matthew Brost
  2026-01-05  4:02 ` [PATCH v2 09/22] drm/xe: Stub out execlists deadline vfuncs as NOPs Matthew Brost
                   ` (18 subsequent siblings)
  26 siblings, 0 replies; 33+ messages in thread
From: Matthew Brost @ 2026-01-05  4:02 UTC (permalink / raw)
  To: intel-xe; +Cc: daniele.ceraolospurio, carlos.santa

Initialize a per-exec-queue deadline manager to allow deadlines to be
associated with exported scheduler fences.

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/xe/xe_exec_queue.c       | 8 ++++++++
 drivers/gpu/drm/xe/xe_exec_queue_types.h | 4 ++++
 2 files changed, 12 insertions(+)

diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
index a9b981591773..30c2e5ce83ec 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue.c
+++ b/drivers/gpu/drm/xe/xe_exec_queue.c
@@ -14,6 +14,7 @@
 #include <uapi/drm/xe_drm.h>
 
 #include "xe_bo.h"
+#include "xe_deadline_mgr.h"
 #include "xe_dep_scheduler.h"
 #include "xe_device.h"
 #include "xe_gt.h"
@@ -266,6 +267,12 @@ static struct xe_exec_queue *__xe_exec_queue_alloc(struct xe_device *xe,
 		}
 	}
 
+	/*
+	 * Must be done after extension processing so the deadline manager can
+	 * detect whether the queue is supported.
+	 */
+	xe_deadline_mgr_init(&q->deadline_mgr, q);
+
 	return q;
 }
 
@@ -332,6 +339,7 @@ static void __xe_exec_queue_fini(struct xe_exec_queue *q)
 {
 	int i;
 
+	xe_deadline_mgr_fini(&q->deadline_mgr);
 	q->ops->fini(q);
 
 	for (i = 0; i < q->width; ++i)
diff --git a/drivers/gpu/drm/xe/xe_exec_queue_types.h b/drivers/gpu/drm/xe/xe_exec_queue_types.h
index ac860f3f042e..3366f10108a3 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue_types.h
+++ b/drivers/gpu/drm/xe/xe_exec_queue_types.h
@@ -10,6 +10,7 @@
 
 #include <drm/gpu_scheduler.h>
 
+#include "xe_deadline_mgr_types.h"
 #include "xe_gpu_scheduler_types.h"
 #include "xe_hw_engine_types.h"
 #include "xe_hw_fence_types.h"
@@ -220,6 +221,9 @@ struct xe_exec_queue {
 		struct list_head link;
 	} pxp;
 
+	/** @deadline_mgr: Deadline manager */
+	struct xe_deadline_mgr deadline_mgr;
+
 	/** @ufence_syncobj: User fence syncobj */
 	struct drm_syncobj *ufence_syncobj;
 
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v2 09/22] drm/xe: Stub out execlists deadline vfuncs as NOPs
  2026-01-05  4:02 [PATCH v2 00/22] Fence deadlines in Xe Matthew Brost
                   ` (7 preceding siblings ...)
  2026-01-05  4:02 ` [PATCH v2 08/22] drm/xe: Initialize deadline manager on exec queues Matthew Brost
@ 2026-01-05  4:02 ` Matthew Brost
  2026-01-05  4:02 ` [PATCH v2 10/22] drm/xe: Make scheduler message lock IRQ-safe Matthew Brost
                   ` (17 subsequent siblings)
  26 siblings, 0 replies; 33+ messages in thread
From: Matthew Brost @ 2026-01-05  4:02 UTC (permalink / raw)
  To: intel-xe; +Cc: daniele.ceraolospurio, carlos.santa

The execlists backend is non-functional but shouldn't crash when
deadlines are used. Stub out the execlists deadline vfuncs as NOPs.

v2:
 - Use set_deadline_state vfunc

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/xe/xe_execlist.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/drivers/gpu/drm/xe/xe_execlist.c b/drivers/gpu/drm/xe/xe_execlist.c
index 46c17a18a3f4..52c83c4b48ac 100644
--- a/drivers/gpu/drm/xe/xe_execlist.c
+++ b/drivers/gpu/drm/xe/xe_execlist.c
@@ -469,6 +469,19 @@ static bool execlist_exec_queue_reset_status(struct xe_exec_queue *q)
 	return false;
 }
 
+static void execlist_exec_queue_set_deadline(struct xe_exec_queue *q,
+					     struct dma_fence *fence,
+					     ktime_t deadline)
+{
+	/* NIY */
+}
+
+static void execlist_exec_queue_set_deadline_state(struct xe_exec_queue *q,
+						   enum xe_deadline_mgr_state state)
+{
+	/* NIY */
+}
+
 static const struct xe_exec_queue_ops execlist_exec_queue_ops = {
 	.init = execlist_exec_queue_init,
 	.kill = execlist_exec_queue_kill,
@@ -481,6 +494,8 @@ static const struct xe_exec_queue_ops execlist_exec_queue_ops = {
 	.suspend_wait = execlist_exec_queue_suspend_wait,
 	.resume = execlist_exec_queue_resume,
 	.reset_status = execlist_exec_queue_reset_status,
+	.set_deadline = execlist_exec_queue_set_deadline,
+	.set_deadline_state = execlist_exec_queue_set_deadline_state,
 };
 
 int xe_execlist_init(struct xe_gt *gt)
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v2 10/22] drm/xe: Make scheduler message lock IRQ-safe
  2026-01-05  4:02 [PATCH v2 00/22] Fence deadlines in Xe Matthew Brost
                   ` (8 preceding siblings ...)
  2026-01-05  4:02 ` [PATCH v2 09/22] drm/xe: Stub out execlists deadline vfuncs as NOPs Matthew Brost
@ 2026-01-05  4:02 ` Matthew Brost
  2026-01-05  4:02 ` [PATCH v2 11/22] drm/xe: Support unstable opcodes for static scheduler messages Matthew Brost
                   ` (16 subsequent siblings)
  26 siblings, 0 replies; 33+ messages in thread
From: Matthew Brost @ 2026-01-05  4:02 UTC (permalink / raw)
  To: intel-xe; +Cc: daniele.ceraolospurio, carlos.santa

It is legal to modify deadlines from IRQ context (e.g., from an hrtimer
callback), and deadline processing can add scheduler messages.
Therefore, the scheduler message lock needs to be IRQ-safe. Replace
xe_sched_msg_lock/unlock with an IRQ-safe scoped guard based on
scoped_guard(spinlock_irqsave, ...).

v2:
 - Fix macro warnings (CI)
 - Rename macro as 'scoped_guard'

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/xe/xe_gpu_scheduler.c | 28 +++++++++++++--------------
 drivers/gpu/drm/xe/xe_gpu_scheduler.h | 17 ++++++++--------
 drivers/gpu/drm/xe/xe_guc_submit.c    | 23 ++++++++++------------
 3 files changed, 32 insertions(+), 36 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_gpu_scheduler.c b/drivers/gpu/drm/xe/xe_gpu_scheduler.c
index f4f23317191f..8ea5480a517d 100644
--- a/drivers/gpu/drm/xe/xe_gpu_scheduler.c
+++ b/drivers/gpu/drm/xe/xe_gpu_scheduler.c
@@ -15,11 +15,12 @@ static void xe_sched_process_msg_queue_if_ready(struct xe_gpu_scheduler *sched)
 {
 	struct xe_sched_msg *msg;
 
-	xe_sched_msg_lock(sched);
-	msg = list_first_entry_or_null(&sched->msgs, struct xe_sched_msg, link);
-	if (msg)
-		xe_sched_process_msg_queue(sched);
-	xe_sched_msg_unlock(sched);
+	xe_sched_msg_scoped_guard(sched) {
+		msg = list_first_entry_or_null(&sched->msgs,
+					       struct xe_sched_msg, link);
+		if (msg)
+			xe_sched_process_msg_queue(sched);
+	}
 }
 
 static struct xe_sched_msg *
@@ -27,12 +28,12 @@ xe_sched_get_msg(struct xe_gpu_scheduler *sched)
 {
 	struct xe_sched_msg *msg;
 
-	xe_sched_msg_lock(sched);
-	msg = list_first_entry_or_null(&sched->msgs,
-				       struct xe_sched_msg, link);
-	if (msg)
-		list_del_init(&msg->link);
-	xe_sched_msg_unlock(sched);
+	xe_sched_msg_scoped_guard(sched) {
+		msg = list_first_entry_or_null(&sched->msgs,
+					       struct xe_sched_msg, link);
+		if (msg)
+			list_del_init(&msg->link);
+	}
 
 	return msg;
 }
@@ -110,9 +111,8 @@ void xe_sched_submission_resume_tdr(struct xe_gpu_scheduler *sched)
 void xe_sched_add_msg(struct xe_gpu_scheduler *sched,
 		      struct xe_sched_msg *msg)
 {
-	xe_sched_msg_lock(sched);
-	xe_sched_add_msg_locked(sched, msg);
-	xe_sched_msg_unlock(sched);
+	xe_sched_msg_scoped_guard(sched)
+		xe_sched_add_msg_locked(sched, msg);
 }
 
 void xe_sched_add_msg_locked(struct xe_gpu_scheduler *sched,
diff --git a/drivers/gpu/drm/xe/xe_gpu_scheduler.h b/drivers/gpu/drm/xe/xe_gpu_scheduler.h
index dceb2cd0ee5b..269508c62b8c 100644
--- a/drivers/gpu/drm/xe/xe_gpu_scheduler.h
+++ b/drivers/gpu/drm/xe/xe_gpu_scheduler.h
@@ -31,15 +31,14 @@ void xe_sched_add_msg_locked(struct xe_gpu_scheduler *sched,
 void xe_sched_add_msg_head(struct xe_gpu_scheduler *sched,
 			   struct xe_sched_msg *msg);
 
-static inline void xe_sched_msg_lock(struct xe_gpu_scheduler *sched)
-{
-	spin_lock(&sched->msg_lock);
-}
-
-static inline void xe_sched_msg_unlock(struct xe_gpu_scheduler *sched)
-{
-	spin_unlock(&sched->msg_lock);
-}
+/**
+ * xe_sched_msg_scoped_guard() - Scoped guard for scheduler message lock
+ * @__sched: xe_gpu_scheduler object
+ *
+ * IRQ-safe scoped guard for scheduler message lock
+ */
+#define xe_sched_msg_scoped_guard(__sched)	\
+	scoped_guard(spinlock_irqsave, &(__sched)->msg_lock)
 
 static inline void xe_sched_stop(struct xe_gpu_scheduler *sched)
 {
diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
index 7a4218f76024..07ffab338e4a 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit.c
+++ b/drivers/gpu/drm/xe/xe_guc_submit.c
@@ -2329,10 +2329,10 @@ static int guc_exec_queue_suspend(struct xe_exec_queue *q)
 	if (exec_queue_killed_or_banned_or_wedged(q))
 		return -EINVAL;
 
-	xe_sched_msg_lock(sched);
-	if (guc_exec_queue_try_add_msg(q, msg, SUSPEND))
-		q->guc->suspend_pending = true;
-	xe_sched_msg_unlock(sched);
+	xe_sched_msg_scoped_guard(sched) {
+		if (guc_exec_queue_try_add_msg(q, msg, SUSPEND))
+			q->guc->suspend_pending = true;
+	}
 
 	return 0;
 }
@@ -2388,9 +2388,8 @@ static void guc_exec_queue_resume(struct xe_exec_queue *q)
 
 	xe_gt_assert(guc_to_gt(guc), !q->guc->suspend_pending);
 
-	xe_sched_msg_lock(sched);
-	guc_exec_queue_try_add_msg(q, msg, RESUME);
-	xe_sched_msg_unlock(sched);
+	xe_sched_msg_scoped_guard(sched)
+		guc_exec_queue_try_add_msg(q, msg, RESUME);
 }
 
 static bool guc_exec_queue_reset_status(struct xe_exec_queue *q)
@@ -2810,9 +2809,8 @@ static void guc_exec_queue_replay_pending_state_change(struct xe_exec_queue *q)
 	if (q->guc->needs_suspend) {
 		msg = q->guc->static_msgs + STATIC_MSG_SUSPEND;
 
-		xe_sched_msg_lock(sched);
-		guc_exec_queue_try_add_msg_head(q, msg, SUSPEND);
-		xe_sched_msg_unlock(sched);
+		xe_sched_msg_scoped_guard(sched)
+			guc_exec_queue_try_add_msg_head(q, msg, SUSPEND);
 
 		q->guc->needs_suspend = false;
 	}
@@ -2825,9 +2823,8 @@ static void guc_exec_queue_replay_pending_state_change(struct xe_exec_queue *q)
 	if (q->guc->needs_resume) {
 		msg = q->guc->static_msgs + STATIC_MSG_RESUME;
 
-		xe_sched_msg_lock(sched);
-		guc_exec_queue_try_add_msg_head(q, msg, RESUME);
-		xe_sched_msg_unlock(sched);
+		xe_sched_msg_scoped_guard(sched)
+			guc_exec_queue_try_add_msg_head(q, msg, RESUME);
 
 		q->guc->needs_resume = false;
 	}
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v2 11/22] drm/xe: Support unstable opcodes for static scheduler messages
  2026-01-05  4:02 [PATCH v2 00/22] Fence deadlines in Xe Matthew Brost
                   ` (9 preceding siblings ...)
  2026-01-05  4:02 ` [PATCH v2 10/22] drm/xe: Make scheduler message lock IRQ-safe Matthew Brost
@ 2026-01-05  4:02 ` Matthew Brost
  2026-01-05  4:02 ` [PATCH v2 12/22] drm/xe: Implement GuC submission backend ops for deadlines Matthew Brost
                   ` (15 subsequent siblings)
  26 siblings, 0 replies; 33+ messages in thread
From: Matthew Brost @ 2026-01-05  4:02 UTC (permalink / raw)
  To: intel-xe; +Cc: daniele.ceraolospurio, carlos.santa

To support a single static scheduler message with a changing opcode,
read the message opcode under the message lock and pass it to the
scheduler for processing.

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/xe/xe_gpu_scheduler.c       | 10 +++++++---
 drivers/gpu/drm/xe/xe_gpu_scheduler_types.h |  2 +-
 drivers/gpu/drm/xe/xe_guc_submit.c          |  5 +++--
 3 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_gpu_scheduler.c b/drivers/gpu/drm/xe/xe_gpu_scheduler.c
index 8ea5480a517d..0c568cf0460b 100644
--- a/drivers/gpu/drm/xe/xe_gpu_scheduler.c
+++ b/drivers/gpu/drm/xe/xe_gpu_scheduler.c
@@ -24,7 +24,7 @@ static void xe_sched_process_msg_queue_if_ready(struct xe_gpu_scheduler *sched)
 }
 
 static struct xe_sched_msg *
-xe_sched_get_msg(struct xe_gpu_scheduler *sched)
+xe_sched_get_msg(struct xe_gpu_scheduler *sched, unsigned int *opcode)
 {
 	struct xe_sched_msg *msg;
 
@@ -33,6 +33,9 @@ xe_sched_get_msg(struct xe_gpu_scheduler *sched)
 					       struct xe_sched_msg, link);
 		if (msg)
 			list_del_init(&msg->link);
+
+		/* Opcode only stable under lock for static messages */
+		*opcode = msg ? msg->opcode : 0;
 	}
 
 	return msg;
@@ -43,13 +46,14 @@ static void xe_sched_process_msg_work(struct work_struct *w)
 	struct xe_gpu_scheduler *sched =
 		container_of(w, struct xe_gpu_scheduler, work_process_msg);
 	struct xe_sched_msg *msg;
+	unsigned int opcode;
 
 	if (READ_ONCE(sched->base.pause_submit))
 		return;
 
-	msg = xe_sched_get_msg(sched);
+	msg = xe_sched_get_msg(sched, &opcode);
 	if (msg) {
-		sched->ops->process_msg(msg);
+		sched->ops->process_msg(msg, opcode);
 
 		xe_sched_process_msg_queue_if_ready(sched);
 	}
diff --git a/drivers/gpu/drm/xe/xe_gpu_scheduler_types.h b/drivers/gpu/drm/xe/xe_gpu_scheduler_types.h
index 63d9bf92583c..ea8b0d703d12 100644
--- a/drivers/gpu/drm/xe/xe_gpu_scheduler_types.h
+++ b/drivers/gpu/drm/xe/xe_gpu_scheduler_types.h
@@ -34,7 +34,7 @@ struct xe_sched_backend_ops {
 	 * @process_msg: Process a message. Allowed to block, it is this
 	 * function's responsibility to free message if dynamically allocated.
 	 */
-	void (*process_msg)(struct xe_sched_msg *msg);
+	void (*process_msg)(struct xe_sched_msg *msg, unsigned int opcode);
 };
 
 /**
diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
index 07ffab338e4a..26cd9fa6e2b3 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit.c
+++ b/drivers/gpu/drm/xe/xe_guc_submit.c
@@ -2046,13 +2046,14 @@ static void __guc_exec_queue_process_msg_set_multi_queue_priority(struct xe_sche
 #define MSG_LOCKED	BIT(8)
 #define MSG_HEAD	BIT(9)
 
-static void guc_exec_queue_process_msg(struct xe_sched_msg *msg)
+static void guc_exec_queue_process_msg(struct xe_sched_msg *msg,
+				       unsigned int opcode)
 {
 	struct xe_device *xe = guc_to_xe(exec_queue_to_guc(msg->private_data));
 
 	trace_xe_sched_msg_recv(msg);
 
-	switch (msg->opcode) {
+	switch (opcode) {
 	case CLEANUP:
 		__guc_exec_queue_process_msg_cleanup(msg);
 		break;
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v2 12/22] drm/xe: Implement GuC submission backend ops for deadlines
  2026-01-05  4:02 [PATCH v2 00/22] Fence deadlines in Xe Matthew Brost
                   ` (10 preceding siblings ...)
  2026-01-05  4:02 ` [PATCH v2 11/22] drm/xe: Support unstable opcodes for static scheduler messages Matthew Brost
@ 2026-01-05  4:02 ` Matthew Brost
  2026-01-10 10:48   ` kernel test robot
  2026-01-05  4:02 ` [PATCH v2 13/22] drm/xe: Enable deadlines on hardware fences Matthew Brost
                   ` (14 subsequent siblings)
  26 siblings, 1 reply; 33+ messages in thread
From: Matthew Brost @ 2026-01-05  4:02 UTC (permalink / raw)
  To: intel-xe; +Cc: daniele.ceraolospurio, carlos.santa

Implement GuC submission backend ops for deadlines, which dynamically
raise or lower the priority of user queues that have CAP_SYS_NICE and
adjust queue frequency on deadline state changes. The idea is that if a
fence on a queue is at risk of missing a deadline, we try to ensure the
fence completes as soon as possible.

v2:
 - Disallow parallel / multi-q
 - Tie removal of deadline to job's refcount
 - Remove exit_deadline, rather use enum for control

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/xe/xe_guc_exec_queue_types.h |   2 +-
 drivers/gpu/drm/xe/xe_guc_submit.c           | 133 +++++++++++++++++--
 drivers/gpu/drm/xe/xe_sched_job.c            |   3 +
 3 files changed, 126 insertions(+), 12 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_guc_exec_queue_types.h b/drivers/gpu/drm/xe/xe_guc_exec_queue_types.h
index a3b034e4b205..83dfb15aa4bd 100644
--- a/drivers/gpu/drm/xe/xe_guc_exec_queue_types.h
+++ b/drivers/gpu/drm/xe/xe_guc_exec_queue_types.h
@@ -31,7 +31,7 @@ struct xe_guc_exec_queue {
 	 * a message needs to sent through the GPU scheduler but memory
 	 * allocations are not allowed.
 	 */
-#define MAX_STATIC_MSG_TYPE	3
+#define MAX_STATIC_MSG_TYPE	4
 	struct xe_sched_msg static_msgs[MAX_STATIC_MSG_TYPE];
 	/** @lr_tdr: long running TDR worker */
 	struct work_struct lr_tdr;
diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
index 26cd9fa6e2b3..1aca444faf8b 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit.c
+++ b/drivers/gpu/drm/xe/xe_guc_submit.c
@@ -20,6 +20,8 @@
 #include "regs/xe_lrc_layout.h"
 #include "xe_assert.h"
 #include "xe_bo.h"
+#include "xe_deadline_mgr.h"
+#include "xe_deadline_mgr_types.h"
 #include "xe_devcoredump.h"
 #include "xe_device.h"
 #include "xe_exec_queue.h"
@@ -552,6 +554,35 @@ static const int xe_exec_queue_prio_to_guc[] = {
 	[XE_EXEC_QUEUE_PRIORITY_KERNEL] = GUC_CLIENT_PRIORITY_KMD_HIGH,
 };
 
+static void deadline_policies(struct xe_guc *guc, struct xe_exec_queue *q,
+			      enum xe_deadline_mgr_state state)
+{
+	struct exec_queue_policy policy;
+	enum xe_exec_queue_priority prio = q->sched_props.priority;
+	u32 slpc_exec_queue_freq_req = 0;
+
+	xe_gt_assert(guc_to_gt(guc), exec_queue_registered(q) &&
+		     !xe_exec_queue_is_multi_queue_secondary(q));
+	xe_gt_assert(guc_to_gt(guc), state !=
+		     XE_DEADLINE_MGR_STATE_UNSUPPORTED);
+
+	if (state == XE_DEADLINE_MGR_STATE_PRIO_BOOST &&
+	    (q->flags & EXEC_QUEUE_FLAG_CAP_SYS_NICE))
+		prio = XE_EXEC_QUEUE_PRIORITY_HIGH;
+
+	if (state != XE_DEADLINE_MGR_STATE_NO_BOOST ||
+	    (q->flags & EXEC_QUEUE_FLAG_LOW_LATENCY))
+		slpc_exec_queue_freq_req |= SLPC_CTX_FREQ_REQ_IS_COMPUTE;
+
+	__guc_exec_queue_policy_start_klv(&policy, q->guc->id);
+	__guc_exec_queue_policy_add_priority(&policy, xe_exec_queue_prio_to_guc[prio]);
+	__guc_exec_queue_policy_add_slpc_exec_queue_freq_req(&policy,
+							     slpc_exec_queue_freq_req);
+
+	xe_guc_ct_send(&guc->ct, (u32 *)&policy.h2g,
+		       __guc_exec_queue_policy_action_size(&policy), 0, 0);
+}
+
 static void init_policies(struct xe_guc *guc, struct xe_exec_queue *q)
 {
 	struct exec_queue_policy policy;
@@ -1863,6 +1894,18 @@ static void __guc_exec_queue_destroy(struct xe_guc *guc, struct xe_exec_queue *q
 	guc_exec_queue_destroy_async(q);
 }
 
+#define CLEANUP				1	/* Non-zero values to catch uninitialized msg */
+#define SET_SCHED_PROPS			2
+#define SUSPEND				3
+#define RESUME				4
+#define SET_MULTI_QUEUE_PRIORITY	5
+#define ENTER_DEADLINE_FREQ		6
+#define ENTER_DEADLINE_PRIO		7
+#define EXIT_DEADLINE			8
+#define OPCODE_MASK	0xf
+#define MSG_LOCKED	BIT(8)
+#define MSG_HEAD	BIT(9)
+
 static void __guc_exec_queue_process_msg_cleanup(struct xe_sched_msg *msg)
 {
 	struct xe_exec_queue *q = msg->private_data;
@@ -2037,14 +2080,24 @@ static void __guc_exec_queue_process_msg_set_multi_queue_priority(struct xe_sche
 	kfree(msg);
 }
 
-#define CLEANUP				1	/* Non-zero values to catch uninitialized msg */
-#define SET_SCHED_PROPS			2
-#define SUSPEND				3
-#define RESUME				4
-#define SET_MULTI_QUEUE_PRIORITY	5
-#define OPCODE_MASK	0xf
-#define MSG_LOCKED	BIT(8)
-#define MSG_HEAD	BIT(9)
+static void
+__guc_exec_queue_process_msg_set_deadline_state(struct xe_sched_msg *msg,
+						unsigned int opcode)
+{
+	struct xe_exec_queue *q = msg->private_data;
+	struct xe_guc *guc = exec_queue_to_guc(q);
+	enum xe_deadline_mgr_state state;
+
+	if (opcode == EXIT_DEADLINE)
+		state = XE_DEADLINE_MGR_STATE_NO_BOOST;
+	else if (opcode == ENTER_DEADLINE_FREQ)
+		state = XE_DEADLINE_MGR_STATE_FREQ_BOOST;
+	else
+		state = XE_DEADLINE_MGR_STATE_PRIO_BOOST;
+
+	if (guc_exec_queue_allowed_to_change_state(q))
+		deadline_policies(guc, q, state);
+}
 
 static void guc_exec_queue_process_msg(struct xe_sched_msg *msg,
 				       unsigned int opcode)
@@ -2069,6 +2122,11 @@ static void guc_exec_queue_process_msg(struct xe_sched_msg *msg,
 	case SET_MULTI_QUEUE_PRIORITY:
 		__guc_exec_queue_process_msg_set_multi_queue_priority(msg);
 		break;
+	case ENTER_DEADLINE_FREQ:
+	case ENTER_DEADLINE_PRIO:
+	case EXIT_DEADLINE:
+		__guc_exec_queue_process_msg_set_deadline_state(msg, opcode);
+		break;
 	default:
 		XE_WARN_ON("Unknown message type");
 	}
@@ -2232,9 +2290,11 @@ static bool guc_exec_queue_try_add_msg(struct xe_exec_queue *q,
 	return true;
 }
 
-#define STATIC_MSG_CLEANUP	0
-#define STATIC_MSG_SUSPEND	1
-#define STATIC_MSG_RESUME	2
+#define STATIC_MSG_CLEANUP		0
+#define STATIC_MSG_SUSPEND		1
+#define STATIC_MSG_RESUME		2
+#define STATIC_MSG_SET_DEADLINE_STATE	3
+
 static void guc_exec_queue_destroy(struct xe_exec_queue *q)
 {
 	struct xe_sched_msg *msg = q->guc->static_msgs + STATIC_MSG_CLEANUP;
@@ -2402,6 +2462,55 @@ static bool guc_exec_queue_reset_status(struct xe_exec_queue *q)
 	return exec_queue_reset(q) || exec_queue_killed_or_banned_or_wedged(q);
 }
 
+static void guc_exec_queue_set_deadline(struct xe_exec_queue *q,
+					struct dma_fence *fence,
+					ktime_t deadline)
+{
+	xe_deadline_mgr_add_deadline(&q->deadline_mgr, fence, deadline);
+}
+
+static void guc_exec_queue_set_deadline_state(struct xe_exec_queue *q,
+					      enum xe_deadline_mgr_state state)
+{
+	struct xe_gpu_scheduler *sched = &q->guc->sched;
+	struct xe_sched_msg *msg = q->guc->static_msgs +
+		STATIC_MSG_SET_DEADLINE_STATE;
+	struct xe_guc *guc = exec_queue_to_guc(q);
+	unsigned int opcode;
+
+	xe_gt_assert(guc_to_gt(guc), state !=
+		     XE_DEADLINE_MGR_STATE_UNSUPPORTED);
+
+	switch (state) {
+	case XE_DEADLINE_MGR_STATE_NO_BOOST:
+		opcode = EXIT_DEADLINE;
+		break;
+	case XE_DEADLINE_MGR_STATE_FREQ_BOOST:
+		opcode = ENTER_DEADLINE_FREQ;
+		break;
+	case XE_DEADLINE_MGR_STATE_PRIO_BOOST:
+		opcode = ENTER_DEADLINE_PRIO;
+		break;
+	default:
+		drm_warn(&guc_to_xe(guc)->drm, "NOT POSSIBLE");
+		return;
+	}
+
+	xe_sched_msg_scoped_guard(sched) {
+		if (!guc_exec_queue_try_add_msg(q, msg, opcode)) {
+			bool added;
+
+			/*
+			 * A previous deadline state change has yet to be
+			 * processed; remove it and re-add with the new opcode.
+			 */
+			list_del_init(&msg->link);
+
+			added = guc_exec_queue_try_add_msg(q, msg, opcode);
+			xe_gt_assert(guc_to_gt(guc), added);
+		}
+	}
+}
+
 /*
  * All of these functions are an abstraction layer which other parts of Xe can
  * use to trap into the GuC backend. All of these functions, aside from init,
@@ -2421,6 +2530,8 @@ static const struct xe_exec_queue_ops guc_exec_queue_ops = {
 	.suspend_wait = guc_exec_queue_suspend_wait,
 	.resume = guc_exec_queue_resume,
 	.reset_status = guc_exec_queue_reset_status,
+	.set_deadline = guc_exec_queue_set_deadline,
+	.set_deadline_state = guc_exec_queue_set_deadline_state,
 };
 
 static void guc_exec_queue_stop(struct xe_guc *guc, struct xe_exec_queue *q)
diff --git a/drivers/gpu/drm/xe/xe_sched_job.c b/drivers/gpu/drm/xe/xe_sched_job.c
index 6099b4445835..3d02f02ae9bb 100644
--- a/drivers/gpu/drm/xe/xe_sched_job.c
+++ b/drivers/gpu/drm/xe/xe_sched_job.c
@@ -9,6 +9,7 @@
 #include <linux/dma-fence-chain.h>
 #include <linux/slab.h>
 
+#include "xe_deadline_mgr.h"
 #include "xe_device.h"
 #include "xe_exec_queue.h"
 #include "xe_gt.h"
@@ -174,6 +175,8 @@ void xe_sched_job_destroy(struct kref *ref)
 	struct xe_device *xe = job_to_xe(job);
 	struct xe_exec_queue *q = job->q;
 
+	if (job->fence)
+		xe_deadline_mgr_remove_deadline(&q->deadline_mgr, job->fence);
 	xe_sched_job_free_fences(job);
 	dma_fence_put(job->fence);
 	drm_sched_job_cleanup(&job->drm);
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v2 13/22] drm/xe: Enable deadlines on hardware fences
  2026-01-05  4:02 [PATCH v2 00/22] Fence deadlines in Xe Matthew Brost
                   ` (11 preceding siblings ...)
  2026-01-05  4:02 ` [PATCH v2 12/22] drm/xe: Implement GuC submission backend ops for deadlines Matthew Brost
@ 2026-01-05  4:02 ` Matthew Brost
  2026-01-05  4:02 ` [PATCH v2 14/22] drm/xe: Fix Kconfig.profile newlines Matthew Brost
                   ` (13 subsequent siblings)
  26 siblings, 0 replies; 33+ messages in thread
From: Matthew Brost @ 2026-01-05  4:02 UTC (permalink / raw)
  To: intel-xe; +Cc: daniele.ceraolospurio, carlos.santa

Implement the set_deadline vfunc on hardware fences, which, with GuC
submission, allows priority and frequency boosts for queues that have
fences at risk of missing a deadline.

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/xe/xe_hw_fence.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/drivers/gpu/drm/xe/xe_hw_fence.c b/drivers/gpu/drm/xe/xe_hw_fence.c
index 37ba1d9612ba..f6f7ceb5cfc5 100644
--- a/drivers/gpu/drm/xe/xe_hw_fence.c
+++ b/drivers/gpu/drm/xe/xe_hw_fence.c
@@ -66,6 +66,7 @@ static void hw_fence_irq_run_cb(struct irq_work *work)
 			if (dma_fence_is_signaled_locked(dma_fence)) {
 				trace_xe_hw_fence_signal(fence);
 				list_del_init(&fence->irq_link);
+				fence->q = NULL;
 				dma_fence_put(dma_fence);
 			}
 		}
@@ -93,6 +94,7 @@ void xe_hw_fence_irq_finish(struct xe_hw_fence_irq *irq)
 		spin_lock_irqsave(&irq->lock, flags);
 		list_for_each_entry_safe(fence, next, &irq->pending, irq_link) {
 			list_del_init(&fence->irq_link);
+			fence->q = NULL;
 			XE_WARN_ON(dma_fence_check_and_signal_locked(&fence->dma));
 			dma_fence_put(&fence->dma);
 		}
@@ -197,12 +199,23 @@ static void xe_hw_fence_release(struct dma_fence *dma_fence)
 	call_rcu(&dma_fence->rcu, fence_free);
 }
 
+static void xe_hw_fence_set_deadline(struct dma_fence *dma_fence,
+				     ktime_t deadline)
+{
+	struct xe_hw_fence *fence = to_xe_hw_fence(dma_fence);
+
+	guard(spinlock_irqsave)(dma_fence->lock);
+	if (fence->q)
+		fence->q->ops->set_deadline(fence->q, dma_fence, deadline);
+}
+
 static const struct dma_fence_ops xe_hw_fence_ops = {
 	.get_driver_name = xe_hw_fence_get_driver_name,
 	.get_timeline_name = xe_hw_fence_get_timeline_name,
 	.enable_signaling = xe_hw_fence_enable_signaling,
 	.signaled = xe_hw_fence_signaled,
 	.release = xe_hw_fence_release,
+	.set_deadline = xe_hw_fence_set_deadline,
 };
 
 /**
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v2 14/22] drm/xe: Fix Kconfig.profile newlines
  2026-01-05  4:02 [PATCH v2 00/22] Fence deadlines in Xe Matthew Brost
                   ` (12 preceding siblings ...)
  2026-01-05  4:02 ` [PATCH v2 13/22] drm/xe: Enable deadlines on hardware fences Matthew Brost
@ 2026-01-05  4:02 ` Matthew Brost
  2026-02-05 16:06   ` Rodrigo Vivi
  2026-01-05  4:02 ` [PATCH v2 15/22] drm/xe: Add deadline Kconfig options Matthew Brost
                   ` (12 subsequent siblings)
  26 siblings, 1 reply; 33+ messages in thread
From: Matthew Brost @ 2026-01-05  4:02 UTC (permalink / raw)
  To: intel-xe; +Cc: daniele.ceraolospurio, carlos.santa

Add missing newlines between Kconfig options in Kconfig.profile to
improve readability.

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/xe/Kconfig.profile | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/gpu/drm/xe/Kconfig.profile b/drivers/gpu/drm/xe/Kconfig.profile
index 7530df998148..594acedf7b56 100644
--- a/drivers/gpu/drm/xe/Kconfig.profile
+++ b/drivers/gpu/drm/xe/Kconfig.profile
@@ -5,24 +5,28 @@ config DRM_XE_JOB_TIMEOUT_MAX
 	help
 	  Configures the default max job timeout after which job will
 	  be forcefully taken away from scheduler.
+
 config DRM_XE_JOB_TIMEOUT_MIN
 	int "Default min job timeout (ms)"
 	default 1 # milliseconds
 	help
 	  Configures the default min job timeout after which job will
 	  be forcefully taken away from scheduler.
+
 config DRM_XE_TIMESLICE_MAX
 	int "Default max timeslice duration (us)"
 	default 10000000 # microseconds
 	help
 	  Configures the default max timeslice duration between multiple
 	  contexts by guc scheduling.
+
 config DRM_XE_TIMESLICE_MIN
 	int "Default min timeslice duration (us)"
 	default 1 # microseconds
 	help
 	  Configures the default min timeslice duration between multiple
 	  contexts by guc scheduling.
+
 config DRM_XE_PREEMPT_TIMEOUT
 	int "Preempt timeout (us, jiffy granularity)"
 	default 640000 # microseconds
@@ -31,6 +35,7 @@ config DRM_XE_PREEMPT_TIMEOUT
 	  when submitting a new context. If the current context does not hit
 	  an arbitration point and yield to HW before the timer expires, the
 	  HW will be reset to allow the more important context to execute.
+
 config DRM_XE_PREEMPT_TIMEOUT_MAX
 	int "Default max preempt timeout (us)"
 	default 10000000 # microseconds
@@ -38,6 +43,7 @@ config DRM_XE_PREEMPT_TIMEOUT_MAX
 	  Configures the default max preempt timeout after which context
 	  will be forcefully taken away and higher priority context will
 	  run.
+
 config DRM_XE_PREEMPT_TIMEOUT_MIN
 	int "Default min preempt timeout (us)"
 	default 1 # microseconds
@@ -45,6 +51,7 @@ config DRM_XE_PREEMPT_TIMEOUT_MIN
 	  Configures the default min preempt timeout after which context
 	  will be forcefully taken away and higher priority context will
 	  run.
+
 config DRM_XE_ENABLE_SCHEDTIMEOUT_LIMIT
 	bool "Default configuration of limitation on scheduler timeout"
 	default y
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v2 15/22] drm/xe: Add deadline Kconfig options
  2026-01-05  4:02 [PATCH v2 00/22] Fence deadlines in Xe Matthew Brost
                   ` (13 preceding siblings ...)
  2026-01-05  4:02 ` [PATCH v2 14/22] drm/xe: Fix Kconfig.profile newlines Matthew Brost
@ 2026-01-05  4:02 ` Matthew Brost
  2026-01-05  4:02 ` [PATCH v2 16/22] drm/xe: Add exec queue deadline trace points Matthew Brost
                   ` (11 subsequent siblings)
  26 siblings, 0 replies; 33+ messages in thread
From: Matthew Brost @ 2026-01-05  4:02 UTC (permalink / raw)
  To: intel-xe; +Cc: daniele.ceraolospurio, carlos.santa

Add Kconfig options to tune the deadline manager behavior.

CONFIG_DRM_XE_DEADLINE_WINDOW_US configures the deadline window. If a
fence has not signaled by the time the current time falls within this
window of its programmed deadline, deadline boosting is activated. The
default is 3000 us; the Kconfig lets OEMs tune deadline sensitivity.

CONFIG_DRM_XE_DEADLINE_PRIO_BOOST_WINDOW_PERCENT configures the
percentage of the deadline window during which priority boosting is
applied in addition to frequency boosting.

CONFIG_DRM_XE_DEADLINE_EXIT_DELAY_MS configures the delay from the last
deadline signaling until the boost mode is exited.

v2:
 - s/CONFIG_DRM_XE_DEADLINE_WINDOW/CONFIG_DRM_XE_DEADLINE_WINDOW_US
 - Add priority boost percent Kconfig

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/xe/Kconfig.profile   | 22 ++++++++++++++++++++++
 drivers/gpu/drm/xe/xe_deadline_mgr.c | 17 ++++++++++++++++-
 2 files changed, 38 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/xe/Kconfig.profile b/drivers/gpu/drm/xe/Kconfig.profile
index 594acedf7b56..e01e7cc11c53 100644
--- a/drivers/gpu/drm/xe/Kconfig.profile
+++ b/drivers/gpu/drm/xe/Kconfig.profile
@@ -60,3 +60,25 @@ config DRM_XE_ENABLE_SCHEDTIMEOUT_LIMIT
 	  to apply to applicable user. For elevated user, all above MIN
 	  and MAX values will apply when this configuration is enable to
 	  apply limitation. By default limitation is applied.
+
+config DRM_XE_DEADLINE_WINDOW_US
+	int "Default deadline window (us)"
+	default 3000
+	help
+	  Specifies the deadline window in microseconds. If a fence has not
+	  been signaled when the current time is within this window of its
+	  programmed deadline, deadline boosting is enabled.
+
+config DRM_XE_DEADLINE_PRIO_BOOST_WINDOW_PERCENT
+	int "Percent of deadline window with priority boost"
+	default 60
+	range 0 100
+	help
+	  Specifies the percentage of the deadline window during which
+	  priority boosting is applied in addition to frequency boosting.
+
+config DRM_XE_DEADLINE_EXIT_DELAY_MS
+	int "Default deadline exit delay (ms)"
+	default 100
+	help
+	  Specifies the deadline exit delay in milliseconds.
diff --git a/drivers/gpu/drm/xe/xe_deadline_mgr.c b/drivers/gpu/drm/xe/xe_deadline_mgr.c
index 061664ed24e3..8ebecf92aa8c 100644
--- a/drivers/gpu/drm/xe/xe_deadline_mgr.c
+++ b/drivers/gpu/drm/xe/xe_deadline_mgr.c
@@ -11,9 +11,24 @@
 #include "xe_gt.h"
 #include "xe_hw_fence.h"
 
+#ifdef CONFIG_DRM_XE_DEADLINE_WINDOW_US
+#define XE_DEADLINE_WINDOW_US	CONFIG_DRM_XE_DEADLINE_WINDOW_US
+#else
 #define XE_DEADLINE_WINDOW_US			3000
-#define XE_DEADLINE_PRIO_BOOST_WINDOW_PERCENT	60
+#endif
+
+#ifdef CONFIG_DRM_XE_DEADLINE_PRIO_BOOST_WINDOW_PERCENT
+#define XE_DEADLINE_PRIO_BOOST_WINDOW_PERCENT	\
+	CONFIG_DRM_XE_DEADLINE_PRIO_BOOST_WINDOW_PERCENT
+#else
+#define XE_DEADLINE_PRIO_BOOST_WINDOW_PERCENT   60
+#endif
+
+#ifdef CONFIG_DRM_XE_DEADLINE_EXIT_DELAY_MS
+#define XE_DEADLINE_EXIT_DELAY_MS	CONFIG_DRM_XE_DEADLINE_EXIT_DELAY_MS
+#else
 #define XE_DEADLINE_EXIT_DELAY_MS		100
+#endif
 
 static ktime_t __xe_deadline_mgr_freq_boost_window(void)
 {
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread
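[Editor's note] The arithmetic implied by the two window options above can be
sketched in plain userspace C. This is a model only, with illustrative names;
the driver itself works in ktime_t and hrtimers:

```c
#include <assert.h>
#include <stdint.h>

#define DEADLINE_WINDOW_US            3000 /* CONFIG_DRM_XE_DEADLINE_WINDOW_US */
#define PRIO_BOOST_WINDOW_PERCENT       60 /* CONFIG_DRM_XE_DEADLINE_PRIO_BOOST_WINDOW_PERCENT */

/* Time (ns) at which frequency boosting begins for a given deadline. */
static int64_t freq_boost_start_ns(int64_t deadline_ns)
{
	return deadline_ns - (int64_t)DEADLINE_WINDOW_US * 1000;
}

/*
 * Time (ns) at which priority boosting additionally begins: the final
 * PRIO_BOOST_WINDOW_PERCENT of the deadline window.
 */
static int64_t prio_boost_start_ns(int64_t deadline_ns)
{
	int64_t window_ns = (int64_t)DEADLINE_WINDOW_US * 1000;
	int64_t prio_window_ns = window_ns * PRIO_BOOST_WINDOW_PERCENT / 100;

	return deadline_ns - prio_window_ns;
}
```

With the defaults, a deadline at t+10 ms starts frequency boosting at t+7 ms
and priority boosting at t+8.2 ms.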

* [PATCH v2 16/22] drm/xe: Add exec queue deadline trace points
  2026-01-05  4:02 [PATCH v2 00/22] Fence deadlines in Xe Matthew Brost
                   ` (14 preceding siblings ...)
  2026-01-05  4:02 ` [PATCH v2 15/22] drm/xe: Add deadline Kconfig options Matthew Brost
@ 2026-01-05  4:02 ` Matthew Brost
  2026-01-05  4:02 ` [PATCH v2 17/22] drm/xe: Add hw fence " Matthew Brost
                   ` (10 subsequent siblings)
  26 siblings, 0 replies; 33+ messages in thread
From: Matthew Brost @ 2026-01-05  4:02 UTC (permalink / raw)
  To: intel-xe; +Cc: daniele.ceraolospurio, carlos.santa

Add exec queue deadline trace points to help debug and profile the
deadline implementation.

v2:
 - Add freq / prio tracepoints

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/xe/xe_deadline_mgr.c |  4 +++-
 drivers/gpu/drm/xe/xe_guc_submit.c   | 10 +++++++---
 drivers/gpu/drm/xe/xe_trace.h        | 20 ++++++++++++++++++++
 3 files changed, 30 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_deadline_mgr.c b/drivers/gpu/drm/xe/xe_deadline_mgr.c
index 8ebecf92aa8c..f3c920017e40 100644
--- a/drivers/gpu/drm/xe/xe_deadline_mgr.c
+++ b/drivers/gpu/drm/xe/xe_deadline_mgr.c
@@ -69,8 +69,10 @@ static bool __xe_deadline_mgr_enter_deadline(struct xe_deadline_mgr *mgr,
 	lockdep_assert_held(&mgr->lock);
 
 	if (XE_DEADLINE_EXIT_DELAY_MS &&
-	    mgr->state != XE_DEADLINE_MGR_STATE_NO_BOOST)
+	    mgr->state != XE_DEADLINE_MGR_STATE_NO_BOOST) {
 		cancel_delayed_work(&mgr->exit_delay);
+		trace_xe_exec_queue_cancel_deadline_exit(mgr->q);
+	}
 
 	if (mgr->state != state && !__xe_deadline_mgr_deadline_signaled(mgr)) {
 		mgr->state = state;
diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
index 1aca444faf8b..7e47c375f530 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit.c
+++ b/drivers/gpu/drm/xe/xe_guc_submit.c
@@ -2088,12 +2088,16 @@ __guc_exec_queue_process_msg_set_deadline_state(struct xe_sched_msg *msg,
 	struct xe_guc *guc = exec_queue_to_guc(q);
 	enum xe_deadline_mgr_state state;
 
-	if (opcode == EXIT_DEADLINE)
+	if (opcode == EXIT_DEADLINE) {
 		state = XE_DEADLINE_MGR_STATE_NO_BOOST;
-	else if (opcode == ENTER_DEADLINE_FREQ)
+		trace_xe_exec_queue_exit_deadline(q);
+	} else if (opcode == ENTER_DEADLINE_FREQ) {
 		state = XE_DEADLINE_MGR_STATE_FREQ_BOOST;
-	else
+		trace_xe_exec_queue_enter_deadline_freq(q);
+	} else {
 		state = XE_DEADLINE_MGR_STATE_PRIO_BOOST;
+		trace_xe_exec_queue_enter_deadline_prio(q);
+	}
 
 	if (guc_exec_queue_allowed_to_change_state(q))
 		deadline_policies(guc, q, state);
diff --git a/drivers/gpu/drm/xe/xe_trace.h b/drivers/gpu/drm/xe/xe_trace.h
index 6d12fcc13f43..14592403d2c0 100644
--- a/drivers/gpu/drm/xe/xe_trace.h
+++ b/drivers/gpu/drm/xe/xe_trace.h
@@ -233,6 +233,26 @@ DEFINE_EVENT(xe_exec_queue, xe_exec_queue_lr_cleanup,
 	     TP_ARGS(q)
 );
 
+DEFINE_EVENT(xe_exec_queue, xe_exec_queue_enter_deadline_freq,
+	     TP_PROTO(struct xe_exec_queue *q),
+	     TP_ARGS(q)
+);
+
+DEFINE_EVENT(xe_exec_queue, xe_exec_queue_enter_deadline_prio,
+	     TP_PROTO(struct xe_exec_queue *q),
+	     TP_ARGS(q)
+);
+
+DEFINE_EVENT(xe_exec_queue, xe_exec_queue_exit_deadline,
+	     TP_PROTO(struct xe_exec_queue *q),
+	     TP_ARGS(q)
+);
+
+DEFINE_EVENT(xe_exec_queue, xe_exec_queue_cancel_deadline_exit,
+	     TP_PROTO(struct xe_exec_queue *q),
+	     TP_ARGS(q)
+);
+
 DECLARE_EVENT_CLASS(xe_sched_job,
 		    TP_PROTO(struct xe_sched_job *job),
 		    TP_ARGS(job),
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v2 17/22] drm/xe: Add hw fence deadline trace points
  2026-01-05  4:02 [PATCH v2 00/22] Fence deadlines in Xe Matthew Brost
                   ` (15 preceding siblings ...)
  2026-01-05  4:02 ` [PATCH v2 16/22] drm/xe: Add exec queue deadline trace points Matthew Brost
@ 2026-01-05  4:02 ` Matthew Brost
  2026-01-05  4:02 ` [PATCH v2 18/22] drm/xe: Add timestamp_ms to LRC snapshot Matthew Brost
                   ` (9 subsequent siblings)
  26 siblings, 0 replies; 33+ messages in thread
From: Matthew Brost @ 2026-01-05  4:02 UTC (permalink / raw)
  To: intel-xe; +Cc: daniele.ceraolospurio, carlos.santa

Add hw fence deadline trace points to help debug and profile the
deadline implementation.

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/xe/xe_deadline_mgr.c | 14 ++++++++++++--
 drivers/gpu/drm/xe/xe_trace.h        | 22 ++++++++++++++++++++--
 2 files changed, 32 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_deadline_mgr.c b/drivers/gpu/drm/xe/xe_deadline_mgr.c
index f3c920017e40..e2ee23f6e787 100644
--- a/drivers/gpu/drm/xe/xe_deadline_mgr.c
+++ b/drivers/gpu/drm/xe/xe_deadline_mgr.c
@@ -10,6 +10,7 @@
 #include "xe_exec_queue.h"
 #include "xe_gt.h"
 #include "xe_hw_fence.h"
+#include "xe_trace.h"
 
 #ifdef CONFIG_DRM_XE_DEADLINE_WINDOW_US
 #define XE_DEADLINE_WINDOW_US	CONFIG_DRM_XE_DEADLINE_WINDOW_US
@@ -345,6 +346,8 @@ void xe_deadline_mgr_add_deadline(struct xe_deadline_mgr *mgr,
 	__xe_deadline_mgr_remove_deadline(mgr, hw_fence);
 	__xe_deadline_mgr_add_deadline(mgr, hw_fence, deadline);
 	__xe_deadline_mgr_update_deadline(mgr);
+
+	trace_xe_hw_fence_add_deadline(hw_fence);
 }
 
 /**
@@ -359,15 +362,22 @@ void xe_deadline_mgr_add_deadline(struct xe_deadline_mgr *mgr,
 void xe_deadline_mgr_remove_deadline(struct xe_deadline_mgr *mgr,
 				     struct dma_fence *fence)
 {
+	struct xe_hw_fence *hw_fence;
+
 	if (mgr->state == XE_DEADLINE_MGR_STATE_UNSUPPORTED)
 		return;
 
 	guard(spinlock_irqsave)(&mgr->lock);
 
+	hw_fence = to_xe_hw_fence(fence);
+
 	xe_assert(gt_to_xe(mgr->q->gt), !dma_fence_is_container(fence));
 	xe_assert(gt_to_xe(mgr->q->gt), dma_fence_is_signaled(fence));
 	xe_assert(gt_to_xe(mgr->q->gt),
-		  to_xe_hw_fence(fence)->deadline.time != XE_DEADLINE_DONE);
+		  hw_fence->deadline.time != XE_DEADLINE_DONE);
+
+	if (hw_fence->deadline.time != XE_DEADLINE_NONE)
+		trace_xe_hw_fence_remove_deadline(hw_fence);
 
-	__xe_deadline_mgr_remove_deadline(mgr, to_xe_hw_fence(fence));
+	__xe_deadline_mgr_remove_deadline(mgr, hw_fence);
 }
diff --git a/drivers/gpu/drm/xe/xe_trace.h b/drivers/gpu/drm/xe/xe_trace.h
index 14592403d2c0..5c84b8503de3 100644
--- a/drivers/gpu/drm/xe/xe_trace.h
+++ b/drivers/gpu/drm/xe/xe_trace.h
@@ -369,6 +369,8 @@ DECLARE_EVENT_CLASS(xe_hw_fence,
 			     __field(u64, ctx)
 			     __field(u32, seqno)
 			     __field(struct xe_hw_fence *, fence)
+			     __field(s64, delta_ns)
+			     __field(bool, missed)
 			     ),
 
 		    TP_fast_assign(
@@ -376,10 +378,16 @@ DECLARE_EVENT_CLASS(xe_hw_fence,
 			   __entry->ctx = fence->dma.context;
 			   __entry->seqno = fence->dma.seqno;
 			   __entry->fence = fence;
+			   __entry->delta_ns =
+				ktime_to_ns(ktime_sub(fence->deadline.time, ktime_get()));
+			   __entry->missed = __entry->delta_ns < 0 &&
+				fence->deadline.time != XE_DEADLINE_NONE;
 			   ),
 
-		    TP_printk("dev=%s, ctx=0x%016llx, fence=%p, seqno=%u",
-			      __get_str(dev), __entry->ctx, __entry->fence, __entry->seqno)
+		    TP_printk("dev=%s, ctx=0x%llx, fence=%p, seqno=%u, missed=%d, delta_ns=%lld",
+			      __get_str(dev), __entry->ctx, __entry->fence,
+			      __entry->seqno, __entry->missed ? 1 : 0,
+			      __entry->delta_ns)
 );
 
 DEFINE_EVENT(xe_hw_fence, xe_hw_fence_create,
@@ -397,6 +405,16 @@ DEFINE_EVENT(xe_hw_fence, xe_hw_fence_try_signal,
 	     TP_ARGS(fence)
 );
 
+DEFINE_EVENT(xe_hw_fence, xe_hw_fence_add_deadline,
+	     TP_PROTO(struct xe_hw_fence *fence),
+	     TP_ARGS(fence)
+);
+
+DEFINE_EVENT(xe_hw_fence, xe_hw_fence_remove_deadline,
+	     TP_PROTO(struct xe_hw_fence *fence),
+	     TP_ARGS(fence)
+);
+
 TRACE_EVENT(xe_reg_rw,
 	TP_PROTO(struct xe_mmio *mmio, bool write, u32 reg, u64 val, int len),
 
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v2 18/22] drm/xe: Add timestamp_ms to LRC snapshot
  2026-01-05  4:02 [PATCH v2 00/22] Fence deadlines in Xe Matthew Brost
                   ` (16 preceding siblings ...)
  2026-01-05  4:02 ` [PATCH v2 17/22] drm/xe: Add hw fence " Matthew Brost
@ 2026-01-05  4:02 ` Matthew Brost
  2026-01-05  4:02 ` [PATCH v2 19/22] drm/xe: Enforce GuC static message defines Matthew Brost
                   ` (8 subsequent siblings)
  26 siblings, 0 replies; 33+ messages in thread
From: Matthew Brost @ 2026-01-05  4:02 UTC (permalink / raw)
  To: intel-xe; +Cc: daniele.ceraolospurio, carlos.santa

Add a timestamp in milliseconds to the LRC snapshot to make it easier to
reason about how long the LRC has been running and the average duration
of each job.

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/xe/xe_lrc.c | 4 ++++
 drivers/gpu/drm/xe/xe_lrc.h | 1 +
 2 files changed, 5 insertions(+)

diff --git a/drivers/gpu/drm/xe/xe_lrc.c b/drivers/gpu/drm/xe/xe_lrc.c
index eccc7f2642bf..f3ada74d297e 100644
--- a/drivers/gpu/drm/xe/xe_lrc.c
+++ b/drivers/gpu/drm/xe/xe_lrc.c
@@ -22,6 +22,7 @@
 #include "xe_drm_client.h"
 #include "xe_exec_queue_types.h"
 #include "xe_gt.h"
+#include "xe_gt_clock.h"
 #include "xe_gt_printk.h"
 #include "xe_hw_fence.h"
 #include "xe_map.h"
@@ -2289,6 +2290,8 @@ struct xe_lrc_snapshot *xe_lrc_snapshot_capture(struct xe_lrc *lrc)
 	snapshot->replay_size = lrc->replay_size;
 	snapshot->lrc_snapshot = NULL;
 	snapshot->ctx_timestamp = lower_32_bits(xe_lrc_ctx_timestamp(lrc));
+	snapshot->ctx_timestamp_ms =
+		xe_gt_clock_interval_to_ms(lrc->gt, xe_lrc_ctx_timestamp(lrc));
 	snapshot->ctx_job_timestamp = xe_lrc_ctx_job_timestamp(lrc);
 	return snapshot;
 }
@@ -2342,6 +2345,7 @@ void xe_lrc_snapshot_print(struct xe_lrc_snapshot *snapshot, struct drm_printer
 	drm_printf(p, "\tStart seqno: (memory) %d\n", snapshot->start_seqno);
 	drm_printf(p, "\tSeqno: (memory) %d\n", snapshot->seqno);
 	drm_printf(p, "\tTimestamp: 0x%08x\n", snapshot->ctx_timestamp);
+	drm_printf(p, "\tTimestamp ms: %llu\n", snapshot->ctx_timestamp_ms);
 	drm_printf(p, "\tJob Timestamp: 0x%08x\n", snapshot->ctx_job_timestamp);
 
 	if (!snapshot->lrc_snapshot)
diff --git a/drivers/gpu/drm/xe/xe_lrc.h b/drivers/gpu/drm/xe/xe_lrc.h
index 3d72b4c0da8e..59413d9fc1ff 100644
--- a/drivers/gpu/drm/xe/xe_lrc.h
+++ b/drivers/gpu/drm/xe/xe_lrc.h
@@ -39,6 +39,7 @@ struct xe_lrc_snapshot {
 	u32 seqno;
 	u32 ctx_timestamp;
 	u32 ctx_job_timestamp;
+	u64 ctx_timestamp_ms;
 };
 
 #define LRC_PPHWSP_FLUSH_INVAL_SCRATCH_ADDR (0x34 * 4)
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread
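[Editor's note] xe_gt_clock_interval_to_ms() converts GT timestamp ticks to
milliseconds using the GT reference clock. A rough userspace model of that
conversion, assuming a simple ticks/frequency relationship (the kernel uses
its own overflow-safe 64-bit math helpers):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Convert a GT timestamp interval (ticks) to milliseconds given the
 * reference clock frequency in Hz. The division is split so large
 * intervals stay within 64 bits, mirroring the intent of the kernel's
 * overflow-safe helpers.
 */
static uint64_t clock_interval_to_ms(uint64_t ticks, uint32_t freq_hz)
{
	return (ticks / freq_hz) * 1000 + (ticks % freq_hz) * 1000 / freq_hz;
}
```

For example, at a 19.2 MHz reference clock, 19200000 ticks correspond to
one second (1000 ms).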

* [PATCH v2 19/22] drm/xe: Enforce GuC static message defines
  2026-01-05  4:02 [PATCH v2 00/22] Fence deadlines in Xe Matthew Brost
                   ` (17 preceding siblings ...)
  2026-01-05  4:02 ` [PATCH v2 18/22] drm/xe: Add timestamp_ms to LRC snapshot Matthew Brost
@ 2026-01-05  4:02 ` Matthew Brost
  2026-01-05  4:02 ` [PATCH v2 20/22] drm/xe: Document the deadline manager Matthew Brost
                   ` (7 subsequent siblings)
  26 siblings, 0 replies; 33+ messages in thread
From: Matthew Brost @ 2026-01-05  4:02 UTC (permalink / raw)
  To: intel-xe; +Cc: daniele.ceraolospurio, carlos.santa

Convert the GuC static message defines to an enum with a count and add a
BUILD_BUG_ON to ensure they fit within the static message storage
(MAX_STATIC_MSG_TYPE).

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/xe/xe_guc_submit.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
index 7e47c375f530..051e57d3e540 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit.c
+++ b/drivers/gpu/drm/xe/xe_guc_submit.c
@@ -2294,15 +2294,20 @@ static bool guc_exec_queue_try_add_msg(struct xe_exec_queue *q,
 	return true;
 }
 
-#define STATIC_MSG_CLEANUP		0
-#define STATIC_MSG_SUSPEND		1
-#define STATIC_MSG_RESUME		2
-#define STATIC_MSG_SET_DEADLINE_STATE	3
+enum {
+	STATIC_MSG_CLEANUP = 0,
+	STATIC_MSG_SUSPEND,
+	STATIC_MSG_RESUME,
+	STATIC_MSG_SET_DEADLINE_STATE,
+	STATIC_MSG_COUNT,
+};
 
 static void guc_exec_queue_destroy(struct xe_exec_queue *q)
 {
 	struct xe_sched_msg *msg = q->guc->static_msgs + STATIC_MSG_CLEANUP;
 
+	BUILD_BUG_ON(STATIC_MSG_COUNT != MAX_STATIC_MSG_TYPE);
+
 	if (!(q->flags & EXEC_QUEUE_FLAG_PERMANENT) && !exec_queue_wedged(q))
 		guc_exec_queue_add_msg(q, msg, CLEANUP);
 	else
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v2 20/22] drm/xe: Document the deadline manager
  2026-01-05  4:02 [PATCH v2 00/22] Fence deadlines in Xe Matthew Brost
                   ` (18 preceding siblings ...)
  2026-01-05  4:02 ` [PATCH v2 19/22] drm/xe: Enforce GuC static message defines Matthew Brost
@ 2026-01-05  4:02 ` Matthew Brost
  2026-01-05  4:02 ` [PATCH v2 21/22] drm/atomic: Export fence deadline helper for atomic commits Matthew Brost
                   ` (6 subsequent siblings)
  26 siblings, 0 replies; 33+ messages in thread
From: Matthew Brost @ 2026-01-05  4:02 UTC (permalink / raw)
  To: intel-xe; +Cc: daniele.ceraolospurio, carlos.santa

Add kernel-doc describing the Xe deadline manager and its behavior.

The documentation explains the deadline model, state machine, and timer
behavior used to boost frequency and priority as deadlines approach. It
also documents deadline lifecycle rules, queue limitations, locking, and
the Kconfig options that control the boost window and priority boost
sub-window.

This is intended to make the deadline logic easier to reason about and
to clarify the constraints under which it operates, particularly for
latency-sensitive use cases such as compositors.

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/xe/xe_deadline_mgr.c | 80 ++++++++++++++++++++++++++++
 1 file changed, 80 insertions(+)

diff --git a/drivers/gpu/drm/xe/xe_deadline_mgr.c b/drivers/gpu/drm/xe/xe_deadline_mgr.c
index e2ee23f6e787..0a240e09da9b 100644
--- a/drivers/gpu/drm/xe/xe_deadline_mgr.c
+++ b/drivers/gpu/drm/xe/xe_deadline_mgr.c
@@ -12,6 +12,86 @@
 #include "xe_hw_fence.h"
 #include "xe_trace.h"
 
+/**
+ * DOC: Xe deadline manager
+ *
+ * The Xe deadline manager provides per-exec-queue deadline boosting to help
+ * latency-sensitive workloads (e.g. compositors) avoid missing presentation
+ * deadlines.
+ *
+ * Overview
+ * ========
+ * Userspace may associate an absolute deadline (ktime_t) with the hardware
+ * fence of a submitted job. The manager tracks deadlines for in-flight jobs on
+ * a queue and programs a single hrtimer based on the earliest outstanding
+ * deadline.
+ *
+ * When the earliest deadline approaches, the manager transitions the queue into
+ * a boosted state via @q->ops->set_deadline_state(). Boosting is intended to be
+ * minimal and time-bounded:
+ *
+ *   - Frequency boost begins when the current time is within
+ *     %XE_DEADLINE_WINDOW_US of the deadline.
+ *
+ *   - Optionally, the final portion of the window additionally boosts
+ *     priority. The length of this priority-boost sub-window is controlled by
+ *     %XE_DEADLINE_PRIO_BOOST_WINDOW_PERCENT.
+ *
+ * State machine
+ * =============
+ * The manager maintains a current deadline state:
+ *
+ *   - %XE_DEADLINE_MGR_STATE_NO_BOOST   - normal execution.
+ *   - %XE_DEADLINE_MGR_STATE_FREQ_BOOST - frequency boost active.
+ *   - %XE_DEADLINE_MGR_STATE_PRIO_BOOST - priority + frequency boost active.
+ *
+ * For a non-empty deadline list, the manager schedules an hrtimer to fire at:
+ *
+ *     earliest_deadline - %XE_DEADLINE_WINDOW_US
+ *
+ * When the timer fires and the earliest deadline is still pending, the manager
+ * transitions to %XE_DEADLINE_MGR_STATE_FREQ_BOOST or directly to
+ * %XE_DEADLINE_MGR_STATE_PRIO_BOOST depending on
+ * %XE_DEADLINE_PRIO_BOOST_WINDOW_PERCENT. If a priority-boost sub-window is
+ * configured, the timer is re-armed to transition from frequency-only to
+ * priority boost at:
+ *
+ *     earliest_deadline - prio_boost_window
+ *
+ * If the earliest deadline changes (add/remove), the timer is canceled and
+ * reprogrammed. If the deadline is already within the configured window when
+ * updated, the appropriate boost state is entered immediately.
+ *
+ * Deadline lifecycle
+ * ==================
+ * Deadlines are tracked per struct xe_hw_fence and stored on
+ * @mgr->deadlines sorted by deadline time (earliest first). Adding a deadline
+ * may be called multiple times for the same fence; upper layers are expected
+ * to only reduce deadlines. Removing a deadline must be done exactly once after
+ * the fence is signaled. After removal, future add attempts for that fence are
+ * treated as NOPs.
+ *
+ * Concurrency and limitations
+ * ===========================
+ * The manager is protected by @mgr->lock (spinlock) and uses an hrtimer in
+ * %CLOCK_MONOTONIC absolute mode.
+ *
+ * Parallel queues are not supported because their job fence is a dma-fence
+ * chain and individual hardware fences may be freed while still referenced by
+ * the manager. Multi-queue is not supported because deadline boosting requires
+ * per-queue control of priority and frequency. The deadline logic is also
+ * disabled when the feature is disabled via Kconfig or when the queue is
+ * created in a boosted state.
+ *
+ * Tuning
+ * ======
+ * %XE_DEADLINE_WINDOW_US controls how early boosting begins relative to the
+ * deadline (default 3000 us). %XE_DEADLINE_PRIO_BOOST_WINDOW_PERCENT controls
+ * what fraction of that window uses priority boosting in addition to frequency
+ * boosting (default 60%). %XE_DEADLINE_EXIT_DELAY_MS controls the delay from
+ * the last deadline completing until boost mode exits (default 100 ms).
+ */
+
 #ifdef CONFIG_DRM_XE_DEADLINE_WINDOW_US
 #define XE_DEADLINE_WINDOW_US	CONFIG_DRM_XE_DEADLINE_WINDOW_US
 #else
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread
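[Editor's note] The state selection described in the kernel-doc above can be
modeled compactly. A userspace sketch of the decision the timer callback makes
for the earliest pending deadline; names and structure are illustrative, not
the driver's:

```c
#include <assert.h>
#include <stdint.h>

enum deadline_state {
	STATE_NO_BOOST,   /* normal execution */
	STATE_FREQ_BOOST, /* frequency boost active */
	STATE_PRIO_BOOST, /* priority + frequency boost active */
};

/*
 * Pick the boost state at time @now_ns for a pending deadline, given the
 * full boost window and the priority sub-window (both in ns,
 * prio_window_ns <= window_ns).
 */
static enum deadline_state pick_state(int64_t now_ns, int64_t deadline_ns,
				      int64_t window_ns, int64_t prio_window_ns)
{
	if (now_ns < deadline_ns - window_ns)
		return STATE_NO_BOOST;   /* too early; timer stays armed */
	if (now_ns < deadline_ns - prio_window_ns)
		return STATE_FREQ_BOOST; /* in window, before prio sub-window */
	return STATE_PRIO_BOOST;         /* final stretch before the deadline */
}
```

With the default 3000 us window and 60% sub-window, a deadline at t+10 ms is
unboosted before t+7 ms, frequency-boosted until t+8.2 ms, and then
priority-boosted.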

* [PATCH v2 21/22] drm/atomic: Export fence deadline helper for atomic commits
  2026-01-05  4:02 [PATCH v2 00/22] Fence deadlines in Xe Matthew Brost
                   ` (19 preceding siblings ...)
  2026-01-05  4:02 ` [PATCH v2 20/22] drm/xe: Document the deadline manager Matthew Brost
@ 2026-01-05  4:02 ` Matthew Brost
  2026-01-05  4:02 ` [PATCH v2 22/22] drm/i915/display: Use atomic helper to set plane fence deadlines Matthew Brost
                   ` (5 subsequent siblings)
  26 siblings, 0 replies; 33+ messages in thread
From: Matthew Brost @ 2026-01-05  4:02 UTC (permalink / raw)
  To: intel-xe; +Cc: daniele.ceraolospurio, carlos.santa, dri-devel

drm_atomic_helper_wait_for_fences() computes the next vblank start time
(for single-CRTC commits) and uses it to set an advisory deadline on the
incoming plane fences before waiting.

Expose this logic as drm_atomic_helper_set_fence_deadline() so drivers
with custom commit plumbing can reuse the same deadline calculation and
fence annotation without open-coding it.

No functional change intended: drm_atomic_helper_wait_for_fences()
continues to set the same deadlines as before, now via the exported
helper.

Cc: <dri-devel@lists.freedesktop.org>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/drm_atomic_helper.c | 11 ++++++++---
 include/drm/drm_atomic_helper.h     |  3 +++
 2 files changed, 11 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
index cc1f0c102414..321fad478ee0 100644
--- a/drivers/gpu/drm/drm_atomic_helper.c
+++ b/drivers/gpu/drm/drm_atomic_helper.c
@@ -1770,11 +1770,15 @@ void drm_atomic_helper_commit_modeset_enables(struct drm_device *dev,
 EXPORT_SYMBOL(drm_atomic_helper_commit_modeset_enables);
 
 /*
+ * drm_atomic_helper_set_fence_deadline() - set fence deadlines
+ * @dev: DRM device
+ * @state: atomic state object being committed
+ *
  * For atomic updates which touch just a single CRTC, calculate the time of the
  * next vblank, and inform all the fences of the deadline.
  */
-static void set_fence_deadline(struct drm_device *dev,
-			       struct drm_atomic_state *state)
+void drm_atomic_helper_set_fence_deadline(struct drm_device *dev,
+					  struct drm_atomic_state *state)
 {
 	struct drm_crtc *crtc;
 	struct drm_crtc_state *new_crtc_state;
@@ -1809,6 +1813,7 @@ static void set_fence_deadline(struct drm_device *dev,
 		dma_fence_set_deadline(new_plane_state->fence, vbltime);
 	}
 }
+EXPORT_SYMBOL(drm_atomic_helper_set_fence_deadline);
 
 /**
  * drm_atomic_helper_wait_for_fences - wait for fences stashed in plane state
@@ -1839,7 +1844,7 @@ int drm_atomic_helper_wait_for_fences(struct drm_device *dev,
 	struct drm_plane_state *new_plane_state;
 	int i, ret;
 
-	set_fence_deadline(dev, state);
+	drm_atomic_helper_set_fence_deadline(dev, state);
 
 	for_each_new_plane_in_state(state, plane, new_plane_state, i) {
 		if (!new_plane_state->fence)
diff --git a/include/drm/drm_atomic_helper.h b/include/drm/drm_atomic_helper.h
index e154ee4f0696..401e83ab408d 100644
--- a/include/drm/drm_atomic_helper.h
+++ b/include/drm/drm_atomic_helper.h
@@ -186,6 +186,9 @@ int drm_atomic_helper_page_flip_target(
 				uint32_t target,
 				struct drm_modeset_acquire_ctx *ctx);
 
+void drm_atomic_helper_set_fence_deadline(struct drm_device *dev,
+					  struct drm_atomic_state *state);
+
 /**
  * drm_atomic_crtc_for_each_plane - iterate over planes currently attached to CRTC
  * @plane: the loop cursor
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread
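[Editor's note] The deadline the helper sets is the start of the next vblank.
A simplified model assuming a fixed refresh rate; the real helper derives the
time from the CRTC's actual vblank timing rather than this idealized
arithmetic:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Compute an advisory fence deadline: the start of the next vblank,
 * modeled here as the next multiple of the frame duration after @now_ns.
 * Assumes a fixed refresh rate with vblanks at t = 0, frame_ns,
 * 2 * frame_ns, ...; illustrative only.
 */
static int64_t next_vblank_deadline_ns(int64_t now_ns, int64_t frame_ns)
{
	return (now_ns / frame_ns + 1) * frame_ns;
}
```

At ~60 Hz (frame_ns = 16666666), a fence submitted just after a vblank gets a
deadline one frame later.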

* [PATCH v2 22/22] drm/i915/display: Use atomic helper to set plane fence deadlines
  2026-01-05  4:02 [PATCH v2 00/22] Fence deadlines in Xe Matthew Brost
                   ` (20 preceding siblings ...)
  2026-01-05  4:02 ` [PATCH v2 21/22] drm/atomic: Export fence deadline helper for atomic commits Matthew Brost
@ 2026-01-05  4:02 ` Matthew Brost
  2026-01-05  4:09 ` ✗ CI.checkpatch: warning for Fence deadlines in Xe (rev2) Patchwork
                   ` (4 subsequent siblings)
  26 siblings, 0 replies; 33+ messages in thread
From: Matthew Brost @ 2026-01-05  4:02 UTC (permalink / raw)
  To: intel-xe
  Cc: daniele.ceraolospurio, carlos.santa, intel-gfx,
	Ville Syrjälä, Jani Nikula, Rodrigo Vivi

i915 has its own atomic commit path and does not always funnel through
drm_atomic_helper_wait_for_fences(). Reuse the atomic helper deadline
logic by calling drm_atomic_helper_set_fence_deadline() at the start of
intel_atomic_commit().

This sets an advisory deadline on incoming plane fences based on the
next vblank for single-CRTC commits, matching the behavior of the atomic
helper wait path.

Cc: <intel-gfx@lists.freedesktop.org>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Jani Nikula <jani.nikula@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/i915/display/intel_display.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
index 81b3a6692ca2..d12ff6cd17b2 100644
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -7751,6 +7751,8 @@ int intel_atomic_commit(struct drm_device *dev, struct drm_atomic_state *_state,
 	drm_atomic_state_get(&state->base);
 	INIT_WORK(&state->base.commit_work, intel_atomic_commit_work);
 
+	drm_atomic_helper_set_fence_deadline(dev, _state);
+
 	if (nonblock && state->modeset) {
 		queue_work(display->wq.modeset, &state->base.commit_work);
 	} else if (nonblock) {
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* ✗ CI.checkpatch: warning for Fence deadlines in Xe (rev2)
  2026-01-05  4:02 [PATCH v2 00/22] Fence deadlines in Xe Matthew Brost
                   ` (21 preceding siblings ...)
  2026-01-05  4:02 ` [PATCH v2 22/22] drm/i915/display: Use atomic helper to set plane fence deadlines Matthew Brost
@ 2026-01-05  4:09 ` Patchwork
  2026-01-05  4:10 ` ✓ CI.KUnit: success " Patchwork
                   ` (3 subsequent siblings)
  26 siblings, 0 replies; 33+ messages in thread
From: Patchwork @ 2026-01-05  4:09 UTC (permalink / raw)
  To: Matthew Brost; +Cc: intel-xe

== Series Details ==

Series: Fence deadlines in Xe (rev2)
URL   : https://patchwork.freedesktop.org/series/159479/
State : warning

== Summary ==

+ KERNEL=/kernel
+ git clone https://gitlab.freedesktop.org/drm/maintainer-tools mt
Cloning into 'mt'...
warning: redirecting to https://gitlab.freedesktop.org/drm/maintainer-tools.git/
+ git -C mt rev-list -n1 origin/master
9f1cb6875f3f9eb0925ed50c16100322a2df513c
+ cd /kernel
+ git config --global --add safe.directory /kernel
+ git log -n1
commit 10ab58042b2e33b511d5d5f08ed09f05413e93a0
Author: Matthew Brost <matthew.brost@intel.com>
Date:   Sun Jan 4 20:02:37 2026 -0800

    drm/i915/display: Use atomic helper to set plane fence deadlines
    
    i915 has its own atomic commit path and does not always funnel through
    drm_atomic_helper_wait_for_fences(). Reuse the atomic helper deadline
    logic by calling drm_atomic_helper_set_fence_deadline() at the start of
    intel_atomic_commit().
    
    This sets an advisory deadline on incoming plane fences based on the
    next vblank for single-CRTC commits, matching the behavior of the atomic
    helper wait path.
    
    Cc: <intel-gfx@lists.freedesktop.org>
    Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
    Cc: Jani Nikula <jani.nikula@intel.com>
    Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
    Signed-off-by: Matthew Brost <matthew.brost@intel.com>
+ /mt/dim checkpatch 5fc5192372599f11da8dee072fd8beb4414f8eca drm-intel
7ff3a32e3c74 drm/xe: Add dedicated message lock
ec1f91717356 drm/xe: Add EXEC_QUEUE_FLAG_CAP_SYS_NICE
37903da1efbe drm/xe: Store exec queue in hardware fence
e76fe678f3e3 drm/xe: Add deadline exec queue vfuncs
08c2283a1005 drm/xe: Export to_xe_hw_fence
3495056d6945 drm/xe: Export xe_hw_fence_signaled
9331da0f79c1 drm/xe: Implement deadline manager
-:42: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#42: 
new file mode 100644

total: 0 errors, 1 warnings, 0 checks, 475 lines checked
4c613e59eac1 drm/xe: Initialize deadline manager on exec queues
e8c00280e141 drm/xe: Stub out execlists deadline vfuncs as NOPs
331d6a51ec57 drm/xe: Make scheduler message lock IRQ-safe
3cf64d1ee069 drm/xe: Support unstable opcodes for static scheduler messages
17d3ffca0acf drm/xe: Implement GuC submission backend ops for deadlines
cab06fb789ab drm/xe: Enable deadlines on hardware fences
1e3691c01f3a drm/xe: Fix Kconfig.profile newlines
8469d2a66c1f drm/xe: Add deadline Kconfig options
1cf0d527a3b2 drm/xe: Add exec queue deadline trace points
418e58006c8a drm/xe: Add hw fence deadline trace points
60a55beb6884 drm/xe: Add timestamp_ms to LRC snapshot
f3a13e5068ff drm/xe: Enforce GuC static message defines
e9b910043d8b drm/xe: Document the deadline manager
9585af7d6cf9 drm/atomic: Export fence deadline helper for atomic commits
10ab58042b2e drm/i915/display: Use atomic helper to set plane fence deadlines



^ permalink raw reply	[flat|nested] 33+ messages in thread

* ✓ CI.KUnit: success for Fence deadlines in Xe (rev2)
  2026-01-05  4:02 [PATCH v2 00/22] Fence deadlines in Xe Matthew Brost
                   ` (22 preceding siblings ...)
  2026-01-05  4:09 ` ✗ CI.checkpatch: warning for Fence deadlines in Xe (rev2) Patchwork
@ 2026-01-05  4:10 ` Patchwork
  2026-01-05  4:26 ` ✗ CI.checksparse: warning " Patchwork
                   ` (2 subsequent siblings)
  26 siblings, 0 replies; 33+ messages in thread
From: Patchwork @ 2026-01-05  4:10 UTC (permalink / raw)
  To: Matthew Brost; +Cc: intel-xe

== Series Details ==

Series: Fence deadlines in Xe (rev2)
URL   : https://patchwork.freedesktop.org/series/159479/
State : success

== Summary ==

+ trap cleanup EXIT
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/xe/.kunitconfig
[04:09:29] Configuring KUnit Kernel ...
Generating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[04:09:33] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[04:10:04] Starting KUnit Kernel (1/1)...
[04:10:04] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[04:10:05] ================== guc_buf (11 subtests) ===================
[04:10:05] [PASSED] test_smallest
[04:10:05] [PASSED] test_largest
[04:10:05] [PASSED] test_granular
[04:10:05] [PASSED] test_unique
[04:10:05] [PASSED] test_overlap
[04:10:05] [PASSED] test_reusable
[04:10:05] [PASSED] test_too_big
[04:10:05] [PASSED] test_flush
[04:10:05] [PASSED] test_lookup
[04:10:05] [PASSED] test_data
[04:10:05] [PASSED] test_class
[04:10:05] ===================== [PASSED] guc_buf =====================
[04:10:05] =================== guc_dbm (7 subtests) ===================
[04:10:05] [PASSED] test_empty
[04:10:05] [PASSED] test_default
[04:10:05] ======================== test_size  ========================
[04:10:05] [PASSED] 4
[04:10:05] [PASSED] 8
[04:10:05] [PASSED] 32
[04:10:05] [PASSED] 256
[04:10:05] ==================== [PASSED] test_size ====================
[04:10:05] ======================= test_reuse  ========================
[04:10:05] [PASSED] 4
[04:10:05] [PASSED] 8
[04:10:05] [PASSED] 32
[04:10:05] [PASSED] 256
[04:10:05] =================== [PASSED] test_reuse ====================
[04:10:05] =================== test_range_overlap  ====================
[04:10:05] [PASSED] 4
[04:10:05] [PASSED] 8
[04:10:05] [PASSED] 32
[04:10:05] [PASSED] 256
[04:10:05] =============== [PASSED] test_range_overlap ================
[04:10:05] =================== test_range_compact  ====================
[04:10:05] [PASSED] 4
[04:10:05] [PASSED] 8
[04:10:05] [PASSED] 32
[04:10:05] [PASSED] 256
[04:10:05] =============== [PASSED] test_range_compact ================
[04:10:05] ==================== test_range_spare  =====================
[04:10:05] [PASSED] 4
[04:10:05] [PASSED] 8
[04:10:05] [PASSED] 32
[04:10:05] [PASSED] 256
[04:10:05] ================ [PASSED] test_range_spare =================
[04:10:05] ===================== [PASSED] guc_dbm =====================
[04:10:05] =================== guc_idm (6 subtests) ===================
[04:10:05] [PASSED] bad_init
[04:10:05] [PASSED] no_init
[04:10:05] [PASSED] init_fini
[04:10:05] [PASSED] check_used
[04:10:05] [PASSED] check_quota
[04:10:05] [PASSED] check_all
[04:10:05] ===================== [PASSED] guc_idm =====================
[04:10:05] ================== no_relay (3 subtests) ===================
[04:10:05] [PASSED] xe_drops_guc2pf_if_not_ready
[04:10:05] [PASSED] xe_drops_guc2vf_if_not_ready
[04:10:05] [PASSED] xe_rejects_send_if_not_ready
[04:10:05] ==================== [PASSED] no_relay =====================
[04:10:05] ================== pf_relay (14 subtests) ==================
[04:10:05] [PASSED] pf_rejects_guc2pf_too_short
[04:10:05] [PASSED] pf_rejects_guc2pf_too_long
[04:10:05] [PASSED] pf_rejects_guc2pf_no_payload
[04:10:05] [PASSED] pf_fails_no_payload
[04:10:05] [PASSED] pf_fails_bad_origin
[04:10:05] [PASSED] pf_fails_bad_type
[04:10:05] [PASSED] pf_txn_reports_error
[04:10:05] [PASSED] pf_txn_sends_pf2guc
[04:10:05] [PASSED] pf_sends_pf2guc
[04:10:05] [SKIPPED] pf_loopback_nop
[04:10:05] [SKIPPED] pf_loopback_echo
[04:10:05] [SKIPPED] pf_loopback_fail
[04:10:05] [SKIPPED] pf_loopback_busy
[04:10:05] [SKIPPED] pf_loopback_retry
[04:10:05] ==================== [PASSED] pf_relay =====================
[04:10:05] ================== vf_relay (3 subtests) ===================
[04:10:05] [PASSED] vf_rejects_guc2vf_too_short
[04:10:05] [PASSED] vf_rejects_guc2vf_too_long
[04:10:05] [PASSED] vf_rejects_guc2vf_no_payload
[04:10:05] ==================== [PASSED] vf_relay =====================
[04:10:05] ================ pf_gt_config (6 subtests) =================
[04:10:05] [PASSED] fair_contexts_1vf
[04:10:05] [PASSED] fair_doorbells_1vf
[04:10:05] [PASSED] fair_ggtt_1vf
[04:10:05] ====================== fair_contexts  ======================
[04:10:05] [PASSED] 1 VF
[04:10:05] [PASSED] 2 VFs
[04:10:05] [PASSED] 3 VFs
[04:10:05] [PASSED] 4 VFs
[04:10:05] [PASSED] 5 VFs
[04:10:05] [PASSED] 6 VFs
[04:10:05] [PASSED] 7 VFs
[04:10:05] [PASSED] 8 VFs
[04:10:05] [PASSED] 9 VFs
[04:10:05] [PASSED] 10 VFs
[04:10:05] [PASSED] 11 VFs
[04:10:05] [PASSED] 12 VFs
[04:10:05] [PASSED] 13 VFs
[04:10:05] [PASSED] 14 VFs
[04:10:05] [PASSED] 15 VFs
[04:10:05] [PASSED] 16 VFs
[04:10:05] [PASSED] 17 VFs
[04:10:05] [PASSED] 18 VFs
[04:10:05] [PASSED] 19 VFs
[04:10:05] [PASSED] 20 VFs
[04:10:05] [PASSED] 21 VFs
[04:10:05] [PASSED] 22 VFs
[04:10:05] [PASSED] 23 VFs
[04:10:05] [PASSED] 24 VFs
[04:10:05] [PASSED] 25 VFs
[04:10:05] [PASSED] 26 VFs
[04:10:05] [PASSED] 27 VFs
[04:10:05] [PASSED] 28 VFs
[04:10:05] [PASSED] 29 VFs
[04:10:05] [PASSED] 30 VFs
[04:10:05] [PASSED] 31 VFs
[04:10:05] [PASSED] 32 VFs
[04:10:05] [PASSED] 33 VFs
[04:10:05] [PASSED] 34 VFs
[04:10:05] [PASSED] 35 VFs
[04:10:05] [PASSED] 36 VFs
[04:10:05] [PASSED] 37 VFs
[04:10:05] [PASSED] 38 VFs
[04:10:05] [PASSED] 39 VFs
[04:10:05] [PASSED] 40 VFs
[04:10:05] [PASSED] 41 VFs
[04:10:05] [PASSED] 42 VFs
[04:10:05] [PASSED] 43 VFs
[04:10:05] [PASSED] 44 VFs
[04:10:05] [PASSED] 45 VFs
[04:10:05] [PASSED] 46 VFs
[04:10:05] [PASSED] 47 VFs
[04:10:05] [PASSED] 48 VFs
[04:10:05] [PASSED] 49 VFs
[04:10:05] [PASSED] 50 VFs
[04:10:05] [PASSED] 51 VFs
[04:10:05] [PASSED] 52 VFs
[04:10:05] [PASSED] 53 VFs
[04:10:05] [PASSED] 54 VFs
[04:10:05] [PASSED] 55 VFs
[04:10:05] [PASSED] 56 VFs
[04:10:05] [PASSED] 57 VFs
[04:10:05] [PASSED] 58 VFs
[04:10:05] [PASSED] 59 VFs
[04:10:05] [PASSED] 60 VFs
[04:10:05] [PASSED] 61 VFs
[04:10:05] [PASSED] 62 VFs
[04:10:05] [PASSED] 63 VFs
[04:10:05] ================== [PASSED] fair_contexts ==================
[04:10:05] ===================== fair_doorbells  ======================
[04:10:05] [PASSED] 1 VF
[04:10:05] [PASSED] 2 VFs
[04:10:05] [PASSED] 3 VFs
[04:10:05] [PASSED] 4 VFs
[04:10:05] [PASSED] 5 VFs
[04:10:05] [PASSED] 6 VFs
[04:10:05] [PASSED] 7 VFs
[04:10:05] [PASSED] 8 VFs
[04:10:05] [PASSED] 9 VFs
[04:10:05] [PASSED] 10 VFs
[04:10:05] [PASSED] 11 VFs
[04:10:05] [PASSED] 12 VFs
[04:10:05] [PASSED] 13 VFs
[04:10:05] [PASSED] 14 VFs
[04:10:05] [PASSED] 15 VFs
[04:10:05] [PASSED] 16 VFs
[04:10:05] [PASSED] 17 VFs
[04:10:05] [PASSED] 18 VFs
[04:10:05] [PASSED] 19 VFs
[04:10:05] [PASSED] 20 VFs
[04:10:05] [PASSED] 21 VFs
[04:10:05] [PASSED] 22 VFs
[04:10:05] [PASSED] 23 VFs
[04:10:05] [PASSED] 24 VFs
[04:10:05] [PASSED] 25 VFs
[04:10:05] [PASSED] 26 VFs
[04:10:05] [PASSED] 27 VFs
[04:10:05] [PASSED] 28 VFs
[04:10:05] [PASSED] 29 VFs
[04:10:05] [PASSED] 30 VFs
[04:10:05] [PASSED] 31 VFs
[04:10:05] [PASSED] 32 VFs
[04:10:05] [PASSED] 33 VFs
[04:10:05] [PASSED] 34 VFs
[04:10:05] [PASSED] 35 VFs
[04:10:05] [PASSED] 36 VFs
[04:10:05] [PASSED] 37 VFs
[04:10:05] [PASSED] 38 VFs
[04:10:05] [PASSED] 39 VFs
[04:10:05] [PASSED] 40 VFs
[04:10:05] [PASSED] 41 VFs
[04:10:05] [PASSED] 42 VFs
[04:10:05] [PASSED] 43 VFs
[04:10:05] [PASSED] 44 VFs
[04:10:05] [PASSED] 45 VFs
[04:10:05] [PASSED] 46 VFs
[04:10:05] [PASSED] 47 VFs
[04:10:05] [PASSED] 48 VFs
[04:10:05] [PASSED] 49 VFs
[04:10:05] [PASSED] 50 VFs
[04:10:05] [PASSED] 51 VFs
[04:10:05] [PASSED] 52 VFs
[04:10:05] [PASSED] 53 VFs
[04:10:05] [PASSED] 54 VFs
[04:10:05] [PASSED] 55 VFs
[04:10:05] [PASSED] 56 VFs
[04:10:05] [PASSED] 57 VFs
[04:10:05] [PASSED] 58 VFs
[04:10:05] [PASSED] 59 VFs
[04:10:05] [PASSED] 60 VFs
[04:10:05] [PASSED] 61 VFs
[04:10:05] [PASSED] 62 VFs
[04:10:05] [PASSED] 63 VFs
[04:10:05] ================= [PASSED] fair_doorbells ==================
[04:10:05] ======================== fair_ggtt  ========================
[04:10:05] [PASSED] 1 VF
[04:10:05] [PASSED] 2 VFs
[04:10:05] [PASSED] 3 VFs
[04:10:05] [PASSED] 4 VFs
[04:10:05] [PASSED] 5 VFs
[04:10:05] [PASSED] 6 VFs
[04:10:05] [PASSED] 7 VFs
[04:10:05] [PASSED] 8 VFs
[04:10:05] [PASSED] 9 VFs
[04:10:05] [PASSED] 10 VFs
[04:10:05] [PASSED] 11 VFs
[04:10:05] [PASSED] 12 VFs
[04:10:05] [PASSED] 13 VFs
[04:10:05] [PASSED] 14 VFs
[04:10:05] [PASSED] 15 VFs
[04:10:05] [PASSED] 16 VFs
[04:10:05] [PASSED] 17 VFs
[04:10:05] [PASSED] 18 VFs
[04:10:05] [PASSED] 19 VFs
[04:10:05] [PASSED] 20 VFs
[04:10:05] [PASSED] 21 VFs
[04:10:05] [PASSED] 22 VFs
[04:10:05] [PASSED] 23 VFs
[04:10:05] [PASSED] 24 VFs
[04:10:05] [PASSED] 25 VFs
[04:10:05] [PASSED] 26 VFs
[04:10:05] [PASSED] 27 VFs
[04:10:05] [PASSED] 28 VFs
[04:10:05] [PASSED] 29 VFs
[04:10:05] [PASSED] 30 VFs
[04:10:05] [PASSED] 31 VFs
[04:10:05] [PASSED] 32 VFs
[04:10:05] [PASSED] 33 VFs
[04:10:05] [PASSED] 34 VFs
[04:10:05] [PASSED] 35 VFs
[04:10:05] [PASSED] 36 VFs
[04:10:05] [PASSED] 37 VFs
[04:10:05] [PASSED] 38 VFs
[04:10:05] [PASSED] 39 VFs
[04:10:05] [PASSED] 40 VFs
[04:10:05] [PASSED] 41 VFs
[04:10:05] [PASSED] 42 VFs
[04:10:05] [PASSED] 43 VFs
[04:10:05] [PASSED] 44 VFs
[04:10:05] [PASSED] 45 VFs
[04:10:05] [PASSED] 46 VFs
[04:10:05] [PASSED] 47 VFs
[04:10:05] [PASSED] 48 VFs
[04:10:05] [PASSED] 49 VFs
[04:10:05] [PASSED] 50 VFs
[04:10:05] [PASSED] 51 VFs
[04:10:05] [PASSED] 52 VFs
[04:10:05] [PASSED] 53 VFs
[04:10:05] [PASSED] 54 VFs
[04:10:05] [PASSED] 55 VFs
[04:10:05] [PASSED] 56 VFs
[04:10:05] [PASSED] 57 VFs
[04:10:05] [PASSED] 58 VFs
[04:10:05] [PASSED] 59 VFs
[04:10:05] [PASSED] 60 VFs
[04:10:05] [PASSED] 61 VFs
[04:10:05] [PASSED] 62 VFs
[04:10:05] [PASSED] 63 VFs
[04:10:05] ==================== [PASSED] fair_ggtt ====================
[04:10:05] ================== [PASSED] pf_gt_config ===================
[04:10:05] ===================== lmtt (1 subtest) =====================
[04:10:05] ======================== test_ops  =========================
[04:10:05] [PASSED] 2-level
[04:10:05] [PASSED] multi-level
[04:10:05] ==================== [PASSED] test_ops =====================
[04:10:05] ====================== [PASSED] lmtt =======================
[04:10:05] ================= pf_service (11 subtests) =================
[04:10:05] [PASSED] pf_negotiate_any
[04:10:05] [PASSED] pf_negotiate_base_match
[04:10:05] [PASSED] pf_negotiate_base_newer
[04:10:05] [PASSED] pf_negotiate_base_next
[04:10:05] [SKIPPED] pf_negotiate_base_older
[04:10:05] [PASSED] pf_negotiate_base_prev
[04:10:05] [PASSED] pf_negotiate_latest_match
[04:10:05] [PASSED] pf_negotiate_latest_newer
[04:10:05] [PASSED] pf_negotiate_latest_next
[04:10:05] [SKIPPED] pf_negotiate_latest_older
[04:10:05] [SKIPPED] pf_negotiate_latest_prev
[04:10:05] =================== [PASSED] pf_service ====================
[04:10:05] ================= xe_guc_g2g (2 subtests) ==================
[04:10:05] ============== xe_live_guc_g2g_kunit_default  ==============
[04:10:05] ========= [SKIPPED] xe_live_guc_g2g_kunit_default ==========
[04:10:05] ============== xe_live_guc_g2g_kunit_allmem  ===============
[04:10:05] ========== [SKIPPED] xe_live_guc_g2g_kunit_allmem ==========
[04:10:05] =================== [SKIPPED] xe_guc_g2g ===================
[04:10:05] =================== xe_mocs (2 subtests) ===================
[04:10:05] ================ xe_live_mocs_kernel_kunit  ================
[04:10:05] =========== [SKIPPED] xe_live_mocs_kernel_kunit ============
[04:10:05] ================ xe_live_mocs_reset_kunit  =================
[04:10:05] ============ [SKIPPED] xe_live_mocs_reset_kunit ============
[04:10:05] ==================== [SKIPPED] xe_mocs =====================
[04:10:05] ================= xe_migrate (2 subtests) ==================
[04:10:05] ================= xe_migrate_sanity_kunit  =================
[04:10:05] ============ [SKIPPED] xe_migrate_sanity_kunit =============
[04:10:05] ================== xe_validate_ccs_kunit  ==================
[04:10:05] ============= [SKIPPED] xe_validate_ccs_kunit ==============
[04:10:05] =================== [SKIPPED] xe_migrate ===================
[04:10:05] ================== xe_dma_buf (1 subtest) ==================
[04:10:05] ==================== xe_dma_buf_kunit  =====================
[04:10:05] ================ [SKIPPED] xe_dma_buf_kunit ================
[04:10:05] =================== [SKIPPED] xe_dma_buf ===================
[04:10:05] ================= xe_bo_shrink (1 subtest) =================
[04:10:05] =================== xe_bo_shrink_kunit  ====================
[04:10:05] =============== [SKIPPED] xe_bo_shrink_kunit ===============
[04:10:05] ================== [SKIPPED] xe_bo_shrink ==================
[04:10:05] ==================== xe_bo (2 subtests) ====================
[04:10:05] ================== xe_ccs_migrate_kunit  ===================
[04:10:05] ============== [SKIPPED] xe_ccs_migrate_kunit ==============
[04:10:05] ==================== xe_bo_evict_kunit  ====================
[04:10:05] =============== [SKIPPED] xe_bo_evict_kunit ================
[04:10:05] ===================== [SKIPPED] xe_bo ======================
[04:10:05] ==================== args (13 subtests) ====================
[04:10:05] [PASSED] count_args_test
[04:10:05] [PASSED] call_args_example
[04:10:05] [PASSED] call_args_test
[04:10:05] [PASSED] drop_first_arg_example
[04:10:05] [PASSED] drop_first_arg_test
[04:10:05] [PASSED] first_arg_example
[04:10:05] [PASSED] first_arg_test
[04:10:05] [PASSED] last_arg_example
[04:10:05] [PASSED] last_arg_test
[04:10:05] [PASSED] pick_arg_example
[04:10:05] [PASSED] if_args_example
[04:10:05] [PASSED] if_args_test
[04:10:05] [PASSED] sep_comma_example
[04:10:05] ====================== [PASSED] args =======================
[04:10:05] =================== xe_pci (3 subtests) ====================
[04:10:05] ==================== check_graphics_ip  ====================
[04:10:05] [PASSED] 12.00 Xe_LP
[04:10:05] [PASSED] 12.10 Xe_LP+
[04:10:05] [PASSED] 12.55 Xe_HPG
[04:10:05] [PASSED] 12.60 Xe_HPC
[04:10:05] [PASSED] 12.70 Xe_LPG
[04:10:05] [PASSED] 12.71 Xe_LPG
[04:10:05] [PASSED] 12.74 Xe_LPG+
[04:10:05] [PASSED] 20.01 Xe2_HPG
[04:10:05] [PASSED] 20.02 Xe2_HPG
[04:10:05] [PASSED] 20.04 Xe2_LPG
[04:10:05] [PASSED] 30.00 Xe3_LPG
[04:10:05] [PASSED] 30.01 Xe3_LPG
[04:10:05] [PASSED] 30.03 Xe3_LPG
[04:10:05] [PASSED] 30.04 Xe3_LPG
[04:10:05] [PASSED] 30.05 Xe3_LPG
[04:10:05] [PASSED] 35.11 Xe3p_XPC
[04:10:05] ================ [PASSED] check_graphics_ip ================
[04:10:05] ===================== check_media_ip  ======================
[04:10:05] [PASSED] 12.00 Xe_M
[04:10:05] [PASSED] 12.55 Xe_HPM
[04:10:05] [PASSED] 13.00 Xe_LPM+
[04:10:05] [PASSED] 13.01 Xe2_HPM
[04:10:05] [PASSED] 20.00 Xe2_LPM
[04:10:05] [PASSED] 30.00 Xe3_LPM
[04:10:05] [PASSED] 30.02 Xe3_LPM
[04:10:05] [PASSED] 35.00 Xe3p_LPM
[04:10:05] [PASSED] 35.03 Xe3p_HPM
[04:10:05] ================= [PASSED] check_media_ip ==================
[04:10:05] =================== check_platform_desc  ===================
[04:10:05] [PASSED] 0x9A60 (TIGERLAKE)
[04:10:05] [PASSED] 0x9A68 (TIGERLAKE)
[04:10:05] [PASSED] 0x9A70 (TIGERLAKE)
[04:10:05] [PASSED] 0x9A40 (TIGERLAKE)
[04:10:05] [PASSED] 0x9A49 (TIGERLAKE)
[04:10:05] [PASSED] 0x9A59 (TIGERLAKE)
[04:10:05] [PASSED] 0x9A78 (TIGERLAKE)
[04:10:05] [PASSED] 0x9AC0 (TIGERLAKE)
[04:10:05] [PASSED] 0x9AC9 (TIGERLAKE)
[04:10:05] [PASSED] 0x9AD9 (TIGERLAKE)
[04:10:05] [PASSED] 0x9AF8 (TIGERLAKE)
[04:10:05] [PASSED] 0x4C80 (ROCKETLAKE)
[04:10:05] [PASSED] 0x4C8A (ROCKETLAKE)
[04:10:05] [PASSED] 0x4C8B (ROCKETLAKE)
[04:10:05] [PASSED] 0x4C8C (ROCKETLAKE)
[04:10:05] [PASSED] 0x4C90 (ROCKETLAKE)
[04:10:05] [PASSED] 0x4C9A (ROCKETLAKE)
[04:10:05] [PASSED] 0x4680 (ALDERLAKE_S)
[04:10:05] [PASSED] 0x4682 (ALDERLAKE_S)
[04:10:05] [PASSED] 0x4688 (ALDERLAKE_S)
[04:10:05] [PASSED] 0x468A (ALDERLAKE_S)
[04:10:05] [PASSED] 0x468B (ALDERLAKE_S)
[04:10:05] [PASSED] 0x4690 (ALDERLAKE_S)
[04:10:05] [PASSED] 0x4692 (ALDERLAKE_S)
[04:10:05] [PASSED] 0x4693 (ALDERLAKE_S)
[04:10:05] [PASSED] 0x46A0 (ALDERLAKE_P)
[04:10:05] [PASSED] 0x46A1 (ALDERLAKE_P)
[04:10:05] [PASSED] 0x46A2 (ALDERLAKE_P)
[04:10:05] [PASSED] 0x46A3 (ALDERLAKE_P)
[04:10:05] [PASSED] 0x46A6 (ALDERLAKE_P)
[04:10:05] [PASSED] 0x46A8 (ALDERLAKE_P)
[04:10:05] [PASSED] 0x46AA (ALDERLAKE_P)
[04:10:05] [PASSED] 0x462A (ALDERLAKE_P)
[04:10:05] [PASSED] 0x4626 (ALDERLAKE_P)
[04:10:05] [PASSED] 0x4628 (ALDERLAKE_P)
stty: 'standard input': Inappropriate ioctl for device
[04:10:05] [PASSED] 0x46B0 (ALDERLAKE_P)
[04:10:05] [PASSED] 0x46B1 (ALDERLAKE_P)
[04:10:05] [PASSED] 0x46B2 (ALDERLAKE_P)
[04:10:05] [PASSED] 0x46B3 (ALDERLAKE_P)
[04:10:05] [PASSED] 0x46C0 (ALDERLAKE_P)
[04:10:05] [PASSED] 0x46C1 (ALDERLAKE_P)
[04:10:05] [PASSED] 0x46C2 (ALDERLAKE_P)
[04:10:05] [PASSED] 0x46C3 (ALDERLAKE_P)
[04:10:05] [PASSED] 0x46D0 (ALDERLAKE_N)
[04:10:05] [PASSED] 0x46D1 (ALDERLAKE_N)
[04:10:05] [PASSED] 0x46D2 (ALDERLAKE_N)
[04:10:05] [PASSED] 0x46D3 (ALDERLAKE_N)
[04:10:05] [PASSED] 0x46D4 (ALDERLAKE_N)
[04:10:05] [PASSED] 0xA721 (ALDERLAKE_P)
[04:10:05] [PASSED] 0xA7A1 (ALDERLAKE_P)
[04:10:05] [PASSED] 0xA7A9 (ALDERLAKE_P)
[04:10:05] [PASSED] 0xA7AC (ALDERLAKE_P)
[04:10:05] [PASSED] 0xA7AD (ALDERLAKE_P)
[04:10:05] [PASSED] 0xA720 (ALDERLAKE_P)
[04:10:05] [PASSED] 0xA7A0 (ALDERLAKE_P)
[04:10:05] [PASSED] 0xA7A8 (ALDERLAKE_P)
[04:10:05] [PASSED] 0xA7AA (ALDERLAKE_P)
[04:10:05] [PASSED] 0xA7AB (ALDERLAKE_P)
[04:10:05] [PASSED] 0xA780 (ALDERLAKE_S)
[04:10:05] [PASSED] 0xA781 (ALDERLAKE_S)
[04:10:05] [PASSED] 0xA782 (ALDERLAKE_S)
[04:10:05] [PASSED] 0xA783 (ALDERLAKE_S)
[04:10:05] [PASSED] 0xA788 (ALDERLAKE_S)
[04:10:05] [PASSED] 0xA789 (ALDERLAKE_S)
[04:10:05] [PASSED] 0xA78A (ALDERLAKE_S)
[04:10:05] [PASSED] 0xA78B (ALDERLAKE_S)
[04:10:05] [PASSED] 0x4905 (DG1)
[04:10:05] [PASSED] 0x4906 (DG1)
[04:10:05] [PASSED] 0x4907 (DG1)
[04:10:05] [PASSED] 0x4908 (DG1)
[04:10:05] [PASSED] 0x4909 (DG1)
[04:10:05] [PASSED] 0x56C0 (DG2)
[04:10:05] [PASSED] 0x56C2 (DG2)
[04:10:05] [PASSED] 0x56C1 (DG2)
[04:10:05] [PASSED] 0x7D51 (METEORLAKE)
[04:10:05] [PASSED] 0x7DD1 (METEORLAKE)
[04:10:05] [PASSED] 0x7D41 (METEORLAKE)
[04:10:05] [PASSED] 0x7D67 (METEORLAKE)
[04:10:05] [PASSED] 0xB640 (METEORLAKE)
[04:10:05] [PASSED] 0x56A0 (DG2)
[04:10:05] [PASSED] 0x56A1 (DG2)
[04:10:05] [PASSED] 0x56A2 (DG2)
[04:10:05] [PASSED] 0x56BE (DG2)
[04:10:05] [PASSED] 0x56BF (DG2)
[04:10:05] [PASSED] 0x5690 (DG2)
[04:10:05] [PASSED] 0x5691 (DG2)
[04:10:05] [PASSED] 0x5692 (DG2)
[04:10:05] [PASSED] 0x56A5 (DG2)
[04:10:05] [PASSED] 0x56A6 (DG2)
[04:10:05] [PASSED] 0x56B0 (DG2)
[04:10:05] [PASSED] 0x56B1 (DG2)
[04:10:05] [PASSED] 0x56BA (DG2)
[04:10:05] [PASSED] 0x56BB (DG2)
[04:10:05] [PASSED] 0x56BC (DG2)
[04:10:05] [PASSED] 0x56BD (DG2)
[04:10:05] [PASSED] 0x5693 (DG2)
[04:10:05] [PASSED] 0x5694 (DG2)
[04:10:05] [PASSED] 0x5695 (DG2)
[04:10:05] [PASSED] 0x56A3 (DG2)
[04:10:05] [PASSED] 0x56A4 (DG2)
[04:10:05] [PASSED] 0x56B2 (DG2)
[04:10:05] [PASSED] 0x56B3 (DG2)
[04:10:05] [PASSED] 0x5696 (DG2)
[04:10:05] [PASSED] 0x5697 (DG2)
[04:10:05] [PASSED] 0xB69 (PVC)
[04:10:05] [PASSED] 0xB6E (PVC)
[04:10:05] [PASSED] 0xBD4 (PVC)
[04:10:05] [PASSED] 0xBD5 (PVC)
[04:10:05] [PASSED] 0xBD6 (PVC)
[04:10:05] [PASSED] 0xBD7 (PVC)
[04:10:05] [PASSED] 0xBD8 (PVC)
[04:10:05] [PASSED] 0xBD9 (PVC)
[04:10:05] [PASSED] 0xBDA (PVC)
[04:10:05] [PASSED] 0xBDB (PVC)
[04:10:05] [PASSED] 0xBE0 (PVC)
[04:10:05] [PASSED] 0xBE1 (PVC)
[04:10:05] [PASSED] 0xBE5 (PVC)
[04:10:05] [PASSED] 0x7D40 (METEORLAKE)
[04:10:05] [PASSED] 0x7D45 (METEORLAKE)
[04:10:05] [PASSED] 0x7D55 (METEORLAKE)
[04:10:05] [PASSED] 0x7D60 (METEORLAKE)
[04:10:05] [PASSED] 0x7DD5 (METEORLAKE)
[04:10:05] [PASSED] 0x6420 (LUNARLAKE)
[04:10:05] [PASSED] 0x64A0 (LUNARLAKE)
[04:10:05] [PASSED] 0x64B0 (LUNARLAKE)
[04:10:05] [PASSED] 0xE202 (BATTLEMAGE)
[04:10:05] [PASSED] 0xE209 (BATTLEMAGE)
[04:10:05] [PASSED] 0xE20B (BATTLEMAGE)
[04:10:05] [PASSED] 0xE20C (BATTLEMAGE)
[04:10:05] [PASSED] 0xE20D (BATTLEMAGE)
[04:10:05] [PASSED] 0xE210 (BATTLEMAGE)
[04:10:05] [PASSED] 0xE211 (BATTLEMAGE)
[04:10:05] [PASSED] 0xE212 (BATTLEMAGE)
[04:10:05] [PASSED] 0xE216 (BATTLEMAGE)
[04:10:05] [PASSED] 0xE220 (BATTLEMAGE)
[04:10:05] [PASSED] 0xE221 (BATTLEMAGE)
[04:10:05] [PASSED] 0xE222 (BATTLEMAGE)
[04:10:05] [PASSED] 0xE223 (BATTLEMAGE)
[04:10:05] [PASSED] 0xB080 (PANTHERLAKE)
[04:10:05] [PASSED] 0xB081 (PANTHERLAKE)
[04:10:05] [PASSED] 0xB082 (PANTHERLAKE)
[04:10:05] [PASSED] 0xB083 (PANTHERLAKE)
[04:10:05] [PASSED] 0xB084 (PANTHERLAKE)
[04:10:05] [PASSED] 0xB085 (PANTHERLAKE)
[04:10:05] [PASSED] 0xB086 (PANTHERLAKE)
[04:10:05] [PASSED] 0xB087 (PANTHERLAKE)
[04:10:05] [PASSED] 0xB08F (PANTHERLAKE)
[04:10:05] [PASSED] 0xB090 (PANTHERLAKE)
[04:10:05] [PASSED] 0xB0A0 (PANTHERLAKE)
[04:10:05] [PASSED] 0xB0B0 (PANTHERLAKE)
[04:10:05] [PASSED] 0xFD80 (PANTHERLAKE)
[04:10:05] [PASSED] 0xFD81 (PANTHERLAKE)
[04:10:05] [PASSED] 0xD740 (NOVALAKE_S)
[04:10:05] [PASSED] 0xD741 (NOVALAKE_S)
[04:10:05] [PASSED] 0xD742 (NOVALAKE_S)
[04:10:05] [PASSED] 0xD743 (NOVALAKE_S)
[04:10:05] [PASSED] 0xD744 (NOVALAKE_S)
[04:10:05] [PASSED] 0xD745 (NOVALAKE_S)
[04:10:05] [PASSED] 0x674C (CRESCENTISLAND)
[04:10:05] =============== [PASSED] check_platform_desc ===============
[04:10:05] ===================== [PASSED] xe_pci ======================
[04:10:05] =================== xe_rtp (2 subtests) ====================
[04:10:05] =============== xe_rtp_process_to_sr_tests  ================
[04:10:05] [PASSED] coalesce-same-reg
[04:10:05] [PASSED] no-match-no-add
[04:10:05] [PASSED] match-or
[04:10:05] [PASSED] match-or-xfail
[04:10:05] [PASSED] no-match-no-add-multiple-rules
[04:10:05] [PASSED] two-regs-two-entries
[04:10:05] [PASSED] clr-one-set-other
[04:10:05] [PASSED] set-field
[04:10:05] [PASSED] conflict-duplicate
[04:10:05] [PASSED] conflict-not-disjoint
[04:10:05] [PASSED] conflict-reg-type
[04:10:05] =========== [PASSED] xe_rtp_process_to_sr_tests ============
[04:10:05] ================== xe_rtp_process_tests  ===================
[04:10:05] [PASSED] active1
[04:10:05] [PASSED] active2
[04:10:05] [PASSED] active-inactive
[04:10:05] [PASSED] inactive-active
[04:10:05] [PASSED] inactive-1st_or_active-inactive
[04:10:05] [PASSED] inactive-2nd_or_active-inactive
[04:10:05] [PASSED] inactive-last_or_active-inactive
[04:10:05] [PASSED] inactive-no_or_active-inactive
[04:10:05] ============== [PASSED] xe_rtp_process_tests ===============
[04:10:05] ===================== [PASSED] xe_rtp ======================
[04:10:05] ==================== xe_wa (1 subtest) =====================
[04:10:05] ======================== xe_wa_gt  =========================
[04:10:05] [PASSED] TIGERLAKE B0
[04:10:05] [PASSED] DG1 A0
[04:10:05] [PASSED] DG1 B0
[04:10:05] [PASSED] ALDERLAKE_S A0
[04:10:05] [PASSED] ALDERLAKE_S B0
[04:10:05] [PASSED] ALDERLAKE_S C0
[04:10:05] [PASSED] ALDERLAKE_S D0
[04:10:05] [PASSED] ALDERLAKE_P A0
[04:10:05] [PASSED] ALDERLAKE_P B0
[04:10:05] [PASSED] ALDERLAKE_P C0
[04:10:05] [PASSED] ALDERLAKE_S RPLS D0
[04:10:05] [PASSED] ALDERLAKE_P RPLU E0
[04:10:05] [PASSED] DG2 G10 C0
[04:10:05] [PASSED] DG2 G11 B1
[04:10:05] [PASSED] DG2 G12 A1
[04:10:05] [PASSED] METEORLAKE 12.70(Xe_LPG) A0 13.00(Xe_LPM+) A0
[04:10:05] [PASSED] METEORLAKE 12.71(Xe_LPG) A0 13.00(Xe_LPM+) A0
[04:10:05] [PASSED] METEORLAKE 12.74(Xe_LPG+) A0 13.00(Xe_LPM+) A0
[04:10:05] [PASSED] LUNARLAKE 20.04(Xe2_LPG) A0 20.00(Xe2_LPM) A0
[04:10:05] [PASSED] LUNARLAKE 20.04(Xe2_LPG) B0 20.00(Xe2_LPM) A0
[04:10:05] [PASSED] BATTLEMAGE 20.01(Xe2_HPG) A0 13.01(Xe2_HPM) A1
[04:10:05] [PASSED] PANTHERLAKE 30.00(Xe3_LPG) A0 30.00(Xe3_LPM) A0
[04:10:05] ==================== [PASSED] xe_wa_gt =====================
[04:10:05] ====================== [PASSED] xe_wa ======================
[04:10:05] ============================================================
[04:10:05] Testing complete. Ran 512 tests: passed: 494, skipped: 18
[04:10:05] Elapsed time: 36.128s total, 4.165s configuring, 31.490s building, 0.460s running

+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/tests/.kunitconfig
[04:10:05] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[04:10:07] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[04:10:32] Starting KUnit Kernel (1/1)...
[04:10:32] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[04:10:32] ============ drm_test_pick_cmdline (2 subtests) ============
[04:10:32] [PASSED] drm_test_pick_cmdline_res_1920_1080_60
[04:10:32] =============== drm_test_pick_cmdline_named  ===============
[04:10:32] [PASSED] NTSC
[04:10:32] [PASSED] NTSC-J
[04:10:32] [PASSED] PAL
[04:10:32] [PASSED] PAL-M
[04:10:32] =========== [PASSED] drm_test_pick_cmdline_named ===========
[04:10:32] ============== [PASSED] drm_test_pick_cmdline ==============
[04:10:32] == drm_test_atomic_get_connector_for_encoder (1 subtest) ===
[04:10:32] [PASSED] drm_test_drm_atomic_get_connector_for_encoder
[04:10:32] ==== [PASSED] drm_test_atomic_get_connector_for_encoder ====
[04:10:32] =========== drm_validate_clone_mode (2 subtests) ===========
[04:10:32] ============== drm_test_check_in_clone_mode  ===============
[04:10:32] [PASSED] in_clone_mode
[04:10:32] [PASSED] not_in_clone_mode
[04:10:32] ========== [PASSED] drm_test_check_in_clone_mode ===========
[04:10:32] =============== drm_test_check_valid_clones  ===============
[04:10:32] [PASSED] not_in_clone_mode
[04:10:32] [PASSED] valid_clone
[04:10:32] [PASSED] invalid_clone
[04:10:32] =========== [PASSED] drm_test_check_valid_clones ===========
[04:10:32] ============= [PASSED] drm_validate_clone_mode =============
[04:10:32] ============= drm_validate_modeset (1 subtest) =============
[04:10:32] [PASSED] drm_test_check_connector_changed_modeset
[04:10:32] ============== [PASSED] drm_validate_modeset ===============
[04:10:32] ====== drm_test_bridge_get_current_state (2 subtests) ======
[04:10:32] [PASSED] drm_test_drm_bridge_get_current_state_atomic
[04:10:32] [PASSED] drm_test_drm_bridge_get_current_state_legacy
[04:10:32] ======== [PASSED] drm_test_bridge_get_current_state ========
[04:10:32] ====== drm_test_bridge_helper_reset_crtc (3 subtests) ======
[04:10:32] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic
[04:10:32] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic_disabled
[04:10:32] [PASSED] drm_test_drm_bridge_helper_reset_crtc_legacy
[04:10:32] ======== [PASSED] drm_test_bridge_helper_reset_crtc ========
[04:10:32] ============== drm_bridge_alloc (2 subtests) ===============
[04:10:32] [PASSED] drm_test_drm_bridge_alloc_basic
[04:10:32] [PASSED] drm_test_drm_bridge_alloc_get_put
[04:10:32] ================ [PASSED] drm_bridge_alloc =================
[04:10:32] ================== drm_buddy (8 subtests) ==================
[04:10:32] [PASSED] drm_test_buddy_alloc_limit
[04:10:32] [PASSED] drm_test_buddy_alloc_optimistic
[04:10:32] [PASSED] drm_test_buddy_alloc_pessimistic
[04:10:32] [PASSED] drm_test_buddy_alloc_pathological
[04:10:32] [PASSED] drm_test_buddy_alloc_contiguous
[04:10:32] [PASSED] drm_test_buddy_alloc_clear
[04:10:32] [PASSED] drm_test_buddy_alloc_range_bias
[04:10:32] [PASSED] drm_test_buddy_fragmentation_performance
[04:10:32] ==================== [PASSED] drm_buddy ====================
[04:10:32] ============= drm_cmdline_parser (40 subtests) =============
[04:10:32] [PASSED] drm_test_cmdline_force_d_only
[04:10:32] [PASSED] drm_test_cmdline_force_D_only_dvi
[04:10:32] [PASSED] drm_test_cmdline_force_D_only_hdmi
[04:10:32] [PASSED] drm_test_cmdline_force_D_only_not_digital
[04:10:32] [PASSED] drm_test_cmdline_force_e_only
[04:10:32] [PASSED] drm_test_cmdline_res
[04:10:32] [PASSED] drm_test_cmdline_res_vesa
[04:10:32] [PASSED] drm_test_cmdline_res_vesa_rblank
[04:10:32] [PASSED] drm_test_cmdline_res_rblank
[04:10:32] [PASSED] drm_test_cmdline_res_bpp
[04:10:32] [PASSED] drm_test_cmdline_res_refresh
[04:10:32] [PASSED] drm_test_cmdline_res_bpp_refresh
[04:10:32] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced
[04:10:32] [PASSED] drm_test_cmdline_res_bpp_refresh_margins
[04:10:32] [PASSED] drm_test_cmdline_res_bpp_refresh_force_off
[04:10:32] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on
[04:10:32] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_analog
[04:10:32] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_digital
[04:10:32] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced_margins_force_on
[04:10:32] [PASSED] drm_test_cmdline_res_margins_force_on
[04:10:32] [PASSED] drm_test_cmdline_res_vesa_margins
[04:10:32] [PASSED] drm_test_cmdline_name
[04:10:32] [PASSED] drm_test_cmdline_name_bpp
[04:10:32] [PASSED] drm_test_cmdline_name_option
[04:10:32] [PASSED] drm_test_cmdline_name_bpp_option
[04:10:32] [PASSED] drm_test_cmdline_rotate_0
[04:10:32] [PASSED] drm_test_cmdline_rotate_90
[04:10:32] [PASSED] drm_test_cmdline_rotate_180
[04:10:32] [PASSED] drm_test_cmdline_rotate_270
[04:10:32] [PASSED] drm_test_cmdline_hmirror
[04:10:32] [PASSED] drm_test_cmdline_vmirror
[04:10:32] [PASSED] drm_test_cmdline_margin_options
[04:10:32] [PASSED] drm_test_cmdline_multiple_options
[04:10:32] [PASSED] drm_test_cmdline_bpp_extra_and_option
[04:10:32] [PASSED] drm_test_cmdline_extra_and_option
[04:10:32] [PASSED] drm_test_cmdline_freestanding_options
[04:10:32] [PASSED] drm_test_cmdline_freestanding_force_e_and_options
[04:10:32] [PASSED] drm_test_cmdline_panel_orientation
[04:10:32] ================ drm_test_cmdline_invalid  =================
[04:10:32] [PASSED] margin_only
[04:10:32] [PASSED] interlace_only
[04:10:32] [PASSED] res_missing_x
[04:10:32] [PASSED] res_missing_y
[04:10:32] [PASSED] res_bad_y
[04:10:32] [PASSED] res_missing_y_bpp
[04:10:32] [PASSED] res_bad_bpp
[04:10:32] [PASSED] res_bad_refresh
[04:10:32] [PASSED] res_bpp_refresh_force_on_off
[04:10:32] [PASSED] res_invalid_mode
[04:10:32] [PASSED] res_bpp_wrong_place_mode
[04:10:32] [PASSED] name_bpp_refresh
[04:10:32] [PASSED] name_refresh
[04:10:32] [PASSED] name_refresh_wrong_mode
[04:10:32] [PASSED] name_refresh_invalid_mode
[04:10:32] [PASSED] rotate_multiple
[04:10:32] [PASSED] rotate_invalid_val
[04:10:32] [PASSED] rotate_truncated
[04:10:32] [PASSED] invalid_option
[04:10:32] [PASSED] invalid_tv_option
[04:10:32] [PASSED] truncated_tv_option
[04:10:32] ============ [PASSED] drm_test_cmdline_invalid =============
[04:10:32] =============== drm_test_cmdline_tv_options  ===============
[04:10:32] [PASSED] NTSC
[04:10:32] [PASSED] NTSC_443
[04:10:32] [PASSED] NTSC_J
[04:10:32] [PASSED] PAL
[04:10:32] [PASSED] PAL_M
[04:10:32] [PASSED] PAL_N
[04:10:32] [PASSED] SECAM
[04:10:32] [PASSED] MONO_525
[04:10:32] [PASSED] MONO_625
[04:10:32] =========== [PASSED] drm_test_cmdline_tv_options ===========
[04:10:32] =============== [PASSED] drm_cmdline_parser ================
[04:10:32] ========== drmm_connector_hdmi_init (20 subtests) ==========
[04:10:32] [PASSED] drm_test_connector_hdmi_init_valid
[04:10:32] [PASSED] drm_test_connector_hdmi_init_bpc_8
[04:10:32] [PASSED] drm_test_connector_hdmi_init_bpc_10
[04:10:32] [PASSED] drm_test_connector_hdmi_init_bpc_12
[04:10:32] [PASSED] drm_test_connector_hdmi_init_bpc_invalid
[04:10:32] [PASSED] drm_test_connector_hdmi_init_bpc_null
[04:10:32] [PASSED] drm_test_connector_hdmi_init_formats_empty
[04:10:32] [PASSED] drm_test_connector_hdmi_init_formats_no_rgb
[04:10:32] === drm_test_connector_hdmi_init_formats_yuv420_allowed  ===
[04:10:32] [PASSED] supported_formats=0x9 yuv420_allowed=1
[04:10:32] [PASSED] supported_formats=0x9 yuv420_allowed=0
[04:10:32] [PASSED] supported_formats=0x3 yuv420_allowed=1
[04:10:32] [PASSED] supported_formats=0x3 yuv420_allowed=0
[04:10:32] === [PASSED] drm_test_connector_hdmi_init_formats_yuv420_allowed ===
[04:10:32] [PASSED] drm_test_connector_hdmi_init_null_ddc
[04:10:32] [PASSED] drm_test_connector_hdmi_init_null_product
[04:10:32] [PASSED] drm_test_connector_hdmi_init_null_vendor
[04:10:32] [PASSED] drm_test_connector_hdmi_init_product_length_exact
[04:10:32] [PASSED] drm_test_connector_hdmi_init_product_length_too_long
[04:10:32] [PASSED] drm_test_connector_hdmi_init_product_valid
[04:10:32] [PASSED] drm_test_connector_hdmi_init_vendor_length_exact
[04:10:32] [PASSED] drm_test_connector_hdmi_init_vendor_length_too_long
[04:10:32] [PASSED] drm_test_connector_hdmi_init_vendor_valid
[04:10:32] ========= drm_test_connector_hdmi_init_type_valid  =========
[04:10:32] [PASSED] HDMI-A
[04:10:32] [PASSED] HDMI-B
[04:10:32] ===== [PASSED] drm_test_connector_hdmi_init_type_valid =====
[04:10:32] ======== drm_test_connector_hdmi_init_type_invalid  ========
[04:10:32] [PASSED] Unknown
[04:10:32] [PASSED] VGA
[04:10:32] [PASSED] DVI-I
[04:10:32] [PASSED] DVI-D
[04:10:32] [PASSED] DVI-A
[04:10:32] [PASSED] Composite
[04:10:32] [PASSED] SVIDEO
[04:10:32] [PASSED] LVDS
[04:10:32] [PASSED] Component
[04:10:32] [PASSED] DIN
[04:10:32] [PASSED] DP
[04:10:32] [PASSED] TV
[04:10:32] [PASSED] eDP
[04:10:32] [PASSED] Virtual
[04:10:32] [PASSED] DSI
[04:10:32] [PASSED] DPI
[04:10:32] [PASSED] Writeback
[04:10:32] [PASSED] SPI
[04:10:32] [PASSED] USB
[04:10:32] ==== [PASSED] drm_test_connector_hdmi_init_type_invalid ====
[04:10:32] ============ [PASSED] drmm_connector_hdmi_init =============
[04:10:32] ============= drmm_connector_init (3 subtests) =============
[04:10:32] [PASSED] drm_test_drmm_connector_init
[04:10:32] [PASSED] drm_test_drmm_connector_init_null_ddc
[04:10:32] ========= drm_test_drmm_connector_init_type_valid  =========
[04:10:32] [PASSED] Unknown
[04:10:32] [PASSED] VGA
[04:10:32] [PASSED] DVI-I
[04:10:32] [PASSED] DVI-D
[04:10:32] [PASSED] DVI-A
[04:10:32] [PASSED] Composite
[04:10:32] [PASSED] SVIDEO
[04:10:32] [PASSED] LVDS
[04:10:32] [PASSED] Component
[04:10:32] [PASSED] DIN
[04:10:32] [PASSED] DP
[04:10:32] [PASSED] HDMI-A
[04:10:32] [PASSED] HDMI-B
[04:10:32] [PASSED] TV
[04:10:32] [PASSED] eDP
[04:10:32] [PASSED] Virtual
[04:10:32] [PASSED] DSI
[04:10:32] [PASSED] DPI
[04:10:32] [PASSED] Writeback
[04:10:32] [PASSED] SPI
[04:10:32] [PASSED] USB
[04:10:32] ===== [PASSED] drm_test_drmm_connector_init_type_valid =====
[04:10:32] =============== [PASSED] drmm_connector_init ===============
[04:10:32] ========= drm_connector_dynamic_init (6 subtests) ==========
[04:10:32] [PASSED] drm_test_drm_connector_dynamic_init
[04:10:32] [PASSED] drm_test_drm_connector_dynamic_init_null_ddc
[04:10:32] [PASSED] drm_test_drm_connector_dynamic_init_not_added
[04:10:32] [PASSED] drm_test_drm_connector_dynamic_init_properties
[04:10:32] ===== drm_test_drm_connector_dynamic_init_type_valid  ======
[04:10:32] [PASSED] Unknown
[04:10:32] [PASSED] VGA
[04:10:32] [PASSED] DVI-I
[04:10:32] [PASSED] DVI-D
[04:10:32] [PASSED] DVI-A
[04:10:32] [PASSED] Composite
[04:10:32] [PASSED] SVIDEO
[04:10:32] [PASSED] LVDS
[04:10:32] [PASSED] Component
[04:10:32] [PASSED] DIN
[04:10:32] [PASSED] DP
[04:10:32] [PASSED] HDMI-A
[04:10:32] [PASSED] HDMI-B
[04:10:32] [PASSED] TV
[04:10:32] [PASSED] eDP
[04:10:32] [PASSED] Virtual
[04:10:32] [PASSED] DSI
[04:10:32] [PASSED] DPI
[04:10:32] [PASSED] Writeback
[04:10:32] [PASSED] SPI
[04:10:32] [PASSED] USB
[04:10:32] = [PASSED] drm_test_drm_connector_dynamic_init_type_valid ==
[04:10:32] ======== drm_test_drm_connector_dynamic_init_name  =========
[04:10:32] [PASSED] Unknown
[04:10:32] [PASSED] VGA
[04:10:32] [PASSED] DVI-I
[04:10:32] [PASSED] DVI-D
[04:10:32] [PASSED] DVI-A
[04:10:32] [PASSED] Composite
[04:10:32] [PASSED] SVIDEO
[04:10:32] [PASSED] LVDS
[04:10:32] [PASSED] Component
[04:10:32] [PASSED] DIN
[04:10:32] [PASSED] DP
[04:10:32] [PASSED] HDMI-A
[04:10:32] [PASSED] HDMI-B
[04:10:32] [PASSED] TV
[04:10:32] [PASSED] eDP
[04:10:32] [PASSED] Virtual
[04:10:32] [PASSED] DSI
[04:10:32] [PASSED] DPI
[04:10:32] [PASSED] Writeback
[04:10:32] [PASSED] SPI
[04:10:32] [PASSED] USB
[04:10:32] ==== [PASSED] drm_test_drm_connector_dynamic_init_name =====
[04:10:32] =========== [PASSED] drm_connector_dynamic_init ============
[04:10:32] ==== drm_connector_dynamic_register_early (4 subtests) =====
[04:10:32] [PASSED] drm_test_drm_connector_dynamic_register_early_on_list
[04:10:32] [PASSED] drm_test_drm_connector_dynamic_register_early_defer
[04:10:32] [PASSED] drm_test_drm_connector_dynamic_register_early_no_init
[04:10:32] [PASSED] drm_test_drm_connector_dynamic_register_early_no_mode_object
[04:10:32] ====== [PASSED] drm_connector_dynamic_register_early =======
[04:10:32] ======= drm_connector_dynamic_register (7 subtests) ========
[04:10:32] [PASSED] drm_test_drm_connector_dynamic_register_on_list
[04:10:32] [PASSED] drm_test_drm_connector_dynamic_register_no_defer
[04:10:32] [PASSED] drm_test_drm_connector_dynamic_register_no_init
[04:10:32] [PASSED] drm_test_drm_connector_dynamic_register_mode_object
[04:10:32] [PASSED] drm_test_drm_connector_dynamic_register_sysfs
[04:10:32] [PASSED] drm_test_drm_connector_dynamic_register_sysfs_name
[04:10:32] [PASSED] drm_test_drm_connector_dynamic_register_debugfs
[04:10:32] ========= [PASSED] drm_connector_dynamic_register ==========
[04:10:32] = drm_connector_attach_broadcast_rgb_property (2 subtests) =
[04:10:32] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property
[04:10:32] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property_hdmi_connector
[04:10:32] === [PASSED] drm_connector_attach_broadcast_rgb_property ===
[04:10:32] ========== drm_get_tv_mode_from_name (2 subtests) ==========
[04:10:32] ========== drm_test_get_tv_mode_from_name_valid  ===========
[04:10:32] [PASSED] NTSC
[04:10:32] [PASSED] NTSC-443
[04:10:32] [PASSED] NTSC-J
[04:10:32] [PASSED] PAL
[04:10:32] [PASSED] PAL-M
[04:10:32] [PASSED] PAL-N
[04:10:32] [PASSED] SECAM
[04:10:32] [PASSED] Mono
[04:10:32] ====== [PASSED] drm_test_get_tv_mode_from_name_valid =======
[04:10:32] [PASSED] drm_test_get_tv_mode_from_name_truncated
[04:10:32] ============ [PASSED] drm_get_tv_mode_from_name ============
[04:10:32] = drm_test_connector_hdmi_compute_mode_clock (12 subtests) =
[04:10:32] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb
[04:10:32] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc
[04:10:32] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc_vic_1
[04:10:32] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc
[04:10:32] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc_vic_1
[04:10:32] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_double
[04:10:32] = drm_test_connector_hdmi_compute_mode_clock_yuv420_valid  =
[04:10:32] [PASSED] VIC 96
[04:10:32] [PASSED] VIC 97
[04:10:32] [PASSED] VIC 101
[04:10:32] [PASSED] VIC 102
[04:10:32] [PASSED] VIC 106
[04:10:32] [PASSED] VIC 107
[04:10:32] === [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_valid ===
[04:10:32] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_10_bpc
[04:10:32] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_12_bpc
[04:10:32] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_8_bpc
[04:10:32] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_10_bpc
[04:10:32] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_12_bpc
[04:10:32] === [PASSED] drm_test_connector_hdmi_compute_mode_clock ====
[04:10:32] == drm_hdmi_connector_get_broadcast_rgb_name (2 subtests) ==
[04:10:32] === drm_test_drm_hdmi_connector_get_broadcast_rgb_name  ====
[04:10:32] [PASSED] Automatic
[04:10:32] [PASSED] Full
[04:10:32] [PASSED] Limited 16:235
[04:10:32] === [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name ===
[04:10:32] [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name_invalid
[04:10:32] ==== [PASSED] drm_hdmi_connector_get_broadcast_rgb_name ====
[04:10:32] == drm_hdmi_connector_get_output_format_name (2 subtests) ==
[04:10:32] === drm_test_drm_hdmi_connector_get_output_format_name  ====
[04:10:32] [PASSED] RGB
[04:10:32] [PASSED] YUV 4:2:0
[04:10:32] [PASSED] YUV 4:2:2
[04:10:32] [PASSED] YUV 4:4:4
[04:10:32] === [PASSED] drm_test_drm_hdmi_connector_get_output_format_name ===
[04:10:32] [PASSED] drm_test_drm_hdmi_connector_get_output_format_name_invalid
[04:10:32] ==== [PASSED] drm_hdmi_connector_get_output_format_name ====
[04:10:32] ============= drm_damage_helper (21 subtests) ==============
[04:10:32] [PASSED] drm_test_damage_iter_no_damage
[04:10:32] [PASSED] drm_test_damage_iter_no_damage_fractional_src
[04:10:32] [PASSED] drm_test_damage_iter_no_damage_src_moved
[04:10:32] [PASSED] drm_test_damage_iter_no_damage_fractional_src_moved
[04:10:32] [PASSED] drm_test_damage_iter_no_damage_not_visible
[04:10:32] [PASSED] drm_test_damage_iter_no_damage_no_crtc
[04:10:32] [PASSED] drm_test_damage_iter_no_damage_no_fb
[04:10:32] [PASSED] drm_test_damage_iter_simple_damage
[04:10:32] [PASSED] drm_test_damage_iter_single_damage
[04:10:32] [PASSED] drm_test_damage_iter_single_damage_intersect_src
[04:10:32] [PASSED] drm_test_damage_iter_single_damage_outside_src
[04:10:32] [PASSED] drm_test_damage_iter_single_damage_fractional_src
[04:10:32] [PASSED] drm_test_damage_iter_single_damage_intersect_fractional_src
[04:10:32] [PASSED] drm_test_damage_iter_single_damage_outside_fractional_src
[04:10:32] [PASSED] drm_test_damage_iter_single_damage_src_moved
[04:10:32] [PASSED] drm_test_damage_iter_single_damage_fractional_src_moved
[04:10:32] [PASSED] drm_test_damage_iter_damage
[04:10:32] [PASSED] drm_test_damage_iter_damage_one_intersect
[04:10:32] [PASSED] drm_test_damage_iter_damage_one_outside
[04:10:32] [PASSED] drm_test_damage_iter_damage_src_moved
[04:10:32] [PASSED] drm_test_damage_iter_damage_not_visible
[04:10:32] ================ [PASSED] drm_damage_helper ================
[04:10:32] ============== drm_dp_mst_helper (3 subtests) ==============
[04:10:32] ============== drm_test_dp_mst_calc_pbn_mode  ==============
[04:10:32] [PASSED] Clock 154000 BPP 30 DSC disabled
[04:10:32] [PASSED] Clock 234000 BPP 30 DSC disabled
[04:10:32] [PASSED] Clock 297000 BPP 24 DSC disabled
[04:10:32] [PASSED] Clock 332880 BPP 24 DSC enabled
[04:10:32] [PASSED] Clock 324540 BPP 24 DSC enabled
[04:10:32] ========== [PASSED] drm_test_dp_mst_calc_pbn_mode ==========
[04:10:32] ============== drm_test_dp_mst_calc_pbn_div  ===============
[04:10:32] [PASSED] Link rate 2000000 lane count 4
[04:10:32] [PASSED] Link rate 2000000 lane count 2
[04:10:32] [PASSED] Link rate 2000000 lane count 1
[04:10:32] [PASSED] Link rate 1350000 lane count 4
[04:10:32] [PASSED] Link rate 1350000 lane count 2
[04:10:32] [PASSED] Link rate 1350000 lane count 1
[04:10:32] [PASSED] Link rate 1000000 lane count 4
[04:10:32] [PASSED] Link rate 1000000 lane count 2
[04:10:32] [PASSED] Link rate 1000000 lane count 1
[04:10:32] [PASSED] Link rate 810000 lane count 4
[04:10:32] [PASSED] Link rate 810000 lane count 2
[04:10:32] [PASSED] Link rate 810000 lane count 1
[04:10:32] [PASSED] Link rate 540000 lane count 4
[04:10:32] [PASSED] Link rate 540000 lane count 2
[04:10:32] [PASSED] Link rate 540000 lane count 1
[04:10:32] [PASSED] Link rate 270000 lane count 4
[04:10:32] [PASSED] Link rate 270000 lane count 2
[04:10:32] [PASSED] Link rate 270000 lane count 1
[04:10:32] [PASSED] Link rate 162000 lane count 4
[04:10:32] [PASSED] Link rate 162000 lane count 2
[04:10:32] [PASSED] Link rate 162000 lane count 1
[04:10:32] ========== [PASSED] drm_test_dp_mst_calc_pbn_div ===========
[04:10:32] ========= drm_test_dp_mst_sideband_msg_req_decode  =========
[04:10:32] [PASSED] DP_ENUM_PATH_RESOURCES with port number
[04:10:32] [PASSED] DP_POWER_UP_PHY with port number
[04:10:32] [PASSED] DP_POWER_DOWN_PHY with port number
[04:10:32] [PASSED] DP_ALLOCATE_PAYLOAD with SDP stream sinks
[04:10:32] [PASSED] DP_ALLOCATE_PAYLOAD with port number
[04:10:32] [PASSED] DP_ALLOCATE_PAYLOAD with VCPI
[04:10:32] [PASSED] DP_ALLOCATE_PAYLOAD with PBN
[04:10:32] [PASSED] DP_QUERY_PAYLOAD with port number
[04:10:32] [PASSED] DP_QUERY_PAYLOAD with VCPI
[04:10:32] [PASSED] DP_REMOTE_DPCD_READ with port number
[04:10:32] [PASSED] DP_REMOTE_DPCD_READ with DPCD address
[04:10:32] [PASSED] DP_REMOTE_DPCD_READ with max number of bytes
[04:10:32] [PASSED] DP_REMOTE_DPCD_WRITE with port number
[04:10:32] [PASSED] DP_REMOTE_DPCD_WRITE with DPCD address
[04:10:32] [PASSED] DP_REMOTE_DPCD_WRITE with data array
[04:10:32] [PASSED] DP_REMOTE_I2C_READ with port number
[04:10:32] [PASSED] DP_REMOTE_I2C_READ with I2C device ID
[04:10:32] [PASSED] DP_REMOTE_I2C_READ with transactions array
[04:10:32] [PASSED] DP_REMOTE_I2C_WRITE with port number
[04:10:32] [PASSED] DP_REMOTE_I2C_WRITE with I2C device ID
[04:10:32] [PASSED] DP_REMOTE_I2C_WRITE with data array
[04:10:32] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream ID
[04:10:32] [PASSED] DP_QUERY_STREAM_ENC_STATUS with client ID
[04:10:32] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream event
[04:10:32] [PASSED] DP_QUERY_STREAM_ENC_STATUS with valid stream event
[04:10:32] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream behavior
[04:10:32] [PASSED] DP_QUERY_STREAM_ENC_STATUS with a valid stream behavior
[04:10:32] ===== [PASSED] drm_test_dp_mst_sideband_msg_req_decode =====
[04:10:32] ================ [PASSED] drm_dp_mst_helper ================
[04:10:32] ================== drm_exec (7 subtests) ===================
[04:10:32] [PASSED] sanitycheck
[04:10:32] [PASSED] test_lock
[04:10:32] [PASSED] test_lock_unlock
[04:10:32] [PASSED] test_duplicates
[04:10:32] [PASSED] test_prepare
[04:10:32] [PASSED] test_prepare_array
[04:10:32] [PASSED] test_multiple_loops
[04:10:32] ==================== [PASSED] drm_exec =====================
[04:10:32] =========== drm_format_helper_test (17 subtests) ===========
[04:10:32] ============== drm_test_fb_xrgb8888_to_gray8  ==============
[04:10:32] [PASSED] single_pixel_source_buffer
[04:10:32] [PASSED] single_pixel_clip_rectangle
[04:10:32] [PASSED] well_known_colors
[04:10:32] [PASSED] destination_pitch
[04:10:32] ========== [PASSED] drm_test_fb_xrgb8888_to_gray8 ==========
[04:10:32] ============= drm_test_fb_xrgb8888_to_rgb332  ==============
[04:10:32] [PASSED] single_pixel_source_buffer
[04:10:32] [PASSED] single_pixel_clip_rectangle
[04:10:32] [PASSED] well_known_colors
[04:10:32] [PASSED] destination_pitch
[04:10:32] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb332 ==========
[04:10:32] ============= drm_test_fb_xrgb8888_to_rgb565  ==============
[04:10:32] [PASSED] single_pixel_source_buffer
[04:10:32] [PASSED] single_pixel_clip_rectangle
[04:10:32] [PASSED] well_known_colors
[04:10:32] [PASSED] destination_pitch
[04:10:32] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb565 ==========
[04:10:32] ============ drm_test_fb_xrgb8888_to_xrgb1555  =============
[04:10:32] [PASSED] single_pixel_source_buffer
[04:10:32] [PASSED] single_pixel_clip_rectangle
[04:10:32] [PASSED] well_known_colors
[04:10:32] [PASSED] destination_pitch
[04:10:32] ======== [PASSED] drm_test_fb_xrgb8888_to_xrgb1555 =========
[04:10:32] ============ drm_test_fb_xrgb8888_to_argb1555  =============
[04:10:32] [PASSED] single_pixel_source_buffer
[04:10:32] [PASSED] single_pixel_clip_rectangle
[04:10:32] [PASSED] well_known_colors
[04:10:32] [PASSED] destination_pitch
[04:10:32] ======== [PASSED] drm_test_fb_xrgb8888_to_argb1555 =========
[04:10:32] ============ drm_test_fb_xrgb8888_to_rgba5551  =============
[04:10:32] [PASSED] single_pixel_source_buffer
[04:10:32] [PASSED] single_pixel_clip_rectangle
[04:10:32] [PASSED] well_known_colors
[04:10:32] [PASSED] destination_pitch
[04:10:32] ======== [PASSED] drm_test_fb_xrgb8888_to_rgba5551 =========
[04:10:32] ============= drm_test_fb_xrgb8888_to_rgb888  ==============
[04:10:32] [PASSED] single_pixel_source_buffer
[04:10:32] [PASSED] single_pixel_clip_rectangle
[04:10:32] [PASSED] well_known_colors
[04:10:32] [PASSED] destination_pitch
[04:10:32] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb888 ==========
[04:10:32] ============= drm_test_fb_xrgb8888_to_bgr888  ==============
[04:10:32] [PASSED] single_pixel_source_buffer
[04:10:32] [PASSED] single_pixel_clip_rectangle
[04:10:32] [PASSED] well_known_colors
[04:10:32] [PASSED] destination_pitch
[04:10:32] ========= [PASSED] drm_test_fb_xrgb8888_to_bgr888 ==========
[04:10:32] ============ drm_test_fb_xrgb8888_to_argb8888  =============
[04:10:32] [PASSED] single_pixel_source_buffer
[04:10:32] [PASSED] single_pixel_clip_rectangle
[04:10:32] [PASSED] well_known_colors
[04:10:32] [PASSED] destination_pitch
[04:10:32] ======== [PASSED] drm_test_fb_xrgb8888_to_argb8888 =========
[04:10:32] =========== drm_test_fb_xrgb8888_to_xrgb2101010  ===========
[04:10:32] [PASSED] single_pixel_source_buffer
[04:10:32] [PASSED] single_pixel_clip_rectangle
[04:10:32] [PASSED] well_known_colors
[04:10:32] [PASSED] destination_pitch
[04:10:32] ======= [PASSED] drm_test_fb_xrgb8888_to_xrgb2101010 =======
[04:10:32] =========== drm_test_fb_xrgb8888_to_argb2101010  ===========
[04:10:32] [PASSED] single_pixel_source_buffer
[04:10:32] [PASSED] single_pixel_clip_rectangle
[04:10:32] [PASSED] well_known_colors
[04:10:32] [PASSED] destination_pitch
[04:10:32] ======= [PASSED] drm_test_fb_xrgb8888_to_argb2101010 =======
[04:10:32] ============== drm_test_fb_xrgb8888_to_mono  ===============
[04:10:32] [PASSED] single_pixel_source_buffer
[04:10:32] [PASSED] single_pixel_clip_rectangle
[04:10:32] [PASSED] well_known_colors
[04:10:32] [PASSED] destination_pitch
[04:10:32] ========== [PASSED] drm_test_fb_xrgb8888_to_mono ===========
[04:10:32] ==================== drm_test_fb_swab  =====================
[04:10:32] [PASSED] single_pixel_source_buffer
[04:10:32] [PASSED] single_pixel_clip_rectangle
[04:10:32] [PASSED] well_known_colors
[04:10:32] [PASSED] destination_pitch
[04:10:32] ================ [PASSED] drm_test_fb_swab =================
[04:10:32] ============ drm_test_fb_xrgb8888_to_xbgr8888  =============
[04:10:32] [PASSED] single_pixel_source_buffer
[04:10:32] [PASSED] single_pixel_clip_rectangle
[04:10:32] [PASSED] well_known_colors
[04:10:32] [PASSED] destination_pitch
[04:10:32] ======== [PASSED] drm_test_fb_xrgb8888_to_xbgr8888 =========
[04:10:32] ============ drm_test_fb_xrgb8888_to_abgr8888  =============
[04:10:32] [PASSED] single_pixel_source_buffer
[04:10:32] [PASSED] single_pixel_clip_rectangle
[04:10:32] [PASSED] well_known_colors
[04:10:32] [PASSED] destination_pitch
[04:10:32] ======== [PASSED] drm_test_fb_xrgb8888_to_abgr8888 =========
[04:10:32] ================= drm_test_fb_clip_offset  =================
[04:10:32] [PASSED] pass through
[04:10:32] [PASSED] horizontal offset
[04:10:32] [PASSED] vertical offset
[04:10:32] [PASSED] horizontal and vertical offset
[04:10:32] [PASSED] horizontal offset (custom pitch)
[04:10:32] [PASSED] vertical offset (custom pitch)
[04:10:32] [PASSED] horizontal and vertical offset (custom pitch)
[04:10:32] ============= [PASSED] drm_test_fb_clip_offset =============
[04:10:32] =================== drm_test_fb_memcpy  ====================
[04:10:32] [PASSED] single_pixel_source_buffer: XR24 little-endian (0x34325258)
[04:10:32] [PASSED] single_pixel_source_buffer: XRA8 little-endian (0x38415258)
[04:10:32] [PASSED] single_pixel_source_buffer: YU24 little-endian (0x34325559)
[04:10:32] [PASSED] single_pixel_clip_rectangle: XB24 little-endian (0x34324258)
[04:10:32] [PASSED] single_pixel_clip_rectangle: XRA8 little-endian (0x38415258)
[04:10:32] [PASSED] single_pixel_clip_rectangle: YU24 little-endian (0x34325559)
[04:10:32] [PASSED] well_known_colors: XB24 little-endian (0x34324258)
[04:10:32] [PASSED] well_known_colors: XRA8 little-endian (0x38415258)
[04:10:32] [PASSED] well_known_colors: YU24 little-endian (0x34325559)
[04:10:32] [PASSED] destination_pitch: XB24 little-endian (0x34324258)
[04:10:32] [PASSED] destination_pitch: XRA8 little-endian (0x38415258)
[04:10:32] [PASSED] destination_pitch: YU24 little-endian (0x34325559)
[04:10:32] =============== [PASSED] drm_test_fb_memcpy ================
[04:10:32] ============= [PASSED] drm_format_helper_test ==============
[04:10:32] ================= drm_format (18 subtests) =================
[04:10:32] [PASSED] drm_test_format_block_width_invalid
[04:10:32] [PASSED] drm_test_format_block_width_one_plane
[04:10:32] [PASSED] drm_test_format_block_width_two_plane
[04:10:32] [PASSED] drm_test_format_block_width_three_plane
[04:10:32] [PASSED] drm_test_format_block_width_tiled
[04:10:32] [PASSED] drm_test_format_block_height_invalid
[04:10:32] [PASSED] drm_test_format_block_height_one_plane
[04:10:32] [PASSED] drm_test_format_block_height_two_plane
[04:10:32] [PASSED] drm_test_format_block_height_three_plane
[04:10:32] [PASSED] drm_test_format_block_height_tiled
[04:10:32] [PASSED] drm_test_format_min_pitch_invalid
[04:10:32] [PASSED] drm_test_format_min_pitch_one_plane_8bpp
[04:10:32] [PASSED] drm_test_format_min_pitch_one_plane_16bpp
[04:10:32] [PASSED] drm_test_format_min_pitch_one_plane_24bpp
[04:10:32] [PASSED] drm_test_format_min_pitch_one_plane_32bpp
[04:10:32] [PASSED] drm_test_format_min_pitch_two_plane
[04:10:32] [PASSED] drm_test_format_min_pitch_three_plane_8bpp
[04:10:32] [PASSED] drm_test_format_min_pitch_tiled
[04:10:32] =================== [PASSED] drm_format ====================
[04:10:32] ============== drm_framebuffer (10 subtests) ===============
[04:10:32] ========== drm_test_framebuffer_check_src_coords  ==========
[04:10:32] [PASSED] Success: source fits into fb
[04:10:32] [PASSED] Fail: overflowing fb with x-axis coordinate
[04:10:32] [PASSED] Fail: overflowing fb with y-axis coordinate
[04:10:32] [PASSED] Fail: overflowing fb with source width
[04:10:32] [PASSED] Fail: overflowing fb with source height
[04:10:32] ====== [PASSED] drm_test_framebuffer_check_src_coords ======
[04:10:32] [PASSED] drm_test_framebuffer_cleanup
[04:10:32] =============== drm_test_framebuffer_create  ===============
[04:10:32] [PASSED] ABGR8888 normal sizes
[04:10:32] [PASSED] ABGR8888 max sizes
[04:10:32] [PASSED] ABGR8888 pitch greater than min required
[04:10:32] [PASSED] ABGR8888 pitch less than min required
[04:10:32] [PASSED] ABGR8888 Invalid width
[04:10:32] [PASSED] ABGR8888 Invalid buffer handle
[04:10:32] [PASSED] No pixel format
[04:10:32] [PASSED] ABGR8888 Width 0
[04:10:32] [PASSED] ABGR8888 Height 0
[04:10:32] [PASSED] ABGR8888 Out of bound height * pitch combination
[04:10:32] [PASSED] ABGR8888 Large buffer offset
[04:10:32] [PASSED] ABGR8888 Buffer offset for inexistent plane
[04:10:32] [PASSED] ABGR8888 Invalid flag
[04:10:32] [PASSED] ABGR8888 Set DRM_MODE_FB_MODIFIERS without modifiers
[04:10:32] [PASSED] ABGR8888 Valid buffer modifier
[04:10:32] [PASSED] ABGR8888 Invalid buffer modifier(DRM_FORMAT_MOD_SAMSUNG_64_32_TILE)
[04:10:32] [PASSED] ABGR8888 Extra pitches without DRM_MODE_FB_MODIFIERS
[04:10:32] [PASSED] ABGR8888 Extra pitches with DRM_MODE_FB_MODIFIERS
[04:10:32] [PASSED] NV12 Normal sizes
[04:10:32] [PASSED] NV12 Max sizes
[04:10:32] [PASSED] NV12 Invalid pitch
[04:10:32] [PASSED] NV12 Invalid modifier/missing DRM_MODE_FB_MODIFIERS flag
[04:10:32] [PASSED] NV12 different  modifier per-plane
[04:10:32] [PASSED] NV12 with DRM_FORMAT_MOD_SAMSUNG_64_32_TILE
[04:10:32] [PASSED] NV12 Valid modifiers without DRM_MODE_FB_MODIFIERS
[04:10:32] [PASSED] NV12 Modifier for inexistent plane
[04:10:32] [PASSED] NV12 Handle for inexistent plane
[04:10:32] [PASSED] NV12 Handle for inexistent plane without DRM_MODE_FB_MODIFIERS
[04:10:32] [PASSED] YVU420 DRM_MODE_FB_MODIFIERS set without modifier
[04:10:32] [PASSED] YVU420 Normal sizes
[04:10:32] [PASSED] YVU420 Max sizes
[04:10:32] [PASSED] YVU420 Invalid pitch
[04:10:32] [PASSED] YVU420 Different pitches
[04:10:32] [PASSED] YVU420 Different buffer offsets/pitches
[04:10:32] [PASSED] YVU420 Modifier set just for plane 0, without DRM_MODE_FB_MODIFIERS
[04:10:32] [PASSED] YVU420 Modifier set just for planes 0, 1, without DRM_MODE_FB_MODIFIERS
[04:10:32] [PASSED] YVU420 Modifier set just for plane 0, 1, with DRM_MODE_FB_MODIFIERS
[04:10:32] [PASSED] YVU420 Valid modifier
[04:10:32] [PASSED] YVU420 Different modifiers per plane
[04:10:32] [PASSED] YVU420 Modifier for inexistent plane
[04:10:32] [PASSED] YUV420_10BIT Invalid modifier(DRM_FORMAT_MOD_LINEAR)
[04:10:32] [PASSED] X0L2 Normal sizes
[04:10:32] [PASSED] X0L2 Max sizes
[04:10:32] [PASSED] X0L2 Invalid pitch
[04:10:32] [PASSED] X0L2 Pitch greater than minimum required
[04:10:32] [PASSED] X0L2 Handle for inexistent plane
[04:10:32] [PASSED] X0L2 Offset for inexistent plane, without DRM_MODE_FB_MODIFIERS set
[04:10:32] [PASSED] X0L2 Modifier without DRM_MODE_FB_MODIFIERS set
[04:10:32] [PASSED] X0L2 Valid modifier
[04:10:32] [PASSED] X0L2 Modifier for inexistent plane
[04:10:32] =========== [PASSED] drm_test_framebuffer_create ===========
[04:10:32] [PASSED] drm_test_framebuffer_free
[04:10:32] [PASSED] drm_test_framebuffer_init
[04:10:32] [PASSED] drm_test_framebuffer_init_bad_format
[04:10:32] [PASSED] drm_test_framebuffer_init_dev_mismatch
[04:10:32] [PASSED] drm_test_framebuffer_lookup
[04:10:32] [PASSED] drm_test_framebuffer_lookup_inexistent
[04:10:32] [PASSED] drm_test_framebuffer_modifiers_not_supported
[04:10:32] ================= [PASSED] drm_framebuffer =================
[04:10:32] ================ drm_gem_shmem (8 subtests) ================
[04:10:32] [PASSED] drm_gem_shmem_test_obj_create
[04:10:32] [PASSED] drm_gem_shmem_test_obj_create_private
[04:10:32] [PASSED] drm_gem_shmem_test_pin_pages
[04:10:32] [PASSED] drm_gem_shmem_test_vmap
[04:10:32] [PASSED] drm_gem_shmem_test_get_sg_table
[04:10:32] [PASSED] drm_gem_shmem_test_get_pages_sgt
[04:10:32] [PASSED] drm_gem_shmem_test_madvise
[04:10:32] [PASSED] drm_gem_shmem_test_purge
[04:10:32] ================== [PASSED] drm_gem_shmem ==================
[04:10:32] === drm_atomic_helper_connector_hdmi_check (27 subtests) ===
[04:10:32] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode
[04:10:32] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode_vic_1
[04:10:32] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode
[04:10:32] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode_vic_1
[04:10:32] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode
[04:10:32] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode_vic_1
[04:10:32] ====== drm_test_check_broadcast_rgb_cea_mode_yuv420  =======
[04:10:32] [PASSED] Automatic
[04:10:32] [PASSED] Full
[04:10:32] [PASSED] Limited 16:235
[04:10:32] == [PASSED] drm_test_check_broadcast_rgb_cea_mode_yuv420 ===
[04:10:32] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_changed
[04:10:32] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_not_changed
[04:10:32] [PASSED] drm_test_check_disable_connector
[04:10:32] [PASSED] drm_test_check_hdmi_funcs_reject_rate
[04:10:32] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_rgb
[04:10:32] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_yuv420
[04:10:32] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv422
[04:10:32] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv420
[04:10:32] [PASSED] drm_test_check_driver_unsupported_fallback_yuv420
[04:10:32] [PASSED] drm_test_check_output_bpc_crtc_mode_changed
[04:10:32] [PASSED] drm_test_check_output_bpc_crtc_mode_not_changed
[04:10:32] [PASSED] drm_test_check_output_bpc_dvi
[04:10:32] [PASSED] drm_test_check_output_bpc_format_vic_1
[04:10:32] [PASSED] drm_test_check_output_bpc_format_display_8bpc_only
[04:10:32] [PASSED] drm_test_check_output_bpc_format_display_rgb_only
[04:10:32] [PASSED] drm_test_check_output_bpc_format_driver_8bpc_only
[04:10:32] [PASSED] drm_test_check_output_bpc_format_driver_rgb_only
[04:10:32] [PASSED] drm_test_check_tmds_char_rate_rgb_8bpc
[04:10:32] [PASSED] drm_test_check_tmds_char_rate_rgb_10bpc
[04:10:32] [PASSED] drm_test_check_tmds_char_rate_rgb_12bpc
[04:10:32] ===== [PASSED] drm_atomic_helper_connector_hdmi_check ======
[04:10:32] === drm_atomic_helper_connector_hdmi_reset (6 subtests) ====
[04:10:32] [PASSED] drm_test_check_broadcast_rgb_value
[04:10:32] [PASSED] drm_test_check_bpc_8_value
[04:10:32] [PASSED] drm_test_check_bpc_10_value
[04:10:32] [PASSED] drm_test_check_bpc_12_value
[04:10:32] [PASSED] drm_test_check_format_value
[04:10:32] [PASSED] drm_test_check_tmds_char_value
[04:10:32] ===== [PASSED] drm_atomic_helper_connector_hdmi_reset ======
[04:10:32] = drm_atomic_helper_connector_hdmi_mode_valid (4 subtests) =
[04:10:32] [PASSED] drm_test_check_mode_valid
[04:10:32] [PASSED] drm_test_check_mode_valid_reject
[04:10:32] [PASSED] drm_test_check_mode_valid_reject_rate
[04:10:32] [PASSED] drm_test_check_mode_valid_reject_max_clock
[04:10:32] === [PASSED] drm_atomic_helper_connector_hdmi_mode_valid ===
[04:10:32] ================= drm_managed (2 subtests) =================
[04:10:32] [PASSED] drm_test_managed_release_action
[04:10:32] [PASSED] drm_test_managed_run_action
[04:10:32] =================== [PASSED] drm_managed ===================
[04:10:32] =================== drm_mm (6 subtests) ====================
[04:10:32] [PASSED] drm_test_mm_init
[04:10:32] [PASSED] drm_test_mm_debug
[04:10:32] [PASSED] drm_test_mm_align32
[04:10:32] [PASSED] drm_test_mm_align64
[04:10:32] [PASSED] drm_test_mm_lowest
[04:10:32] [PASSED] drm_test_mm_highest
[04:10:32] ===================== [PASSED] drm_mm ======================
[04:10:32] ============= drm_modes_analog_tv (5 subtests) =============
[04:10:32] [PASSED] drm_test_modes_analog_tv_mono_576i
[04:10:32] [PASSED] drm_test_modes_analog_tv_ntsc_480i
[04:10:32] [PASSED] drm_test_modes_analog_tv_ntsc_480i_inlined
[04:10:32] [PASSED] drm_test_modes_analog_tv_pal_576i
[04:10:32] [PASSED] drm_test_modes_analog_tv_pal_576i_inlined
[04:10:32] =============== [PASSED] drm_modes_analog_tv ===============
[04:10:32] ============== drm_plane_helper (2 subtests) ===============
[04:10:32] =============== drm_test_check_plane_state  ================
[04:10:32] [PASSED] clipping_simple
[04:10:32] [PASSED] clipping_rotate_reflect
[04:10:32] [PASSED] positioning_simple
[04:10:32] [PASSED] upscaling
[04:10:32] [PASSED] downscaling
[04:10:32] [PASSED] rounding1
[04:10:32] [PASSED] rounding2
[04:10:32] [PASSED] rounding3
[04:10:32] [PASSED] rounding4
[04:10:32] =========== [PASSED] drm_test_check_plane_state ============
[04:10:32] =========== drm_test_check_invalid_plane_state  ============
[04:10:32] [PASSED] positioning_invalid
[04:10:32] [PASSED] upscaling_invalid
[04:10:32] [PASSED] downscaling_invalid
[04:10:32] ======= [PASSED] drm_test_check_invalid_plane_state ========
[04:10:32] ================ [PASSED] drm_plane_helper =================
[04:10:32] ====== drm_connector_helper_tv_get_modes (1 subtest) =======
[04:10:32] ====== drm_test_connector_helper_tv_get_modes_check  =======
[04:10:32] [PASSED] None
[04:10:32] [PASSED] PAL
[04:10:32] [PASSED] NTSC
[04:10:32] [PASSED] Both, NTSC Default
[04:10:32] [PASSED] Both, PAL Default
[04:10:32] [PASSED] Both, NTSC Default, with PAL on command-line
[04:10:32] [PASSED] Both, PAL Default, with NTSC on command-line
[04:10:32] == [PASSED] drm_test_connector_helper_tv_get_modes_check ===
[04:10:32] ======== [PASSED] drm_connector_helper_tv_get_modes ========
[04:10:32] ================== drm_rect (9 subtests) ===================
[04:10:32] [PASSED] drm_test_rect_clip_scaled_div_by_zero
[04:10:32] [PASSED] drm_test_rect_clip_scaled_not_clipped
[04:10:32] [PASSED] drm_test_rect_clip_scaled_clipped
[04:10:32] [PASSED] drm_test_rect_clip_scaled_signed_vs_unsigned
[04:10:32] ================= drm_test_rect_intersect  =================
[04:10:32] [PASSED] top-left x bottom-right: 2x2+1+1 x 2x2+0+0
[04:10:32] [PASSED] top-right x bottom-left: 2x2+0+0 x 2x2+1-1
[04:10:32] [PASSED] bottom-left x top-right: 2x2+1-1 x 2x2+0+0
[04:10:32] [PASSED] bottom-right x top-left: 2x2+0+0 x 2x2+1+1
[04:10:32] [PASSED] right x left: 2x1+0+0 x 3x1+1+0
[04:10:32] [PASSED] left x right: 3x1+1+0 x 2x1+0+0
[04:10:32] [PASSED] up x bottom: 1x2+0+0 x 1x3+0-1
[04:10:32] [PASSED] bottom x up: 1x3+0-1 x 1x2+0+0
[04:10:32] [PASSED] touching corner: 1x1+0+0 x 2x2+1+1
[04:10:32] [PASSED] touching side: 1x1+0+0 x 1x1+1+0
[04:10:32] [PASSED] equal rects: 2x2+0+0 x 2x2+0+0
[04:10:32] [PASSED] inside another: 2x2+0+0 x 1x1+1+1
[04:10:32] [PASSED] far away: 1x1+0+0 x 1x1+3+6
[04:10:32] [PASSED] points intersecting: 0x0+5+10 x 0x0+5+10
[04:10:32] [PASSED] points not intersecting: 0x0+0+0 x 0x0+5+10
[04:10:32] ============= [PASSED] drm_test_rect_intersect =============
[04:10:32] ================ drm_test_rect_calc_hscale  ================
[04:10:32] [PASSED] normal use
[04:10:32] [PASSED] out of max range
[04:10:32] [PASSED] out of min range
[04:10:32] [PASSED] zero dst
[04:10:32] [PASSED] negative src
[04:10:32] [PASSED] negative dst
[04:10:32] ============ [PASSED] drm_test_rect_calc_hscale ============
[04:10:32] ================ drm_test_rect_calc_vscale  ================
[04:10:32] [PASSED] normal use
[04:10:32] [PASSED] out of max range
[04:10:32] [PASSED] out of min range
[04:10:32] [PASSED] zero dst
[04:10:32] [PASSED] negative src
[04:10:32] [PASSED] negative dst
[04:10:32] ============ [PASSED] drm_test_rect_calc_vscale ============
[04:10:32] ================== drm_test_rect_rotate  ===================
[04:10:32] [PASSED] reflect-x
[04:10:32] [PASSED] reflect-y
[04:10:32] [PASSED] rotate-0
[04:10:32] [PASSED] rotate-90
[04:10:32] [PASSED] rotate-180
[04:10:32] [PASSED] rotate-270
[04:10:32] ============== [PASSED] drm_test_rect_rotate ===============
[04:10:32] ================ drm_test_rect_rotate_inv  =================
[04:10:32] [PASSED] reflect-x
[04:10:32] [PASSED] reflect-y
[04:10:32] [PASSED] rotate-0
[04:10:32] [PASSED] rotate-90
[04:10:32] [PASSED] rotate-180
[04:10:32] [PASSED] rotate-270
[04:10:32] ============ [PASSED] drm_test_rect_rotate_inv =============
[04:10:32] ==================== [PASSED] drm_rect =====================
[04:10:32] ============ drm_sysfb_modeset_test (1 subtest) ============
[04:10:32] ============ drm_test_sysfb_build_fourcc_list  =============
[04:10:32] [PASSED] no native formats
[04:10:32] [PASSED] XRGB8888 as native format
[04:10:32] [PASSED] remove duplicates
[04:10:32] [PASSED] convert alpha formats
[04:10:32] [PASSED] random formats
[04:10:32] ======== [PASSED] drm_test_sysfb_build_fourcc_list =========
[04:10:32] ============= [PASSED] drm_sysfb_modeset_test ==============
[04:10:32] ================== drm_fixp (2 subtests) ===================
[04:10:32] [PASSED] drm_test_int2fixp
[04:10:32] [PASSED] drm_test_sm2fixp
[04:10:32] ==================== [PASSED] drm_fixp =====================
[04:10:32] ============================================================
[04:10:32] Testing complete. Ran 624 tests: passed: 624
[04:10:32] Elapsed time: 26.957s total, 1.631s configuring, 24.911s building, 0.373s running

+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/ttm/tests/.kunitconfig
[04:10:32] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[04:10:34] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[04:10:43] Starting KUnit Kernel (1/1)...
[04:10:43] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[04:10:43] ================= ttm_device (5 subtests) ==================
[04:10:43] [PASSED] ttm_device_init_basic
[04:10:43] [PASSED] ttm_device_init_multiple
[04:10:43] [PASSED] ttm_device_fini_basic
[04:10:43] [PASSED] ttm_device_init_no_vma_man
[04:10:43] ================== ttm_device_init_pools  ==================
[04:10:43] [PASSED] No DMA allocations, no DMA32 required
[04:10:43] [PASSED] DMA allocations, DMA32 required
[04:10:43] [PASSED] No DMA allocations, DMA32 required
[04:10:43] [PASSED] DMA allocations, no DMA32 required
[04:10:43] ============== [PASSED] ttm_device_init_pools ==============
[04:10:43] =================== [PASSED] ttm_device ====================
[04:10:43] ================== ttm_pool (8 subtests) ===================
[04:10:43] ================== ttm_pool_alloc_basic  ===================
[04:10:43] [PASSED] One page
[04:10:43] [PASSED] More than one page
[04:10:43] [PASSED] Above the allocation limit
[04:10:43] [PASSED] One page, with coherent DMA mappings enabled
[04:10:43] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[04:10:43] ============== [PASSED] ttm_pool_alloc_basic ===============
[04:10:43] ============== ttm_pool_alloc_basic_dma_addr  ==============
[04:10:43] [PASSED] One page
[04:10:43] [PASSED] More than one page
[04:10:43] [PASSED] Above the allocation limit
[04:10:43] [PASSED] One page, with coherent DMA mappings enabled
[04:10:43] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[04:10:43] ========== [PASSED] ttm_pool_alloc_basic_dma_addr ==========
[04:10:43] [PASSED] ttm_pool_alloc_order_caching_match
[04:10:43] [PASSED] ttm_pool_alloc_caching_mismatch
[04:10:43] [PASSED] ttm_pool_alloc_order_mismatch
[04:10:43] [PASSED] ttm_pool_free_dma_alloc
[04:10:43] [PASSED] ttm_pool_free_no_dma_alloc
[04:10:43] [PASSED] ttm_pool_fini_basic
[04:10:43] ==================== [PASSED] ttm_pool =====================
[04:10:43] ================ ttm_resource (8 subtests) =================
[04:10:43] ================= ttm_resource_init_basic  =================
[04:10:43] [PASSED] Init resource in TTM_PL_SYSTEM
[04:10:43] [PASSED] Init resource in TTM_PL_VRAM
[04:10:43] [PASSED] Init resource in a private placement
[04:10:43] [PASSED] Init resource in TTM_PL_SYSTEM, set placement flags
[04:10:43] ============= [PASSED] ttm_resource_init_basic =============
[04:10:43] [PASSED] ttm_resource_init_pinned
[04:10:43] [PASSED] ttm_resource_fini_basic
[04:10:43] [PASSED] ttm_resource_manager_init_basic
[04:10:43] [PASSED] ttm_resource_manager_usage_basic
[04:10:43] [PASSED] ttm_resource_manager_set_used_basic
[04:10:43] [PASSED] ttm_sys_man_alloc_basic
[04:10:43] [PASSED] ttm_sys_man_free_basic
[04:10:43] ================== [PASSED] ttm_resource ===================
[04:10:43] =================== ttm_tt (15 subtests) ===================
[04:10:43] ==================== ttm_tt_init_basic  ====================
[04:10:43] [PASSED] Page-aligned size
[04:10:43] [PASSED] Extra pages requested
[04:10:43] ================ [PASSED] ttm_tt_init_basic ================
[04:10:43] [PASSED] ttm_tt_init_misaligned
[04:10:43] [PASSED] ttm_tt_fini_basic
[04:10:43] [PASSED] ttm_tt_fini_sg
[04:10:43] [PASSED] ttm_tt_fini_shmem
[04:10:43] [PASSED] ttm_tt_create_basic
[04:10:43] [PASSED] ttm_tt_create_invalid_bo_type
[04:10:43] [PASSED] ttm_tt_create_ttm_exists
[04:10:43] [PASSED] ttm_tt_create_failed
[04:10:43] [PASSED] ttm_tt_destroy_basic
[04:10:43] [PASSED] ttm_tt_populate_null_ttm
[04:10:43] [PASSED] ttm_tt_populate_populated_ttm
[04:10:43] [PASSED] ttm_tt_unpopulate_basic
[04:10:43] [PASSED] ttm_tt_unpopulate_empty_ttm
[04:10:43] [PASSED] ttm_tt_swapin_basic
[04:10:43] ===================== [PASSED] ttm_tt ======================
[04:10:43] =================== ttm_bo (14 subtests) ===================
[04:10:43] =========== ttm_bo_reserve_optimistic_no_ticket  ===========
[04:10:43] [PASSED] Cannot be interrupted and sleeps
[04:10:43] [PASSED] Cannot be interrupted, locks straight away
[04:10:43] [PASSED] Can be interrupted, sleeps
[04:10:43] ======= [PASSED] ttm_bo_reserve_optimistic_no_ticket =======
[04:10:43] [PASSED] ttm_bo_reserve_locked_no_sleep
[04:10:43] [PASSED] ttm_bo_reserve_no_wait_ticket
[04:10:43] [PASSED] ttm_bo_reserve_double_resv
[04:10:43] [PASSED] ttm_bo_reserve_interrupted
[04:10:43] [PASSED] ttm_bo_reserve_deadlock
[04:10:43] [PASSED] ttm_bo_unreserve_basic
[04:10:43] [PASSED] ttm_bo_unreserve_pinned
[04:10:43] [PASSED] ttm_bo_unreserve_bulk
[04:10:43] [PASSED] ttm_bo_fini_basic
[04:10:43] [PASSED] ttm_bo_fini_shared_resv
[04:10:43] [PASSED] ttm_bo_pin_basic
[04:10:43] [PASSED] ttm_bo_pin_unpin_resource
[04:10:43] [PASSED] ttm_bo_multiple_pin_one_unpin
[04:10:43] ===================== [PASSED] ttm_bo ======================
[04:10:43] ============== ttm_bo_validate (21 subtests) ===============
[04:10:43] ============== ttm_bo_init_reserved_sys_man  ===============
[04:10:43] [PASSED] Buffer object for userspace
[04:10:43] [PASSED] Kernel buffer object
[04:10:43] [PASSED] Shared buffer object
[04:10:43] ========== [PASSED] ttm_bo_init_reserved_sys_man ===========
[04:10:43] ============== ttm_bo_init_reserved_mock_man  ==============
[04:10:43] [PASSED] Buffer object for userspace
[04:10:43] [PASSED] Kernel buffer object
[04:10:43] [PASSED] Shared buffer object
[04:10:43] ========== [PASSED] ttm_bo_init_reserved_mock_man ==========
[04:10:43] [PASSED] ttm_bo_init_reserved_resv
[04:10:43] ================== ttm_bo_validate_basic  ==================
[04:10:43] [PASSED] Buffer object for userspace
[04:10:43] [PASSED] Kernel buffer object
[04:10:43] [PASSED] Shared buffer object
[04:10:43] ============== [PASSED] ttm_bo_validate_basic ==============
[04:10:43] [PASSED] ttm_bo_validate_invalid_placement
[04:10:43] ============= ttm_bo_validate_same_placement  ==============
[04:10:43] [PASSED] System manager
[04:10:43] [PASSED] VRAM manager
[04:10:43] ========= [PASSED] ttm_bo_validate_same_placement ==========
[04:10:43] [PASSED] ttm_bo_validate_failed_alloc
[04:10:43] [PASSED] ttm_bo_validate_pinned
[04:10:43] [PASSED] ttm_bo_validate_busy_placement
[04:10:43] ================ ttm_bo_validate_multihop  =================
[04:10:43] [PASSED] Buffer object for userspace
[04:10:43] [PASSED] Kernel buffer object
[04:10:43] [PASSED] Shared buffer object
[04:10:43] ============ [PASSED] ttm_bo_validate_multihop =============
[04:10:43] ========== ttm_bo_validate_no_placement_signaled  ==========
[04:10:43] [PASSED] Buffer object in system domain, no page vector
[04:10:43] [PASSED] Buffer object in system domain with an existing page vector
[04:10:43] ====== [PASSED] ttm_bo_validate_no_placement_signaled ======
[04:10:43] ======== ttm_bo_validate_no_placement_not_signaled  ========
[04:10:43] [PASSED] Buffer object for userspace
[04:10:43] [PASSED] Kernel buffer object
[04:10:43] [PASSED] Shared buffer object
[04:10:43] ==== [PASSED] ttm_bo_validate_no_placement_not_signaled ====
[04:10:43] [PASSED] ttm_bo_validate_move_fence_signaled
[04:10:43] ========= ttm_bo_validate_move_fence_not_signaled  =========
[04:10:43] [PASSED] Waits for GPU
[04:10:43] [PASSED] Tries to lock straight away
[04:10:43] ===== [PASSED] ttm_bo_validate_move_fence_not_signaled =====
[04:10:43] [PASSED] ttm_bo_validate_happy_evict
[04:10:43] [PASSED] ttm_bo_validate_all_pinned_evict
[04:10:43] [PASSED] ttm_bo_validate_allowed_only_evict
[04:10:43] [PASSED] ttm_bo_validate_deleted_evict
[04:10:43] [PASSED] ttm_bo_validate_busy_domain_evict
[04:10:43] [PASSED] ttm_bo_validate_evict_gutting
[04:10:43] [PASSED] ttm_bo_validate_recrusive_evict
[04:10:43] ================= [PASSED] ttm_bo_validate =================
[04:10:43] ============================================================
[04:10:43] Testing complete. Ran 101 tests: passed: 101
[04:10:43] Elapsed time: 11.379s total, 1.685s configuring, 9.479s building, 0.180s running

+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel



^ permalink raw reply	[flat|nested] 33+ messages in thread

* ✗ CI.checksparse: warning for Fence deadlines in Xe (rev2)
  2026-01-05  4:02 [PATCH v2 00/22] Fence deadlines in Xe Matthew Brost
                   ` (23 preceding siblings ...)
  2026-01-05  4:10 ` ✓ CI.KUnit: success " Patchwork
@ 2026-01-05  4:26 ` Patchwork
  2026-01-05  5:07 ` ✓ Xe.CI.BAT: success " Patchwork
  2026-01-05  6:51 ` ✗ Xe.CI.Full: failure " Patchwork
  26 siblings, 0 replies; 33+ messages in thread
From: Patchwork @ 2026-01-05  4:26 UTC (permalink / raw)
  To: Matthew Brost; +Cc: intel-xe

== Series Details ==

Series: Fence deadlines in Xe (rev2)
URL   : https://patchwork.freedesktop.org/series/159479/
State : warning

== Summary ==

+ trap cleanup EXIT
+ KERNEL=/kernel
+ MT=/root/linux/maintainer-tools
+ git clone https://gitlab.freedesktop.org/drm/maintainer-tools /root/linux/maintainer-tools
Cloning into '/root/linux/maintainer-tools'...
warning: redirecting to https://gitlab.freedesktop.org/drm/maintainer-tools.git/
+ make -C /root/linux/maintainer-tools
make: Entering directory '/root/linux/maintainer-tools'
cc -O2 -g -Wextra -o remap-log remap-log.c
make: Leaving directory '/root/linux/maintainer-tools'
+ cd /kernel
+ git config --global --add safe.directory /kernel
+ /root/linux/maintainer-tools/dim sparse --fast 5fc5192372599f11da8dee072fd8beb4414f8eca
Sparse version: 0.6.4 (Ubuntu: 0.6.4-4ubuntu3)
Fast mode used, each commit won't be checked separately.
+drivers/gpu/drm/drm_gem_framebuffer_helper.c:23:1: error: bad constant expression
+drivers/gpu/drm/drm_simple_kms_helper.c:457:1: error: bad constant expression
+drivers/gpu/drm/drm_simple_kms_helper.c:458:1: error: bad constant expression
+drivers/gpu/drm/drm_simple_kms_helper.c:458:1: error: bad constant expression
+drivers/gpu/drm/i915/display/i9xx_plane.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/icl_dsi.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h, drivers/gpu/drm/i915/display/intel_dsi.h):
+drivers/gpu/drm/i915/display/intel_atomic.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_connector.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_crtc.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h, drivers/gpu/drm/i915/display/intel_display_trace.h):
+drivers/gpu/drm/i915/display/intel_crt.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_cursor.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_display_driver.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_display_reset.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_dp.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_dp_mst.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_dvo.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_hdmi.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_load_detect.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_lspcon.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_lvds.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_plane.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h, drivers/gpu/drm/i915/display/intel_display_trace.h):
+drivers/gpu/drm/i915/display/intel_psr.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_sdvo.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_sprite.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_tv.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/skl_universal_plane.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/vlv_dsi.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/virtio/virtgpu_drv.c:217:1: error: bad constant expression
+drivers/gpu/drm/virtio/virtgpu_drv.c:218:1: error: bad constant expression
+drivers/gpu/drm/virtio/virtgpu_drv.c:218:1: error: bad constant expression
+drivers/gpu/drm/virtio/virtgpu_drv.c:219:1: error: bad constant expression
+drivers/gpu/drm/virtio/virtgpu_drv.c:220:1: error: bad constant expression
+drivers/gpu/drm/virtio/virtgpu_drv.c:221:1: error: bad constant expression
+drivers/gpu/drm/virtio/virtgpu_drv.c:52:1: error: bad constant expression
+drivers/gpu/drm/virtio/virtgpu_drv.c:53:1: error: bad constant expression
+./include/linux/pwm.h:13:1: error: bad constant expression
+./include/linux/pwm.h:13:1: error: bad constant expression
+./include/linux/pwm.h:13:1: error: bad constant expression
+./include/linux/pwm.h:13:1: error: bad constant expression
+./include/linux/pwm.h:13:1: error: bad constant expression
+./include/linux/pwm.h:13:1: error: bad constant expression
+./include/linux/pwm.h:13:1: error: bad constant expression
+./include/linux/pwm.h:13:1: error: bad constant expression
+./include/linux/pwm.h:13:1: error: bad constant expression
+./include/linux/pwm.h:13:1: error: bad constant expression
+./include/linux/pwm.h:13:1: error: bad constant expression
+./include/linux/pwm.h:13:1: error: bad constant expression
+./include/linux/pwm.h:13:1: error: bad constant expression
+./include/linux/pwm.h:13:1: error: bad constant expression
+./include/linux/pwm.h:13:1: error: bad constant expression
+./include/linux/pwm.h:13:1: error: bad constant expression
+./include/linux/pwm.h:13:1: error: bad constant expression
+./include/linux/pwm.h:13:1: error: bad constant expression
+./include/linux/pwm.h:13:1: error: bad constant expression
+./include/linux/pwm.h:13:1: error: bad constant expression
+./include/linux/pwm.h:13:1: error: bad constant expression
+./include/linux/pwm.h:13:1: error: bad constant expression
+./include/linux/pwm.h:13:1: error: bad constant expression

+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel



^ permalink raw reply	[flat|nested] 33+ messages in thread

* ✓ Xe.CI.BAT: success for Fence deadlines in Xe (rev2)
  2026-01-05  4:02 [PATCH v2 00/22] Fence deadlines in Xe Matthew Brost
                   ` (24 preceding siblings ...)
  2026-01-05  4:26 ` ✗ CI.checksparse: warning " Patchwork
@ 2026-01-05  5:07 ` Patchwork
  2026-01-05  6:51 ` ✗ Xe.CI.Full: failure " Patchwork
  26 siblings, 0 replies; 33+ messages in thread
From: Patchwork @ 2026-01-05  5:07 UTC (permalink / raw)
  To: Matthew Brost; +Cc: intel-xe

[-- Attachment #1: Type: text/plain, Size: 5141 bytes --]

== Series Details ==

Series: Fence deadlines in Xe (rev2)
URL   : https://patchwork.freedesktop.org/series/159479/
State : success

== Summary ==

CI Bug Log - changes from xe-4326-5fc5192372599f11da8dee072fd8beb4414f8eca_BAT -> xe-pw-159479v2_BAT
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  

Participating hosts (12 -> 12)
------------------------------

  No changes in participating hosts

Known issues
------------

  Here are the changes found in xe-pw-159479v2_BAT that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@kms_addfb_basic@addfb25-y-tiled-small-legacy:
    - bat-dg2-oem2:       NOTRUN -> [SKIP][1] ([Intel XE#623])
   [1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/bat-dg2-oem2/igt@kms_addfb_basic@addfb25-y-tiled-small-legacy.html

  * igt@kms_dsc@dsc-basic:
    - bat-dg2-oem2:       NOTRUN -> [SKIP][2] ([Intel XE#455])
   [2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/bat-dg2-oem2/igt@kms_dsc@dsc-basic.html

  * igt@kms_psr@psr-cursor-plane-move:
    - bat-dg2-oem2:       NOTRUN -> [SKIP][3] ([Intel XE#1406] / [Intel XE#2850] / [Intel XE#929]) +2 other tests skip
   [3]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/bat-dg2-oem2/igt@kms_psr@psr-cursor-plane-move.html

  * igt@sriov_basic@enable-vfs-autoprobe-off:
    - bat-dg2-oem2:       NOTRUN -> [SKIP][4] ([Intel XE#1091] / [Intel XE#2849]) +1 other test skip
   [4]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/bat-dg2-oem2/igt@sriov_basic@enable-vfs-autoprobe-off.html

  * igt@xe_exec_fault_mode@twice-bindexecqueue-userptr:
    - bat-dg2-oem2:       NOTRUN -> [SKIP][5] ([Intel XE#288]) +32 other tests skip
   [5]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/bat-dg2-oem2/igt@xe_exec_fault_mode@twice-bindexecqueue-userptr.html

  * igt@xe_huc_copy@huc_copy:
    - bat-dg2-oem2:       NOTRUN -> [SKIP][6] ([Intel XE#255])
   [6]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/bat-dg2-oem2/igt@xe_huc_copy@huc_copy.html

  * igt@xe_live_ktest@xe_migrate@xe_validate_ccs_kunit:
    - bat-dg2-oem2:       NOTRUN -> [SKIP][7] ([Intel XE#2229])
   [7]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/bat-dg2-oem2/igt@xe_live_ktest@xe_migrate@xe_validate_ccs_kunit.html

  * igt@xe_pat@pat-index-xe2:
    - bat-dg2-oem2:       NOTRUN -> [SKIP][8] ([Intel XE#977])
   [8]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/bat-dg2-oem2/igt@xe_pat@pat-index-xe2.html

  * igt@xe_pat@pat-index-xehpc:
    - bat-dg2-oem2:       NOTRUN -> [SKIP][9] ([Intel XE#2838] / [Intel XE#979])
   [9]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/bat-dg2-oem2/igt@xe_pat@pat-index-xehpc.html

  * igt@xe_pat@pat-index-xelpg:
    - bat-dg2-oem2:       NOTRUN -> [SKIP][10] ([Intel XE#979])
   [10]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/bat-dg2-oem2/igt@xe_pat@pat-index-xelpg.html

  * igt@xe_sriov_flr@flr-vf1-clear:
    - bat-dg2-oem2:       NOTRUN -> [SKIP][11] ([Intel XE#3342])
   [11]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/bat-dg2-oem2/igt@xe_sriov_flr@flr-vf1-clear.html

  
#### Possible fixes ####

  * igt@xe_module_load@load:
    - bat-dg2-oem2:       [ABORT][12] ([Intel XE#6610]) -> [PASS][13]
   [12]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4326-5fc5192372599f11da8dee072fd8beb4414f8eca/bat-dg2-oem2/igt@xe_module_load@load.html
   [13]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/bat-dg2-oem2/igt@xe_module_load@load.html

  
  [Intel XE#1091]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1091
  [Intel XE#1406]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1406
  [Intel XE#2229]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2229
  [Intel XE#255]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/255
  [Intel XE#2838]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2838
  [Intel XE#2849]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2849
  [Intel XE#2850]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2850
  [Intel XE#288]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/288
  [Intel XE#3342]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3342
  [Intel XE#455]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/455
  [Intel XE#623]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/623
  [Intel XE#6610]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6610
  [Intel XE#929]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/929
  [Intel XE#977]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/977
  [Intel XE#979]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/979


Build changes
-------------

  * Linux: xe-4326-5fc5192372599f11da8dee072fd8beb4414f8eca -> xe-pw-159479v2

  IGT_8681: c49f35440873244aa86e778007ed2dcbe5bf0ecb @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  xe-4326-5fc5192372599f11da8dee072fd8beb4414f8eca: 5fc5192372599f11da8dee072fd8beb4414f8eca
  xe-pw-159479v2: 159479v2

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/index.html

[-- Attachment #2: Type: text/html, Size: 6022 bytes --]

^ permalink raw reply	[flat|nested] 33+ messages in thread

* ✗ Xe.CI.Full: failure for Fence deadlines in Xe (rev2)
  2026-01-05  4:02 [PATCH v2 00/22] Fence deadlines in Xe Matthew Brost
                   ` (25 preceding siblings ...)
  2026-01-05  5:07 ` ✓ Xe.CI.BAT: success " Patchwork
@ 2026-01-05  6:51 ` Patchwork
  26 siblings, 0 replies; 33+ messages in thread
From: Patchwork @ 2026-01-05  6:51 UTC (permalink / raw)
  To: Matthew Brost; +Cc: intel-xe

[-- Attachment #1: Type: text/plain, Size: 24721 bytes --]

== Series Details ==

Series: Fence deadlines in Xe (rev2)
URL   : https://patchwork.freedesktop.org/series/159479/
State : failure

== Summary ==

CI Bug Log - changes from xe-4326-5fc5192372599f11da8dee072fd8beb4414f8eca_FULL -> xe-pw-159479v2_FULL
====================================================

Summary
-------

  **FAILURE**

  Serious unknown changes coming with xe-pw-159479v2_FULL absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in xe-pw-159479v2_FULL, please notify your bug team (I915-ci-infra@lists.freedesktop.org) to allow them
  to document this new failure mode, which will reduce false positives in CI.

  

Participating hosts (2 -> 2)
------------------------------

  No changes in participating hosts

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in xe-pw-159479v2_FULL:

### IGT changes ###

#### Possible regressions ####

  * igt@xe_fault_injection@inject-fault-probe-function-xe_mmio_probe_early:
    - shard-bmg:          NOTRUN -> [ABORT][1]
   [1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-9/igt@xe_fault_injection@inject-fault-probe-function-xe_mmio_probe_early.html

  
Known issues
------------

  Here are the changes found in xe-pw-159479v2_FULL that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@kms_big_fb@x-tiled-32bpp-rotate-90:
    - shard-bmg:          NOTRUN -> [SKIP][2] ([Intel XE#2327]) +1 other test skip
   [2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-2/igt@kms_big_fb@x-tiled-32bpp-rotate-90.html

  * igt@kms_big_fb@y-tiled-16bpp-rotate-270:
    - shard-bmg:          NOTRUN -> [SKIP][3] ([Intel XE#1124]) +10 other tests skip
   [3]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-3/igt@kms_big_fb@y-tiled-16bpp-rotate-270.html

  * igt@kms_bw@connected-linear-tiling-3-displays-1920x1080p:
    - shard-bmg:          NOTRUN -> [SKIP][4] ([Intel XE#2314] / [Intel XE#2894]) +2 other tests skip
   [4]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-1/igt@kms_bw@connected-linear-tiling-3-displays-1920x1080p.html

  * igt@kms_bw@linear-tiling-3-displays-1920x1080p:
    - shard-bmg:          NOTRUN -> [SKIP][5] ([Intel XE#367]) +1 other test skip
   [5]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-3/igt@kms_bw@linear-tiling-3-displays-1920x1080p.html

  * igt@kms_ccs@crc-primary-suspend-y-tiled-ccs:
    - shard-bmg:          NOTRUN -> [SKIP][6] ([Intel XE#3432])
   [6]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-3/igt@kms_ccs@crc-primary-suspend-y-tiled-ccs.html

  * igt@kms_ccs@crc-sprite-planes-basic-4-tiled-lnl-ccs@pipe-d-hdmi-a-3:
    - shard-bmg:          NOTRUN -> [SKIP][7] ([Intel XE#2652] / [Intel XE#787]) +8 other tests skip
   [7]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-1/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-lnl-ccs@pipe-d-hdmi-a-3.html

  * igt@kms_ccs@crc-sprite-planes-basic-y-tiled-ccs:
    - shard-bmg:          NOTRUN -> [SKIP][8] ([Intel XE#2887]) +10 other tests skip
   [8]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-9/igt@kms_ccs@crc-sprite-planes-basic-y-tiled-ccs.html

  * igt@kms_chamelium_color@ctm-green-to-red:
    - shard-bmg:          NOTRUN -> [SKIP][9] ([Intel XE#2325])
   [9]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-1/igt@kms_chamelium_color@ctm-green-to-red.html

  * igt@kms_chamelium_hpd@dp-hpd-with-enabled-mode:
    - shard-bmg:          NOTRUN -> [SKIP][10] ([Intel XE#2252]) +9 other tests skip
   [10]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-9/igt@kms_chamelium_hpd@dp-hpd-with-enabled-mode.html

  * igt@kms_content_protection@legacy:
    - shard-bmg:          NOTRUN -> [FAIL][11] ([Intel XE#1178]) +3 other tests fail
   [11]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-3/igt@kms_content_protection@legacy.html

  * igt@kms_content_protection@lic-type-1:
    - shard-bmg:          NOTRUN -> [SKIP][12] ([Intel XE#2341])
   [12]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-9/igt@kms_content_protection@lic-type-1.html

  * igt@kms_cursor_crc@cursor-onscreen-512x170:
    - shard-bmg:          NOTRUN -> [SKIP][13] ([Intel XE#2321])
   [13]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-9/igt@kms_cursor_crc@cursor-onscreen-512x170.html

  * igt@kms_cursor_crc@cursor-sliding-256x85:
    - shard-bmg:          NOTRUN -> [SKIP][14] ([Intel XE#2320]) +1 other test skip
   [14]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-9/igt@kms_cursor_crc@cursor-sliding-256x85.html

  * igt@kms_cursor_legacy@short-busy-flip-before-cursor-toggle:
    - shard-bmg:          NOTRUN -> [SKIP][15] ([Intel XE#2286]) +1 other test skip
   [15]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-9/igt@kms_cursor_legacy@short-busy-flip-before-cursor-toggle.html

  * igt@kms_dsc@dsc-with-output-formats-with-bpc:
    - shard-bmg:          NOTRUN -> [SKIP][16] ([Intel XE#2244]) +1 other test skip
   [16]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-3/igt@kms_dsc@dsc-with-output-formats-with-bpc.html

  * igt@kms_fbcon_fbt@psr-suspend:
    - shard-bmg:          NOTRUN -> [SKIP][17] ([Intel XE#776])
   [17]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-3/igt@kms_fbcon_fbt@psr-suspend.html

  * igt@kms_feature_discovery@psr1:
    - shard-bmg:          NOTRUN -> [SKIP][18] ([Intel XE#2374])
   [18]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-9/igt@kms_feature_discovery@psr1.html

  * igt@kms_flip@flip-vs-expired-vblank@a-edp1:
    - shard-lnl:          [PASS][19] -> [FAIL][20] ([Intel XE#301])
   [19]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4326-5fc5192372599f11da8dee072fd8beb4414f8eca/shard-lnl-4/igt@kms_flip@flip-vs-expired-vblank@a-edp1.html
   [20]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-lnl-8/igt@kms_flip@flip-vs-expired-vblank@a-edp1.html

  * igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-64bpp-yftile-downscaling:
    - shard-bmg:          NOTRUN -> [SKIP][21] ([Intel XE#2293] / [Intel XE#2380])
   [21]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-9/igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-64bpp-yftile-downscaling.html

  * igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-64bpp-yftile-downscaling@pipe-a-valid-mode:
    - shard-bmg:          NOTRUN -> [SKIP][22] ([Intel XE#2293])
   [22]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-9/igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-64bpp-yftile-downscaling@pipe-a-valid-mode.html

  * igt@kms_flip_scaled_crc@flip-64bpp-4tile-to-32bpp-4tiledg2rcccs-downscaling:
    - shard-bmg:          NOTRUN -> [SKIP][23] ([Intel XE#2380]) +1 other test skip
   [23]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-9/igt@kms_flip_scaled_crc@flip-64bpp-4tile-to-32bpp-4tiledg2rcccs-downscaling.html

  * igt@kms_frontbuffer_tracking@drrs-rgb101010-draw-blt:
    - shard-bmg:          NOTRUN -> [SKIP][24] ([Intel XE#2311]) +28 other tests skip
   [24]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-9/igt@kms_frontbuffer_tracking@drrs-rgb101010-draw-blt.html

  * igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-onoff:
    - shard-bmg:          NOTRUN -> [SKIP][25] ([Intel XE#4141]) +10 other tests skip
   [25]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-9/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-onoff.html

  * igt@kms_frontbuffer_tracking@fbcdrrs-tiling-y:
    - shard-bmg:          NOTRUN -> [SKIP][26] ([Intel XE#2352])
   [26]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-3/igt@kms_frontbuffer_tracking@fbcdrrs-tiling-y.html

  * igt@kms_frontbuffer_tracking@psr-2p-primscrn-indfb-plflip-blt:
    - shard-bmg:          NOTRUN -> [SKIP][27] ([Intel XE#2313]) +27 other tests skip
   [27]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-1/igt@kms_frontbuffer_tracking@psr-2p-primscrn-indfb-plflip-blt.html

  * igt@kms_hdr@bpc-switch-suspend:
    - shard-bmg:          NOTRUN -> [ABORT][28] ([Intel XE#6740]) +1 other test abort
   [28]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-3/igt@kms_hdr@bpc-switch-suspend.html

  * igt@kms_plane_lowres@tiling-yf:
    - shard-bmg:          NOTRUN -> [SKIP][29] ([Intel XE#2393])
   [29]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-2/igt@kms_plane_lowres@tiling-yf.html

  * igt@kms_pm_backlight@fade:
    - shard-bmg:          NOTRUN -> [SKIP][30] ([Intel XE#870]) +1 other test skip
   [30]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-3/igt@kms_pm_backlight@fade.html

  * igt@kms_pm_lpsp@kms-lpsp:
    - shard-bmg:          NOTRUN -> [SKIP][31] ([Intel XE#2499])
   [31]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-9/igt@kms_pm_lpsp@kms-lpsp.html

  * igt@kms_pm_rpm@modeset-lpsp-stress-no-wait:
    - shard-bmg:          NOTRUN -> [SKIP][32] ([Intel XE#1439] / [Intel XE#3141] / [Intel XE#836])
   [32]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-1/igt@kms_pm_rpm@modeset-lpsp-stress-no-wait.html

  * igt@kms_psr2_sf@pr-primary-plane-update-sf-dmg-area:
    - shard-bmg:          NOTRUN -> [SKIP][33] ([Intel XE#1406] / [Intel XE#1489]) +6 other tests skip
   [33]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-1/igt@kms_psr2_sf@pr-primary-plane-update-sf-dmg-area.html

  * igt@kms_psr@psr2-primary-page-flip:
    - shard-bmg:          NOTRUN -> [SKIP][34] ([Intel XE#1406] / [Intel XE#2234] / [Intel XE#2850]) +8 other tests skip
   [34]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-1/igt@kms_psr@psr2-primary-page-flip.html

  * igt@kms_psr@psr2-primary-render:
    - shard-bmg:          NOTRUN -> [SKIP][35] ([Intel XE#1406] / [Intel XE#2234])
   [35]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-3/igt@kms_psr@psr2-primary-render.html

  * igt@kms_rotation_crc@primary-y-tiled-reflect-x-0:
    - shard-bmg:          NOTRUN -> [SKIP][36] ([Intel XE#2330])
   [36]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-3/igt@kms_rotation_crc@primary-y-tiled-reflect-x-0.html

  * igt@kms_rotation_crc@primary-yf-tiled-reflect-x-90:
    - shard-bmg:          NOTRUN -> [SKIP][37] ([Intel XE#3414] / [Intel XE#3904])
   [37]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-3/igt@kms_rotation_crc@primary-yf-tiled-reflect-x-90.html

  * igt@kms_scaling_modes@scaling-mode-full-aspect:
    - shard-bmg:          NOTRUN -> [SKIP][38] ([Intel XE#2413])
   [38]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-3/igt@kms_scaling_modes@scaling-mode-full-aspect.html

  * igt@kms_sharpness_filter@invalid-filter-with-plane:
    - shard-bmg:          NOTRUN -> [SKIP][39] ([Intel XE#6503]) +1 other test skip
   [39]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-1/igt@kms_sharpness_filter@invalid-filter-with-plane.html

  * igt@kms_tiled_display@basic-test-pattern:
    - shard-bmg:          NOTRUN -> [SKIP][40] ([Intel XE#2426])
   [40]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-3/igt@kms_tiled_display@basic-test-pattern.html

  * igt@testdisplay:
    - shard-bmg:          [PASS][41] -> [ABORT][42] ([Intel XE#6740])
   [41]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4326-5fc5192372599f11da8dee072fd8beb4414f8eca/shard-bmg-10/igt@testdisplay.html
   [42]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-1/igt@testdisplay.html

  * igt@xe_eudebug@basic-vms:
    - shard-bmg:          NOTRUN -> [SKIP][43] ([Intel XE#4837]) +6 other tests skip
   [43]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-3/igt@xe_eudebug@basic-vms.html

  * igt@xe_eudebug_online@set-breakpoint-faultable:
    - shard-bmg:          NOTRUN -> [SKIP][44] ([Intel XE#4837] / [Intel XE#6665]) +3 other tests skip
   [44]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-2/igt@xe_eudebug_online@set-breakpoint-faultable.html

  * igt@xe_exec_basic@multigpu-many-execqueues-many-vm-rebind:
    - shard-bmg:          NOTRUN -> [SKIP][45] ([Intel XE#2322]) +8 other tests skip
   [45]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-2/igt@xe_exec_basic@multigpu-many-execqueues-many-vm-rebind.html

  * igt@xe_exec_multi_queue@two-queues-preempt-mode-fault-dyn-priority-smem:
    - shard-bmg:          NOTRUN -> [SKIP][46] ([Intel XE#6874]) +31 other tests skip
   [46]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-1/igt@xe_exec_multi_queue@two-queues-preempt-mode-fault-dyn-priority-smem.html

  * igt@xe_exec_system_allocator@many-64k-mmap-new-huge-nomemset:
    - shard-bmg:          NOTRUN -> [SKIP][47] ([Intel XE#5007]) +1 other test skip
   [47]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-3/igt@xe_exec_system_allocator@many-64k-mmap-new-huge-nomemset.html

  * igt@xe_exec_system_allocator@process-many-large-execqueues-mmap-new-huge:
    - shard-bmg:          NOTRUN -> [SKIP][48] ([Intel XE#4943]) +22 other tests skip
   [48]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-9/igt@xe_exec_system_allocator@process-many-large-execqueues-mmap-new-huge.html

  * igt@xe_fault_injection@probe-fail-guc-xe_guc_ct_send_recv:
    - shard-bmg:          NOTRUN -> [ABORT][49] ([Intel XE#5466])
   [49]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-1/igt@xe_fault_injection@probe-fail-guc-xe_guc_ct_send_recv.html

  * igt@xe_pat@pat-index-xelpg:
    - shard-bmg:          NOTRUN -> [SKIP][50] ([Intel XE#2236])
   [50]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-3/igt@xe_pat@pat-index-xelpg.html

  * igt@xe_pm@d3cold-mmap-vram:
    - shard-bmg:          NOTRUN -> [SKIP][51] ([Intel XE#2284])
   [51]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-9/igt@xe_pm@d3cold-mmap-vram.html

  * igt@xe_pmu@engine-activity-accuracy-90@engine-drm_xe_engine_class_video_decode0:
    - shard-lnl:          [PASS][52] -> [FAIL][53] ([Intel XE#6251]) +1 other test fail
   [52]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4326-5fc5192372599f11da8dee072fd8beb4414f8eca/shard-lnl-4/igt@xe_pmu@engine-activity-accuracy-90@engine-drm_xe_engine_class_video_decode0.html
   [53]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-lnl-8/igt@xe_pmu@engine-activity-accuracy-90@engine-drm_xe_engine_class_video_decode0.html

  * igt@xe_pxp@pxp-stale-queue-post-suspend:
    - shard-bmg:          NOTRUN -> [SKIP][54] ([Intel XE#4733]) +2 other tests skip
   [54]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-9/igt@xe_pxp@pxp-stale-queue-post-suspend.html

  * igt@xe_query@multigpu-query-invalid-cs-cycles:
    - shard-bmg:          NOTRUN -> [SKIP][55] ([Intel XE#944]) +3 other tests skip
   [55]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-3/igt@xe_query@multigpu-query-invalid-cs-cycles.html

  * igt@xe_sriov_vram@vf-access-after-resize-up:
    - shard-bmg:          NOTRUN -> [FAIL][56] ([Intel XE#5937])
   [56]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-9/igt@xe_sriov_vram@vf-access-after-resize-up.html

  
#### Possible fixes ####

  * igt@kms_cursor_legacy@flip-vs-cursor-legacy:
    - shard-bmg:          [FAIL][57] ([Intel XE#5299]) -> [PASS][58]
   [57]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4326-5fc5192372599f11da8dee072fd8beb4414f8eca/shard-bmg-7/igt@kms_cursor_legacy@flip-vs-cursor-legacy.html
   [58]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-8/igt@kms_cursor_legacy@flip-vs-cursor-legacy.html

  * igt@kms_flip@flip-vs-expired-vblank@c-edp1:
    - shard-lnl:          [FAIL][59] ([Intel XE#301] / [Intel XE#3149]) -> [PASS][60]
   [59]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4326-5fc5192372599f11da8dee072fd8beb4414f8eca/shard-lnl-4/igt@kms_flip@flip-vs-expired-vblank@c-edp1.html
   [60]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-lnl-8/igt@kms_flip@flip-vs-expired-vblank@c-edp1.html

  * igt@kms_vrr@cmrr@pipe-a-edp-1:
    - shard-lnl:          [FAIL][61] ([Intel XE#4459]) -> [PASS][62] +1 other test pass
   [61]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4326-5fc5192372599f11da8dee072fd8beb4414f8eca/shard-lnl-2/igt@kms_vrr@cmrr@pipe-a-edp-1.html
   [62]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-lnl-4/igt@kms_vrr@cmrr@pipe-a-edp-1.html

  * igt@xe_pmu@engine-activity-accuracy-90@engine-drm_xe_engine_class_copy0:
    - shard-lnl:          [FAIL][63] ([Intel XE#6251]) -> [PASS][64]
   [63]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4326-5fc5192372599f11da8dee072fd8beb4414f8eca/shard-lnl-4/igt@xe_pmu@engine-activity-accuracy-90@engine-drm_xe_engine_class_copy0.html
   [64]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-lnl-8/igt@xe_pmu@engine-activity-accuracy-90@engine-drm_xe_engine_class_copy0.html

  
#### Warnings ####

  * igt@kms_flip@flip-vs-expired-vblank:
    - shard-lnl:          [FAIL][65] ([Intel XE#301] / [Intel XE#3149]) -> [FAIL][66] ([Intel XE#301])
   [65]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4326-5fc5192372599f11da8dee072fd8beb4414f8eca/shard-lnl-4/igt@kms_flip@flip-vs-expired-vblank.html
   [66]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-lnl-8/igt@kms_flip@flip-vs-expired-vblank.html

  * igt@kms_hdr@brightness-with-hdr:
    - shard-bmg:          [SKIP][67] ([Intel XE#3544]) -> [SKIP][68] ([Intel XE#3374] / [Intel XE#3544])
   [67]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4326-5fc5192372599f11da8dee072fd8beb4414f8eca/shard-bmg-9/igt@kms_hdr@brightness-with-hdr.html
   [68]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-8/igt@kms_hdr@brightness-with-hdr.html

  * igt@kms_hdr@invalid-hdr:
    - shard-bmg:          [ABORT][69] ([Intel XE#6740]) -> [SKIP][70] ([Intel XE#1503])
   [69]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4326-5fc5192372599f11da8dee072fd8beb4414f8eca/shard-bmg-7/igt@kms_hdr@invalid-hdr.html
   [70]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-2/igt@kms_hdr@invalid-hdr.html

  * igt@kms_tiled_display@basic-test-pattern-with-chamelium:
    - shard-bmg:          [SKIP][71] ([Intel XE#2426]) -> [SKIP][72] ([Intel XE#2509])
   [71]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4326-5fc5192372599f11da8dee072fd8beb4414f8eca/shard-bmg-1/igt@kms_tiled_display@basic-test-pattern-with-chamelium.html
   [72]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-10/igt@kms_tiled_display@basic-test-pattern-with-chamelium.html

  * igt@xe_peer2peer@read:
    - shard-bmg:          [SKIP][73] ([Intel XE#2427]) -> [SKIP][74] ([Intel XE#2427] / [Intel XE#6953])
   [73]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4326-5fc5192372599f11da8dee072fd8beb4414f8eca/shard-bmg-3/igt@xe_peer2peer@read.html
   [74]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/shard-bmg-7/igt@xe_peer2peer@read.html

  
  [Intel XE#1124]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1124
  [Intel XE#1178]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1178
  [Intel XE#1406]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1406
  [Intel XE#1439]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1439
  [Intel XE#1489]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1489
  [Intel XE#1503]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1503
  [Intel XE#2234]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2234
  [Intel XE#2236]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2236
  [Intel XE#2244]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2244
  [Intel XE#2252]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2252
  [Intel XE#2284]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2284
  [Intel XE#2286]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2286
  [Intel XE#2293]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2293
  [Intel XE#2311]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2311
  [Intel XE#2313]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2313
  [Intel XE#2314]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2314
  [Intel XE#2320]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2320
  [Intel XE#2321]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2321
  [Intel XE#2322]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2322
  [Intel XE#2325]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2325
  [Intel XE#2327]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2327
  [Intel XE#2330]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2330
  [Intel XE#2341]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2341
  [Intel XE#2352]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2352
  [Intel XE#2374]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2374
  [Intel XE#2380]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2380
  [Intel XE#2393]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2393
  [Intel XE#2413]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2413
  [Intel XE#2426]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2426
  [Intel XE#2427]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2427
  [Intel XE#2499]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2499
  [Intel XE#2509]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2509
  [Intel XE#2652]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2652
  [Intel XE#2850]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2850
  [Intel XE#2887]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2887
  [Intel XE#2894]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2894
  [Intel XE#301]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/301
  [Intel XE#3141]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3141
  [Intel XE#3149]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3149
  [Intel XE#3374]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3374
  [Intel XE#3414]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3414
  [Intel XE#3432]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3432
  [Intel XE#3544]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3544
  [Intel XE#367]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/367
  [Intel XE#3904]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3904
  [Intel XE#4141]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4141
  [Intel XE#4459]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4459
  [Intel XE#4733]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4733
  [Intel XE#4837]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4837
  [Intel XE#4943]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4943
  [Intel XE#5007]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5007
  [Intel XE#5299]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5299
  [Intel XE#5466]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5466
  [Intel XE#5937]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5937
  [Intel XE#6251]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6251
  [Intel XE#6503]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6503
  [Intel XE#6665]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6665
  [Intel XE#6740]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6740
  [Intel XE#6874]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6874
  [Intel XE#6953]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6953
  [Intel XE#776]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/776
  [Intel XE#787]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/787
  [Intel XE#836]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/836
  [Intel XE#870]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/870
  [Intel XE#944]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/944


Build changes
-------------

  * Linux: xe-4326-5fc5192372599f11da8dee072fd8beb4414f8eca -> xe-pw-159479v2

  IGT_8681: c49f35440873244aa86e778007ed2dcbe5bf0ecb @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  xe-4326-5fc5192372599f11da8dee072fd8beb4414f8eca: 5fc5192372599f11da8dee072fd8beb4414f8eca
  xe-pw-159479v2: 159479v2

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159479v2/index.html

[-- Attachment #2: Type: text/html, Size: 27748 bytes --]

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v2 12/22] drm/xe: Implement GuC submission backend ops for deadlines
  2026-01-05  4:02 ` [PATCH v2 12/22] drm/xe: Implement GuC submission backend ops for deadlines Matthew Brost
@ 2026-01-10 10:48   ` kernel test robot
  0 siblings, 0 replies; 33+ messages in thread
From: kernel test robot @ 2026-01-10 10:48 UTC (permalink / raw)
  To: Matthew Brost, intel-xe
  Cc: llvm, oe-kbuild-all, daniele.ceraolospurio, carlos.santa

Hi Matthew,

kernel test robot noticed the following build warnings:

[auto build test WARNING on next-20251219]
[cannot apply to drm-xe/drm-xe-next drm-misc/drm-misc-next drm-i915/for-linux-next drm-i915/for-linux-next-fixes v6.19-rc4 v6.19-rc3 v6.19-rc2 linus/master v6.19-rc4]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Matthew-Brost/drm-xe-Add-dedicated-message-lock/20260105-160721
base:   next-20251219
patch link:    https://lore.kernel.org/r/20260105040237.1307873-13-matthew.brost%40intel.com
patch subject: [PATCH v2 12/22] drm/xe: Implement GuC submission backend ops for deadlines
config: s390-allmodconfig (https://download.01.org/0day-ci/archive/20260110/202601101818.dyOCWJK0-lkp@intel.com/config)
compiler: clang version 18.1.8 (https://github.com/llvm/llvm-project 3b5b5c1ec4a3095ab096dd780e84d7ab81f3d7ff)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260110/202601101818.dyOCWJK0-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202601101818.dyOCWJK0-lkp@intel.com/

All warnings (new ones prefixed by >>):

>> drivers/gpu/drm/xe/xe_guc_submit.c:2477:2: warning: variable 'opcode' is used uninitialized whenever switch default is taken [-Wsometimes-uninitialized]
    2477 |         default:
         |         ^~~~~~~
   drivers/gpu/drm/xe/xe_guc_submit.c:2482:43: note: uninitialized use occurs here
    2482 |                 if (!guc_exec_queue_try_add_msg(q, msg, opcode)) {
         |                                                         ^~~~~~
   drivers/gpu/drm/xe/xe_guc_submit.c:2462:21: note: initialize the variable 'opcode' to silence this warning
    2462 |         unsigned int opcode;
         |                            ^
         |                             = 0
   1 warning generated.


vim +/opcode +2477 drivers/gpu/drm/xe/xe_guc_submit.c

  2454	
  2455	static void guc_exec_queue_set_deadline_state(struct xe_exec_queue *q,
  2456						      enum xe_deadline_mgr_state state)
  2457	{
  2458		struct xe_gpu_scheduler *sched = &q->guc->sched;
  2459		struct xe_sched_msg *msg = q->guc->static_msgs +
  2460			STATIC_MSG_SET_DEADLINE_STATE;
  2461		struct xe_guc *guc = exec_queue_to_guc(q);
  2462		unsigned int opcode;
  2463	
  2464		xe_gt_assert(guc_to_gt(guc), state !=
  2465			     XE_DEADLINE_MGR_STATE_UNSUPPORTED);
  2466	
  2467		switch (state) {
  2468		case XE_DEADLINE_MGR_STATE_NO_BOOST:
  2469			opcode = EXIT_DEADLINE;
  2470			break;
  2471		case XE_DEADLINE_MGR_STATE_FREQ_BOOST:
  2472			opcode = ENTER_DEADLINE_FREQ;
  2473			break;
  2474		case XE_DEADLINE_MGR_STATE_PRIO_BOOST:
  2475			opcode = ENTER_DEADLINE_PRIO;
  2476			break;
> 2477		default:
  2478			drm_warn(&guc_to_xe(guc)->drm, "NOT POSSIBLE");
  2479		}
  2480	
  2481		xe_sched_msg_scoped_guard(sched) {
  2482			if (!guc_exec_queue_try_add_msg(q, msg, opcode)) {
  2483				bool added;
  2484	
  2485				/*
  2486				 * A deadline state change has yet to be processed,
  2487				 * removed it.
  2488				 */
  2489				list_del_init(&msg->link);
  2490	
  2491				added = guc_exec_queue_try_add_msg(q, msg, opcode);
  2492				xe_gt_assert(guc_to_gt(guc), added);
  2493			}
  2494		}
  2495	}
  2496	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v2 02/22] drm/xe: Add EXEC_QUEUE_FLAG_CAP_SYS_NICE
  2026-01-05  4:02 ` [PATCH v2 02/22] drm/xe: Add EXEC_QUEUE_FLAG_CAP_SYS_NICE Matthew Brost
@ 2026-02-05 16:00   ` Rodrigo Vivi
  0 siblings, 0 replies; 33+ messages in thread
From: Rodrigo Vivi @ 2026-02-05 16:00 UTC (permalink / raw)
  To: Matthew Brost; +Cc: intel-xe, daniele.ceraolospurio, carlos.santa

On Sun, Jan 04, 2026 at 08:02:17PM -0800, Matthew Brost wrote:
> Store whether CAP_SYS_NICE is set on the user process that creates an
> exec queue. This will indicate if the exec queue is eligible for higher
> priority levels under deadline pressure.
> 
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> ---
>  drivers/gpu/drm/xe/xe_exec_queue.c       | 3 +++
>  drivers/gpu/drm/xe/xe_exec_queue_types.h | 2 ++
>  2 files changed, 5 insertions(+)
> 
> diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
> index 0b9e074b022f..a9b981591773 100644
> --- a/drivers/gpu/drm/xe/xe_exec_queue.c
> +++ b/drivers/gpu/drm/xe/xe_exec_queue.c
> @@ -1158,6 +1158,9 @@ int xe_exec_queue_create_ioctl(struct drm_device *dev, void *data,
>  	if (args->flags & DRM_XE_EXEC_QUEUE_LOW_LATENCY_HINT)
>  		flags |= EXEC_QUEUE_FLAG_LOW_LATENCY;
>  
> +	if (capable(CAP_SYS_NICE))
> +		flags |= EXEC_QUEUE_FLAG_CAP_SYS_NICE;
> +
>  	if (eci[0].engine_class == DRM_XE_ENGINE_CLASS_VM_BIND) {
>  		if (XE_IOCTL_DBG(xe, args->width != 1) ||
>  		    XE_IOCTL_DBG(xe, args->num_placements != 1) ||
> diff --git a/drivers/gpu/drm/xe/xe_exec_queue_types.h b/drivers/gpu/drm/xe/xe_exec_queue_types.h
> index 67ea5eebf70b..cd7a6571f5c6 100644
> --- a/drivers/gpu/drm/xe/xe_exec_queue_types.h
> +++ b/drivers/gpu/drm/xe/xe_exec_queue_types.h
> @@ -128,6 +128,8 @@ struct xe_exec_queue {
>  #define EXEC_QUEUE_FLAG_LOW_LATENCY		BIT(5)
>  /* for migration (kernel copy, clear, bind) jobs */
>  #define EXEC_QUEUE_FLAG_MIGRATE			BIT(6)
> +/* for user queues, created in CAP_SYS_NICE context */
> +#define EXEC_QUEUE_FLAG_CAP_SYS_NICE		BIT(7)

So, let's then reconcile all the already existing CAP_SYS_NICE checks into this
new flag?

>  
>  	/**
>  	 * @flags: flags for this exec queue, should statically setup aside from ban
> -- 
> 2.34.1
> 

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v2 03/22] drm/xe: Store exec queue in hardware fence
  2026-01-05  4:02 ` [PATCH v2 03/22] drm/xe: Store exec queue in hardware fence Matthew Brost
@ 2026-02-05 16:02   ` Rodrigo Vivi
  0 siblings, 0 replies; 33+ messages in thread
From: Rodrigo Vivi @ 2026-02-05 16:02 UTC (permalink / raw)
  To: Matthew Brost; +Cc: intel-xe, daniele.ceraolospurio, carlos.santa

On Sun, Jan 04, 2026 at 08:02:18PM -0800, Matthew Brost wrote:
> Enable hardware fences to set deadlines for exec queues.

probably worth expanding this message to explicitly say that
this is to be used by follow-up work that introduces
the deadlines...

the patch itself looks good

Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>

> 
> v2:
>  - Fix kernel doc (CI)
> 
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> ---
>  drivers/gpu/drm/xe/xe_hw_fence.c       | 4 +++-
>  drivers/gpu/drm/xe/xe_hw_fence.h       | 2 +-
>  drivers/gpu/drm/xe/xe_hw_fence_types.h | 6 ++++++
>  drivers/gpu/drm/xe/xe_lrc.c            | 6 ++++--
>  drivers/gpu/drm/xe/xe_lrc.h            | 3 ++-
>  drivers/gpu/drm/xe/xe_sched_job.c      | 2 +-
>  6 files changed, 17 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_hw_fence.c b/drivers/gpu/drm/xe/xe_hw_fence.c
> index f6057456e460..5995bf095843 100644
> --- a/drivers/gpu/drm/xe/xe_hw_fence.c
> +++ b/drivers/gpu/drm/xe/xe_hw_fence.c
> @@ -242,6 +242,7 @@ void xe_hw_fence_free(struct dma_fence *fence)
>   * xe_hw_fence_init() - Initialize an hw fence.
>   * @fence: Pointer to the fence to initialize.
>   * @ctx: Pointer to the struct xe_hw_fence_ctx fence context.
> + * @q: Pointer to exec queue tied to the fence.
>   * @seqno_map: Pointer to the map into where the seqno is blitted.
>   *
>   * Initializes a pre-allocated hw fence.
> @@ -249,12 +250,13 @@ void xe_hw_fence_free(struct dma_fence *fence)
>   * dma-fence refcounting.
>   */
>  void xe_hw_fence_init(struct dma_fence *fence, struct xe_hw_fence_ctx *ctx,
> -		      struct iosys_map seqno_map)
> +		      struct xe_exec_queue *q, struct iosys_map seqno_map)
>  {
>  	struct  xe_hw_fence *hw_fence =
>  		container_of(fence, typeof(*hw_fence), dma);
>  
>  	hw_fence->xe = gt_to_xe(ctx->gt);
> +	hw_fence->q = q;
>  	snprintf(hw_fence->name, sizeof(hw_fence->name), "%s", ctx->name);
>  	hw_fence->seqno_map = seqno_map;
>  	INIT_LIST_HEAD(&hw_fence->irq_link);
> diff --git a/drivers/gpu/drm/xe/xe_hw_fence.h b/drivers/gpu/drm/xe/xe_hw_fence.h
> index f13a1c4982c7..7a8678c881d8 100644
> --- a/drivers/gpu/drm/xe/xe_hw_fence.h
> +++ b/drivers/gpu/drm/xe/xe_hw_fence.h
> @@ -29,5 +29,5 @@ struct dma_fence *xe_hw_fence_alloc(void);
>  void xe_hw_fence_free(struct dma_fence *fence);
>  
>  void xe_hw_fence_init(struct dma_fence *fence, struct xe_hw_fence_ctx *ctx,
> -		      struct iosys_map seqno_map);
> +		      struct xe_exec_queue *q, struct iosys_map seqno_map);
>  #endif
> diff --git a/drivers/gpu/drm/xe/xe_hw_fence_types.h b/drivers/gpu/drm/xe/xe_hw_fence_types.h
> index 58a8d09afe5c..052bbab1fad6 100644
> --- a/drivers/gpu/drm/xe/xe_hw_fence_types.h
> +++ b/drivers/gpu/drm/xe/xe_hw_fence_types.h
> @@ -13,6 +13,7 @@
>  #include <linux/spinlock.h>
>  
>  struct xe_device;
> +struct xe_exec_queue;
>  struct xe_gt;
>  
>  /**
> @@ -64,6 +65,11 @@ struct xe_hw_fence {
>  	struct dma_fence dma;
>  	/** @xe: Xe device for hw fence driver name */
>  	struct xe_device *xe;
> +	/**
> +	 * @q: Exec queue which fence is tied to, not ref counted, lookup
> +	 * protected by fence lock.
> +	 */
> +	struct xe_exec_queue *q;
>  	/** @name: name of hardware fence context */
>  	char name[MAX_FENCE_NAME_LEN];
>  	/** @seqno_map: I/O map for seqno */
> diff --git a/drivers/gpu/drm/xe/xe_lrc.c b/drivers/gpu/drm/xe/xe_lrc.c
> index 70eae7d03a27..eccc7f2642bf 100644
> --- a/drivers/gpu/drm/xe/xe_lrc.c
> +++ b/drivers/gpu/drm/xe/xe_lrc.c
> @@ -1783,15 +1783,17 @@ void xe_lrc_free_seqno_fence(struct dma_fence *fence)
>  /**
>   * xe_lrc_init_seqno_fence() - Initialize an lrc seqno fence.
>   * @lrc: Pointer to the lrc.
> + * @q: Pointer to exec queue.
>   * @fence: Pointer to the fence to initialize.
>   *
>   * Initializes a pre-allocated lrc seqno fence.
>   * After initialization, the fence is subject to normal
>   * dma-fence refcounting.
>   */
> -void xe_lrc_init_seqno_fence(struct xe_lrc *lrc, struct dma_fence *fence)
> +void xe_lrc_init_seqno_fence(struct xe_lrc *lrc, struct xe_exec_queue *q,
> +			     struct dma_fence *fence)
>  {
> -	xe_hw_fence_init(fence, &lrc->fence_ctx, __xe_lrc_seqno_map(lrc));
> +	xe_hw_fence_init(fence, &lrc->fence_ctx, q, __xe_lrc_seqno_map(lrc));
>  }
>  
>  s32 xe_lrc_seqno(struct xe_lrc *lrc)
> diff --git a/drivers/gpu/drm/xe/xe_lrc.h b/drivers/gpu/drm/xe/xe_lrc.h
> index 8acf85273c1a..3d72b4c0da8e 100644
> --- a/drivers/gpu/drm/xe/xe_lrc.h
> +++ b/drivers/gpu/drm/xe/xe_lrc.h
> @@ -118,7 +118,8 @@ u64 xe_lrc_descriptor(struct xe_lrc *lrc);
>  u32 xe_lrc_seqno_ggtt_addr(struct xe_lrc *lrc);
>  struct dma_fence *xe_lrc_alloc_seqno_fence(void);
>  void xe_lrc_free_seqno_fence(struct dma_fence *fence);
> -void xe_lrc_init_seqno_fence(struct xe_lrc *lrc, struct dma_fence *fence);
> +void xe_lrc_init_seqno_fence(struct xe_lrc *lrc, struct xe_exec_queue *q,
> +			     struct dma_fence *fence);
>  s32 xe_lrc_seqno(struct xe_lrc *lrc);
>  
>  u32 xe_lrc_start_seqno_ggtt_addr(struct xe_lrc *lrc);
> diff --git a/drivers/gpu/drm/xe/xe_sched_job.c b/drivers/gpu/drm/xe/xe_sched_job.c
> index cb674a322113..6099b4445835 100644
> --- a/drivers/gpu/drm/xe/xe_sched_job.c
> +++ b/drivers/gpu/drm/xe/xe_sched_job.c
> @@ -270,7 +270,7 @@ void xe_sched_job_arm(struct xe_sched_job *job)
>  		struct dma_fence_chain *chain;
>  
>  		fence = job->ptrs[i].lrc_fence;
> -		xe_lrc_init_seqno_fence(q->lrc[i], fence);
> +		xe_lrc_init_seqno_fence(q->lrc[i], q, fence);
>  		job->ptrs[i].lrc_fence = NULL;
>  		if (!i) {
>  			job->lrc_seqno = fence->seqno;
> -- 
> 2.34.1
> 

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v2 04/22] drm/xe: Add deadline exec queue vfuncs
  2026-01-05  4:02 ` [PATCH v2 04/22] drm/xe: Add deadline exec queue vfuncs Matthew Brost
@ 2026-02-05 16:03   ` Rodrigo Vivi
  0 siblings, 0 replies; 33+ messages in thread
From: Rodrigo Vivi @ 2026-02-05 16:03 UTC (permalink / raw)
  To: Matthew Brost; +Cc: intel-xe, daniele.ceraolospurio, carlos.santa

On Sun, Jan 04, 2026 at 08:02:19PM -0800, Matthew Brost wrote:
> Add set_deadline and set_deadline_state exec queue vfuncs for deadline
> control.

same as previous patch... it looks like it deserves a better msg, but
the code is good

Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>

> 
> v2:
>  - Fix kernel doc
>  - Remove exit_deadline, rather use an enum for control
> 
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> ---
>  drivers/gpu/drm/xe/xe_exec_queue_types.h | 9 +++++++++
>  1 file changed, 9 insertions(+)
> 
> diff --git a/drivers/gpu/drm/xe/xe_exec_queue_types.h b/drivers/gpu/drm/xe/xe_exec_queue_types.h
> index cd7a6571f5c6..ac860f3f042e 100644
> --- a/drivers/gpu/drm/xe/xe_exec_queue_types.h
> +++ b/drivers/gpu/drm/xe/xe_exec_queue_types.h
> @@ -15,6 +15,7 @@
>  #include "xe_hw_fence_types.h"
>  #include "xe_lrc_types.h"
>  
> +enum xe_deadline_mgr_state;
>  struct drm_syncobj;
>  struct xe_execlist_exec_queue;
>  struct xe_gt;
> @@ -301,6 +302,14 @@ struct xe_exec_queue_ops {
>  	void (*resume)(struct xe_exec_queue *q);
>  	/** @reset_status: check exec queue reset status */
>  	bool (*reset_status)(struct xe_exec_queue *q);
> +	/**
> +	 * @set_deadline: Set deadline on a queue for a fence.
> +	 */
> +	void (*set_deadline)(struct xe_exec_queue *q, struct dma_fence *fence,
> +			     ktime_t deadline);
> +	/** @set_deadline_state: Set deadline state for a queue */
> +	void (*set_deadline_state)(struct xe_exec_queue *q,
> +				   enum xe_deadline_mgr_state state);
>  };
>  
>  #endif
> -- 
> 2.34.1
> 

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v2 14/22] drm/xe: Fix Kconfig.profile newlines
  2026-01-05  4:02 ` [PATCH v2 14/22] drm/xe: Fix Kconfig.profile newlines Matthew Brost
@ 2026-02-05 16:06   ` Rodrigo Vivi
  0 siblings, 0 replies; 33+ messages in thread
From: Rodrigo Vivi @ 2026-02-05 16:06 UTC (permalink / raw)
  To: Matthew Brost; +Cc: intel-xe, daniele.ceraolospurio, carlos.santa

On Sun, Jan 04, 2026 at 08:02:29PM -0800, Matthew Brost wrote:
> Add missing newlines between Kconfig options in Kconfig.profile to
> improve readability.
> 
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>

Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>

> ---
>  drivers/gpu/drm/xe/Kconfig.profile | 7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/drivers/gpu/drm/xe/Kconfig.profile b/drivers/gpu/drm/xe/Kconfig.profile
> index 7530df998148..594acedf7b56 100644
> --- a/drivers/gpu/drm/xe/Kconfig.profile
> +++ b/drivers/gpu/drm/xe/Kconfig.profile
> @@ -5,24 +5,28 @@ config DRM_XE_JOB_TIMEOUT_MAX
>  	help
>  	  Configures the default max job timeout after which job will
>  	  be forcefully taken away from scheduler.
> +
>  config DRM_XE_JOB_TIMEOUT_MIN
>  	int "Default min job timeout (ms)"
>  	default 1 # milliseconds
>  	help
>  	  Configures the default min job timeout after which job will
>  	  be forcefully taken away from scheduler.
> +
>  config DRM_XE_TIMESLICE_MAX
>  	int "Default max timeslice duration (us)"
>  	default 10000000 # microseconds
>  	help
>  	  Configures the default max timeslice duration between multiple
>  	  contexts by guc scheduling.
> +
>  config DRM_XE_TIMESLICE_MIN
>  	int "Default min timeslice duration (us)"
>  	default 1 # microseconds
>  	help
>  	  Configures the default min timeslice duration between multiple
>  	  contexts by guc scheduling.
> +
>  config DRM_XE_PREEMPT_TIMEOUT
>  	int "Preempt timeout (us, jiffy granularity)"
>  	default 640000 # microseconds
> @@ -31,6 +35,7 @@ config DRM_XE_PREEMPT_TIMEOUT
>  	  when submitting a new context. If the current context does not hit
>  	  an arbitration point and yield to HW before the timer expires, the
>  	  HW will be reset to allow the more important context to execute.
> +
>  config DRM_XE_PREEMPT_TIMEOUT_MAX
>  	int "Default max preempt timeout (us)"
>  	default 10000000 # microseconds
> @@ -38,6 +43,7 @@ config DRM_XE_PREEMPT_TIMEOUT_MAX
>  	  Configures the default max preempt timeout after which context
>  	  will be forcefully taken away and higher priority context will
>  	  run.
> +
>  config DRM_XE_PREEMPT_TIMEOUT_MIN
>  	int "Default min preempt timeout (us)"
>  	default 1 # microseconds
> @@ -45,6 +51,7 @@ config DRM_XE_PREEMPT_TIMEOUT_MIN
>  	  Configures the default min preempt timeout after which context
>  	  will be forcefully taken away and higher priority context will
>  	  run.
> +
>  config DRM_XE_ENABLE_SCHEDTIMEOUT_LIMIT
>  	bool "Default configuration of limitation on scheduler timeout"
>  	default y
> -- 
> 2.34.1
> 

^ permalink raw reply	[flat|nested] 33+ messages in thread

end of thread, other threads:[~2026-02-05 16:06 UTC | newest]

Thread overview: 33+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-01-05  4:02 [PATCH v2 00/22] Fence deadlines in Xe Matthew Brost
2026-01-05  4:02 ` [PATCH v2 01/22] drm/xe: Add dedicated message lock Matthew Brost
2026-01-05  4:02 ` [PATCH v2 02/22] drm/xe: Add EXEC_QUEUE_FLAG_CAP_SYS_NICE Matthew Brost
2026-02-05 16:00   ` Rodrigo Vivi
2026-01-05  4:02 ` [PATCH v2 03/22] drm/xe: Store exec queue in hardware fence Matthew Brost
2026-02-05 16:02   ` Rodrigo Vivi
2026-01-05  4:02 ` [PATCH v2 04/22] drm/xe: Add deadline exec queue vfuncs Matthew Brost
2026-02-05 16:03   ` Rodrigo Vivi
2026-01-05  4:02 ` [PATCH v2 05/22] drm/xe: Export to_xe_hw_fence Matthew Brost
2026-01-05  4:02 ` [PATCH v2 06/22] drm/xe: Export xe_hw_fence_signaled Matthew Brost
2026-01-05  4:02 ` [PATCH v2 07/22] drm/xe: Implement deadline manager Matthew Brost
2026-01-05  4:02 ` [PATCH v2 08/22] drm/xe: Initialize deadline manager on exec queues Matthew Brost
2026-01-05  4:02 ` [PATCH v2 09/22] drm/xe: Stub out execlists deadline vfuncs as NOPs Matthew Brost
2026-01-05  4:02 ` [PATCH v2 10/22] drm/xe: Make scheduler message lock IRQ-safe Matthew Brost
2026-01-05  4:02 ` [PATCH v2 11/22] drm/xe: Support unstable opcodes for static scheduler messages Matthew Brost
2026-01-05  4:02 ` [PATCH v2 12/22] drm/xe: Implement GuC submission backend ops for deadlines Matthew Brost
2026-01-10 10:48   ` kernel test robot
2026-01-05  4:02 ` [PATCH v2 13/22] drm/xe: Enable deadlines on hardware fences Matthew Brost
2026-01-05  4:02 ` [PATCH v2 14/22] drm/xe: Fix Kconfig.profile newlines Matthew Brost
2026-02-05 16:06   ` Rodrigo Vivi
2026-01-05  4:02 ` [PATCH v2 15/22] drm/xe: Add deadline Kconfig options Matthew Brost
2026-01-05  4:02 ` [PATCH v2 16/22] drm/xe: Add exec queue deadline trace points Matthew Brost
2026-01-05  4:02 ` [PATCH v2 17/22] drm/xe: Add hw fence " Matthew Brost
2026-01-05  4:02 ` [PATCH v2 18/22] drm/xe: Add timestamp_ms to LRC snapshot Matthew Brost
2026-01-05  4:02 ` [PATCH v2 19/22] drm/xe: Enforce GuC static message defines Matthew Brost
2026-01-05  4:02 ` [PATCH v2 20/22] drm/xe: Document the deadline manager Matthew Brost
2026-01-05  4:02 ` [PATCH v2 21/22] drm/atomic: Export fence deadline helper for atomic commits Matthew Brost
2026-01-05  4:02 ` [PATCH v2 22/22] drm/i915/display: Use atomic helper to set plane fence deadlines Matthew Brost
2026-01-05  4:09 ` ✗ CI.checkpatch: warning for Fence deadlines in Xe (rev2) Patchwork
2026-01-05  4:10 ` ✓ CI.KUnit: success " Patchwork
2026-01-05  4:26 ` ✗ CI.checksparse: warning " Patchwork
2026-01-05  5:07 ` ✓ Xe.CI.BAT: success " Patchwork
2026-01-05  6:51 ` ✗ Xe.CI.Full: failure " Patchwork

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox