intel-xe.lists.freedesktop.org archive mirror
 help / color / mirror / Atom feed
* [PATCH 1/9] drm/xe: Move explicit CT lock in TLB invalidation sequence
  2025-08-13 19:47 [PATCH 0/9] Add TLB invalidation abstraction stuartsummers
@ 2025-08-13 19:47 ` stuartsummers
  0 siblings, 0 replies; 23+ messages in thread
From: stuartsummers @ 2025-08-13 19:47 UTC (permalink / raw)
  Cc: intel-xe, matthew.brost, farah.kassabri, stuartsummers

Currently the CT lock is used to cover TLB invalidation
sequence number updates. In an effort to separate the GuC
back end tracking of communication with the firmware from
the front end TLB sequence number tracking, add a new lock
here to specifically track those sequence number updates
coming in from the user.

Apart from the CT lock, we also have a pending lock to
cover both pending fences and sequence numbers received
from the back end. Those cover interrupt cases and so
it makes not to overload those with sequence numbers
coming in from new transactions. In that way, we'll employ
a mutex here.

v2: Actually add the correct lock rather than just dropping
    it... (Matt)

Signed-off-by: stuartsummers <stuart.summers@intel.com>
---
 drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c | 19 +++++++++++++------
 drivers/gpu/drm/xe/xe_gt_types.h            |  2 ++
 2 files changed, 15 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
index 02f0bb92d6e0..75854b963d66 100644
--- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
+++ b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
@@ -118,6 +118,9 @@ static void xe_gt_tlb_fence_timeout(struct work_struct *work)
  */
 int xe_gt_tlb_invalidation_init_early(struct xe_gt *gt)
 {
+	struct xe_device *xe = gt_to_xe(gt);
+	int err;
+
 	gt->tlb_invalidation.seqno = 1;
 	INIT_LIST_HEAD(&gt->tlb_invalidation.pending_fences);
 	spin_lock_init(&gt->tlb_invalidation.pending_lock);
@@ -125,6 +128,10 @@ int xe_gt_tlb_invalidation_init_early(struct xe_gt *gt)
 	INIT_DELAYED_WORK(&gt->tlb_invalidation.fence_tdr,
 			  xe_gt_tlb_fence_timeout);
 
+	err = drmm_mutex_init(&xe->drm, &gt->tlb_invalidation.seqno_lock);
+	if (err)
+		return err;
+
 	gt->tlb_invalidation.job_wq =
 		drmm_alloc_ordered_workqueue(&gt_to_xe(gt)->drm, "gt-tbl-inval-job-wq",
 					     WQ_MEM_RECLAIM);
@@ -158,7 +165,7 @@ void xe_gt_tlb_invalidation_reset(struct xe_gt *gt)
 	 * appear.
 	 */
 
-	mutex_lock(&gt->uc.guc.ct.lock);
+	mutex_lock(&gt->tlb_invalidation.seqno_lock);
 	spin_lock_irq(&gt->tlb_invalidation.pending_lock);
 	cancel_delayed_work(&gt->tlb_invalidation.fence_tdr);
 	/*
@@ -178,7 +185,7 @@ void xe_gt_tlb_invalidation_reset(struct xe_gt *gt)
 				 &gt->tlb_invalidation.pending_fences, link)
 		invalidation_fence_signal(gt_to_xe(gt), fence);
 	spin_unlock_irq(&gt->tlb_invalidation.pending_lock);
-	mutex_unlock(&gt->uc.guc.ct.lock);
+	mutex_unlock(&gt->tlb_invalidation.seqno_lock);
 }
 
 static bool tlb_invalidation_seqno_past(struct xe_gt *gt, int seqno)
@@ -211,13 +218,13 @@ static int send_tlb_invalidation(struct xe_guc *guc,
 	 * need to be updated.
 	 */
 
-	mutex_lock(&guc->ct.lock);
+	mutex_lock(&gt->tlb_invalidation.seqno_lock);
 	seqno = gt->tlb_invalidation.seqno;
 	fence->seqno = seqno;
 	trace_xe_gt_tlb_invalidation_fence_send(xe, fence);
 	action[1] = seqno;
-	ret = xe_guc_ct_send_locked(&guc->ct, action, len,
-				    G2H_LEN_DW_TLB_INVALIDATE, 1);
+	ret = xe_guc_ct_send(&guc->ct, action, len,
+			     G2H_LEN_DW_TLB_INVALIDATE, 1);
 	if (!ret) {
 		spin_lock_irq(&gt->tlb_invalidation.pending_lock);
 		/*
@@ -248,7 +255,7 @@ static int send_tlb_invalidation(struct xe_guc *guc,
 		if (!gt->tlb_invalidation.seqno)
 			gt->tlb_invalidation.seqno = 1;
 	}
-	mutex_unlock(&guc->ct.lock);
+	mutex_unlock(&gt->tlb_invalidation.seqno_lock);
 	xe_gt_stats_incr(gt, XE_GT_STATS_ID_TLB_INVAL, 1);
 
 	return ret;
diff --git a/drivers/gpu/drm/xe/xe_gt_types.h b/drivers/gpu/drm/xe/xe_gt_types.h
index ef0f2eecfa29..4dbc40fa6639 100644
--- a/drivers/gpu/drm/xe/xe_gt_types.h
+++ b/drivers/gpu/drm/xe/xe_gt_types.h
@@ -190,6 +190,8 @@ struct xe_gt {
 		/** @tlb_invalidation.seqno: TLB invalidation seqno, protected by CT lock */
 #define TLB_INVALIDATION_SEQNO_MAX	0x100000
 		int seqno;
+		/** @tlb_invalidation.seqno_lock: protects @tlb_invalidation.seqno */
+		struct mutex seqno_lock;
 		/**
 		 * @tlb_invalidation.seqno_recv: last received TLB invalidation seqno,
 		 * protected by CT lock
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH 1/9] drm/xe: Move explicit CT lock in TLB invalidation sequence
  2025-08-20 22:45 [PATCH 0/9] Add TLB invalidation abstraction Stuart Summers
@ 2025-08-20 22:45 ` Stuart Summers
  0 siblings, 0 replies; 23+ messages in thread
From: Stuart Summers @ 2025-08-20 22:45 UTC (permalink / raw)
  Cc: intel-xe, matthew.brost, farah.kassabri, Stuart Summers

Currently the CT lock is used to cover TLB invalidation
sequence number updates. In an effort to separate the GuC
back end tracking of communication with the firmware from
the front end TLB sequence number tracking, add a new lock
here to specifically track those sequence number updates
coming in from the user.

Apart from the CT lock, we also have a pending lock to
cover both pending fences and sequence numbers received
from the back end. Those cover interrupt cases and so
it makes not to overload those with sequence numbers
coming in from new transactions. In that way, we'll employ
a mutex here.

v2: Actually add the correct lock rather than just dropping
    it... (Matt)

Signed-off-by: Stuart Summers <stuart.summers@intel.com>
---
 drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c | 19 +++++++++++++------
 drivers/gpu/drm/xe/xe_gt_types.h            |  2 ++
 2 files changed, 15 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
index 02f0bb92d6e0..75854b963d66 100644
--- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
+++ b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
@@ -118,6 +118,9 @@ static void xe_gt_tlb_fence_timeout(struct work_struct *work)
  */
 int xe_gt_tlb_invalidation_init_early(struct xe_gt *gt)
 {
+	struct xe_device *xe = gt_to_xe(gt);
+	int err;
+
 	gt->tlb_invalidation.seqno = 1;
 	INIT_LIST_HEAD(&gt->tlb_invalidation.pending_fences);
 	spin_lock_init(&gt->tlb_invalidation.pending_lock);
@@ -125,6 +128,10 @@ int xe_gt_tlb_invalidation_init_early(struct xe_gt *gt)
 	INIT_DELAYED_WORK(&gt->tlb_invalidation.fence_tdr,
 			  xe_gt_tlb_fence_timeout);
 
+	err = drmm_mutex_init(&xe->drm, &gt->tlb_invalidation.seqno_lock);
+	if (err)
+		return err;
+
 	gt->tlb_invalidation.job_wq =
 		drmm_alloc_ordered_workqueue(&gt_to_xe(gt)->drm, "gt-tbl-inval-job-wq",
 					     WQ_MEM_RECLAIM);
@@ -158,7 +165,7 @@ void xe_gt_tlb_invalidation_reset(struct xe_gt *gt)
 	 * appear.
 	 */
 
-	mutex_lock(&gt->uc.guc.ct.lock);
+	mutex_lock(&gt->tlb_invalidation.seqno_lock);
 	spin_lock_irq(&gt->tlb_invalidation.pending_lock);
 	cancel_delayed_work(&gt->tlb_invalidation.fence_tdr);
 	/*
@@ -178,7 +185,7 @@ void xe_gt_tlb_invalidation_reset(struct xe_gt *gt)
 				 &gt->tlb_invalidation.pending_fences, link)
 		invalidation_fence_signal(gt_to_xe(gt), fence);
 	spin_unlock_irq(&gt->tlb_invalidation.pending_lock);
-	mutex_unlock(&gt->uc.guc.ct.lock);
+	mutex_unlock(&gt->tlb_invalidation.seqno_lock);
 }
 
 static bool tlb_invalidation_seqno_past(struct xe_gt *gt, int seqno)
@@ -211,13 +218,13 @@ static int send_tlb_invalidation(struct xe_guc *guc,
 	 * need to be updated.
 	 */
 
-	mutex_lock(&guc->ct.lock);
+	mutex_lock(&gt->tlb_invalidation.seqno_lock);
 	seqno = gt->tlb_invalidation.seqno;
 	fence->seqno = seqno;
 	trace_xe_gt_tlb_invalidation_fence_send(xe, fence);
 	action[1] = seqno;
-	ret = xe_guc_ct_send_locked(&guc->ct, action, len,
-				    G2H_LEN_DW_TLB_INVALIDATE, 1);
+	ret = xe_guc_ct_send(&guc->ct, action, len,
+			     G2H_LEN_DW_TLB_INVALIDATE, 1);
 	if (!ret) {
 		spin_lock_irq(&gt->tlb_invalidation.pending_lock);
 		/*
@@ -248,7 +255,7 @@ static int send_tlb_invalidation(struct xe_guc *guc,
 		if (!gt->tlb_invalidation.seqno)
 			gt->tlb_invalidation.seqno = 1;
 	}
-	mutex_unlock(&guc->ct.lock);
+	mutex_unlock(&gt->tlb_invalidation.seqno_lock);
 	xe_gt_stats_incr(gt, XE_GT_STATS_ID_TLB_INVAL, 1);
 
 	return ret;
diff --git a/drivers/gpu/drm/xe/xe_gt_types.h b/drivers/gpu/drm/xe/xe_gt_types.h
index ef0f2eecfa29..4dbc40fa6639 100644
--- a/drivers/gpu/drm/xe/xe_gt_types.h
+++ b/drivers/gpu/drm/xe/xe_gt_types.h
@@ -190,6 +190,8 @@ struct xe_gt {
 		/** @tlb_invalidation.seqno: TLB invalidation seqno, protected by CT lock */
 #define TLB_INVALIDATION_SEQNO_MAX	0x100000
 		int seqno;
+		/** @tlb_invalidation.seqno_lock: protects @tlb_invalidation.seqno */
+		struct mutex seqno_lock;
 		/**
 		 * @tlb_invalidation.seqno_recv: last received TLB invalidation seqno,
 		 * protected by CT lock
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH 1/9] drm/xe: Move explicit CT lock in TLB invalidation sequence
  2025-08-20 23:30 [PATCH 0/9] Add TLB invalidation abstraction Stuart Summers
@ 2025-08-20 23:30 ` Stuart Summers
  2025-08-21 22:09   ` Matthew Brost
  0 siblings, 1 reply; 23+ messages in thread
From: Stuart Summers @ 2025-08-20 23:30 UTC (permalink / raw)
  Cc: intel-xe, matthew.brost, farah.kassabri, Stuart Summers

Currently the CT lock is used to cover TLB invalidation
sequence number updates. In an effort to separate the GuC
back end tracking of communication with the firmware from
the front end TLB sequence number tracking, add a new lock
here to specifically track those sequence number updates
coming in from the user.

Apart from the CT lock, we also have a pending lock to
cover both pending fences and sequence numbers received
from the back end. Those cover interrupt cases and so
it makes not to overload those with sequence numbers
coming in from new transactions. In that way, we'll employ
a mutex here.

v2: Actually add the correct lock rather than just dropping
    it... (Matt)

Signed-off-by: Stuart Summers <stuart.summers@intel.com>
---
 drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c | 19 +++++++++++++------
 drivers/gpu/drm/xe/xe_gt_types.h            |  2 ++
 2 files changed, 15 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
index 02f0bb92d6e0..75854b963d66 100644
--- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
+++ b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
@@ -118,6 +118,9 @@ static void xe_gt_tlb_fence_timeout(struct work_struct *work)
  */
 int xe_gt_tlb_invalidation_init_early(struct xe_gt *gt)
 {
+	struct xe_device *xe = gt_to_xe(gt);
+	int err;
+
 	gt->tlb_invalidation.seqno = 1;
 	INIT_LIST_HEAD(&gt->tlb_invalidation.pending_fences);
 	spin_lock_init(&gt->tlb_invalidation.pending_lock);
@@ -125,6 +128,10 @@ int xe_gt_tlb_invalidation_init_early(struct xe_gt *gt)
 	INIT_DELAYED_WORK(&gt->tlb_invalidation.fence_tdr,
 			  xe_gt_tlb_fence_timeout);
 
+	err = drmm_mutex_init(&xe->drm, &gt->tlb_invalidation.seqno_lock);
+	if (err)
+		return err;
+
 	gt->tlb_invalidation.job_wq =
 		drmm_alloc_ordered_workqueue(&gt_to_xe(gt)->drm, "gt-tbl-inval-job-wq",
 					     WQ_MEM_RECLAIM);
@@ -158,7 +165,7 @@ void xe_gt_tlb_invalidation_reset(struct xe_gt *gt)
 	 * appear.
 	 */
 
-	mutex_lock(&gt->uc.guc.ct.lock);
+	mutex_lock(&gt->tlb_invalidation.seqno_lock);
 	spin_lock_irq(&gt->tlb_invalidation.pending_lock);
 	cancel_delayed_work(&gt->tlb_invalidation.fence_tdr);
 	/*
@@ -178,7 +185,7 @@ void xe_gt_tlb_invalidation_reset(struct xe_gt *gt)
 				 &gt->tlb_invalidation.pending_fences, link)
 		invalidation_fence_signal(gt_to_xe(gt), fence);
 	spin_unlock_irq(&gt->tlb_invalidation.pending_lock);
-	mutex_unlock(&gt->uc.guc.ct.lock);
+	mutex_unlock(&gt->tlb_invalidation.seqno_lock);
 }
 
 static bool tlb_invalidation_seqno_past(struct xe_gt *gt, int seqno)
@@ -211,13 +218,13 @@ static int send_tlb_invalidation(struct xe_guc *guc,
 	 * need to be updated.
 	 */
 
-	mutex_lock(&guc->ct.lock);
+	mutex_lock(&gt->tlb_invalidation.seqno_lock);
 	seqno = gt->tlb_invalidation.seqno;
 	fence->seqno = seqno;
 	trace_xe_gt_tlb_invalidation_fence_send(xe, fence);
 	action[1] = seqno;
-	ret = xe_guc_ct_send_locked(&guc->ct, action, len,
-				    G2H_LEN_DW_TLB_INVALIDATE, 1);
+	ret = xe_guc_ct_send(&guc->ct, action, len,
+			     G2H_LEN_DW_TLB_INVALIDATE, 1);
 	if (!ret) {
 		spin_lock_irq(&gt->tlb_invalidation.pending_lock);
 		/*
@@ -248,7 +255,7 @@ static int send_tlb_invalidation(struct xe_guc *guc,
 		if (!gt->tlb_invalidation.seqno)
 			gt->tlb_invalidation.seqno = 1;
 	}
-	mutex_unlock(&guc->ct.lock);
+	mutex_unlock(&gt->tlb_invalidation.seqno_lock);
 	xe_gt_stats_incr(gt, XE_GT_STATS_ID_TLB_INVAL, 1);
 
 	return ret;
diff --git a/drivers/gpu/drm/xe/xe_gt_types.h b/drivers/gpu/drm/xe/xe_gt_types.h
index ef0f2eecfa29..4dbc40fa6639 100644
--- a/drivers/gpu/drm/xe/xe_gt_types.h
+++ b/drivers/gpu/drm/xe/xe_gt_types.h
@@ -190,6 +190,8 @@ struct xe_gt {
 		/** @tlb_invalidation.seqno: TLB invalidation seqno, protected by CT lock */
 #define TLB_INVALIDATION_SEQNO_MAX	0x100000
 		int seqno;
+		/** @tlb_invalidation.seqno_lock: protects @tlb_invalidation.seqno */
+		struct mutex seqno_lock;
 		/**
 		 * @tlb_invalidation.seqno_recv: last received TLB invalidation seqno,
 		 * protected by CT lock
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* Re: [PATCH 1/9] drm/xe: Move explicit CT lock in TLB invalidation sequence
  2025-08-20 23:30 ` [PATCH 1/9] drm/xe: Move explicit CT lock in TLB invalidation sequence Stuart Summers
@ 2025-08-21 22:09   ` Matthew Brost
  0 siblings, 0 replies; 23+ messages in thread
From: Matthew Brost @ 2025-08-21 22:09 UTC (permalink / raw)
  To: Stuart Summers; +Cc: intel-xe, farah.kassabri

On Wed, Aug 20, 2025 at 11:30:49PM +0000, Stuart Summers wrote:
> Currently the CT lock is used to cover TLB invalidation
> sequence number updates. In an effort to separate the GuC
> back end tracking of communication with the firmware from
> the front end TLB sequence number tracking, add a new lock
> here to specifically track those sequence number updates
> coming in from the user.
> 
> Apart from the CT lock, we also have a pending lock to
> cover both pending fences and sequence numbers received
> from the back end. Those cover interrupt cases and so
> it makes not to overload those with sequence numbers
> coming in from new transactions. In that way, we'll employ
> a mutex here.
> 
> v2: Actually add the correct lock rather than just dropping
>     it... (Matt)
> 
> Signed-off-by: Stuart Summers <stuart.summers@intel.com>

Reviewed-by: Matthew Brost <matthew.brost@intel.com>

> ---
>  drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c | 19 +++++++++++++------
>  drivers/gpu/drm/xe/xe_gt_types.h            |  2 ++
>  2 files changed, 15 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
> index 02f0bb92d6e0..75854b963d66 100644
> --- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
> +++ b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
> @@ -118,6 +118,9 @@ static void xe_gt_tlb_fence_timeout(struct work_struct *work)
>   */
>  int xe_gt_tlb_invalidation_init_early(struct xe_gt *gt)
>  {
> +	struct xe_device *xe = gt_to_xe(gt);
> +	int err;
> +
>  	gt->tlb_invalidation.seqno = 1;
>  	INIT_LIST_HEAD(&gt->tlb_invalidation.pending_fences);
>  	spin_lock_init(&gt->tlb_invalidation.pending_lock);
> @@ -125,6 +128,10 @@ int xe_gt_tlb_invalidation_init_early(struct xe_gt *gt)
>  	INIT_DELAYED_WORK(&gt->tlb_invalidation.fence_tdr,
>  			  xe_gt_tlb_fence_timeout);
>  
> +	err = drmm_mutex_init(&xe->drm, &gt->tlb_invalidation.seqno_lock);
> +	if (err)
> +		return err;
> +
>  	gt->tlb_invalidation.job_wq =
>  		drmm_alloc_ordered_workqueue(&gt_to_xe(gt)->drm, "gt-tbl-inval-job-wq",
>  					     WQ_MEM_RECLAIM);
> @@ -158,7 +165,7 @@ void xe_gt_tlb_invalidation_reset(struct xe_gt *gt)
>  	 * appear.
>  	 */
>  
> -	mutex_lock(&gt->uc.guc.ct.lock);
> +	mutex_lock(&gt->tlb_invalidation.seqno_lock);
>  	spin_lock_irq(&gt->tlb_invalidation.pending_lock);
>  	cancel_delayed_work(&gt->tlb_invalidation.fence_tdr);
>  	/*
> @@ -178,7 +185,7 @@ void xe_gt_tlb_invalidation_reset(struct xe_gt *gt)
>  				 &gt->tlb_invalidation.pending_fences, link)
>  		invalidation_fence_signal(gt_to_xe(gt), fence);
>  	spin_unlock_irq(&gt->tlb_invalidation.pending_lock);
> -	mutex_unlock(&gt->uc.guc.ct.lock);
> +	mutex_unlock(&gt->tlb_invalidation.seqno_lock);
>  }
>  
>  static bool tlb_invalidation_seqno_past(struct xe_gt *gt, int seqno)
> @@ -211,13 +218,13 @@ static int send_tlb_invalidation(struct xe_guc *guc,
>  	 * need to be updated.
>  	 */
>  
> -	mutex_lock(&guc->ct.lock);
> +	mutex_lock(&gt->tlb_invalidation.seqno_lock);
>  	seqno = gt->tlb_invalidation.seqno;
>  	fence->seqno = seqno;
>  	trace_xe_gt_tlb_invalidation_fence_send(xe, fence);
>  	action[1] = seqno;
> -	ret = xe_guc_ct_send_locked(&guc->ct, action, len,
> -				    G2H_LEN_DW_TLB_INVALIDATE, 1);
> +	ret = xe_guc_ct_send(&guc->ct, action, len,
> +			     G2H_LEN_DW_TLB_INVALIDATE, 1);
>  	if (!ret) {
>  		spin_lock_irq(&gt->tlb_invalidation.pending_lock);
>  		/*
> @@ -248,7 +255,7 @@ static int send_tlb_invalidation(struct xe_guc *guc,
>  		if (!gt->tlb_invalidation.seqno)
>  			gt->tlb_invalidation.seqno = 1;
>  	}
> -	mutex_unlock(&guc->ct.lock);
> +	mutex_unlock(&gt->tlb_invalidation.seqno_lock);
>  	xe_gt_stats_incr(gt, XE_GT_STATS_ID_TLB_INVAL, 1);
>  
>  	return ret;
> diff --git a/drivers/gpu/drm/xe/xe_gt_types.h b/drivers/gpu/drm/xe/xe_gt_types.h
> index ef0f2eecfa29..4dbc40fa6639 100644
> --- a/drivers/gpu/drm/xe/xe_gt_types.h
> +++ b/drivers/gpu/drm/xe/xe_gt_types.h
> @@ -190,6 +190,8 @@ struct xe_gt {
>  		/** @tlb_invalidation.seqno: TLB invalidation seqno, protected by CT lock */
>  #define TLB_INVALIDATION_SEQNO_MAX	0x100000
>  		int seqno;
> +		/** @tlb_invalidation.seqno_lock: protects @tlb_invalidation.seqno */
> +		struct mutex seqno_lock;
>  		/**
>  		 * @tlb_invalidation.seqno_recv: last received TLB invalidation seqno,
>  		 * protected by CT lock
> -- 
> 2.34.1
> 

^ permalink raw reply	[flat|nested] 23+ messages in thread

* [PATCH 0/9] Add TLB invalidation abstraction
@ 2025-08-25 17:57 Stuart Summers
  2025-08-25 17:57 ` [PATCH 1/9] drm/xe: Move explicit CT lock in TLB invalidation sequence Stuart Summers
                   ` (12 more replies)
  0 siblings, 13 replies; 23+ messages in thread
From: Stuart Summers @ 2025-08-25 17:57 UTC (permalink / raw)
  Cc: intel-xe, matthew.brost, farah.kassabri, Stuart Summers

This is a new collection of patches from Matt that has
been floating around internally and on the mailing list.
The goal here is to abstract the actual mechanism of
the invalidation from the higher level invalidation triggers
(like page table updates).

Most of these were brought in unmodified by Matt, but
I've done some minor rebase work here and there and
added my signoff where those rebases seemed a little
more extensive.

Tested on BMG locally.

v9: Use tlb_inval_reset in TLB inval tear down sequence.
v8: Fix documentation failures in CI and rebase
v7: Add a little more documentation around the TLB worker
    cancel on teardown and move that cancellation to a
    drm teardown helper.
v6: Fix for UAF in timer.c due to outstanding TLB inval
    on teardown.
v5: Make sure seqno_lock covers the prep and send in
    the later patches (Matt)
v4: Replace CT lock with seqno_lock
v3: Minor spelling fixes and added R-B's per updates on
    on the mailing list
v2: Start the series with a new patch to drop the
    explicit CT lock (Matt)
    Pull in the remaining patches from [1]

[1] https://patchwork.freedesktop.org/series/151670/#rev1

Matthew Brost (7):
  drm/xe: s/tlb_invalidation/tlb_inval
  drm/xe: Add xe_tlb_inval structure
  drm/xe: Add xe_gt_tlb_invalidation_done_handler
  drm/xe: Decouple TLB invalidations from GT
  drm/xe: Prep TLB invalidation fence before sending
  drm/xe: Add helpers to send TLB invalidations
  drm/xe: Split TLB invalidation code in frontend and backend

Stuart Summers (2):
  drm/xe: Move explicit CT lock in TLB invalidation sequence
  drm/xe: Cancel pending TLB inval workers on teardown

 drivers/gpu/drm/xe/Makefile                   |   5 +-
 drivers/gpu/drm/xe/xe_device_types.h          |   4 +-
 drivers/gpu/drm/xe/xe_exec_queue.c            |   2 +-
 drivers/gpu/drm/xe/xe_ggtt.c                  |   4 +-
 drivers/gpu/drm/xe/xe_gt.c                    |   8 +-
 drivers/gpu/drm/xe/xe_gt_pagefault.c          |   1 -
 drivers/gpu/drm/xe/xe_gt_tlb_inval_job.h      |  34 -
 drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c   | 604 ------------------
 drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h   |  40 --
 .../gpu/drm/xe/xe_gt_tlb_invalidation_types.h |  32 -
 drivers/gpu/drm/xe/xe_gt_types.h              |  33 +-
 drivers/gpu/drm/xe/xe_guc_ct.c                |   8 +-
 drivers/gpu/drm/xe/xe_guc_tlb_inval.c         | 242 +++++++
 drivers/gpu/drm/xe/xe_guc_tlb_inval.h         |  19 +
 drivers/gpu/drm/xe/xe_lmtt.c                  |  12 +-
 drivers/gpu/drm/xe/xe_migrate.h               |  10 +-
 drivers/gpu/drm/xe/xe_pci.c                   |   6 +-
 drivers/gpu/drm/xe/xe_pci_types.h             |   2 +-
 drivers/gpu/drm/xe/xe_pt.c                    |  63 +-
 drivers/gpu/drm/xe/xe_svm.c                   |   3 +-
 drivers/gpu/drm/xe/xe_tlb_inval.c             | 434 +++++++++++++
 drivers/gpu/drm/xe/xe_tlb_inval.h             |  46 ++
 ..._gt_tlb_inval_job.c => xe_tlb_inval_job.c} | 154 +++--
 drivers/gpu/drm/xe/xe_tlb_inval_job.h         |  33 +
 drivers/gpu/drm/xe/xe_tlb_inval_types.h       | 130 ++++
 drivers/gpu/drm/xe/xe_trace.h                 |  24 +-
 drivers/gpu/drm/xe/xe_vm.c                    |  66 +-
 drivers/gpu/drm/xe/xe_vm.h                    |   4 +-
 28 files changed, 1098 insertions(+), 925 deletions(-)
 delete mode 100644 drivers/gpu/drm/xe/xe_gt_tlb_inval_job.h
 delete mode 100644 drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
 delete mode 100644 drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h
 delete mode 100644 drivers/gpu/drm/xe/xe_gt_tlb_invalidation_types.h
 create mode 100644 drivers/gpu/drm/xe/xe_guc_tlb_inval.c
 create mode 100644 drivers/gpu/drm/xe/xe_guc_tlb_inval.h
 create mode 100644 drivers/gpu/drm/xe/xe_tlb_inval.c
 create mode 100644 drivers/gpu/drm/xe/xe_tlb_inval.h
 rename drivers/gpu/drm/xe/{xe_gt_tlb_inval_job.c => xe_tlb_inval_job.c} (50%)
 create mode 100644 drivers/gpu/drm/xe/xe_tlb_inval_job.h
 create mode 100644 drivers/gpu/drm/xe/xe_tlb_inval_types.h

-- 
2.34.1


^ permalink raw reply	[flat|nested] 23+ messages in thread

* [PATCH 1/9] drm/xe: Move explicit CT lock in TLB invalidation sequence
  2025-08-25 17:57 [PATCH 0/9] Add TLB invalidation abstraction Stuart Summers
@ 2025-08-25 17:57 ` Stuart Summers
  2025-08-25 17:57 ` [PATCH 2/9] drm/xe: Cancel pending TLB inval workers on teardown Stuart Summers
                   ` (11 subsequent siblings)
  12 siblings, 0 replies; 23+ messages in thread
From: Stuart Summers @ 2025-08-25 17:57 UTC (permalink / raw)
  Cc: intel-xe, matthew.brost, farah.kassabri, Stuart Summers

Currently the CT lock is used to cover TLB invalidation
sequence number updates. In an effort to separate the GuC
back end tracking of communication with the firmware from
the front end TLB sequence number tracking, add a new lock
here to specifically track those sequence number updates
coming in from the user.

Apart from the CT lock, we also have a pending lock to
cover both pending fences and sequence numbers received
from the back end. Those cover interrupt cases and so
it makes not to overload those with sequence numbers
coming in from new transactions. In that way, we'll employ
a mutex here.

v2: Actually add the correct lock rather than just dropping
    it... (Matt)

Signed-off-by: Stuart Summers <stuart.summers@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c | 19 +++++++++++++------
 drivers/gpu/drm/xe/xe_gt_types.h            |  2 ++
 2 files changed, 15 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
index 02f0bb92d6e0..75854b963d66 100644
--- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
+++ b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
@@ -118,6 +118,9 @@ static void xe_gt_tlb_fence_timeout(struct work_struct *work)
  */
 int xe_gt_tlb_invalidation_init_early(struct xe_gt *gt)
 {
+	struct xe_device *xe = gt_to_xe(gt);
+	int err;
+
 	gt->tlb_invalidation.seqno = 1;
 	INIT_LIST_HEAD(&gt->tlb_invalidation.pending_fences);
 	spin_lock_init(&gt->tlb_invalidation.pending_lock);
@@ -125,6 +128,10 @@ int xe_gt_tlb_invalidation_init_early(struct xe_gt *gt)
 	INIT_DELAYED_WORK(&gt->tlb_invalidation.fence_tdr,
 			  xe_gt_tlb_fence_timeout);
 
+	err = drmm_mutex_init(&xe->drm, &gt->tlb_invalidation.seqno_lock);
+	if (err)
+		return err;
+
 	gt->tlb_invalidation.job_wq =
 		drmm_alloc_ordered_workqueue(&gt_to_xe(gt)->drm, "gt-tbl-inval-job-wq",
 					     WQ_MEM_RECLAIM);
@@ -158,7 +165,7 @@ void xe_gt_tlb_invalidation_reset(struct xe_gt *gt)
 	 * appear.
 	 */
 
-	mutex_lock(&gt->uc.guc.ct.lock);
+	mutex_lock(&gt->tlb_invalidation.seqno_lock);
 	spin_lock_irq(&gt->tlb_invalidation.pending_lock);
 	cancel_delayed_work(&gt->tlb_invalidation.fence_tdr);
 	/*
@@ -178,7 +185,7 @@ void xe_gt_tlb_invalidation_reset(struct xe_gt *gt)
 				 &gt->tlb_invalidation.pending_fences, link)
 		invalidation_fence_signal(gt_to_xe(gt), fence);
 	spin_unlock_irq(&gt->tlb_invalidation.pending_lock);
-	mutex_unlock(&gt->uc.guc.ct.lock);
+	mutex_unlock(&gt->tlb_invalidation.seqno_lock);
 }
 
 static bool tlb_invalidation_seqno_past(struct xe_gt *gt, int seqno)
@@ -211,13 +218,13 @@ static int send_tlb_invalidation(struct xe_guc *guc,
 	 * need to be updated.
 	 */
 
-	mutex_lock(&guc->ct.lock);
+	mutex_lock(&gt->tlb_invalidation.seqno_lock);
 	seqno = gt->tlb_invalidation.seqno;
 	fence->seqno = seqno;
 	trace_xe_gt_tlb_invalidation_fence_send(xe, fence);
 	action[1] = seqno;
-	ret = xe_guc_ct_send_locked(&guc->ct, action, len,
-				    G2H_LEN_DW_TLB_INVALIDATE, 1);
+	ret = xe_guc_ct_send(&guc->ct, action, len,
+			     G2H_LEN_DW_TLB_INVALIDATE, 1);
 	if (!ret) {
 		spin_lock_irq(&gt->tlb_invalidation.pending_lock);
 		/*
@@ -248,7 +255,7 @@ static int send_tlb_invalidation(struct xe_guc *guc,
 		if (!gt->tlb_invalidation.seqno)
 			gt->tlb_invalidation.seqno = 1;
 	}
-	mutex_unlock(&guc->ct.lock);
+	mutex_unlock(&gt->tlb_invalidation.seqno_lock);
 	xe_gt_stats_incr(gt, XE_GT_STATS_ID_TLB_INVAL, 1);
 
 	return ret;
diff --git a/drivers/gpu/drm/xe/xe_gt_types.h b/drivers/gpu/drm/xe/xe_gt_types.h
index ef0f2eecfa29..4dbc40fa6639 100644
--- a/drivers/gpu/drm/xe/xe_gt_types.h
+++ b/drivers/gpu/drm/xe/xe_gt_types.h
@@ -190,6 +190,8 @@ struct xe_gt {
 		/** @tlb_invalidation.seqno: TLB invalidation seqno, protected by CT lock */
 #define TLB_INVALIDATION_SEQNO_MAX	0x100000
 		int seqno;
+		/** @tlb_invalidation.seqno_lock: protects @tlb_invalidation.seqno */
+		struct mutex seqno_lock;
 		/**
 		 * @tlb_invalidation.seqno_recv: last received TLB invalidation seqno,
 		 * protected by CT lock
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH 2/9] drm/xe: Cancel pending TLB inval workers on teardown
  2025-08-25 17:57 [PATCH 0/9] Add TLB invalidation abstraction Stuart Summers
  2025-08-25 17:57 ` [PATCH 1/9] drm/xe: Move explicit CT lock in TLB invalidation sequence Stuart Summers
@ 2025-08-25 17:57 ` Stuart Summers
  2025-08-25 18:06   ` Summers, Stuart
  2025-08-25 17:57 ` [PATCH 3/9] drm/xe: s/tlb_invalidation/tlb_inval Stuart Summers
                   ` (10 subsequent siblings)
  12 siblings, 1 reply; 23+ messages in thread
From: Stuart Summers @ 2025-08-25 17:57 UTC (permalink / raw)
  Cc: intel-xe, matthew.brost, farah.kassabri, Stuart Summers

Add a new _fini() routine on the GT TLB invalidation
side to handle this worker cleanup on driver teardown.

v2: Move the TLB teardown to the gt fini() routine called during
    gt_init rather than in gt_alloc. This way the GT structure stays
    alive for while we reset the TLB state.

Signed-off-by: Stuart Summers <stuart.summers@intel.com>
---
 drivers/gpu/drm/xe/xe_gt.c                  |  2 ++
 drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c | 12 ++++++++++++
 drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h |  1 +
 3 files changed, 15 insertions(+)

diff --git a/drivers/gpu/drm/xe/xe_gt.c b/drivers/gpu/drm/xe/xe_gt.c
index a3397f04abcc..178c4783bbda 100644
--- a/drivers/gpu/drm/xe/xe_gt.c
+++ b/drivers/gpu/drm/xe/xe_gt.c
@@ -603,6 +603,8 @@ static void xe_gt_fini(void *arg)
 	struct xe_gt *gt = arg;
 	int i;
 
+	xe_gt_tlb_invalidation_fini(gt);
+
 	for (i = 0; i < XE_ENGINE_CLASS_MAX; ++i)
 		xe_hw_fence_irq_finish(&gt->fence_irq[i]);
 
diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
index 75854b963d66..db00c5adead9 100644
--- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
+++ b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
@@ -188,6 +188,18 @@ void xe_gt_tlb_invalidation_reset(struct xe_gt *gt)
 	mutex_unlock(&gt->tlb_invalidation.seqno_lock);
 }
 
+/**
+ *
+ * xe_gt_tlb_invalidation_fini - Clean up GT TLB invalidation state
+ *
+ * Cancel pending fence workers and clean up any additional
+ * GT TLB invalidation state.
+ */
+void xe_gt_tlb_invalidation_fini(struct xe_gt *gt)
+{
+	xe_gt_tlb_invalidation_reset(gt);
+}
+
 static bool tlb_invalidation_seqno_past(struct xe_gt *gt, int seqno)
 {
 	int seqno_recv = READ_ONCE(gt->tlb_invalidation.seqno_recv);
diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h
index f7f0f2eaf4b5..3e4cff3922d6 100644
--- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h
+++ b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h
@@ -16,6 +16,7 @@ struct xe_vm;
 struct xe_vma;
 
 int xe_gt_tlb_invalidation_init_early(struct xe_gt *gt);
+void xe_gt_tlb_invalidation_fini(struct xe_gt *gt);
 
 void xe_gt_tlb_invalidation_reset(struct xe_gt *gt);
 int xe_gt_tlb_invalidation_ggtt(struct xe_gt *gt);
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH 3/9] drm/xe: s/tlb_invalidation/tlb_inval
  2025-08-25 17:57 [PATCH 0/9] Add TLB invalidation abstraction Stuart Summers
  2025-08-25 17:57 ` [PATCH 1/9] drm/xe: Move explicit CT lock in TLB invalidation sequence Stuart Summers
  2025-08-25 17:57 ` [PATCH 2/9] drm/xe: Cancel pending TLB inval workers on teardown Stuart Summers
@ 2025-08-25 17:57 ` Stuart Summers
  2025-08-25 17:57 ` [PATCH 4/9] drm/xe: Add xe_tlb_inval structure Stuart Summers
                   ` (9 subsequent siblings)
  12 siblings, 0 replies; 23+ messages in thread
From: Stuart Summers @ 2025-08-25 17:57 UTC (permalink / raw)
  Cc: intel-xe, matthew.brost, farah.kassabri, Stuart Summers

From: Matthew Brost <matthew.brost@intel.com>

tlb_invalidation is a bit verbose leading to ugly wraps in the code,
shorten to tlb_inval.

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Stuart Summers <stuart.summers@intel.com>
Reviewed-by: Stuart Summers <stuart.summers@intel.com>
---
 drivers/gpu/drm/xe/Makefile                   |   2 +-
 drivers/gpu/drm/xe/xe_device_types.h          |   4 +-
 drivers/gpu/drm/xe/xe_exec_queue.c            |   2 +-
 drivers/gpu/drm/xe/xe_ggtt.c                  |   4 +-
 drivers/gpu/drm/xe/xe_gt.c                    |  10 +-
 drivers/gpu/drm/xe/xe_gt_pagefault.c          |   1 -
 ...t_tlb_invalidation.c => xe_gt_tlb_inval.c} | 262 +++++++++---------
 drivers/gpu/drm/xe/xe_gt_tlb_inval.h          |  41 +++
 drivers/gpu/drm/xe/xe_gt_tlb_inval_job.c      |  18 +-
 ...dation_types.h => xe_gt_tlb_inval_types.h} |  14 +-
 drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h   |  41 ---
 drivers/gpu/drm/xe/xe_gt_types.h              |  18 +-
 drivers/gpu/drm/xe/xe_guc_ct.c                |   8 +-
 drivers/gpu/drm/xe/xe_lmtt.c                  |  12 +-
 drivers/gpu/drm/xe/xe_pci.c                   |   6 +-
 drivers/gpu/drm/xe/xe_pci_types.h             |   2 +-
 drivers/gpu/drm/xe/xe_svm.c                   |   4 +-
 drivers/gpu/drm/xe/xe_trace.h                 |  24 +-
 drivers/gpu/drm/xe/xe_vm.c                    |  64 ++---
 drivers/gpu/drm/xe/xe_vm.h                    |   4 +-
 20 files changed, 265 insertions(+), 276 deletions(-)
 rename drivers/gpu/drm/xe/{xe_gt_tlb_invalidation.c => xe_gt_tlb_inval.c} (61%)
 create mode 100644 drivers/gpu/drm/xe/xe_gt_tlb_inval.h
 rename drivers/gpu/drm/xe/{xe_gt_tlb_invalidation_types.h => xe_gt_tlb_inval_types.h} (55%)
 delete mode 100644 drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h

diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
index 8e0c3412a757..0a36b2463434 100644
--- a/drivers/gpu/drm/xe/Makefile
+++ b/drivers/gpu/drm/xe/Makefile
@@ -61,7 +61,7 @@ xe-y += xe_bb.o \
 	xe_gt_pagefault.o \
 	xe_gt_sysfs.o \
 	xe_gt_throttle.o \
-	xe_gt_tlb_invalidation.o \
+	xe_gt_tlb_inval.o \
 	xe_gt_tlb_inval_job.o \
 	xe_gt_topology.o \
 	xe_guc.o \
diff --git a/drivers/gpu/drm/xe/xe_device_types.h b/drivers/gpu/drm/xe/xe_device_types.h
index e67fbfe59afa..40b250cfb131 100644
--- a/drivers/gpu/drm/xe/xe_device_types.h
+++ b/drivers/gpu/drm/xe/xe_device_types.h
@@ -287,8 +287,8 @@ struct xe_device {
 		u8 has_mbx_power_limits:1;
 		/** @info.has_pxp: Device has PXP support */
 		u8 has_pxp:1;
-		/** @info.has_range_tlb_invalidation: Has range based TLB invalidations */
-		u8 has_range_tlb_invalidation:1;
+		/** @info.has_range_tlb_inval: Has range based TLB invalidations */
+		u8 has_range_tlb_inval:1;
 		/** @info.has_sriov: Supports SR-IOV */
 		u8 has_sriov:1;
 		/** @info.has_usm: Device has unified shared memory support */
diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
index 2d10a53f701d..063c89d981e5 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue.c
+++ b/drivers/gpu/drm/xe/xe_exec_queue.c
@@ -75,7 +75,7 @@ static int alloc_dep_schedulers(struct xe_device *xe, struct xe_exec_queue *q)
 		if (!gt)
 			continue;
 
-		wq = gt->tlb_invalidation.job_wq;
+		wq = gt->tlb_inval.job_wq;
 
 #define MAX_TLB_INVAL_JOBS	16	/* Picking a reasonable value */
 		dep_scheduler = xe_dep_scheduler_create(xe, wq, q->name,
diff --git a/drivers/gpu/drm/xe/xe_ggtt.c b/drivers/gpu/drm/xe/xe_ggtt.c
index e03222f5ac5a..c3e46c270117 100644
--- a/drivers/gpu/drm/xe/xe_ggtt.c
+++ b/drivers/gpu/drm/xe/xe_ggtt.c
@@ -23,7 +23,7 @@
 #include "xe_device.h"
 #include "xe_gt.h"
 #include "xe_gt_printk.h"
-#include "xe_gt_tlb_invalidation.h"
+#include "xe_gt_tlb_inval.h"
 #include "xe_map.h"
 #include "xe_mmio.h"
 #include "xe_pm.h"
@@ -438,7 +438,7 @@ static void ggtt_invalidate_gt_tlb(struct xe_gt *gt)
 	if (!gt)
 		return;
 
-	err = xe_gt_tlb_invalidation_ggtt(gt);
+	err = xe_gt_tlb_inval_ggtt(gt);
 	xe_gt_WARN(gt, err, "Failed to invalidate GGTT (%pe)", ERR_PTR(err));
 }
 
diff --git a/drivers/gpu/drm/xe/xe_gt.c b/drivers/gpu/drm/xe/xe_gt.c
index 178c4783bbda..9a4639732bd7 100644
--- a/drivers/gpu/drm/xe/xe_gt.c
+++ b/drivers/gpu/drm/xe/xe_gt.c
@@ -37,7 +37,7 @@
 #include "xe_gt_sriov_pf.h"
 #include "xe_gt_sriov_vf.h"
 #include "xe_gt_sysfs.h"
-#include "xe_gt_tlb_invalidation.h"
+#include "xe_gt_tlb_inval.h"
 #include "xe_gt_topology.h"
 #include "xe_guc_exec_queue_types.h"
 #include "xe_guc_pc.h"
@@ -413,7 +413,7 @@ int xe_gt_init_early(struct xe_gt *gt)
 	xe_force_wake_init_gt(gt, gt_to_fw(gt));
 	spin_lock_init(&gt->global_invl_lock);
 
-	err = xe_gt_tlb_invalidation_init_early(gt);
+	err = xe_gt_tlb_inval_init_early(gt);
 	if (err)
 		return err;
 
@@ -603,7 +603,7 @@ static void xe_gt_fini(void *arg)
 	struct xe_gt *gt = arg;
 	int i;
 
-	xe_gt_tlb_invalidation_fini(gt);
+	xe_gt_tlb_inval_fini(gt);
 
 	for (i = 0; i < XE_ENGINE_CLASS_MAX; ++i)
 		xe_hw_fence_irq_finish(&gt->fence_irq[i]);
@@ -852,7 +852,7 @@ static int gt_reset(struct xe_gt *gt)
 
 	xe_uc_stop(&gt->uc);
 
-	xe_gt_tlb_invalidation_reset(gt);
+	xe_gt_tlb_inval_reset(gt);
 
 	err = do_gt_reset(gt);
 	if (err)
@@ -1066,5 +1066,5 @@ void xe_gt_declare_wedged(struct xe_gt *gt)
 	xe_gt_assert(gt, gt_to_xe(gt)->wedged.mode);
 
 	xe_uc_declare_wedged(&gt->uc);
-	xe_gt_tlb_invalidation_reset(gt);
+	xe_gt_tlb_inval_reset(gt);
 }
diff --git a/drivers/gpu/drm/xe/xe_gt_pagefault.c b/drivers/gpu/drm/xe/xe_gt_pagefault.c
index ab43dec52776..1da6a981ca4e 100644
--- a/drivers/gpu/drm/xe/xe_gt_pagefault.c
+++ b/drivers/gpu/drm/xe/xe_gt_pagefault.c
@@ -16,7 +16,6 @@
 #include "xe_gt.h"
 #include "xe_gt_printk.h"
 #include "xe_gt_stats.h"
-#include "xe_gt_tlb_invalidation.h"
 #include "xe_guc.h"
 #include "xe_guc_ct.h"
 #include "xe_migrate.h"
diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c b/drivers/gpu/drm/xe/xe_gt_tlb_inval.c
similarity index 61%
rename from drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
rename to drivers/gpu/drm/xe/xe_gt_tlb_inval.c
index db00c5adead9..1571fd917830 100644
--- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
+++ b/drivers/gpu/drm/xe/xe_gt_tlb_inval.c
@@ -5,8 +5,6 @@
 
 #include <drm/drm_managed.h>
 
-#include "xe_gt_tlb_invalidation.h"
-
 #include "abi/guc_actions_abi.h"
 #include "xe_device.h"
 #include "xe_force_wake.h"
@@ -15,6 +13,7 @@
 #include "xe_guc.h"
 #include "xe_guc_ct.h"
 #include "xe_gt_stats.h"
+#include "xe_gt_tlb_inval.h"
 #include "xe_mmio.h"
 #include "xe_pm.h"
 #include "xe_sriov.h"
@@ -39,7 +38,7 @@ static long tlb_timeout_jiffies(struct xe_gt *gt)
 	return hw_tlb_timeout + 2 * delay;
 }
 
-static void xe_gt_tlb_invalidation_fence_fini(struct xe_gt_tlb_invalidation_fence *fence)
+static void xe_gt_tlb_inval_fence_fini(struct xe_gt_tlb_inval_fence *fence)
 {
 	if (WARN_ON_ONCE(!fence->gt))
 		return;
@@ -49,66 +48,66 @@ static void xe_gt_tlb_invalidation_fence_fini(struct xe_gt_tlb_invalidation_fenc
 }
 
 static void
-__invalidation_fence_signal(struct xe_device *xe, struct xe_gt_tlb_invalidation_fence *fence)
+__inval_fence_signal(struct xe_device *xe, struct xe_gt_tlb_inval_fence *fence)
 {
 	bool stack = test_bit(FENCE_STACK_BIT, &fence->base.flags);
 
-	trace_xe_gt_tlb_invalidation_fence_signal(xe, fence);
-	xe_gt_tlb_invalidation_fence_fini(fence);
+	trace_xe_gt_tlb_inval_fence_signal(xe, fence);
+	xe_gt_tlb_inval_fence_fini(fence);
 	dma_fence_signal(&fence->base);
 	if (!stack)
 		dma_fence_put(&fence->base);
 }
 
 static void
-invalidation_fence_signal(struct xe_device *xe, struct xe_gt_tlb_invalidation_fence *fence)
+inval_fence_signal(struct xe_device *xe, struct xe_gt_tlb_inval_fence *fence)
 {
 	list_del(&fence->link);
-	__invalidation_fence_signal(xe, fence);
+	__inval_fence_signal(xe, fence);
 }
 
-void xe_gt_tlb_invalidation_fence_signal(struct xe_gt_tlb_invalidation_fence *fence)
+void xe_gt_tlb_inval_fence_signal(struct xe_gt_tlb_inval_fence *fence)
 {
 	if (WARN_ON_ONCE(!fence->gt))
 		return;
 
-	__invalidation_fence_signal(gt_to_xe(fence->gt), fence);
+	__inval_fence_signal(gt_to_xe(fence->gt), fence);
 }
 
 static void xe_gt_tlb_fence_timeout(struct work_struct *work)
 {
 	struct xe_gt *gt = container_of(work, struct xe_gt,
-					tlb_invalidation.fence_tdr.work);
+					tlb_inval.fence_tdr.work);
 	struct xe_device *xe = gt_to_xe(gt);
-	struct xe_gt_tlb_invalidation_fence *fence, *next;
+	struct xe_gt_tlb_inval_fence *fence, *next;
 
 	LNL_FLUSH_WORK(&gt->uc.guc.ct.g2h_worker);
 
-	spin_lock_irq(&gt->tlb_invalidation.pending_lock);
+	spin_lock_irq(&gt->tlb_inval.pending_lock);
 	list_for_each_entry_safe(fence, next,
-				 &gt->tlb_invalidation.pending_fences, link) {
+				 &gt->tlb_inval.pending_fences, link) {
 		s64 since_inval_ms = ktime_ms_delta(ktime_get(),
-						    fence->invalidation_time);
+						    fence->inval_time);
 
 		if (msecs_to_jiffies(since_inval_ms) < tlb_timeout_jiffies(gt))
 			break;
 
-		trace_xe_gt_tlb_invalidation_fence_timeout(xe, fence);
+		trace_xe_gt_tlb_inval_fence_timeout(xe, fence);
 		xe_gt_err(gt, "TLB invalidation fence timeout, seqno=%d recv=%d",
-			  fence->seqno, gt->tlb_invalidation.seqno_recv);
+			  fence->seqno, gt->tlb_inval.seqno_recv);
 
 		fence->base.error = -ETIME;
-		invalidation_fence_signal(xe, fence);
+		inval_fence_signal(xe, fence);
 	}
-	if (!list_empty(&gt->tlb_invalidation.pending_fences))
+	if (!list_empty(&gt->tlb_inval.pending_fences))
 		queue_delayed_work(system_wq,
-				   &gt->tlb_invalidation.fence_tdr,
+				   &gt->tlb_inval.fence_tdr,
 				   tlb_timeout_jiffies(gt));
-	spin_unlock_irq(&gt->tlb_invalidation.pending_lock);
+	spin_unlock_irq(&gt->tlb_inval.pending_lock);
 }
 
 /**
- * xe_gt_tlb_invalidation_init_early - Initialize GT TLB invalidation state
+ * xe_gt_tlb_inval_init_early - Initialize GT TLB invalidation state
  * @gt: GT structure
  *
  * Initialize GT TLB invalidation state, purely software initialization, should
@@ -116,40 +115,40 @@ static void xe_gt_tlb_fence_timeout(struct work_struct *work)
  *
  * Return: 0 on success, negative error code on error.
  */
-int xe_gt_tlb_invalidation_init_early(struct xe_gt *gt)
+int xe_gt_tlb_inval_init_early(struct xe_gt *gt)
 {
 	struct xe_device *xe = gt_to_xe(gt);
 	int err;
 
-	gt->tlb_invalidation.seqno = 1;
-	INIT_LIST_HEAD(&gt->tlb_invalidation.pending_fences);
-	spin_lock_init(&gt->tlb_invalidation.pending_lock);
-	spin_lock_init(&gt->tlb_invalidation.lock);
-	INIT_DELAYED_WORK(&gt->tlb_invalidation.fence_tdr,
+	gt->tlb_inval.seqno = 1;
+	INIT_LIST_HEAD(&gt->tlb_inval.pending_fences);
+	spin_lock_init(&gt->tlb_inval.pending_lock);
+	spin_lock_init(&gt->tlb_inval.lock);
+	INIT_DELAYED_WORK(&gt->tlb_inval.fence_tdr,
 			  xe_gt_tlb_fence_timeout);
 
-	err = drmm_mutex_init(&xe->drm, &gt->tlb_invalidation.seqno_lock);
+	err = drmm_mutex_init(&xe->drm, &gt->tlb_inval.seqno_lock);
 	if (err)
 		return err;
 
-	gt->tlb_invalidation.job_wq =
+	gt->tlb_inval.job_wq =
 		drmm_alloc_ordered_workqueue(&gt_to_xe(gt)->drm, "gt-tbl-inval-job-wq",
 					     WQ_MEM_RECLAIM);
-	if (IS_ERR(gt->tlb_invalidation.job_wq))
-		return PTR_ERR(gt->tlb_invalidation.job_wq);
+	if (IS_ERR(gt->tlb_inval.job_wq))
+		return PTR_ERR(gt->tlb_inval.job_wq);
 
 	return 0;
 }
 
 /**
- * xe_gt_tlb_invalidation_reset - Initialize GT TLB invalidation reset
+ * xe_gt_tlb_inval_reset - Initialize GT TLB invalidation reset
  * @gt: GT structure
  *
  * Signal any pending invalidation fences, should be called during a GT reset
  */
-void xe_gt_tlb_invalidation_reset(struct xe_gt *gt)
+void xe_gt_tlb_inval_reset(struct xe_gt *gt)
 {
-	struct xe_gt_tlb_invalidation_fence *fence, *next;
+	struct xe_gt_tlb_inval_fence *fence, *next;
 	int pending_seqno;
 
 	/*
@@ -165,9 +164,9 @@ void xe_gt_tlb_invalidation_reset(struct xe_gt *gt)
 	 * appear.
 	 */
 
-	mutex_lock(&gt->tlb_invalidation.seqno_lock);
-	spin_lock_irq(&gt->tlb_invalidation.pending_lock);
-	cancel_delayed_work(&gt->tlb_invalidation.fence_tdr);
+	mutex_lock(&gt->tlb_inval.seqno_lock);
+	spin_lock_irq(&gt->tlb_inval.pending_lock);
+	cancel_delayed_work(&gt->tlb_inval.fence_tdr);
 	/*
 	 * We might have various kworkers waiting for TLB flushes to complete
 	 * which are not tracked with an explicit TLB fence, however at this
@@ -175,34 +174,34 @@ void xe_gt_tlb_invalidation_reset(struct xe_gt *gt)
 	 * make sure we signal them here under the assumption that we have
 	 * completed a full GT reset.
 	 */
-	if (gt->tlb_invalidation.seqno == 1)
+	if (gt->tlb_inval.seqno == 1)
 		pending_seqno = TLB_INVALIDATION_SEQNO_MAX - 1;
 	else
-		pending_seqno = gt->tlb_invalidation.seqno - 1;
-	WRITE_ONCE(gt->tlb_invalidation.seqno_recv, pending_seqno);
+		pending_seqno = gt->tlb_inval.seqno - 1;
+	WRITE_ONCE(gt->tlb_inval.seqno_recv, pending_seqno);
 
 	list_for_each_entry_safe(fence, next,
-				 &gt->tlb_invalidation.pending_fences, link)
-		invalidation_fence_signal(gt_to_xe(gt), fence);
-	spin_unlock_irq(&gt->tlb_invalidation.pending_lock);
-	mutex_unlock(&gt->tlb_invalidation.seqno_lock);
+				 &gt->tlb_inval.pending_fences, link)
+		inval_fence_signal(gt_to_xe(gt), fence);
+	spin_unlock_irq(&gt->tlb_inval.pending_lock);
+	mutex_unlock(&gt->tlb_inval.seqno_lock);
 }
 
 /**
  *
- * xe_gt_tlb_invalidation_fini - Clean up GT TLB invalidation state
+ * xe_gt_tlb_inval_fini - Clean up GT TLB invalidation state
  *
  * Cancel pending fence workers and clean up any additional
  * GT TLB invalidation state.
  */
-void xe_gt_tlb_invalidation_fini(struct xe_gt *gt)
+void xe_gt_tlb_inval_fini(struct xe_gt *gt)
 {
-	xe_gt_tlb_invalidation_reset(gt);
+	xe_gt_tlb_inval_reset(gt);
 }
 
-static bool tlb_invalidation_seqno_past(struct xe_gt *gt, int seqno)
+static bool tlb_inval_seqno_past(struct xe_gt *gt, int seqno)
 {
-	int seqno_recv = READ_ONCE(gt->tlb_invalidation.seqno_recv);
+	int seqno_recv = READ_ONCE(gt->tlb_inval.seqno_recv);
 
 	if (seqno - seqno_recv < -(TLB_INVALIDATION_SEQNO_MAX / 2))
 		return false;
@@ -213,9 +212,9 @@ static bool tlb_invalidation_seqno_past(struct xe_gt *gt, int seqno)
 	return seqno_recv >= seqno;
 }
 
-static int send_tlb_invalidation(struct xe_guc *guc,
-				 struct xe_gt_tlb_invalidation_fence *fence,
-				 u32 *action, int len)
+static int send_tlb_inval(struct xe_guc *guc,
+			  struct xe_gt_tlb_inval_fence *fence,
+			  u32 *action, int len)
 {
 	struct xe_gt *gt = guc_to_gt(guc);
 	struct xe_device *xe = gt_to_xe(gt);
@@ -230,44 +229,44 @@ static int send_tlb_invalidation(struct xe_guc *guc,
 	 * need to be updated.
 	 */
 
-	mutex_lock(&gt->tlb_invalidation.seqno_lock);
-	seqno = gt->tlb_invalidation.seqno;
+	mutex_lock(&gt->tlb_inval.seqno_lock);
+	seqno = gt->tlb_inval.seqno;
 	fence->seqno = seqno;
-	trace_xe_gt_tlb_invalidation_fence_send(xe, fence);
+	trace_xe_gt_tlb_inval_fence_send(xe, fence);
 	action[1] = seqno;
 	ret = xe_guc_ct_send(&guc->ct, action, len,
 			     G2H_LEN_DW_TLB_INVALIDATE, 1);
 	if (!ret) {
-		spin_lock_irq(&gt->tlb_invalidation.pending_lock);
+		spin_lock_irq(&gt->tlb_inval.pending_lock);
 		/*
 		 * We haven't actually published the TLB fence as per
 		 * pending_fences, but in theory our seqno could have already
 		 * been written as we acquired the pending_lock. In such a case
 		 * we can just go ahead and signal the fence here.
 		 */
-		if (tlb_invalidation_seqno_past(gt, seqno)) {
-			__invalidation_fence_signal(xe, fence);
+		if (tlb_inval_seqno_past(gt, seqno)) {
+			__inval_fence_signal(xe, fence);
 		} else {
-			fence->invalidation_time = ktime_get();
+			fence->inval_time = ktime_get();
 			list_add_tail(&fence->link,
-				      &gt->tlb_invalidation.pending_fences);
+				      &gt->tlb_inval.pending_fences);
 
-			if (list_is_singular(&gt->tlb_invalidation.pending_fences))
+			if (list_is_singular(&gt->tlb_inval.pending_fences))
 				queue_delayed_work(system_wq,
-						   &gt->tlb_invalidation.fence_tdr,
+						   &gt->tlb_inval.fence_tdr,
 						   tlb_timeout_jiffies(gt));
 		}
-		spin_unlock_irq(&gt->tlb_invalidation.pending_lock);
+		spin_unlock_irq(&gt->tlb_inval.pending_lock);
 	} else {
-		__invalidation_fence_signal(xe, fence);
+		__inval_fence_signal(xe, fence);
 	}
 	if (!ret) {
-		gt->tlb_invalidation.seqno = (gt->tlb_invalidation.seqno + 1) %
+		gt->tlb_inval.seqno = (gt->tlb_inval.seqno + 1) %
 			TLB_INVALIDATION_SEQNO_MAX;
-		if (!gt->tlb_invalidation.seqno)
-			gt->tlb_invalidation.seqno = 1;
+		if (!gt->tlb_inval.seqno)
+			gt->tlb_inval.seqno = 1;
 	}
-	mutex_unlock(&gt->tlb_invalidation.seqno_lock);
+	mutex_unlock(&gt->tlb_inval.seqno_lock);
 	xe_gt_stats_incr(gt, XE_GT_STATS_ID_TLB_INVAL, 1);
 
 	return ret;
@@ -278,7 +277,7 @@ static int send_tlb_invalidation(struct xe_guc *guc,
 		XE_GUC_TLB_INVAL_FLUSH_CACHE)
 
 /**
- * xe_gt_tlb_invalidation_guc - Issue a TLB invalidation on this GT for the GuC
+ * xe_gt_tlb_inval_guc - Issue a TLB invalidation on this GT for the GuC
  * @gt: GT structure
  * @fence: invalidation fence which will be signal on TLB invalidation
  * completion
@@ -288,18 +287,17 @@ static int send_tlb_invalidation(struct xe_guc *guc,
  *
  * Return: 0 on success, negative error code on error
  */
-static int xe_gt_tlb_invalidation_guc(struct xe_gt *gt,
-				      struct xe_gt_tlb_invalidation_fence *fence)
+static int xe_gt_tlb_inval_guc(struct xe_gt *gt,
+			       struct xe_gt_tlb_inval_fence *fence)
 {
 	u32 action[] = {
 		XE_GUC_ACTION_TLB_INVALIDATION,
-		0,  /* seqno, replaced in send_tlb_invalidation */
+		0,  /* seqno, replaced in send_tlb_inval */
 		MAKE_INVAL_OP(XE_GUC_TLB_INVAL_GUC),
 	};
 	int ret;
 
-	ret = send_tlb_invalidation(&gt->uc.guc, fence, action,
-				    ARRAY_SIZE(action));
+	ret = send_tlb_inval(&gt->uc.guc, fence, action, ARRAY_SIZE(action));
 	/*
 	 * -ECANCELED indicates the CT is stopped for a GT reset. TLB caches
 	 *  should be nuked on a GT reset so this error can be ignored.
@@ -311,7 +309,7 @@ static int xe_gt_tlb_invalidation_guc(struct xe_gt *gt,
 }
 
 /**
- * xe_gt_tlb_invalidation_ggtt - Issue a TLB invalidation on this GT for the GGTT
+ * xe_gt_tlb_inval_ggtt - Issue a TLB invalidation on this GT for the GGTT
  * @gt: GT structure
  *
  * Issue a TLB invalidation for the GGTT. Completion of TLB invalidation is
@@ -319,22 +317,22 @@ static int xe_gt_tlb_invalidation_guc(struct xe_gt *gt,
  *
  * Return: 0 on success, negative error code on error
  */
-int xe_gt_tlb_invalidation_ggtt(struct xe_gt *gt)
+int xe_gt_tlb_inval_ggtt(struct xe_gt *gt)
 {
 	struct xe_device *xe = gt_to_xe(gt);
 	unsigned int fw_ref;
 
 	if (xe_guc_ct_enabled(&gt->uc.guc.ct) &&
 	    gt->uc.guc.submission_state.enabled) {
-		struct xe_gt_tlb_invalidation_fence fence;
+		struct xe_gt_tlb_inval_fence fence;
 		int ret;
 
-		xe_gt_tlb_invalidation_fence_init(gt, &fence, true);
-		ret = xe_gt_tlb_invalidation_guc(gt, &fence);
+		xe_gt_tlb_inval_fence_init(gt, &fence, true);
+		ret = xe_gt_tlb_inval_guc(gt, &fence);
 		if (ret)
 			return ret;
 
-		xe_gt_tlb_invalidation_fence_wait(&fence);
+		xe_gt_tlb_inval_fence_wait(&fence);
 	} else if (xe_device_uc_enabled(xe) && !xe_device_wedged(xe)) {
 		struct xe_mmio *mmio = &gt->mmio;
 
@@ -357,34 +355,34 @@ int xe_gt_tlb_invalidation_ggtt(struct xe_gt *gt)
 	return 0;
 }
 
-static int send_tlb_invalidation_all(struct xe_gt *gt,
-				     struct xe_gt_tlb_invalidation_fence *fence)
+static int send_tlb_inval_all(struct xe_gt *gt,
+			      struct xe_gt_tlb_inval_fence *fence)
 {
 	u32 action[] = {
 		XE_GUC_ACTION_TLB_INVALIDATION_ALL,
-		0,  /* seqno, replaced in send_tlb_invalidation */
+		0,  /* seqno, replaced in send_tlb_inval */
 		MAKE_INVAL_OP(XE_GUC_TLB_INVAL_FULL),
 	};
 
-	return send_tlb_invalidation(&gt->uc.guc, fence, action, ARRAY_SIZE(action));
+	return send_tlb_inval(&gt->uc.guc, fence, action, ARRAY_SIZE(action));
 }
 
 /**
  * xe_gt_tlb_invalidation_all - Invalidate all TLBs across PF and all VFs.
  * @gt: the &xe_gt structure
- * @fence: the &xe_gt_tlb_invalidation_fence to be signaled on completion
+ * @fence: the &xe_gt_tlb_inval_fence to be signaled on completion
  *
  * Send a request to invalidate all TLBs across PF and all VFs.
  *
  * Return: 0 on success, negative error code on error
  */
-int xe_gt_tlb_invalidation_all(struct xe_gt *gt, struct xe_gt_tlb_invalidation_fence *fence)
+int xe_gt_tlb_inval_all(struct xe_gt *gt, struct xe_gt_tlb_inval_fence *fence)
 {
 	int err;
 
 	xe_gt_assert(gt, gt == fence->gt);
 
-	err = send_tlb_invalidation_all(gt, fence);
+	err = send_tlb_inval_all(gt, fence);
 	if (err)
 		xe_gt_err(gt, "TLB invalidation request failed (%pe)", ERR_PTR(err));
 
@@ -399,8 +397,7 @@ int xe_gt_tlb_invalidation_all(struct xe_gt *gt, struct xe_gt_tlb_invalidation_f
 #define MAX_RANGE_TLB_INVALIDATION_LENGTH (rounddown_pow_of_two(ULONG_MAX))
 
 /**
- * xe_gt_tlb_invalidation_range - Issue a TLB invalidation on this GT for an
- * address range
+ * xe_gt_tlb_inval_range - Issue a TLB invalidation on this GT for an address range
  *
  * @gt: GT structure
  * @fence: invalidation fence which will be signal on TLB invalidation
@@ -415,9 +412,8 @@ int xe_gt_tlb_invalidation_all(struct xe_gt *gt, struct xe_gt_tlb_invalidation_f
  *
  * Return: Negative error code on error, 0 on success
  */
-int xe_gt_tlb_invalidation_range(struct xe_gt *gt,
-				 struct xe_gt_tlb_invalidation_fence *fence,
-				 u64 start, u64 end, u32 asid)
+int xe_gt_tlb_inval_range(struct xe_gt *gt, struct xe_gt_tlb_inval_fence *fence,
+			  u64 start, u64 end, u32 asid)
 {
 	struct xe_device *xe = gt_to_xe(gt);
 #define MAX_TLB_INVALIDATION_LEN	7
@@ -429,13 +425,13 @@ int xe_gt_tlb_invalidation_range(struct xe_gt *gt,
 
 	/* Execlists not supported */
 	if (gt_to_xe(gt)->info.force_execlist) {
-		__invalidation_fence_signal(xe, fence);
+		__inval_fence_signal(xe, fence);
 		return 0;
 	}
 
 	action[len++] = XE_GUC_ACTION_TLB_INVALIDATION;
-	action[len++] = 0; /* seqno, replaced in send_tlb_invalidation */
-	if (!xe->info.has_range_tlb_invalidation ||
+	action[len++] = 0; /* seqno, replaced in send_tlb_inval */
+	if (!xe->info.has_range_tlb_inval ||
 	    length > MAX_RANGE_TLB_INVALIDATION_LENGTH) {
 		action[len++] = MAKE_INVAL_OP(XE_GUC_TLB_INVAL_FULL);
 	} else {
@@ -484,33 +480,33 @@ int xe_gt_tlb_invalidation_range(struct xe_gt *gt,
 
 	xe_gt_assert(gt, len <= MAX_TLB_INVALIDATION_LEN);
 
-	return send_tlb_invalidation(&gt->uc.guc, fence, action, len);
+	return send_tlb_inval(&gt->uc.guc, fence, action, len);
 }
 
 /**
- * xe_gt_tlb_invalidation_vm - Issue a TLB invalidation on this GT for a VM
+ * xe_gt_tlb_inval_vm - Issue a TLB invalidation on this GT for a VM
  * @gt: graphics tile
  * @vm: VM to invalidate
  *
  * Invalidate entire VM's address space
  */
-void xe_gt_tlb_invalidation_vm(struct xe_gt *gt, struct xe_vm *vm)
+void xe_gt_tlb_inval_vm(struct xe_gt *gt, struct xe_vm *vm)
 {
-	struct xe_gt_tlb_invalidation_fence fence;
+	struct xe_gt_tlb_inval_fence fence;
 	u64 range = 1ull << vm->xe->info.va_bits;
 	int ret;
 
-	xe_gt_tlb_invalidation_fence_init(gt, &fence, true);
+	xe_gt_tlb_inval_fence_init(gt, &fence, true);
 
-	ret = xe_gt_tlb_invalidation_range(gt, &fence, 0, range, vm->usm.asid);
+	ret = xe_gt_tlb_inval_range(gt, &fence, 0, range, vm->usm.asid);
 	if (ret < 0)
 		return;
 
-	xe_gt_tlb_invalidation_fence_wait(&fence);
+	xe_gt_tlb_inval_fence_wait(&fence);
 }
 
 /**
- * xe_guc_tlb_invalidation_done_handler - TLB invalidation done handler
+ * xe_guc_tlb_inval_done_handler - TLB invalidation done handler
  * @guc: guc
  * @msg: message indicating TLB invalidation done
  * @len: length of message
@@ -521,11 +517,11 @@ void xe_gt_tlb_invalidation_vm(struct xe_gt *gt, struct xe_vm *vm)
  *
  * Return: 0 on success, -EPROTO for malformed messages.
  */
-int xe_guc_tlb_invalidation_done_handler(struct xe_guc *guc, u32 *msg, u32 len)
+int xe_guc_tlb_inval_done_handler(struct xe_guc *guc, u32 *msg, u32 len)
 {
 	struct xe_gt *gt = guc_to_gt(guc);
 	struct xe_device *xe = gt_to_xe(gt);
-	struct xe_gt_tlb_invalidation_fence *fence, *next;
+	struct xe_gt_tlb_inval_fence *fence, *next;
 	unsigned long flags;
 
 	if (unlikely(len != 1))
@@ -546,74 +542,74 @@ int xe_guc_tlb_invalidation_done_handler(struct xe_guc *guc, u32 *msg, u32 len)
 	 * officially process the CT message like if racing against
 	 * process_g2h_msg().
 	 */
-	spin_lock_irqsave(&gt->tlb_invalidation.pending_lock, flags);
-	if (tlb_invalidation_seqno_past(gt, msg[0])) {
-		spin_unlock_irqrestore(&gt->tlb_invalidation.pending_lock, flags);
+	spin_lock_irqsave(&gt->tlb_inval.pending_lock, flags);
+	if (tlb_inval_seqno_past(gt, msg[0])) {
+		spin_unlock_irqrestore(&gt->tlb_inval.pending_lock, flags);
 		return 0;
 	}
 
-	WRITE_ONCE(gt->tlb_invalidation.seqno_recv, msg[0]);
+	WRITE_ONCE(gt->tlb_inval.seqno_recv, msg[0]);
 
 	list_for_each_entry_safe(fence, next,
-				 &gt->tlb_invalidation.pending_fences, link) {
-		trace_xe_gt_tlb_invalidation_fence_recv(xe, fence);
+				 &gt->tlb_inval.pending_fences, link) {
+		trace_xe_gt_tlb_inval_fence_recv(xe, fence);
 
-		if (!tlb_invalidation_seqno_past(gt, fence->seqno))
+		if (!tlb_inval_seqno_past(gt, fence->seqno))
 			break;
 
-		invalidation_fence_signal(xe, fence);
+		inval_fence_signal(xe, fence);
 	}
 
-	if (!list_empty(&gt->tlb_invalidation.pending_fences))
+	if (!list_empty(&gt->tlb_inval.pending_fences))
 		mod_delayed_work(system_wq,
-				 &gt->tlb_invalidation.fence_tdr,
+				 &gt->tlb_inval.fence_tdr,
 				 tlb_timeout_jiffies(gt));
 	else
-		cancel_delayed_work(&gt->tlb_invalidation.fence_tdr);
+		cancel_delayed_work(&gt->tlb_inval.fence_tdr);
 
-	spin_unlock_irqrestore(&gt->tlb_invalidation.pending_lock, flags);
+	spin_unlock_irqrestore(&gt->tlb_inval.pending_lock, flags);
 
 	return 0;
 }
 
 static const char *
-invalidation_fence_get_driver_name(struct dma_fence *dma_fence)
+inval_fence_get_driver_name(struct dma_fence *dma_fence)
 {
 	return "xe";
 }
 
 static const char *
-invalidation_fence_get_timeline_name(struct dma_fence *dma_fence)
+inval_fence_get_timeline_name(struct dma_fence *dma_fence)
 {
-	return "invalidation_fence";
+	return "inval_fence";
 }
 
-static const struct dma_fence_ops invalidation_fence_ops = {
-	.get_driver_name = invalidation_fence_get_driver_name,
-	.get_timeline_name = invalidation_fence_get_timeline_name,
+static const struct dma_fence_ops inval_fence_ops = {
+	.get_driver_name = inval_fence_get_driver_name,
+	.get_timeline_name = inval_fence_get_timeline_name,
 };
 
 /**
- * xe_gt_tlb_invalidation_fence_init - Initialize TLB invalidation fence
+ * xe_gt_tlb_inval_fence_init - Initialize TLB invalidation fence
  * @gt: GT
  * @fence: TLB invalidation fence to initialize
  * @stack: fence is stack variable
  *
- * Initialize TLB invalidation fence for use. xe_gt_tlb_invalidation_fence_fini
+ * Initialize TLB invalidation fence for use. xe_gt_tlb_inval_fence_fini
  * will be automatically called when fence is signalled (all fences must signal),
  * even on error.
  */
-void xe_gt_tlb_invalidation_fence_init(struct xe_gt *gt,
-				       struct xe_gt_tlb_invalidation_fence *fence,
-				       bool stack)
+void xe_gt_tlb_inval_fence_init(struct xe_gt *gt,
+				struct xe_gt_tlb_inval_fence *fence,
+				bool stack)
 {
 	xe_pm_runtime_get_noresume(gt_to_xe(gt));
 
-	spin_lock_irq(&gt->tlb_invalidation.lock);
-	dma_fence_init(&fence->base, &invalidation_fence_ops,
-		       &gt->tlb_invalidation.lock,
+	spin_lock_irq(&gt->tlb_inval.lock);
+	dma_fence_init(&fence->base, &inval_fence_ops,
+		       &gt->tlb_inval.lock,
 		       dma_fence_context_alloc(1), 1);
-	spin_unlock_irq(&gt->tlb_invalidation.lock);
+	spin_unlock_irq(&gt->tlb_inval.lock);
 	INIT_LIST_HEAD(&fence->link);
 	if (stack)
 		set_bit(FENCE_STACK_BIT, &fence->base.flags);
diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_inval.h b/drivers/gpu/drm/xe/xe_gt_tlb_inval.h
new file mode 100644
index 000000000000..b1258ac4e8fb
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_gt_tlb_inval.h
@@ -0,0 +1,41 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2023 Intel Corporation
+ */
+
+#ifndef _XE_GT_TLB_INVAL_H_
+#define _XE_GT_TLB_INVAL_H_
+
+#include <linux/types.h>
+
+#include "xe_gt_tlb_inval_types.h"
+
+struct xe_gt;
+struct xe_guc;
+struct xe_vm;
+struct xe_vma;
+
+int xe_gt_tlb_inval_init_early(struct xe_gt *gt);
+void xe_gt_tlb_inval_fini(struct xe_gt *gt);
+
+void xe_gt_tlb_inval_reset(struct xe_gt *gt);
+int xe_gt_tlb_inval_ggtt(struct xe_gt *gt);
+void xe_gt_tlb_inval_vm(struct xe_gt *gt, struct xe_vm *vm);
+int xe_gt_tlb_inval_all(struct xe_gt *gt, struct xe_gt_tlb_inval_fence *fence);
+int xe_gt_tlb_inval_range(struct xe_gt *gt,
+			  struct xe_gt_tlb_inval_fence *fence,
+			  u64 start, u64 end, u32 asid);
+int xe_guc_tlb_inval_done_handler(struct xe_guc *guc, u32 *msg, u32 len);
+
+void xe_gt_tlb_inval_fence_init(struct xe_gt *gt,
+				struct xe_gt_tlb_inval_fence *fence,
+				bool stack);
+void xe_gt_tlb_inval_fence_signal(struct xe_gt_tlb_inval_fence *fence);
+
+static inline void
+xe_gt_tlb_inval_fence_wait(struct xe_gt_tlb_inval_fence *fence)
+{
+	dma_fence_wait(&fence->base, false);
+}
+
+#endif	/* _XE_GT_TLB_INVAL_ */
diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_inval_job.c b/drivers/gpu/drm/xe/xe_gt_tlb_inval_job.c
index e9255be26467..41e0ea92ea5a 100644
--- a/drivers/gpu/drm/xe/xe_gt_tlb_inval_job.c
+++ b/drivers/gpu/drm/xe/xe_gt_tlb_inval_job.c
@@ -7,7 +7,7 @@
 #include "xe_dep_scheduler.h"
 #include "xe_exec_queue.h"
 #include "xe_gt.h"
-#include "xe_gt_tlb_invalidation.h"
+#include "xe_gt_tlb_inval.h"
 #include "xe_gt_tlb_inval_job.h"
 #include "xe_migrate.h"
 #include "xe_pm.h"
@@ -41,11 +41,11 @@ static struct dma_fence *xe_gt_tlb_inval_job_run(struct xe_dep_job *dep_job)
 {
 	struct xe_gt_tlb_inval_job *job =
 		container_of(dep_job, typeof(*job), dep);
-	struct xe_gt_tlb_invalidation_fence *ifence =
+	struct xe_gt_tlb_inval_fence *ifence =
 		container_of(job->fence, typeof(*ifence), base);
 
-	xe_gt_tlb_invalidation_range(job->gt, ifence, job->start,
-				     job->end, job->asid);
+	xe_gt_tlb_inval_range(job->gt, ifence, job->start,
+			      job->end, job->asid);
 
 	return job->fence;
 }
@@ -93,7 +93,7 @@ struct xe_gt_tlb_inval_job *xe_gt_tlb_inval_job_create(struct xe_exec_queue *q,
 		q->tlb_inval[xe_gt_tlb_inval_context(gt)].dep_scheduler;
 	struct drm_sched_entity *entity =
 		xe_dep_scheduler_entity(dep_scheduler);
-	struct xe_gt_tlb_invalidation_fence *ifence;
+	struct xe_gt_tlb_inval_fence *ifence;
 	int err;
 
 	job = kmalloc(sizeof(*job), GFP_KERNEL);
@@ -140,7 +140,7 @@ static void xe_gt_tlb_inval_job_destroy(struct kref *ref)
 {
 	struct xe_gt_tlb_inval_job *job = container_of(ref, typeof(*job),
 						       refcount);
-	struct xe_gt_tlb_invalidation_fence *ifence =
+	struct xe_gt_tlb_inval_fence *ifence =
 		container_of(job->fence, typeof(*ifence), base);
 	struct xe_device *xe = gt_to_xe(job->gt);
 	struct xe_exec_queue *q = job->q;
@@ -148,7 +148,7 @@ static void xe_gt_tlb_inval_job_destroy(struct kref *ref)
 	if (!job->fence_armed)
 		kfree(ifence);
 	else
-		/* Ref from xe_gt_tlb_invalidation_fence_init */
+		/* Ref from xe_gt_tlb_inval_fence_init */
 		dma_fence_put(job->fence);
 
 	drm_sched_job_cleanup(&job->dep.drm);
@@ -194,7 +194,7 @@ struct dma_fence *xe_gt_tlb_inval_job_push(struct xe_gt_tlb_inval_job *job,
 					   struct xe_migrate *m,
 					   struct dma_fence *fence)
 {
-	struct xe_gt_tlb_invalidation_fence *ifence =
+	struct xe_gt_tlb_inval_fence *ifence =
 		container_of(job->fence, typeof(*ifence), base);
 
 	if (!dma_fence_is_signaled(fence)) {
@@ -226,7 +226,7 @@ struct dma_fence *xe_gt_tlb_inval_job_push(struct xe_gt_tlb_inval_job *job,
 	xe_migrate_job_lock(m, job->q);
 
 	/* Creation ref pairs with put in xe_gt_tlb_inval_job_destroy */
-	xe_gt_tlb_invalidation_fence_init(job->gt, ifence, false);
+	xe_gt_tlb_inval_fence_init(job->gt, ifence, false);
 	dma_fence_get(job->fence);	/* Pairs with put in DRM scheduler */
 
 	drm_sched_job_arm(&job->dep.drm);
diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation_types.h b/drivers/gpu/drm/xe/xe_gt_tlb_inval_types.h
similarity index 55%
rename from drivers/gpu/drm/xe/xe_gt_tlb_invalidation_types.h
rename to drivers/gpu/drm/xe/xe_gt_tlb_inval_types.h
index de6e825e0851..919430359103 100644
--- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation_types.h
+++ b/drivers/gpu/drm/xe/xe_gt_tlb_inval_types.h
@@ -3,20 +3,20 @@
  * Copyright © 2023 Intel Corporation
  */
 
-#ifndef _XE_GT_TLB_INVALIDATION_TYPES_H_
-#define _XE_GT_TLB_INVALIDATION_TYPES_H_
+#ifndef _XE_GT_TLB_INVAL_TYPES_H_
+#define _XE_GT_TLB_INVAL_TYPES_H_
 
 #include <linux/dma-fence.h>
 
 struct xe_gt;
 
 /**
- * struct xe_gt_tlb_invalidation_fence - XE GT TLB invalidation fence
+ * struct xe_gt_tlb_inval_fence - XE GT TLB invalidation fence
  *
- * Optionally passed to xe_gt_tlb_invalidation and will be signaled upon TLB
+ * Optionally passed to xe_gt_tlb_inval and will be signaled upon TLB
  * invalidation completion.
  */
-struct xe_gt_tlb_invalidation_fence {
+struct xe_gt_tlb_inval_fence {
 	/** @base: dma fence base */
 	struct dma_fence base;
 	/** @gt: GT which fence belong to */
@@ -25,8 +25,8 @@ struct xe_gt_tlb_invalidation_fence {
 	struct list_head link;
 	/** @seqno: seqno of TLB invalidation to signal fence one */
 	int seqno;
-	/** @invalidation_time: time of TLB invalidation */
-	ktime_t invalidation_time;
+	/** @inval_time: time of TLB invalidation */
+	ktime_t inval_time;
 };
 
 #endif
diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h
deleted file mode 100644
index 3e4cff3922d6..000000000000
--- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h
+++ /dev/null
@@ -1,41 +0,0 @@
-/* SPDX-License-Identifier: MIT */
-/*
- * Copyright © 2023 Intel Corporation
- */
-
-#ifndef _XE_GT_TLB_INVALIDATION_H_
-#define _XE_GT_TLB_INVALIDATION_H_
-
-#include <linux/types.h>
-
-#include "xe_gt_tlb_invalidation_types.h"
-
-struct xe_gt;
-struct xe_guc;
-struct xe_vm;
-struct xe_vma;
-
-int xe_gt_tlb_invalidation_init_early(struct xe_gt *gt);
-void xe_gt_tlb_invalidation_fini(struct xe_gt *gt);
-
-void xe_gt_tlb_invalidation_reset(struct xe_gt *gt);
-int xe_gt_tlb_invalidation_ggtt(struct xe_gt *gt);
-void xe_gt_tlb_invalidation_vm(struct xe_gt *gt, struct xe_vm *vm);
-int xe_gt_tlb_invalidation_all(struct xe_gt *gt, struct xe_gt_tlb_invalidation_fence *fence);
-int xe_gt_tlb_invalidation_range(struct xe_gt *gt,
-				 struct xe_gt_tlb_invalidation_fence *fence,
-				 u64 start, u64 end, u32 asid);
-int xe_guc_tlb_invalidation_done_handler(struct xe_guc *guc, u32 *msg, u32 len);
-
-void xe_gt_tlb_invalidation_fence_init(struct xe_gt *gt,
-				       struct xe_gt_tlb_invalidation_fence *fence,
-				       bool stack);
-void xe_gt_tlb_invalidation_fence_signal(struct xe_gt_tlb_invalidation_fence *fence);
-
-static inline void
-xe_gt_tlb_invalidation_fence_wait(struct xe_gt_tlb_invalidation_fence *fence)
-{
-	dma_fence_wait(&fence->base, false);
-}
-
-#endif	/* _XE_GT_TLB_INVALIDATION_ */
diff --git a/drivers/gpu/drm/xe/xe_gt_types.h b/drivers/gpu/drm/xe/xe_gt_types.h
index 4dbc40fa6639..85cfcc49472b 100644
--- a/drivers/gpu/drm/xe/xe_gt_types.h
+++ b/drivers/gpu/drm/xe/xe_gt_types.h
@@ -185,38 +185,38 @@ struct xe_gt {
 		struct work_struct worker;
 	} reset;
 
-	/** @tlb_invalidation: TLB invalidation state */
+	/** @tlb_inval: TLB invalidation state */
 	struct {
-		/** @tlb_invalidation.seqno: TLB invalidation seqno, protected by CT lock */
+		/** @tlb_inval.seqno: TLB invalidation seqno, protected by CT lock */
 #define TLB_INVALIDATION_SEQNO_MAX	0x100000
 		int seqno;
 		/** @tlb_invalidation.seqno_lock: protects @tlb_invalidation.seqno */
 		struct mutex seqno_lock;
 		/**
-		 * @tlb_invalidation.seqno_recv: last received TLB invalidation seqno,
+		 * @tlb_inval.seqno_recv: last received TLB invalidation seqno,
 		 * protected by CT lock
 		 */
 		int seqno_recv;
 		/**
-		 * @tlb_invalidation.pending_fences: list of pending fences waiting TLB
+		 * @tlb_inval.pending_fences: list of pending fences waiting TLB
 		 * invaliations, protected by CT lock
 		 */
 		struct list_head pending_fences;
 		/**
-		 * @tlb_invalidation.pending_lock: protects @tlb_invalidation.pending_fences
-		 * and updating @tlb_invalidation.seqno_recv.
+		 * @tlb_inval.pending_lock: protects @tlb_inval.pending_fences
+		 * and updating @tlb_inval.seqno_recv.
 		 */
 		spinlock_t pending_lock;
 		/**
-		 * @tlb_invalidation.fence_tdr: schedules a delayed call to
+		 * @tlb_inval.fence_tdr: schedules a delayed call to
 		 * xe_gt_tlb_fence_timeout after the timeut interval is over.
 		 */
 		struct delayed_work fence_tdr;
 		/** @wtlb_invalidation.wq: schedules GT TLB invalidation jobs */
 		struct workqueue_struct *job_wq;
-		/** @tlb_invalidation.lock: protects TLB invalidation fences */
+		/** @tlb_inval.lock: protects TLB invalidation fences */
 		spinlock_t lock;
-	} tlb_invalidation;
+	} tlb_inval;
 
 	/**
 	 * @ccs_mode: Number of compute engines enabled.
diff --git a/drivers/gpu/drm/xe/xe_guc_ct.c b/drivers/gpu/drm/xe/xe_guc_ct.c
index 3f4e6a46ff16..9131d121d941 100644
--- a/drivers/gpu/drm/xe/xe_guc_ct.c
+++ b/drivers/gpu/drm/xe/xe_guc_ct.c
@@ -26,7 +26,7 @@
 #include "xe_gt_sriov_pf_control.h"
 #include "xe_gt_sriov_pf_monitor.h"
 #include "xe_gt_sriov_printk.h"
-#include "xe_gt_tlb_invalidation.h"
+#include "xe_gt_tlb_inval.h"
 #include "xe_guc.h"
 #include "xe_guc_log.h"
 #include "xe_guc_relay.h"
@@ -1416,8 +1416,7 @@ static int process_g2h_msg(struct xe_guc_ct *ct, u32 *msg, u32 len)
 		ret = xe_guc_pagefault_handler(guc, payload, adj_len);
 		break;
 	case XE_GUC_ACTION_TLB_INVALIDATION_DONE:
-		ret = xe_guc_tlb_invalidation_done_handler(guc, payload,
-							   adj_len);
+		ret = xe_guc_tlb_inval_done_handler(guc, payload, adj_len);
 		break;
 	case XE_GUC_ACTION_ACCESS_COUNTER_NOTIFY:
 		ret = xe_guc_access_counter_notify_handler(guc, payload,
@@ -1618,8 +1617,7 @@ static void g2h_fast_path(struct xe_guc_ct *ct, u32 *msg, u32 len)
 		break;
 	case XE_GUC_ACTION_TLB_INVALIDATION_DONE:
 		__g2h_release_space(ct, len);
-		ret = xe_guc_tlb_invalidation_done_handler(guc, payload,
-							   adj_len);
+		ret = xe_guc_tlb_inval_done_handler(guc, payload, adj_len);
 		break;
 	default:
 		xe_gt_warn(gt, "NOT_POSSIBLE");
diff --git a/drivers/gpu/drm/xe/xe_lmtt.c b/drivers/gpu/drm/xe/xe_lmtt.c
index a78c9d474a6e..e5aba03ff8ac 100644
--- a/drivers/gpu/drm/xe/xe_lmtt.c
+++ b/drivers/gpu/drm/xe/xe_lmtt.c
@@ -11,7 +11,7 @@
 
 #include "xe_assert.h"
 #include "xe_bo.h"
-#include "xe_gt_tlb_invalidation.h"
+#include "xe_gt_tlb_inval.h"
 #include "xe_lmtt.h"
 #include "xe_map.h"
 #include "xe_mmio.h"
@@ -228,8 +228,8 @@ void xe_lmtt_init_hw(struct xe_lmtt *lmtt)
 
 static int lmtt_invalidate_hw(struct xe_lmtt *lmtt)
 {
-	struct xe_gt_tlb_invalidation_fence fences[XE_MAX_GT_PER_TILE];
-	struct xe_gt_tlb_invalidation_fence *fence = fences;
+	struct xe_gt_tlb_inval_fence fences[XE_MAX_GT_PER_TILE];
+	struct xe_gt_tlb_inval_fence *fence = fences;
 	struct xe_tile *tile = lmtt_to_tile(lmtt);
 	struct xe_gt *gt;
 	int result = 0;
@@ -237,8 +237,8 @@ static int lmtt_invalidate_hw(struct xe_lmtt *lmtt)
 	u8 id;
 
 	for_each_gt_on_tile(gt, tile, id) {
-		xe_gt_tlb_invalidation_fence_init(gt, fence, true);
-		err = xe_gt_tlb_invalidation_all(gt, fence);
+		xe_gt_tlb_inval_fence_init(gt, fence, true);
+		err = xe_gt_tlb_inval_all(gt, fence);
 		result = result ?: err;
 		fence++;
 	}
@@ -252,7 +252,7 @@ static int lmtt_invalidate_hw(struct xe_lmtt *lmtt)
 	 */
 	fence = fences;
 	for_each_gt_on_tile(gt, tile, id)
-		xe_gt_tlb_invalidation_fence_wait(fence++);
+		xe_gt_tlb_inval_fence_wait(fence++);
 
 	return result;
 }
diff --git a/drivers/gpu/drm/xe/xe_pci.c b/drivers/gpu/drm/xe/xe_pci.c
index d1590d67e649..c28bbc5eb9b8 100644
--- a/drivers/gpu/drm/xe/xe_pci.c
+++ b/drivers/gpu/drm/xe/xe_pci.c
@@ -56,7 +56,7 @@ static const struct xe_graphics_desc graphics_xelp = {
 };
 
 #define XE_HP_FEATURES \
-	.has_range_tlb_invalidation = true, \
+	.has_range_tlb_inval = true, \
 	.va_bits = 48, \
 	.vm_max_level = 3
 
@@ -104,7 +104,7 @@ static const struct xe_graphics_desc graphics_xelpg = {
 	.has_asid = 1, \
 	.has_atomic_enable_pte_bit = 1, \
 	.has_flat_ccs = 1, \
-	.has_range_tlb_invalidation = 1, \
+	.has_range_tlb_inval = 1, \
 	.has_usm = 1, \
 	.has_64bit_timestamp = 1, \
 	.va_bits = 48, \
@@ -713,7 +713,7 @@ static int xe_info_init(struct xe_device *xe,
 	/* Runtime detection may change this later */
 	xe->info.has_flat_ccs = graphics_desc->has_flat_ccs;
 
-	xe->info.has_range_tlb_invalidation = graphics_desc->has_range_tlb_invalidation;
+	xe->info.has_range_tlb_inval = graphics_desc->has_range_tlb_inval;
 	xe->info.has_usm = graphics_desc->has_usm;
 	xe->info.has_64bit_timestamp = graphics_desc->has_64bit_timestamp;
 
diff --git a/drivers/gpu/drm/xe/xe_pci_types.h b/drivers/gpu/drm/xe/xe_pci_types.h
index 4de6f69ed975..b63002fc0f67 100644
--- a/drivers/gpu/drm/xe/xe_pci_types.h
+++ b/drivers/gpu/drm/xe/xe_pci_types.h
@@ -60,7 +60,7 @@ struct xe_graphics_desc {
 	u8 has_atomic_enable_pte_bit:1;
 	u8 has_flat_ccs:1;
 	u8 has_indirect_ring_state:1;
-	u8 has_range_tlb_invalidation:1;
+	u8 has_range_tlb_inval:1;
 	u8 has_usm:1;
 	u8 has_64bit_timestamp:1;
 };
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index e35c6d4def20..d290e54134f3 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -7,7 +7,7 @@
 
 #include "xe_bo.h"
 #include "xe_gt_stats.h"
-#include "xe_gt_tlb_invalidation.h"
+#include "xe_gt_tlb_inval.h"
 #include "xe_migrate.h"
 #include "xe_module.h"
 #include "xe_pm.h"
@@ -225,7 +225,7 @@ static void xe_svm_invalidate(struct drm_gpusvm *gpusvm,
 
 	xe_device_wmb(xe);
 
-	err = xe_vm_range_tilemask_tlb_invalidation(vm, adj_start, adj_end, tile_mask);
+	err = xe_vm_range_tilemask_tlb_inval(vm, adj_start, adj_end, tile_mask);
 	WARN_ON_ONCE(err);
 
 range_notifier_event_end:
diff --git a/drivers/gpu/drm/xe/xe_trace.h b/drivers/gpu/drm/xe/xe_trace.h
index 21486a6f693a..36538f50d06f 100644
--- a/drivers/gpu/drm/xe/xe_trace.h
+++ b/drivers/gpu/drm/xe/xe_trace.h
@@ -14,7 +14,7 @@
 
 #include "xe_exec_queue_types.h"
 #include "xe_gpu_scheduler_types.h"
-#include "xe_gt_tlb_invalidation_types.h"
+#include "xe_gt_tlb_inval_types.h"
 #include "xe_gt_types.h"
 #include "xe_guc_exec_queue_types.h"
 #include "xe_sched_job.h"
@@ -25,13 +25,13 @@
 #define __dev_name_gt(gt)	__dev_name_xe(gt_to_xe((gt)))
 #define __dev_name_eq(q)	__dev_name_gt((q)->gt)
 
-DECLARE_EVENT_CLASS(xe_gt_tlb_invalidation_fence,
-		    TP_PROTO(struct xe_device *xe, struct xe_gt_tlb_invalidation_fence *fence),
+DECLARE_EVENT_CLASS(xe_gt_tlb_inval_fence,
+		    TP_PROTO(struct xe_device *xe, struct xe_gt_tlb_inval_fence *fence),
 		    TP_ARGS(xe, fence),
 
 		    TP_STRUCT__entry(
 			     __string(dev, __dev_name_xe(xe))
-			     __field(struct xe_gt_tlb_invalidation_fence *, fence)
+			     __field(struct xe_gt_tlb_inval_fence *, fence)
 			     __field(int, seqno)
 			     ),
 
@@ -45,23 +45,23 @@ DECLARE_EVENT_CLASS(xe_gt_tlb_invalidation_fence,
 			      __get_str(dev), __entry->fence, __entry->seqno)
 );
 
-DEFINE_EVENT(xe_gt_tlb_invalidation_fence, xe_gt_tlb_invalidation_fence_send,
-	     TP_PROTO(struct xe_device *xe, struct xe_gt_tlb_invalidation_fence *fence),
+DEFINE_EVENT(xe_gt_tlb_inval_fence, xe_gt_tlb_inval_fence_send,
+	     TP_PROTO(struct xe_device *xe, struct xe_gt_tlb_inval_fence *fence),
 	     TP_ARGS(xe, fence)
 );
 
-DEFINE_EVENT(xe_gt_tlb_invalidation_fence, xe_gt_tlb_invalidation_fence_recv,
-	     TP_PROTO(struct xe_device *xe, struct xe_gt_tlb_invalidation_fence *fence),
+DEFINE_EVENT(xe_gt_tlb_inval_fence, xe_gt_tlb_inval_fence_recv,
+	     TP_PROTO(struct xe_device *xe, struct xe_gt_tlb_inval_fence *fence),
 	     TP_ARGS(xe, fence)
 );
 
-DEFINE_EVENT(xe_gt_tlb_invalidation_fence, xe_gt_tlb_invalidation_fence_signal,
-	     TP_PROTO(struct xe_device *xe, struct xe_gt_tlb_invalidation_fence *fence),
+DEFINE_EVENT(xe_gt_tlb_inval_fence, xe_gt_tlb_inval_fence_signal,
+	     TP_PROTO(struct xe_device *xe, struct xe_gt_tlb_inval_fence *fence),
 	     TP_ARGS(xe, fence)
 );
 
-DEFINE_EVENT(xe_gt_tlb_invalidation_fence, xe_gt_tlb_invalidation_fence_timeout,
-	     TP_PROTO(struct xe_device *xe, struct xe_gt_tlb_invalidation_fence *fence),
+DEFINE_EVENT(xe_gt_tlb_inval_fence, xe_gt_tlb_inval_fence_timeout,
+	     TP_PROTO(struct xe_device *xe, struct xe_gt_tlb_inval_fence *fence),
 	     TP_ARGS(xe, fence)
 );
 
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index f35d69c0b4c6..fd42aa1b7fa0 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -28,7 +28,7 @@
 #include "xe_drm_client.h"
 #include "xe_exec_queue.h"
 #include "xe_gt_pagefault.h"
-#include "xe_gt_tlb_invalidation.h"
+#include "xe_gt_tlb_inval.h"
 #include "xe_migrate.h"
 #include "xe_pat.h"
 #include "xe_pm.h"
@@ -1892,7 +1892,7 @@ static void xe_vm_close(struct xe_vm *vm)
 					xe_pt_clear(xe, vm->pt_root[id]);
 
 			for_each_gt(gt, xe, id)
-				xe_gt_tlb_invalidation_vm(gt, vm);
+				xe_gt_tlb_inval_vm(gt, vm);
 		}
 	}
 
@@ -3875,7 +3875,7 @@ void xe_vm_unlock(struct xe_vm *vm)
 }
 
 /**
- * xe_vm_range_tilemask_tlb_invalidation - Issue a TLB invalidation on this tilemask for an
+ * xe_vm_range_tilemask_tlb_inval - Issue a TLB invalidation on this tilemask for an
  * address range
  * @vm: The VM
  * @start: start address
@@ -3886,10 +3886,11 @@ void xe_vm_unlock(struct xe_vm *vm)
  *
  * Returns 0 for success, negative error code otherwise.
  */
-int xe_vm_range_tilemask_tlb_invalidation(struct xe_vm *vm, u64 start,
-					  u64 end, u8 tile_mask)
+int xe_vm_range_tilemask_tlb_inval(struct xe_vm *vm, u64 start,
+				   u64 end, u8 tile_mask)
 {
-	struct xe_gt_tlb_invalidation_fence fence[XE_MAX_TILES_PER_DEVICE * XE_MAX_GT_PER_TILE];
+	struct xe_gt_tlb_inval_fence
+		fence[XE_MAX_TILES_PER_DEVICE * XE_MAX_GT_PER_TILE];
 	struct xe_tile *tile;
 	u32 fence_id = 0;
 	u8 id;
@@ -3899,39 +3900,34 @@ int xe_vm_range_tilemask_tlb_invalidation(struct xe_vm *vm, u64 start,
 		return 0;
 
 	for_each_tile(tile, vm->xe, id) {
-		if (tile_mask & BIT(id)) {
-			xe_gt_tlb_invalidation_fence_init(tile->primary_gt,
-							  &fence[fence_id], true);
-
-			err = xe_gt_tlb_invalidation_range(tile->primary_gt,
-							   &fence[fence_id],
-							   start,
-							   end,
-							   vm->usm.asid);
-			if (err)
-				goto wait;
-			++fence_id;
+		if (!(tile_mask & BIT(id)))
+			continue;
 
-			if (!tile->media_gt)
-				continue;
+		xe_gt_tlb_inval_fence_init(tile->primary_gt,
+					   &fence[fence_id], true);
 
-			xe_gt_tlb_invalidation_fence_init(tile->media_gt,
-							  &fence[fence_id], true);
+		err = xe_gt_tlb_inval_range(tile->primary_gt, &fence[fence_id],
+					    start, end, vm->usm.asid);
+		if (err)
+			goto wait;
+		++fence_id;
 
-			err = xe_gt_tlb_invalidation_range(tile->media_gt,
-							   &fence[fence_id],
-							   start,
-							   end,
-							   vm->usm.asid);
-			if (err)
-				goto wait;
-			++fence_id;
-		}
+		if (!tile->media_gt)
+			continue;
+
+		xe_gt_tlb_inval_fence_init(tile->media_gt,
+					   &fence[fence_id], true);
+
+		err = xe_gt_tlb_inval_range(tile->media_gt, &fence[fence_id],
+					    start, end, vm->usm.asid);
+		if (err)
+			goto wait;
+		++fence_id;
 	}
 
 wait:
 	for (id = 0; id < fence_id; ++id)
-		xe_gt_tlb_invalidation_fence_wait(&fence[id]);
+		xe_gt_tlb_inval_fence_wait(&fence[id]);
 
 	return err;
 }
@@ -3990,8 +3986,8 @@ int xe_vm_invalidate_vma(struct xe_vma *vma)
 
 	xe_device_wmb(xe);
 
-	ret = xe_vm_range_tilemask_tlb_invalidation(xe_vma_vm(vma), xe_vma_start(vma),
-						    xe_vma_end(vma), tile_mask);
+	ret = xe_vm_range_tilemask_tlb_inval(xe_vma_vm(vma), xe_vma_start(vma),
+					     xe_vma_end(vma), tile_mask);
 
 	/* WRITE_ONCE pairs with READ_ONCE in xe_vm_has_valid_gpu_mapping() */
 	WRITE_ONCE(vma->tile_invalidated, vma->tile_mask);
diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
index 2f213737c7e5..93a4ac79b86e 100644
--- a/drivers/gpu/drm/xe/xe_vm.h
+++ b/drivers/gpu/drm/xe/xe_vm.h
@@ -228,8 +228,8 @@ struct dma_fence *xe_vm_range_rebind(struct xe_vm *vm,
 struct dma_fence *xe_vm_range_unbind(struct xe_vm *vm,
 				     struct xe_svm_range *range);
 
-int xe_vm_range_tilemask_tlb_invalidation(struct xe_vm *vm, u64 start,
-					  u64 end, u8 tile_mask);
+int xe_vm_range_tilemask_tlb_inval(struct xe_vm *vm, u64 start,
+				   u64 end, u8 tile_mask);
 
 int xe_vm_invalidate_vma(struct xe_vma *vma);
 
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH 4/9] drm/xe: Add xe_tlb_inval structure
  2025-08-25 17:57 [PATCH 0/9] Add TLB invalidation abstraction Stuart Summers
                   ` (2 preceding siblings ...)
  2025-08-25 17:57 ` [PATCH 3/9] drm/xe: s/tlb_invalidation/tlb_inval Stuart Summers
@ 2025-08-25 17:57 ` Stuart Summers
  2025-08-25 17:57 ` [PATCH 5/9] drm/xe: Add xe_gt_tlb_invalidation_done_handler Stuart Summers
                   ` (8 subsequent siblings)
  12 siblings, 0 replies; 23+ messages in thread
From: Stuart Summers @ 2025-08-25 17:57 UTC (permalink / raw)
  Cc: intel-xe, matthew.brost, farah.kassabri, Stuart Summers

From: Matthew Brost <matthew.brost@intel.com>

Extract TLB invalidation state into a structure to decouple TLB
invalidations from the GT, allowing the structure to be embedded
anywhere in the driver.

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Stuart Summers <stuart.summers@intel.com>
Reviewed-by: Stuart Summers <stuart.summers@intel.com>
---
 drivers/gpu/drm/xe/xe_gt_tlb_inval_types.h | 34 ++++++++++++++++++++++
 drivers/gpu/drm/xe/xe_gt_types.h           | 33 ++-------------------
 2 files changed, 36 insertions(+), 31 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_inval_types.h b/drivers/gpu/drm/xe/xe_gt_tlb_inval_types.h
index 919430359103..442f72b78ccf 100644
--- a/drivers/gpu/drm/xe/xe_gt_tlb_inval_types.h
+++ b/drivers/gpu/drm/xe/xe_gt_tlb_inval_types.h
@@ -6,10 +6,44 @@
 #ifndef _XE_GT_TLB_INVAL_TYPES_H_
 #define _XE_GT_TLB_INVAL_TYPES_H_
 
+#include <linux/workqueue.h>
 #include <linux/dma-fence.h>
 
 struct xe_gt;
 
+/** struct xe_tlb_inval - TLB invalidation client */
+struct xe_tlb_inval {
+	/** @tlb_inval.seqno: TLB invalidation seqno, protected by CT lock */
+#define TLB_INVALIDATION_SEQNO_MAX	0x100000
+	int seqno;
+	/** @tlb_invalidation.seqno_lock: protects @tlb_invalidation.seqno */
+	struct mutex seqno_lock;
+	/**
+	 * @tlb_inval.seqno_recv: last received TLB invalidation seqno,
+	 * protected by CT lock
+	 */
+	int seqno_recv;
+	/**
+	 * @tlb_inval.pending_fences: list of pending fences waiting TLB
+	 * invaliations, protected by CT lock
+	 */
+	struct list_head pending_fences;
+	/**
+	 * @tlb_inval.pending_lock: protects @tlb_inval.pending_fences
+	 * and updating @tlb_inval.seqno_recv.
+	 */
+	spinlock_t pending_lock;
+	/**
+	 * @tlb_inval.fence_tdr: schedules a delayed call to
+	 * xe_gt_tlb_fence_timeout after the timeut interval is over.
+	 */
+	struct delayed_work fence_tdr;
+	/** @wtlb_invalidation.wq: schedules GT TLB invalidation jobs */
+	struct workqueue_struct *job_wq;
+	/** @tlb_inval.lock: protects TLB invalidation fences */
+	spinlock_t lock;
+};
+
 /**
  * struct xe_gt_tlb_inval_fence - XE GT TLB invalidation fence
  *
diff --git a/drivers/gpu/drm/xe/xe_gt_types.h b/drivers/gpu/drm/xe/xe_gt_types.h
index 85cfcc49472b..7dc5a3f310f1 100644
--- a/drivers/gpu/drm/xe/xe_gt_types.h
+++ b/drivers/gpu/drm/xe/xe_gt_types.h
@@ -12,6 +12,7 @@
 #include "xe_gt_sriov_pf_types.h"
 #include "xe_gt_sriov_vf_types.h"
 #include "xe_gt_stats_types.h"
+#include "xe_gt_tlb_inval_types.h"
 #include "xe_hw_engine_types.h"
 #include "xe_hw_fence_types.h"
 #include "xe_oa_types.h"
@@ -186,37 +187,7 @@ struct xe_gt {
 	} reset;
 
 	/** @tlb_inval: TLB invalidation state */
-	struct {
-		/** @tlb_inval.seqno: TLB invalidation seqno, protected by CT lock */
-#define TLB_INVALIDATION_SEQNO_MAX	0x100000
-		int seqno;
-		/** @tlb_invalidation.seqno_lock: protects @tlb_invalidation.seqno */
-		struct mutex seqno_lock;
-		/**
-		 * @tlb_inval.seqno_recv: last received TLB invalidation seqno,
-		 * protected by CT lock
-		 */
-		int seqno_recv;
-		/**
-		 * @tlb_inval.pending_fences: list of pending fences waiting TLB
-		 * invaliations, protected by CT lock
-		 */
-		struct list_head pending_fences;
-		/**
-		 * @tlb_inval.pending_lock: protects @tlb_inval.pending_fences
-		 * and updating @tlb_inval.seqno_recv.
-		 */
-		spinlock_t pending_lock;
-		/**
-		 * @tlb_inval.fence_tdr: schedules a delayed call to
-		 * xe_gt_tlb_fence_timeout after the timeut interval is over.
-		 */
-		struct delayed_work fence_tdr;
-		/** @wtlb_invalidation.wq: schedules GT TLB invalidation jobs */
-		struct workqueue_struct *job_wq;
-		/** @tlb_inval.lock: protects TLB invalidation fences */
-		spinlock_t lock;
-	} tlb_inval;
+	struct xe_tlb_inval tlb_inval;
 
 	/**
 	 * @ccs_mode: Number of compute engines enabled.
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH 5/9] drm/xe: Add xe_gt_tlb_invalidation_done_handler
  2025-08-25 17:57 [PATCH 0/9] Add TLB invalidation abstraction Stuart Summers
                   ` (3 preceding siblings ...)
  2025-08-25 17:57 ` [PATCH 4/9] drm/xe: Add xe_tlb_inval structure Stuart Summers
@ 2025-08-25 17:57 ` Stuart Summers
  2025-08-25 17:57 ` [PATCH 6/9] drm/xe: Decouple TLB invalidations from GT Stuart Summers
                   ` (7 subsequent siblings)
  12 siblings, 0 replies; 23+ messages in thread
From: Stuart Summers @ 2025-08-25 17:57 UTC (permalink / raw)
  Cc: intel-xe, matthew.brost, farah.kassabri, Stuart Summers

From: Matthew Brost <matthew.brost@intel.com>

Decouple GT TLB seqno handling from G2H handler.

v2:
 - Add kernel doc

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Stuart Summers <stuart.summers@intel.com>
---
 drivers/gpu/drm/xe/xe_gt_tlb_inval.c | 47 ++++++++++++++++++----------
 1 file changed, 30 insertions(+), 17 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_inval.c b/drivers/gpu/drm/xe/xe_gt_tlb_inval.c
index 1571fd917830..37b3b45ec230 100644
--- a/drivers/gpu/drm/xe/xe_gt_tlb_inval.c
+++ b/drivers/gpu/drm/xe/xe_gt_tlb_inval.c
@@ -506,27 +506,18 @@ void xe_gt_tlb_inval_vm(struct xe_gt *gt, struct xe_vm *vm)
 }
 
 /**
- * xe_guc_tlb_inval_done_handler - TLB invalidation done handler
- * @guc: guc
- * @msg: message indicating TLB invalidation done
- * @len: length of message
- *
- * Parse seqno of TLB invalidation, wake any waiters for seqno, and signal any
- * invalidation fences for seqno. Algorithm for this depends on seqno being
- * received in-order and asserts this assumption.
+ * xe_gt_tlb_inval_done_handler - GT TLB invalidation done handler
+ * @gt: gt
+ * @seqno: seqno of invalidation that is done
  *
- * Return: 0 on success, -EPROTO for malformed messages.
+ * Update recv seqno, signal any GT TLB invalidation fences, and restart TDR
  */
-int xe_guc_tlb_inval_done_handler(struct xe_guc *guc, u32 *msg, u32 len)
+static void xe_gt_tlb_inval_done_handler(struct xe_gt *gt, int seqno)
 {
-	struct xe_gt *gt = guc_to_gt(guc);
 	struct xe_device *xe = gt_to_xe(gt);
 	struct xe_gt_tlb_inval_fence *fence, *next;
 	unsigned long flags;
 
-	if (unlikely(len != 1))
-		return -EPROTO;
-
 	/*
 	 * This can also be run both directly from the IRQ handler and also in
 	 * process_g2h_msg(). Only one may process any individual CT message,
@@ -543,12 +534,12 @@ int xe_guc_tlb_inval_done_handler(struct xe_guc *guc, u32 *msg, u32 len)
 	 * process_g2h_msg().
 	 */
 	spin_lock_irqsave(&gt->tlb_inval.pending_lock, flags);
-	if (tlb_inval_seqno_past(gt, msg[0])) {
+	if (tlb_inval_seqno_past(gt, seqno)) {
 		spin_unlock_irqrestore(&gt->tlb_inval.pending_lock, flags);
-		return 0;
+		return;
 	}
 
-	WRITE_ONCE(gt->tlb_inval.seqno_recv, msg[0]);
+	WRITE_ONCE(gt->tlb_inval.seqno_recv, seqno);
 
 	list_for_each_entry_safe(fence, next,
 				 &gt->tlb_inval.pending_fences, link) {
@@ -568,6 +559,28 @@ int xe_guc_tlb_inval_done_handler(struct xe_guc *guc, u32 *msg, u32 len)
 		cancel_delayed_work(&gt->tlb_inval.fence_tdr);
 
 	spin_unlock_irqrestore(&gt->tlb_inval.pending_lock, flags);
+}
+
+/**
+ * xe_guc_tlb_inval_done_handler - TLB invalidation done handler
+ * @guc: guc
+ * @msg: message indicating TLB invalidation done
+ * @len: length of message
+ *
+ * Parse seqno of TLB invalidation, wake any waiters for seqno, and signal any
+ * invalidation fences for seqno. Algorithm for this depends on seqno being
+ * received in-order and asserts this assumption.
+ *
+ * Return: 0 on success, -EPROTO for malformed messages.
+ */
+int xe_guc_tlb_inval_done_handler(struct xe_guc *guc, u32 *msg, u32 len)
+{
+	struct xe_gt *gt = guc_to_gt(guc);
+
+	if (unlikely(len != 1))
+		return -EPROTO;
+
+	xe_gt_tlb_inval_done_handler(gt, msg[0]);
 
 	return 0;
 }
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH 6/9] drm/xe: Decouple TLB invalidations from GT
  2025-08-25 17:57 [PATCH 0/9] Add TLB invalidation abstraction Stuart Summers
                   ` (4 preceding siblings ...)
  2025-08-25 17:57 ` [PATCH 5/9] drm/xe: Add xe_gt_tlb_invalidation_done_handler Stuart Summers
@ 2025-08-25 17:57 ` Stuart Summers
  2025-08-25 17:57 ` [PATCH 7/9] drm/xe: Prep TLB invalidation fence before sending Stuart Summers
                   ` (6 subsequent siblings)
  12 siblings, 0 replies; 23+ messages in thread
From: Stuart Summers @ 2025-08-25 17:57 UTC (permalink / raw)
  Cc: intel-xe, matthew.brost, farah.kassabri, Stuart Summers

From: Matthew Brost <matthew.brost@intel.com>

Decouple TLB invalidations from the GT by updating the TLB invalidation
layer to accept a `struct xe_tlb_inval` instead of a `struct xe_gt`.
Also, rename *gt_tlb* to *tlb*. The internals of the TLB invalidation
code still operate on a GT, but this is now hidden from the rest of the
driver.

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Stuart Summers <stuart.summers@intel.com>
Reviewed-by: Stuart Summers <stuart.summers@intel.com>
---
 drivers/gpu/drm/xe/Makefile                   |   4 +-
 drivers/gpu/drm/xe/xe_ggtt.c                  |   4 +-
 drivers/gpu/drm/xe/xe_gt.c                    |   6 +-
 drivers/gpu/drm/xe/xe_gt_tlb_inval.h          |  41 -----
 drivers/gpu/drm/xe/xe_gt_tlb_inval_job.h      |  34 ----
 drivers/gpu/drm/xe/xe_gt_types.h              |   2 +-
 drivers/gpu/drm/xe/xe_guc_ct.c                |   2 +-
 drivers/gpu/drm/xe/xe_lmtt.c                  |  12 +-
 drivers/gpu/drm/xe/xe_migrate.h               |  10 +-
 drivers/gpu/drm/xe/xe_pt.c                    |  63 ++++---
 drivers/gpu/drm/xe/xe_svm.c                   |   1 -
 .../xe/{xe_gt_tlb_inval.c => xe_tlb_inval.c}  | 142 +++++++++-------
 drivers/gpu/drm/xe/xe_tlb_inval.h             |  41 +++++
 ..._gt_tlb_inval_job.c => xe_tlb_inval_job.c} | 154 +++++++++---------
 drivers/gpu/drm/xe/xe_tlb_inval_job.h         |  33 ++++
 ...tlb_inval_types.h => xe_tlb_inval_types.h} |  35 ++--
 drivers/gpu/drm/xe/xe_trace.h                 |  24 +--
 drivers/gpu/drm/xe/xe_vm.c                    |  26 +--
 18 files changed, 330 insertions(+), 304 deletions(-)
 delete mode 100644 drivers/gpu/drm/xe/xe_gt_tlb_inval.h
 delete mode 100644 drivers/gpu/drm/xe/xe_gt_tlb_inval_job.h
 rename drivers/gpu/drm/xe/{xe_gt_tlb_inval.c => xe_tlb_inval.c} (80%)
 create mode 100644 drivers/gpu/drm/xe/xe_tlb_inval.h
 rename drivers/gpu/drm/xe/{xe_gt_tlb_inval_job.c => xe_tlb_inval_job.c} (51%)
 create mode 100644 drivers/gpu/drm/xe/xe_tlb_inval_job.h
 rename drivers/gpu/drm/xe/{xe_gt_tlb_inval_types.h => xe_tlb_inval_types.h} (56%)

diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
index 0a36b2463434..e4a363489072 100644
--- a/drivers/gpu/drm/xe/Makefile
+++ b/drivers/gpu/drm/xe/Makefile
@@ -61,8 +61,6 @@ xe-y += xe_bb.o \
 	xe_gt_pagefault.o \
 	xe_gt_sysfs.o \
 	xe_gt_throttle.o \
-	xe_gt_tlb_inval.o \
-	xe_gt_tlb_inval_job.o \
 	xe_gt_topology.o \
 	xe_guc.o \
 	xe_guc_ads.o \
@@ -117,6 +115,8 @@ xe-y += xe_bb.o \
 	xe_sync.o \
 	xe_tile.o \
 	xe_tile_sysfs.o \
+	xe_tlb_inval.o \
+	xe_tlb_inval_job.o \
 	xe_trace.o \
 	xe_trace_bo.o \
 	xe_trace_guc.o \
diff --git a/drivers/gpu/drm/xe/xe_ggtt.c b/drivers/gpu/drm/xe/xe_ggtt.c
index c3e46c270117..71c7690a92b3 100644
--- a/drivers/gpu/drm/xe/xe_ggtt.c
+++ b/drivers/gpu/drm/xe/xe_ggtt.c
@@ -23,13 +23,13 @@
 #include "xe_device.h"
 #include "xe_gt.h"
 #include "xe_gt_printk.h"
-#include "xe_gt_tlb_inval.h"
 #include "xe_map.h"
 #include "xe_mmio.h"
 #include "xe_pm.h"
 #include "xe_res_cursor.h"
 #include "xe_sriov.h"
 #include "xe_tile_sriov_vf.h"
+#include "xe_tlb_inval.h"
 #include "xe_wa.h"
 #include "xe_wopcm.h"
 
@@ -438,7 +438,7 @@ static void ggtt_invalidate_gt_tlb(struct xe_gt *gt)
 	if (!gt)
 		return;
 
-	err = xe_gt_tlb_inval_ggtt(gt);
+	err = xe_tlb_inval_ggtt(&gt->tlb_inval);
 	xe_gt_WARN(gt, err, "Failed to invalidate GGTT (%pe)", ERR_PTR(err));
 }
 
diff --git a/drivers/gpu/drm/xe/xe_gt.c b/drivers/gpu/drm/xe/xe_gt.c
index 9a4639732bd7..67ee7cdbd6ec 100644
--- a/drivers/gpu/drm/xe/xe_gt.c
+++ b/drivers/gpu/drm/xe/xe_gt.c
@@ -37,7 +37,6 @@
 #include "xe_gt_sriov_pf.h"
 #include "xe_gt_sriov_vf.h"
 #include "xe_gt_sysfs.h"
-#include "xe_gt_tlb_inval.h"
 #include "xe_gt_topology.h"
 #include "xe_guc_exec_queue_types.h"
 #include "xe_guc_pc.h"
@@ -58,6 +57,7 @@
 #include "xe_sa.h"
 #include "xe_sched_job.h"
 #include "xe_sriov.h"
+#include "xe_tlb_inval.h"
 #include "xe_tuning.h"
 #include "xe_uc.h"
 #include "xe_uc_fw.h"
@@ -852,7 +852,7 @@ static int gt_reset(struct xe_gt *gt)
 
 	xe_uc_stop(&gt->uc);
 
-	xe_gt_tlb_inval_reset(gt);
+	xe_tlb_inval_reset(&gt->tlb_inval);
 
 	err = do_gt_reset(gt);
 	if (err)
@@ -1066,5 +1066,5 @@ void xe_gt_declare_wedged(struct xe_gt *gt)
 	xe_gt_assert(gt, gt_to_xe(gt)->wedged.mode);
 
 	xe_uc_declare_wedged(&gt->uc);
-	xe_gt_tlb_inval_reset(gt);
+	xe_tlb_inval_reset(&gt->tlb_inval);
 }
diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_inval.h b/drivers/gpu/drm/xe/xe_gt_tlb_inval.h
deleted file mode 100644
index b1258ac4e8fb..000000000000
--- a/drivers/gpu/drm/xe/xe_gt_tlb_inval.h
+++ /dev/null
@@ -1,41 +0,0 @@
-/* SPDX-License-Identifier: MIT */
-/*
- * Copyright © 2023 Intel Corporation
- */
-
-#ifndef _XE_GT_TLB_INVAL_H_
-#define _XE_GT_TLB_INVAL_H_
-
-#include <linux/types.h>
-
-#include "xe_gt_tlb_inval_types.h"
-
-struct xe_gt;
-struct xe_guc;
-struct xe_vm;
-struct xe_vma;
-
-int xe_gt_tlb_inval_init_early(struct xe_gt *gt);
-void xe_gt_tlb_inval_fini(struct xe_gt *gt);
-
-void xe_gt_tlb_inval_reset(struct xe_gt *gt);
-int xe_gt_tlb_inval_ggtt(struct xe_gt *gt);
-void xe_gt_tlb_inval_vm(struct xe_gt *gt, struct xe_vm *vm);
-int xe_gt_tlb_inval_all(struct xe_gt *gt, struct xe_gt_tlb_inval_fence *fence);
-int xe_gt_tlb_inval_range(struct xe_gt *gt,
-			  struct xe_gt_tlb_inval_fence *fence,
-			  u64 start, u64 end, u32 asid);
-int xe_guc_tlb_inval_done_handler(struct xe_guc *guc, u32 *msg, u32 len);
-
-void xe_gt_tlb_inval_fence_init(struct xe_gt *gt,
-				struct xe_gt_tlb_inval_fence *fence,
-				bool stack);
-void xe_gt_tlb_inval_fence_signal(struct xe_gt_tlb_inval_fence *fence);
-
-static inline void
-xe_gt_tlb_inval_fence_wait(struct xe_gt_tlb_inval_fence *fence)
-{
-	dma_fence_wait(&fence->base, false);
-}
-
-#endif	/* _XE_GT_TLB_INVAL_ */
diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_inval_job.h b/drivers/gpu/drm/xe/xe_gt_tlb_inval_job.h
deleted file mode 100644
index 883896194a34..000000000000
--- a/drivers/gpu/drm/xe/xe_gt_tlb_inval_job.h
+++ /dev/null
@@ -1,34 +0,0 @@
-/* SPDX-License-Identifier: MIT */
-/*
- * Copyright © 2025 Intel Corporation
- */
-
-#ifndef _XE_GT_TLB_INVAL_JOB_H_
-#define _XE_GT_TLB_INVAL_JOB_H_
-
-#include <linux/types.h>
-
-struct dma_fence;
-struct drm_sched_job;
-struct kref;
-struct xe_exec_queue;
-struct xe_gt;
-struct xe_gt_tlb_inval_job;
-struct xe_migrate;
-
-struct xe_gt_tlb_inval_job *xe_gt_tlb_inval_job_create(struct xe_exec_queue *q,
-						       struct xe_gt *gt,
-						       u64 start, u64 end,
-						       u32 asid);
-
-int xe_gt_tlb_inval_job_alloc_dep(struct xe_gt_tlb_inval_job *job);
-
-struct dma_fence *xe_gt_tlb_inval_job_push(struct xe_gt_tlb_inval_job *job,
-					   struct xe_migrate *m,
-					   struct dma_fence *fence);
-
-void xe_gt_tlb_inval_job_get(struct xe_gt_tlb_inval_job *job);
-
-void xe_gt_tlb_inval_job_put(struct xe_gt_tlb_inval_job *job);
-
-#endif
diff --git a/drivers/gpu/drm/xe/xe_gt_types.h b/drivers/gpu/drm/xe/xe_gt_types.h
index 7dc5a3f310f1..66158105aca5 100644
--- a/drivers/gpu/drm/xe/xe_gt_types.h
+++ b/drivers/gpu/drm/xe/xe_gt_types.h
@@ -12,12 +12,12 @@
 #include "xe_gt_sriov_pf_types.h"
 #include "xe_gt_sriov_vf_types.h"
 #include "xe_gt_stats_types.h"
-#include "xe_gt_tlb_inval_types.h"
 #include "xe_hw_engine_types.h"
 #include "xe_hw_fence_types.h"
 #include "xe_oa_types.h"
 #include "xe_reg_sr_types.h"
 #include "xe_sa_types.h"
+#include "xe_tlb_inval_types.h"
 #include "xe_uc_types.h"
 
 struct xe_exec_queue_ops;
diff --git a/drivers/gpu/drm/xe/xe_guc_ct.c b/drivers/gpu/drm/xe/xe_guc_ct.c
index 9131d121d941..5f38041cff4c 100644
--- a/drivers/gpu/drm/xe/xe_guc_ct.c
+++ b/drivers/gpu/drm/xe/xe_guc_ct.c
@@ -26,13 +26,13 @@
 #include "xe_gt_sriov_pf_control.h"
 #include "xe_gt_sriov_pf_monitor.h"
 #include "xe_gt_sriov_printk.h"
-#include "xe_gt_tlb_inval.h"
 #include "xe_guc.h"
 #include "xe_guc_log.h"
 #include "xe_guc_relay.h"
 #include "xe_guc_submit.h"
 #include "xe_map.h"
 #include "xe_pm.h"
+#include "xe_tlb_inval.h"
 #include "xe_trace_guc.h"
 
 static void receive_g2h(struct xe_guc_ct *ct);
diff --git a/drivers/gpu/drm/xe/xe_lmtt.c b/drivers/gpu/drm/xe/xe_lmtt.c
index e5aba03ff8ac..f2bfbfa3efa1 100644
--- a/drivers/gpu/drm/xe/xe_lmtt.c
+++ b/drivers/gpu/drm/xe/xe_lmtt.c
@@ -11,7 +11,7 @@
 
 #include "xe_assert.h"
 #include "xe_bo.h"
-#include "xe_gt_tlb_inval.h"
+#include "xe_tlb_inval.h"
 #include "xe_lmtt.h"
 #include "xe_map.h"
 #include "xe_mmio.h"
@@ -228,8 +228,8 @@ void xe_lmtt_init_hw(struct xe_lmtt *lmtt)
 
 static int lmtt_invalidate_hw(struct xe_lmtt *lmtt)
 {
-	struct xe_gt_tlb_inval_fence fences[XE_MAX_GT_PER_TILE];
-	struct xe_gt_tlb_inval_fence *fence = fences;
+	struct xe_tlb_inval_fence fences[XE_MAX_GT_PER_TILE];
+	struct xe_tlb_inval_fence *fence = fences;
 	struct xe_tile *tile = lmtt_to_tile(lmtt);
 	struct xe_gt *gt;
 	int result = 0;
@@ -237,8 +237,8 @@ static int lmtt_invalidate_hw(struct xe_lmtt *lmtt)
 	u8 id;
 
 	for_each_gt_on_tile(gt, tile, id) {
-		xe_gt_tlb_inval_fence_init(gt, fence, true);
-		err = xe_gt_tlb_inval_all(gt, fence);
+		xe_tlb_inval_fence_init(&gt->tlb_inval, fence, true);
+		err = xe_tlb_inval_all(&gt->tlb_inval, fence);
 		result = result ?: err;
 		fence++;
 	}
@@ -252,7 +252,7 @@ static int lmtt_invalidate_hw(struct xe_lmtt *lmtt)
 	 */
 	fence = fences;
 	for_each_gt_on_tile(gt, tile, id)
-		xe_gt_tlb_inval_fence_wait(fence++);
+		xe_tlb_inval_fence_wait(fence++);
 
 	return result;
 }
diff --git a/drivers/gpu/drm/xe/xe_migrate.h b/drivers/gpu/drm/xe/xe_migrate.h
index 8978d2cc1a75..4fad324b6253 100644
--- a/drivers/gpu/drm/xe/xe_migrate.h
+++ b/drivers/gpu/drm/xe/xe_migrate.h
@@ -15,7 +15,7 @@ struct ttm_resource;
 
 struct xe_bo;
 struct xe_gt;
-struct xe_gt_tlb_inval_job;
+struct xe_tlb_inval_job;
 struct xe_exec_queue;
 struct xe_migrate;
 struct xe_migrate_pt_update;
@@ -94,13 +94,13 @@ struct xe_migrate_pt_update {
 	/** @job: The job if a GPU page-table update. NULL otherwise */
 	struct xe_sched_job *job;
 	/**
-	 * @ijob: The GT TLB invalidation job for primary tile. NULL otherwise
+	 * @ijob: The TLB invalidation job for primary GT. NULL otherwise
 	 */
-	struct xe_gt_tlb_inval_job *ijob;
+	struct xe_tlb_inval_job *ijob;
 	/**
-	 * @mjob: The GT TLB invalidation job for media tile. NULL otherwise
+	 * @mjob: The TLB invalidation job for media GT. NULL otherwise
 	 */
-	struct xe_gt_tlb_inval_job *mjob;
+	struct xe_tlb_inval_job *mjob;
 	/** @tile_id: Tile ID of the update */
 	u8 tile_id;
 };
diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
index f3a39e734a90..d70015c063ad 100644
--- a/drivers/gpu/drm/xe/xe_pt.c
+++ b/drivers/gpu/drm/xe/xe_pt.c
@@ -13,7 +13,7 @@
 #include "xe_drm_client.h"
 #include "xe_exec_queue.h"
 #include "xe_gt.h"
-#include "xe_gt_tlb_inval_job.h"
+#include "xe_tlb_inval_job.h"
 #include "xe_migrate.h"
 #include "xe_pt_types.h"
 #include "xe_pt_walk.h"
@@ -21,6 +21,7 @@
 #include "xe_sched_job.h"
 #include "xe_sync.h"
 #include "xe_svm.h"
+#include "xe_tlb_inval_job.h"
 #include "xe_trace.h"
 #include "xe_ttm_stolen_mgr.h"
 #include "xe_vm.h"
@@ -1261,8 +1262,8 @@ static int op_add_deps(struct xe_vm *vm, struct xe_vma_op *op,
 }
 
 static int xe_pt_vm_dependencies(struct xe_sched_job *job,
-				 struct xe_gt_tlb_inval_job *ijob,
-				 struct xe_gt_tlb_inval_job *mjob,
+				 struct xe_tlb_inval_job *ijob,
+				 struct xe_tlb_inval_job *mjob,
 				 struct xe_vm *vm,
 				 struct xe_vma_ops *vops,
 				 struct xe_vm_pgtable_update_ops *pt_update_ops,
@@ -1332,13 +1333,13 @@ static int xe_pt_vm_dependencies(struct xe_sched_job *job,
 
 	if (job) {
 		if (ijob) {
-			err = xe_gt_tlb_inval_job_alloc_dep(ijob);
+			err = xe_tlb_inval_job_alloc_dep(ijob);
 			if (err)
 				return err;
 		}
 
 		if (mjob) {
-			err = xe_gt_tlb_inval_job_alloc_dep(mjob);
+			err = xe_tlb_inval_job_alloc_dep(mjob);
 			if (err)
 				return err;
 		}
@@ -2338,6 +2339,15 @@ static const struct xe_migrate_pt_update_ops svm_migrate_ops = {
 static const struct xe_migrate_pt_update_ops svm_migrate_ops;
 #endif
 
+static struct xe_dep_scheduler *to_dep_scheduler(struct xe_exec_queue *q,
+						 struct xe_gt *gt)
+{
+	if (xe_gt_is_media_type(gt))
+		return q->tlb_inval[XE_EXEC_QUEUE_TLB_INVAL_MEDIA_GT].dep_scheduler;
+
+	return q->tlb_inval[XE_EXEC_QUEUE_TLB_INVAL_PRIMARY_GT].dep_scheduler;
+}
+
 /**
  * xe_pt_update_ops_run() - Run PT update operations
  * @tile: Tile of PT update operations
@@ -2356,7 +2366,7 @@ xe_pt_update_ops_run(struct xe_tile *tile, struct xe_vma_ops *vops)
 	struct xe_vm_pgtable_update_ops *pt_update_ops =
 		&vops->pt_update_ops[tile->id];
 	struct dma_fence *fence, *ifence, *mfence;
-	struct xe_gt_tlb_inval_job *ijob = NULL, *mjob = NULL;
+	struct xe_tlb_inval_job *ijob = NULL, *mjob = NULL;
 	struct dma_fence **fences = NULL;
 	struct dma_fence_array *cf = NULL;
 	struct xe_range_fence *rfence;
@@ -2388,11 +2398,15 @@ xe_pt_update_ops_run(struct xe_tile *tile, struct xe_vma_ops *vops)
 #endif
 
 	if (pt_update_ops->needs_invalidation) {
-		ijob = xe_gt_tlb_inval_job_create(pt_update_ops->q,
-						  tile->primary_gt,
-						  pt_update_ops->start,
-						  pt_update_ops->last,
-						  vm->usm.asid);
+		struct xe_exec_queue *q = pt_update_ops->q;
+		struct xe_dep_scheduler *dep_scheduler =
+			to_dep_scheduler(q, tile->primary_gt);
+
+		ijob = xe_tlb_inval_job_create(q, &tile->primary_gt->tlb_inval,
+					       dep_scheduler,
+					       pt_update_ops->start,
+					       pt_update_ops->last,
+					       vm->usm.asid);
 		if (IS_ERR(ijob)) {
 			err = PTR_ERR(ijob);
 			goto kill_vm_tile1;
@@ -2400,11 +2414,14 @@ xe_pt_update_ops_run(struct xe_tile *tile, struct xe_vma_ops *vops)
 		update.ijob = ijob;
 
 		if (tile->media_gt) {
-			mjob = xe_gt_tlb_inval_job_create(pt_update_ops->q,
-							  tile->media_gt,
-							  pt_update_ops->start,
-							  pt_update_ops->last,
-							  vm->usm.asid);
+			dep_scheduler = to_dep_scheduler(q, tile->media_gt);
+
+			mjob = xe_tlb_inval_job_create(q,
+						       &tile->media_gt->tlb_inval,
+						       dep_scheduler,
+						       pt_update_ops->start,
+						       pt_update_ops->last,
+						       vm->usm.asid);
 			if (IS_ERR(mjob)) {
 				err = PTR_ERR(mjob);
 				goto free_ijob;
@@ -2455,13 +2472,13 @@ xe_pt_update_ops_run(struct xe_tile *tile, struct xe_vma_ops *vops)
 	if (ijob) {
 		struct dma_fence *__fence;
 
-		ifence = xe_gt_tlb_inval_job_push(ijob, tile->migrate, fence);
+		ifence = xe_tlb_inval_job_push(ijob, tile->migrate, fence);
 		__fence = ifence;
 
 		if (mjob) {
 			fences[0] = ifence;
-			mfence = xe_gt_tlb_inval_job_push(mjob, tile->migrate,
-							  fence);
+			mfence = xe_tlb_inval_job_push(mjob, tile->migrate,
+						       fence);
 			fences[1] = mfence;
 
 			dma_fence_array_init(cf, 2, fences,
@@ -2504,8 +2521,8 @@ xe_pt_update_ops_run(struct xe_tile *tile, struct xe_vma_ops *vops)
 	if (pt_update_ops->needs_userptr_lock)
 		up_read(&vm->userptr.notifier_lock);
 
-	xe_gt_tlb_inval_job_put(mjob);
-	xe_gt_tlb_inval_job_put(ijob);
+	xe_tlb_inval_job_put(mjob);
+	xe_tlb_inval_job_put(ijob);
 
 	return fence;
 
@@ -2514,8 +2531,8 @@ xe_pt_update_ops_run(struct xe_tile *tile, struct xe_vma_ops *vops)
 free_ijob:
 	kfree(cf);
 	kfree(fences);
-	xe_gt_tlb_inval_job_put(mjob);
-	xe_gt_tlb_inval_job_put(ijob);
+	xe_tlb_inval_job_put(mjob);
+	xe_tlb_inval_job_put(ijob);
 kill_vm_tile1:
 	if (err != -EAGAIN && err != -ENODATA && tile->id)
 		xe_vm_kill(vops->vm, false);
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index d290e54134f3..c8febef4d679 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -7,7 +7,6 @@
 
 #include "xe_bo.h"
 #include "xe_gt_stats.h"
-#include "xe_gt_tlb_inval.h"
 #include "xe_migrate.h"
 #include "xe_module.h"
 #include "xe_pm.h"
diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_inval.c b/drivers/gpu/drm/xe/xe_tlb_inval.c
similarity index 80%
rename from drivers/gpu/drm/xe/xe_gt_tlb_inval.c
rename to drivers/gpu/drm/xe/xe_tlb_inval.c
index 37b3b45ec230..f4b7c0c74894 100644
--- a/drivers/gpu/drm/xe/xe_gt_tlb_inval.c
+++ b/drivers/gpu/drm/xe/xe_tlb_inval.c
@@ -13,7 +13,7 @@
 #include "xe_guc.h"
 #include "xe_guc_ct.h"
 #include "xe_gt_stats.h"
-#include "xe_gt_tlb_inval.h"
+#include "xe_tlb_inval.h"
 #include "xe_mmio.h"
 #include "xe_pm.h"
 #include "xe_sriov.h"
@@ -38,40 +38,47 @@ static long tlb_timeout_jiffies(struct xe_gt *gt)
 	return hw_tlb_timeout + 2 * delay;
 }
 
-static void xe_gt_tlb_inval_fence_fini(struct xe_gt_tlb_inval_fence *fence)
+static void xe_tlb_inval_fence_fini(struct xe_tlb_inval_fence *fence)
 {
-	if (WARN_ON_ONCE(!fence->gt))
+	struct xe_gt *gt;
+
+	if (WARN_ON_ONCE(!fence->tlb_inval))
 		return;
 
-	xe_pm_runtime_put(gt_to_xe(fence->gt));
-	fence->gt = NULL; /* fini() should be called once */
+	gt = fence->tlb_inval->private;
+
+	xe_pm_runtime_put(gt_to_xe(gt));
+	fence->tlb_inval = NULL; /* fini() should be called once */
 }
 
 static void
-__inval_fence_signal(struct xe_device *xe, struct xe_gt_tlb_inval_fence *fence)
+__inval_fence_signal(struct xe_device *xe, struct xe_tlb_inval_fence *fence)
 {
 	bool stack = test_bit(FENCE_STACK_BIT, &fence->base.flags);
 
-	trace_xe_gt_tlb_inval_fence_signal(xe, fence);
-	xe_gt_tlb_inval_fence_fini(fence);
+	trace_xe_tlb_inval_fence_signal(xe, fence);
+	xe_tlb_inval_fence_fini(fence);
 	dma_fence_signal(&fence->base);
 	if (!stack)
 		dma_fence_put(&fence->base);
 }
 
 static void
-inval_fence_signal(struct xe_device *xe, struct xe_gt_tlb_inval_fence *fence)
+inval_fence_signal(struct xe_device *xe, struct xe_tlb_inval_fence *fence)
 {
 	list_del(&fence->link);
 	__inval_fence_signal(xe, fence);
 }
 
-void xe_gt_tlb_inval_fence_signal(struct xe_gt_tlb_inval_fence *fence)
+void xe_tlb_inval_fence_signal(struct xe_tlb_inval_fence *fence)
 {
-	if (WARN_ON_ONCE(!fence->gt))
+	struct xe_gt *gt;
+
+	if (WARN_ON_ONCE(!fence->tlb_inval))
 		return;
 
-	__inval_fence_signal(gt_to_xe(fence->gt), fence);
+	gt = fence->tlb_inval->private;
+	__inval_fence_signal(gt_to_xe(gt), fence);
 }
 
 static void xe_gt_tlb_fence_timeout(struct work_struct *work)
@@ -79,7 +86,7 @@ static void xe_gt_tlb_fence_timeout(struct work_struct *work)
 	struct xe_gt *gt = container_of(work, struct xe_gt,
 					tlb_inval.fence_tdr.work);
 	struct xe_device *xe = gt_to_xe(gt);
-	struct xe_gt_tlb_inval_fence *fence, *next;
+	struct xe_tlb_inval_fence *fence, *next;
 
 	LNL_FLUSH_WORK(&gt->uc.guc.ct.g2h_worker);
 
@@ -92,7 +99,7 @@ static void xe_gt_tlb_fence_timeout(struct work_struct *work)
 		if (msecs_to_jiffies(since_inval_ms) < tlb_timeout_jiffies(gt))
 			break;
 
-		trace_xe_gt_tlb_inval_fence_timeout(xe, fence);
+		trace_xe_tlb_inval_fence_timeout(xe, fence);
 		xe_gt_err(gt, "TLB invalidation fence timeout, seqno=%d recv=%d",
 			  fence->seqno, gt->tlb_inval.seqno_recv);
 
@@ -110,7 +117,7 @@ static void xe_gt_tlb_fence_timeout(struct work_struct *work)
  * xe_gt_tlb_inval_init_early - Initialize GT TLB invalidation state
  * @gt: GT structure
  *
- * Initialize GT TLB invalidation state, purely software initialization, should
+ * Initialize TLB invalidation state, purely software initialization, should
  * be called once during driver load.
  *
  * Return: 0 on success, negative error code on error.
@@ -120,6 +127,7 @@ int xe_gt_tlb_inval_init_early(struct xe_gt *gt)
 	struct xe_device *xe = gt_to_xe(gt);
 	int err;
 
+	gt->tlb_inval.private = gt;
 	gt->tlb_inval.seqno = 1;
 	INIT_LIST_HEAD(&gt->tlb_inval.pending_fences);
 	spin_lock_init(&gt->tlb_inval.pending_lock);
@@ -141,14 +149,15 @@ int xe_gt_tlb_inval_init_early(struct xe_gt *gt)
 }
 
 /**
- * xe_gt_tlb_inval_reset - Initialize GT TLB invalidation reset
- * @gt: GT structure
+ * xe_tlb_inval_reset - Initialize TLB invalidation reset
+ * @tlb_inval: TLB invalidation client
  *
  * Signal any pending invalidation fences, should be called during a GT reset
  */
-void xe_gt_tlb_inval_reset(struct xe_gt *gt)
+void xe_tlb_inval_reset(struct xe_tlb_inval *tlb_inval)
 {
-	struct xe_gt_tlb_inval_fence *fence, *next;
+	struct xe_gt *gt = tlb_inval->private;
+	struct xe_tlb_inval_fence *fence, *next;
 	int pending_seqno;
 
 	/*
@@ -213,7 +222,7 @@ static bool tlb_inval_seqno_past(struct xe_gt *gt, int seqno)
 }
 
 static int send_tlb_inval(struct xe_guc *guc,
-			  struct xe_gt_tlb_inval_fence *fence,
+			  struct xe_tlb_inval_fence *fence,
 			  u32 *action, int len)
 {
 	struct xe_gt *gt = guc_to_gt(guc);
@@ -232,7 +241,7 @@ static int send_tlb_inval(struct xe_guc *guc,
 	mutex_lock(&gt->tlb_inval.seqno_lock);
 	seqno = gt->tlb_inval.seqno;
 	fence->seqno = seqno;
-	trace_xe_gt_tlb_inval_fence_send(xe, fence);
+	trace_xe_tlb_inval_fence_send(xe, fence);
 	action[1] = seqno;
 	ret = xe_guc_ct_send(&guc->ct, action, len,
 			     G2H_LEN_DW_TLB_INVALIDATE, 1);
@@ -277,7 +286,7 @@ static int send_tlb_inval(struct xe_guc *guc,
 		XE_GUC_TLB_INVAL_FLUSH_CACHE)
 
 /**
- * xe_gt_tlb_inval_guc - Issue a TLB invalidation on this GT for the GuC
+ * xe_tlb_inval_guc - Issue a TLB invalidation on this GT for the GuC
  * @gt: GT structure
  * @fence: invalidation fence which will be signal on TLB invalidation
  * completion
@@ -287,8 +296,8 @@ static int send_tlb_inval(struct xe_guc *guc,
  *
  * Return: 0 on success, negative error code on error
  */
-static int xe_gt_tlb_inval_guc(struct xe_gt *gt,
-			       struct xe_gt_tlb_inval_fence *fence)
+static int xe_tlb_inval_guc(struct xe_gt *gt,
+			    struct xe_tlb_inval_fence *fence)
 {
 	u32 action[] = {
 		XE_GUC_ACTION_TLB_INVALIDATION,
@@ -309,30 +318,31 @@ static int xe_gt_tlb_inval_guc(struct xe_gt *gt,
 }
 
 /**
- * xe_gt_tlb_inval_ggtt - Issue a TLB invalidation on this GT for the GGTT
- * @gt: GT structure
+ * xe_tlb_inval_ggtt - Issue a TLB invalidation on this GT for the GGTT
+ * @tlb_inval: TLB invalidation client
  *
  * Issue a TLB invalidation for the GGTT. Completion of TLB invalidation is
  * synchronous.
  *
  * Return: 0 on success, negative error code on error
  */
-int xe_gt_tlb_inval_ggtt(struct xe_gt *gt)
+int xe_tlb_inval_ggtt(struct xe_tlb_inval *tlb_inval)
 {
+	struct xe_gt *gt = tlb_inval->private;
 	struct xe_device *xe = gt_to_xe(gt);
 	unsigned int fw_ref;
 
 	if (xe_guc_ct_enabled(&gt->uc.guc.ct) &&
 	    gt->uc.guc.submission_state.enabled) {
-		struct xe_gt_tlb_inval_fence fence;
+		struct xe_tlb_inval_fence fence;
 		int ret;
 
-		xe_gt_tlb_inval_fence_init(gt, &fence, true);
-		ret = xe_gt_tlb_inval_guc(gt, &fence);
+		xe_tlb_inval_fence_init(tlb_inval, &fence, true);
+		ret = xe_tlb_inval_guc(gt, &fence);
 		if (ret)
 			return ret;
 
-		xe_gt_tlb_inval_fence_wait(&fence);
+		xe_tlb_inval_fence_wait(&fence);
 	} else if (xe_device_uc_enabled(xe) && !xe_device_wedged(xe)) {
 		struct xe_mmio *mmio = &gt->mmio;
 
@@ -355,14 +365,17 @@ int xe_gt_tlb_inval_ggtt(struct xe_gt *gt)
 	return 0;
 }
 
-static int send_tlb_inval_all(struct xe_gt *gt,
-			      struct xe_gt_tlb_inval_fence *fence)
+static int send_tlb_inval_all(struct xe_tlb_inval *tlb_inval,
+			      struct xe_tlb_inval_fence *fence)
 {
 	u32 action[] = {
 		XE_GUC_ACTION_TLB_INVALIDATION_ALL,
 		0,  /* seqno, replaced in send_tlb_inval */
 		MAKE_INVAL_OP(XE_GUC_TLB_INVAL_FULL),
 	};
+	struct xe_gt *gt = tlb_inval->private;
+
+	xe_gt_assert(gt, fence);
 
 	return send_tlb_inval(&gt->uc.guc, fence, action, ARRAY_SIZE(action));
 }
@@ -370,19 +383,19 @@ static int send_tlb_inval_all(struct xe_gt *gt,
 /**
  * xe_gt_tlb_invalidation_all - Invalidate all TLBs across PF and all VFs.
  * @gt: the &xe_gt structure
- * @fence: the &xe_gt_tlb_inval_fence to be signaled on completion
+ * @fence: the &xe_tlb_inval_fence to be signaled on completion
  *
  * Send a request to invalidate all TLBs across PF and all VFs.
  *
  * Return: 0 on success, negative error code on error
  */
-int xe_gt_tlb_inval_all(struct xe_gt *gt, struct xe_gt_tlb_inval_fence *fence)
+int xe_tlb_inval_all(struct xe_tlb_inval *tlb_inval,
+		     struct xe_tlb_inval_fence *fence)
 {
+	struct xe_gt *gt = tlb_inval->private;
 	int err;
 
-	xe_gt_assert(gt, gt == fence->gt);
-
-	err = send_tlb_inval_all(gt, fence);
+	err = send_tlb_inval_all(tlb_inval, fence);
 	if (err)
 		xe_gt_err(gt, "TLB invalidation request failed (%pe)", ERR_PTR(err));
 
@@ -397,9 +410,8 @@ int xe_gt_tlb_inval_all(struct xe_gt *gt, struct xe_gt_tlb_inval_fence *fence)
 #define MAX_RANGE_TLB_INVALIDATION_LENGTH (rounddown_pow_of_two(ULONG_MAX))
 
 /**
- * xe_gt_tlb_inval_range - Issue a TLB invalidation on this GT for an address range
- *
- * @gt: GT structure
+ * xe_tlb_inval_range - Issue a TLB invalidation on this GT for an address range
+ * @tlb_inval: TLB invalidation client
  * @fence: invalidation fence which will be signal on TLB invalidation
  * completion
  * @start: start address
@@ -412,9 +424,11 @@ int xe_gt_tlb_inval_all(struct xe_gt *gt, struct xe_gt_tlb_inval_fence *fence)
  *
  * Return: Negative error code on error, 0 on success
  */
-int xe_gt_tlb_inval_range(struct xe_gt *gt, struct xe_gt_tlb_inval_fence *fence,
-			  u64 start, u64 end, u32 asid)
+int xe_tlb_inval_range(struct xe_tlb_inval *tlb_inval,
+		       struct xe_tlb_inval_fence *fence, u64 start, u64 end,
+		       u32 asid)
 {
+	struct xe_gt *gt = tlb_inval->private;
 	struct xe_device *xe = gt_to_xe(gt);
 #define MAX_TLB_INVALIDATION_LEN	7
 	u32 action[MAX_TLB_INVALIDATION_LEN];
@@ -484,38 +498,38 @@ int xe_gt_tlb_inval_range(struct xe_gt *gt, struct xe_gt_tlb_inval_fence *fence,
 }
 
 /**
- * xe_gt_tlb_inval_vm - Issue a TLB invalidation on this GT for a VM
- * @gt: graphics tile
+ * xe_tlb_inval_vm - Issue a TLB invalidation on this GT for a VM
+ * @tlb_inval: TLB invalidation client
  * @vm: VM to invalidate
  *
  * Invalidate entire VM's address space
  */
-void xe_gt_tlb_inval_vm(struct xe_gt *gt, struct xe_vm *vm)
+void xe_tlb_inval_vm(struct xe_tlb_inval *tlb_inval, struct xe_vm *vm)
 {
-	struct xe_gt_tlb_inval_fence fence;
+	struct xe_tlb_inval_fence fence;
 	u64 range = 1ull << vm->xe->info.va_bits;
 	int ret;
 
-	xe_gt_tlb_inval_fence_init(gt, &fence, true);
+	xe_tlb_inval_fence_init(tlb_inval, &fence, true);
 
-	ret = xe_gt_tlb_inval_range(gt, &fence, 0, range, vm->usm.asid);
+	ret = xe_tlb_inval_range(tlb_inval, &fence, 0, range, vm->usm.asid);
 	if (ret < 0)
 		return;
 
-	xe_gt_tlb_inval_fence_wait(&fence);
+	xe_tlb_inval_fence_wait(&fence);
 }
 
 /**
- * xe_gt_tlb_inval_done_handler - GT TLB invalidation done handler
+ * xe_tlb_inval_done_handler - TLB invalidation done handler
  * @gt: gt
  * @seqno: seqno of invalidation that is done
  *
- * Update recv seqno, signal any GT TLB invalidation fences, and restart TDR
+ * Update recv seqno, signal any TLB invalidation fences, and restart TDR
  */
-static void xe_gt_tlb_inval_done_handler(struct xe_gt *gt, int seqno)
+static void xe_tlb_inval_done_handler(struct xe_gt *gt, int seqno)
 {
 	struct xe_device *xe = gt_to_xe(gt);
-	struct xe_gt_tlb_inval_fence *fence, *next;
+	struct xe_tlb_inval_fence *fence, *next;
 	unsigned long flags;
 
 	/*
@@ -543,7 +557,7 @@ static void xe_gt_tlb_inval_done_handler(struct xe_gt *gt, int seqno)
 
 	list_for_each_entry_safe(fence, next,
 				 &gt->tlb_inval.pending_fences, link) {
-		trace_xe_gt_tlb_inval_fence_recv(xe, fence);
+		trace_xe_tlb_inval_fence_recv(xe, fence);
 
 		if (!tlb_inval_seqno_past(gt, fence->seqno))
 			break;
@@ -580,7 +594,7 @@ int xe_guc_tlb_inval_done_handler(struct xe_guc *guc, u32 *msg, u32 len)
 	if (unlikely(len != 1))
 		return -EPROTO;
 
-	xe_gt_tlb_inval_done_handler(gt, msg[0]);
+	xe_tlb_inval_done_handler(gt, msg[0]);
 
 	return 0;
 }
@@ -603,19 +617,21 @@ static const struct dma_fence_ops inval_fence_ops = {
 };
 
 /**
- * xe_gt_tlb_inval_fence_init - Initialize TLB invalidation fence
- * @gt: GT
+ * xe_tlb_inval_fence_init - Initialize TLB invalidation fence
+ * @tlb_inval: TLB invalidation client
  * @fence: TLB invalidation fence to initialize
  * @stack: fence is stack variable
  *
- * Initialize TLB invalidation fence for use. xe_gt_tlb_inval_fence_fini
+ * Initialize TLB invalidation fence for use. xe_tlb_inval_fence_fini
  * will be automatically called when fence is signalled (all fences must signal),
  * even on error.
  */
-void xe_gt_tlb_inval_fence_init(struct xe_gt *gt,
-				struct xe_gt_tlb_inval_fence *fence,
-				bool stack)
+void xe_tlb_inval_fence_init(struct xe_tlb_inval *tlb_inval,
+			     struct xe_tlb_inval_fence *fence,
+			     bool stack)
 {
+	struct xe_gt *gt = tlb_inval->private;
+
 	xe_pm_runtime_get_noresume(gt_to_xe(gt));
 
 	spin_lock_irq(&gt->tlb_inval.lock);
@@ -628,5 +644,5 @@ void xe_gt_tlb_inval_fence_init(struct xe_gt *gt,
 		set_bit(FENCE_STACK_BIT, &fence->base.flags);
 	else
 		dma_fence_get(&fence->base);
-	fence->gt = gt;
+	fence->tlb_inval = tlb_inval;
 }
diff --git a/drivers/gpu/drm/xe/xe_tlb_inval.h b/drivers/gpu/drm/xe/xe_tlb_inval.h
new file mode 100644
index 000000000000..ab6f769c50be
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_tlb_inval.h
@@ -0,0 +1,41 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2025 Intel Corporation
+ */
+
+#ifndef _XE_TLB_INVAL_H_
+#define _XE_TLB_INVAL_H_
+
+#include <linux/types.h>
+
+#include "xe_tlb_inval_types.h"
+
+struct xe_gt;
+struct xe_guc;
+struct xe_vm;
+
+int xe_gt_tlb_inval_init_early(struct xe_gt *gt);
+void xe_gt_tlb_inval_fini(struct xe_gt *gt);
+
+void xe_tlb_inval_reset(struct xe_tlb_inval *tlb_inval);
+int xe_tlb_inval_ggtt(struct xe_tlb_inval *tlb_inval);
+void xe_tlb_inval_vm(struct xe_tlb_inval *tlb_inval, struct xe_vm *vm);
+int xe_tlb_inval_all(struct xe_tlb_inval *tlb_inval,
+		     struct xe_tlb_inval_fence *fence);
+int xe_tlb_inval_range(struct xe_tlb_inval *tlb_inval,
+		       struct xe_tlb_inval_fence *fence,
+		       u64 start, u64 end, u32 asid);
+int xe_guc_tlb_inval_done_handler(struct xe_guc *guc, u32 *msg, u32 len);
+
+void xe_tlb_inval_fence_init(struct xe_tlb_inval *tlb_inval,
+			     struct xe_tlb_inval_fence *fence,
+			     bool stack);
+void xe_tlb_inval_fence_signal(struct xe_tlb_inval_fence *fence);
+
+static inline void
+xe_tlb_inval_fence_wait(struct xe_tlb_inval_fence *fence)
+{
+	dma_fence_wait(&fence->base, false);
+}
+
+#endif	/* _XE_TLB_INVAL_ */
diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_inval_job.c b/drivers/gpu/drm/xe/xe_tlb_inval_job.c
similarity index 51%
rename from drivers/gpu/drm/xe/xe_gt_tlb_inval_job.c
rename to drivers/gpu/drm/xe/xe_tlb_inval_job.c
index 41e0ea92ea5a..492def04a559 100644
--- a/drivers/gpu/drm/xe/xe_gt_tlb_inval_job.c
+++ b/drivers/gpu/drm/xe/xe_tlb_inval_job.c
@@ -3,21 +3,22 @@
  * Copyright © 2025 Intel Corporation
  */
 
+#include "xe_assert.h"
 #include "xe_dep_job_types.h"
 #include "xe_dep_scheduler.h"
 #include "xe_exec_queue.h"
-#include "xe_gt.h"
-#include "xe_gt_tlb_inval.h"
-#include "xe_gt_tlb_inval_job.h"
+#include "xe_gt_types.h"
+#include "xe_tlb_inval.h"
+#include "xe_tlb_inval_job.h"
 #include "xe_migrate.h"
 #include "xe_pm.h"
 
-/** struct xe_gt_tlb_inval_job - GT TLB invalidation job */
-struct xe_gt_tlb_inval_job {
+/** struct xe_tlb_inval_job - TLB invalidation job */
+struct xe_tlb_inval_job {
 	/** @dep: base generic dependency Xe job */
 	struct xe_dep_job dep;
-	/** @gt: GT to invalidate */
-	struct xe_gt *gt;
+	/** @tlb_inval: TLB invalidation client */
+	struct xe_tlb_inval *tlb_inval;
 	/** @q: exec queue issuing the invalidate */
 	struct xe_exec_queue *q;
 	/** @refcount: ref count of this job */
@@ -37,63 +38,56 @@ struct xe_gt_tlb_inval_job {
 	bool fence_armed;
 };
 
-static struct dma_fence *xe_gt_tlb_inval_job_run(struct xe_dep_job *dep_job)
+static struct dma_fence *xe_tlb_inval_job_run(struct xe_dep_job *dep_job)
 {
-	struct xe_gt_tlb_inval_job *job =
+	struct xe_tlb_inval_job *job =
 		container_of(dep_job, typeof(*job), dep);
-	struct xe_gt_tlb_inval_fence *ifence =
+	struct xe_tlb_inval_fence *ifence =
 		container_of(job->fence, typeof(*ifence), base);
 
-	xe_gt_tlb_inval_range(job->gt, ifence, job->start,
-			      job->end, job->asid);
+	xe_tlb_inval_range(job->tlb_inval, ifence, job->start,
+			   job->end, job->asid);
 
 	return job->fence;
 }
 
-static void xe_gt_tlb_inval_job_free(struct xe_dep_job *dep_job)
+static void xe_tlb_inval_job_free(struct xe_dep_job *dep_job)
 {
-	struct xe_gt_tlb_inval_job *job =
+	struct xe_tlb_inval_job *job =
 		container_of(dep_job, typeof(*job), dep);
 
-	/* Pairs with get in xe_gt_tlb_inval_job_push */
-	xe_gt_tlb_inval_job_put(job);
+	/* Pairs with get in xe_tlb_inval_job_push */
+	xe_tlb_inval_job_put(job);
 }
 
 static const struct xe_dep_job_ops dep_job_ops = {
-	.run_job = xe_gt_tlb_inval_job_run,
-	.free_job = xe_gt_tlb_inval_job_free,
+	.run_job = xe_tlb_inval_job_run,
+	.free_job = xe_tlb_inval_job_free,
 };
 
-static int xe_gt_tlb_inval_context(struct xe_gt *gt)
-{
-	return xe_gt_is_media_type(gt) ? XE_EXEC_QUEUE_TLB_INVAL_MEDIA_GT :
-		XE_EXEC_QUEUE_TLB_INVAL_PRIMARY_GT;
-}
-
 /**
- * xe_gt_tlb_inval_job_create() - GT TLB invalidation job create
- * @gt: GT to invalidate
+ * xe_tlb_inval_job_create() - TLB invalidation job create
  * @q: exec queue issuing the invalidate
+ * @tlb_inval: TLB invalidation client
+ * @dep_scheduler: Dependency scheduler for job
  * @start: Start address to invalidate
  * @end: End address to invalidate
  * @asid: Address space ID to invalidate
  *
- * Create a GT TLB invalidation job and initialize internal fields. The caller is
+ * Create a TLB invalidation job and initialize internal fields. The caller is
  * responsible for releasing the creation reference.
  *
- * Return: GT TLB invalidation job object on success, ERR_PTR failure
+ * Return: TLB invalidation job object on success, ERR_PTR failure
  */
-struct xe_gt_tlb_inval_job *xe_gt_tlb_inval_job_create(struct xe_exec_queue *q,
-						       struct xe_gt *gt,
-						       u64 start, u64 end,
-						       u32 asid)
+struct xe_tlb_inval_job *
+xe_tlb_inval_job_create(struct xe_exec_queue *q, struct xe_tlb_inval *tlb_inval,
+			struct xe_dep_scheduler *dep_scheduler, u64 start,
+			u64 end, u32 asid)
 {
-	struct xe_gt_tlb_inval_job *job;
-	struct xe_dep_scheduler *dep_scheduler =
-		q->tlb_inval[xe_gt_tlb_inval_context(gt)].dep_scheduler;
+	struct xe_tlb_inval_job *job;
 	struct drm_sched_entity *entity =
 		xe_dep_scheduler_entity(dep_scheduler);
-	struct xe_gt_tlb_inval_fence *ifence;
+	struct xe_tlb_inval_fence *ifence;
 	int err;
 
 	job = kmalloc(sizeof(*job), GFP_KERNEL);
@@ -101,14 +95,14 @@ struct xe_gt_tlb_inval_job *xe_gt_tlb_inval_job_create(struct xe_exec_queue *q,
 		return ERR_PTR(-ENOMEM);
 
 	job->q = q;
-	job->gt = gt;
+	job->tlb_inval = tlb_inval;
 	job->start = start;
 	job->end = end;
 	job->asid = asid;
 	job->fence_armed = false;
 	job->dep.ops = &dep_job_ops;
 	kref_init(&job->refcount);
-	xe_exec_queue_get(q);	/* Pairs with put in xe_gt_tlb_inval_job_destroy */
+	xe_exec_queue_get(q);	/* Pairs with put in xe_tlb_inval_job_destroy */
 
 	ifence = kmalloc(sizeof(*ifence), GFP_KERNEL);
 	if (!ifence) {
@@ -122,8 +116,8 @@ struct xe_gt_tlb_inval_job *xe_gt_tlb_inval_job_create(struct xe_exec_queue *q,
 	if (err)
 		goto err_fence;
 
-	/* Pairs with put in xe_gt_tlb_inval_job_destroy */
-	xe_pm_runtime_get_noresume(gt_to_xe(job->gt));
+	/* Pairs with put in xe_tlb_inval_job_destroy */
+	xe_pm_runtime_get_noresume(gt_to_xe(q->gt));
 
 	return job;
 
@@ -136,40 +130,40 @@ struct xe_gt_tlb_inval_job *xe_gt_tlb_inval_job_create(struct xe_exec_queue *q,
 	return ERR_PTR(err);
 }
 
-static void xe_gt_tlb_inval_job_destroy(struct kref *ref)
+static void xe_tlb_inval_job_destroy(struct kref *ref)
 {
-	struct xe_gt_tlb_inval_job *job = container_of(ref, typeof(*job),
-						       refcount);
-	struct xe_gt_tlb_inval_fence *ifence =
+	struct xe_tlb_inval_job *job = container_of(ref, typeof(*job),
+						    refcount);
+	struct xe_tlb_inval_fence *ifence =
 		container_of(job->fence, typeof(*ifence), base);
-	struct xe_device *xe = gt_to_xe(job->gt);
 	struct xe_exec_queue *q = job->q;
+	struct xe_device *xe = gt_to_xe(q->gt);
 
 	if (!job->fence_armed)
 		kfree(ifence);
 	else
-		/* Ref from xe_gt_tlb_inval_fence_init */
+		/* Ref from xe_tlb_inval_fence_init */
 		dma_fence_put(job->fence);
 
 	drm_sched_job_cleanup(&job->dep.drm);
 	kfree(job);
-	xe_exec_queue_put(q);	/* Pairs with get from xe_gt_tlb_inval_job_create */
-	xe_pm_runtime_put(xe);	/* Pairs with get from xe_gt_tlb_inval_job_create */
+	xe_exec_queue_put(q);	/* Pairs with get from xe_tlb_inval_job_create */
+	xe_pm_runtime_put(xe);	/* Pairs with get from xe_tlb_inval_job_create */
 }
 
 /**
- * xe_gt_tlb_inval_alloc_dep() - GT TLB invalidation job alloc dependency
- * @job: GT TLB invalidation job to alloc dependency for
+ * xe_tlb_inval_alloc_dep() - TLB invalidation job alloc dependency
+ * @job: TLB invalidation job to alloc dependency for
  *
- * Allocate storage for a dependency in the GT TLB invalidation fence. This
+ * Allocate storage for a dependency in the TLB invalidation fence. This
  * function should be called at most once per job and must be paired with
- * xe_gt_tlb_inval_job_push being called with a real fence.
+ * xe_tlb_inval_job_push being called with a real fence.
  *
  * Return: 0 on success, -errno on failure
  */
-int xe_gt_tlb_inval_job_alloc_dep(struct xe_gt_tlb_inval_job *job)
+int xe_tlb_inval_job_alloc_dep(struct xe_tlb_inval_job *job)
 {
-	xe_assert(gt_to_xe(job->gt), !xa_load(&job->dep.drm.dependencies, 0));
+	xe_assert(gt_to_xe(job->q->gt), !xa_load(&job->dep.drm.dependencies, 0));
 	might_alloc(GFP_KERNEL);
 
 	return drm_sched_job_add_dependency(&job->dep.drm,
@@ -177,24 +171,24 @@ int xe_gt_tlb_inval_job_alloc_dep(struct xe_gt_tlb_inval_job *job)
 }
 
 /**
- * xe_gt_tlb_inval_job_push() - GT TLB invalidation job push
- * @job: GT TLB invalidation job to push
+ * xe_tlb_inval_job_push() - TLB invalidation job push
+ * @job: TLB invalidation job to push
  * @m: The migration object being used
- * @fence: Dependency for GT TLB invalidation job
+ * @fence: Dependency for TLB invalidation job
  *
- * Pushes a GT TLB invalidation job for execution, using @fence as a dependency.
- * Storage for @fence must be preallocated with xe_gt_tlb_inval_job_alloc_dep
+ * Pushes a TLB invalidation job for execution, using @fence as a dependency.
+ * Storage for @fence must be preallocated with xe_tlb_inval_job_alloc_dep
  * prior to this call if @fence is not signaled. Takes a reference to the job’s
  * finished fence, which the caller is responsible for releasing, and return it
  * to the caller. This function is safe to be called in the path of reclaim.
  *
  * Return: Job's finished fence on success, cannot fail
  */
-struct dma_fence *xe_gt_tlb_inval_job_push(struct xe_gt_tlb_inval_job *job,
-					   struct xe_migrate *m,
-					   struct dma_fence *fence)
+struct dma_fence *xe_tlb_inval_job_push(struct xe_tlb_inval_job *job,
+					struct xe_migrate *m,
+					struct dma_fence *fence)
 {
-	struct xe_gt_tlb_inval_fence *ifence =
+	struct xe_tlb_inval_fence *ifence =
 		container_of(job->fence, typeof(*ifence), base);
 
 	if (!dma_fence_is_signaled(fence)) {
@@ -202,20 +196,20 @@ struct dma_fence *xe_gt_tlb_inval_job_push(struct xe_gt_tlb_inval_job *job,
 
 		/*
 		 * Can be in path of reclaim, hence the preallocation of fence
-		 * storage in xe_gt_tlb_inval_job_alloc_dep. Verify caller did
+		 * storage in xe_tlb_inval_job_alloc_dep. Verify caller did
 		 * this correctly.
 		 */
-		xe_assert(gt_to_xe(job->gt),
+		xe_assert(gt_to_xe(job->q->gt),
 			  xa_load(&job->dep.drm.dependencies, 0) ==
 			  dma_fence_get_stub());
 
 		dma_fence_get(fence);	/* ref released once dependency processed by scheduler */
 		ptr = xa_store(&job->dep.drm.dependencies, 0, fence,
 			       GFP_ATOMIC);
-		xe_assert(gt_to_xe(job->gt), !xa_is_err(ptr));
+		xe_assert(gt_to_xe(job->q->gt), !xa_is_err(ptr));
 	}
 
-	xe_gt_tlb_inval_job_get(job);	/* Pairs with put in free_job */
+	xe_tlb_inval_job_get(job);	/* Pairs with put in free_job */
 	job->fence_armed = true;
 
 	/*
@@ -225,8 +219,8 @@ struct dma_fence *xe_gt_tlb_inval_job_push(struct xe_gt_tlb_inval_job *job,
 	 */
 	xe_migrate_job_lock(m, job->q);
 
-	/* Creation ref pairs with put in xe_gt_tlb_inval_job_destroy */
-	xe_gt_tlb_inval_fence_init(job->gt, ifence, false);
+	/* Creation ref pairs with put in xe_tlb_inval_job_destroy */
+	xe_tlb_inval_fence_init(job->tlb_inval, ifence, false);
 	dma_fence_get(job->fence);	/* Pairs with put in DRM scheduler */
 
 	drm_sched_job_arm(&job->dep.drm);
@@ -241,7 +235,7 @@ struct dma_fence *xe_gt_tlb_inval_job_push(struct xe_gt_tlb_inval_job *job,
 
 	/*
 	 * Not using job->fence, as it has its own dma-fence context, which does
-	 * not allow GT TLB invalidation fences on the same queue, GT tuple to
+	 * not allow TLB invalidation fences on the same queue, GT tuple to
 	 * be squashed in dma-resv/DRM scheduler. Instead, we use the DRM scheduler
 	 * context and job's finished fence, which enables squashing.
 	 */
@@ -249,26 +243,26 @@ struct dma_fence *xe_gt_tlb_inval_job_push(struct xe_gt_tlb_inval_job *job,
 }
 
 /**
- * xe_gt_tlb_inval_job_get() - Get a reference to GT TLB invalidation job
- * @job: GT TLB invalidation job object
+ * xe_tlb_inval_job_get() - Get a reference to TLB invalidation job
+ * @job: TLB invalidation job object
  *
- * Increment the GT TLB invalidation job's reference count
+ * Increment the TLB invalidation job's reference count
  */
-void xe_gt_tlb_inval_job_get(struct xe_gt_tlb_inval_job *job)
+void xe_tlb_inval_job_get(struct xe_tlb_inval_job *job)
 {
 	kref_get(&job->refcount);
 }
 
 /**
- * xe_gt_tlb_inval_job_put() - Put a reference to GT TLB invalidation job
- * @job: GT TLB invalidation job object
+ * xe_tlb_inval_job_put() - Put a reference to TLB invalidation job
+ * @job: TLB invalidation job object
  *
- * Decrement the GT TLB invalidation job's reference count, call
- * xe_gt_tlb_inval_job_destroy when reference count == 0. Skips decrement if
+ * Decrement the TLB invalidation job's reference count, call
+ * xe_tlb_inval_job_destroy when reference count == 0. Skips decrement if
  * input @job is NULL or IS_ERR.
  */
-void xe_gt_tlb_inval_job_put(struct xe_gt_tlb_inval_job *job)
+void xe_tlb_inval_job_put(struct xe_tlb_inval_job *job)
 {
 	if (!IS_ERR_OR_NULL(job))
-		kref_put(&job->refcount, xe_gt_tlb_inval_job_destroy);
+		kref_put(&job->refcount, xe_tlb_inval_job_destroy);
 }
diff --git a/drivers/gpu/drm/xe/xe_tlb_inval_job.h b/drivers/gpu/drm/xe/xe_tlb_inval_job.h
new file mode 100644
index 000000000000..e63edcb26b50
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_tlb_inval_job.h
@@ -0,0 +1,33 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2025 Intel Corporation
+ */
+
+#ifndef _XE_TLB_INVAL_JOB_H_
+#define _XE_TLB_INVAL_JOB_H_
+
+#include <linux/types.h>
+
+struct dma_fence;
+struct xe_dep_scheduler;
+struct xe_exec_queue;
+struct xe_tlb_inval;
+struct xe_tlb_inval_job;
+struct xe_migrate;
+
+struct xe_tlb_inval_job *
+xe_tlb_inval_job_create(struct xe_exec_queue *q, struct xe_tlb_inval *tlb_inval,
+			struct xe_dep_scheduler *dep_scheduler,
+			u64 start, u64 end, u32 asid);
+
+int xe_tlb_inval_job_alloc_dep(struct xe_tlb_inval_job *job);
+
+struct dma_fence *xe_tlb_inval_job_push(struct xe_tlb_inval_job *job,
+					struct xe_migrate *m,
+					struct dma_fence *fence);
+
+void xe_tlb_inval_job_get(struct xe_tlb_inval_job *job);
+
+void xe_tlb_inval_job_put(struct xe_tlb_inval_job *job);
+
+#endif
diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_inval_types.h b/drivers/gpu/drm/xe/xe_tlb_inval_types.h
similarity index 56%
rename from drivers/gpu/drm/xe/xe_gt_tlb_inval_types.h
rename to drivers/gpu/drm/xe/xe_tlb_inval_types.h
index 442f72b78ccf..6d14b9f17b91 100644
--- a/drivers/gpu/drm/xe/xe_gt_tlb_inval_types.h
+++ b/drivers/gpu/drm/xe/xe_tlb_inval_types.h
@@ -3,58 +3,57 @@
  * Copyright © 2023 Intel Corporation
  */
 
-#ifndef _XE_GT_TLB_INVAL_TYPES_H_
-#define _XE_GT_TLB_INVAL_TYPES_H_
+#ifndef _XE_TLB_INVAL_TYPES_H_
+#define _XE_TLB_INVAL_TYPES_H_
 
 #include <linux/workqueue.h>
 #include <linux/dma-fence.h>
 
-struct xe_gt;
-
 /** struct xe_tlb_inval - TLB invalidation client */
 struct xe_tlb_inval {
+	/** @private: Backend private pointer */
+	void *private;
 	/** @tlb_inval.seqno: TLB invalidation seqno, protected by CT lock */
 #define TLB_INVALIDATION_SEQNO_MAX	0x100000
 	int seqno;
 	/** @tlb_invalidation.seqno_lock: protects @tlb_invalidation.seqno */
 	struct mutex seqno_lock;
 	/**
-	 * @tlb_inval.seqno_recv: last received TLB invalidation seqno,
-	 * protected by CT lock
+	 * @seqno_recv: last received TLB invalidation seqno, protected by
+	 * CT lock
 	 */
 	int seqno_recv;
 	/**
-	 * @tlb_inval.pending_fences: list of pending fences waiting TLB
-	 * invaliations, protected by CT lock
+	 * @pending_fences: list of pending fences waiting TLB invaliations,
+	 * protected CT lock
 	 */
 	struct list_head pending_fences;
 	/**
-	 * @tlb_inval.pending_lock: protects @tlb_inval.pending_fences
-	 * and updating @tlb_inval.seqno_recv.
+	 * @pending_lock: protects @pending_fences and updating @seqno_recv.
 	 */
 	spinlock_t pending_lock;
 	/**
-	 * @tlb_inval.fence_tdr: schedules a delayed call to
-	 * xe_gt_tlb_fence_timeout after the timeut interval is over.
+	 * @fence_tdr: schedules a delayed call to xe_tlb_fence_timeout after
+	 * the timeout interval is over.
 	 */
 	struct delayed_work fence_tdr;
-	/** @wtlb_invalidation.wq: schedules GT TLB invalidation jobs */
+	/** @job_wq: schedules TLB invalidation jobs */
 	struct workqueue_struct *job_wq;
 	/** @tlb_inval.lock: protects TLB invalidation fences */
 	spinlock_t lock;
 };
 
 /**
- * struct xe_gt_tlb_inval_fence - XE GT TLB invalidation fence
+ * struct xe_tlb_inval_fence - TLB invalidation fence
  *
- * Optionally passed to xe_gt_tlb_inval and will be signaled upon TLB
+ * Optionally passed to xe_tlb_inval* functions and will be signaled upon TLB
  * invalidation completion.
  */
-struct xe_gt_tlb_inval_fence {
+struct xe_tlb_inval_fence {
 	/** @base: dma fence base */
 	struct dma_fence base;
-	/** @gt: GT which fence belong to */
-	struct xe_gt *gt;
+	/** @tlb_inval: TLB invalidation client which fence belong to */
+	struct xe_tlb_inval *tlb_inval;
 	/** @link: link into list of pending tlb fences */
 	struct list_head link;
 	/** @seqno: seqno of TLB invalidation to signal fence one */
diff --git a/drivers/gpu/drm/xe/xe_trace.h b/drivers/gpu/drm/xe/xe_trace.h
index 36538f50d06f..314f42fcbcbd 100644
--- a/drivers/gpu/drm/xe/xe_trace.h
+++ b/drivers/gpu/drm/xe/xe_trace.h
@@ -14,10 +14,10 @@
 
 #include "xe_exec_queue_types.h"
 #include "xe_gpu_scheduler_types.h"
-#include "xe_gt_tlb_inval_types.h"
 #include "xe_gt_types.h"
 #include "xe_guc_exec_queue_types.h"
 #include "xe_sched_job.h"
+#include "xe_tlb_inval_types.h"
 #include "xe_vm.h"
 
 #define __dev_name_xe(xe)	dev_name((xe)->drm.dev)
@@ -25,13 +25,13 @@
 #define __dev_name_gt(gt)	__dev_name_xe(gt_to_xe((gt)))
 #define __dev_name_eq(q)	__dev_name_gt((q)->gt)
 
-DECLARE_EVENT_CLASS(xe_gt_tlb_inval_fence,
-		    TP_PROTO(struct xe_device *xe, struct xe_gt_tlb_inval_fence *fence),
+DECLARE_EVENT_CLASS(xe_tlb_inval_fence,
+		    TP_PROTO(struct xe_device *xe, struct xe_tlb_inval_fence *fence),
 		    TP_ARGS(xe, fence),
 
 		    TP_STRUCT__entry(
 			     __string(dev, __dev_name_xe(xe))
-			     __field(struct xe_gt_tlb_inval_fence *, fence)
+			     __field(struct xe_tlb_inval_fence *, fence)
 			     __field(int, seqno)
 			     ),
 
@@ -45,23 +45,23 @@ DECLARE_EVENT_CLASS(xe_gt_tlb_inval_fence,
 			      __get_str(dev), __entry->fence, __entry->seqno)
 );
 
-DEFINE_EVENT(xe_gt_tlb_inval_fence, xe_gt_tlb_inval_fence_send,
-	     TP_PROTO(struct xe_device *xe, struct xe_gt_tlb_inval_fence *fence),
+DEFINE_EVENT(xe_tlb_inval_fence, xe_tlb_inval_fence_send,
+	     TP_PROTO(struct xe_device *xe, struct xe_tlb_inval_fence *fence),
 	     TP_ARGS(xe, fence)
 );
 
-DEFINE_EVENT(xe_gt_tlb_inval_fence, xe_gt_tlb_inval_fence_recv,
-	     TP_PROTO(struct xe_device *xe, struct xe_gt_tlb_inval_fence *fence),
+DEFINE_EVENT(xe_tlb_inval_fence, xe_tlb_inval_fence_recv,
+	     TP_PROTO(struct xe_device *xe, struct xe_tlb_inval_fence *fence),
 	     TP_ARGS(xe, fence)
 );
 
-DEFINE_EVENT(xe_gt_tlb_inval_fence, xe_gt_tlb_inval_fence_signal,
-	     TP_PROTO(struct xe_device *xe, struct xe_gt_tlb_inval_fence *fence),
+DEFINE_EVENT(xe_tlb_inval_fence, xe_tlb_inval_fence_signal,
+	     TP_PROTO(struct xe_device *xe, struct xe_tlb_inval_fence *fence),
 	     TP_ARGS(xe, fence)
 );
 
-DEFINE_EVENT(xe_gt_tlb_inval_fence, xe_gt_tlb_inval_fence_timeout,
-	     TP_PROTO(struct xe_device *xe, struct xe_gt_tlb_inval_fence *fence),
+DEFINE_EVENT(xe_tlb_inval_fence, xe_tlb_inval_fence_timeout,
+	     TP_PROTO(struct xe_device *xe, struct xe_tlb_inval_fence *fence),
 	     TP_ARGS(xe, fence)
 );
 
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index fd42aa1b7fa0..d6a6e41d55fa 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -28,7 +28,6 @@
 #include "xe_drm_client.h"
 #include "xe_exec_queue.h"
 #include "xe_gt_pagefault.h"
-#include "xe_gt_tlb_inval.h"
 #include "xe_migrate.h"
 #include "xe_pat.h"
 #include "xe_pm.h"
@@ -38,6 +37,7 @@
 #include "xe_res_cursor.h"
 #include "xe_svm.h"
 #include "xe_sync.h"
+#include "xe_tlb_inval.h"
 #include "xe_trace_bo.h"
 #include "xe_wa.h"
 #include "xe_hmm.h"
@@ -1892,7 +1892,7 @@ static void xe_vm_close(struct xe_vm *vm)
 					xe_pt_clear(xe, vm->pt_root[id]);
 
 			for_each_gt(gt, xe, id)
-				xe_gt_tlb_inval_vm(gt, vm);
+				xe_tlb_inval_vm(&gt->tlb_inval, vm);
 		}
 	}
 
@@ -3889,7 +3889,7 @@ void xe_vm_unlock(struct xe_vm *vm)
 int xe_vm_range_tilemask_tlb_inval(struct xe_vm *vm, u64 start,
 				   u64 end, u8 tile_mask)
 {
-	struct xe_gt_tlb_inval_fence
+	struct xe_tlb_inval_fence
 		fence[XE_MAX_TILES_PER_DEVICE * XE_MAX_GT_PER_TILE];
 	struct xe_tile *tile;
 	u32 fence_id = 0;
@@ -3903,11 +3903,12 @@ int xe_vm_range_tilemask_tlb_inval(struct xe_vm *vm, u64 start,
 		if (!(tile_mask & BIT(id)))
 			continue;
 
-		xe_gt_tlb_inval_fence_init(tile->primary_gt,
-					   &fence[fence_id], true);
+		xe_tlb_inval_fence_init(&tile->primary_gt->tlb_inval,
+					&fence[fence_id], true);
 
-		err = xe_gt_tlb_inval_range(tile->primary_gt, &fence[fence_id],
-					    start, end, vm->usm.asid);
+		err = xe_tlb_inval_range(&tile->primary_gt->tlb_inval,
+					 &fence[fence_id], start, end,
+					 vm->usm.asid);
 		if (err)
 			goto wait;
 		++fence_id;
@@ -3915,11 +3916,12 @@ int xe_vm_range_tilemask_tlb_inval(struct xe_vm *vm, u64 start,
 		if (!tile->media_gt)
 			continue;
 
-		xe_gt_tlb_inval_fence_init(tile->media_gt,
-					   &fence[fence_id], true);
+		xe_tlb_inval_fence_init(&tile->media_gt->tlb_inval,
+					&fence[fence_id], true);
 
-		err = xe_gt_tlb_inval_range(tile->media_gt, &fence[fence_id],
-					    start, end, vm->usm.asid);
+		err = xe_tlb_inval_range(&tile->media_gt->tlb_inval,
+					 &fence[fence_id], start, end,
+					 vm->usm.asid);
 		if (err)
 			goto wait;
 		++fence_id;
@@ -3927,7 +3929,7 @@ int xe_vm_range_tilemask_tlb_inval(struct xe_vm *vm, u64 start,
 
 wait:
 	for (id = 0; id < fence_id; ++id)
-		xe_gt_tlb_inval_fence_wait(&fence[id]);
+		xe_tlb_inval_fence_wait(&fence[id]);
 
 	return err;
 }
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH 7/9] drm/xe: Prep TLB invalidation fence before sending
  2025-08-25 17:57 [PATCH 0/9] Add TLB invalidation abstraction Stuart Summers
                   ` (5 preceding siblings ...)
  2025-08-25 17:57 ` [PATCH 6/9] drm/xe: Decouple TLB invalidations from GT Stuart Summers
@ 2025-08-25 17:57 ` Stuart Summers
  2025-08-25 17:57 ` [PATCH 8/9] drm/xe: Add helpers to send TLB invalidations Stuart Summers
                   ` (5 subsequent siblings)
  12 siblings, 0 replies; 23+ messages in thread
From: Stuart Summers @ 2025-08-25 17:57 UTC (permalink / raw)
  Cc: intel-xe, matthew.brost, farah.kassabri, Stuart Summers

From: Matthew Brost <matthew.brost@intel.com>

It is a bit backwards to add a TLB invalidation fence to the pending
list after issuing the invalidation. Perform this step before issuing
the TLB invalidation in a helper function.

v2: Make sure the seqno_lock mutex covers the send as well (Matt)

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Stuart Summers <stuart.summers@intel.com>
Reviewed-by: Stuart Summers <stuart.summers@intel.com>
---
 drivers/gpu/drm/xe/xe_tlb_inval.c | 109 +++++++++++++++---------------
 1 file changed, 55 insertions(+), 54 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_tlb_inval.c b/drivers/gpu/drm/xe/xe_tlb_inval.c
index f4b7c0c74894..dff26f7954ec 100644
--- a/drivers/gpu/drm/xe/xe_tlb_inval.c
+++ b/drivers/gpu/drm/xe/xe_tlb_inval.c
@@ -66,19 +66,19 @@ __inval_fence_signal(struct xe_device *xe, struct xe_tlb_inval_fence *fence)
 static void
 inval_fence_signal(struct xe_device *xe, struct xe_tlb_inval_fence *fence)
 {
+	lockdep_assert_held(&fence->tlb_inval->pending_lock);
+
 	list_del(&fence->link);
 	__inval_fence_signal(xe, fence);
 }
 
-void xe_tlb_inval_fence_signal(struct xe_tlb_inval_fence *fence)
+static void
+inval_fence_signal_unlocked(struct xe_device *xe,
+			    struct xe_tlb_inval_fence *fence)
 {
-	struct xe_gt *gt;
-
-	if (WARN_ON_ONCE(!fence->tlb_inval))
-		return;
-
-	gt = fence->tlb_inval->private;
-	__inval_fence_signal(gt_to_xe(gt), fence);
+	spin_lock_irq(&fence->tlb_inval->pending_lock);
+	inval_fence_signal(xe, fence);
+	spin_unlock_irq(&fence->tlb_inval->pending_lock);
 }
 
 static void xe_gt_tlb_fence_timeout(struct work_struct *work)
@@ -221,14 +221,10 @@ static bool tlb_inval_seqno_past(struct xe_gt *gt, int seqno)
 	return seqno_recv >= seqno;
 }
 
-static int send_tlb_inval(struct xe_guc *guc,
-			  struct xe_tlb_inval_fence *fence,
+static int send_tlb_inval(struct xe_guc *guc, struct xe_tlb_inval_fence *fence,
 			  u32 *action, int len)
 {
 	struct xe_gt *gt = guc_to_gt(guc);
-	struct xe_device *xe = gt_to_xe(gt);
-	int seqno;
-	int ret;
 
 	xe_gt_assert(gt, fence);
 
@@ -238,47 +234,36 @@ static int send_tlb_inval(struct xe_guc *guc,
 	 * need to be updated.
 	 */
 
-	mutex_lock(&gt->tlb_inval.seqno_lock);
-	seqno = gt->tlb_inval.seqno;
-	fence->seqno = seqno;
-	trace_xe_tlb_inval_fence_send(xe, fence);
-	action[1] = seqno;
-	ret = xe_guc_ct_send(&guc->ct, action, len,
-			     G2H_LEN_DW_TLB_INVALIDATE, 1);
-	if (!ret) {
-		spin_lock_irq(&gt->tlb_inval.pending_lock);
-		/*
-		 * We haven't actually published the TLB fence as per
-		 * pending_fences, but in theory our seqno could have already
-		 * been written as we acquired the pending_lock. In such a case
-		 * we can just go ahead and signal the fence here.
-		 */
-		if (tlb_inval_seqno_past(gt, seqno)) {
-			__inval_fence_signal(xe, fence);
-		} else {
-			fence->inval_time = ktime_get();
-			list_add_tail(&fence->link,
-				      &gt->tlb_inval.pending_fences);
-
-			if (list_is_singular(&gt->tlb_inval.pending_fences))
-				queue_delayed_work(system_wq,
-						   &gt->tlb_inval.fence_tdr,
-						   tlb_timeout_jiffies(gt));
-		}
-		spin_unlock_irq(&gt->tlb_inval.pending_lock);
-	} else {
-		__inval_fence_signal(xe, fence);
-	}
-	if (!ret) {
-		gt->tlb_inval.seqno = (gt->tlb_inval.seqno + 1) %
-			TLB_INVALIDATION_SEQNO_MAX;
-		if (!gt->tlb_inval.seqno)
-			gt->tlb_inval.seqno = 1;
-	}
-	mutex_unlock(&gt->tlb_inval.seqno_lock);
 	xe_gt_stats_incr(gt, XE_GT_STATS_ID_TLB_INVAL, 1);
+	action[1] = fence->seqno;
 
-	return ret;
+	return xe_guc_ct_send(&guc->ct, action, len,
+			      G2H_LEN_DW_TLB_INVALIDATE, 1);
+}
+
+static void xe_tlb_inval_fence_prep(struct xe_tlb_inval_fence *fence)
+{
+	struct xe_tlb_inval *tlb_inval = fence->tlb_inval;
+	struct xe_gt *gt = tlb_inval->private;
+	struct xe_device *xe = gt_to_xe(gt);
+
+	fence->seqno = tlb_inval->seqno;
+	trace_xe_tlb_inval_fence_send(xe, fence);
+
+	spin_lock_irq(&tlb_inval->pending_lock);
+	fence->inval_time = ktime_get();
+	list_add_tail(&fence->link, &tlb_inval->pending_fences);
+
+	if (list_is_singular(&tlb_inval->pending_fences))
+		queue_delayed_work(system_wq,
+				   &tlb_inval->fence_tdr,
+				   tlb_timeout_jiffies(gt));
+	spin_unlock_irq(&tlb_inval->pending_lock);
+
+	tlb_inval->seqno = (tlb_inval->seqno + 1) %
+		TLB_INVALIDATION_SEQNO_MAX;
+	if (!tlb_inval->seqno)
+		tlb_inval->seqno = 1;
 }
 
 #define MAKE_INVAL_OP(type)	((type << XE_GUC_TLB_INVAL_TYPE_SHIFT) | \
@@ -306,7 +291,14 @@ static int xe_tlb_inval_guc(struct xe_gt *gt,
 	};
 	int ret;
 
+	mutex_lock(&gt->tlb_inval.seqno_lock);
+	xe_tlb_inval_fence_prep(fence);
+
 	ret = send_tlb_inval(&gt->uc.guc, fence, action, ARRAY_SIZE(action));
+	if (ret < 0)
+		inval_fence_signal_unlocked(gt_to_xe(gt), fence);
+	mutex_unlock(&gt->tlb_inval.seqno_lock);
+
 	/*
 	 * -ECANCELED indicates the CT is stopped for a GT reset. TLB caches
 	 *  should be nuked on a GT reset so this error can be ignored.
@@ -433,7 +425,7 @@ int xe_tlb_inval_range(struct xe_tlb_inval *tlb_inval,
 #define MAX_TLB_INVALIDATION_LEN	7
 	u32 action[MAX_TLB_INVALIDATION_LEN];
 	u64 length = end - start;
-	int len = 0;
+	int len = 0, ret;
 
 	xe_gt_assert(gt, fence);
 
@@ -494,7 +486,16 @@ int xe_tlb_inval_range(struct xe_tlb_inval *tlb_inval,
 
 	xe_gt_assert(gt, len <= MAX_TLB_INVALIDATION_LEN);
 
-	return send_tlb_inval(&gt->uc.guc, fence, action, len);
+	mutex_lock(&gt->tlb_inval.seqno_lock);
+	xe_tlb_inval_fence_prep(fence);
+
+	ret = send_tlb_inval(&gt->uc.guc, fence, action,
+			     ARRAY_SIZE(action));
+	if (ret < 0)
+		inval_fence_signal_unlocked(xe, fence);
+	mutex_unlock(&gt->tlb_inval.seqno_lock);
+
+	return ret;
 }
 
 /**
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH 8/9] drm/xe: Add helpers to send TLB invalidations
  2025-08-25 17:57 [PATCH 0/9] Add TLB invalidation abstraction Stuart Summers
                   ` (6 preceding siblings ...)
  2025-08-25 17:57 ` [PATCH 7/9] drm/xe: Prep TLB invalidation fence before sending Stuart Summers
@ 2025-08-25 17:57 ` Stuart Summers
  2025-08-25 17:57 ` [PATCH 9/9] drm/xe: Split TLB invalidation code in frontend and backend Stuart Summers
                   ` (4 subsequent siblings)
  12 siblings, 0 replies; 23+ messages in thread
From: Stuart Summers @ 2025-08-25 17:57 UTC (permalink / raw)
  Cc: intel-xe, matthew.brost, farah.kassabri, Stuart Summers

From: Matthew Brost <matthew.brost@intel.com>

Break out the GuC specific code into helpers as part of the process to
decouple frontback TLB invalidation code from the backend.

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Stuart Summers <stuart.summers@intel.com>
Reviewed-by: Stuart Summers <stuart.summers@intel.com>
---
 drivers/gpu/drm/xe/xe_tlb_inval.c | 234 +++++++++++++++---------------
 1 file changed, 117 insertions(+), 117 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_tlb_inval.c b/drivers/gpu/drm/xe/xe_tlb_inval.c
index dff26f7954ec..d81a45f1dd7c 100644
--- a/drivers/gpu/drm/xe/xe_tlb_inval.c
+++ b/drivers/gpu/drm/xe/xe_tlb_inval.c
@@ -221,12 +221,11 @@ static bool tlb_inval_seqno_past(struct xe_gt *gt, int seqno)
 	return seqno_recv >= seqno;
 }
 
-static int send_tlb_inval(struct xe_guc *guc, struct xe_tlb_inval_fence *fence,
-			  u32 *action, int len)
+static int send_tlb_inval(struct xe_guc *guc, const u32 *action, int len)
 {
 	struct xe_gt *gt = guc_to_gt(guc);
 
-	xe_gt_assert(gt, fence);
+	xe_gt_assert(gt, action[1]);	/* Seqno */
 
 	/*
 	 * XXX: The seqno algorithm relies on TLB invalidation being processed
@@ -235,7 +234,6 @@ static int send_tlb_inval(struct xe_guc *guc, struct xe_tlb_inval_fence *fence,
 	 */
 
 	xe_gt_stats_incr(gt, XE_GT_STATS_ID_TLB_INVAL, 1);
-	action[1] = fence->seqno;
 
 	return xe_guc_ct_send(&guc->ct, action, len,
 			      G2H_LEN_DW_TLB_INVALIDATE, 1);
@@ -270,91 +268,15 @@ static void xe_tlb_inval_fence_prep(struct xe_tlb_inval_fence *fence)
 		XE_GUC_TLB_INVAL_MODE_HEAVY << XE_GUC_TLB_INVAL_MODE_SHIFT | \
 		XE_GUC_TLB_INVAL_FLUSH_CACHE)
 
-/**
- * xe_tlb_inval_guc - Issue a TLB invalidation on this GT for the GuC
- * @gt: GT structure
- * @fence: invalidation fence which will be signal on TLB invalidation
- * completion
- *
- * Issue a TLB invalidation for the GuC. Completion of TLB is asynchronous and
- * caller can use the invalidation fence to wait for completion.
- *
- * Return: 0 on success, negative error code on error
- */
-static int xe_tlb_inval_guc(struct xe_gt *gt,
-			    struct xe_tlb_inval_fence *fence)
+static int send_tlb_inval_ggtt(struct xe_gt *gt, int seqno)
 {
 	u32 action[] = {
 		XE_GUC_ACTION_TLB_INVALIDATION,
-		0,  /* seqno, replaced in send_tlb_inval */
+		seqno,
 		MAKE_INVAL_OP(XE_GUC_TLB_INVAL_GUC),
 	};
-	int ret;
-
-	mutex_lock(&gt->tlb_inval.seqno_lock);
-	xe_tlb_inval_fence_prep(fence);
-
-	ret = send_tlb_inval(&gt->uc.guc, fence, action, ARRAY_SIZE(action));
-	if (ret < 0)
-		inval_fence_signal_unlocked(gt_to_xe(gt), fence);
-	mutex_unlock(&gt->tlb_inval.seqno_lock);
-
-	/*
-	 * -ECANCELED indicates the CT is stopped for a GT reset. TLB caches
-	 *  should be nuked on a GT reset so this error can be ignored.
-	 */
-	if (ret == -ECANCELED)
-		return 0;
-
-	return ret;
-}
-
-/**
- * xe_tlb_inval_ggtt - Issue a TLB invalidation on this GT for the GGTT
- * @tlb_inval: TLB invalidation client
- *
- * Issue a TLB invalidation for the GGTT. Completion of TLB invalidation is
- * synchronous.
- *
- * Return: 0 on success, negative error code on error
- */
-int xe_tlb_inval_ggtt(struct xe_tlb_inval *tlb_inval)
-{
-	struct xe_gt *gt = tlb_inval->private;
-	struct xe_device *xe = gt_to_xe(gt);
-	unsigned int fw_ref;
-
-	if (xe_guc_ct_enabled(&gt->uc.guc.ct) &&
-	    gt->uc.guc.submission_state.enabled) {
-		struct xe_tlb_inval_fence fence;
-		int ret;
-
-		xe_tlb_inval_fence_init(tlb_inval, &fence, true);
-		ret = xe_tlb_inval_guc(gt, &fence);
-		if (ret)
-			return ret;
-
-		xe_tlb_inval_fence_wait(&fence);
-	} else if (xe_device_uc_enabled(xe) && !xe_device_wedged(xe)) {
-		struct xe_mmio *mmio = &gt->mmio;
 
-		if (IS_SRIOV_VF(xe))
-			return 0;
-
-		fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
-		if (xe->info.platform == XE_PVC || GRAPHICS_VER(xe) >= 20) {
-			xe_mmio_write32(mmio, PVC_GUC_TLB_INV_DESC1,
-					PVC_GUC_TLB_INV_DESC1_INVALIDATE);
-			xe_mmio_write32(mmio, PVC_GUC_TLB_INV_DESC0,
-					PVC_GUC_TLB_INV_DESC0_VALID);
-		} else {
-			xe_mmio_write32(mmio, GUC_TLB_INV_CR,
-					GUC_TLB_INV_CR_INVALIDATE);
-		}
-		xe_force_wake_put(gt_to_fw(gt), fw_ref);
-	}
-
-	return 0;
+	return send_tlb_inval(&gt->uc.guc, action, ARRAY_SIZE(action));
 }
 
 static int send_tlb_inval_all(struct xe_tlb_inval *tlb_inval,
@@ -369,7 +291,7 @@ static int send_tlb_inval_all(struct xe_tlb_inval *tlb_inval,
 
 	xe_gt_assert(gt, fence);
 
-	return send_tlb_inval(&gt->uc.guc, fence, action, ARRAY_SIZE(action));
+	return send_tlb_inval(&gt->uc.guc, action, ARRAY_SIZE(action));
 }
 
 /**
@@ -401,43 +323,17 @@ int xe_tlb_inval_all(struct xe_tlb_inval *tlb_inval,
  */
 #define MAX_RANGE_TLB_INVALIDATION_LENGTH (rounddown_pow_of_two(ULONG_MAX))
 
-/**
- * xe_tlb_inval_range - Issue a TLB invalidation on this GT for an address range
- * @tlb_inval: TLB invalidation client
- * @fence: invalidation fence which will be signal on TLB invalidation
- * completion
- * @start: start address
- * @end: end address
- * @asid: address space id
- *
- * Issue a range based TLB invalidation if supported, if not fallback to a full
- * TLB invalidation. Completion of TLB is asynchronous and caller can use
- * the invalidation fence to wait for completion.
- *
- * Return: Negative error code on error, 0 on success
- */
-int xe_tlb_inval_range(struct xe_tlb_inval *tlb_inval,
-		       struct xe_tlb_inval_fence *fence, u64 start, u64 end,
-		       u32 asid)
+static int send_tlb_inval_ppgtt(struct xe_gt *gt, u64 start, u64 end,
+				u32 asid, int seqno)
 {
-	struct xe_gt *gt = tlb_inval->private;
-	struct xe_device *xe = gt_to_xe(gt);
 #define MAX_TLB_INVALIDATION_LEN	7
 	u32 action[MAX_TLB_INVALIDATION_LEN];
 	u64 length = end - start;
-	int len = 0, ret;
-
-	xe_gt_assert(gt, fence);
-
-	/* Execlists not supported */
-	if (gt_to_xe(gt)->info.force_execlist) {
-		__inval_fence_signal(xe, fence);
-		return 0;
-	}
+	int len = 0;
 
 	action[len++] = XE_GUC_ACTION_TLB_INVALIDATION;
-	action[len++] = 0; /* seqno, replaced in send_tlb_inval */
-	if (!xe->info.has_range_tlb_inval ||
+	action[len++] = seqno;
+	if (!gt_to_xe(gt)->info.has_range_tlb_inval ||
 	    length > MAX_RANGE_TLB_INVALIDATION_LENGTH) {
 		action[len++] = MAKE_INVAL_OP(XE_GUC_TLB_INVAL_FULL);
 	} else {
@@ -486,11 +382,115 @@ int xe_tlb_inval_range(struct xe_tlb_inval *tlb_inval,
 
 	xe_gt_assert(gt, len <= MAX_TLB_INVALIDATION_LEN);
 
+	return send_tlb_inval(&gt->uc.guc, action, len);
+}
+
+static int __xe_tlb_inval_ggtt(struct xe_gt *gt,
+			       struct xe_tlb_inval_fence *fence)
+{
+	int ret;
+
+	mutex_lock(&gt->tlb_inval.seqno_lock);
+	xe_tlb_inval_fence_prep(fence);
+
+	ret = send_tlb_inval_ggtt(gt, fence->seqno);
+	if (ret < 0)
+		inval_fence_signal_unlocked(gt_to_xe(gt), fence);
+	mutex_unlock(&gt->tlb_inval.seqno_lock);
+
+	/*
+	 * -ECANCELED indicates the CT is stopped for a GT reset. TLB caches
+	 *  should be nuked on a GT reset so this error can be ignored.
+	 */
+	if (ret == -ECANCELED)
+		return 0;
+
+	return ret;
+}
+
+/**
+ * xe_tlb_inval_ggtt - Issue a TLB invalidation on this GT for the GGTT
+ * @tlb_inval: TLB invalidation client
+ *
+ * Issue a TLB invalidation for the GGTT. Completion of TLB invalidation is
+ * synchronous.
+ *
+ * Return: 0 on success, negative error code on error
+ */
+int xe_tlb_inval_ggtt(struct xe_tlb_inval *tlb_inval)
+{
+	struct xe_gt *gt = tlb_inval->private;
+	struct xe_device *xe = gt_to_xe(gt);
+	unsigned int fw_ref;
+
+	if (xe_guc_ct_enabled(&gt->uc.guc.ct) &&
+	    gt->uc.guc.submission_state.enabled) {
+		struct xe_tlb_inval_fence fence;
+		int ret;
+
+		xe_tlb_inval_fence_init(tlb_inval, &fence, true);
+		ret = __xe_tlb_inval_ggtt(gt, &fence);
+		if (ret)
+			return ret;
+
+		xe_tlb_inval_fence_wait(&fence);
+	} else if (xe_device_uc_enabled(xe) && !xe_device_wedged(xe)) {
+		struct xe_mmio *mmio = &gt->mmio;
+
+		if (IS_SRIOV_VF(xe))
+			return 0;
+
+		fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
+		if (xe->info.platform == XE_PVC || GRAPHICS_VER(xe) >= 20) {
+			xe_mmio_write32(mmio, PVC_GUC_TLB_INV_DESC1,
+					PVC_GUC_TLB_INV_DESC1_INVALIDATE);
+			xe_mmio_write32(mmio, PVC_GUC_TLB_INV_DESC0,
+					PVC_GUC_TLB_INV_DESC0_VALID);
+		} else {
+			xe_mmio_write32(mmio, GUC_TLB_INV_CR,
+					GUC_TLB_INV_CR_INVALIDATE);
+		}
+		xe_force_wake_put(gt_to_fw(gt), fw_ref);
+	}
+
+	return 0;
+}
+
+/**
+ * xe_tlb_inval_range - Issue a TLB invalidation on this GT for an address range
+ * @tlb_inval: TLB invalidation client
+ * @fence: invalidation fence which will be signal on TLB invalidation
+ * completion
+ * @start: start address
+ * @end: end address
+ * @asid: address space id
+ *
+ * Issue a range based TLB invalidation if supported, if not fallback to a full
+ * TLB invalidation. Completion of TLB is asynchronous and caller can use
+ * the invalidation fence to wait for completion.
+ *
+ * Return: Negative error code on error, 0 on success
+ */
+int xe_tlb_inval_range(struct xe_tlb_inval *tlb_inval,
+		       struct xe_tlb_inval_fence *fence, u64 start, u64 end,
+		       u32 asid)
+{
+	struct xe_gt *gt = tlb_inval->private;
+	struct xe_device *xe = gt_to_xe(gt);
+	int  ret;
+
+	xe_gt_assert(gt, fence);
+
+	/* Execlists not supported */
+	if (xe->info.force_execlist) {
+		__inval_fence_signal(xe, fence);
+		return 0;
+	}
+
 	mutex_lock(&gt->tlb_inval.seqno_lock);
 	xe_tlb_inval_fence_prep(fence);
 
-	ret = send_tlb_inval(&gt->uc.guc, fence, action,
-			     ARRAY_SIZE(action));
+	ret = send_tlb_inval_ppgtt(gt, start, end, asid, fence->seqno);
 	if (ret < 0)
 		inval_fence_signal_unlocked(xe, fence);
 	mutex_unlock(&gt->tlb_inval.seqno_lock);
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH 9/9] drm/xe: Split TLB invalidation code in frontend and backend
  2025-08-25 17:57 [PATCH 0/9] Add TLB invalidation abstraction Stuart Summers
                   ` (7 preceding siblings ...)
  2025-08-25 17:57 ` [PATCH 8/9] drm/xe: Add helpers to send TLB invalidations Stuart Summers
@ 2025-08-25 17:57 ` Stuart Summers
  2025-08-25 19:09 ` ✗ CI.checkpatch: warning for Add TLB invalidation abstraction (rev9) Patchwork
                   ` (3 subsequent siblings)
  12 siblings, 0 replies; 23+ messages in thread
From: Stuart Summers @ 2025-08-25 17:57 UTC (permalink / raw)
  Cc: intel-xe, matthew.brost, farah.kassabri, Stuart Summers

From: Matthew Brost <matthew.brost@intel.com>

The frontend exposes an API to the driver to send invalidations, handles
sequence number assignment, synchronization (fences), and provides a
timeout mechanism. The backend issues the actual invalidation to the
hardware (or firmware).

The new layering easily allows issuing TLB invalidations to different
hardware or firmware interfaces.

Normalize some naming while here too.

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Stuart Summers <stuart.summers@intel.com>
Reviewed-by: Stuart Summers <stuart.summers@intel.com>
---
 drivers/gpu/drm/xe/Makefile             |   1 +
 drivers/gpu/drm/xe/xe_gt.c              |   2 -
 drivers/gpu/drm/xe/xe_guc_ct.c          |   2 +-
 drivers/gpu/drm/xe/xe_guc_tlb_inval.c   | 242 +++++++++++
 drivers/gpu/drm/xe/xe_guc_tlb_inval.h   |  19 +
 drivers/gpu/drm/xe/xe_tlb_inval.c       | 537 +++++++-----------------
 drivers/gpu/drm/xe/xe_tlb_inval.h       |  15 +-
 drivers/gpu/drm/xe/xe_tlb_inval_types.h |  67 ++-
 8 files changed, 500 insertions(+), 385 deletions(-)
 create mode 100644 drivers/gpu/drm/xe/xe_guc_tlb_inval.c
 create mode 100644 drivers/gpu/drm/xe/xe_guc_tlb_inval.h

diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
index e4a363489072..65853f6e63c1 100644
--- a/drivers/gpu/drm/xe/Makefile
+++ b/drivers/gpu/drm/xe/Makefile
@@ -75,6 +75,7 @@ xe-y += xe_bb.o \
 	xe_guc_log.o \
 	xe_guc_pc.o \
 	xe_guc_submit.o \
+	xe_guc_tlb_inval.o \
 	xe_heci_gsc.o \
 	xe_huc.o \
 	xe_hw_engine.o \
diff --git a/drivers/gpu/drm/xe/xe_gt.c b/drivers/gpu/drm/xe/xe_gt.c
index 67ee7cdbd6ec..34505a6d93ed 100644
--- a/drivers/gpu/drm/xe/xe_gt.c
+++ b/drivers/gpu/drm/xe/xe_gt.c
@@ -603,8 +603,6 @@ static void xe_gt_fini(void *arg)
 	struct xe_gt *gt = arg;
 	int i;
 
-	xe_gt_tlb_inval_fini(gt);
-
 	for (i = 0; i < XE_ENGINE_CLASS_MAX; ++i)
 		xe_hw_fence_irq_finish(&gt->fence_irq[i]);
 
diff --git a/drivers/gpu/drm/xe/xe_guc_ct.c b/drivers/gpu/drm/xe/xe_guc_ct.c
index 5f38041cff4c..848065a25c44 100644
--- a/drivers/gpu/drm/xe/xe_guc_ct.c
+++ b/drivers/gpu/drm/xe/xe_guc_ct.c
@@ -30,9 +30,9 @@
 #include "xe_guc_log.h"
 #include "xe_guc_relay.h"
 #include "xe_guc_submit.h"
+#include "xe_guc_tlb_inval.h"
 #include "xe_map.h"
 #include "xe_pm.h"
-#include "xe_tlb_inval.h"
 #include "xe_trace_guc.h"
 
 static void receive_g2h(struct xe_guc_ct *ct);
diff --git a/drivers/gpu/drm/xe/xe_guc_tlb_inval.c b/drivers/gpu/drm/xe/xe_guc_tlb_inval.c
new file mode 100644
index 000000000000..6bf2103602f8
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_guc_tlb_inval.c
@@ -0,0 +1,242 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2025 Intel Corporation
+ */
+
+#include "abi/guc_actions_abi.h"
+
+#include "xe_device.h"
+#include "xe_gt_stats.h"
+#include "xe_gt_types.h"
+#include "xe_guc.h"
+#include "xe_guc_ct.h"
+#include "xe_guc_tlb_inval.h"
+#include "xe_force_wake.h"
+#include "xe_mmio.h"
+#include "xe_tlb_inval.h"
+
+#include "regs/xe_guc_regs.h"
+
+/*
+ * XXX: The seqno algorithm relies on TLB invalidation being processed in order
+ * which they currently are by the GuC, if that changes the algorithm will need
+ * to be updated.
+ */
+
+static int send_tlb_inval(struct xe_guc *guc, const u32 *action, int len)
+{
+	struct xe_gt *gt = guc_to_gt(guc);
+
+	xe_gt_assert(gt, action[1]);	/* Seqno */
+
+	xe_gt_stats_incr(gt, XE_GT_STATS_ID_TLB_INVAL, 1);
+	return xe_guc_ct_send(&guc->ct, action, len,
+			      G2H_LEN_DW_TLB_INVALIDATE, 1);
+}
+
+#define MAKE_INVAL_OP(type)	((type << XE_GUC_TLB_INVAL_TYPE_SHIFT) | \
+		XE_GUC_TLB_INVAL_MODE_HEAVY << XE_GUC_TLB_INVAL_MODE_SHIFT | \
+		XE_GUC_TLB_INVAL_FLUSH_CACHE)
+
+static int send_tlb_inval_all(struct xe_tlb_inval *tlb_inval, u32 seqno)
+{
+	struct xe_guc *guc = tlb_inval->private;
+	u32 action[] = {
+		XE_GUC_ACTION_TLB_INVALIDATION_ALL,
+		seqno,
+		MAKE_INVAL_OP(XE_GUC_TLB_INVAL_FULL),
+	};
+
+	return send_tlb_inval(guc, action, ARRAY_SIZE(action));
+}
+
+static int send_tlb_inval_ggtt(struct xe_tlb_inval *tlb_inval, u32 seqno)
+{
+	struct xe_guc *guc = tlb_inval->private;
+	struct xe_gt *gt = guc_to_gt(guc);
+	struct xe_device *xe = guc_to_xe(guc);
+
+	/*
+	 * Returning -ECANCELED in this function is squashed at the caller and
+	 * signals waiters.
+	 */
+
+	if (xe_guc_ct_enabled(&guc->ct) && guc->submission_state.enabled) {
+		u32 action[] = {
+			XE_GUC_ACTION_TLB_INVALIDATION,
+			seqno,
+			MAKE_INVAL_OP(XE_GUC_TLB_INVAL_GUC),
+		};
+
+		return send_tlb_inval(guc, action, ARRAY_SIZE(action));
+	} else if (xe_device_uc_enabled(xe) && !xe_device_wedged(xe)) {
+		struct xe_mmio *mmio = &gt->mmio;
+		unsigned int fw_ref;
+
+		if (IS_SRIOV_VF(xe))
+			return -ECANCELED;
+
+		fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
+		if (xe->info.platform == XE_PVC || GRAPHICS_VER(xe) >= 20) {
+			xe_mmio_write32(mmio, PVC_GUC_TLB_INV_DESC1,
+					PVC_GUC_TLB_INV_DESC1_INVALIDATE);
+			xe_mmio_write32(mmio, PVC_GUC_TLB_INV_DESC0,
+					PVC_GUC_TLB_INV_DESC0_VALID);
+		} else {
+			xe_mmio_write32(mmio, GUC_TLB_INV_CR,
+					GUC_TLB_INV_CR_INVALIDATE);
+		}
+		xe_force_wake_put(gt_to_fw(gt), fw_ref);
+	}
+
+	return -ECANCELED;
+}
+
+/*
+ * Ensure that roundup_pow_of_two(length) doesn't overflow.
+ * Note that roundup_pow_of_two() operates on unsigned long,
+ * not on u64.
+ */
+#define MAX_RANGE_TLB_INVALIDATION_LENGTH (rounddown_pow_of_two(ULONG_MAX))
+
+static int send_tlb_inval_ppgtt(struct xe_tlb_inval *tlb_inval, u32 seqno,
+				u64 start, u64 end, u32 asid)
+{
+#define MAX_TLB_INVALIDATION_LEN	7
+	struct xe_guc *guc = tlb_inval->private;
+	struct xe_gt *gt = guc_to_gt(guc);
+	u32 action[MAX_TLB_INVALIDATION_LEN];
+	u64 length = end - start;
+	int len = 0;
+
+	if (guc_to_xe(guc)->info.force_execlist)
+		return -ECANCELED;
+
+	action[len++] = XE_GUC_ACTION_TLB_INVALIDATION;
+	action[len++] = seqno;
+	if (!gt_to_xe(gt)->info.has_range_tlb_inval ||
+	    length > MAX_RANGE_TLB_INVALIDATION_LENGTH) {
+		action[len++] = MAKE_INVAL_OP(XE_GUC_TLB_INVAL_FULL);
+	} else {
+		u64 orig_start = start;
+		u64 align;
+
+		if (length < SZ_4K)
+			length = SZ_4K;
+
+		/*
+		 * We need to invalidate a higher granularity if start address
+		 * is not aligned to length. When start is not aligned with
+		 * length we need to find the length large enough to create an
+		 * address mask covering the required range.
+		 */
+		align = roundup_pow_of_two(length);
+		start = ALIGN_DOWN(start, align);
+		end = ALIGN(end, align);
+		length = align;
+		while (start + length < end) {
+			length <<= 1;
+			start = ALIGN_DOWN(orig_start, length);
+		}
+
+		/*
+		 * Minimum invalidation size for a 2MB page that the hardware
+		 * expects is 16MB
+		 */
+		if (length >= SZ_2M) {
+			length = max_t(u64, SZ_16M, length);
+			start = ALIGN_DOWN(orig_start, length);
+		}
+
+		xe_gt_assert(gt, length >= SZ_4K);
+		xe_gt_assert(gt, is_power_of_2(length));
+		xe_gt_assert(gt, !(length & GENMASK(ilog2(SZ_16M) - 1,
+						    ilog2(SZ_2M) + 1)));
+		xe_gt_assert(gt, IS_ALIGNED(start, length));
+
+		action[len++] = MAKE_INVAL_OP(XE_GUC_TLB_INVAL_PAGE_SELECTIVE);
+		action[len++] = asid;
+		action[len++] = lower_32_bits(start);
+		action[len++] = upper_32_bits(start);
+		action[len++] = ilog2(length) - ilog2(SZ_4K);
+	}
+
+	xe_gt_assert(gt, len <= MAX_TLB_INVALIDATION_LEN);
+
+	return send_tlb_inval(guc, action, len);
+}
+
+static bool tlb_inval_initialized(struct xe_tlb_inval *tlb_inval)
+{
+	struct xe_guc *guc = tlb_inval->private;
+
+	return xe_guc_ct_initialized(&guc->ct);
+}
+
+static void tlb_inval_flush(struct xe_tlb_inval *tlb_inval)
+{
+	struct xe_guc *guc = tlb_inval->private;
+
+	LNL_FLUSH_WORK(&guc->ct.g2h_worker);
+}
+
+static long tlb_inval_timeout_delay(struct xe_tlb_inval *tlb_inval)
+{
+	struct xe_guc *guc = tlb_inval->private;
+
+	/* this reflects what HW/GuC needs to process TLB inv request */
+	const long hw_tlb_timeout = HZ / 4;
+
+	/* this estimates actual delay caused by the CTB transport */
+	long delay = xe_guc_ct_queue_proc_time_jiffies(&guc->ct);
+
+	return hw_tlb_timeout + 2 * delay;
+}
+
+static const struct xe_tlb_inval_ops guc_tlb_inval_ops = {
+	.all = send_tlb_inval_all,
+	.ggtt = send_tlb_inval_ggtt,
+	.ppgtt = send_tlb_inval_ppgtt,
+	.initialized = tlb_inval_initialized,
+	.flush = tlb_inval_flush,
+	.timeout_delay = tlb_inval_timeout_delay,
+};
+
+/**
+ * xe_guc_tlb_inval_init_early() - Init GuC TLB invalidation early
+ * @guc: GuC object
+ * @tlb_inval: TLB invalidation client
+ *
+ * Inititialize GuC TLB invalidation by setting back pointer in TLB invalidation
+ * client to the GuC and setting GuC backend ops.
+ */
+void xe_guc_tlb_inval_init_early(struct xe_guc *guc,
+				 struct xe_tlb_inval *tlb_inval)
+{
+	tlb_inval->private = guc;
+	tlb_inval->ops = &guc_tlb_inval_ops;
+}
+
+/**
+ * xe_guc_tlb_inval_done_handler() - TLB invalidation done handler
+ * @guc: guc
+ * @msg: message indicating TLB invalidation done
+ * @len: length of message
+ *
+ * Parse seqno of TLB invalidation, wake any waiters for seqno, and signal any
+ * invalidation fences for seqno. Algorithm for this depends on seqno being
+ * received in-order and asserts this assumption.
+ *
+ * Return: 0 on success, -EPROTO for malformed messages.
+ */
+int xe_guc_tlb_inval_done_handler(struct xe_guc *guc, u32 *msg, u32 len)
+{
+	struct xe_gt *gt = guc_to_gt(guc);
+
+	if (unlikely(len != 1))
+		return -EPROTO;
+
+	xe_tlb_inval_done_handler(&gt->tlb_inval, msg[0]);
+
+	return 0;
+}
diff --git a/drivers/gpu/drm/xe/xe_guc_tlb_inval.h b/drivers/gpu/drm/xe/xe_guc_tlb_inval.h
new file mode 100644
index 000000000000..07d668b02e3d
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_guc_tlb_inval.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2025 Intel Corporation
+ */
+
+#ifndef _XE_GUC_TLB_INVAL_H_
+#define _XE_GUC_TLB_INVAL_H_
+
+#include <linux/types.h>
+
+struct xe_guc;
+struct xe_tlb_inval;
+
+void xe_guc_tlb_inval_init_early(struct xe_guc *guc,
+				 struct xe_tlb_inval *tlb_inval);
+
+int xe_guc_tlb_inval_done_handler(struct xe_guc *guc, u32 *msg, u32 len);
+
+#endif
diff --git a/drivers/gpu/drm/xe/xe_tlb_inval.c b/drivers/gpu/drm/xe/xe_tlb_inval.c
index d81a45f1dd7c..e6e97b5a7b5c 100644
--- a/drivers/gpu/drm/xe/xe_tlb_inval.c
+++ b/drivers/gpu/drm/xe/xe_tlb_inval.c
@@ -12,51 +12,45 @@
 #include "xe_gt_printk.h"
 #include "xe_guc.h"
 #include "xe_guc_ct.h"
+#include "xe_guc_tlb_inval.h"
 #include "xe_gt_stats.h"
 #include "xe_tlb_inval.h"
 #include "xe_mmio.h"
 #include "xe_pm.h"
-#include "xe_sriov.h"
+#include "xe_tlb_inval.h"
 #include "xe_trace.h"
-#include "regs/xe_guc_regs.h"
-
-#define FENCE_STACK_BIT		DMA_FENCE_FLAG_USER_BITS
 
-/*
- * TLB inval depends on pending commands in the CT queue and then the real
- * invalidation time. Double up the time to process full CT queue
- * just to be on the safe side.
+/**
+ * DOC: Xe TLB invalidation
+ *
+ * Xe TLB invalidation is implemented in two layers. The first is the frontend
+ * API, which provides an interface for TLB invalidations to the driver code.
+ * The frontend handles seqno assignment, synchronization (fences), and the
+ * timeout mechanism. The frontend is implemented via an embedded structure
+ * xe_tlb_inval that includes a set of ops hooking into the backend. The backend
+ * interacts with the hardware (or firmware) to perform the actual invalidation.
  */
-static long tlb_timeout_jiffies(struct xe_gt *gt)
-{
-	/* this reflects what HW/GuC needs to process TLB inv request */
-	const long hw_tlb_timeout = HZ / 4;
-
-	/* this estimates actual delay caused by the CTB transport */
-	long delay = xe_guc_ct_queue_proc_time_jiffies(&gt->uc.guc.ct);
 
-	return hw_tlb_timeout + 2 * delay;
-}
+#define FENCE_STACK_BIT		DMA_FENCE_FLAG_USER_BITS
 
 static void xe_tlb_inval_fence_fini(struct xe_tlb_inval_fence *fence)
 {
-	struct xe_gt *gt;
-
 	if (WARN_ON_ONCE(!fence->tlb_inval))
 		return;
 
-	gt = fence->tlb_inval->private;
-
-	xe_pm_runtime_put(gt_to_xe(gt));
+	xe_pm_runtime_put(fence->tlb_inval->xe);
 	fence->tlb_inval = NULL; /* fini() should be called once */
 }
 
 static void
-__inval_fence_signal(struct xe_device *xe, struct xe_tlb_inval_fence *fence)
+xe_tlb_inval_fence_signal(struct xe_tlb_inval_fence *fence)
 {
 	bool stack = test_bit(FENCE_STACK_BIT, &fence->base.flags);
 
-	trace_xe_tlb_inval_fence_signal(xe, fence);
+	lockdep_assert_held(&fence->tlb_inval->pending_lock);
+
+	list_del(&fence->link);
+	trace_xe_tlb_inval_fence_signal(fence->tlb_inval->xe, fence);
 	xe_tlb_inval_fence_fini(fence);
 	dma_fence_signal(&fence->base);
 	if (!stack)
@@ -64,57 +58,65 @@ __inval_fence_signal(struct xe_device *xe, struct xe_tlb_inval_fence *fence)
 }
 
 static void
-inval_fence_signal(struct xe_device *xe, struct xe_tlb_inval_fence *fence)
+xe_tlb_inval_fence_signal_unlocked(struct xe_tlb_inval_fence *fence)
 {
-	lockdep_assert_held(&fence->tlb_inval->pending_lock);
-
-	list_del(&fence->link);
-	__inval_fence_signal(xe, fence);
-}
+	struct xe_tlb_inval *tlb_inval = fence->tlb_inval;
 
-static void
-inval_fence_signal_unlocked(struct xe_device *xe,
-			    struct xe_tlb_inval_fence *fence)
-{
-	spin_lock_irq(&fence->tlb_inval->pending_lock);
-	inval_fence_signal(xe, fence);
-	spin_unlock_irq(&fence->tlb_inval->pending_lock);
+	spin_lock_irq(&tlb_inval->pending_lock);
+	xe_tlb_inval_fence_signal(fence);
+	spin_unlock_irq(&tlb_inval->pending_lock);
 }
 
-static void xe_gt_tlb_fence_timeout(struct work_struct *work)
+static void xe_tlb_inval_fence_timeout(struct work_struct *work)
 {
-	struct xe_gt *gt = container_of(work, struct xe_gt,
-					tlb_inval.fence_tdr.work);
-	struct xe_device *xe = gt_to_xe(gt);
+	struct xe_tlb_inval *tlb_inval = container_of(work, struct xe_tlb_inval,
+						      fence_tdr.work);
+	struct xe_device *xe = tlb_inval->xe;
 	struct xe_tlb_inval_fence *fence, *next;
+	long timeout_delay = tlb_inval->ops->timeout_delay(tlb_inval);
 
-	LNL_FLUSH_WORK(&gt->uc.guc.ct.g2h_worker);
+	tlb_inval->ops->flush(tlb_inval);
 
-	spin_lock_irq(&gt->tlb_inval.pending_lock);
+	spin_lock_irq(&tlb_inval->pending_lock);
 	list_for_each_entry_safe(fence, next,
-				 &gt->tlb_inval.pending_fences, link) {
+				 &tlb_inval->pending_fences, link) {
 		s64 since_inval_ms = ktime_ms_delta(ktime_get(),
 						    fence->inval_time);
 
-		if (msecs_to_jiffies(since_inval_ms) < tlb_timeout_jiffies(gt))
+		if (msecs_to_jiffies(since_inval_ms) < timeout_delay)
 			break;
 
 		trace_xe_tlb_inval_fence_timeout(xe, fence);
-		xe_gt_err(gt, "TLB invalidation fence timeout, seqno=%d recv=%d",
-			  fence->seqno, gt->tlb_inval.seqno_recv);
+		drm_err(&xe->drm,
+			"TLB invalidation fence timeout, seqno=%d recv=%d",
+			fence->seqno, tlb_inval->seqno_recv);
 
 		fence->base.error = -ETIME;
-		inval_fence_signal(xe, fence);
+		xe_tlb_inval_fence_signal(fence);
 	}
-	if (!list_empty(&gt->tlb_inval.pending_fences))
-		queue_delayed_work(system_wq,
-				   &gt->tlb_inval.fence_tdr,
-				   tlb_timeout_jiffies(gt));
-	spin_unlock_irq(&gt->tlb_inval.pending_lock);
+	if (!list_empty(&tlb_inval->pending_fences))
+		queue_delayed_work(system_wq, &tlb_inval->fence_tdr,
+				   timeout_delay);
+	spin_unlock_irq(&tlb_inval->pending_lock);
 }
 
 /**
- * xe_gt_tlb_inval_init_early - Initialize GT TLB invalidation state
+ * tlb_inval_fini - Clean up TLB invalidation state
+ * @drm: @drm_device
+ * @arg: pointer to struct @xe_tlb_inval
+ *
+ * Cancel pending fence workers and clean up any additional
+ * TLB invalidation state.
+ */
+static void tlb_inval_fini(struct drm_device *drm, void *arg)
+{
+	struct xe_tlb_inval *tlb_inval = arg;
+
+	xe_tlb_inval_reset(tlb_inval);
+}
+
+/**
+ * xe_gt_tlb_inval_init - Initialize TLB invalidation state
  * @gt: GT structure
  *
  * Initialize TLB invalidation state, purely software initialization, should
@@ -125,92 +127,84 @@ static void xe_gt_tlb_fence_timeout(struct work_struct *work)
 int xe_gt_tlb_inval_init_early(struct xe_gt *gt)
 {
 	struct xe_device *xe = gt_to_xe(gt);
+	struct xe_tlb_inval *tlb_inval = &gt->tlb_inval;
 	int err;
 
-	gt->tlb_inval.private = gt;
-	gt->tlb_inval.seqno = 1;
-	INIT_LIST_HEAD(&gt->tlb_inval.pending_fences);
-	spin_lock_init(&gt->tlb_inval.pending_lock);
-	spin_lock_init(&gt->tlb_inval.lock);
-	INIT_DELAYED_WORK(&gt->tlb_inval.fence_tdr,
-			  xe_gt_tlb_fence_timeout);
+	tlb_inval->xe = xe;
+	tlb_inval->seqno = 1;
+	INIT_LIST_HEAD(&tlb_inval->pending_fences);
+	spin_lock_init(&tlb_inval->pending_lock);
+	spin_lock_init(&tlb_inval->lock);
+	INIT_DELAYED_WORK(&tlb_inval->fence_tdr, xe_tlb_inval_fence_timeout);
 
-	err = drmm_mutex_init(&xe->drm, &gt->tlb_inval.seqno_lock);
+	err = drmm_mutex_init(&xe->drm, &tlb_inval->seqno_lock);
 	if (err)
 		return err;
 
-	gt->tlb_inval.job_wq =
-		drmm_alloc_ordered_workqueue(&gt_to_xe(gt)->drm, "gt-tbl-inval-job-wq",
-					     WQ_MEM_RECLAIM);
-	if (IS_ERR(gt->tlb_inval.job_wq))
-		return PTR_ERR(gt->tlb_inval.job_wq);
+	tlb_inval->job_wq = drmm_alloc_ordered_workqueue(&xe->drm,
+							 "gt-tbl-inval-job-wq",
+							 WQ_MEM_RECLAIM);
+	if (IS_ERR(tlb_inval->job_wq))
+		return PTR_ERR(tlb_inval->job_wq);
 
-	return 0;
+	/* XXX: Blindly setting up backend to GuC */
+	xe_guc_tlb_inval_init_early(&gt->uc.guc, tlb_inval);
+
+	return drmm_add_action_or_reset(&xe->drm, tlb_inval_fini, tlb_inval);
 }
 
 /**
- * xe_tlb_inval_reset - Initialize TLB invalidation reset
+ * xe_tlb_inval_reset() - TLB invalidation reset
  * @tlb_inval: TLB invalidation client
  *
  * Signal any pending invalidation fences, should be called during a GT reset
  */
 void xe_tlb_inval_reset(struct xe_tlb_inval *tlb_inval)
 {
-	struct xe_gt *gt = tlb_inval->private;
 	struct xe_tlb_inval_fence *fence, *next;
 	int pending_seqno;
 
 	/*
-	 * we can get here before the CTs are even initialized if we're wedging
-	 * very early, in which case there are not going to be any pending
-	 * fences so we can bail immediately.
+	 * we can get here before the backends are even initialized if we're
+	 * wedging very early, in which case there are not going to be any
+	 * pendind fences so we can bail immediately.
 	 */
-	if (!xe_guc_ct_initialized(&gt->uc.guc.ct))
+	if (!tlb_inval->ops->initialized(tlb_inval))
 		return;
 
 	/*
-	 * CT channel is already disabled at this point. No new TLB requests can
+	 * Backend is already disabled at this point. No new TLB requests can
 	 * appear.
 	 */
 
-	mutex_lock(&gt->tlb_inval.seqno_lock);
-	spin_lock_irq(&gt->tlb_inval.pending_lock);
-	cancel_delayed_work(&gt->tlb_inval.fence_tdr);
+	mutex_lock(&tlb_inval->seqno_lock);
+	spin_lock_irq(&tlb_inval->pending_lock);
+	cancel_delayed_work(&tlb_inval->fence_tdr);
 	/*
 	 * We might have various kworkers waiting for TLB flushes to complete
 	 * which are not tracked with an explicit TLB fence, however at this
-	 * stage that will never happen since the CT is already disabled, so
-	 * make sure we signal them here under the assumption that we have
+	 * stage that will never happen since the backend is already disabled,
+	 * so make sure we signal them here under the assumption that we have
 	 * completed a full GT reset.
 	 */
-	if (gt->tlb_inval.seqno == 1)
+	if (tlb_inval->seqno == 1)
 		pending_seqno = TLB_INVALIDATION_SEQNO_MAX - 1;
 	else
-		pending_seqno = gt->tlb_inval.seqno - 1;
-	WRITE_ONCE(gt->tlb_inval.seqno_recv, pending_seqno);
+		pending_seqno = tlb_inval->seqno - 1;
+	WRITE_ONCE(tlb_inval->seqno_recv, pending_seqno);
 
 	list_for_each_entry_safe(fence, next,
-				 &gt->tlb_inval.pending_fences, link)
-		inval_fence_signal(gt_to_xe(gt), fence);
-	spin_unlock_irq(&gt->tlb_inval.pending_lock);
-	mutex_unlock(&gt->tlb_inval.seqno_lock);
+				 &tlb_inval->pending_fences, link)
+		xe_tlb_inval_fence_signal(fence);
+	spin_unlock_irq(&tlb_inval->pending_lock);
+	mutex_unlock(&tlb_inval->seqno_lock);
 }
 
-/**
- *
- * xe_gt_tlb_inval_fini - Clean up GT TLB invalidation state
- *
- * Cancel pending fence workers and clean up any additional
- * GT TLB invalidation state.
- */
-void xe_gt_tlb_inval_fini(struct xe_gt *gt)
+static bool xe_tlb_inval_seqno_past(struct xe_tlb_inval *tlb_inval, int seqno)
 {
-	xe_gt_tlb_inval_reset(gt);
-}
+	int seqno_recv = READ_ONCE(tlb_inval->seqno_recv);
 
-static bool tlb_inval_seqno_past(struct xe_gt *gt, int seqno)
-{
-	int seqno_recv = READ_ONCE(gt->tlb_inval.seqno_recv);
+	lockdep_assert_held(&tlb_inval->pending_lock);
 
 	if (seqno - seqno_recv < -(TLB_INVALIDATION_SEQNO_MAX / 2))
 		return false;
@@ -221,41 +215,20 @@ static bool tlb_inval_seqno_past(struct xe_gt *gt, int seqno)
 	return seqno_recv >= seqno;
 }
 
-static int send_tlb_inval(struct xe_guc *guc, const u32 *action, int len)
-{
-	struct xe_gt *gt = guc_to_gt(guc);
-
-	xe_gt_assert(gt, action[1]);	/* Seqno */
-
-	/*
-	 * XXX: The seqno algorithm relies on TLB invalidation being processed
-	 * in order which they currently are, if that changes the algorithm will
-	 * need to be updated.
-	 */
-
-	xe_gt_stats_incr(gt, XE_GT_STATS_ID_TLB_INVAL, 1);
-
-	return xe_guc_ct_send(&guc->ct, action, len,
-			      G2H_LEN_DW_TLB_INVALIDATE, 1);
-}
-
 static void xe_tlb_inval_fence_prep(struct xe_tlb_inval_fence *fence)
 {
 	struct xe_tlb_inval *tlb_inval = fence->tlb_inval;
-	struct xe_gt *gt = tlb_inval->private;
-	struct xe_device *xe = gt_to_xe(gt);
 
 	fence->seqno = tlb_inval->seqno;
-	trace_xe_tlb_inval_fence_send(xe, fence);
+	trace_xe_tlb_inval_fence_send(tlb_inval->xe, fence);
 
 	spin_lock_irq(&tlb_inval->pending_lock);
 	fence->inval_time = ktime_get();
 	list_add_tail(&fence->link, &tlb_inval->pending_fences);
 
 	if (list_is_singular(&tlb_inval->pending_fences))
-		queue_delayed_work(system_wq,
-				   &tlb_inval->fence_tdr,
-				   tlb_timeout_jiffies(gt));
+		queue_delayed_work(system_wq, &tlb_inval->fence_tdr,
+				   tlb_inval->ops->timeout_delay(tlb_inval));
 	spin_unlock_irq(&tlb_inval->pending_lock);
 
 	tlb_inval->seqno = (tlb_inval->seqno + 1) %
@@ -264,200 +237,63 @@ static void xe_tlb_inval_fence_prep(struct xe_tlb_inval_fence *fence)
 		tlb_inval->seqno = 1;
 }
 
-#define MAKE_INVAL_OP(type)	((type << XE_GUC_TLB_INVAL_TYPE_SHIFT) | \
-		XE_GUC_TLB_INVAL_MODE_HEAVY << XE_GUC_TLB_INVAL_MODE_SHIFT | \
-		XE_GUC_TLB_INVAL_FLUSH_CACHE)
-
-static int send_tlb_inval_ggtt(struct xe_gt *gt, int seqno)
-{
-	u32 action[] = {
-		XE_GUC_ACTION_TLB_INVALIDATION,
-		seqno,
-		MAKE_INVAL_OP(XE_GUC_TLB_INVAL_GUC),
-	};
-
-	return send_tlb_inval(&gt->uc.guc, action, ARRAY_SIZE(action));
-}
-
-static int send_tlb_inval_all(struct xe_tlb_inval *tlb_inval,
-			      struct xe_tlb_inval_fence *fence)
-{
-	u32 action[] = {
-		XE_GUC_ACTION_TLB_INVALIDATION_ALL,
-		0,  /* seqno, replaced in send_tlb_inval */
-		MAKE_INVAL_OP(XE_GUC_TLB_INVAL_FULL),
-	};
-	struct xe_gt *gt = tlb_inval->private;
-
-	xe_gt_assert(gt, fence);
-
-	return send_tlb_inval(&gt->uc.guc, action, ARRAY_SIZE(action));
-}
+#define xe_tlb_inval_issue(__tlb_inval, __fence, op, args...)	\
+({								\
+	int __ret;						\
+								\
+	xe_assert((__tlb_inval)->xe, (__tlb_inval)->ops);	\
+	xe_assert((__tlb_inval)->xe, (__fence));		\
+								\
+	mutex_lock(&(__tlb_inval)->seqno_lock); 		\
+	xe_tlb_inval_fence_prep((__fence));			\
+	__ret = op((__tlb_inval), (__fence)->seqno, ##args);	\
+	if (__ret < 0)						\
+		xe_tlb_inval_fence_signal_unlocked((__fence));	\
+	mutex_unlock(&(__tlb_inval)->seqno_lock);		\
+								\
+	__ret == -ECANCELED ? 0 : __ret;			\
+})
 
 /**
- * xe_gt_tlb_invalidation_all - Invalidate all TLBs across PF and all VFs.
- * @gt: the &xe_gt structure
- * @fence: the &xe_tlb_inval_fence to be signaled on completion
+ * xe_tlb_inval_all() - Issue a TLB invalidation for all TLBs
+ * @tlb_inval: TLB invalidation client
+ * @fence: invalidation fence which will be signal on TLB invalidation
+ * completion
  *
- * Send a request to invalidate all TLBs across PF and all VFs.
+ * Issue a TLB invalidation for all TLBs. Completion of TLB is asynchronous and
+ * caller can use the invalidation fence to wait for completion.
  *
  * Return: 0 on success, negative error code on error
  */
 int xe_tlb_inval_all(struct xe_tlb_inval *tlb_inval,
 		     struct xe_tlb_inval_fence *fence)
 {
-	struct xe_gt *gt = tlb_inval->private;
-	int err;
-
-	err = send_tlb_inval_all(tlb_inval, fence);
-	if (err)
-		xe_gt_err(gt, "TLB invalidation request failed (%pe)", ERR_PTR(err));
-
-	return err;
-}
-
-/*
- * Ensure that roundup_pow_of_two(length) doesn't overflow.
- * Note that roundup_pow_of_two() operates on unsigned long,
- * not on u64.
- */
-#define MAX_RANGE_TLB_INVALIDATION_LENGTH (rounddown_pow_of_two(ULONG_MAX))
-
-static int send_tlb_inval_ppgtt(struct xe_gt *gt, u64 start, u64 end,
-				u32 asid, int seqno)
-{
-#define MAX_TLB_INVALIDATION_LEN	7
-	u32 action[MAX_TLB_INVALIDATION_LEN];
-	u64 length = end - start;
-	int len = 0;
-
-	action[len++] = XE_GUC_ACTION_TLB_INVALIDATION;
-	action[len++] = seqno;
-	if (!gt_to_xe(gt)->info.has_range_tlb_inval ||
-	    length > MAX_RANGE_TLB_INVALIDATION_LENGTH) {
-		action[len++] = MAKE_INVAL_OP(XE_GUC_TLB_INVAL_FULL);
-	} else {
-		u64 orig_start = start;
-		u64 align;
-
-		if (length < SZ_4K)
-			length = SZ_4K;
-
-		/*
-		 * We need to invalidate a higher granularity if start address
-		 * is not aligned to length. When start is not aligned with
-		 * length we need to find the length large enough to create an
-		 * address mask covering the required range.
-		 */
-		align = roundup_pow_of_two(length);
-		start = ALIGN_DOWN(start, align);
-		end = ALIGN(end, align);
-		length = align;
-		while (start + length < end) {
-			length <<= 1;
-			start = ALIGN_DOWN(orig_start, length);
-		}
-
-		/*
-		 * Minimum invalidation size for a 2MB page that the hardware
-		 * expects is 16MB
-		 */
-		if (length >= SZ_2M) {
-			length = max_t(u64, SZ_16M, length);
-			start = ALIGN_DOWN(orig_start, length);
-		}
-
-		xe_gt_assert(gt, length >= SZ_4K);
-		xe_gt_assert(gt, is_power_of_2(length));
-		xe_gt_assert(gt, !(length & GENMASK(ilog2(SZ_16M) - 1,
-						    ilog2(SZ_2M) + 1)));
-		xe_gt_assert(gt, IS_ALIGNED(start, length));
-
-		action[len++] = MAKE_INVAL_OP(XE_GUC_TLB_INVAL_PAGE_SELECTIVE);
-		action[len++] = asid;
-		action[len++] = lower_32_bits(start);
-		action[len++] = upper_32_bits(start);
-		action[len++] = ilog2(length) - ilog2(SZ_4K);
-	}
-
-	xe_gt_assert(gt, len <= MAX_TLB_INVALIDATION_LEN);
-
-	return send_tlb_inval(&gt->uc.guc, action, len);
-}
-
-static int __xe_tlb_inval_ggtt(struct xe_gt *gt,
-			       struct xe_tlb_inval_fence *fence)
-{
-	int ret;
-
-	mutex_lock(&gt->tlb_inval.seqno_lock);
-	xe_tlb_inval_fence_prep(fence);
-
-	ret = send_tlb_inval_ggtt(gt, fence->seqno);
-	if (ret < 0)
-		inval_fence_signal_unlocked(gt_to_xe(gt), fence);
-	mutex_unlock(&gt->tlb_inval.seqno_lock);
-
-	/*
-	 * -ECANCELED indicates the CT is stopped for a GT reset. TLB caches
-	 *  should be nuked on a GT reset so this error can be ignored.
-	 */
-	if (ret == -ECANCELED)
-		return 0;
-
-	return ret;
+	return xe_tlb_inval_issue(tlb_inval, fence, tlb_inval->ops->all);
 }
 
 /**
- * xe_tlb_inval_ggtt - Issue a TLB invalidation on this GT for the GGTT
+ * xe_tlb_inval_ggtt() - Issue a TLB invalidation for the GGTT
  * @tlb_inval: TLB invalidation client
  *
- * Issue a TLB invalidation for the GGTT. Completion of TLB invalidation is
- * synchronous.
+ * Issue a TLB invalidation for the GGTT. Completion of TLB is asynchronous and
+ * caller can use the invalidation fence to wait for completion.
  *
  * Return: 0 on success, negative error code on error
  */
 int xe_tlb_inval_ggtt(struct xe_tlb_inval *tlb_inval)
 {
-	struct xe_gt *gt = tlb_inval->private;
-	struct xe_device *xe = gt_to_xe(gt);
-	unsigned int fw_ref;
-
-	if (xe_guc_ct_enabled(&gt->uc.guc.ct) &&
-	    gt->uc.guc.submission_state.enabled) {
-		struct xe_tlb_inval_fence fence;
-		int ret;
-
-		xe_tlb_inval_fence_init(tlb_inval, &fence, true);
-		ret = __xe_tlb_inval_ggtt(gt, &fence);
-		if (ret)
-			return ret;
-
-		xe_tlb_inval_fence_wait(&fence);
-	} else if (xe_device_uc_enabled(xe) && !xe_device_wedged(xe)) {
-		struct xe_mmio *mmio = &gt->mmio;
-
-		if (IS_SRIOV_VF(xe))
-			return 0;
-
-		fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
-		if (xe->info.platform == XE_PVC || GRAPHICS_VER(xe) >= 20) {
-			xe_mmio_write32(mmio, PVC_GUC_TLB_INV_DESC1,
-					PVC_GUC_TLB_INV_DESC1_INVALIDATE);
-			xe_mmio_write32(mmio, PVC_GUC_TLB_INV_DESC0,
-					PVC_GUC_TLB_INV_DESC0_VALID);
-		} else {
-			xe_mmio_write32(mmio, GUC_TLB_INV_CR,
-					GUC_TLB_INV_CR_INVALIDATE);
-		}
-		xe_force_wake_put(gt_to_fw(gt), fw_ref);
-	}
+	struct xe_tlb_inval_fence fence, *fence_ptr = &fence;
+	int ret;
+
+	xe_tlb_inval_fence_init(tlb_inval, fence_ptr, true);
+	ret = xe_tlb_inval_issue(tlb_inval, fence_ptr, tlb_inval->ops->ggtt);
+	xe_tlb_inval_fence_wait(fence_ptr);
 
-	return 0;
+	return ret;
 }
 
 /**
- * xe_tlb_inval_range - Issue a TLB invalidation on this GT for an address range
+ * xe_tlb_inval_range() - Issue a TLB invalidation for an address range
  * @tlb_inval: TLB invalidation client
  * @fence: invalidation fence which will be signal on TLB invalidation
  * completion
@@ -475,31 +311,12 @@ int xe_tlb_inval_range(struct xe_tlb_inval *tlb_inval,
 		       struct xe_tlb_inval_fence *fence, u64 start, u64 end,
 		       u32 asid)
 {
-	struct xe_gt *gt = tlb_inval->private;
-	struct xe_device *xe = gt_to_xe(gt);
-	int  ret;
-
-	xe_gt_assert(gt, fence);
-
-	/* Execlists not supported */
-	if (xe->info.force_execlist) {
-		__inval_fence_signal(xe, fence);
-		return 0;
-	}
-
-	mutex_lock(&gt->tlb_inval.seqno_lock);
-	xe_tlb_inval_fence_prep(fence);
-
-	ret = send_tlb_inval_ppgtt(gt, start, end, asid, fence->seqno);
-	if (ret < 0)
-		inval_fence_signal_unlocked(xe, fence);
-	mutex_unlock(&gt->tlb_inval.seqno_lock);
-
-	return ret;
+	return xe_tlb_inval_issue(tlb_inval, fence, tlb_inval->ops->ppgtt,
+				  start, end, asid);
 }
 
 /**
- * xe_tlb_inval_vm - Issue a TLB invalidation on this GT for a VM
+ * xe_tlb_inval_vm() - Issue a TLB invalidation for a VM
  * @tlb_inval: TLB invalidation client
  * @vm: VM to invalidate
  *
@@ -509,27 +326,22 @@ void xe_tlb_inval_vm(struct xe_tlb_inval *tlb_inval, struct xe_vm *vm)
 {
 	struct xe_tlb_inval_fence fence;
 	u64 range = 1ull << vm->xe->info.va_bits;
-	int ret;
 
 	xe_tlb_inval_fence_init(tlb_inval, &fence, true);
-
-	ret = xe_tlb_inval_range(tlb_inval, &fence, 0, range, vm->usm.asid);
-	if (ret < 0)
-		return;
-
+	xe_tlb_inval_range(tlb_inval, &fence, 0, range, vm->usm.asid);
 	xe_tlb_inval_fence_wait(&fence);
 }
 
 /**
- * xe_tlb_inval_done_handler - TLB invalidation done handler
- * @gt: gt
+ * xe_tlb_inval_done_handler() - TLB invalidation done handler
+ * @tlb_inval: TLB invalidation client
  * @seqno: seqno of invalidation that is done
  *
  * Update recv seqno, signal any TLB invalidation fences, and restart TDR
  */
-static void xe_tlb_inval_done_handler(struct xe_gt *gt, int seqno)
+void xe_tlb_inval_done_handler(struct xe_tlb_inval *tlb_inval, int seqno)
 {
-	struct xe_device *xe = gt_to_xe(gt);
+	struct xe_device *xe = tlb_inval->xe;
 	struct xe_tlb_inval_fence *fence, *next;
 	unsigned long flags;
 
@@ -548,77 +360,53 @@ static void xe_tlb_inval_done_handler(struct xe_gt *gt, int seqno)
 	 * officially process the CT message like if racing against
 	 * process_g2h_msg().
 	 */
-	spin_lock_irqsave(&gt->tlb_inval.pending_lock, flags);
-	if (tlb_inval_seqno_past(gt, seqno)) {
-		spin_unlock_irqrestore(&gt->tlb_inval.pending_lock, flags);
+	spin_lock_irqsave(&tlb_inval->pending_lock, flags);
+	if (xe_tlb_inval_seqno_past(tlb_inval, seqno)) {
+		spin_unlock_irqrestore(&tlb_inval->pending_lock, flags);
 		return;
 	}
 
-	WRITE_ONCE(gt->tlb_inval.seqno_recv, seqno);
+	WRITE_ONCE(tlb_inval->seqno_recv, seqno);
 
 	list_for_each_entry_safe(fence, next,
-				 &gt->tlb_inval.pending_fences, link) {
+				 &tlb_inval->pending_fences, link) {
 		trace_xe_tlb_inval_fence_recv(xe, fence);
 
-		if (!tlb_inval_seqno_past(gt, fence->seqno))
+		if (!xe_tlb_inval_seqno_past(tlb_inval, fence->seqno))
 			break;
 
-		inval_fence_signal(xe, fence);
+		xe_tlb_inval_fence_signal(fence);
 	}
 
-	if (!list_empty(&gt->tlb_inval.pending_fences))
+	if (!list_empty(&tlb_inval->pending_fences))
 		mod_delayed_work(system_wq,
-				 &gt->tlb_inval.fence_tdr,
-				 tlb_timeout_jiffies(gt));
+				 &tlb_inval->fence_tdr,
+				 tlb_inval->ops->timeout_delay(tlb_inval));
 	else
-		cancel_delayed_work(&gt->tlb_inval.fence_tdr);
+		cancel_delayed_work(&tlb_inval->fence_tdr);
 
-	spin_unlock_irqrestore(&gt->tlb_inval.pending_lock, flags);
-}
-
-/**
- * xe_guc_tlb_inval_done_handler - TLB invalidation done handler
- * @guc: guc
- * @msg: message indicating TLB invalidation done
- * @len: length of message
- *
- * Parse seqno of TLB invalidation, wake any waiters for seqno, and signal any
- * invalidation fences for seqno. Algorithm for this depends on seqno being
- * received in-order and asserts this assumption.
- *
- * Return: 0 on success, -EPROTO for malformed messages.
- */
-int xe_guc_tlb_inval_done_handler(struct xe_guc *guc, u32 *msg, u32 len)
-{
-	struct xe_gt *gt = guc_to_gt(guc);
-
-	if (unlikely(len != 1))
-		return -EPROTO;
-
-	xe_tlb_inval_done_handler(gt, msg[0]);
-
-	return 0;
+	spin_unlock_irqrestore(&tlb_inval->pending_lock, flags);
 }
 
 static const char *
-inval_fence_get_driver_name(struct dma_fence *dma_fence)
+xe_inval_fence_get_driver_name(struct dma_fence *dma_fence)
 {
 	return "xe";
 }
 
 static const char *
-inval_fence_get_timeline_name(struct dma_fence *dma_fence)
+xe_inval_fence_get_timeline_name(struct dma_fence *dma_fence)
 {
-	return "inval_fence";
+	return "tlb_inval_fence";
 }
 
 static const struct dma_fence_ops inval_fence_ops = {
-	.get_driver_name = inval_fence_get_driver_name,
-	.get_timeline_name = inval_fence_get_timeline_name,
+	.get_driver_name = xe_inval_fence_get_driver_name,
+	.get_timeline_name = xe_inval_fence_get_timeline_name,
 };
 
 /**
- * xe_tlb_inval_fence_init - Initialize TLB invalidation fence
+ * xe_tlb_inval_fence_init() - Initialize TLB invalidation fence
  * @tlb_inval: TLB invalidation client
  * @fence: TLB invalidation fence to initialize
  * @stack: fence is stack variable
@@ -631,15 +419,12 @@ void xe_tlb_inval_fence_init(struct xe_tlb_inval *tlb_inval,
 			     struct xe_tlb_inval_fence *fence,
 			     bool stack)
 {
-	struct xe_gt *gt = tlb_inval->private;
-
-	xe_pm_runtime_get_noresume(gt_to_xe(gt));
+	xe_pm_runtime_get_noresume(tlb_inval->xe);
 
-	spin_lock_irq(&gt->tlb_inval.lock);
-	dma_fence_init(&fence->base, &inval_fence_ops,
-		       &gt->tlb_inval.lock,
+	spin_lock_irq(&tlb_inval->lock);
+	dma_fence_init(&fence->base, &inval_fence_ops, &tlb_inval->lock,
 		       dma_fence_context_alloc(1), 1);
-	spin_unlock_irq(&gt->tlb_inval.lock);
+	spin_unlock_irq(&tlb_inval->lock);
 	INIT_LIST_HEAD(&fence->link);
 	if (stack)
 		set_bit(FENCE_STACK_BIT, &fence->base.flags);
diff --git a/drivers/gpu/drm/xe/xe_tlb_inval.h b/drivers/gpu/drm/xe/xe_tlb_inval.h
index ab6f769c50be..554634dfd4e2 100644
--- a/drivers/gpu/drm/xe/xe_tlb_inval.h
+++ b/drivers/gpu/drm/xe/xe_tlb_inval.h
@@ -15,27 +15,32 @@ struct xe_guc;
 struct xe_vm;
 
 int xe_gt_tlb_inval_init_early(struct xe_gt *gt);
-void xe_gt_tlb_inval_fini(struct xe_gt *gt);
 
 void xe_tlb_inval_reset(struct xe_tlb_inval *tlb_inval);
-int xe_tlb_inval_ggtt(struct xe_tlb_inval *tlb_inval);
-void xe_tlb_inval_vm(struct xe_tlb_inval *tlb_inval, struct xe_vm *vm);
 int xe_tlb_inval_all(struct xe_tlb_inval *tlb_inval,
 		     struct xe_tlb_inval_fence *fence);
+int xe_tlb_inval_ggtt(struct xe_tlb_inval *tlb_inval);
+void xe_tlb_inval_vm(struct xe_tlb_inval *tlb_inval, struct xe_vm *vm);
 int xe_tlb_inval_range(struct xe_tlb_inval *tlb_inval,
 		       struct xe_tlb_inval_fence *fence,
 		       u64 start, u64 end, u32 asid);
-int xe_guc_tlb_inval_done_handler(struct xe_guc *guc, u32 *msg, u32 len);
 
 void xe_tlb_inval_fence_init(struct xe_tlb_inval *tlb_inval,
 			     struct xe_tlb_inval_fence *fence,
 			     bool stack);
-void xe_tlb_inval_fence_signal(struct xe_tlb_inval_fence *fence);
 
+/**
+ * xe_tlb_inval_fence_wait() - TLB invalidiation fence wait
+ * @fence: TLB invalidation fence to wait on
+ *
+ * Wait on a TLB invalidiation fence until it signals, non interruptable
+ */
 static inline void
 xe_tlb_inval_fence_wait(struct xe_tlb_inval_fence *fence)
 {
 	dma_fence_wait(&fence->base, false);
 }
 
+void xe_tlb_inval_done_handler(struct xe_tlb_inval *tlb_inval, int seqno);
+
 #endif	/* _XE_TLB_INVAL_ */
diff --git a/drivers/gpu/drm/xe/xe_tlb_inval_types.h b/drivers/gpu/drm/xe/xe_tlb_inval_types.h
index 6d14b9f17b91..8f8b060e9005 100644
--- a/drivers/gpu/drm/xe/xe_tlb_inval_types.h
+++ b/drivers/gpu/drm/xe/xe_tlb_inval_types.h
@@ -9,10 +9,75 @@
 #include <linux/workqueue.h>
 #include <linux/dma-fence.h>
 
-/** struct xe_tlb_inval - TLB invalidation client */
+struct xe_tlb_inval;
+
+/** struct xe_tlb_inval_ops - TLB invalidation ops (backend) */
+struct xe_tlb_inval_ops {
+	/**
+	 * @all: Invalidate all TLBs
+	 * @tlb_inval: TLB invalidation client
+	 * @seqno: Seqno of TLB invalidation
+	 *
+	 * Return 0 on success, -ECANCELED if backend is mid-reset, error on
+	 * failure
+	 */
+	int (*all)(struct xe_tlb_inval *tlb_inval, u32 seqno);
+
+	/**
+	 * @ggtt: Invalidate global translation TLBs
+	 * @tlb_inval: TLB invalidation client
+	 * @seqno: Seqno of TLB invalidation
+	 *
+	 * Return 0 on success, -ECANCELED if backend is mid-reset, error on
+	 * failure
+	 */
+	int (*ggtt)(struct xe_tlb_inval *tlb_inval, u32 seqno);
+
+	/**
+	 * @ppgtt: Invalidate per-process translation TLBs
+	 * @tlb_inval: TLB invalidation client
+	 * @seqno: Seqno of TLB invalidation
+	 * @start: Start address
+	 * @end: End address
+	 * @asid: Address space ID
+	 *
+	 * Return 0 on success, -ECANCELED if backend is mid-reset, error on
+	 * failure
+	 */
+	int (*ppgtt)(struct xe_tlb_inval *tlb_inval, u32 seqno, u64 start,
+		     u64 end, u32 asid);
+
+	/**
+	 * @initialized: Backend is initialized
+	 * @tlb_inval: TLB invalidation client
+	 *
+	 * Return: True if back is initialized, False otherwise
+	 */
+	bool (*initialized)(struct xe_tlb_inval *tlb_inval);
+
+	/**
+	 * @flush: Flush pending TLB invalidations
+	 * @tlb_inval: TLB invalidation client
+	 */
+	void (*flush)(struct xe_tlb_inval *tlb_inval);
+
+	/**
+	 * @timeout_delay: Timeout delay for TLB invalidation
+	 * @tlb_inval: TLB invalidation client
+	 *
+	 * Return: Timeout delay for TLB invalidation in jiffies
+	 */
+	long (*timeout_delay)(struct xe_tlb_inval *tlb_inval);
+};
+
+/** struct xe_tlb_inval - TLB invalidation client (frontend) */
 struct xe_tlb_inval {
 	/** @private: Backend private pointer */
 	void *private;
+	/** @xe: Pointer to Xe device */
+	struct xe_device *xe;
+	/** @ops: TLB invalidation ops */
+	const struct xe_tlb_inval_ops *ops;
 	/** @tlb_inval.seqno: TLB invalidation seqno, protected by CT lock */
 #define TLB_INVALIDATION_SEQNO_MAX	0x100000
 	int seqno;
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* Re: [PATCH 2/9] drm/xe: Cancel pending TLB inval workers on teardown
  2025-08-25 17:57 ` [PATCH 2/9] drm/xe: Cancel pending TLB inval workers on teardown Stuart Summers
@ 2025-08-25 18:06   ` Summers, Stuart
  2025-08-25 18:20     ` Matthew Brost
  0 siblings, 1 reply; 23+ messages in thread
From: Summers, Stuart @ 2025-08-25 18:06 UTC (permalink / raw)
  To: Summers, Stuart
  Cc: intel-xe@lists.freedesktop.org, Brost,  Matthew, Kassabri, Farah

On Mon, 2025-08-25 at 17:57 +0000, Stuart Summers wrote:
> Add a new _fini() routine on the GT TLB invalidation
> side to handle this worker cleanup on driver teardown.
> 
> v2: Move the TLB teardown to the gt fini() routine called during
>     gt_init rather than in gt_alloc. This way the GT structure stays
>     alive for while we reset the TLB state.
> 
> Signed-off-by: Stuart Summers <stuart.summers@intel.com>
> ---
>  drivers/gpu/drm/xe/xe_gt.c                  |  2 ++
>  drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c | 12 ++++++++++++
>  drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h |  1 +
>  3 files changed, 15 insertions(+)
> 
> diff --git a/drivers/gpu/drm/xe/xe_gt.c b/drivers/gpu/drm/xe/xe_gt.c
> index a3397f04abcc..178c4783bbda 100644
> --- a/drivers/gpu/drm/xe/xe_gt.c
> +++ b/drivers/gpu/drm/xe/xe_gt.c
> @@ -603,6 +603,8 @@ static void xe_gt_fini(void *arg)
>         struct xe_gt *gt = arg;
>         int i;
>  
> +       xe_gt_tlb_invalidation_fini(gt);
> +
>         for (i = 0; i < XE_ENGINE_CLASS_MAX; ++i)
>                 xe_hw_fence_irq_finish(&gt->fence_irq[i]);
>  
> diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
> b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
> index 75854b963d66..db00c5adead9 100644
> --- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
> +++ b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
> @@ -188,6 +188,18 @@ void xe_gt_tlb_invalidation_reset(struct xe_gt
> *gt)
>         mutex_unlock(&gt->tlb_invalidation.seqno_lock);
>  }
>  
> +/**
> + *
> + * xe_gt_tlb_invalidation_fini - Clean up GT TLB invalidation state
> + *
> + * Cancel pending fence workers and clean up any additional
> + * GT TLB invalidation state.
> + */
> +void xe_gt_tlb_invalidation_fini(struct xe_gt *gt)
> +{
> +       xe_gt_tlb_invalidation_reset(gt);

I've been seeing an issue on fault injection, running in a tight while
loop, where occasionally we see that a couple of sysfs files weren't
properly torn down on a prior driver instance, followed by TLB
invalidation timeouts. Up until today I was only able to reproduce that
with this series, so I wanted to be sure we weren't causing something
here, particularly with this _reset() call (one of the reasons I had
declined to include this in the original series). Today though, even
without the series, I was able to reproduce that behavior (-EEXIST on
sysfs create, followed by TLB inval timeout). So I don't think we
should block this series on that debug.

I see a few things in ci-buglog that could be related, although I don't
see any results in those to confirm:
https://gfx-ci.igk.intel.com/cibuglog-ng/issue/7175?query_key=a5707a4d3ae2ebb8c04ef6cea0ef747322df4ee1
https://gfx-ci.igk.intel.com/cibuglog-ng/issue/10412?query_key=402f2615406c4afa4814a29849b312a0c7b66e9c
https://gfx-ci.igk.intel.com/cibuglog-ng/issue/15004?query_key=e0ce601ae69ec76bbdf27293dc2919ba07357de3

Anyway, at least for this series, I think we can ignore that issue.

Thanks,
Stuart

> +}
> +
>  static bool tlb_invalidation_seqno_past(struct xe_gt *gt, int seqno)
>  {
>         int seqno_recv = READ_ONCE(gt->tlb_invalidation.seqno_recv);
> diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h
> b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h
> index f7f0f2eaf4b5..3e4cff3922d6 100644
> --- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h
> +++ b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h
> @@ -16,6 +16,7 @@ struct xe_vm;
>  struct xe_vma;
>  
>  int xe_gt_tlb_invalidation_init_early(struct xe_gt *gt);
> +void xe_gt_tlb_invalidation_fini(struct xe_gt *gt);
>  
>  void xe_gt_tlb_invalidation_reset(struct xe_gt *gt);
>  int xe_gt_tlb_invalidation_ggtt(struct xe_gt *gt);


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 2/9] drm/xe: Cancel pending TLB inval workers on teardown
  2025-08-25 18:06   ` Summers, Stuart
@ 2025-08-25 18:20     ` Matthew Brost
  2025-08-25 18:23       ` Summers, Stuart
  0 siblings, 1 reply; 23+ messages in thread
From: Matthew Brost @ 2025-08-25 18:20 UTC (permalink / raw)
  To: Summers, Stuart; +Cc: intel-xe@lists.freedesktop.org, Kassabri, Farah

On Mon, Aug 25, 2025 at 12:06:44PM -0600, Summers, Stuart wrote:
> On Mon, 2025-08-25 at 17:57 +0000, Stuart Summers wrote:
> > Add a new _fini() routine on the GT TLB invalidation
> > side to handle this worker cleanup on driver teardown.
> > 
> > v2: Move the TLB teardown to the gt fini() routine called during
> >     gt_init rather than in gt_alloc. This way the GT structure stays
> >     alive for while we reset the TLB state.
> > 
> > Signed-off-by: Stuart Summers <stuart.summers@intel.com>
> > ---
> >  drivers/gpu/drm/xe/xe_gt.c                  |  2 ++
> >  drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c | 12 ++++++++++++
> >  drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h |  1 +
> >  3 files changed, 15 insertions(+)
> > 
> > diff --git a/drivers/gpu/drm/xe/xe_gt.c b/drivers/gpu/drm/xe/xe_gt.c
> > index a3397f04abcc..178c4783bbda 100644
> > --- a/drivers/gpu/drm/xe/xe_gt.c
> > +++ b/drivers/gpu/drm/xe/xe_gt.c
> > @@ -603,6 +603,8 @@ static void xe_gt_fini(void *arg)
> >         struct xe_gt *gt = arg;
> >         int i;
> >  
> > +       xe_gt_tlb_invalidation_fini(gt);
> > +
> >         for (i = 0; i < XE_ENGINE_CLASS_MAX; ++i)
> >                 xe_hw_fence_irq_finish(&gt->fence_irq[i]);
> >  
> > diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
> > b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
> > index 75854b963d66..db00c5adead9 100644
> > --- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
> > +++ b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
> > @@ -188,6 +188,18 @@ void xe_gt_tlb_invalidation_reset(struct xe_gt
> > *gt)
> >         mutex_unlock(&gt->tlb_invalidation.seqno_lock);
> >  }
> >  
> > +/**
> > + *
> > + * xe_gt_tlb_invalidation_fini - Clean up GT TLB invalidation state
> > + *
> > + * Cancel pending fence workers and clean up any additional
> > + * GT TLB invalidation state.
> > + */
> > +void xe_gt_tlb_invalidation_fini(struct xe_gt *gt)
> > +{
> > +       xe_gt_tlb_invalidation_reset(gt);
> 
> I've been seeing an issue on fault injection, running in a tight while

I think fault injection case will be fixed by [1] whenever that merges.

[1] https://patchwork.freedesktop.org/series/152870/

For general safety though, I think calling tlb_invalidation_reset is a
good idea.

> loop, where occasionally we see that a couple of sysfs files weren't
> properly torn down on a prior driver instance, followed by TLB
> invalidation timeouts. Up until today I was only able to reproduce that
> with this series, so I wanted to be sure we weren't causing something
> here, particularly with this _reset() call (one of the reasons I had
> declined to include this in the original series). Today though, even
> without the series, I was able to reproduce that behavior (-EEXIST on
> sysfs create, followed by TLB inval timeout). So I don't think we
> should block this series on that debug.
> 

I agree. The prior CI run LGTM. The failure here [2] should be fixed by
[3] which merged last night.

[2] https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v8/shard-lnl-7/igt@xe_exec_compute_mode@many-execqueues-userptr-rebind.html
[3] https://patchwork.freedesktop.org/series/153197/ 

> I see a few things in ci-buglog that could be related, although I don't
> see any results in those to confirm:
> https://gfx-ci.igk.intel.com/cibuglog-ng/issue/7175?query_key=a5707a4d3ae2ebb8c04ef6cea0ef747322df4ee1
> https://gfx-ci.igk.intel.com/cibuglog-ng/issue/10412?query_key=402f2615406c4afa4814a29849b312a0c7b66e9c
> https://gfx-ci.igk.intel.com/cibuglog-ng/issue/15004?query_key=e0ce601ae69ec76bbdf27293dc2919ba07357de3
> 
> Anyway, at least for this series, I think we can ignore that issue.
> 

Again agree. I think if this CI run is clean, go ahead and merge.

With that:
Reviewed-by: Matthew Brost <matthew.brost@intel.com>

> Thanks,
> Stuart
> 
> > +}
> > +
> >  static bool tlb_invalidation_seqno_past(struct xe_gt *gt, int seqno)
> >  {
> >         int seqno_recv = READ_ONCE(gt->tlb_invalidation.seqno_recv);
> > diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h
> > b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h
> > index f7f0f2eaf4b5..3e4cff3922d6 100644
> > --- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h
> > +++ b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h
> > @@ -16,6 +16,7 @@ struct xe_vm;
> >  struct xe_vma;
> >  
> >  int xe_gt_tlb_invalidation_init_early(struct xe_gt *gt);
> > +void xe_gt_tlb_invalidation_fini(struct xe_gt *gt);
> >  
> >  void xe_gt_tlb_invalidation_reset(struct xe_gt *gt);
> >  int xe_gt_tlb_invalidation_ggtt(struct xe_gt *gt);
> 

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 2/9] drm/xe: Cancel pending TLB inval workers on teardown
  2025-08-25 18:20     ` Matthew Brost
@ 2025-08-25 18:23       ` Summers, Stuart
  2025-08-25 18:32         ` Summers, Stuart
  0 siblings, 1 reply; 23+ messages in thread
From: Summers, Stuart @ 2025-08-25 18:23 UTC (permalink / raw)
  To: Brost, Matthew; +Cc: intel-xe@lists.freedesktop.org, Kassabri, Farah

On Mon, 2025-08-25 at 11:20 -0700, Matthew Brost wrote:
> On Mon, Aug 25, 2025 at 12:06:44PM -0600, Summers, Stuart wrote:
> > On Mon, 2025-08-25 at 17:57 +0000, Stuart Summers wrote:
> > > Add a new _fini() routine on the GT TLB invalidation
> > > side to handle this worker cleanup on driver teardown.
> > > 
> > > v2: Move the TLB teardown to the gt fini() routine called during
> > >     gt_init rather than in gt_alloc. This way the GT structure
> > > stays
> > >     alive for while we reset the TLB state.
> > > 
> > > Signed-off-by: Stuart Summers <stuart.summers@intel.com>
> > > ---
> > >  drivers/gpu/drm/xe/xe_gt.c                  |  2 ++
> > >  drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c | 12 ++++++++++++
> > >  drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h |  1 +
> > >  3 files changed, 15 insertions(+)
> > > 
> > > diff --git a/drivers/gpu/drm/xe/xe_gt.c
> > > b/drivers/gpu/drm/xe/xe_gt.c
> > > index a3397f04abcc..178c4783bbda 100644
> > > --- a/drivers/gpu/drm/xe/xe_gt.c
> > > +++ b/drivers/gpu/drm/xe/xe_gt.c
> > > @@ -603,6 +603,8 @@ static void xe_gt_fini(void *arg)
> > >         struct xe_gt *gt = arg;
> > >         int i;
> > >  
> > > +       xe_gt_tlb_invalidation_fini(gt);
> > > +
> > >         for (i = 0; i < XE_ENGINE_CLASS_MAX; ++i)
> > >                 xe_hw_fence_irq_finish(&gt->fence_irq[i]);
> > >  
> > > diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
> > > b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
> > > index 75854b963d66..db00c5adead9 100644
> > > --- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
> > > +++ b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
> > > @@ -188,6 +188,18 @@ void xe_gt_tlb_invalidation_reset(struct
> > > xe_gt
> > > *gt)
> > >         mutex_unlock(&gt->tlb_invalidation.seqno_lock);
> > >  }
> > >  
> > > +/**
> > > + *
> > > + * xe_gt_tlb_invalidation_fini - Clean up GT TLB invalidation
> > > state
> > > + *
> > > + * Cancel pending fence workers and clean up any additional
> > > + * GT TLB invalidation state.
> > > + */
> > > +void xe_gt_tlb_invalidation_fini(struct xe_gt *gt)
> > > +{
> > > +       xe_gt_tlb_invalidation_reset(gt);
> > 
> > I've been seeing an issue on fault injection, running in a tight
> > while
> 
> I think fault injection case will be fixed by [1] whenever that
> merges.

Oh excellent. I had seen the series here but hadn't thought to test as
I was worried the injection error was related to mine. I'll check that
out here shortly just to confirm. I do have the basic error injection
change locally too that forces this, so might float that if it does.

> 
> [1] https://patchwork.freedesktop.org/series/152870/
> 
> For general safety though, I think calling tlb_invalidation_reset is
> a
> good idea.
> 
> > loop, where occasionally we see that a couple of sysfs files
> > weren't
> > properly torn down on a prior driver instance, followed by TLB
> > invalidation timeouts. Up until today I was only able to reproduce
> > that
> > with this series, so I wanted to be sure we weren't causing
> > something
> > here, particularly with this _reset() call (one of the reasons I
> > had
> > declined to include this in the original series). Today though,
> > even
> > without the series, I was able to reproduce that behavior (-EEXIST
> > on
> > sysfs create, followed by TLB inval timeout). So I don't think we
> > should block this series on that debug.
> > 
> 
> I agree. The prior CI run LGTM. The failure here [2] should be fixed
> by
> [3] which merged last night.

Great

> 
> [2]
> https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v8/shard-lnl-7/igt@xe_exec_compute_mode@many-execqueues-userptr-rebind.html
> [3] https://patchwork.freedesktop.org/series/153197/ 
> 
> > I see a few things in ci-buglog that could be related, although I
> > don't
> > see any results in those to confirm:
> > https://gfx-ci.igk.intel.com/cibuglog-ng/issue/7175?query_key=a5707a4d3ae2ebb8c04ef6cea0ef747322df4ee1
> > https://gfx-ci.igk.intel.com/cibuglog-ng/issue/10412?query_key=402f2615406c4afa4814a29849b312a0c7b66e9c
> > https://gfx-ci.igk.intel.com/cibuglog-ng/issue/15004?query_key=e0ce601ae69ec76bbdf27293dc2919ba07357de3
> > 
> > Anyway, at least for this series, I think we can ignore that issue.
> > 
> 
> Again agree. I think if this CI run is clean, go ahead and merge.
> 
> With that:
> Reviewed-by: Matthew Brost <matthew.brost@intel.com>

Appreciate the feedback and review Matt!

-Stuart

> 
> > Thanks,
> > Stuart
> > 
> > > +}
> > > +
> > >  static bool tlb_invalidation_seqno_past(struct xe_gt *gt, int
> > > seqno)
> > >  {
> > >         int seqno_recv = READ_ONCE(gt-
> > > >tlb_invalidation.seqno_recv);
> > > diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h
> > > b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h
> > > index f7f0f2eaf4b5..3e4cff3922d6 100644
> > > --- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h
> > > +++ b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h
> > > @@ -16,6 +16,7 @@ struct xe_vm;
> > >  struct xe_vma;
> > >  
> > >  int xe_gt_tlb_invalidation_init_early(struct xe_gt *gt);
> > > +void xe_gt_tlb_invalidation_fini(struct xe_gt *gt);
> > >  
> > >  void xe_gt_tlb_invalidation_reset(struct xe_gt *gt);
> > >  int xe_gt_tlb_invalidation_ggtt(struct xe_gt *gt);
> > 


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH 2/9] drm/xe: Cancel pending TLB inval workers on teardown
  2025-08-25 18:23       ` Summers, Stuart
@ 2025-08-25 18:32         ` Summers, Stuart
  0 siblings, 0 replies; 23+ messages in thread
From: Summers, Stuart @ 2025-08-25 18:32 UTC (permalink / raw)
  To: Brost, Matthew; +Cc: intel-xe@lists.freedesktop.org, Kassabri, Farah

On Mon, 2025-08-25 at 18:23 +0000, Summers, Stuart wrote:
> On Mon, 2025-08-25 at 11:20 -0700, Matthew Brost wrote:
> > On Mon, Aug 25, 2025 at 12:06:44PM -0600, Summers, Stuart wrote:
> > > On Mon, 2025-08-25 at 17:57 +0000, Stuart Summers wrote:
> > > > Add a new _fini() routine on the GT TLB invalidation
> > > > side to handle this worker cleanup on driver teardown.
> > > > 
> > > > v2: Move the TLB teardown to the gt fini() routine called
> > > > during
> > > >     gt_init rather than in gt_alloc. This way the GT structure
> > > > stays
> > > >     alive for while we reset the TLB state.
> > > > 
> > > > Signed-off-by: Stuart Summers <stuart.summers@intel.com>
> > > > ---
> > > >  drivers/gpu/drm/xe/xe_gt.c                  |  2 ++
> > > >  drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c | 12 ++++++++++++
> > > >  drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h |  1 +
> > > >  3 files changed, 15 insertions(+)
> > > > 
> > > > diff --git a/drivers/gpu/drm/xe/xe_gt.c
> > > > b/drivers/gpu/drm/xe/xe_gt.c
> > > > index a3397f04abcc..178c4783bbda 100644
> > > > --- a/drivers/gpu/drm/xe/xe_gt.c
> > > > +++ b/drivers/gpu/drm/xe/xe_gt.c
> > > > @@ -603,6 +603,8 @@ static void xe_gt_fini(void *arg)
> > > >         struct xe_gt *gt = arg;
> > > >         int i;
> > > >  
> > > > +       xe_gt_tlb_invalidation_fini(gt);
> > > > +
> > > >         for (i = 0; i < XE_ENGINE_CLASS_MAX; ++i)
> > > >                 xe_hw_fence_irq_finish(&gt->fence_irq[i]);
> > > >  
> > > > diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
> > > > b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
> > > > index 75854b963d66..db00c5adead9 100644
> > > > --- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
> > > > +++ b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
> > > > @@ -188,6 +188,18 @@ void xe_gt_tlb_invalidation_reset(struct
> > > > xe_gt
> > > > *gt)
> > > >         mutex_unlock(&gt->tlb_invalidation.seqno_lock);
> > > >  }
> > > >  
> > > > +/**
> > > > + *
> > > > + * xe_gt_tlb_invalidation_fini - Clean up GT TLB invalidation
> > > > state
> > > > + *
> > > > + * Cancel pending fence workers and clean up any additional
> > > > + * GT TLB invalidation state.
> > > > + */
> > > > +void xe_gt_tlb_invalidation_fini(struct xe_gt *gt)
> > > > +{
> > > > +       xe_gt_tlb_invalidation_reset(gt);
> > > 
> > > I've been seeing an issue on fault injection, running in a tight
> > > while
> > 
> > I think fault injection case will be fixed by [1] whenever that
> > merges.
> 
> Oh excellent. I had seen the series here but hadn't thought to test
> as
> I was worried the injection error was related to mine. I'll check
> that
> out here shortly just to confirm. I do have the basic error injection
> change locally too that forces this, so might float that if it does.

Yeah no unfortunately it still reproduces. Basically this with the
equivalent function injection point in the driver:
xe_fault_injection --r inject-fault-probe-function-xe_device_sysfs_init

I'll keep poking around to see what I can come up with there.

Thanks,
Stuart

> 
> > 
> > [1] https://patchwork.freedesktop.org/series/152870/
> > 
> > For general safety though, I think calling tlb_invalidation_reset
> > is
> > a
> > good idea.
> > 
> > > loop, where occasionally we see that a couple of sysfs files
> > > weren't
> > > properly torn down on a prior driver instance, followed by TLB
> > > invalidation timeouts. Up until today I was only able to
> > > reproduce
> > > that
> > > with this series, so I wanted to be sure we weren't causing
> > > something
> > > here, particularly with this _reset() call (one of the reasons I
> > > had
> > > declined to include this in the original series). Today though,
> > > even
> > > without the series, I was able to reproduce that behavior (-
> > > EEXIST
> > > on
> > > sysfs create, followed by TLB inval timeout). So I don't think we
> > > should block this series on that debug.
> > > 
> > 
> > I agree. The prior CI run LGTM. The failure here [2] should be
> > fixed
> > by
> > [3] which merged last night.
> 
> Great
> 
> > 
> > [2]
> > https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v8/shard-lnl-7/igt@xe_exec_compute_mode@many-execqueues-userptr-rebind.html
> > [3] https://patchwork.freedesktop.org/series/153197/ 
> > 
> > > I see a few things in ci-buglog that could be related, although I
> > > don't
> > > see any results in those to confirm:
> > > https://gfx-ci.igk.intel.com/cibuglog-ng/issue/7175?query_key=a5707a4d3ae2ebb8c04ef6cea0ef747322df4ee1
> > > https://gfx-ci.igk.intel.com/cibuglog-ng/issue/10412?query_key=402f2615406c4afa4814a29849b312a0c7b66e9c
> > > https://gfx-ci.igk.intel.com/cibuglog-ng/issue/15004?query_key=e0ce601ae69ec76bbdf27293dc2919ba07357de3
> > > 
> > > Anyway, at least for this series, I think we can ignore that
> > > issue.
> > > 
> > 
> > Again agree. I think if this CI run is clean, go ahead and merge.
> > 
> > With that:
> > Reviewed-by: Matthew Brost <matthew.brost@intel.com>
> 
> Appreciate the feedback and review Matt!
> 
> -Stuart
> 
> > 
> > > Thanks,
> > > Stuart
> > > 
> > > > +}
> > > > +
> > > >  static bool tlb_invalidation_seqno_past(struct xe_gt *gt, int
> > > > seqno)
> > > >  {
> > > >         int seqno_recv = READ_ONCE(gt-
> > > > > tlb_invalidation.seqno_recv);
> > > > diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h
> > > > b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h
> > > > index f7f0f2eaf4b5..3e4cff3922d6 100644
> > > > --- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h
> > > > +++ b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h
> > > > @@ -16,6 +16,7 @@ struct xe_vm;
> > > >  struct xe_vma;
> > > >  
> > > >  int xe_gt_tlb_invalidation_init_early(struct xe_gt *gt);
> > > > +void xe_gt_tlb_invalidation_fini(struct xe_gt *gt);
> > > >  
> > > >  void xe_gt_tlb_invalidation_reset(struct xe_gt *gt);
> > > >  int xe_gt_tlb_invalidation_ggtt(struct xe_gt *gt);
> > > 
> 


^ permalink raw reply	[flat|nested] 23+ messages in thread

* ✗ CI.checkpatch: warning for Add TLB invalidation abstraction (rev9)
  2025-08-25 17:57 [PATCH 0/9] Add TLB invalidation abstraction Stuart Summers
                   ` (8 preceding siblings ...)
  2025-08-25 17:57 ` [PATCH 9/9] drm/xe: Split TLB invalidation code in frontend and backend Stuart Summers
@ 2025-08-25 19:09 ` Patchwork
  2025-08-25 19:10 ` ✓ CI.KUnit: success " Patchwork
                   ` (2 subsequent siblings)
  12 siblings, 0 replies; 23+ messages in thread
From: Patchwork @ 2025-08-25 19:09 UTC (permalink / raw)
  To: Summers, Stuart; +Cc: intel-xe

== Series Details ==

Series: Add TLB invalidation abstraction (rev9)
URL   : https://patchwork.freedesktop.org/series/152022/
State : warning

== Summary ==

+ KERNEL=/kernel
+ git clone https://gitlab.freedesktop.org/drm/maintainer-tools mt
Cloning into 'mt'...
warning: redirecting to https://gitlab.freedesktop.org/drm/maintainer-tools.git/
+ git -C mt rev-list -n1 origin/master
553439844b6500767ce8aef522cfe9fbb7ece541
+ cd /kernel
+ git config --global --add safe.directory /kernel
+ git log -n1
commit d1b344af6d143951a7a36c2c4876605d0e659d9e
Author: Matthew Brost <matthew.brost@intel.com>
Date:   Mon Aug 25 17:57:21 2025 +0000

    drm/xe: Split TLB invalidation code in frontend and backend
    
    The frontend exposes an API to the driver to send invalidations, handles
    sequence number assignment, synchronization (fences), and provides a
    timeout mechanism. The backend issues the actual invalidation to the
    hardware (or firmware).
    
    The new layering easily allows issuing TLB invalidations to different
    hardware or firmware interfaces.
    
    Normalize some naming while here too.
    
    Signed-off-by: Matthew Brost <matthew.brost@intel.com>
    Signed-off-by: Stuart Summers <stuart.summers@intel.com>
    Reviewed-by: Stuart Summers <stuart.summers@intel.com>
+ /mt/dim checkpatch a7c735c1739662dcc431bda50653821bff0d63fb drm-intel
890a6bbbae5b drm/xe: Move explicit CT lock in TLB invalidation sequence
72d981660ff5 drm/xe: Cancel pending TLB inval workers on teardown
d67ccf64f356 drm/xe: s/tlb_invalidation/tlb_inval
-:137: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#137: 
rename from drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c

total: 0 errors, 1 warnings, 0 checks, 1143 lines checked
4cf7b30db663 drm/xe: Add xe_tlb_inval structure
87ea3a175314 drm/xe: Add xe_gt_tlb_invalidation_done_handler
c50850989e68 drm/xe: Decouple TLB invalidations from GT
-:103: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#103: 
deleted file mode 100644

total: 0 errors, 1 warnings, 0 checks, 1234 lines checked
3ad5b30d0e7c drm/xe: Prep TLB invalidation fence before sending
208dcf0f629b drm/xe: Add helpers to send TLB invalidations
d1b344af6d14 drm/xe: Split TLB invalidation code in frontend and backend
-:61: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#61: 
new file mode 100644

-:712: CHECK:MACRO_ARG_REUSE: Macro argument reuse '__tlb_inval' - possible side-effects?
#712: FILE: drivers/gpu/drm/xe/xe_tlb_inval.c:240:
+#define xe_tlb_inval_issue(__tlb_inval, __fence, op, args...)	\
+({								\
+	int __ret;						\
+								\
+	xe_assert((__tlb_inval)->xe, (__tlb_inval)->ops);	\
+	xe_assert((__tlb_inval)->xe, (__fence));		\
+								\
+	mutex_lock(&(__tlb_inval)->seqno_lock); 		\
+	xe_tlb_inval_fence_prep((__fence));			\
+	__ret = op((__tlb_inval), (__fence)->seqno, ##args);	\
+	if (__ret < 0)						\
+		xe_tlb_inval_fence_signal_unlocked((__fence));	\
+	mutex_unlock(&(__tlb_inval)->seqno_lock);		\
+								\
+	__ret == -ECANCELED ? 0 : __ret;			\
+})

-:712: CHECK:MACRO_ARG_REUSE: Macro argument reuse '__fence' - possible side-effects?
#712: FILE: drivers/gpu/drm/xe/xe_tlb_inval.c:240:
+#define xe_tlb_inval_issue(__tlb_inval, __fence, op, args...)	\
+({								\
+	int __ret;						\
+								\
+	xe_assert((__tlb_inval)->xe, (__tlb_inval)->ops);	\
+	xe_assert((__tlb_inval)->xe, (__fence));		\
+								\
+	mutex_lock(&(__tlb_inval)->seqno_lock); 		\
+	xe_tlb_inval_fence_prep((__fence));			\
+	__ret = op((__tlb_inval), (__fence)->seqno, ##args);	\
+	if (__ret < 0)						\
+		xe_tlb_inval_fence_signal_unlocked((__fence));	\
+	mutex_unlock(&(__tlb_inval)->seqno_lock);		\
+								\
+	__ret == -ECANCELED ? 0 : __ret;			\
+})

-:719: WARNING:SPACE_BEFORE_TAB: please, no space before tabs
#719: FILE: drivers/gpu/drm/xe/xe_tlb_inval.c:247:
+^Imutex_lock(&(__tlb_inval)->seqno_lock); ^I^I\$

total: 0 errors, 2 warnings, 2 checks, 1151 lines checked



^ permalink raw reply	[flat|nested] 23+ messages in thread

* ✓ CI.KUnit: success for Add TLB invalidation abstraction (rev9)
  2025-08-25 17:57 [PATCH 0/9] Add TLB invalidation abstraction Stuart Summers
                   ` (9 preceding siblings ...)
  2025-08-25 19:09 ` ✗ CI.checkpatch: warning for Add TLB invalidation abstraction (rev9) Patchwork
@ 2025-08-25 19:10 ` Patchwork
  2025-08-25 20:09 ` ✓ Xe.CI.BAT: " Patchwork
  2025-08-26  6:18 ` ✓ Xe.CI.Full: " Patchwork
  12 siblings, 0 replies; 23+ messages in thread
From: Patchwork @ 2025-08-25 19:10 UTC (permalink / raw)
  To: Summers, Stuart; +Cc: intel-xe

== Series Details ==

Series: Add TLB invalidation abstraction (rev9)
URL   : https://patchwork.freedesktop.org/series/152022/
State : success

== Summary ==

+ trap cleanup EXIT
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/xe/.kunitconfig
[19:09:38] Configuring KUnit Kernel ...
Generating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[19:09:42] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[19:10:11] Starting KUnit Kernel (1/1)...
[19:10:11] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[19:10:11] ================== guc_buf (11 subtests) ===================
[19:10:11] [PASSED] test_smallest
[19:10:11] [PASSED] test_largest
[19:10:11] [PASSED] test_granular
[19:10:11] [PASSED] test_unique
[19:10:11] [PASSED] test_overlap
[19:10:11] [PASSED] test_reusable
[19:10:11] [PASSED] test_too_big
[19:10:11] [PASSED] test_flush
[19:10:11] [PASSED] test_lookup
[19:10:11] [PASSED] test_data
[19:10:11] [PASSED] test_class
[19:10:11] ===================== [PASSED] guc_buf =====================
[19:10:11] =================== guc_dbm (7 subtests) ===================
[19:10:11] [PASSED] test_empty
[19:10:11] [PASSED] test_default
[19:10:11] ======================== test_size  ========================
[19:10:11] [PASSED] 4
[19:10:11] [PASSED] 8
[19:10:11] [PASSED] 32
[19:10:11] [PASSED] 256
[19:10:11] ==================== [PASSED] test_size ====================
[19:10:11] ======================= test_reuse  ========================
[19:10:11] [PASSED] 4
[19:10:11] [PASSED] 8
[19:10:11] [PASSED] 32
[19:10:11] [PASSED] 256
[19:10:11] =================== [PASSED] test_reuse ====================
[19:10:11] =================== test_range_overlap  ====================
[19:10:11] [PASSED] 4
[19:10:11] [PASSED] 8
[19:10:11] [PASSED] 32
[19:10:11] [PASSED] 256
[19:10:11] =============== [PASSED] test_range_overlap ================
[19:10:11] =================== test_range_compact  ====================
[19:10:11] [PASSED] 4
[19:10:11] [PASSED] 8
[19:10:11] [PASSED] 32
[19:10:11] [PASSED] 256
[19:10:11] =============== [PASSED] test_range_compact ================
[19:10:11] ==================== test_range_spare  =====================
[19:10:11] [PASSED] 4
[19:10:11] [PASSED] 8
[19:10:11] [PASSED] 32
[19:10:11] [PASSED] 256
[19:10:11] ================ [PASSED] test_range_spare =================
[19:10:11] ===================== [PASSED] guc_dbm =====================
[19:10:11] =================== guc_idm (6 subtests) ===================
[19:10:11] [PASSED] bad_init
[19:10:11] [PASSED] no_init
[19:10:11] [PASSED] init_fini
[19:10:11] [PASSED] check_used
[19:10:11] [PASSED] check_quota
[19:10:11] [PASSED] check_all
[19:10:11] ===================== [PASSED] guc_idm =====================
[19:10:11] ================== no_relay (3 subtests) ===================
[19:10:11] [PASSED] xe_drops_guc2pf_if_not_ready
[19:10:11] [PASSED] xe_drops_guc2vf_if_not_ready
[19:10:11] [PASSED] xe_rejects_send_if_not_ready
[19:10:11] ==================== [PASSED] no_relay =====================
[19:10:11] ================== pf_relay (14 subtests) ==================
[19:10:11] [PASSED] pf_rejects_guc2pf_too_short
[19:10:11] [PASSED] pf_rejects_guc2pf_too_long
[19:10:11] [PASSED] pf_rejects_guc2pf_no_payload
[19:10:11] [PASSED] pf_fails_no_payload
[19:10:11] [PASSED] pf_fails_bad_origin
[19:10:11] [PASSED] pf_fails_bad_type
[19:10:11] [PASSED] pf_txn_reports_error
[19:10:11] [PASSED] pf_txn_sends_pf2guc
[19:10:11] [PASSED] pf_sends_pf2guc
[19:10:11] [SKIPPED] pf_loopback_nop
[19:10:11] [SKIPPED] pf_loopback_echo
[19:10:11] [SKIPPED] pf_loopback_fail
[19:10:11] [SKIPPED] pf_loopback_busy
[19:10:11] [SKIPPED] pf_loopback_retry
[19:10:11] ==================== [PASSED] pf_relay =====================
[19:10:11] ================== vf_relay (3 subtests) ===================
[19:10:11] [PASSED] vf_rejects_guc2vf_too_short
[19:10:11] [PASSED] vf_rejects_guc2vf_too_long
[19:10:11] [PASSED] vf_rejects_guc2vf_no_payload
[19:10:11] ==================== [PASSED] vf_relay =====================
[19:10:11] ===================== lmtt (1 subtest) =====================
[19:10:11] ======================== test_ops  =========================
[19:10:11] [PASSED] 2-level
[19:10:11] [PASSED] multi-level
[19:10:11] ==================== [PASSED] test_ops =====================
[19:10:11] ====================== [PASSED] lmtt =======================
[19:10:11] ================= pf_service (11 subtests) =================
[19:10:11] [PASSED] pf_negotiate_any
[19:10:11] [PASSED] pf_negotiate_base_match
[19:10:11] [PASSED] pf_negotiate_base_newer
[19:10:11] [PASSED] pf_negotiate_base_next
[19:10:11] [SKIPPED] pf_negotiate_base_older
[19:10:11] [PASSED] pf_negotiate_base_prev
[19:10:11] [PASSED] pf_negotiate_latest_match
[19:10:11] [PASSED] pf_negotiate_latest_newer
[19:10:11] [PASSED] pf_negotiate_latest_next
[19:10:11] [SKIPPED] pf_negotiate_latest_older
[19:10:11] [SKIPPED] pf_negotiate_latest_prev
[19:10:11] =================== [PASSED] pf_service ====================
[19:10:11] =================== xe_mocs (2 subtests) ===================
[19:10:11] ================ xe_live_mocs_kernel_kunit  ================
[19:10:11] =========== [SKIPPED] xe_live_mocs_kernel_kunit ============
[19:10:11] ================ xe_live_mocs_reset_kunit  =================
[19:10:11] ============ [SKIPPED] xe_live_mocs_reset_kunit ============
[19:10:11] ==================== [SKIPPED] xe_mocs =====================
[19:10:11] ================= xe_migrate (2 subtests) ==================
[19:10:11] ================= xe_migrate_sanity_kunit  =================
[19:10:11] ============ [SKIPPED] xe_migrate_sanity_kunit =============
[19:10:11] ================== xe_validate_ccs_kunit  ==================
[19:10:11] ============= [SKIPPED] xe_validate_ccs_kunit ==============
[19:10:11] =================== [SKIPPED] xe_migrate ===================
[19:10:11] ================== xe_dma_buf (1 subtest) ==================
[19:10:11] ==================== xe_dma_buf_kunit  =====================
[19:10:11] ================ [SKIPPED] xe_dma_buf_kunit ================
[19:10:11] =================== [SKIPPED] xe_dma_buf ===================
[19:10:11] ================= xe_bo_shrink (1 subtest) =================
[19:10:11] =================== xe_bo_shrink_kunit  ====================
[19:10:11] =============== [SKIPPED] xe_bo_shrink_kunit ===============
[19:10:11] ================== [SKIPPED] xe_bo_shrink ==================
[19:10:11] ==================== xe_bo (2 subtests) ====================
[19:10:11] ================== xe_ccs_migrate_kunit  ===================
[19:10:11] ============== [SKIPPED] xe_ccs_migrate_kunit ==============
[19:10:11] ==================== xe_bo_evict_kunit  ====================
[19:10:11] =============== [SKIPPED] xe_bo_evict_kunit ================
[19:10:11] ===================== [SKIPPED] xe_bo ======================
[19:10:11] ==================== args (11 subtests) ====================
[19:10:11] [PASSED] count_args_test
[19:10:11] [PASSED] call_args_example
[19:10:11] [PASSED] call_args_test
[19:10:11] [PASSED] drop_first_arg_example
[19:10:11] [PASSED] drop_first_arg_test
[19:10:11] [PASSED] first_arg_example
[19:10:11] [PASSED] first_arg_test
[19:10:11] [PASSED] last_arg_example
[19:10:11] [PASSED] last_arg_test
[19:10:11] [PASSED] pick_arg_example
[19:10:11] [PASSED] sep_comma_example
[19:10:11] ====================== [PASSED] args =======================
[19:10:11] =================== xe_pci (3 subtests) ====================
[19:10:11] ==================== check_graphics_ip  ====================
[19:10:11] [PASSED] 12.70 Xe_LPG
[19:10:11] [PASSED] 12.71 Xe_LPG
[19:10:11] [PASSED] 12.74 Xe_LPG+
[19:10:11] [PASSED] 20.01 Xe2_HPG
[19:10:11] [PASSED] 20.02 Xe2_HPG
[19:10:11] [PASSED] 20.04 Xe2_LPG
[19:10:11] [PASSED] 30.00 Xe3_LPG
[19:10:11] [PASSED] 30.01 Xe3_LPG
[19:10:11] [PASSED] 30.03 Xe3_LPG
[19:10:11] ================ [PASSED] check_graphics_ip ================
[19:10:11] ===================== check_media_ip  ======================
[19:10:11] [PASSED] 13.00 Xe_LPM+
[19:10:11] [PASSED] 13.01 Xe2_HPM
[19:10:11] [PASSED] 20.00 Xe2_LPM
[19:10:11] [PASSED] 30.00 Xe3_LPM
[19:10:11] [PASSED] 30.02 Xe3_LPM
[19:10:11] ================= [PASSED] check_media_ip ==================
[19:10:11] ================= check_platform_gt_count  =================
[19:10:11] [PASSED] 0x9A60 (TIGERLAKE)
[19:10:11] [PASSED] 0x9A68 (TIGERLAKE)
[19:10:11] [PASSED] 0x9A70 (TIGERLAKE)
[19:10:11] [PASSED] 0x9A40 (TIGERLAKE)
[19:10:11] [PASSED] 0x9A49 (TIGERLAKE)
[19:10:11] [PASSED] 0x9A59 (TIGERLAKE)
[19:10:11] [PASSED] 0x9A78 (TIGERLAKE)
[19:10:11] [PASSED] 0x9AC0 (TIGERLAKE)
[19:10:11] [PASSED] 0x9AC9 (TIGERLAKE)
[19:10:11] [PASSED] 0x9AD9 (TIGERLAKE)
[19:10:11] [PASSED] 0x9AF8 (TIGERLAKE)
[19:10:11] [PASSED] 0x4C80 (ROCKETLAKE)
[19:10:11] [PASSED] 0x4C8A (ROCKETLAKE)
[19:10:11] [PASSED] 0x4C8B (ROCKETLAKE)
[19:10:11] [PASSED] 0x4C8C (ROCKETLAKE)
[19:10:11] [PASSED] 0x4C90 (ROCKETLAKE)
[19:10:11] [PASSED] 0x4C9A (ROCKETLAKE)
[19:10:11] [PASSED] 0x4680 (ALDERLAKE_S)
[19:10:11] [PASSED] 0x4682 (ALDERLAKE_S)
[19:10:11] [PASSED] 0x4688 (ALDERLAKE_S)
[19:10:11] [PASSED] 0x468A (ALDERLAKE_S)
[19:10:11] [PASSED] 0x468B (ALDERLAKE_S)
[19:10:11] [PASSED] 0x4690 (ALDERLAKE_S)
[19:10:11] [PASSED] 0x4692 (ALDERLAKE_S)
[19:10:11] [PASSED] 0x4693 (ALDERLAKE_S)
[19:10:11] [PASSED] 0x46A0 (ALDERLAKE_P)
[19:10:11] [PASSED] 0x46A1 (ALDERLAKE_P)
[19:10:11] [PASSED] 0x46A2 (ALDERLAKE_P)
[19:10:11] [PASSED] 0x46A3 (ALDERLAKE_P)
[19:10:11] [PASSED] 0x46A6 (ALDERLAKE_P)
[19:10:11] [PASSED] 0x46A8 (ALDERLAKE_P)
[19:10:11] [PASSED] 0x46AA (ALDERLAKE_P)
[19:10:11] [PASSED] 0x462A (ALDERLAKE_P)
[19:10:11] [PASSED] 0x4626 (ALDERLAKE_P)
[19:10:11] [PASSED] 0x4628 (ALDERLAKE_P)
[19:10:11] [PASSED] 0x46B0 (ALDERLAKE_P)
[19:10:11] [PASSED] 0x46B1 (ALDERLAKE_P)
[19:10:11] [PASSED] 0x46B2 (ALDERLAKE_P)
[19:10:11] [PASSED] 0x46B3 (ALDERLAKE_P)
[19:10:11] [PASSED] 0x46C0 (ALDERLAKE_P)
[19:10:11] [PASSED] 0x46C1 (ALDERLAKE_P)
[19:10:11] [PASSED] 0x46C2 (ALDERLAKE_P)
[19:10:11] [PASSED] 0x46C3 (ALDERLAKE_P)
[19:10:11] [PASSED] 0x46D0 (ALDERLAKE_N)
[19:10:11] [PASSED] 0x46D1 (ALDERLAKE_N)
[19:10:11] [PASSED] 0x46D2 (ALDERLAKE_N)
[19:10:11] [PASSED] 0x46D3 (ALDERLAKE_N)
[19:10:11] [PASSED] 0x46D4 (ALDERLAKE_N)
[19:10:11] [PASSED] 0xA721 (ALDERLAKE_P)
[19:10:11] [PASSED] 0xA7A1 (ALDERLAKE_P)
[19:10:11] [PASSED] 0xA7A9 (ALDERLAKE_P)
[19:10:11] [PASSED] 0xA7AC (ALDERLAKE_P)
[19:10:11] [PASSED] 0xA7AD (ALDERLAKE_P)
[19:10:11] [PASSED] 0xA720 (ALDERLAKE_P)
[19:10:11] [PASSED] 0xA7A0 (ALDERLAKE_P)
[19:10:11] [PASSED] 0xA7A8 (ALDERLAKE_P)
[19:10:11] [PASSED] 0xA7AA (ALDERLAKE_P)
[19:10:11] [PASSED] 0xA7AB (ALDERLAKE_P)
[19:10:11] [PASSED] 0xA780 (ALDERLAKE_S)
[19:10:11] [PASSED] 0xA781 (ALDERLAKE_S)
[19:10:11] [PASSED] 0xA782 (ALDERLAKE_S)
[19:10:11] [PASSED] 0xA783 (ALDERLAKE_S)
[19:10:11] [PASSED] 0xA788 (ALDERLAKE_S)
[19:10:11] [PASSED] 0xA789 (ALDERLAKE_S)
[19:10:11] [PASSED] 0xA78A (ALDERLAKE_S)
[19:10:11] [PASSED] 0xA78B (ALDERLAKE_S)
[19:10:11] [PASSED] 0x4905 (DG1)
[19:10:11] [PASSED] 0x4906 (DG1)
[19:10:11] [PASSED] 0x4907 (DG1)
[19:10:11] [PASSED] 0x4908 (DG1)
[19:10:11] [PASSED] 0x4909 (DG1)
[19:10:11] [PASSED] 0x56C0 (DG2)
[19:10:11] [PASSED] 0x56C2 (DG2)
[19:10:11] [PASSED] 0x56C1 (DG2)
[19:10:11] [PASSED] 0x7D51 (METEORLAKE)
[19:10:11] [PASSED] 0x7DD1 (METEORLAKE)
[19:10:11] [PASSED] 0x7D41 (METEORLAKE)
[19:10:11] [PASSED] 0x7D67 (METEORLAKE)
[19:10:11] [PASSED] 0xB640 (METEORLAKE)
[19:10:11] [PASSED] 0x56A0 (DG2)
[19:10:11] [PASSED] 0x56A1 (DG2)
[19:10:11] [PASSED] 0x56A2 (DG2)
[19:10:11] [PASSED] 0x56BE (DG2)
[19:10:11] [PASSED] 0x56BF (DG2)
[19:10:11] [PASSED] 0x5690 (DG2)
[19:10:11] [PASSED] 0x5691 (DG2)
[19:10:11] [PASSED] 0x5692 (DG2)
[19:10:11] [PASSED] 0x56A5 (DG2)
[19:10:11] [PASSED] 0x56A6 (DG2)
[19:10:11] [PASSED] 0x56B0 (DG2)
[19:10:11] [PASSED] 0x56B1 (DG2)
[19:10:11] [PASSED] 0x56BA (DG2)
[19:10:11] [PASSED] 0x56BB (DG2)
[19:10:11] [PASSED] 0x56BC (DG2)
[19:10:11] [PASSED] 0x56BD (DG2)
[19:10:11] [PASSED] 0x5693 (DG2)
[19:10:11] [PASSED] 0x5694 (DG2)
[19:10:11] [PASSED] 0x5695 (DG2)
[19:10:11] [PASSED] 0x56A3 (DG2)
[19:10:11] [PASSED] 0x56A4 (DG2)
[19:10:11] [PASSED] 0x56B2 (DG2)
[19:10:11] [PASSED] 0x56B3 (DG2)
[19:10:11] [PASSED] 0x5696 (DG2)
[19:10:11] [PASSED] 0x5697 (DG2)
[19:10:11] [PASSED] 0xB69 (PVC)
[19:10:11] [PASSED] 0xB6E (PVC)
[19:10:11] [PASSED] 0xBD4 (PVC)
[19:10:11] [PASSED] 0xBD5 (PVC)
[19:10:11] [PASSED] 0xBD6 (PVC)
[19:10:11] [PASSED] 0xBD7 (PVC)
[19:10:11] [PASSED] 0xBD8 (PVC)
[19:10:11] [PASSED] 0xBD9 (PVC)
[19:10:11] [PASSED] 0xBDA (PVC)
[19:10:11] [PASSED] 0xBDB (PVC)
[19:10:11] [PASSED] 0xBE0 (PVC)
[19:10:11] [PASSED] 0xBE1 (PVC)
[19:10:11] [PASSED] 0xBE5 (PVC)
[19:10:11] [PASSED] 0x7D40 (METEORLAKE)
[19:10:11] [PASSED] 0x7D45 (METEORLAKE)
[19:10:11] [PASSED] 0x7D55 (METEORLAKE)
[19:10:11] [PASSED] 0x7D60 (METEORLAKE)
[19:10:11] [PASSED] 0x7DD5 (METEORLAKE)
[19:10:11] [PASSED] 0x6420 (LUNARLAKE)
[19:10:11] [PASSED] 0x64A0 (LUNARLAKE)
[19:10:11] [PASSED] 0x64B0 (LUNARLAKE)
[19:10:11] [PASSED] 0xE202 (BATTLEMAGE)
[19:10:11] [PASSED] 0xE209 (BATTLEMAGE)
[19:10:11] [PASSED] 0xE20B (BATTLEMAGE)
[19:10:11] [PASSED] 0xE20C (BATTLEMAGE)
[19:10:11] [PASSED] 0xE20D (BATTLEMAGE)
[19:10:11] [PASSED] 0xE210 (BATTLEMAGE)
[19:10:11] [PASSED] 0xE211 (BATTLEMAGE)
[19:10:11] [PASSED] 0xE212 (BATTLEMAGE)
[19:10:11] [PASSED] 0xE216 (BATTLEMAGE)
[19:10:11] [PASSED] 0xE220 (BATTLEMAGE)
[19:10:11] [PASSED] 0xE221 (BATTLEMAGE)
[19:10:11] [PASSED] 0xE222 (BATTLEMAGE)
[19:10:11] [PASSED] 0xE223 (BATTLEMAGE)
[19:10:11] [PASSED] 0xB080 (PANTHERLAKE)
[19:10:11] [PASSED] 0xB081 (PANTHERLAKE)
[19:10:11] [PASSED] 0xB082 (PANTHERLAKE)
[19:10:11] [PASSED] 0xB083 (PANTHERLAKE)
[19:10:11] [PASSED] 0xB084 (PANTHERLAKE)
[19:10:11] [PASSED] 0xB085 (PANTHERLAKE)
[19:10:11] [PASSED] 0xB086 (PANTHERLAKE)
[19:10:11] [PASSED] 0xB087 (PANTHERLAKE)
[19:10:11] [PASSED] 0xB08F (PANTHERLAKE)
[19:10:11] [PASSED] 0xB090 (PANTHERLAKE)
[19:10:11] [PASSED] 0xB0A0 (PANTHERLAKE)
[19:10:11] [PASSED] 0xB0B0 (PANTHERLAKE)
[19:10:11] [PASSED] 0xFD80 (PANTHERLAKE)
[19:10:11] [PASSED] 0xFD81 (PANTHERLAKE)
[19:10:11] ============= [PASSED] check_platform_gt_count =============
[19:10:11] ===================== [PASSED] xe_pci ======================
[19:10:11] =================== xe_rtp (2 subtests) ====================
[19:10:11] =============== xe_rtp_process_to_sr_tests  ================
[19:10:11] [PASSED] coalesce-same-reg
[19:10:11] [PASSED] no-match-no-add
[19:10:11] [PASSED] match-or
[19:10:11] [PASSED] match-or-xfail
[19:10:11] [PASSED] no-match-no-add-multiple-rules
[19:10:11] [PASSED] two-regs-two-entries
[19:10:11] [PASSED] clr-one-set-other
[19:10:11] [PASSED] set-field
[19:10:11] [PASSED] conflict-duplicate
[19:10:11] [PASSED] conflict-not-disjoint
[19:10:11] [PASSED] conflict-reg-type
[19:10:11] =========== [PASSED] xe_rtp_process_to_sr_tests ============
[19:10:11] ================== xe_rtp_process_tests  ===================
[19:10:11] [PASSED] active1
[19:10:11] [PASSED] active2
[19:10:11] [PASSED] active-inactive
[19:10:11] [PASSED] inactive-active
[19:10:11] [PASSED] inactive-1st_or_active-inactive
[19:10:11] [PASSED] inactive-2nd_or_active-inactive
[19:10:11] [PASSED] inactive-last_or_active-inactive
[19:10:11] [PASSED] inactive-no_or_active-inactive
[19:10:11] ============== [PASSED] xe_rtp_process_tests ===============
[19:10:11] ===================== [PASSED] xe_rtp ======================
[19:10:11] ==================== xe_wa (1 subtest) =====================
[19:10:11] ======================== xe_wa_gt  =========================
[19:10:11] [PASSED] TIGERLAKE (B0)
[19:10:11] [PASSED] DG1 (A0)
[19:10:11] [PASSED] DG1 (B0)
[19:10:11] [PASSED] ALDERLAKE_S (A0)
[19:10:11] [PASSED] ALDERLAKE_S (B0)
[19:10:11] [PASSED] ALDERLAKE_S (C0)
[19:10:11] [PASSED] ALDERLAKE_S (D0)
[19:10:11] [PASSED] ALDERLAKE_P (A0)
[19:10:11] [PASSED] ALDERLAKE_P (B0)
[19:10:11] [PASSED] ALDERLAKE_P (C0)
[19:10:11] [PASSED] ALDERLAKE_S_RPLS (D0)
[19:10:11] [PASSED] ALDERLAKE_P_RPLU (E0)
[19:10:11] [PASSED] DG2_G10 (C0)
[19:10:11] [PASSED] DG2_G11 (B1)
[19:10:11] [PASSED] DG2_G12 (A1)
[19:10:11] [PASSED] METEORLAKE (g:A0, m:A0)
[19:10:11] [PASSED] METEORLAKE (g:A0, m:A0)
[19:10:11] [PASSED] METEORLAKE (g:A0, m:A0)
[19:10:11] [PASSED] LUNARLAKE (g:A0, m:A0)
[19:10:11] [PASSED] LUNARLAKE (g:B0, m:A0)
stty: 'standard input': Inappropriate ioctl for device
[19:10:11] [PASSED] BATTLEMAGE (g:A0, m:A1)
[19:10:11] [PASSED] PANTHERLAKE (g:A0, m:A0)
[19:10:11] ==================== [PASSED] xe_wa_gt =====================
[19:10:11] ====================== [PASSED] xe_wa ======================
[19:10:11] ============================================================
[19:10:11] Testing complete. Ran 298 tests: passed: 282, skipped: 16
[19:10:11] Elapsed time: 33.458s total, 4.281s configuring, 28.810s building, 0.327s running

+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/tests/.kunitconfig
[19:10:11] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[19:10:13] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[19:10:36] Starting KUnit Kernel (1/1)...
[19:10:36] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[19:10:36] == drm_test_atomic_get_connector_for_encoder (1 subtest) ===
[19:10:36] [PASSED] drm_test_drm_atomic_get_connector_for_encoder
[19:10:36] ==== [PASSED] drm_test_atomic_get_connector_for_encoder ====
[19:10:36] =========== drm_validate_clone_mode (2 subtests) ===========
[19:10:36] ============== drm_test_check_in_clone_mode  ===============
[19:10:36] [PASSED] in_clone_mode
[19:10:36] [PASSED] not_in_clone_mode
[19:10:36] ========== [PASSED] drm_test_check_in_clone_mode ===========
[19:10:36] =============== drm_test_check_valid_clones  ===============
[19:10:36] [PASSED] not_in_clone_mode
[19:10:36] [PASSED] valid_clone
[19:10:36] [PASSED] invalid_clone
[19:10:36] =========== [PASSED] drm_test_check_valid_clones ===========
[19:10:36] ============= [PASSED] drm_validate_clone_mode =============
[19:10:36] ============= drm_validate_modeset (1 subtest) =============
[19:10:36] [PASSED] drm_test_check_connector_changed_modeset
[19:10:36] ============== [PASSED] drm_validate_modeset ===============
[19:10:36] ====== drm_test_bridge_get_current_state (2 subtests) ======
[19:10:36] [PASSED] drm_test_drm_bridge_get_current_state_atomic
[19:10:36] [PASSED] drm_test_drm_bridge_get_current_state_legacy
[19:10:36] ======== [PASSED] drm_test_bridge_get_current_state ========
[19:10:36] ====== drm_test_bridge_helper_reset_crtc (3 subtests) ======
[19:10:36] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic
[19:10:36] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic_disabled
[19:10:36] [PASSED] drm_test_drm_bridge_helper_reset_crtc_legacy
[19:10:36] ======== [PASSED] drm_test_bridge_helper_reset_crtc ========
[19:10:36] ============== drm_bridge_alloc (2 subtests) ===============
[19:10:36] [PASSED] drm_test_drm_bridge_alloc_basic
[19:10:36] [PASSED] drm_test_drm_bridge_alloc_get_put
[19:10:36] ================ [PASSED] drm_bridge_alloc =================
[19:10:36] ================== drm_buddy (7 subtests) ==================
[19:10:36] [PASSED] drm_test_buddy_alloc_limit
[19:10:36] [PASSED] drm_test_buddy_alloc_optimistic
[19:10:36] [PASSED] drm_test_buddy_alloc_pessimistic
[19:10:36] [PASSED] drm_test_buddy_alloc_pathological
[19:10:36] [PASSED] drm_test_buddy_alloc_contiguous
[19:10:36] [PASSED] drm_test_buddy_alloc_clear
[19:10:36] [PASSED] drm_test_buddy_alloc_range_bias
[19:10:36] ==================== [PASSED] drm_buddy ====================
[19:10:36] ============= drm_cmdline_parser (40 subtests) =============
[19:10:36] [PASSED] drm_test_cmdline_force_d_only
[19:10:36] [PASSED] drm_test_cmdline_force_D_only_dvi
[19:10:36] [PASSED] drm_test_cmdline_force_D_only_hdmi
[19:10:36] [PASSED] drm_test_cmdline_force_D_only_not_digital
[19:10:36] [PASSED] drm_test_cmdline_force_e_only
[19:10:36] [PASSED] drm_test_cmdline_res
[19:10:36] [PASSED] drm_test_cmdline_res_vesa
[19:10:36] [PASSED] drm_test_cmdline_res_vesa_rblank
[19:10:36] [PASSED] drm_test_cmdline_res_rblank
[19:10:36] [PASSED] drm_test_cmdline_res_bpp
[19:10:36] [PASSED] drm_test_cmdline_res_refresh
[19:10:36] [PASSED] drm_test_cmdline_res_bpp_refresh
[19:10:36] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced
[19:10:36] [PASSED] drm_test_cmdline_res_bpp_refresh_margins
[19:10:36] [PASSED] drm_test_cmdline_res_bpp_refresh_force_off
[19:10:36] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on
[19:10:36] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_analog
[19:10:36] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_digital
[19:10:36] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced_margins_force_on
[19:10:36] [PASSED] drm_test_cmdline_res_margins_force_on
[19:10:36] [PASSED] drm_test_cmdline_res_vesa_margins
[19:10:36] [PASSED] drm_test_cmdline_name
[19:10:36] [PASSED] drm_test_cmdline_name_bpp
[19:10:36] [PASSED] drm_test_cmdline_name_option
[19:10:36] [PASSED] drm_test_cmdline_name_bpp_option
[19:10:36] [PASSED] drm_test_cmdline_rotate_0
[19:10:36] [PASSED] drm_test_cmdline_rotate_90
[19:10:36] [PASSED] drm_test_cmdline_rotate_180
[19:10:36] [PASSED] drm_test_cmdline_rotate_270
[19:10:36] [PASSED] drm_test_cmdline_hmirror
[19:10:36] [PASSED] drm_test_cmdline_vmirror
[19:10:36] [PASSED] drm_test_cmdline_margin_options
[19:10:36] [PASSED] drm_test_cmdline_multiple_options
[19:10:36] [PASSED] drm_test_cmdline_bpp_extra_and_option
[19:10:36] [PASSED] drm_test_cmdline_extra_and_option
[19:10:36] [PASSED] drm_test_cmdline_freestanding_options
[19:10:36] [PASSED] drm_test_cmdline_freestanding_force_e_and_options
[19:10:36] [PASSED] drm_test_cmdline_panel_orientation
[19:10:36] ================ drm_test_cmdline_invalid  =================
[19:10:36] [PASSED] margin_only
[19:10:36] [PASSED] interlace_only
[19:10:36] [PASSED] res_missing_x
[19:10:36] [PASSED] res_missing_y
[19:10:36] [PASSED] res_bad_y
[19:10:36] [PASSED] res_missing_y_bpp
[19:10:36] [PASSED] res_bad_bpp
[19:10:36] [PASSED] res_bad_refresh
[19:10:36] [PASSED] res_bpp_refresh_force_on_off
[19:10:36] [PASSED] res_invalid_mode
[19:10:36] [PASSED] res_bpp_wrong_place_mode
[19:10:36] [PASSED] name_bpp_refresh
[19:10:36] [PASSED] name_refresh
[19:10:36] [PASSED] name_refresh_wrong_mode
[19:10:36] [PASSED] name_refresh_invalid_mode
[19:10:36] [PASSED] rotate_multiple
[19:10:36] [PASSED] rotate_invalid_val
[19:10:36] [PASSED] rotate_truncated
[19:10:36] [PASSED] invalid_option
[19:10:36] [PASSED] invalid_tv_option
[19:10:36] [PASSED] truncated_tv_option
[19:10:36] ============ [PASSED] drm_test_cmdline_invalid =============
[19:10:36] =============== drm_test_cmdline_tv_options  ===============
[19:10:36] [PASSED] NTSC
[19:10:36] [PASSED] NTSC_443
[19:10:36] [PASSED] NTSC_J
[19:10:36] [PASSED] PAL
[19:10:36] [PASSED] PAL_M
[19:10:36] [PASSED] PAL_N
[19:10:36] [PASSED] SECAM
[19:10:36] [PASSED] MONO_525
[19:10:36] [PASSED] MONO_625
[19:10:36] =========== [PASSED] drm_test_cmdline_tv_options ===========
[19:10:36] =============== [PASSED] drm_cmdline_parser ================
[19:10:36] ========== drmm_connector_hdmi_init (20 subtests) ==========
[19:10:36] [PASSED] drm_test_connector_hdmi_init_valid
[19:10:36] [PASSED] drm_test_connector_hdmi_init_bpc_8
[19:10:36] [PASSED] drm_test_connector_hdmi_init_bpc_10
[19:10:36] [PASSED] drm_test_connector_hdmi_init_bpc_12
[19:10:36] [PASSED] drm_test_connector_hdmi_init_bpc_invalid
[19:10:36] [PASSED] drm_test_connector_hdmi_init_bpc_null
[19:10:36] [PASSED] drm_test_connector_hdmi_init_formats_empty
[19:10:36] [PASSED] drm_test_connector_hdmi_init_formats_no_rgb
[19:10:36] === drm_test_connector_hdmi_init_formats_yuv420_allowed  ===
[19:10:36] [PASSED] supported_formats=0x9 yuv420_allowed=1
[19:10:36] [PASSED] supported_formats=0x9 yuv420_allowed=0
[19:10:36] [PASSED] supported_formats=0x3 yuv420_allowed=1
[19:10:36] [PASSED] supported_formats=0x3 yuv420_allowed=0
[19:10:36] === [PASSED] drm_test_connector_hdmi_init_formats_yuv420_allowed ===
[19:10:36] [PASSED] drm_test_connector_hdmi_init_null_ddc
[19:10:36] [PASSED] drm_test_connector_hdmi_init_null_product
[19:10:36] [PASSED] drm_test_connector_hdmi_init_null_vendor
[19:10:36] [PASSED] drm_test_connector_hdmi_init_product_length_exact
[19:10:36] [PASSED] drm_test_connector_hdmi_init_product_length_too_long
[19:10:36] [PASSED] drm_test_connector_hdmi_init_product_valid
[19:10:36] [PASSED] drm_test_connector_hdmi_init_vendor_length_exact
[19:10:36] [PASSED] drm_test_connector_hdmi_init_vendor_length_too_long
[19:10:36] [PASSED] drm_test_connector_hdmi_init_vendor_valid
[19:10:36] ========= drm_test_connector_hdmi_init_type_valid  =========
[19:10:36] [PASSED] HDMI-A
[19:10:36] [PASSED] HDMI-B
[19:10:36] ===== [PASSED] drm_test_connector_hdmi_init_type_valid =====
[19:10:36] ======== drm_test_connector_hdmi_init_type_invalid  ========
[19:10:36] [PASSED] Unknown
[19:10:36] [PASSED] VGA
[19:10:36] [PASSED] DVI-I
[19:10:36] [PASSED] DVI-D
[19:10:36] [PASSED] DVI-A
[19:10:36] [PASSED] Composite
[19:10:36] [PASSED] SVIDEO
[19:10:36] [PASSED] LVDS
[19:10:36] [PASSED] Component
[19:10:36] [PASSED] DIN
[19:10:36] [PASSED] DP
[19:10:36] [PASSED] TV
[19:10:36] [PASSED] eDP
[19:10:36] [PASSED] Virtual
[19:10:36] [PASSED] DSI
[19:10:36] [PASSED] DPI
[19:10:36] [PASSED] Writeback
[19:10:36] [PASSED] SPI
[19:10:36] [PASSED] USB
[19:10:36] ==== [PASSED] drm_test_connector_hdmi_init_type_invalid ====
[19:10:36] ============ [PASSED] drmm_connector_hdmi_init =============
[19:10:36] ============= drmm_connector_init (3 subtests) =============
[19:10:36] [PASSED] drm_test_drmm_connector_init
[19:10:36] [PASSED] drm_test_drmm_connector_init_null_ddc
[19:10:36] ========= drm_test_drmm_connector_init_type_valid  =========
[19:10:36] [PASSED] Unknown
[19:10:36] [PASSED] VGA
[19:10:36] [PASSED] DVI-I
[19:10:36] [PASSED] DVI-D
[19:10:36] [PASSED] DVI-A
[19:10:36] [PASSED] Composite
[19:10:36] [PASSED] SVIDEO
[19:10:36] [PASSED] LVDS
[19:10:36] [PASSED] Component
[19:10:36] [PASSED] DIN
[19:10:36] [PASSED] DP
[19:10:36] [PASSED] HDMI-A
[19:10:36] [PASSED] HDMI-B
[19:10:36] [PASSED] TV
[19:10:36] [PASSED] eDP
[19:10:36] [PASSED] Virtual
[19:10:36] [PASSED] DSI
[19:10:36] [PASSED] DPI
[19:10:36] [PASSED] Writeback
[19:10:36] [PASSED] SPI
[19:10:36] [PASSED] USB
[19:10:36] ===== [PASSED] drm_test_drmm_connector_init_type_valid =====
[19:10:36] =============== [PASSED] drmm_connector_init ===============
[19:10:36] ========= drm_connector_dynamic_init (6 subtests) ==========
[19:10:36] [PASSED] drm_test_drm_connector_dynamic_init
[19:10:36] [PASSED] drm_test_drm_connector_dynamic_init_null_ddc
[19:10:36] [PASSED] drm_test_drm_connector_dynamic_init_not_added
[19:10:36] [PASSED] drm_test_drm_connector_dynamic_init_properties
[19:10:36] ===== drm_test_drm_connector_dynamic_init_type_valid  ======
[19:10:36] [PASSED] Unknown
[19:10:36] [PASSED] VGA
[19:10:36] [PASSED] DVI-I
[19:10:36] [PASSED] DVI-D
[19:10:36] [PASSED] DVI-A
[19:10:36] [PASSED] Composite
[19:10:36] [PASSED] SVIDEO
[19:10:36] [PASSED] LVDS
[19:10:36] [PASSED] Component
[19:10:36] [PASSED] DIN
[19:10:36] [PASSED] DP
[19:10:36] [PASSED] HDMI-A
[19:10:36] [PASSED] HDMI-B
[19:10:36] [PASSED] TV
[19:10:36] [PASSED] eDP
[19:10:36] [PASSED] Virtual
[19:10:36] [PASSED] DSI
[19:10:36] [PASSED] DPI
[19:10:36] [PASSED] Writeback
[19:10:36] [PASSED] SPI
[19:10:36] [PASSED] USB
[19:10:36] = [PASSED] drm_test_drm_connector_dynamic_init_type_valid ==
[19:10:36] ======== drm_test_drm_connector_dynamic_init_name  =========
[19:10:36] [PASSED] Unknown
[19:10:36] [PASSED] VGA
[19:10:36] [PASSED] DVI-I
[19:10:36] [PASSED] DVI-D
[19:10:36] [PASSED] DVI-A
[19:10:36] [PASSED] Composite
[19:10:36] [PASSED] SVIDEO
[19:10:36] [PASSED] LVDS
[19:10:36] [PASSED] Component
[19:10:36] [PASSED] DIN
[19:10:36] [PASSED] DP
[19:10:36] [PASSED] HDMI-A
[19:10:36] [PASSED] HDMI-B
[19:10:36] [PASSED] TV
[19:10:36] [PASSED] eDP
[19:10:36] [PASSED] Virtual
[19:10:36] [PASSED] DSI
[19:10:36] [PASSED] DPI
[19:10:36] [PASSED] Writeback
[19:10:36] [PASSED] SPI
[19:10:36] [PASSED] USB
[19:10:36] ==== [PASSED] drm_test_drm_connector_dynamic_init_name =====
[19:10:36] =========== [PASSED] drm_connector_dynamic_init ============
[19:10:36] ==== drm_connector_dynamic_register_early (4 subtests) =====
[19:10:36] [PASSED] drm_test_drm_connector_dynamic_register_early_on_list
[19:10:36] [PASSED] drm_test_drm_connector_dynamic_register_early_defer
[19:10:36] [PASSED] drm_test_drm_connector_dynamic_register_early_no_init
[19:10:36] [PASSED] drm_test_drm_connector_dynamic_register_early_no_mode_object
[19:10:36] ====== [PASSED] drm_connector_dynamic_register_early =======
[19:10:36] ======= drm_connector_dynamic_register (7 subtests) ========
[19:10:36] [PASSED] drm_test_drm_connector_dynamic_register_on_list
[19:10:36] [PASSED] drm_test_drm_connector_dynamic_register_no_defer
[19:10:36] [PASSED] drm_test_drm_connector_dynamic_register_no_init
[19:10:36] [PASSED] drm_test_drm_connector_dynamic_register_mode_object
[19:10:36] [PASSED] drm_test_drm_connector_dynamic_register_sysfs
[19:10:36] [PASSED] drm_test_drm_connector_dynamic_register_sysfs_name
[19:10:36] [PASSED] drm_test_drm_connector_dynamic_register_debugfs
[19:10:36] ========= [PASSED] drm_connector_dynamic_register ==========
[19:10:36] = drm_connector_attach_broadcast_rgb_property (2 subtests) =
[19:10:36] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property
[19:10:36] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property_hdmi_connector
[19:10:36] === [PASSED] drm_connector_attach_broadcast_rgb_property ===
[19:10:36] ========== drm_get_tv_mode_from_name (2 subtests) ==========
[19:10:36] ========== drm_test_get_tv_mode_from_name_valid  ===========
[19:10:36] [PASSED] NTSC
[19:10:36] [PASSED] NTSC-443
[19:10:36] [PASSED] NTSC-J
[19:10:36] [PASSED] PAL
[19:10:36] [PASSED] PAL-M
[19:10:36] [PASSED] PAL-N
[19:10:36] [PASSED] SECAM
[19:10:36] [PASSED] Mono
[19:10:36] ====== [PASSED] drm_test_get_tv_mode_from_name_valid =======
[19:10:36] [PASSED] drm_test_get_tv_mode_from_name_truncated
[19:10:36] ============ [PASSED] drm_get_tv_mode_from_name ============
[19:10:36] = drm_test_connector_hdmi_compute_mode_clock (12 subtests) =
[19:10:36] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb
[19:10:36] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc
[19:10:36] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc_vic_1
[19:10:36] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc
[19:10:36] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc_vic_1
[19:10:36] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_double
[19:10:36] = drm_test_connector_hdmi_compute_mode_clock_yuv420_valid  =
[19:10:36] [PASSED] VIC 96
[19:10:36] [PASSED] VIC 97
[19:10:36] [PASSED] VIC 101
[19:10:36] [PASSED] VIC 102
[19:10:36] [PASSED] VIC 106
[19:10:36] [PASSED] VIC 107
[19:10:36] === [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_valid ===
[19:10:36] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_10_bpc
[19:10:36] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_12_bpc
[19:10:36] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_8_bpc
[19:10:36] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_10_bpc
[19:10:36] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_12_bpc
[19:10:36] === [PASSED] drm_test_connector_hdmi_compute_mode_clock ====
[19:10:36] == drm_hdmi_connector_get_broadcast_rgb_name (2 subtests) ==
[19:10:36] === drm_test_drm_hdmi_connector_get_broadcast_rgb_name  ====
[19:10:36] [PASSED] Automatic
[19:10:36] [PASSED] Full
[19:10:36] [PASSED] Limited 16:235
[19:10:36] === [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name ===
[19:10:36] [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name_invalid
[19:10:36] ==== [PASSED] drm_hdmi_connector_get_broadcast_rgb_name ====
[19:10:36] == drm_hdmi_connector_get_output_format_name (2 subtests) ==
[19:10:36] === drm_test_drm_hdmi_connector_get_output_format_name  ====
[19:10:36] [PASSED] RGB
[19:10:36] [PASSED] YUV 4:2:0
[19:10:36] [PASSED] YUV 4:2:2
[19:10:36] [PASSED] YUV 4:4:4
[19:10:36] === [PASSED] drm_test_drm_hdmi_connector_get_output_format_name ===
[19:10:36] [PASSED] drm_test_drm_hdmi_connector_get_output_format_name_invalid
[19:10:36] ==== [PASSED] drm_hdmi_connector_get_output_format_name ====
[19:10:36] ============= drm_damage_helper (21 subtests) ==============
[19:10:36] [PASSED] drm_test_damage_iter_no_damage
[19:10:36] [PASSED] drm_test_damage_iter_no_damage_fractional_src
[19:10:36] [PASSED] drm_test_damage_iter_no_damage_src_moved
[19:10:36] [PASSED] drm_test_damage_iter_no_damage_fractional_src_moved
[19:10:36] [PASSED] drm_test_damage_iter_no_damage_not_visible
[19:10:36] [PASSED] drm_test_damage_iter_no_damage_no_crtc
[19:10:36] [PASSED] drm_test_damage_iter_no_damage_no_fb
[19:10:36] [PASSED] drm_test_damage_iter_simple_damage
[19:10:36] [PASSED] drm_test_damage_iter_single_damage
[19:10:36] [PASSED] drm_test_damage_iter_single_damage_intersect_src
[19:10:36] [PASSED] drm_test_damage_iter_single_damage_outside_src
[19:10:36] [PASSED] drm_test_damage_iter_single_damage_fractional_src
[19:10:36] [PASSED] drm_test_damage_iter_single_damage_intersect_fractional_src
[19:10:36] [PASSED] drm_test_damage_iter_single_damage_outside_fractional_src
[19:10:36] [PASSED] drm_test_damage_iter_single_damage_src_moved
[19:10:36] [PASSED] drm_test_damage_iter_single_damage_fractional_src_moved
[19:10:36] [PASSED] drm_test_damage_iter_damage
[19:10:36] [PASSED] drm_test_damage_iter_damage_one_intersect
[19:10:36] [PASSED] drm_test_damage_iter_damage_one_outside
[19:10:36] [PASSED] drm_test_damage_iter_damage_src_moved
[19:10:36] [PASSED] drm_test_damage_iter_damage_not_visible
[19:10:36] ================ [PASSED] drm_damage_helper ================
[19:10:36] ============== drm_dp_mst_helper (3 subtests) ==============
[19:10:36] ============== drm_test_dp_mst_calc_pbn_mode  ==============
[19:10:36] [PASSED] Clock 154000 BPP 30 DSC disabled
[19:10:36] [PASSED] Clock 234000 BPP 30 DSC disabled
[19:10:36] [PASSED] Clock 297000 BPP 24 DSC disabled
[19:10:36] [PASSED] Clock 332880 BPP 24 DSC enabled
[19:10:36] [PASSED] Clock 324540 BPP 24 DSC enabled
[19:10:36] ========== [PASSED] drm_test_dp_mst_calc_pbn_mode ==========
[19:10:36] ============== drm_test_dp_mst_calc_pbn_div  ===============
[19:10:36] [PASSED] Link rate 2000000 lane count 4
[19:10:36] [PASSED] Link rate 2000000 lane count 2
[19:10:36] [PASSED] Link rate 2000000 lane count 1
[19:10:36] [PASSED] Link rate 1350000 lane count 4
[19:10:36] [PASSED] Link rate 1350000 lane count 2
[19:10:36] [PASSED] Link rate 1350000 lane count 1
[19:10:36] [PASSED] Link rate 1000000 lane count 4
[19:10:36] [PASSED] Link rate 1000000 lane count 2
[19:10:36] [PASSED] Link rate 1000000 lane count 1
[19:10:36] [PASSED] Link rate 810000 lane count 4
[19:10:36] [PASSED] Link rate 810000 lane count 2
[19:10:36] [PASSED] Link rate 810000 lane count 1
[19:10:36] [PASSED] Link rate 540000 lane count 4
[19:10:36] [PASSED] Link rate 540000 lane count 2
[19:10:36] [PASSED] Link rate 540000 lane count 1
[19:10:36] [PASSED] Link rate 270000 lane count 4
[19:10:36] [PASSED] Link rate 270000 lane count 2
[19:10:36] [PASSED] Link rate 270000 lane count 1
[19:10:36] [PASSED] Link rate 162000 lane count 4
[19:10:36] [PASSED] Link rate 162000 lane count 2
[19:10:36] [PASSED] Link rate 162000 lane count 1
[19:10:36] ========== [PASSED] drm_test_dp_mst_calc_pbn_div ===========
[19:10:36] ========= drm_test_dp_mst_sideband_msg_req_decode  =========
[19:10:36] [PASSED] DP_ENUM_PATH_RESOURCES with port number
[19:10:36] [PASSED] DP_POWER_UP_PHY with port number
[19:10:36] [PASSED] DP_POWER_DOWN_PHY with port number
[19:10:36] [PASSED] DP_ALLOCATE_PAYLOAD with SDP stream sinks
[19:10:36] [PASSED] DP_ALLOCATE_PAYLOAD with port number
[19:10:36] [PASSED] DP_ALLOCATE_PAYLOAD with VCPI
[19:10:36] [PASSED] DP_ALLOCATE_PAYLOAD with PBN
[19:10:36] [PASSED] DP_QUERY_PAYLOAD with port number
[19:10:36] [PASSED] DP_QUERY_PAYLOAD with VCPI
[19:10:36] [PASSED] DP_REMOTE_DPCD_READ with port number
[19:10:36] [PASSED] DP_REMOTE_DPCD_READ with DPCD address
[19:10:36] [PASSED] DP_REMOTE_DPCD_READ with max number of bytes
[19:10:36] [PASSED] DP_REMOTE_DPCD_WRITE with port number
[19:10:36] [PASSED] DP_REMOTE_DPCD_WRITE with DPCD address
[19:10:36] [PASSED] DP_REMOTE_DPCD_WRITE with data array
[19:10:36] [PASSED] DP_REMOTE_I2C_READ with port number
[19:10:36] [PASSED] DP_REMOTE_I2C_READ with I2C device ID
[19:10:36] [PASSED] DP_REMOTE_I2C_READ with transactions array
[19:10:36] [PASSED] DP_REMOTE_I2C_WRITE with port number
[19:10:36] [PASSED] DP_REMOTE_I2C_WRITE with I2C device ID
[19:10:36] [PASSED] DP_REMOTE_I2C_WRITE with data array
[19:10:36] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream ID
[19:10:36] [PASSED] DP_QUERY_STREAM_ENC_STATUS with client ID
[19:10:36] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream event
[19:10:36] [PASSED] DP_QUERY_STREAM_ENC_STATUS with valid stream event
[19:10:36] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream behavior
[19:10:36] [PASSED] DP_QUERY_STREAM_ENC_STATUS with a valid stream behavior
[19:10:36] ===== [PASSED] drm_test_dp_mst_sideband_msg_req_decode =====
[19:10:36] ================ [PASSED] drm_dp_mst_helper ================
[19:10:36] ================== drm_exec (7 subtests) ===================
[19:10:36] [PASSED] sanitycheck
[19:10:36] [PASSED] test_lock
[19:10:36] [PASSED] test_lock_unlock
[19:10:36] [PASSED] test_duplicates
[19:10:36] [PASSED] test_prepare
[19:10:36] [PASSED] test_prepare_array
[19:10:36] [PASSED] test_multiple_loops
[19:10:36] ==================== [PASSED] drm_exec =====================
[19:10:36] =========== drm_format_helper_test (17 subtests) ===========
[19:10:36] ============== drm_test_fb_xrgb8888_to_gray8  ==============
[19:10:36] [PASSED] single_pixel_source_buffer
[19:10:36] [PASSED] single_pixel_clip_rectangle
[19:10:36] [PASSED] well_known_colors
[19:10:36] [PASSED] destination_pitch
[19:10:36] ========== [PASSED] drm_test_fb_xrgb8888_to_gray8 ==========
[19:10:36] ============= drm_test_fb_xrgb8888_to_rgb332  ==============
[19:10:36] [PASSED] single_pixel_source_buffer
[19:10:36] [PASSED] single_pixel_clip_rectangle
[19:10:36] [PASSED] well_known_colors
[19:10:36] [PASSED] destination_pitch
[19:10:36] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb332 ==========
[19:10:36] ============= drm_test_fb_xrgb8888_to_rgb565  ==============
[19:10:36] [PASSED] single_pixel_source_buffer
[19:10:36] [PASSED] single_pixel_clip_rectangle
[19:10:36] [PASSED] well_known_colors
[19:10:36] [PASSED] destination_pitch
[19:10:36] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb565 ==========
[19:10:36] ============ drm_test_fb_xrgb8888_to_xrgb1555  =============
[19:10:36] [PASSED] single_pixel_source_buffer
[19:10:36] [PASSED] single_pixel_clip_rectangle
[19:10:36] [PASSED] well_known_colors
[19:10:36] [PASSED] destination_pitch
[19:10:36] ======== [PASSED] drm_test_fb_xrgb8888_to_xrgb1555 =========
[19:10:36] ============ drm_test_fb_xrgb8888_to_argb1555  =============
[19:10:36] [PASSED] single_pixel_source_buffer
[19:10:36] [PASSED] single_pixel_clip_rectangle
[19:10:36] [PASSED] well_known_colors
[19:10:36] [PASSED] destination_pitch
[19:10:36] ======== [PASSED] drm_test_fb_xrgb8888_to_argb1555 =========
[19:10:36] ============ drm_test_fb_xrgb8888_to_rgba5551  =============
[19:10:36] [PASSED] single_pixel_source_buffer
[19:10:36] [PASSED] single_pixel_clip_rectangle
[19:10:36] [PASSED] well_known_colors
[19:10:36] [PASSED] destination_pitch
[19:10:36] ======== [PASSED] drm_test_fb_xrgb8888_to_rgba5551 =========
[19:10:36] ============= drm_test_fb_xrgb8888_to_rgb888  ==============
[19:10:36] [PASSED] single_pixel_source_buffer
[19:10:36] [PASSED] single_pixel_clip_rectangle
[19:10:36] [PASSED] well_known_colors
[19:10:36] [PASSED] destination_pitch
[19:10:36] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb888 ==========
[19:10:36] ============= drm_test_fb_xrgb8888_to_bgr888  ==============
[19:10:36] [PASSED] single_pixel_source_buffer
[19:10:36] [PASSED] single_pixel_clip_rectangle
[19:10:36] [PASSED] well_known_colors
[19:10:36] [PASSED] destination_pitch
[19:10:36] ========= [PASSED] drm_test_fb_xrgb8888_to_bgr888 ==========
[19:10:36] ============ drm_test_fb_xrgb8888_to_argb8888  =============
[19:10:36] [PASSED] single_pixel_source_buffer
[19:10:36] [PASSED] single_pixel_clip_rectangle
[19:10:36] [PASSED] well_known_colors
[19:10:36] [PASSED] destination_pitch
[19:10:36] ======== [PASSED] drm_test_fb_xrgb8888_to_argb8888 =========
[19:10:36] =========== drm_test_fb_xrgb8888_to_xrgb2101010  ===========
[19:10:36] [PASSED] single_pixel_source_buffer
[19:10:36] [PASSED] single_pixel_clip_rectangle
[19:10:36] [PASSED] well_known_colors
[19:10:36] [PASSED] destination_pitch
[19:10:36] ======= [PASSED] drm_test_fb_xrgb8888_to_xrgb2101010 =======
[19:10:36] =========== drm_test_fb_xrgb8888_to_argb2101010  ===========
[19:10:36] [PASSED] single_pixel_source_buffer
[19:10:36] [PASSED] single_pixel_clip_rectangle
[19:10:36] [PASSED] well_known_colors
[19:10:36] [PASSED] destination_pitch
[19:10:36] ======= [PASSED] drm_test_fb_xrgb8888_to_argb2101010 =======
[19:10:36] ============== drm_test_fb_xrgb8888_to_mono  ===============
[19:10:36] [PASSED] single_pixel_source_buffer
[19:10:36] [PASSED] single_pixel_clip_rectangle
[19:10:36] [PASSED] well_known_colors
[19:10:36] [PASSED] destination_pitch
[19:10:36] ========== [PASSED] drm_test_fb_xrgb8888_to_mono ===========
[19:10:36] ==================== drm_test_fb_swab  =====================
[19:10:36] [PASSED] single_pixel_source_buffer
[19:10:36] [PASSED] single_pixel_clip_rectangle
[19:10:36] [PASSED] well_known_colors
[19:10:36] [PASSED] destination_pitch
[19:10:36] ================ [PASSED] drm_test_fb_swab =================
[19:10:36] ============ drm_test_fb_xrgb8888_to_xbgr8888  =============
[19:10:36] [PASSED] single_pixel_source_buffer
[19:10:36] [PASSED] single_pixel_clip_rectangle
[19:10:36] [PASSED] well_known_colors
[19:10:36] [PASSED] destination_pitch
[19:10:36] ======== [PASSED] drm_test_fb_xrgb8888_to_xbgr8888 =========
[19:10:36] ============ drm_test_fb_xrgb8888_to_abgr8888  =============
[19:10:36] [PASSED] single_pixel_source_buffer
[19:10:36] [PASSED] single_pixel_clip_rectangle
[19:10:36] [PASSED] well_known_colors
[19:10:36] [PASSED] destination_pitch
[19:10:36] ======== [PASSED] drm_test_fb_xrgb8888_to_abgr8888 =========
[19:10:36] ================= drm_test_fb_clip_offset  =================
[19:10:36] [PASSED] pass through
[19:10:36] [PASSED] horizontal offset
[19:10:36] [PASSED] vertical offset
[19:10:36] [PASSED] horizontal and vertical offset
[19:10:36] [PASSED] horizontal offset (custom pitch)
[19:10:36] [PASSED] vertical offset (custom pitch)
[19:10:36] [PASSED] horizontal and vertical offset (custom pitch)
[19:10:36] ============= [PASSED] drm_test_fb_clip_offset =============
[19:10:36] =================== drm_test_fb_memcpy  ====================
[19:10:36] [PASSED] single_pixel_source_buffer: XR24 little-endian (0x34325258)
[19:10:36] [PASSED] single_pixel_source_buffer: XRA8 little-endian (0x38415258)
[19:10:36] [PASSED] single_pixel_source_buffer: YU24 little-endian (0x34325559)
[19:10:36] [PASSED] single_pixel_clip_rectangle: XB24 little-endian (0x34324258)
[19:10:36] [PASSED] single_pixel_clip_rectangle: XRA8 little-endian (0x38415258)
[19:10:36] [PASSED] single_pixel_clip_rectangle: YU24 little-endian (0x34325559)
[19:10:36] [PASSED] well_known_colors: XB24 little-endian (0x34324258)
[19:10:36] [PASSED] well_known_colors: XRA8 little-endian (0x38415258)
[19:10:36] [PASSED] well_known_colors: YU24 little-endian (0x34325559)
[19:10:36] [PASSED] destination_pitch: XB24 little-endian (0x34324258)
[19:10:36] [PASSED] destination_pitch: XRA8 little-endian (0x38415258)
[19:10:36] [PASSED] destination_pitch: YU24 little-endian (0x34325559)
[19:10:36] =============== [PASSED] drm_test_fb_memcpy ================
[19:10:36] ============= [PASSED] drm_format_helper_test ==============
[19:10:36] ================= drm_format (18 subtests) =================
[19:10:36] [PASSED] drm_test_format_block_width_invalid
[19:10:36] [PASSED] drm_test_format_block_width_one_plane
[19:10:36] [PASSED] drm_test_format_block_width_two_plane
[19:10:36] [PASSED] drm_test_format_block_width_three_plane
[19:10:36] [PASSED] drm_test_format_block_width_tiled
[19:10:36] [PASSED] drm_test_format_block_height_invalid
[19:10:36] [PASSED] drm_test_format_block_height_one_plane
[19:10:36] [PASSED] drm_test_format_block_height_two_plane
[19:10:36] [PASSED] drm_test_format_block_height_three_plane
[19:10:36] [PASSED] drm_test_format_block_height_tiled
[19:10:36] [PASSED] drm_test_format_min_pitch_invalid
[19:10:36] [PASSED] drm_test_format_min_pitch_one_plane_8bpp
[19:10:36] [PASSED] drm_test_format_min_pitch_one_plane_16bpp
[19:10:36] [PASSED] drm_test_format_min_pitch_one_plane_24bpp
[19:10:36] [PASSED] drm_test_format_min_pitch_one_plane_32bpp
[19:10:36] [PASSED] drm_test_format_min_pitch_two_plane
[19:10:36] [PASSED] drm_test_format_min_pitch_three_plane_8bpp
[19:10:36] [PASSED] drm_test_format_min_pitch_tiled
[19:10:36] =================== [PASSED] drm_format ====================
[19:10:36] ============== drm_framebuffer (10 subtests) ===============
[19:10:36] ========== drm_test_framebuffer_check_src_coords  ==========
[19:10:36] [PASSED] Success: source fits into fb
[19:10:36] [PASSED] Fail: overflowing fb with x-axis coordinate
[19:10:36] [PASSED] Fail: overflowing fb with y-axis coordinate
[19:10:36] [PASSED] Fail: overflowing fb with source width
[19:10:36] [PASSED] Fail: overflowing fb with source height
[19:10:36] ====== [PASSED] drm_test_framebuffer_check_src_coords ======
[19:10:36] [PASSED] drm_test_framebuffer_cleanup
[19:10:36] =============== drm_test_framebuffer_create  ===============
[19:10:36] [PASSED] ABGR8888 normal sizes
[19:10:36] [PASSED] ABGR8888 max sizes
[19:10:36] [PASSED] ABGR8888 pitch greater than min required
[19:10:36] [PASSED] ABGR8888 pitch less than min required
[19:10:36] [PASSED] ABGR8888 Invalid width
[19:10:36] [PASSED] ABGR8888 Invalid buffer handle
[19:10:36] [PASSED] No pixel format
[19:10:36] [PASSED] ABGR8888 Width 0
[19:10:36] [PASSED] ABGR8888 Height 0
[19:10:36] [PASSED] ABGR8888 Out of bound height * pitch combination
[19:10:36] [PASSED] ABGR8888 Large buffer offset
[19:10:36] [PASSED] ABGR8888 Buffer offset for inexistent plane
[19:10:36] [PASSED] ABGR8888 Invalid flag
[19:10:36] [PASSED] ABGR8888 Set DRM_MODE_FB_MODIFIERS without modifiers
[19:10:36] [PASSED] ABGR8888 Valid buffer modifier
[19:10:36] [PASSED] ABGR8888 Invalid buffer modifier(DRM_FORMAT_MOD_SAMSUNG_64_32_TILE)
[19:10:36] [PASSED] ABGR8888 Extra pitches without DRM_MODE_FB_MODIFIERS
[19:10:36] [PASSED] ABGR8888 Extra pitches with DRM_MODE_FB_MODIFIERS
[19:10:36] [PASSED] NV12 Normal sizes
[19:10:36] [PASSED] NV12 Max sizes
[19:10:36] [PASSED] NV12 Invalid pitch
[19:10:36] [PASSED] NV12 Invalid modifier/missing DRM_MODE_FB_MODIFIERS flag
[19:10:36] [PASSED] NV12 different  modifier per-plane
[19:10:36] [PASSED] NV12 with DRM_FORMAT_MOD_SAMSUNG_64_32_TILE
[19:10:36] [PASSED] NV12 Valid modifiers without DRM_MODE_FB_MODIFIERS
[19:10:36] [PASSED] NV12 Modifier for inexistent plane
[19:10:36] [PASSED] NV12 Handle for inexistent plane
[19:10:36] [PASSED] NV12 Handle for inexistent plane without DRM_MODE_FB_MODIFIERS
[19:10:36] [PASSED] YVU420 DRM_MODE_FB_MODIFIERS set without modifier
[19:10:36] [PASSED] YVU420 Normal sizes
[19:10:36] [PASSED] YVU420 Max sizes
[19:10:36] [PASSED] YVU420 Invalid pitch
[19:10:36] [PASSED] YVU420 Different pitches
[19:10:36] [PASSED] YVU420 Different buffer offsets/pitches
[19:10:36] [PASSED] YVU420 Modifier set just for plane 0, without DRM_MODE_FB_MODIFIERS
[19:10:36] [PASSED] YVU420 Modifier set just for planes 0, 1, without DRM_MODE_FB_MODIFIERS
[19:10:36] [PASSED] YVU420 Modifier set just for plane 0, 1, with DRM_MODE_FB_MODIFIERS
[19:10:36] [PASSED] YVU420 Valid modifier
[19:10:36] [PASSED] YVU420 Different modifiers per plane
[19:10:36] [PASSED] YVU420 Modifier for inexistent plane
[19:10:36] [PASSED] YUV420_10BIT Invalid modifier(DRM_FORMAT_MOD_LINEAR)
[19:10:36] [PASSED] X0L2 Normal sizes
[19:10:36] [PASSED] X0L2 Max sizes
[19:10:36] [PASSED] X0L2 Invalid pitch
[19:10:36] [PASSED] X0L2 Pitch greater than minimum required
[19:10:36] [PASSED] X0L2 Handle for inexistent plane
[19:10:36] [PASSED] X0L2 Offset for inexistent plane, without DRM_MODE_FB_MODIFIERS set
[19:10:36] [PASSED] X0L2 Modifier without DRM_MODE_FB_MODIFIERS set
[19:10:36] [PASSED] X0L2 Valid modifier
[19:10:36] [PASSED] X0L2 Modifier for inexistent plane
[19:10:36] =========== [PASSED] drm_test_framebuffer_create ===========
[19:10:36] [PASSED] drm_test_framebuffer_free
[19:10:36] [PASSED] drm_test_framebuffer_init
[19:10:36] [PASSED] drm_test_framebuffer_init_bad_format
[19:10:36] [PASSED] drm_test_framebuffer_init_dev_mismatch
[19:10:36] [PASSED] drm_test_framebuffer_lookup
[19:10:36] [PASSED] drm_test_framebuffer_lookup_inexistent
[19:10:36] [PASSED] drm_test_framebuffer_modifiers_not_supported
[19:10:36] ================= [PASSED] drm_framebuffer =================
[19:10:36] ================ drm_gem_shmem (8 subtests) ================
[19:10:36] [PASSED] drm_gem_shmem_test_obj_create
[19:10:36] [PASSED] drm_gem_shmem_test_obj_create_private
[19:10:36] [PASSED] drm_gem_shmem_test_pin_pages
[19:10:36] [PASSED] drm_gem_shmem_test_vmap
[19:10:36] [PASSED] drm_gem_shmem_test_get_pages_sgt
[19:10:36] [PASSED] drm_gem_shmem_test_get_sg_table
[19:10:36] [PASSED] drm_gem_shmem_test_madvise
[19:10:36] [PASSED] drm_gem_shmem_test_purge
[19:10:36] ================== [PASSED] drm_gem_shmem ==================
[19:10:36] === drm_atomic_helper_connector_hdmi_check (27 subtests) ===
[19:10:36] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode
[19:10:36] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode_vic_1
[19:10:36] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode
[19:10:36] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode_vic_1
[19:10:36] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode
[19:10:36] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode_vic_1
[19:10:36] ====== drm_test_check_broadcast_rgb_cea_mode_yuv420  =======
[19:10:36] [PASSED] Automatic
[19:10:36] [PASSED] Full
[19:10:36] [PASSED] Limited 16:235
[19:10:36] == [PASSED] drm_test_check_broadcast_rgb_cea_mode_yuv420 ===
[19:10:36] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_changed
[19:10:36] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_not_changed
[19:10:36] [PASSED] drm_test_check_disable_connector
[19:10:36] [PASSED] drm_test_check_hdmi_funcs_reject_rate
[19:10:36] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_rgb
[19:10:36] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_yuv420
[19:10:36] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv422
[19:10:36] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv420
[19:10:36] [PASSED] drm_test_check_driver_unsupported_fallback_yuv420
[19:10:36] [PASSED] drm_test_check_output_bpc_crtc_mode_changed
[19:10:36] [PASSED] drm_test_check_output_bpc_crtc_mode_not_changed
[19:10:36] [PASSED] drm_test_check_output_bpc_dvi
[19:10:36] [PASSED] drm_test_check_output_bpc_format_vic_1
[19:10:36] [PASSED] drm_test_check_output_bpc_format_display_8bpc_only
[19:10:36] [PASSED] drm_test_check_output_bpc_format_display_rgb_only
[19:10:36] [PASSED] drm_test_check_output_bpc_format_driver_8bpc_only
[19:10:36] [PASSED] drm_test_check_output_bpc_format_driver_rgb_only
[19:10:36] [PASSED] drm_test_check_tmds_char_rate_rgb_8bpc
[19:10:36] [PASSED] drm_test_check_tmds_char_rate_rgb_10bpc
[19:10:36] [PASSED] drm_test_check_tmds_char_rate_rgb_12bpc
[19:10:36] ===== [PASSED] drm_atomic_helper_connector_hdmi_check ======
[19:10:36] === drm_atomic_helper_connector_hdmi_reset (6 subtests) ====
[19:10:36] [PASSED] drm_test_check_broadcast_rgb_value
[19:10:36] [PASSED] drm_test_check_bpc_8_value
[19:10:36] [PASSED] drm_test_check_bpc_10_value
[19:10:36] [PASSED] drm_test_check_bpc_12_value
[19:10:36] [PASSED] drm_test_check_format_value
[19:10:36] [PASSED] drm_test_check_tmds_char_value
[19:10:36] ===== [PASSED] drm_atomic_helper_connector_hdmi_reset ======
[19:10:36] = drm_atomic_helper_connector_hdmi_mode_valid (4 subtests) =
[19:10:36] [PASSED] drm_test_check_mode_valid
[19:10:36] [PASSED] drm_test_check_mode_valid_reject
[19:10:36] [PASSED] drm_test_check_mode_valid_reject_rate
[19:10:36] [PASSED] drm_test_check_mode_valid_reject_max_clock
[19:10:36] === [PASSED] drm_atomic_helper_connector_hdmi_mode_valid ===
[19:10:36] ================= drm_managed (2 subtests) =================
[19:10:36] [PASSED] drm_test_managed_release_action
[19:10:36] [PASSED] drm_test_managed_run_action
[19:10:36] =================== [PASSED] drm_managed ===================
[19:10:36] =================== drm_mm (6 subtests) ====================
[19:10:36] [PASSED] drm_test_mm_init
[19:10:36] [PASSED] drm_test_mm_debug
[19:10:36] [PASSED] drm_test_mm_align32
[19:10:36] [PASSED] drm_test_mm_align64
[19:10:36] [PASSED] drm_test_mm_lowest
[19:10:36] [PASSED] drm_test_mm_highest
[19:10:36] ===================== [PASSED] drm_mm ======================
[19:10:36] ============= drm_modes_analog_tv (5 subtests) =============
[19:10:36] [PASSED] drm_test_modes_analog_tv_mono_576i
[19:10:36] [PASSED] drm_test_modes_analog_tv_ntsc_480i
[19:10:36] [PASSED] drm_test_modes_analog_tv_ntsc_480i_inlined
[19:10:36] [PASSED] drm_test_modes_analog_tv_pal_576i
[19:10:36] [PASSED] drm_test_modes_analog_tv_pal_576i_inlined
[19:10:36] =============== [PASSED] drm_modes_analog_tv ===============
[19:10:36] ============== drm_plane_helper (2 subtests) ===============
[19:10:36] =============== drm_test_check_plane_state  ================
[19:10:36] [PASSED] clipping_simple
[19:10:36] [PASSED] clipping_rotate_reflect
[19:10:36] [PASSED] positioning_simple
[19:10:36] [PASSED] upscaling
[19:10:36] [PASSED] downscaling
[19:10:36] [PASSED] rounding1
[19:10:36] [PASSED] rounding2
[19:10:36] [PASSED] rounding3
[19:10:36] [PASSED] rounding4
[19:10:36] =========== [PASSED] drm_test_check_plane_state ============
[19:10:36] =========== drm_test_check_invalid_plane_state  ============
[19:10:36] [PASSED] positioning_invalid
[19:10:36] [PASSED] upscaling_invalid
[19:10:36] [PASSED] downscaling_invalid
[19:10:36] ======= [PASSED] drm_test_check_invalid_plane_state ========
[19:10:36] ================ [PASSED] drm_plane_helper =================
[19:10:36] ====== drm_connector_helper_tv_get_modes (1 subtest) =======
[19:10:36] ====== drm_test_connector_helper_tv_get_modes_check  =======
[19:10:36] [PASSED] None
[19:10:36] [PASSED] PAL
[19:10:36] [PASSED] NTSC
[19:10:36] [PASSED] Both, NTSC Default
[19:10:36] [PASSED] Both, PAL Default
[19:10:36] [PASSED] Both, NTSC Default, with PAL on command-line
[19:10:36] [PASSED] Both, PAL Default, with NTSC on command-line
[19:10:36] == [PASSED] drm_test_connector_helper_tv_get_modes_check ===
[19:10:36] ======== [PASSED] drm_connector_helper_tv_get_modes ========
[19:10:36] ================== drm_rect (9 subtests) ===================
[19:10:36] [PASSED] drm_test_rect_clip_scaled_div_by_zero
[19:10:36] [PASSED] drm_test_rect_clip_scaled_not_clipped
[19:10:36] [PASSED] drm_test_rect_clip_scaled_clipped
[19:10:36] [PASSED] drm_test_rect_clip_scaled_signed_vs_unsigned
[19:10:36] ================= drm_test_rect_intersect  =================
[19:10:36] [PASSED] top-left x bottom-right: 2x2+1+1 x 2x2+0+0
[19:10:36] [PASSED] top-right x bottom-left: 2x2+0+0 x 2x2+1-1
[19:10:36] [PASSED] bottom-left x top-right: 2x2+1-1 x 2x2+0+0
[19:10:36] [PASSED] bottom-right x top-left: 2x2+0+0 x 2x2+1+1
[19:10:36] [PASSED] right x left: 2x1+0+0 x 3x1+1+0
[19:10:36] [PASSED] left x right: 3x1+1+0 x 2x1+0+0
[19:10:36] [PASSED] up x bottom: 1x2+0+0 x 1x3+0-1
[19:10:36] [PASSED] bottom x up: 1x3+0-1 x 1x2+0+0
[19:10:36] [PASSED] touching corner: 1x1+0+0 x 2x2+1+1
[19:10:36] [PASSED] touching side: 1x1+0+0 x 1x1+1+0
[19:10:36] [PASSED] equal rects: 2x2+0+0 x 2x2+0+0
[19:10:36] [PASSED] inside another: 2x2+0+0 x 1x1+1+1
[19:10:36] [PASSED] far away: 1x1+0+0 x 1x1+3+6
[19:10:36] [PASSED] points intersecting: 0x0+5+10 x 0x0+5+10
[19:10:36] [PASSED] points not intersecting: 0x0+0+0 x 0x0+5+10
[19:10:36] ============= [PASSED] drm_test_rect_intersect =============
[19:10:36] ================ drm_test_rect_calc_hscale  ================
[19:10:36] [PASSED] normal use
[19:10:36] [PASSED] out of max range
[19:10:36] [PASSED] out of min range
[19:10:36] [PASSED] zero dst
[19:10:36] [PASSED] negative src
[19:10:36] [PASSED] negative dst
[19:10:36] ============ [PASSED] drm_test_rect_calc_hscale ============
[19:10:36] ================ drm_test_rect_calc_vscale  ================
[19:10:36] [PASSED] normal use
[19:10:36] [PASSED] out of max range
[19:10:36] [PASSED] out of min range
[19:10:36] [PASSED] zero dst
[19:10:36] [PASSED] negative src
[19:10:36] [PASSED] negative dst
[19:10:36] ============ [PASSED] drm_test_rect_calc_vscale ============
[19:10:36] ================== drm_test_rect_rotate  ===================
[19:10:36] [PASSED] reflect-x
[19:10:36] [PASSED] reflect-y
[19:10:36] [PASSED] rotate-0
[19:10:36] [PASSED] rotate-90
[19:10:36] [PASSED] rotate-180
[19:10:36] [PASSED] rotate-270
stty: 'standard input': Inappropriate ioctl for device
[19:10:36] ============== [PASSED] drm_test_rect_rotate ===============
[19:10:36] ================ drm_test_rect_rotate_inv  =================
[19:10:36] [PASSED] reflect-x
[19:10:36] [PASSED] reflect-y
[19:10:36] [PASSED] rotate-0
[19:10:36] [PASSED] rotate-90
[19:10:36] [PASSED] rotate-180
[19:10:36] [PASSED] rotate-270
[19:10:36] ============ [PASSED] drm_test_rect_rotate_inv =============
[19:10:36] ==================== [PASSED] drm_rect =====================
[19:10:36] ============ drm_sysfb_modeset_test (1 subtest) ============
[19:10:36] ============ drm_test_sysfb_build_fourcc_list  =============
[19:10:36] [PASSED] no native formats
[19:10:36] [PASSED] XRGB8888 as native format
[19:10:36] [PASSED] remove duplicates
[19:10:36] [PASSED] convert alpha formats
[19:10:36] [PASSED] random formats
[19:10:36] ======== [PASSED] drm_test_sysfb_build_fourcc_list =========
[19:10:36] ============= [PASSED] drm_sysfb_modeset_test ==============
[19:10:36] ============================================================
[19:10:36] Testing complete. Ran 616 tests: passed: 616
[19:10:36] Elapsed time: 24.442s total, 1.710s configuring, 22.565s building, 0.149s running

+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/ttm/tests/.kunitconfig
[19:10:36] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[19:10:38] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[19:10:46] Starting KUnit Kernel (1/1)...
[19:10:46] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[19:10:46] ================= ttm_device (5 subtests) ==================
[19:10:46] [PASSED] ttm_device_init_basic
[19:10:46] [PASSED] ttm_device_init_multiple
[19:10:46] [PASSED] ttm_device_fini_basic
[19:10:46] [PASSED] ttm_device_init_no_vma_man
[19:10:46] ================== ttm_device_init_pools  ==================
[19:10:46] [PASSED] No DMA allocations, no DMA32 required
[19:10:46] [PASSED] DMA allocations, DMA32 required
[19:10:46] [PASSED] No DMA allocations, DMA32 required
[19:10:46] [PASSED] DMA allocations, no DMA32 required
[19:10:46] ============== [PASSED] ttm_device_init_pools ==============
[19:10:46] =================== [PASSED] ttm_device ====================
[19:10:46] ================== ttm_pool (8 subtests) ===================
[19:10:46] ================== ttm_pool_alloc_basic  ===================
[19:10:46] [PASSED] One page
[19:10:46] [PASSED] More than one page
[19:10:46] [PASSED] Above the allocation limit
[19:10:46] [PASSED] One page, with coherent DMA mappings enabled
[19:10:46] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[19:10:46] ============== [PASSED] ttm_pool_alloc_basic ===============
[19:10:46] ============== ttm_pool_alloc_basic_dma_addr  ==============
[19:10:46] [PASSED] One page
[19:10:46] [PASSED] More than one page
[19:10:46] [PASSED] Above the allocation limit
[19:10:46] [PASSED] One page, with coherent DMA mappings enabled
[19:10:46] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[19:10:46] ========== [PASSED] ttm_pool_alloc_basic_dma_addr ==========
[19:10:46] [PASSED] ttm_pool_alloc_order_caching_match
[19:10:46] [PASSED] ttm_pool_alloc_caching_mismatch
[19:10:46] [PASSED] ttm_pool_alloc_order_mismatch
[19:10:46] [PASSED] ttm_pool_free_dma_alloc
[19:10:46] [PASSED] ttm_pool_free_no_dma_alloc
[19:10:46] [PASSED] ttm_pool_fini_basic
[19:10:46] ==================== [PASSED] ttm_pool =====================
[19:10:46] ================ ttm_resource (8 subtests) =================
[19:10:46] ================= ttm_resource_init_basic  =================
[19:10:46] [PASSED] Init resource in TTM_PL_SYSTEM
[19:10:46] [PASSED] Init resource in TTM_PL_VRAM
[19:10:46] [PASSED] Init resource in a private placement
[19:10:46] [PASSED] Init resource in TTM_PL_SYSTEM, set placement flags
[19:10:46] ============= [PASSED] ttm_resource_init_basic =============
[19:10:46] [PASSED] ttm_resource_init_pinned
[19:10:46] [PASSED] ttm_resource_fini_basic
[19:10:46] [PASSED] ttm_resource_manager_init_basic
[19:10:46] [PASSED] ttm_resource_manager_usage_basic
[19:10:46] [PASSED] ttm_resource_manager_set_used_basic
[19:10:46] [PASSED] ttm_sys_man_alloc_basic
[19:10:46] [PASSED] ttm_sys_man_free_basic
[19:10:46] ================== [PASSED] ttm_resource ===================
[19:10:46] =================== ttm_tt (15 subtests) ===================
[19:10:46] ==================== ttm_tt_init_basic  ====================
[19:10:46] [PASSED] Page-aligned size
[19:10:46] [PASSED] Extra pages requested
[19:10:46] ================ [PASSED] ttm_tt_init_basic ================
[19:10:46] [PASSED] ttm_tt_init_misaligned
[19:10:46] [PASSED] ttm_tt_fini_basic
[19:10:46] [PASSED] ttm_tt_fini_sg
[19:10:46] [PASSED] ttm_tt_fini_shmem
[19:10:46] [PASSED] ttm_tt_create_basic
[19:10:46] [PASSED] ttm_tt_create_invalid_bo_type
[19:10:46] [PASSED] ttm_tt_create_ttm_exists
[19:10:46] [PASSED] ttm_tt_create_failed
[19:10:46] [PASSED] ttm_tt_destroy_basic
[19:10:46] [PASSED] ttm_tt_populate_null_ttm
[19:10:46] [PASSED] ttm_tt_populate_populated_ttm
[19:10:46] [PASSED] ttm_tt_unpopulate_basic
[19:10:46] [PASSED] ttm_tt_unpopulate_empty_ttm
[19:10:46] [PASSED] ttm_tt_swapin_basic
[19:10:46] ===================== [PASSED] ttm_tt ======================
[19:10:46] =================== ttm_bo (14 subtests) ===================
[19:10:46] =========== ttm_bo_reserve_optimistic_no_ticket  ===========
[19:10:46] [PASSED] Cannot be interrupted and sleeps
[19:10:46] [PASSED] Cannot be interrupted, locks straight away
[19:10:46] [PASSED] Can be interrupted, sleeps
[19:10:46] ======= [PASSED] ttm_bo_reserve_optimistic_no_ticket =======
[19:10:46] [PASSED] ttm_bo_reserve_locked_no_sleep
[19:10:46] [PASSED] ttm_bo_reserve_no_wait_ticket
[19:10:46] [PASSED] ttm_bo_reserve_double_resv
[19:10:46] [PASSED] ttm_bo_reserve_interrupted
[19:10:46] [PASSED] ttm_bo_reserve_deadlock
[19:10:46] [PASSED] ttm_bo_unreserve_basic
[19:10:46] [PASSED] ttm_bo_unreserve_pinned
[19:10:46] [PASSED] ttm_bo_unreserve_bulk
[19:10:46] [PASSED] ttm_bo_put_basic
[19:10:46] [PASSED] ttm_bo_put_shared_resv
[19:10:46] [PASSED] ttm_bo_pin_basic
[19:10:46] [PASSED] ttm_bo_pin_unpin_resource
[19:10:46] [PASSED] ttm_bo_multiple_pin_one_unpin
[19:10:46] ===================== [PASSED] ttm_bo ======================
[19:10:46] ============== ttm_bo_validate (21 subtests) ===============
[19:10:46] ============== ttm_bo_init_reserved_sys_man  ===============
[19:10:46] [PASSED] Buffer object for userspace
[19:10:46] [PASSED] Kernel buffer object
[19:10:46] [PASSED] Shared buffer object
[19:10:46] ========== [PASSED] ttm_bo_init_reserved_sys_man ===========
[19:10:46] ============== ttm_bo_init_reserved_mock_man  ==============
[19:10:46] [PASSED] Buffer object for userspace
[19:10:46] [PASSED] Kernel buffer object
[19:10:46] [PASSED] Shared buffer object
[19:10:46] ========== [PASSED] ttm_bo_init_reserved_mock_man ==========
[19:10:46] [PASSED] ttm_bo_init_reserved_resv
[19:10:46] ================== ttm_bo_validate_basic  ==================
[19:10:46] [PASSED] Buffer object for userspace
[19:10:46] [PASSED] Kernel buffer object
[19:10:46] [PASSED] Shared buffer object
[19:10:46] ============== [PASSED] ttm_bo_validate_basic ==============
[19:10:46] [PASSED] ttm_bo_validate_invalid_placement
[19:10:46] ============= ttm_bo_validate_same_placement  ==============
[19:10:46] [PASSED] System manager
[19:10:46] [PASSED] VRAM manager
[19:10:46] ========= [PASSED] ttm_bo_validate_same_placement ==========
[19:10:46] [PASSED] ttm_bo_validate_failed_alloc
[19:10:46] [PASSED] ttm_bo_validate_pinned
[19:10:46] [PASSED] ttm_bo_validate_busy_placement
[19:10:46] ================ ttm_bo_validate_multihop  =================
[19:10:46] [PASSED] Buffer object for userspace
[19:10:46] [PASSED] Kernel buffer object
[19:10:46] [PASSED] Shared buffer object
[19:10:46] ============ [PASSED] ttm_bo_validate_multihop =============
[19:10:46] ========== ttm_bo_validate_no_placement_signaled  ==========
[19:10:46] [PASSED] Buffer object in system domain, no page vector
[19:10:46] [PASSED] Buffer object in system domain with an existing page vector
[19:10:46] ====== [PASSED] ttm_bo_validate_no_placement_signaled ======
[19:10:46] ======== ttm_bo_validate_no_placement_not_signaled  ========
[19:10:46] [PASSED] Buffer object for userspace
[19:10:46] [PASSED] Kernel buffer object
[19:10:46] [PASSED] Shared buffer object
[19:10:46] ==== [PASSED] ttm_bo_validate_no_placement_not_signaled ====
[19:10:46] [PASSED] ttm_bo_validate_move_fence_signaled
[19:10:46] ========= ttm_bo_validate_move_fence_not_signaled  =========
[19:10:46] [PASSED] Waits for GPU
[19:10:46] [PASSED] Tries to lock straight away
[19:10:46] ===== [PASSED] ttm_bo_validate_move_fence_not_signaled =====
[19:10:46] [PASSED] ttm_bo_validate_happy_evict
[19:10:46] [PASSED] ttm_bo_validate_all_pinned_evict
[19:10:46] [PASSED] ttm_bo_validate_allowed_only_evict
[19:10:46] [PASSED] ttm_bo_validate_deleted_evict
[19:10:46] [PASSED] ttm_bo_validate_busy_domain_evict
[19:10:46] [PASSED] ttm_bo_validate_evict_gutting
[19:10:46] [PASSED] ttm_bo_validate_recrusive_evict
stty: 'standard input': Inappropriate ioctl for device
[19:10:46] ================= [PASSED] ttm_bo_validate =================
[19:10:46] ============================================================
[19:10:46] Testing complete. Ran 101 tests: passed: 101
[19:10:46] Elapsed time: 10.013s total, 1.751s configuring, 8.046s building, 0.185s running

+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel



^ permalink raw reply	[flat|nested] 23+ messages in thread

* ✓ Xe.CI.BAT: success for Add TLB invalidation abstraction (rev9)
  2025-08-25 17:57 [PATCH 0/9] Add TLB invalidation abstraction Stuart Summers
                   ` (10 preceding siblings ...)
  2025-08-25 19:10 ` ✓ CI.KUnit: success " Patchwork
@ 2025-08-25 20:09 ` Patchwork
  2025-08-26  6:18 ` ✓ Xe.CI.Full: " Patchwork
  12 siblings, 0 replies; 23+ messages in thread
From: Patchwork @ 2025-08-25 20:09 UTC (permalink / raw)
  To: Summers, Stuart; +Cc: intel-xe

[-- Attachment #1: Type: text/plain, Size: 2802 bytes --]

== Series Details ==

Series: Add TLB invalidation abstraction (rev9)
URL   : https://patchwork.freedesktop.org/series/152022/
State : success

== Summary ==

CI Bug Log - changes from xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb_BAT -> xe-pw-152022v9_BAT
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  

Participating hosts (11 -> 11)
------------------------------

  No changes in participating hosts

Known issues
------------

  Here are the changes found in xe-pw-152022v9_BAT that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@kms_flip@basic-plain-flip@c-edp1:
    - bat-adlp-7:         [PASS][1] -> [DMESG-WARN][2] ([Intel XE#4543])
   [1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/bat-adlp-7/igt@kms_flip@basic-plain-flip@c-edp1.html
   [2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/bat-adlp-7/igt@kms_flip@basic-plain-flip@c-edp1.html

  
#### Possible fixes ####

  * igt@kms_flip@basic-plain-flip@a-edp1:
    - bat-adlp-7:         [DMESG-WARN][3] ([Intel XE#4543]) -> [PASS][4]
   [3]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/bat-adlp-7/igt@kms_flip@basic-plain-flip@a-edp1.html
   [4]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/bat-adlp-7/igt@kms_flip@basic-plain-flip@a-edp1.html

  * igt@xe_vm@bind-execqueues-independent:
    - {bat-ptl-vm}:       [FAIL][5] ([Intel XE#5783]) -> [PASS][6]
   [5]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/bat-ptl-vm/igt@xe_vm@bind-execqueues-independent.html
   [6]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/bat-ptl-vm/igt@xe_vm@bind-execqueues-independent.html
    - {bat-ptl-2}:        [FAIL][7] ([Intel XE#5783]) -> [PASS][8]
   [7]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/bat-ptl-2/igt@xe_vm@bind-execqueues-independent.html
   [8]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/bat-ptl-2/igt@xe_vm@bind-execqueues-independent.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [Intel XE#4543]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4543
  [Intel XE#5783]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5783


Build changes
-------------

  * Linux: xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb -> xe-pw-152022v9

  IGT_8507: 8507
  xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb: a7c735c1739662dcc431bda50653821bff0d63fb
  xe-pw-152022v9: 152022v9

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/index.html

[-- Attachment #2: Type: text/html, Size: 3586 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* ✓ Xe.CI.Full: success for Add TLB invalidation abstraction (rev9)
  2025-08-25 17:57 [PATCH 0/9] Add TLB invalidation abstraction Stuart Summers
                   ` (11 preceding siblings ...)
  2025-08-25 20:09 ` ✓ Xe.CI.BAT: " Patchwork
@ 2025-08-26  6:18 ` Patchwork
  12 siblings, 0 replies; 23+ messages in thread
From: Patchwork @ 2025-08-26  6:18 UTC (permalink / raw)
  To: Summers, Stuart; +Cc: intel-xe

[-- Attachment #1: Type: text/plain, Size: 99744 bytes --]

== Series Details ==

Series: Add TLB invalidation abstraction (rev9)
URL   : https://patchwork.freedesktop.org/series/152022/
State : success

== Summary ==

CI Bug Log - changes from xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb_FULL -> xe-pw-152022v9_FULL
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  

Participating hosts (4 -> 4)
------------------------------

  No changes in participating hosts

Known issues
------------

  Here are the changes found in xe-pw-152022v9_FULL that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@core_getversion@all-cards:
    - shard-dg2-set2:     [PASS][1] -> [FAIL][2] ([Intel XE#4208])
   [1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-466/igt@core_getversion@all-cards.html
   [2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@core_getversion@all-cards.html

  * igt@fbdev@write:
    - shard-dg2-set2:     [PASS][3] -> [SKIP][4] ([Intel XE#2134]) +1 other test skip
   [3]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-464/igt@fbdev@write.html
   [4]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@fbdev@write.html

  * igt@kms_atomic_transition@modeset-transition-nonblocking-fencing:
    - shard-dg2-set2:     [PASS][5] -> [SKIP][6] ([Intel XE#4208] / [i915#2575]) +97 other tests skip
   [5]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-432/igt@kms_atomic_transition@modeset-transition-nonblocking-fencing.html
   [6]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@kms_atomic_transition@modeset-transition-nonblocking-fencing.html

  * igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-0-async-flip:
    - shard-adlp:         NOTRUN -> [SKIP][7] ([Intel XE#1124])
   [7]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-adlp-3/igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-0-async-flip.html

  * igt@kms_big_fb@y-tiled-32bpp-rotate-270:
    - shard-dg2-set2:     NOTRUN -> [SKIP][8] ([Intel XE#1124]) +1 other test skip
   [8]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-466/igt@kms_big_fb@y-tiled-32bpp-rotate-270.html

  * igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-0-hflip-async-flip:
    - shard-adlp:         [PASS][9] -> [DMESG-FAIL][10] ([Intel XE#4543]) +3 other tests dmesg-fail
   [9]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-adlp-3/igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-0-hflip-async-flip.html
   [10]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-adlp-1/igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-0-hflip-async-flip.html

  * igt@kms_big_fb@yf-tiled-addfb:
    - shard-adlp:         NOTRUN -> [SKIP][11] ([Intel XE#619])
   [11]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-adlp-3/igt@kms_big_fb@yf-tiled-addfb.html

  * igt@kms_bw@linear-tiling-4-displays-2560x1440p:
    - shard-dg2-set2:     NOTRUN -> [SKIP][12] ([Intel XE#367])
   [12]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-466/igt@kms_bw@linear-tiling-4-displays-2560x1440p.html

  * igt@kms_ccs@bad-aux-stride-4-tiled-mtl-rc-ccs@pipe-a-dp-2:
    - shard-dg2-set2:     NOTRUN -> [SKIP][13] ([Intel XE#787]) +181 other tests skip
   [13]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-432/igt@kms_ccs@bad-aux-stride-4-tiled-mtl-rc-ccs@pipe-a-dp-2.html

  * igt@kms_ccs@crc-primary-basic-yf-tiled-ccs@pipe-d-dp-2:
    - shard-dg2-set2:     NOTRUN -> [SKIP][14] ([Intel XE#455] / [Intel XE#787]) +26 other tests skip
   [14]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-432/igt@kms_ccs@crc-primary-basic-yf-tiled-ccs@pipe-d-dp-2.html

  * igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs@pipe-a-hdmi-a-6:
    - shard-dg2-set2:     [PASS][15] -> [DMESG-WARN][16] ([Intel XE#1727] / [Intel XE#3113])
   [15]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-463/igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs@pipe-a-hdmi-a-6.html
   [16]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-464/igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs@pipe-a-hdmi-a-6.html

  * igt@kms_ccs@random-ccs-data-4-tiled-mtl-rc-ccs@pipe-b-hdmi-a-1:
    - shard-adlp:         NOTRUN -> [SKIP][17] ([Intel XE#787]) +2 other tests skip
   [17]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-adlp-3/igt@kms_ccs@random-ccs-data-4-tiled-mtl-rc-ccs@pipe-b-hdmi-a-1.html

  * igt@kms_ccs@random-ccs-data-4-tiled-mtl-rc-ccs@pipe-d-hdmi-a-1:
    - shard-adlp:         NOTRUN -> [SKIP][18] ([Intel XE#455] / [Intel XE#787]) +1 other test skip
   [18]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-adlp-3/igt@kms_ccs@random-ccs-data-4-tiled-mtl-rc-ccs@pipe-d-hdmi-a-1.html

  * igt@kms_chamelium_edid@hdmi-edid-change-during-hibernate:
    - shard-dg2-set2:     NOTRUN -> [SKIP][19] ([Intel XE#373])
   [19]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-466/igt@kms_chamelium_edid@hdmi-edid-change-during-hibernate.html

  * igt@kms_content_protection@srm@pipe-a-dp-2:
    - shard-bmg:          NOTRUN -> [FAIL][20] ([Intel XE#1178]) +1 other test fail
   [20]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-bmg-7/igt@kms_content_protection@srm@pipe-a-dp-2.html

  * igt@kms_cursor_legacy@cursora-vs-flipb-atomic:
    - shard-bmg:          [PASS][21] -> [SKIP][22] ([Intel XE#2291]) +1 other test skip
   [21]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-bmg-3/igt@kms_cursor_legacy@cursora-vs-flipb-atomic.html
   [22]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-bmg-6/igt@kms_cursor_legacy@cursora-vs-flipb-atomic.html

  * igt@kms_cursor_legacy@torture-bo:
    - shard-adlp:         [PASS][23] -> [DMESG-WARN][24] ([Intel XE#2953] / [Intel XE#4173]) +6 other tests dmesg-warn
   [23]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-adlp-1/igt@kms_cursor_legacy@torture-bo.html
   [24]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-adlp-2/igt@kms_cursor_legacy@torture-bo.html

  * igt@kms_dp_aux_dev:
    - shard-bmg:          [PASS][25] -> [SKIP][26] ([Intel XE#3009])
   [25]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-bmg-2/igt@kms_dp_aux_dev.html
   [26]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-bmg-6/igt@kms_dp_aux_dev.html

  * igt@kms_fbcon_fbt@psr:
    - shard-dg2-set2:     NOTRUN -> [SKIP][27] ([Intel XE#776])
   [27]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-466/igt@kms_fbcon_fbt@psr.html

  * igt@kms_feature_discovery@psr1:
    - shard-dg2-set2:     NOTRUN -> [SKIP][28] ([Intel XE#1135])
   [28]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-466/igt@kms_feature_discovery@psr1.html

  * igt@kms_flip@2x-flip-vs-dpms-on-nop:
    - shard-bmg:          [PASS][29] -> [SKIP][30] ([Intel XE#2316]) +3 other tests skip
   [29]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-bmg-3/igt@kms_flip@2x-flip-vs-dpms-on-nop.html
   [30]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-bmg-6/igt@kms_flip@2x-flip-vs-dpms-on-nop.html

  * igt@kms_flip@flip-vs-expired-vblank@a-edp1:
    - shard-lnl:          [PASS][31] -> [FAIL][32] ([Intel XE#301]) +3 other tests fail
   [31]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-lnl-7/igt@kms_flip@flip-vs-expired-vblank@a-edp1.html
   [32]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-lnl-8/igt@kms_flip@flip-vs-expired-vblank@a-edp1.html

  * igt@kms_flip@flip-vs-suspend-interruptible:
    - shard-bmg:          [PASS][33] -> [INCOMPLETE][34] ([Intel XE#2049] / [Intel XE#2597]) +1 other test incomplete
   [33]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-bmg-7/igt@kms_flip@flip-vs-suspend-interruptible.html
   [34]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-bmg-3/igt@kms_flip@flip-vs-suspend-interruptible.html

  * igt@kms_flip@flip-vs-suspend@b-hdmi-a1:
    - shard-adlp:         [PASS][35] -> [DMESG-WARN][36] ([Intel XE#4543]) +1 other test dmesg-warn
   [35]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-adlp-3/igt@kms_flip@flip-vs-suspend@b-hdmi-a1.html
   [36]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-adlp-1/igt@kms_flip@flip-vs-suspend@b-hdmi-a1.html

  * igt@kms_flip_scaled_crc@flip-32bpp-yftileccs-to-64bpp-yftile-downscaling@pipe-a-valid-mode:
    - shard-adlp:         NOTRUN -> [SKIP][37] ([Intel XE#455]) +2 other tests skip
   [37]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-adlp-3/igt@kms_flip_scaled_crc@flip-32bpp-yftileccs-to-64bpp-yftile-downscaling@pipe-a-valid-mode.html

  * igt@kms_flip_scaled_crc@flip-64bpp-yftile-to-16bpp-yftile-upscaling@pipe-a-valid-mode:
    - shard-dg2-set2:     NOTRUN -> [SKIP][38] ([Intel XE#455]) +9 other tests skip
   [38]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-436/igt@kms_flip_scaled_crc@flip-64bpp-yftile-to-16bpp-yftile-upscaling@pipe-a-valid-mode.html

  * igt@kms_flip_tiling@flip-change-tiling@pipe-c-hdmi-a-1-y-to-x:
    - shard-adlp:         [PASS][39] -> [FAIL][40] ([Intel XE#1874]) +1 other test fail
   [39]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-adlp-9/igt@kms_flip_tiling@flip-change-tiling@pipe-c-hdmi-a-1-y-to-x.html
   [40]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-adlp-4/igt@kms_flip_tiling@flip-change-tiling@pipe-c-hdmi-a-1-y-to-x.html

  * igt@kms_frontbuffer_tracking@drrs-1p-primscrn-spr-indfb-fullscreen:
    - shard-dg2-set2:     NOTRUN -> [SKIP][41] ([Intel XE#651]) +3 other tests skip
   [41]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-466/igt@kms_frontbuffer_tracking@drrs-1p-primscrn-spr-indfb-fullscreen.html

  * igt@kms_frontbuffer_tracking@drrs-1p-rte:
    - shard-adlp:         NOTRUN -> [SKIP][42] ([Intel XE#651]) +2 other tests skip
   [42]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-adlp-3/igt@kms_frontbuffer_tracking@drrs-1p-rte.html

  * igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-onoff:
    - shard-dg2-set2:     [PASS][43] -> [SKIP][44] ([Intel XE#2351] / [Intel XE#4208]) +12 other tests skip
   [43]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-432/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-onoff.html
   [44]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-onoff.html

  * igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-cur-indfb-move:
    - shard-adlp:         NOTRUN -> [SKIP][45] ([Intel XE#656])
   [45]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-adlp-3/igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-cur-indfb-move.html

  * igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-shrfb-msflip-blt:
    - shard-adlp:         NOTRUN -> [SKIP][46] ([Intel XE#653])
   [46]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-adlp-3/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-shrfb-msflip-blt.html

  * igt@kms_frontbuffer_tracking@psr-2p-primscrn-shrfb-pgflip-blt:
    - shard-dg2-set2:     NOTRUN -> [SKIP][47] ([Intel XE#653]) +4 other tests skip
   [47]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-466/igt@kms_frontbuffer_tracking@psr-2p-primscrn-shrfb-pgflip-blt.html

  * igt@kms_hdr@static-toggle-suspend:
    - shard-bmg:          [PASS][48] -> [SKIP][49] ([Intel XE#1503]) +1 other test skip
   [48]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-bmg-2/igt@kms_hdr@static-toggle-suspend.html
   [49]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-bmg-6/igt@kms_hdr@static-toggle-suspend.html

  * igt@kms_joiner@invalid-modeset-ultra-joiner:
    - shard-adlp:         NOTRUN -> [SKIP][50] ([Intel XE#2927])
   [50]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-adlp-3/igt@kms_joiner@invalid-modeset-ultra-joiner.html

  * igt@kms_plane_cursor@primary@pipe-a-hdmi-a-6-size-256:
    - shard-dg2-set2:     NOTRUN -> [FAIL][51] ([Intel XE#616]) +2 other tests fail
   [51]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-433/igt@kms_plane_cursor@primary@pipe-a-hdmi-a-6-size-256.html

  * igt@kms_plane_multiple@tiling-yf:
    - shard-dg2-set2:     NOTRUN -> [SKIP][52] ([Intel XE#5020])
   [52]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-466/igt@kms_plane_multiple@tiling-yf.html

  * igt@kms_pm_rpm@universal-planes:
    - shard-adlp:         [PASS][53] -> [DMESG-WARN][54] ([Intel XE#2953] / [Intel XE#4173] / [Intel XE#5750])
   [53]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-adlp-3/igt@kms_pm_rpm@universal-planes.html
   [54]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-adlp-3/igt@kms_pm_rpm@universal-planes.html

  * igt@kms_psr2_sf@pr-overlay-plane-move-continuous-sf:
    - shard-dg2-set2:     NOTRUN -> [SKIP][55] ([Intel XE#1406] / [Intel XE#1489])
   [55]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-466/igt@kms_psr2_sf@pr-overlay-plane-move-continuous-sf.html

  * igt@kms_psr@fbc-pr-basic:
    - shard-adlp:         NOTRUN -> [SKIP][56] ([Intel XE#1406] / [Intel XE#2850] / [Intel XE#929])
   [56]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-adlp-3/igt@kms_psr@fbc-pr-basic.html

  * igt@kms_psr@fbc-psr2-no-drrs:
    - shard-dg2-set2:     NOTRUN -> [SKIP][57] ([Intel XE#1406] / [Intel XE#2850] / [Intel XE#929]) +2 other tests skip
   [57]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-466/igt@kms_psr@fbc-psr2-no-drrs.html

  * igt@kms_rotation_crc@sprite-rotation-270:
    - shard-dg2-set2:     NOTRUN -> [SKIP][58] ([Intel XE#3414])
   [58]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-466/igt@kms_rotation_crc@sprite-rotation-270.html

  * igt@kms_vrr@cmrr@pipe-a-edp-1:
    - shard-lnl:          [PASS][59] -> [FAIL][60] ([Intel XE#4459]) +1 other test fail
   [59]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-lnl-8/igt@kms_vrr@cmrr@pipe-a-edp-1.html
   [60]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-lnl-1/igt@kms_vrr@cmrr@pipe-a-edp-1.html

  * igt@xe_compute_preempt@compute-threadgroup-preempt@engine-drm_xe_engine_class_compute:
    - shard-dg2-set2:     NOTRUN -> [SKIP][61] ([Intel XE#1280] / [Intel XE#455]) +1 other test skip
   [61]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-466/igt@xe_compute_preempt@compute-threadgroup-preempt@engine-drm_xe_engine_class_compute.html

  * igt@xe_eu_stall@invalid-sampling-rate:
    - shard-dg2-set2:     NOTRUN -> [SKIP][62] ([Intel XE#5626])
   [62]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-466/igt@xe_eu_stall@invalid-sampling-rate.html

  * igt@xe_eudebug@basic-vm-bind-metadata-discovery:
    - shard-adlp:         NOTRUN -> [SKIP][63] ([Intel XE#4837] / [Intel XE#5565])
   [63]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-adlp-3/igt@xe_eudebug@basic-vm-bind-metadata-discovery.html

  * igt@xe_eudebug_online@writes-caching-sram-bb-vram-target-vram:
    - shard-dg2-set2:     NOTRUN -> [SKIP][64] ([Intel XE#4837]) +1 other test skip
   [64]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-466/igt@xe_eudebug_online@writes-caching-sram-bb-vram-target-vram.html

  * igt@xe_exec_balancer@once-parallel-rebind:
    - shard-dg2-set2:     [PASS][65] -> [SKIP][66] ([Intel XE#4208]) +233 other tests skip
   [65]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-464/igt@xe_exec_balancer@once-parallel-rebind.html
   [66]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@xe_exec_balancer@once-parallel-rebind.html

  * igt@xe_exec_basic@multigpu-many-execqueues-many-vm-null-defer-bind:
    - shard-adlp:         NOTRUN -> [SKIP][67] ([Intel XE#1392] / [Intel XE#5575])
   [67]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-adlp-3/igt@xe_exec_basic@multigpu-many-execqueues-many-vm-null-defer-bind.html

  * igt@xe_exec_basic@multigpu-once-bindexecqueue-userptr-rebind:
    - shard-dg2-set2:     [PASS][68] -> [SKIP][69] ([Intel XE#1392])
   [68]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-433/igt@xe_exec_basic@multigpu-once-bindexecqueue-userptr-rebind.html
   [69]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-432/igt@xe_exec_basic@multigpu-once-bindexecqueue-userptr-rebind.html

  * igt@xe_exec_fault_mode@many-userptr-invalidate-race:
    - shard-adlp:         NOTRUN -> [SKIP][70] ([Intel XE#288] / [Intel XE#5561]) +1 other test skip
   [70]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-adlp-3/igt@xe_exec_fault_mode@many-userptr-invalidate-race.html

  * igt@xe_exec_fault_mode@twice-bindexecqueue-userptr-imm:
    - shard-dg2-set2:     NOTRUN -> [SKIP][71] ([Intel XE#288]) +3 other tests skip
   [71]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-466/igt@xe_exec_fault_mode@twice-bindexecqueue-userptr-imm.html

  * igt@xe_exec_mix_modes@exec-simple-batch-store-dma-fence:
    - shard-dg2-set2:     NOTRUN -> [SKIP][72] ([Intel XE#2360])
   [72]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-466/igt@xe_exec_mix_modes@exec-simple-batch-store-dma-fence.html

  * igt@xe_exec_system_allocator@many-stride-mmap-new-huge:
    - shard-adlp:         NOTRUN -> [SKIP][73] ([Intel XE#4915]) +16 other tests skip
   [73]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-adlp-3/igt@xe_exec_system_allocator@many-stride-mmap-new-huge.html

  * igt@xe_exec_system_allocator@threads-many-free-race-nomemset:
    - shard-dg2-set2:     NOTRUN -> [SKIP][74] ([Intel XE#4915]) +40 other tests skip
   [74]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-466/igt@xe_exec_system_allocator@threads-many-free-race-nomemset.html

  * igt@xe_fault_injection@probe-fail-guc-xe_guc_mmio_send_recv:
    - shard-dg2-set2:     [PASS][75] -> [DMESG-WARN][76] ([Intel XE#5893])
   [75]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-432/igt@xe_fault_injection@probe-fail-guc-xe_guc_mmio_send_recv.html
   [76]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-433/igt@xe_fault_injection@probe-fail-guc-xe_guc_mmio_send_recv.html

  * igt@xe_live_ktest@xe_bo:
    - shard-dg2-set2:     [PASS][77] -> [SKIP][78] ([Intel XE#2229] / [Intel XE#455]) +1 other test skip
   [77]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-432/igt@xe_live_ktest@xe_bo.html
   [78]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@xe_live_ktest@xe_bo.html

  * igt@xe_live_ktest@xe_bo@xe_ccs_migrate_kunit:
    - shard-dg2-set2:     [PASS][79] -> [SKIP][80] ([Intel XE#2229])
   [79]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-432/igt@xe_live_ktest@xe_bo@xe_ccs_migrate_kunit.html
   [80]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@xe_live_ktest@xe_bo@xe_ccs_migrate_kunit.html

  * igt@xe_module_load@reload-no-display:
    - shard-dg2-set2:     [PASS][81] -> [ABORT][82] ([Intel XE#5087])
   [81]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-433/igt@xe_module_load@reload-no-display.html
   [82]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-432/igt@xe_module_load@reload-no-display.html

  * igt@xe_render_copy@render-stress-2-copies:
    - shard-dg2-set2:     NOTRUN -> [SKIP][83] ([Intel XE#4814])
   [83]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-466/igt@xe_render_copy@render-stress-2-copies.html

  
#### Possible fixes ####

  * igt@core_setmaster@master-drop-set-user:
    - shard-dg2-set2:     [FAIL][84] ([Intel XE#4208]) -> [PASS][85] +1 other test pass
   [84]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@core_setmaster@master-drop-set-user.html
   [85]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-436/igt@core_setmaster@master-drop-set-user.html

  * igt@fbdev@info:
    - shard-dg2-set2:     [SKIP][86] ([Intel XE#2134]) -> [PASS][87] +1 other test pass
   [86]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@fbdev@info.html
   [87]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-432/igt@fbdev@info.html

  * igt@kms_ccs@crc-sprite-planes-basic-4-tiled-bmg-ccs@pipe-a-dp-2:
    - shard-bmg:          [FAIL][88] ([Intel XE#5376]) -> [PASS][89] +2 other tests pass
   [88]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-bmg-5/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-bmg-ccs@pipe-a-dp-2.html
   [89]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-bmg-2/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-bmg-ccs@pipe-a-dp-2.html

  * igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs:
    - shard-dg2-set2:     [INCOMPLETE][90] ([Intel XE#2705] / [Intel XE#4212] / [Intel XE#4345]) -> [PASS][91]
   [90]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-436/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs.html
   [91]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-466/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs.html

  * igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs@pipe-c-dp-4:
    - shard-dg2-set2:     [INCOMPLETE][92] ([Intel XE#2705] / [Intel XE#4212]) -> [PASS][93]
   [92]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-436/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs@pipe-c-dp-4.html
   [93]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-466/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs@pipe-c-dp-4.html

  * igt@kms_cursor_crc@cursor-sliding-64x64:
    - shard-dg2-set2:     [SKIP][94] ([Intel XE#4208] / [i915#2575]) -> [PASS][95] +141 other tests pass
   [94]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@kms_cursor_crc@cursor-sliding-64x64.html
   [95]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-436/igt@kms_cursor_crc@cursor-sliding-64x64.html

  * igt@kms_cursor_legacy@2x-flip-vs-cursor-legacy:
    - shard-bmg:          [SKIP][96] ([Intel XE#2291]) -> [PASS][97] +3 other tests pass
   [96]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-bmg-6/igt@kms_cursor_legacy@2x-flip-vs-cursor-legacy.html
   [97]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-bmg-1/igt@kms_cursor_legacy@2x-flip-vs-cursor-legacy.html

  * igt@kms_cursor_legacy@cursora-vs-flipb-atomic-transitions-varying-size:
    - shard-bmg:          [DMESG-WARN][98] ([Intel XE#5354]) -> [PASS][99]
   [98]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-bmg-5/igt@kms_cursor_legacy@cursora-vs-flipb-atomic-transitions-varying-size.html
   [99]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-bmg-2/igt@kms_cursor_legacy@cursora-vs-flipb-atomic-transitions-varying-size.html

  * igt@kms_cursor_legacy@flip-vs-cursor-legacy:
    - shard-bmg:          [FAIL][100] ([Intel XE#5299]) -> [PASS][101]
   [100]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-bmg-8/igt@kms_cursor_legacy@flip-vs-cursor-legacy.html
   [101]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-bmg-5/igt@kms_cursor_legacy@flip-vs-cursor-legacy.html

  * igt@kms_flip@2x-wf_vblank-ts-check-interruptible:
    - shard-bmg:          [SKIP][102] ([Intel XE#2316]) -> [PASS][103] +6 other tests pass
   [102]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-bmg-6/igt@kms_flip@2x-wf_vblank-ts-check-interruptible.html
   [103]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-bmg-1/igt@kms_flip@2x-wf_vblank-ts-check-interruptible.html

  * igt@kms_flip@flip-vs-rmfb:
    - shard-adlp:         [DMESG-WARN][104] ([Intel XE#4543] / [Intel XE#5208]) -> [PASS][105]
   [104]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-adlp-2/igt@kms_flip@flip-vs-rmfb.html
   [105]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-adlp-8/igt@kms_flip@flip-vs-rmfb.html

  * igt@kms_flip@flip-vs-rmfb-interruptible@b-hdmi-a1:
    - shard-adlp:         [DMESG-WARN][106] ([Intel XE#4543]) -> [PASS][107] +5 other tests pass
   [106]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-adlp-3/igt@kms_flip@flip-vs-rmfb-interruptible@b-hdmi-a1.html
   [107]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-adlp-1/igt@kms_flip@flip-vs-rmfb-interruptible@b-hdmi-a1.html

  * igt@kms_flip@flip-vs-suspend-interruptible:
    - shard-adlp:         [DMESG-WARN][108] ([Intel XE#2953] / [Intel XE#4173]) -> [PASS][109] +5 other tests pass
   [108]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-adlp-2/igt@kms_flip@flip-vs-suspend-interruptible.html
   [109]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-adlp-9/igt@kms_flip@flip-vs-suspend-interruptible.html

  * igt@kms_flip_scaled_crc@flip-32bpp-xtile-to-64bpp-xtile-upscaling:
    - shard-dg2-set2:     [SKIP][110] ([Intel XE#2351] / [Intel XE#4208]) -> [PASS][111] +16 other tests pass
   [110]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@kms_flip_scaled_crc@flip-32bpp-xtile-to-64bpp-xtile-upscaling.html
   [111]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-466/igt@kms_flip_scaled_crc@flip-32bpp-xtile-to-64bpp-xtile-upscaling.html

  * igt@kms_plane_scaling@2x-scaler-multi-pipe:
    - shard-bmg:          [SKIP][112] ([Intel XE#2571]) -> [PASS][113]
   [112]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-bmg-6/igt@kms_plane_scaling@2x-scaler-multi-pipe.html
   [113]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-bmg-2/igt@kms_plane_scaling@2x-scaler-multi-pipe.html

  * igt@kms_vblank@ts-continuation-suspend:
    - shard-adlp:         [INCOMPLETE][114] ([Intel XE#4488] / [Intel XE#5545]) -> [PASS][115]
   [114]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-adlp-4/igt@kms_vblank@ts-continuation-suspend.html
   [115]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-adlp-3/igt@kms_vblank@ts-continuation-suspend.html

  * igt@kms_vblank@ts-continuation-suspend@pipe-a-hdmi-a-1:
    - shard-adlp:         [DMESG-FAIL][116] ([Intel XE#5545]) -> [PASS][117]
   [116]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-adlp-4/igt@kms_vblank@ts-continuation-suspend@pipe-a-hdmi-a-1.html
   [117]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-adlp-3/igt@kms_vblank@ts-continuation-suspend@pipe-a-hdmi-a-1.html

  * igt@kms_vblank@ts-continuation-suspend@pipe-d-hdmi-a-1:
    - shard-adlp:         [INCOMPLETE][118] ([Intel XE#4488]) -> [PASS][119]
   [118]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-adlp-4/igt@kms_vblank@ts-continuation-suspend@pipe-d-hdmi-a-1.html
   [119]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-adlp-3/igt@kms_vblank@ts-continuation-suspend@pipe-d-hdmi-a-1.html

  * igt@xe_exec_basic@multigpu-no-exec-basic-defer-mmap:
    - shard-dg2-set2:     [SKIP][120] ([Intel XE#1392]) -> [PASS][121]
   [120]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-432/igt@xe_exec_basic@multigpu-no-exec-basic-defer-mmap.html
   [121]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-433/igt@xe_exec_basic@multigpu-no-exec-basic-defer-mmap.html

  * igt@xe_exec_basic@no-exec-basic-defer-bind:
    - shard-dg2-set2:     [SKIP][122] ([Intel XE#4208]) -> [PASS][123] +268 other tests pass
   [122]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@xe_exec_basic@no-exec-basic-defer-bind.html
   [123]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-466/igt@xe_exec_basic@no-exec-basic-defer-bind.html

  * igt@xe_exec_reset@parallel-gt-reset:
    - shard-adlp:         [DMESG-WARN][124] ([Intel XE#3876]) -> [PASS][125]
   [124]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-adlp-4/igt@xe_exec_reset@parallel-gt-reset.html
   [125]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-adlp-3/igt@xe_exec_reset@parallel-gt-reset.html

  * igt@xe_module_load@many-reload:
    - shard-adlp:         [DMESG-WARN][126] ([Intel XE#5244]) -> [PASS][127]
   [126]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-adlp-4/igt@xe_module_load@many-reload.html
   [127]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-adlp-8/igt@xe_module_load@many-reload.html

  
#### Warnings ####

  * igt@kms_big_fb@4-tiled-32bpp-rotate-90:
    - shard-dg2-set2:     [SKIP][128] ([Intel XE#316]) -> [SKIP][129] ([Intel XE#2351] / [Intel XE#4208])
   [128]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-466/igt@kms_big_fb@4-tiled-32bpp-rotate-90.html
   [129]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@kms_big_fb@4-tiled-32bpp-rotate-90.html

  * igt@kms_big_fb@linear-32bpp-rotate-270:
    - shard-dg2-set2:     [SKIP][130] ([Intel XE#316]) -> [SKIP][131] ([Intel XE#4208]) +2 other tests skip
   [130]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-432/igt@kms_big_fb@linear-32bpp-rotate-270.html
   [131]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@kms_big_fb@linear-32bpp-rotate-270.html

  * igt@kms_big_fb@x-tiled-64bpp-rotate-90:
    - shard-dg2-set2:     [SKIP][132] ([Intel XE#4208]) -> [SKIP][133] ([Intel XE#316]) +1 other test skip
   [132]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@kms_big_fb@x-tiled-64bpp-rotate-90.html
   [133]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-436/igt@kms_big_fb@x-tiled-64bpp-rotate-90.html

  * igt@kms_big_fb@x-tiled-8bpp-rotate-90:
    - shard-dg2-set2:     [SKIP][134] ([Intel XE#2351] / [Intel XE#4208]) -> [SKIP][135] ([Intel XE#316]) +3 other tests skip
   [134]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@kms_big_fb@x-tiled-8bpp-rotate-90.html
   [135]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-466/igt@kms_big_fb@x-tiled-8bpp-rotate-90.html

  * igt@kms_big_fb@y-tiled-addfb:
    - shard-dg2-set2:     [SKIP][136] ([Intel XE#619]) -> [SKIP][137] ([Intel XE#2351] / [Intel XE#4208])
   [136]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-435/igt@kms_big_fb@y-tiled-addfb.html
   [137]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@kms_big_fb@y-tiled-addfb.html

  * igt@kms_big_fb@y-tiled-addfb-size-offset-overflow:
    - shard-dg2-set2:     [SKIP][138] ([Intel XE#2351] / [Intel XE#4208]) -> [SKIP][139] ([Intel XE#607])
   [138]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@kms_big_fb@y-tiled-addfb-size-offset-overflow.html
   [139]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-463/igt@kms_big_fb@y-tiled-addfb-size-offset-overflow.html

  * igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-0-async-flip:
    - shard-dg2-set2:     [SKIP][140] ([Intel XE#1124]) -> [SKIP][141] ([Intel XE#2351] / [Intel XE#4208]) +3 other tests skip
   [140]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-432/igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-0-async-flip.html
   [141]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-0-async-flip.html

  * igt@kms_big_fb@yf-tiled-32bpp-rotate-180:
    - shard-dg2-set2:     [SKIP][142] ([Intel XE#4208]) -> [SKIP][143] ([Intel XE#1124]) +6 other tests skip
   [142]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@kms_big_fb@yf-tiled-32bpp-rotate-180.html
   [143]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-463/igt@kms_big_fb@yf-tiled-32bpp-rotate-180.html

  * igt@kms_big_fb@yf-tiled-64bpp-rotate-180:
    - shard-dg2-set2:     [SKIP][144] ([Intel XE#2351] / [Intel XE#4208]) -> [SKIP][145] ([Intel XE#1124]) +7 other tests skip
   [144]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@kms_big_fb@yf-tiled-64bpp-rotate-180.html
   [145]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-436/igt@kms_big_fb@yf-tiled-64bpp-rotate-180.html

  * igt@kms_big_fb@yf-tiled-addfb:
    - shard-dg2-set2:     [SKIP][146] ([Intel XE#2351] / [Intel XE#4208]) -> [SKIP][147] ([Intel XE#619])
   [146]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@kms_big_fb@yf-tiled-addfb.html
   [147]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-432/igt@kms_big_fb@yf-tiled-addfb.html

  * igt@kms_big_fb@yf-tiled-addfb-size-overflow:
    - shard-dg2-set2:     [SKIP][148] ([Intel XE#610]) -> [SKIP][149] ([Intel XE#4208])
   [148]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-464/igt@kms_big_fb@yf-tiled-addfb-size-overflow.html
   [149]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@kms_big_fb@yf-tiled-addfb-size-overflow.html

  * igt@kms_big_fb@yf-tiled-max-hw-stride-32bpp-rotate-0-async-flip:
    - shard-dg2-set2:     [SKIP][150] ([Intel XE#1124]) -> [SKIP][151] ([Intel XE#4208]) +8 other tests skip
   [150]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-432/igt@kms_big_fb@yf-tiled-max-hw-stride-32bpp-rotate-0-async-flip.html
   [151]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@kms_big_fb@yf-tiled-max-hw-stride-32bpp-rotate-0-async-flip.html

  * igt@kms_bw@connected-linear-tiling-3-displays-1920x1080p:
    - shard-dg2-set2:     [SKIP][152] ([Intel XE#4208] / [i915#2575]) -> [SKIP][153] ([Intel XE#2191])
   [152]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@kms_bw@connected-linear-tiling-3-displays-1920x1080p.html
   [153]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-466/igt@kms_bw@connected-linear-tiling-3-displays-1920x1080p.html

  * igt@kms_bw@connected-linear-tiling-4-displays-2560x1440p:
    - shard-dg2-set2:     [SKIP][154] ([Intel XE#2191]) -> [SKIP][155] ([Intel XE#4208] / [i915#2575]) +1 other test skip
   [154]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-464/igt@kms_bw@connected-linear-tiling-4-displays-2560x1440p.html
   [155]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@kms_bw@connected-linear-tiling-4-displays-2560x1440p.html

  * igt@kms_bw@linear-tiling-1-displays-1920x1080p:
    - shard-dg2-set2:     [SKIP][156] ([Intel XE#367]) -> [SKIP][157] ([Intel XE#4208] / [i915#2575]) +2 other tests skip
   [156]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-432/igt@kms_bw@linear-tiling-1-displays-1920x1080p.html
   [157]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@kms_bw@linear-tiling-1-displays-1920x1080p.html

  * igt@kms_bw@linear-tiling-3-displays-2160x1440p:
    - shard-dg2-set2:     [SKIP][158] ([Intel XE#4208] / [i915#2575]) -> [SKIP][159] ([Intel XE#367]) +3 other tests skip
   [158]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@kms_bw@linear-tiling-3-displays-2160x1440p.html
   [159]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-463/igt@kms_bw@linear-tiling-3-displays-2160x1440p.html

  * igt@kms_ccs@bad-aux-stride-4-tiled-mtl-rc-ccs-cc:
    - shard-dg2-set2:     [SKIP][160] ([Intel XE#2351] / [Intel XE#4208]) -> [SKIP][161] ([Intel XE#455] / [Intel XE#787]) +2 other tests skip
   [160]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@kms_ccs@bad-aux-stride-4-tiled-mtl-rc-ccs-cc.html
   [161]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-432/igt@kms_ccs@bad-aux-stride-4-tiled-mtl-rc-ccs-cc.html

  * igt@kms_ccs@bad-rotation-90-y-tiled-gen12-rc-ccs:
    - shard-dg2-set2:     [SKIP][162] ([Intel XE#455] / [Intel XE#787]) -> [SKIP][163] ([Intel XE#2351] / [Intel XE#4208]) +5 other tests skip
   [162]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-432/igt@kms_ccs@bad-rotation-90-y-tiled-gen12-rc-ccs.html
   [163]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@kms_ccs@bad-rotation-90-y-tiled-gen12-rc-ccs.html

  * igt@kms_ccs@crc-primary-rotation-180-4-tiled-mtl-rc-ccs:
    - shard-dg2-set2:     [SKIP][164] ([Intel XE#455] / [Intel XE#787]) -> [SKIP][165] ([Intel XE#4208]) +7 other tests skip
   [164]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-464/igt@kms_ccs@crc-primary-rotation-180-4-tiled-mtl-rc-ccs.html
   [165]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@kms_ccs@crc-primary-rotation-180-4-tiled-mtl-rc-ccs.html

  * igt@kms_ccs@crc-primary-suspend-4-tiled-bmg-ccs:
    - shard-dg2-set2:     [SKIP][166] ([Intel XE#3442]) -> [SKIP][167] ([Intel XE#4208]) +1 other test skip
   [166]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-432/igt@kms_ccs@crc-primary-suspend-4-tiled-bmg-ccs.html
   [167]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@kms_ccs@crc-primary-suspend-4-tiled-bmg-ccs.html

  * igt@kms_ccs@crc-sprite-planes-basic-4-tiled-bmg-ccs:
    - shard-dg2-set2:     [SKIP][168] ([Intel XE#4208]) -> [SKIP][169] ([Intel XE#2907]) +3 other tests skip
   [168]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-bmg-ccs.html
   [169]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-463/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-bmg-ccs.html

  * igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs:
    - shard-dg2-set2:     [INCOMPLETE][170] ([Intel XE#1727] / [Intel XE#2705] / [Intel XE#3113] / [Intel XE#4212] / [Intel XE#4345] / [Intel XE#4522]) -> [INCOMPLETE][171] ([Intel XE#1727] / [Intel XE#3113] / [Intel XE#3124])
   [170]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-463/igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs.html
   [171]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-464/igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs.html

  * igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs@pipe-a-dp-4:
    - shard-dg2-set2:     [INCOMPLETE][172] ([Intel XE#1727] / [Intel XE#2705] / [Intel XE#3113] / [Intel XE#4212] / [Intel XE#4522]) -> [INCOMPLETE][173] ([Intel XE#3124])
   [172]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-463/igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs@pipe-a-dp-4.html
   [173]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-464/igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs@pipe-a-dp-4.html

  * igt@kms_ccs@random-ccs-data-4-tiled-lnl-ccs:
    - shard-dg2-set2:     [SKIP][174] ([Intel XE#2907]) -> [SKIP][175] ([Intel XE#4208])
   [174]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-466/igt@kms_ccs@random-ccs-data-4-tiled-lnl-ccs.html
   [175]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@kms_ccs@random-ccs-data-4-tiled-lnl-ccs.html

  * igt@kms_ccs@random-ccs-data-4-tiled-mtl-rc-ccs:
    - shard-dg2-set2:     [SKIP][176] ([Intel XE#4208]) -> [SKIP][177] ([Intel XE#455] / [Intel XE#787]) +17 other tests skip
   [176]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@kms_ccs@random-ccs-data-4-tiled-mtl-rc-ccs.html
   [177]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-432/igt@kms_ccs@random-ccs-data-4-tiled-mtl-rc-ccs.html

  * igt@kms_chamelium_color@ctm-limited-range:
    - shard-dg2-set2:     [SKIP][178] ([Intel XE#4208] / [i915#2575]) -> [SKIP][179] ([Intel XE#306]) +2 other tests skip
   [178]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@kms_chamelium_color@ctm-limited-range.html
   [179]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-436/igt@kms_chamelium_color@ctm-limited-range.html

  * igt@kms_chamelium_color@degamma:
    - shard-dg2-set2:     [SKIP][180] ([Intel XE#306]) -> [SKIP][181] ([Intel XE#4208] / [i915#2575]) +1 other test skip
   [180]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-435/igt@kms_chamelium_color@degamma.html
   [181]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@kms_chamelium_color@degamma.html

  * igt@kms_chamelium_edid@hdmi-edid-stress-resolution-4k:
    - shard-dg2-set2:     [SKIP][182] ([Intel XE#4208] / [i915#2575]) -> [SKIP][183] ([Intel XE#373]) +12 other tests skip
   [182]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@kms_chamelium_edid@hdmi-edid-stress-resolution-4k.html
   [183]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-432/igt@kms_chamelium_edid@hdmi-edid-stress-resolution-4k.html

  * igt@kms_chamelium_hpd@hdmi-hpd:
    - shard-dg2-set2:     [SKIP][184] ([Intel XE#373]) -> [SKIP][185] ([Intel XE#4208] / [i915#2575]) +11 other tests skip
   [184]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-464/igt@kms_chamelium_hpd@hdmi-hpd.html
   [185]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@kms_chamelium_hpd@hdmi-hpd.html

  * igt@kms_content_protection@dp-mst-lic-type-0:
    - shard-dg2-set2:     [SKIP][186] ([Intel XE#4208] / [i915#2575]) -> [SKIP][187] ([Intel XE#307]) +1 other test skip
   [186]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@kms_content_protection@dp-mst-lic-type-0.html
   [187]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-436/igt@kms_content_protection@dp-mst-lic-type-0.html

  * igt@kms_content_protection@dp-mst-type-0:
    - shard-dg2-set2:     [SKIP][188] ([Intel XE#307]) -> [SKIP][189] ([Intel XE#4208] / [i915#2575])
   [188]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-464/igt@kms_content_protection@dp-mst-type-0.html
   [189]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@kms_content_protection@dp-mst-type-0.html

  * igt@kms_content_protection@lic-type-0:
    - shard-dg2-set2:     [FAIL][190] ([Intel XE#1178]) -> [SKIP][191] ([Intel XE#4208] / [i915#2575]) +2 other tests skip
   [190]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-466/igt@kms_content_protection@lic-type-0.html
   [191]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@kms_content_protection@lic-type-0.html

  * igt@kms_content_protection@srm:
    - shard-bmg:          [SKIP][192] ([Intel XE#2341]) -> [FAIL][193] ([Intel XE#1178]) +1 other test fail
   [192]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-bmg-6/igt@kms_content_protection@srm.html
   [193]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-bmg-7/igt@kms_content_protection@srm.html

  * igt@kms_cursor_crc@cursor-offscreen-512x170:
    - shard-dg2-set2:     [SKIP][194] ([Intel XE#308]) -> [SKIP][195] ([Intel XE#4208] / [i915#2575]) +2 other tests skip
   [194]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-435/igt@kms_cursor_crc@cursor-offscreen-512x170.html
   [195]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@kms_cursor_crc@cursor-offscreen-512x170.html

  * igt@kms_cursor_crc@cursor-random-512x512:
    - shard-dg2-set2:     [SKIP][196] ([Intel XE#4208] / [i915#2575]) -> [SKIP][197] ([Intel XE#308])
   [196]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@kms_cursor_crc@cursor-random-512x512.html
   [197]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-436/igt@kms_cursor_crc@cursor-random-512x512.html

  * igt@kms_cursor_crc@cursor-sliding-32x32:
    - shard-dg2-set2:     [SKIP][198] ([Intel XE#4208] / [i915#2575]) -> [SKIP][199] ([Intel XE#455]) +5 other tests skip
   [198]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@kms_cursor_crc@cursor-sliding-32x32.html
   [199]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-432/igt@kms_cursor_crc@cursor-sliding-32x32.html

  * igt@kms_cursor_legacy@cursora-vs-flipb-varying-size:
    - shard-bmg:          [SKIP][200] ([Intel XE#2291]) -> [DMESG-WARN][201] ([Intel XE#5354])
   [200]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-bmg-6/igt@kms_cursor_legacy@cursora-vs-flipb-varying-size.html
   [201]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-bmg-1/igt@kms_cursor_legacy@cursora-vs-flipb-varying-size.html

  * igt@kms_cursor_legacy@short-busy-flip-before-cursor-atomic-transitions-varying-size:
    - shard-dg2-set2:     [SKIP][202] ([Intel XE#4208] / [i915#2575]) -> [SKIP][203] ([Intel XE#323]) +1 other test skip
   [202]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@kms_cursor_legacy@short-busy-flip-before-cursor-atomic-transitions-varying-size.html
   [203]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-432/igt@kms_cursor_legacy@short-busy-flip-before-cursor-atomic-transitions-varying-size.html

  * igt@kms_cursor_legacy@short-busy-flip-before-cursor-toggle:
    - shard-dg2-set2:     [SKIP][204] ([Intel XE#323]) -> [SKIP][205] ([Intel XE#4208] / [i915#2575])
   [204]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-435/igt@kms_cursor_legacy@short-busy-flip-before-cursor-toggle.html
   [205]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@kms_cursor_legacy@short-busy-flip-before-cursor-toggle.html

  * igt@kms_dp_link_training@non-uhbr-mst:
    - shard-dg2-set2:     [SKIP][206] ([Intel XE#4208]) -> [SKIP][207] ([Intel XE#4354])
   [206]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@kms_dp_link_training@non-uhbr-mst.html
   [207]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-463/igt@kms_dp_link_training@non-uhbr-mst.html

  * igt@kms_dp_link_training@uhbr-mst:
    - shard-dg2-set2:     [SKIP][208] ([Intel XE#4356]) -> [SKIP][209] ([Intel XE#4208])
   [208]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-432/igt@kms_dp_link_training@uhbr-mst.html
   [209]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@kms_dp_link_training@uhbr-mst.html

  * igt@kms_fbc_dirty_rect@fbc-dirty-rectangle-out-visible-area:
    - shard-dg2-set2:     [SKIP][210] ([Intel XE#4422]) -> [SKIP][211] ([Intel XE#4208])
   [210]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-432/igt@kms_fbc_dirty_rect@fbc-dirty-rectangle-out-visible-area.html
   [211]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@kms_fbc_dirty_rect@fbc-dirty-rectangle-out-visible-area.html

  * igt@kms_fbcon_fbt@psr-suspend:
    - shard-dg2-set2:     [SKIP][212] ([Intel XE#776]) -> [SKIP][213] ([Intel XE#4208])
   [212]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-432/igt@kms_fbcon_fbt@psr-suspend.html
   [213]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@kms_fbcon_fbt@psr-suspend.html

  * igt@kms_feature_discovery@chamelium:
    - shard-dg2-set2:     [SKIP][214] ([Intel XE#4208] / [i915#2575]) -> [SKIP][215] ([Intel XE#701])
   [214]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@kms_feature_discovery@chamelium.html
   [215]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-466/igt@kms_feature_discovery@chamelium.html

  * igt@kms_feature_discovery@dp-mst:
    - shard-dg2-set2:     [SKIP][216] ([Intel XE#1137]) -> [SKIP][217] ([Intel XE#4208] / [i915#2575])
   [216]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-432/igt@kms_feature_discovery@dp-mst.html
   [217]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@kms_feature_discovery@dp-mst.html

  * igt@kms_flip@flip-vs-rmfb-interruptible:
    - shard-adlp:         [DMESG-WARN][218] ([Intel XE#4543] / [Intel XE#5208]) -> [DMESG-WARN][219] ([Intel XE#5208])
   [218]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-adlp-3/igt@kms_flip@flip-vs-rmfb-interruptible.html
   [219]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-adlp-1/igt@kms_flip@flip-vs-rmfb-interruptible.html

  * igt@kms_flip@flip-vs-suspend:
    - shard-adlp:         [DMESG-WARN][220] ([Intel XE#4543]) -> [DMESG-WARN][221] ([Intel XE#2953] / [Intel XE#4173] / [Intel XE#4543]) +1 other test dmesg-warn
   [220]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-adlp-3/igt@kms_flip@flip-vs-suspend.html
   [221]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-adlp-1/igt@kms_flip@flip-vs-suspend.html

  * igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-32bpp-yftileccs-upscaling:
    - shard-dg2-set2:     [SKIP][222] ([Intel XE#4208]) -> [SKIP][223] ([Intel XE#455]) +4 other tests skip
   [222]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-32bpp-yftileccs-upscaling.html
   [223]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-432/igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-32bpp-yftileccs-upscaling.html

  * igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytileccs-downscaling:
    - shard-dg2-set2:     [SKIP][224] ([Intel XE#455]) -> [SKIP][225] ([Intel XE#2351] / [Intel XE#4208]) +1 other test skip
   [224]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-432/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytileccs-downscaling.html
   [225]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytileccs-downscaling.html

  * igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile-downscaling:
    - shard-dg2-set2:     [SKIP][226] ([Intel XE#2351] / [Intel XE#4208]) -> [SKIP][227] ([Intel XE#455]) +2 other tests skip
   [226]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile-downscaling.html
   [227]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-463/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile-downscaling.html

  * igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling:
    - shard-dg2-set2:     [SKIP][228] ([Intel XE#455]) -> [SKIP][229] ([Intel XE#4208]) +4 other tests skip
   [228]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-432/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling.html
   [229]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling.html

  * igt@kms_frontbuffer_tracking@drrs-2p-primscrn-cur-indfb-onoff:
    - shard-dg2-set2:     [SKIP][230] ([Intel XE#651]) -> [SKIP][231] ([Intel XE#4208]) +21 other tests skip
   [230]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-432/igt@kms_frontbuffer_tracking@drrs-2p-primscrn-cur-indfb-onoff.html
   [231]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@kms_frontbuffer_tracking@drrs-2p-primscrn-cur-indfb-onoff.html

  * igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-indfb-pgflip-blt:
    - shard-bmg:          [SKIP][232] ([Intel XE#2311]) -> [SKIP][233] ([Intel XE#2312]) +6 other tests skip
   [232]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-bmg-3/igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-indfb-pgflip-blt.html
   [233]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-bmg-6/igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-indfb-pgflip-blt.html

  * igt@kms_frontbuffer_tracking@drrs-suspend:
    - shard-dg2-set2:     [SKIP][234] ([Intel XE#651]) -> [SKIP][235] ([Intel XE#2351] / [Intel XE#4208]) +13 other tests skip
   [234]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-464/igt@kms_frontbuffer_tracking@drrs-suspend.html
   [235]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@kms_frontbuffer_tracking@drrs-suspend.html

  * igt@kms_frontbuffer_tracking@fbc-2p-primscrn-shrfb-pgflip-blt:
    - shard-bmg:          [SKIP][236] ([Intel XE#5390]) -> [SKIP][237] ([Intel XE#2312]) +2 other tests skip
   [236]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-bmg-3/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-shrfb-pgflip-blt.html
   [237]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-bmg-6/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-shrfb-pgflip-blt.html

  * igt@kms_frontbuffer_tracking@fbc-2p-primscrn-spr-indfb-draw-blt:
    - shard-bmg:          [SKIP][238] ([Intel XE#2312]) -> [SKIP][239] ([Intel XE#5390]) +10 other tests skip
   [238]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-bmg-6/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-spr-indfb-draw-blt.html
   [239]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-bmg-1/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-spr-indfb-draw-blt.html

  * igt@kms_frontbuffer_tracking@fbcdrrs-1p-primscrn-cur-indfb-draw-render:
    - shard-dg2-set2:     [SKIP][240] ([Intel XE#4208]) -> [SKIP][241] ([Intel XE#651]) +25 other tests skip
   [240]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@kms_frontbuffer_tracking@fbcdrrs-1p-primscrn-cur-indfb-draw-render.html
   [241]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-463/igt@kms_frontbuffer_tracking@fbcdrrs-1p-primscrn-cur-indfb-draw-render.html

  * igt@kms_frontbuffer_tracking@fbcdrrs-1p-primscrn-spr-indfb-onoff:
    - shard-dg2-set2:     [SKIP][242] ([Intel XE#2351] / [Intel XE#4208]) -> [SKIP][243] ([Intel XE#651]) +17 other tests skip
   [242]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@kms_frontbuffer_tracking@fbcdrrs-1p-primscrn-spr-indfb-onoff.html
   [243]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-432/igt@kms_frontbuffer_tracking@fbcdrrs-1p-primscrn-spr-indfb-onoff.html

  * igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-cur-indfb-draw-mmap-wc:
    - shard-bmg:          [SKIP][244] ([Intel XE#2312]) -> [SKIP][245] ([Intel XE#2311]) +14 other tests skip
   [244]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-bmg-6/igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-cur-indfb-draw-mmap-wc.html
   [245]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-bmg-7/igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-cur-indfb-draw-mmap-wc.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-pri-indfb-draw-render:
    - shard-bmg:          [SKIP][246] ([Intel XE#2313]) -> [SKIP][247] ([Intel XE#2312]) +8 other tests skip
   [246]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-bmg-2/igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-pri-indfb-draw-render.html
   [247]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-bmg-6/igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-pri-indfb-draw-render.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-pri-indfb-draw-blt:
    - shard-dg2-set2:     [SKIP][248] ([Intel XE#2351] / [Intel XE#4208]) -> [SKIP][249] ([Intel XE#653]) +8 other tests skip
   [248]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-pri-indfb-draw-blt.html
   [249]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-432/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-pri-indfb-draw-blt.html

  * igt@kms_frontbuffer_tracking@psr-1p-primscrn-pri-shrfb-draw-mmap-wc:
    - shard-dg2-set2:     [SKIP][250] ([Intel XE#653]) -> [SKIP][251] ([Intel XE#4208]) +25 other tests skip
   [250]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-466/igt@kms_frontbuffer_tracking@psr-1p-primscrn-pri-shrfb-draw-mmap-wc.html
   [251]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@kms_frontbuffer_tracking@psr-1p-primscrn-pri-shrfb-draw-mmap-wc.html

  * igt@kms_frontbuffer_tracking@psr-2p-primscrn-shrfb-msflip-blt:
    - shard-dg2-set2:     [SKIP][252] ([Intel XE#653]) -> [SKIP][253] ([Intel XE#2351] / [Intel XE#4208]) +9 other tests skip
   [252]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-466/igt@kms_frontbuffer_tracking@psr-2p-primscrn-shrfb-msflip-blt.html
   [253]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@kms_frontbuffer_tracking@psr-2p-primscrn-shrfb-msflip-blt.html

  * igt@kms_frontbuffer_tracking@psr-2p-primscrn-spr-indfb-fullscreen:
    - shard-bmg:          [SKIP][254] ([Intel XE#2312]) -> [SKIP][255] ([Intel XE#2313]) +11 other tests skip
   [254]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-bmg-6/igt@kms_frontbuffer_tracking@psr-2p-primscrn-spr-indfb-fullscreen.html
   [255]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-bmg-7/igt@kms_frontbuffer_tracking@psr-2p-primscrn-spr-indfb-fullscreen.html

  * igt@kms_frontbuffer_tracking@psr-slowdraw:
    - shard-dg2-set2:     [SKIP][256] ([Intel XE#4208]) -> [SKIP][257] ([Intel XE#653]) +34 other tests skip
   [256]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@kms_frontbuffer_tracking@psr-slowdraw.html
   [257]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-466/igt@kms_frontbuffer_tracking@psr-slowdraw.html

  * igt@kms_joiner@basic-force-ultra-joiner:
    - shard-dg2-set2:     [SKIP][258] ([Intel XE#2925]) -> [SKIP][259] ([Intel XE#4208]) +1 other test skip
   [258]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-432/igt@kms_joiner@basic-force-ultra-joiner.html
   [259]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@kms_joiner@basic-force-ultra-joiner.html

  * igt@kms_joiner@invalid-modeset-ultra-joiner:
    - shard-dg2-set2:     [SKIP][260] ([Intel XE#4208]) -> [SKIP][261] ([Intel XE#2927])
   [260]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@kms_joiner@invalid-modeset-ultra-joiner.html
   [261]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-432/igt@kms_joiner@invalid-modeset-ultra-joiner.html

  * igt@kms_joiner@switch-modeset-ultra-joiner-big-joiner:
    - shard-dg2-set2:     [SKIP][262] ([Intel XE#4208]) -> [SKIP][263] ([Intel XE#2925])
   [262]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@kms_joiner@switch-modeset-ultra-joiner-big-joiner.html
   [263]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-463/igt@kms_joiner@switch-modeset-ultra-joiner-big-joiner.html

  * igt@kms_multipipe_modeset@basic-max-pipe-crc-check:
    - shard-dg2-set2:     [SKIP][264] ([Intel XE#4208] / [i915#2575]) -> [SKIP][265] ([Intel XE#356])
   [264]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@kms_multipipe_modeset@basic-max-pipe-crc-check.html
   [265]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-463/igt@kms_multipipe_modeset@basic-max-pipe-crc-check.html

  * igt@kms_plane_multiple@2x-tiling-yf:
    - shard-dg2-set2:     [SKIP][266] ([Intel XE#4208] / [i915#2575]) -> [SKIP][267] ([Intel XE#5021])
   [266]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@kms_plane_multiple@2x-tiling-yf.html
   [267]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-463/igt@kms_plane_multiple@2x-tiling-yf.html

  * igt@kms_plane_multiple@tiling-y:
    - shard-dg2-set2:     [SKIP][268] ([Intel XE#5020]) -> [SKIP][269] ([Intel XE#4208] / [i915#2575])
   [268]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-435/igt@kms_plane_multiple@tiling-y.html
   [269]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@kms_plane_multiple@tiling-y.html

  * igt@kms_pm_backlight@basic-brightness:
    - shard-dg2-set2:     [SKIP][270] ([Intel XE#4208]) -> [SKIP][271] ([Intel XE#870])
   [270]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@kms_pm_backlight@basic-brightness.html
   [271]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-466/igt@kms_pm_backlight@basic-brightness.html

  * igt@kms_pm_backlight@brightness-with-dpms:
    - shard-dg2-set2:     [SKIP][272] ([Intel XE#2938]) -> [SKIP][273] ([Intel XE#4208])
   [272]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-466/igt@kms_pm_backlight@brightness-with-dpms.html
   [273]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@kms_pm_backlight@brightness-with-dpms.html

  * igt@kms_pm_backlight@fade-with-suspend:
    - shard-dg2-set2:     [SKIP][274] ([Intel XE#870]) -> [SKIP][275] ([Intel XE#4208])
   [274]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-435/igt@kms_pm_backlight@fade-with-suspend.html
   [275]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@kms_pm_backlight@fade-with-suspend.html

  * igt@kms_pm_dc@dc5-psr:
    - shard-dg2-set2:     [SKIP][276] ([Intel XE#4208]) -> [SKIP][277] ([Intel XE#1129])
   [276]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@kms_pm_dc@dc5-psr.html
   [277]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-436/igt@kms_pm_dc@dc5-psr.html

  * igt@kms_pm_dc@deep-pkgc:
    - shard-dg2-set2:     [SKIP][278] ([Intel XE#4208]) -> [SKIP][279] ([Intel XE#908])
   [278]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@kms_pm_dc@deep-pkgc.html
   [279]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-432/igt@kms_pm_dc@deep-pkgc.html

  * igt@kms_psr2_sf@fbc-psr2-cursor-plane-move-continuous-exceed-fully-sf:
    - shard-dg2-set2:     [SKIP][280] ([Intel XE#1406] / [Intel XE#4208]) -> [SKIP][281] ([Intel XE#1406] / [Intel XE#1489]) +11 other tests skip
   [280]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@kms_psr2_sf@fbc-psr2-cursor-plane-move-continuous-exceed-fully-sf.html
   [281]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-463/igt@kms_psr2_sf@fbc-psr2-cursor-plane-move-continuous-exceed-fully-sf.html

  * igt@kms_psr2_sf@pr-cursor-plane-move-continuous-sf:
    - shard-dg2-set2:     [SKIP][282] ([Intel XE#1406] / [Intel XE#1489]) -> [SKIP][283] ([Intel XE#1406] / [Intel XE#4208]) +10 other tests skip
   [282]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-464/igt@kms_psr2_sf@pr-cursor-plane-move-continuous-sf.html
   [283]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@kms_psr2_sf@pr-cursor-plane-move-continuous-sf.html

  * igt@kms_psr2_su@page_flip-xrgb8888:
    - shard-dg2-set2:     [SKIP][284] ([Intel XE#1122] / [Intel XE#1406]) -> [SKIP][285] ([Intel XE#1406] / [Intel XE#4208])
   [284]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-464/igt@kms_psr2_su@page_flip-xrgb8888.html
   [285]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@kms_psr2_su@page_flip-xrgb8888.html

  * igt@kms_psr@fbc-psr2-sprite-plane-move:
    - shard-dg2-set2:     [SKIP][286] ([Intel XE#1406] / [Intel XE#2850] / [Intel XE#929]) -> [SKIP][287] ([Intel XE#1406] / [Intel XE#4208]) +12 other tests skip
   [286]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-464/igt@kms_psr@fbc-psr2-sprite-plane-move.html
   [287]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@kms_psr@fbc-psr2-sprite-plane-move.html

  * igt@kms_psr@pr-primary-page-flip:
    - shard-dg2-set2:     [SKIP][288] ([Intel XE#1406] / [Intel XE#2351] / [Intel XE#4208]) -> [SKIP][289] ([Intel XE#1406] / [Intel XE#2850] / [Intel XE#929]) +8 other tests skip
   [288]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@kms_psr@pr-primary-page-flip.html
   [289]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-463/igt@kms_psr@pr-primary-page-flip.html

  * igt@kms_psr@pr-sprite-blt:
    - shard-dg2-set2:     [SKIP][290] ([Intel XE#1406] / [Intel XE#4208]) -> [SKIP][291] ([Intel XE#1406] / [Intel XE#2850] / [Intel XE#929]) +14 other tests skip
   [290]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@kms_psr@pr-sprite-blt.html
   [291]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-432/igt@kms_psr@pr-sprite-blt.html

  * igt@kms_psr@psr-primary-page-flip:
    - shard-dg2-set2:     [SKIP][292] ([Intel XE#1406] / [Intel XE#2850] / [Intel XE#929]) -> [SKIP][293] ([Intel XE#1406] / [Intel XE#2351] / [Intel XE#4208]) +3 other tests skip
   [292]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-432/igt@kms_psr@psr-primary-page-flip.html
   [293]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@kms_psr@psr-primary-page-flip.html

  * igt@kms_psr_stress_test@invalidate-primary-flip-overlay:
    - shard-dg2-set2:     [SKIP][294] ([Intel XE#1406] / [Intel XE#2939]) -> [SKIP][295] ([Intel XE#1406] / [Intel XE#4208])
   [294]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-464/igt@kms_psr_stress_test@invalidate-primary-flip-overlay.html
   [295]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@kms_psr_stress_test@invalidate-primary-flip-overlay.html

  * igt@kms_rotation_crc@bad-tiling:
    - shard-dg2-set2:     [SKIP][296] ([Intel XE#3414]) -> [SKIP][297] ([Intel XE#4208] / [i915#2575]) +2 other tests skip
   [296]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-432/igt@kms_rotation_crc@bad-tiling.html
   [297]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@kms_rotation_crc@bad-tiling.html

  * igt@kms_rotation_crc@primary-y-tiled-reflect-x-90:
    - shard-dg2-set2:     [SKIP][298] ([Intel XE#4208] / [i915#2575]) -> [SKIP][299] ([Intel XE#3414])
   [298]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@kms_rotation_crc@primary-y-tiled-reflect-x-90.html
   [299]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-463/igt@kms_rotation_crc@primary-y-tiled-reflect-x-90.html

  * igt@kms_rotation_crc@primary-yf-tiled-reflect-x-180:
    - shard-dg2-set2:     [SKIP][300] ([Intel XE#1127]) -> [SKIP][301] ([Intel XE#4208] / [i915#2575]) +1 other test skip
   [300]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-464/igt@kms_rotation_crc@primary-yf-tiled-reflect-x-180.html
   [301]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@kms_rotation_crc@primary-yf-tiled-reflect-x-180.html

  * igt@kms_tiled_display@basic-test-pattern:
    - shard-dg2-set2:     [SKIP][302] ([Intel XE#4208] / [i915#2575]) -> [FAIL][303] ([Intel XE#1729])
   [302]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@kms_tiled_display@basic-test-pattern.html
   [303]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-466/igt@kms_tiled_display@basic-test-pattern.html

  * igt@kms_tv_load_detect@load-detect:
    - shard-dg2-set2:     [SKIP][304] ([Intel XE#4208] / [i915#2575]) -> [SKIP][305] ([Intel XE#330])
   [304]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@kms_tv_load_detect@load-detect.html
   [305]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-432/igt@kms_tv_load_detect@load-detect.html

  * igt@kms_vrr@flip-dpms:
    - shard-dg2-set2:     [SKIP][306] ([Intel XE#455]) -> [SKIP][307] ([Intel XE#4208] / [i915#2575]) +6 other tests skip
   [306]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-435/igt@kms_vrr@flip-dpms.html
   [307]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@kms_vrr@flip-dpms.html

  * igt@sriov_basic@enable-vfs-bind-unbind-each-numvfs-all:
    - shard-dg2-set2:     [SKIP][308] ([Intel XE#4208] / [i915#2575]) -> [SKIP][309] ([Intel XE#1091] / [Intel XE#2849]) +1 other test skip
   [308]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@sriov_basic@enable-vfs-bind-unbind-each-numvfs-all.html
   [309]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-432/igt@sriov_basic@enable-vfs-bind-unbind-each-numvfs-all.html

  * igt@xe_configfs@survivability-mode:
    - shard-dg2-set2:     [SKIP][310] ([Intel XE#5249]) -> [SKIP][311] ([Intel XE#4208])
   [310]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-464/igt@xe_configfs@survivability-mode.html
   [311]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@xe_configfs@survivability-mode.html

  * igt@xe_copy_basic@mem-copy-linear-0xfffe:
    - shard-dg2-set2:     [SKIP][312] ([Intel XE#4208]) -> [SKIP][313] ([Intel XE#1123])
   [312]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@xe_copy_basic@mem-copy-linear-0xfffe.html
   [313]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-436/igt@xe_copy_basic@mem-copy-linear-0xfffe.html

  * igt@xe_copy_basic@mem-set-linear-0xfd:
    - shard-dg2-set2:     [SKIP][314] ([Intel XE#4208]) -> [SKIP][315] ([Intel XE#1126])
   [314]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@xe_copy_basic@mem-set-linear-0xfd.html
   [315]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-436/igt@xe_copy_basic@mem-set-linear-0xfd.html

  * igt@xe_eu_stall@blocking-re-enable:
    - shard-dg2-set2:     [SKIP][316] ([Intel XE#4208]) -> [SKIP][317] ([Intel XE#5626]) +2 other tests skip
   [316]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@xe_eu_stall@blocking-re-enable.html
   [317]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-463/igt@xe_eu_stall@blocking-re-enable.html

  * igt@xe_eudebug@basic-vm-bind-ufence-delay-ack:
    - shard-dg2-set2:     [SKIP][318] ([Intel XE#4208]) -> [SKIP][319] ([Intel XE#4837]) +16 other tests skip
   [318]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@xe_eudebug@basic-vm-bind-ufence-delay-ack.html
   [319]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-432/igt@xe_eudebug@basic-vm-bind-ufence-delay-ack.html

  * igt@xe_eudebug_online@resume-dss:
    - shard-dg2-set2:     [SKIP][320] ([Intel XE#4837]) -> [SKIP][321] ([Intel XE#4208]) +16 other tests skip
   [320]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-432/igt@xe_eudebug_online@resume-dss.html
   [321]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@xe_eudebug_online@resume-dss.html

  * igt@xe_exec_basic@multigpu-no-exec-null-defer-mmap:
    - shard-dg2-set2:     [SKIP][322] ([Intel XE#4208]) -> [SKIP][323] ([Intel XE#1392]) +4 other tests skip
   [322]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@xe_exec_basic@multigpu-no-exec-null-defer-mmap.html
   [323]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-432/igt@xe_exec_basic@multigpu-no-exec-null-defer-mmap.html

  * igt@xe_exec_basic@multigpu-once-bindexecqueue-userptr-invalidate-race:
    - shard-dg2-set2:     [SKIP][324] ([Intel XE#1392]) -> [SKIP][325] ([Intel XE#4208]) +3 other tests skip
   [324]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-432/igt@xe_exec_basic@multigpu-once-bindexecqueue-userptr-invalidate-race.html
   [325]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@xe_exec_basic@multigpu-once-bindexecqueue-userptr-invalidate-race.html

  * igt@xe_exec_fault_mode@twice-bindexecqueue-userptr-rebind-prefetch:
    - shard-dg2-set2:     [SKIP][326] ([Intel XE#288]) -> [SKIP][327] ([Intel XE#4208]) +28 other tests skip
   [326]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-432/igt@xe_exec_fault_mode@twice-bindexecqueue-userptr-rebind-prefetch.html
   [327]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@xe_exec_fault_mode@twice-bindexecqueue-userptr-rebind-prefetch.html

  * igt@xe_exec_fault_mode@twice-userptr-rebind-imm:
    - shard-dg2-set2:     [SKIP][328] ([Intel XE#4208]) -> [SKIP][329] ([Intel XE#288]) +37 other tests skip
   [328]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@xe_exec_fault_mode@twice-userptr-rebind-imm.html
   [329]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-436/igt@xe_exec_fault_mode@twice-userptr-rebind-imm.html

  * igt@xe_exec_mix_modes@exec-simple-batch-store-lr:
    - shard-dg2-set2:     [SKIP][330] ([Intel XE#2360]) -> [SKIP][331] ([Intel XE#4208])
   [330]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-464/igt@xe_exec_mix_modes@exec-simple-batch-store-lr.html
   [331]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@xe_exec_mix_modes@exec-simple-batch-store-lr.html

  * igt@xe_exec_mix_modes@exec-spinner-interrupted-lr:
    - shard-dg2-set2:     [SKIP][332] ([Intel XE#4208]) -> [SKIP][333] ([Intel XE#2360])
   [332]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@xe_exec_mix_modes@exec-spinner-interrupted-lr.html
   [333]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-436/igt@xe_exec_mix_modes@exec-spinner-interrupted-lr.html

  * igt@xe_exec_system_allocator@threads-many-large-execqueues-malloc-mlock-nomemset:
    - shard-dg2-set2:     [SKIP][334] ([Intel XE#4915]) -> [SKIP][335] ([Intel XE#4208]) +308 other tests skip
   [334]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-432/igt@xe_exec_system_allocator@threads-many-large-execqueues-malloc-mlock-nomemset.html
   [335]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@xe_exec_system_allocator@threads-many-large-execqueues-malloc-mlock-nomemset.html

  * igt@xe_exec_system_allocator@threads-many-large-mmap-shared-remap-dontunmap-eocheck:
    - shard-dg2-set2:     [SKIP][336] ([Intel XE#4208]) -> [SKIP][337] ([Intel XE#4915]) +372 other tests skip
   [336]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@xe_exec_system_allocator@threads-many-large-mmap-shared-remap-dontunmap-eocheck.html
   [337]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-432/igt@xe_exec_system_allocator@threads-many-large-mmap-shared-remap-dontunmap-eocheck.html

  * igt@xe_huc_copy@huc_copy:
    - shard-dg2-set2:     [SKIP][338] ([Intel XE#4208]) -> [SKIP][339] ([Intel XE#255])
   [338]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@xe_huc_copy@huc_copy.html
   [339]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-436/igt@xe_huc_copy@huc_copy.html

  * igt@xe_mmap@small-bar:
    - shard-dg2-set2:     [SKIP][340] ([Intel XE#512]) -> [SKIP][341] ([Intel XE#4208])
   [340]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-435/igt@xe_mmap@small-bar.html
   [341]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@xe_mmap@small-bar.html

  * igt@xe_oa@mmio-triggered-reports-read:
    - shard-dg2-set2:     [SKIP][342] ([Intel XE#4208]) -> [SKIP][343] ([Intel XE#5103])
   [342]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@xe_oa@mmio-triggered-reports-read.html
   [343]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-436/igt@xe_oa@mmio-triggered-reports-read.html

  * igt@xe_oa@polling-small-buf:
    - shard-dg2-set2:     [SKIP][344] ([Intel XE#4208]) -> [SKIP][345] ([Intel XE#3573]) +9 other tests skip
   [344]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@xe_oa@polling-small-buf.html
   [345]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-466/igt@xe_oa@polling-small-buf.html

  * igt@xe_oa@whitelisted-registers-userspace-config:
    - shard-dg2-set2:     [SKIP][346] ([Intel XE#3573]) -> [SKIP][347] ([Intel XE#4208]) +7 other tests skip
   [346]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-435/igt@xe_oa@whitelisted-registers-userspace-config.html
   [347]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@xe_oa@whitelisted-registers-userspace-config.html

  * igt@xe_pat@display-vs-wb-transient:
    - shard-dg2-set2:     [SKIP][348] ([Intel XE#4208]) -> [SKIP][349] ([Intel XE#1337])
   [348]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@xe_pat@display-vs-wb-transient.html
   [349]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-463/igt@xe_pat@display-vs-wb-transient.html

  * igt@xe_peer2peer@read:
    - shard-dg2-set2:     [SKIP][350] ([Intel XE#1061]) -> [SKIP][351] ([Intel XE#1061] / [Intel XE#4208])
   [350]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-432/igt@xe_peer2peer@read.html
   [351]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@xe_peer2peer@read.html

  * igt@xe_pm@d3cold-mocs:
    - shard-dg2-set2:     [SKIP][352] ([Intel XE#2284]) -> [SKIP][353] ([Intel XE#4208])
   [352]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-432/igt@xe_pm@d3cold-mocs.html
   [353]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@xe_pm@d3cold-mocs.html

  * igt@xe_pm@d3cold-multiple-execs:
    - shard-dg2-set2:     [SKIP][354] ([Intel XE#2284] / [Intel XE#366]) -> [SKIP][355] ([Intel XE#4208])
   [354]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-466/igt@xe_pm@d3cold-multiple-execs.html
   [355]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@xe_pm@d3cold-multiple-execs.html

  * igt@xe_pm@d3hot-i2c:
    - shard-dg2-set2:     [SKIP][356] ([Intel XE#5742]) -> [SKIP][357] ([Intel XE#4208])
   [356]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-432/igt@xe_pm@d3hot-i2c.html
   [357]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@xe_pm@d3hot-i2c.html

  * igt@xe_pm@s2idle-d3cold-basic-exec:
    - shard-dg2-set2:     [SKIP][358] ([Intel XE#4208]) -> [SKIP][359] ([Intel XE#2284] / [Intel XE#366]) +2 other tests skip
   [358]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@xe_pm@s2idle-d3cold-basic-exec.html
   [359]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-436/igt@xe_pm@s2idle-d3cold-basic-exec.html

  * igt@xe_pmu@all-fn-engine-activity-load:
    - shard-dg2-set2:     [SKIP][360] ([Intel XE#4208]) -> [SKIP][361] ([Intel XE#4650]) +1 other test skip
   [360]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@xe_pmu@all-fn-engine-activity-load.html
   [361]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-432/igt@xe_pmu@all-fn-engine-activity-load.html

  * igt@xe_pmu@fn-engine-activity-load:
    - shard-dg2-set2:     [SKIP][362] ([Intel XE#4650]) -> [SKIP][363] ([Intel XE#4208])
   [362]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-432/igt@xe_pmu@fn-engine-activity-load.html
   [363]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@xe_pmu@fn-engine-activity-load.html

  * igt@xe_pxp@pxp-stale-bo-exec-post-termination-irq:
    - shard-dg2-set2:     [SKIP][364] ([Intel XE#4733]) -> [SKIP][365] ([Intel XE#4208]) +3 other tests skip
   [364]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-432/igt@xe_pxp@pxp-stale-bo-exec-post-termination-irq.html
   [365]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@xe_pxp@pxp-stale-bo-exec-post-termination-irq.html

  * igt@xe_pxp@pxp-termination-key-update-post-suspend:
    - shard-dg2-set2:     [SKIP][366] ([Intel XE#4208]) -> [SKIP][367] ([Intel XE#4733]) +3 other tests skip
   [366]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@xe_pxp@pxp-termination-key-update-post-suspend.html
   [367]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-436/igt@xe_pxp@pxp-termination-key-update-post-suspend.html

  * igt@xe_query@multigpu-query-invalid-cs-cycles:
    - shard-dg2-set2:     [SKIP][368] ([Intel XE#4208]) -> [SKIP][369] ([Intel XE#944]) +2 other tests skip
   [368]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@xe_query@multigpu-query-invalid-cs-cycles.html
   [369]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-466/igt@xe_query@multigpu-query-invalid-cs-cycles.html

  * igt@xe_query@multigpu-query-oa-units:
    - shard-dg2-set2:     [SKIP][370] ([Intel XE#944]) -> [SKIP][371] ([Intel XE#4208]) +3 other tests skip
   [370]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-435/igt@xe_query@multigpu-query-oa-units.html
   [371]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@xe_query@multigpu-query-oa-units.html

  * igt@xe_render_copy@render-stress-0-copies:
    - shard-dg2-set2:     [SKIP][372] ([Intel XE#4208]) -> [SKIP][373] ([Intel XE#4814])
   [372]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@xe_render_copy@render-stress-0-copies.html
   [373]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-436/igt@xe_render_copy@render-stress-0-copies.html

  * igt@xe_render_copy@render-stress-4-copies:
    - shard-dg2-set2:     [SKIP][374] ([Intel XE#4814]) -> [SKIP][375] ([Intel XE#4208])
   [374]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-464/igt@xe_render_copy@render-stress-4-copies.html
   [375]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@xe_render_copy@render-stress-4-copies.html

  * igt@xe_sriov_auto_provisioning@exclusive-ranges:
    - shard-dg2-set2:     [SKIP][376] ([Intel XE#4208]) -> [SKIP][377] ([Intel XE#4130]) +1 other test skip
   [376]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@xe_sriov_auto_provisioning@exclusive-ranges.html
   [377]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-432/igt@xe_sriov_auto_provisioning@exclusive-ranges.html

  * igt@xe_sriov_auto_provisioning@selfconfig-reprovision-reduce-numvfs:
    - shard-dg2-set2:     [SKIP][378] ([Intel XE#4130]) -> [SKIP][379] ([Intel XE#4208])
   [378]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-432/igt@xe_sriov_auto_provisioning@selfconfig-reprovision-reduce-numvfs.html
   [379]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@xe_sriov_auto_provisioning@selfconfig-reprovision-reduce-numvfs.html

  * igt@xe_sriov_flr@flr-twice:
    - shard-dg2-set2:     [SKIP][380] ([Intel XE#4273]) -> [SKIP][381] ([Intel XE#4208])
   [380]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-466/igt@xe_sriov_flr@flr-twice.html
   [381]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@xe_sriov_flr@flr-twice.html

  * igt@xe_sriov_flr@flr-vf1-clear:
    - shard-dg2-set2:     [SKIP][382] ([Intel XE#4208]) -> [SKIP][383] ([Intel XE#3342])
   [382]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-434/igt@xe_sriov_flr@flr-vf1-clear.html
   [383]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-436/igt@xe_sriov_flr@flr-vf1-clear.html

  * igt@xe_sriov_scheduling@equal-throughput:
    - shard-dg2-set2:     [SKIP][384] ([Intel XE#4351]) -> [SKIP][385] ([Intel XE#4208])
   [384]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb/shard-dg2-432/igt@xe_sriov_scheduling@equal-throughput.html
   [385]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/shard-dg2-434/igt@xe_sriov_scheduling@equal-throughput.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [Intel XE#1061]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1061
  [Intel XE#1091]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1091
  [Intel XE#1122]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1122
  [Intel XE#1123]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1123
  [Intel XE#1124]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1124
  [Intel XE#1126]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1126
  [Intel XE#1127]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1127
  [Intel XE#1129]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1129
  [Intel XE#1135]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1135
  [Intel XE#1137]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1137
  [Intel XE#1178]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1178
  [Intel XE#1280]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1280
  [Intel XE#1337]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1337
  [Intel XE#1392]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1392
  [Intel XE#1406]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1406
  [Intel XE#1489]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1489
  [Intel XE#1503]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1503
  [Intel XE#1727]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1727
  [Intel XE#1729]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1729
  [Intel XE#1874]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1874
  [Intel XE#2049]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2049
  [Intel XE#2134]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2134
  [Intel XE#2191]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2191
  [Intel XE#2229]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2229
  [Intel XE#2284]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2284
  [Intel XE#2291]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2291
  [Intel XE#2311]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2311
  [Intel XE#2312]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2312
  [Intel XE#2313]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2313
  [Intel XE#2316]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2316
  [Intel XE#2341]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2341
  [Intel XE#2351]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2351
  [Intel XE#2360]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2360
  [Intel XE#255]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/255
  [Intel XE#2571]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2571
  [Intel XE#2597]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2597
  [Intel XE#2705]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2705
  [Intel XE#2849]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2849
  [Intel XE#2850]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2850
  [Intel XE#288]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/288
  [Intel XE#2907]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2907
  [Intel XE#2925]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2925
  [Intel XE#2927]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2927
  [Intel XE#2938]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2938
  [Intel XE#2939]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2939
  [Intel XE#2953]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2953
  [Intel XE#3009]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3009
  [Intel XE#301]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/301
  [Intel XE#306]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/306
  [Intel XE#307]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/307
  [Intel XE#308]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/308
  [Intel XE#3113]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3113
  [Intel XE#3124]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3124
  [Intel XE#316]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/316
  [Intel XE#323]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/323
  [Intel XE#330]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/330
  [Intel XE#3342]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3342
  [Intel XE#3414]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3414
  [Intel XE#3442]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3442
  [Intel XE#356]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/356
  [Intel XE#3573]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3573
  [Intel XE#366]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/366
  [Intel XE#367]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/367
  [Intel XE#373]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/373
  [Intel XE#3876]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3876
  [Intel XE#4130]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4130
  [Intel XE#4173]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4173
  [Intel XE#4208]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4208
  [Intel XE#4212]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4212
  [Intel XE#4273]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4273
  [Intel XE#4345]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4345
  [Intel XE#4351]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4351
  [Intel XE#4354]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4354
  [Intel XE#4356]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4356
  [Intel XE#4422]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4422
  [Intel XE#4459]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4459
  [Intel XE#4488]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4488
  [Intel XE#4522]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4522
  [Intel XE#4543]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4543
  [Intel XE#455]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/455
  [Intel XE#4650]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4650
  [Intel XE#4733]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4733
  [Intel XE#4814]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4814
  [Intel XE#4837]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4837
  [Intel XE#4915]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4915
  [Intel XE#5020]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5020
  [Intel XE#5021]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5021
  [Intel XE#5087]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5087
  [Intel XE#5103]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5103
  [Intel XE#512]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/512
  [Intel XE#5208]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5208
  [Intel XE#5244]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5244
  [Intel XE#5249]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5249
  [Intel XE#5299]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5299
  [Intel XE#5300]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5300
  [Intel XE#5354]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5354
  [Intel XE#5376]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5376
  [Intel XE#5390]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5390
  [Intel XE#5503]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5503
  [Intel XE#5545]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5545
  [Intel XE#5561]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5561
  [Intel XE#5565]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5565
  [Intel XE#5575]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5575
  [Intel XE#5624]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5624
  [Intel XE#5626]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5626
  [Intel XE#5742]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5742
  [Intel XE#5750]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5750
  [Intel XE#5890]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5890
  [Intel XE#5893]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5893
  [Intel XE#607]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/607
  [Intel XE#610]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/610
  [Intel XE#616]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/616
  [Intel XE#619]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/619
  [Intel XE#651]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/651
  [Intel XE#653]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/653
  [Intel XE#656]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/656
  [Intel XE#701]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/701
  [Intel XE#776]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/776
  [Intel XE#787]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/787
  [Intel XE#870]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/870
  [Intel XE#908]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/908
  [Intel XE#929]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/929
  [Intel XE#944]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/944
  [i915#2575]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/2575


Build changes
-------------

  * Linux: xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb -> xe-pw-152022v9

  IGT_8507: 8507
  xe-3614-a7c735c1739662dcc431bda50653821bff0d63fb: a7c735c1739662dcc431bda50653821bff0d63fb
  xe-pw-152022v9: 152022v9

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-152022v9/index.html

[-- Attachment #2: Type: text/html, Size: 129020 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* [PATCH 1/9] drm/xe: Move explicit CT lock in TLB invalidation sequence
  2025-08-26 18:29 [PATCH 0/9] Add TLB invalidation abstraction Stuart Summers
@ 2025-08-26 18:29 ` Stuart Summers
  0 siblings, 0 replies; 23+ messages in thread
From: Stuart Summers @ 2025-08-26 18:29 UTC (permalink / raw)
  Cc: intel-xe, matthew.brost, farah.kassabri, Stuart Summers

Currently the CT lock is used to cover TLB invalidation
sequence number updates. In an effort to separate the GuC
back end tracking of communication with the firmware from
the front end TLB sequence number tracking, add a new lock
here to specifically track those sequence number updates
coming in from the user.

Apart from the CT lock, we also have a pending lock to
cover both pending fences and sequence numbers received
from the back end. Those cover interrupt cases and so
it makes not to overload those with sequence numbers
coming in from new transactions. In that way, we'll employ
a mutex here.

v2: Actually add the correct lock rather than just dropping
    it... (Matt)

Signed-off-by: Stuart Summers <stuart.summers@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c | 19 +++++++++++++------
 drivers/gpu/drm/xe/xe_gt_types.h            |  2 ++
 2 files changed, 15 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
index 02f0bb92d6e0..75854b963d66 100644
--- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
+++ b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
@@ -118,6 +118,9 @@ static void xe_gt_tlb_fence_timeout(struct work_struct *work)
  */
 int xe_gt_tlb_invalidation_init_early(struct xe_gt *gt)
 {
+	struct xe_device *xe = gt_to_xe(gt);
+	int err;
+
 	gt->tlb_invalidation.seqno = 1;
 	INIT_LIST_HEAD(&gt->tlb_invalidation.pending_fences);
 	spin_lock_init(&gt->tlb_invalidation.pending_lock);
@@ -125,6 +128,10 @@ int xe_gt_tlb_invalidation_init_early(struct xe_gt *gt)
 	INIT_DELAYED_WORK(&gt->tlb_invalidation.fence_tdr,
 			  xe_gt_tlb_fence_timeout);
 
+	err = drmm_mutex_init(&xe->drm, &gt->tlb_invalidation.seqno_lock);
+	if (err)
+		return err;
+
 	gt->tlb_invalidation.job_wq =
 		drmm_alloc_ordered_workqueue(&gt_to_xe(gt)->drm, "gt-tbl-inval-job-wq",
 					     WQ_MEM_RECLAIM);
@@ -158,7 +165,7 @@ void xe_gt_tlb_invalidation_reset(struct xe_gt *gt)
 	 * appear.
 	 */
 
-	mutex_lock(&gt->uc.guc.ct.lock);
+	mutex_lock(&gt->tlb_invalidation.seqno_lock);
 	spin_lock_irq(&gt->tlb_invalidation.pending_lock);
 	cancel_delayed_work(&gt->tlb_invalidation.fence_tdr);
 	/*
@@ -178,7 +185,7 @@ void xe_gt_tlb_invalidation_reset(struct xe_gt *gt)
 				 &gt->tlb_invalidation.pending_fences, link)
 		invalidation_fence_signal(gt_to_xe(gt), fence);
 	spin_unlock_irq(&gt->tlb_invalidation.pending_lock);
-	mutex_unlock(&gt->uc.guc.ct.lock);
+	mutex_unlock(&gt->tlb_invalidation.seqno_lock);
 }
 
 static bool tlb_invalidation_seqno_past(struct xe_gt *gt, int seqno)
@@ -211,13 +218,13 @@ static int send_tlb_invalidation(struct xe_guc *guc,
 	 * need to be updated.
 	 */
 
-	mutex_lock(&guc->ct.lock);
+	mutex_lock(&gt->tlb_invalidation.seqno_lock);
 	seqno = gt->tlb_invalidation.seqno;
 	fence->seqno = seqno;
 	trace_xe_gt_tlb_invalidation_fence_send(xe, fence);
 	action[1] = seqno;
-	ret = xe_guc_ct_send_locked(&guc->ct, action, len,
-				    G2H_LEN_DW_TLB_INVALIDATE, 1);
+	ret = xe_guc_ct_send(&guc->ct, action, len,
+			     G2H_LEN_DW_TLB_INVALIDATE, 1);
 	if (!ret) {
 		spin_lock_irq(&gt->tlb_invalidation.pending_lock);
 		/*
@@ -248,7 +255,7 @@ static int send_tlb_invalidation(struct xe_guc *guc,
 		if (!gt->tlb_invalidation.seqno)
 			gt->tlb_invalidation.seqno = 1;
 	}
-	mutex_unlock(&guc->ct.lock);
+	mutex_unlock(&gt->tlb_invalidation.seqno_lock);
 	xe_gt_stats_incr(gt, XE_GT_STATS_ID_TLB_INVAL, 1);
 
 	return ret;
diff --git a/drivers/gpu/drm/xe/xe_gt_types.h b/drivers/gpu/drm/xe/xe_gt_types.h
index ef0f2eecfa29..4dbc40fa6639 100644
--- a/drivers/gpu/drm/xe/xe_gt_types.h
+++ b/drivers/gpu/drm/xe/xe_gt_types.h
@@ -190,6 +190,8 @@ struct xe_gt {
 		/** @tlb_invalidation.seqno: TLB invalidation seqno, protected by CT lock */
 #define TLB_INVALIDATION_SEQNO_MAX	0x100000
 		int seqno;
+		/** @tlb_invalidation.seqno_lock: protects @tlb_invalidation.seqno */
+		struct mutex seqno_lock;
 		/**
 		 * @tlb_invalidation.seqno_recv: last received TLB invalidation seqno,
 		 * protected by CT lock
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 23+ messages in thread

end of thread, other threads:[~2025-08-26 18:29 UTC | newest]

Thread overview: 23+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2025-08-25 17:57 [PATCH 0/9] Add TLB invalidation abstraction Stuart Summers
2025-08-25 17:57 ` [PATCH 1/9] drm/xe: Move explicit CT lock in TLB invalidation sequence Stuart Summers
2025-08-25 17:57 ` [PATCH 2/9] drm/xe: Cancel pending TLB inval workers on teardown Stuart Summers
2025-08-25 18:06   ` Summers, Stuart
2025-08-25 18:20     ` Matthew Brost
2025-08-25 18:23       ` Summers, Stuart
2025-08-25 18:32         ` Summers, Stuart
2025-08-25 17:57 ` [PATCH 3/9] drm/xe: s/tlb_invalidation/tlb_inval Stuart Summers
2025-08-25 17:57 ` [PATCH 4/9] drm/xe: Add xe_tlb_inval structure Stuart Summers
2025-08-25 17:57 ` [PATCH 5/9] drm/xe: Add xe_gt_tlb_invalidation_done_handler Stuart Summers
2025-08-25 17:57 ` [PATCH 6/9] drm/xe: Decouple TLB invalidations from GT Stuart Summers
2025-08-25 17:57 ` [PATCH 7/9] drm/xe: Prep TLB invalidation fence before sending Stuart Summers
2025-08-25 17:57 ` [PATCH 8/9] drm/xe: Add helpers to send TLB invalidations Stuart Summers
2025-08-25 17:57 ` [PATCH 9/9] drm/xe: Split TLB invalidation code in frontend and backend Stuart Summers
2025-08-25 19:09 ` ✗ CI.checkpatch: warning for Add TLB invalidation abstraction (rev9) Patchwork
2025-08-25 19:10 ` ✓ CI.KUnit: success " Patchwork
2025-08-25 20:09 ` ✓ Xe.CI.BAT: " Patchwork
2025-08-26  6:18 ` ✓ Xe.CI.Full: " Patchwork
  -- strict thread matches above, loose matches on Subject: below --
2025-08-26 18:29 [PATCH 0/9] Add TLB invalidation abstraction Stuart Summers
2025-08-26 18:29 ` [PATCH 1/9] drm/xe: Move explicit CT lock in TLB invalidation sequence Stuart Summers
2025-08-20 23:30 [PATCH 0/9] Add TLB invalidation abstraction Stuart Summers
2025-08-20 23:30 ` [PATCH 1/9] drm/xe: Move explicit CT lock in TLB invalidation sequence Stuart Summers
2025-08-21 22:09   ` Matthew Brost
2025-08-20 22:45 [PATCH 0/9] Add TLB invalidation abstraction Stuart Summers
2025-08-20 22:45 ` [PATCH 1/9] drm/xe: Move explicit CT lock in TLB invalidation sequence Stuart Summers
2025-08-13 19:47 [PATCH 0/9] Add TLB invalidation abstraction stuartsummers
2025-08-13 19:47 ` [PATCH 1/9] drm/xe: Move explicit CT lock in TLB invalidation sequence stuartsummers

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).