* [PATCH v2 1/7] drm/xe: Adjust long-running workload timeslices to reasonable values
2025-12-12 18:28 [PATCH v2 0/7] Fix performance when pagefaults and 3d/display share resources Matthew Brost
@ 2025-12-12 18:28 ` Matthew Brost
2025-12-15 10:08 ` Thomas Hellström
2025-12-12 18:28 ` [PATCH v2 2/7] drm/xe: Use usleep_range for accurate long-running workload timeslicing Matthew Brost
` (9 subsequent siblings)
10 siblings, 1 reply; 24+ messages in thread
From: Matthew Brost @ 2025-12-12 18:28 UTC (permalink / raw)
To: intel-xe; +Cc: francois.dugast, thomas.hellstrom, michal.mrozek
A 10ms timeslice for long-running workloads is far too long and causes
significant jitter in benchmarks when the system is shared. Adjust the
value to 5ms for preempt-fencing VMs, as the resume step there is quite
costly as memory is moved around, and set it to zero for pagefault VMs,
since switching back to pagefault mode after dma-fence mode is
relatively fast.
Fixes: dd08ebf6c352 ("drm/xe: Introduce a new DRM driver for Intel GPUs")
Cc: stable@vger.kernel.org
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_vm.c | 5 ++++-
drivers/gpu/drm/xe/xe_vm_types.h | 2 +-
2 files changed, 5 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index c2012d20faa6..4648f8a458cf 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -1508,7 +1508,10 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags, struct xe_file *xef)
INIT_WORK(&vm->destroy_work, vm_destroy_work_func);
INIT_LIST_HEAD(&vm->preempt.exec_queues);
- vm->preempt.min_run_period_ms = 10; /* FIXME: Wire up to uAPI */
+ if (flags & XE_VM_FLAG_FAULT_MODE)
+ vm->preempt.min_run_period_ms = 0;
+ else
+ vm->preempt.min_run_period_ms = 5;
for_each_tile(tile, xe, id)
xe_range_fence_tree_init(&vm->rftree[id]);
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index 3bf912bfbdcc..18bad1dd08e6 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -263,7 +263,7 @@ struct xe_vm {
* @min_run_period_ms: The minimum run period before preempting
* an engine again
*/
- s64 min_run_period_ms;
+ unsigned int min_run_period_ms;
/** @exec_queues: list of exec queues attached to this VM */
struct list_head exec_queues;
/** @num_exec_queues: number exec queues attached to this VM */
--
2.34.1
* Re: [PATCH v2 1/7] drm/xe: Adjust long-running workload timeslices to reasonable values
2025-12-12 18:28 ` [PATCH v2 1/7] drm/xe: Adjust long-running workload timeslices to reasonable values Matthew Brost
@ 2025-12-15 10:08 ` Thomas Hellström
2025-12-15 21:48 ` Matthew Brost
0 siblings, 1 reply; 24+ messages in thread
From: Thomas Hellström @ 2025-12-15 10:08 UTC (permalink / raw)
To: Matthew Brost, intel-xe; +Cc: francois.dugast, michal.mrozek
On Fri, 2025-12-12 at 10:28 -0800, Matthew Brost wrote:
> A 10ms timeslice for long-running workloads is far too long and
> causes
> significant jitter in benchmarks when the system is shared. Adjust
> the
> value to 5ms for preempt-fencing VMs, as the resume step there is
> quite
> costly as memory is moved around, and set it to zero for pagefault
> VMs,
> since switching back to pagefault mode after dma-fence mode is
> relatively fast.
>
> Fixes: dd08ebf6c352 ("drm/xe: Introduce a new DRM driver for Intel
> GPUs")
> Cc: stable@vger.kernel.org
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Please add a comment in the commit message explaining why the type was
changed.
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> ---
> drivers/gpu/drm/xe/xe_vm.c | 5 ++++-
> drivers/gpu/drm/xe/xe_vm_types.h | 2 +-
> 2 files changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index c2012d20faa6..4648f8a458cf 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -1508,7 +1508,10 @@ struct xe_vm *xe_vm_create(struct xe_device
> *xe, u32 flags, struct xe_file *xef)
> INIT_WORK(&vm->destroy_work, vm_destroy_work_func);
>
> INIT_LIST_HEAD(&vm->preempt.exec_queues);
> - vm->preempt.min_run_period_ms = 10; /* FIXME: Wire up to
> uAPI */
> + if (flags & XE_VM_FLAG_FAULT_MODE)
> + vm->preempt.min_run_period_ms = 0;
> + else
> + vm->preempt.min_run_period_ms = 5;
>
> for_each_tile(tile, xe, id)
> xe_range_fence_tree_init(&vm->rftree[id]);
> diff --git a/drivers/gpu/drm/xe/xe_vm_types.h
> b/drivers/gpu/drm/xe/xe_vm_types.h
> index 3bf912bfbdcc..18bad1dd08e6 100644
> --- a/drivers/gpu/drm/xe/xe_vm_types.h
> +++ b/drivers/gpu/drm/xe/xe_vm_types.h
> @@ -263,7 +263,7 @@ struct xe_vm {
> * @min_run_period_ms: The minimum run period before
> preempting
> * an engine again
> */
> - s64 min_run_period_ms;
> + unsigned int min_run_period_ms;
> /** @exec_queues: list of exec queues attached to
> this VM */
> struct list_head exec_queues;
> /** @num_exec_queues: number exec queues attached to
> this VM */
* Re: [PATCH v2 1/7] drm/xe: Adjust long-running workload timeslices to reasonable values
2025-12-15 10:08 ` Thomas Hellström
@ 2025-12-15 21:48 ` Matthew Brost
0 siblings, 0 replies; 24+ messages in thread
From: Matthew Brost @ 2025-12-15 21:48 UTC (permalink / raw)
To: Thomas Hellström; +Cc: intel-xe, francois.dugast, michal.mrozek
On Mon, Dec 15, 2025 at 11:08:21AM +0100, Thomas Hellström wrote:
> On Fri, 2025-12-12 at 10:28 -0800, Matthew Brost wrote:
> > A 10ms timeslice for long-running workloads is far too long and
> > causes
> > significant jitter in benchmarks when the system is shared. Adjust
> > the
> > value to 5ms for preempt-fencing VMs, as the resume step there is
> > quite
> > costly as memory is moved around, and set it to zero for pagefault
> > VMs,
> > since switching back to pagefault mode after dma-fence mode is
> > relatively fast.
> >
> > Fixes: dd08ebf6c352 ("drm/xe: Introduce a new DRM driver for Intel
> > GPUs")
> > Cc: stable@vger.kernel.org
> > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
>
> Please add a comment in the commit message explaining why the type was
> changed.
>
Will do. While working on this series I noticed the s64 type, which makes
no sense at all, so I fixed that part up too.
Matt
> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>
>
> > ---
> > drivers/gpu/drm/xe/xe_vm.c | 5 ++++-
> > drivers/gpu/drm/xe/xe_vm_types.h | 2 +-
> > 2 files changed, 5 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> > index c2012d20faa6..4648f8a458cf 100644
> > --- a/drivers/gpu/drm/xe/xe_vm.c
> > +++ b/drivers/gpu/drm/xe/xe_vm.c
> > @@ -1508,7 +1508,10 @@ struct xe_vm *xe_vm_create(struct xe_device
> > *xe, u32 flags, struct xe_file *xef)
> > INIT_WORK(&vm->destroy_work, vm_destroy_work_func);
> >
> > INIT_LIST_HEAD(&vm->preempt.exec_queues);
> > - vm->preempt.min_run_period_ms = 10; /* FIXME: Wire up to
> > uAPI */
> > + if (flags & XE_VM_FLAG_FAULT_MODE)
> > + vm->preempt.min_run_period_ms = 0;
> > + else
> > + vm->preempt.min_run_period_ms = 5;
> >
> > for_each_tile(tile, xe, id)
> > xe_range_fence_tree_init(&vm->rftree[id]);
> > diff --git a/drivers/gpu/drm/xe/xe_vm_types.h
> > b/drivers/gpu/drm/xe/xe_vm_types.h
> > index 3bf912bfbdcc..18bad1dd08e6 100644
> > --- a/drivers/gpu/drm/xe/xe_vm_types.h
> > +++ b/drivers/gpu/drm/xe/xe_vm_types.h
> > @@ -263,7 +263,7 @@ struct xe_vm {
> > * @min_run_period_ms: The minimum run period before
> > preempting
> > * an engine again
> > */
> > - s64 min_run_period_ms;
> > + unsigned int min_run_period_ms;
> > /** @exec_queues: list of exec queues attached to
> > this VM */
> > struct list_head exec_queues;
> > /** @num_exec_queues: number exec queues attached to
> > this VM */
>
* [PATCH v2 2/7] drm/xe: Use usleep_range for accurate long-running workload timeslicing
2025-12-12 18:28 [PATCH v2 0/7] Fix performance when pagefaults and 3d/display share resources Matthew Brost
2025-12-12 18:28 ` [PATCH v2 1/7] drm/xe: Adjust long-running workload timeslices to reasonable values Matthew Brost
@ 2025-12-12 18:28 ` Matthew Brost
2025-12-15 10:10 ` Thomas Hellström
2025-12-12 18:28 ` [PATCH v2 3/7] drm/xe: Add debugfs knobs to control long running " Matthew Brost
` (8 subsequent siblings)
10 siblings, 1 reply; 24+ messages in thread
From: Matthew Brost @ 2025-12-12 18:28 UTC (permalink / raw)
To: intel-xe; +Cc: francois.dugast, thomas.hellstrom, michal.mrozek
msleep is not very accurate in terms of how long it actually sleeps,
whereas usleep_range is precise. Replace the timeslice sleep for
long-running workloads with the more accurate usleep_range to avoid
jitter if the sleep period is less than 20ms.
Fixes: dd08ebf6c352 ("drm/xe: Introduce a new DRM driver for Intel GPUs")
Cc: stable@vger.kernel.org
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_guc_submit.c | 20 +++++++++++++++++++-
1 file changed, 19 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
index 21a8bd2ec672..18cac5594d6a 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit.c
+++ b/drivers/gpu/drm/xe/xe_guc_submit.c
@@ -990,6 +990,24 @@ static u32 wq_space_until_wrap(struct xe_exec_queue *q)
return (WQ_SIZE - q->guc->wqi_tail);
}
+static inline void relaxed_ms_sleep(unsigned int delay_ms)
+{
+ unsigned long min_us, max_us;
+
+ if (!delay_ms)
+ return;
+
+ if (delay_ms > 20) {
+ msleep(delay_ms);
+ return;
+ }
+
+ min_us = mul_u32_u32(delay_ms, 1000);
+ max_us = min_us + 500;
+
+ usleep_range(min_us, max_us);
+}
+
static int wq_wait_for_space(struct xe_exec_queue *q, u32 wqi_size)
{
struct xe_guc *guc = exec_queue_to_guc(q);
@@ -1903,7 +1921,7 @@ static void __guc_exec_queue_process_msg_suspend(struct xe_sched_msg *msg)
since_resume_ms;
if (wait_ms > 0 && q->guc->resume_time)
- msleep(wait_ms);
+ relaxed_ms_sleep(wait_ms);
set_exec_queue_suspended(q);
disable_scheduling(q, false);
--
2.34.1
* Re: [PATCH v2 2/7] drm/xe: Use usleep_range for accurate long-running workload timeslicing
2025-12-12 18:28 ` [PATCH v2 2/7] drm/xe: Use usleep_range for accurate long-running workload timeslicing Matthew Brost
@ 2025-12-15 10:10 ` Thomas Hellström
0 siblings, 0 replies; 24+ messages in thread
From: Thomas Hellström @ 2025-12-15 10:10 UTC (permalink / raw)
To: Matthew Brost, intel-xe; +Cc: francois.dugast, michal.mrozek
On Fri, 2025-12-12 at 10:28 -0800, Matthew Brost wrote:
> msleep is not very accurate in terms of how long it actually sleeps,
> whereas usleep_range is precise. Replace the timeslice sleep for
> long-running workloads with the more accurate usleep_range to avoid
> jitter if the sleep period is less than 20ms.
>
> Fixes: dd08ebf6c352 ("drm/xe: Introduce a new DRM driver for Intel
> GPUs")
> Cc: stable@vger.kernel.org
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> ---
> drivers/gpu/drm/xe/xe_guc_submit.c | 20 +++++++++++++++++++-
> 1 file changed, 19 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c
> b/drivers/gpu/drm/xe/xe_guc_submit.c
> index 21a8bd2ec672..18cac5594d6a 100644
> --- a/drivers/gpu/drm/xe/xe_guc_submit.c
> +++ b/drivers/gpu/drm/xe/xe_guc_submit.c
> @@ -990,6 +990,24 @@ static u32 wq_space_until_wrap(struct
> xe_exec_queue *q)
> return (WQ_SIZE - q->guc->wqi_tail);
> }
>
> +static inline void relaxed_ms_sleep(unsigned int delay_ms)
> +{
> + unsigned long min_us, max_us;
> +
> + if (!delay_ms)
> + return;
> +
> + if (delay_ms > 20) {
> + msleep(delay_ms);
> + return;
> + }
> +
> + min_us = mul_u32_u32(delay_ms, 1000);
> + max_us = min_us + 500;
> +
> + usleep_range(min_us, max_us);
> +}
> +
> static int wq_wait_for_space(struct xe_exec_queue *q, u32 wqi_size)
> {
> struct xe_guc *guc = exec_queue_to_guc(q);
> @@ -1903,7 +1921,7 @@ static void
> __guc_exec_queue_process_msg_suspend(struct xe_sched_msg *msg)
> since_resume_ms;
>
> if (wait_ms > 0 && q->guc->resume_time)
> - msleep(wait_ms);
> + relaxed_ms_sleep(wait_ms);
>
> set_exec_queue_suspended(q);
> disable_scheduling(q, false);
* [PATCH v2 3/7] drm/xe: Add debugfs knobs to control long running workload timeslicing
2025-12-12 18:28 [PATCH v2 0/7] Fix performance when pagefaults and 3d/display share resources Matthew Brost
2025-12-12 18:28 ` [PATCH v2 1/7] drm/xe: Adjust long-running workload timeslices to reasonable values Matthew Brost
2025-12-12 18:28 ` [PATCH v2 2/7] drm/xe: Use usleep_range for accurate long-running workload timeslicing Matthew Brost
@ 2025-12-12 18:28 ` Matthew Brost
2025-12-15 10:11 ` Thomas Hellström
2025-12-12 18:28 ` [PATCH v2 4/7] drm/xe: Skip exec queue schedule toggle if queue is idle during suspend Matthew Brost
` (7 subsequent siblings)
10 siblings, 1 reply; 24+ messages in thread
From: Matthew Brost @ 2025-12-12 18:28 UTC (permalink / raw)
To: intel-xe; +Cc: francois.dugast, thomas.hellstrom, michal.mrozek
Add debugfs knobs to control timeslicing for long-running workloads,
allowing quick tuning of values when running benchmarks.
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_debugfs.c | 74 ++++++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_device.c | 1 +
drivers/gpu/drm/xe/xe_device_types.h | 6 +++
drivers/gpu/drm/xe/xe_vm.c | 4 +-
4 files changed, 83 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_debugfs.c b/drivers/gpu/drm/xe/xe_debugfs.c
index 4fa423a82bea..38433c9af59f 100644
--- a/drivers/gpu/drm/xe/xe_debugfs.c
+++ b/drivers/gpu/drm/xe/xe_debugfs.c
@@ -328,6 +328,74 @@ static const struct file_operations atomic_svm_timeslice_ms_fops = {
.write = atomic_svm_timeslice_ms_set,
};
+static ssize_t min_run_period_lr_ms_show(struct file *f, char __user *ubuf,
+ size_t size, loff_t *pos)
+{
+ struct xe_device *xe = file_inode(f)->i_private;
+ char buf[32];
+ int len = 0;
+
+ len = scnprintf(buf, sizeof(buf), "%d\n", xe->min_run_period_lr_ms);
+
+ return simple_read_from_buffer(ubuf, size, pos, buf, len);
+}
+
+static ssize_t min_run_period_lr_ms_set(struct file *f, const char __user *ubuf,
+ size_t size, loff_t *pos)
+{
+ struct xe_device *xe = file_inode(f)->i_private;
+ u32 min_run_period_lr_ms;
+ ssize_t ret;
+
+ ret = kstrtouint_from_user(ubuf, size, 0, &min_run_period_lr_ms);
+ if (ret)
+ return ret;
+
+ xe->min_run_period_lr_ms = min_run_period_lr_ms;
+
+ return size;
+}
+
+static const struct file_operations min_run_period_lr_ms_fops = {
+ .owner = THIS_MODULE,
+ .read = min_run_period_lr_ms_show,
+ .write = min_run_period_lr_ms_set,
+};
+
+static ssize_t min_run_period_pf_ms_show(struct file *f, char __user *ubuf,
+ size_t size, loff_t *pos)
+{
+ struct xe_device *xe = file_inode(f)->i_private;
+ char buf[32];
+ int len = 0;
+
+ len = scnprintf(buf, sizeof(buf), "%d\n", xe->min_run_period_pf_ms);
+
+ return simple_read_from_buffer(ubuf, size, pos, buf, len);
+}
+
+static ssize_t min_run_period_pf_ms_set(struct file *f, const char __user *ubuf,
+ size_t size, loff_t *pos)
+{
+ struct xe_device *xe = file_inode(f)->i_private;
+ u32 min_run_period_pf_ms;
+ ssize_t ret;
+
+ ret = kstrtouint_from_user(ubuf, size, 0, &min_run_period_pf_ms);
+ if (ret)
+ return ret;
+
+ xe->min_run_period_pf_ms = min_run_period_pf_ms;
+
+ return size;
+}
+
+static const struct file_operations min_run_period_pf_ms_fops = {
+ .owner = THIS_MODULE,
+ .read = min_run_period_pf_ms_show,
+ .write = min_run_period_pf_ms_set,
+};
+
static ssize_t disable_late_binding_show(struct file *f, char __user *ubuf,
size_t size, loff_t *pos)
{
@@ -395,6 +463,12 @@ void xe_debugfs_register(struct xe_device *xe)
debugfs_create_file("atomic_svm_timeslice_ms", 0600, root, xe,
&atomic_svm_timeslice_ms_fops);
+ debugfs_create_file("min_run_period_lr_ms", 0600, root, xe,
+ &min_run_period_lr_ms_fops);
+
+ debugfs_create_file("min_run_period_pf_ms", 0600, root, xe,
+ &min_run_period_pf_ms_fops);
+
debugfs_create_file("disable_late_binding", 0600, root, xe,
&disable_late_binding_fops);
diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
index 339b9aef9499..9f84ce3db1f6 100644
--- a/drivers/gpu/drm/xe/xe_device.c
+++ b/drivers/gpu/drm/xe/xe_device.c
@@ -460,6 +460,7 @@ struct xe_device *xe_device_create(struct pci_dev *pdev,
xe->info.revid = pdev->revision;
xe->info.force_execlist = xe_modparam.force_execlist;
xe->atomic_svm_timeslice_ms = 5;
+ xe->min_run_period_lr_ms = 5;
err = xe_irq_init(xe);
if (err)
diff --git a/drivers/gpu/drm/xe/xe_device_types.h b/drivers/gpu/drm/xe/xe_device_types.h
index b35ba29d4d35..7df0da592b50 100644
--- a/drivers/gpu/drm/xe/xe_device_types.h
+++ b/drivers/gpu/drm/xe/xe_device_types.h
@@ -615,6 +615,12 @@ struct xe_device {
/** @atomic_svm_timeslice_ms: Atomic SVM fault timeslice MS */
u32 atomic_svm_timeslice_ms;
+ /** @min_run_period_lr_ms: LR VM (preempt fence mode) timeslice */
+ u32 min_run_period_lr_ms;
+
+ /** @min_run_period_pf_ms: LR VM (page fault mode) timeslice */
+ u32 min_run_period_pf_ms;
+
#ifdef TEST_VM_OPS_ERROR
/**
* @vm_inject_error_position: inject errors at different places in VM
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 4648f8a458cf..a1363f675b51 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -1509,9 +1509,9 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags, struct xe_file *xef)
INIT_LIST_HEAD(&vm->preempt.exec_queues);
if (flags & XE_VM_FLAG_FAULT_MODE)
- vm->preempt.min_run_period_ms = 0;
+ vm->preempt.min_run_period_ms = xe->min_run_period_pf_ms;
else
- vm->preempt.min_run_period_ms = 5;
+ vm->preempt.min_run_period_ms = xe->min_run_period_lr_ms;
for_each_tile(tile, xe, id)
xe_range_fence_tree_init(&vm->rftree[id]);
--
2.34.1
* Re: [PATCH v2 3/7] drm/xe: Add debugfs knobs to control long running workload timeslicing
2025-12-12 18:28 ` [PATCH v2 3/7] drm/xe: Add debugfs knobs to control long running " Matthew Brost
@ 2025-12-15 10:11 ` Thomas Hellström
0 siblings, 0 replies; 24+ messages in thread
From: Thomas Hellström @ 2025-12-15 10:11 UTC (permalink / raw)
To: Matthew Brost, intel-xe; +Cc: francois.dugast, michal.mrozek
On Fri, 2025-12-12 at 10:28 -0800, Matthew Brost wrote:
> Add debugfs knobs to control timeslicing for long-running workloads,
> allowing quick tuning of values when running benchmarks.
>
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> ---
> drivers/gpu/drm/xe/xe_debugfs.c | 74
> ++++++++++++++++++++++++++++
> drivers/gpu/drm/xe/xe_device.c | 1 +
> drivers/gpu/drm/xe/xe_device_types.h | 6 +++
> drivers/gpu/drm/xe/xe_vm.c | 4 +-
> 4 files changed, 83 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_debugfs.c
> b/drivers/gpu/drm/xe/xe_debugfs.c
> index 4fa423a82bea..38433c9af59f 100644
> --- a/drivers/gpu/drm/xe/xe_debugfs.c
> +++ b/drivers/gpu/drm/xe/xe_debugfs.c
> @@ -328,6 +328,74 @@ static const struct file_operations
> atomic_svm_timeslice_ms_fops = {
> .write = atomic_svm_timeslice_ms_set,
> };
>
> +static ssize_t min_run_period_lr_ms_show(struct file *f, char __user
> *ubuf,
> + size_t size, loff_t *pos)
> +{
> + struct xe_device *xe = file_inode(f)->i_private;
> + char buf[32];
> + int len = 0;
> +
> + len = scnprintf(buf, sizeof(buf), "%d\n", xe-
> >min_run_period_lr_ms);
> +
> + return simple_read_from_buffer(ubuf, size, pos, buf, len);
> +}
> +
> +static ssize_t min_run_period_lr_ms_set(struct file *f, const char
> __user *ubuf,
> + size_t size, loff_t *pos)
> +{
> + struct xe_device *xe = file_inode(f)->i_private;
> + u32 min_run_period_lr_ms;
> + ssize_t ret;
> +
> + ret = kstrtouint_from_user(ubuf, size, 0,
> &min_run_period_lr_ms);
> + if (ret)
> + return ret;
> +
> + xe->min_run_period_lr_ms = min_run_period_lr_ms;
> +
> + return size;
> +}
> +
> +static const struct file_operations min_run_period_lr_ms_fops = {
> + .owner = THIS_MODULE,
> + .read = min_run_period_lr_ms_show,
> + .write = min_run_period_lr_ms_set,
> +};
> +
> +static ssize_t min_run_period_pf_ms_show(struct file *f, char __user
> *ubuf,
> + size_t size, loff_t *pos)
> +{
> + struct xe_device *xe = file_inode(f)->i_private;
> + char buf[32];
> + int len = 0;
> +
> + len = scnprintf(buf, sizeof(buf), "%d\n", xe-
> >min_run_period_pf_ms);
> +
> + return simple_read_from_buffer(ubuf, size, pos, buf, len);
> +}
> +
> +static ssize_t min_run_period_pf_ms_set(struct file *f, const char
> __user *ubuf,
> + size_t size, loff_t *pos)
> +{
> + struct xe_device *xe = file_inode(f)->i_private;
> + u32 min_run_period_pf_ms;
> + ssize_t ret;
> +
> + ret = kstrtouint_from_user(ubuf, size, 0,
> &min_run_period_pf_ms);
> + if (ret)
> + return ret;
> +
> + xe->min_run_period_pf_ms = min_run_period_pf_ms;
> +
> + return size;
> +}
> +
> +static const struct file_operations min_run_period_pf_ms_fops = {
> + .owner = THIS_MODULE,
> + .read = min_run_period_pf_ms_show,
> + .write = min_run_period_pf_ms_set,
> +};
> +
> static ssize_t disable_late_binding_show(struct file *f, char __user
> *ubuf,
> size_t size, loff_t *pos)
> {
> @@ -395,6 +463,12 @@ void xe_debugfs_register(struct xe_device *xe)
> debugfs_create_file("atomic_svm_timeslice_ms", 0600, root,
> xe,
> &atomic_svm_timeslice_ms_fops);
>
> + debugfs_create_file("min_run_period_lr_ms", 0600, root, xe,
> + &min_run_period_lr_ms_fops);
> +
> + debugfs_create_file("min_run_period_pf_ms", 0600, root, xe,
> + &min_run_period_pf_ms_fops);
> +
> debugfs_create_file("disable_late_binding", 0600, root, xe,
> &disable_late_binding_fops);
>
> diff --git a/drivers/gpu/drm/xe/xe_device.c
> b/drivers/gpu/drm/xe/xe_device.c
> index 339b9aef9499..9f84ce3db1f6 100644
> --- a/drivers/gpu/drm/xe/xe_device.c
> +++ b/drivers/gpu/drm/xe/xe_device.c
> @@ -460,6 +460,7 @@ struct xe_device *xe_device_create(struct pci_dev
> *pdev,
> xe->info.revid = pdev->revision;
> xe->info.force_execlist = xe_modparam.force_execlist;
> xe->atomic_svm_timeslice_ms = 5;
> + xe->min_run_period_lr_ms = 5;
>
> err = xe_irq_init(xe);
> if (err)
> diff --git a/drivers/gpu/drm/xe/xe_device_types.h
> b/drivers/gpu/drm/xe/xe_device_types.h
> index b35ba29d4d35..7df0da592b50 100644
> --- a/drivers/gpu/drm/xe/xe_device_types.h
> +++ b/drivers/gpu/drm/xe/xe_device_types.h
> @@ -615,6 +615,12 @@ struct xe_device {
> /** @atomic_svm_timeslice_ms: Atomic SVM fault timeslice MS
> */
> u32 atomic_svm_timeslice_ms;
>
> + /** @min_run_period_lr_ms: LR VM (preempt fence mode)
> timeslice */
> + u32 min_run_period_lr_ms;
> +
> + /** @min_run_period_pf_ms: LR VM (page fault mode) timeslice
> */
> + u32 min_run_period_pf_ms;
> +
> #ifdef TEST_VM_OPS_ERROR
> /**
> * @vm_inject_error_position: inject errors at different
> places in VM
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index 4648f8a458cf..a1363f675b51 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -1509,9 +1509,9 @@ struct xe_vm *xe_vm_create(struct xe_device
> *xe, u32 flags, struct xe_file *xef)
>
> INIT_LIST_HEAD(&vm->preempt.exec_queues);
> if (flags & XE_VM_FLAG_FAULT_MODE)
> - vm->preempt.min_run_period_ms = 0;
> + vm->preempt.min_run_period_ms = xe-
> >min_run_period_pf_ms;
> else
> - vm->preempt.min_run_period_ms = 5;
> + vm->preempt.min_run_period_ms = xe-
> >min_run_period_lr_ms;
>
> for_each_tile(tile, xe, id)
> xe_range_fence_tree_init(&vm->rftree[id]);
* [PATCH v2 4/7] drm/xe: Skip exec queue schedule toggle if queue is idle during suspend
2025-12-12 18:28 [PATCH v2 0/7] Fix performance when pagefaults and 3d/display share resources Matthew Brost
` (2 preceding siblings ...)
2025-12-12 18:28 ` [PATCH v2 3/7] drm/xe: Add debugfs knobs to control long running " Matthew Brost
@ 2025-12-12 18:28 ` Matthew Brost
2025-12-15 12:08 ` Thomas Hellström
2025-12-12 18:28 ` [PATCH v2 5/7] drm/xe: Wait on in-syncs when switching to dma-fence mode Matthew Brost
` (6 subsequent siblings)
10 siblings, 1 reply; 24+ messages in thread
From: Matthew Brost @ 2025-12-12 18:28 UTC (permalink / raw)
To: intel-xe; +Cc: francois.dugast, thomas.hellstrom, michal.mrozek
If an exec queue is idle, there is no need to issue a schedule disable
to the GuC when suspending the queue’s execution. Opportunistically skip
this step if the queue is idle and not a parallel queue. Parallel queues
must have their scheduling state flipped in the GuC due to limitations
in how submission is implemented in run_job().
Also if all pagefault queues can skip the schedule disable during a
switch to dma-fence mode, do not schedule a resume for the pagefault
queues after the next submission.
v2:
- Don't touch the LRC tail if queue is suspended but enabled in run_job
(CI)
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_exec_queue.h | 17 ++++++++
drivers/gpu/drm/xe/xe_guc_submit.c | 55 +++++++++++++++++++++++--
drivers/gpu/drm/xe/xe_hw_engine_group.c | 2 +-
3 files changed, 70 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_exec_queue.h b/drivers/gpu/drm/xe/xe_exec_queue.h
index 10abed98fb6b..b5ad975d7e97 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue.h
+++ b/drivers/gpu/drm/xe/xe_exec_queue.h
@@ -162,4 +162,21 @@ int xe_exec_queue_contexts_hwsp_rebase(struct xe_exec_queue *q, void *scratch);
struct xe_lrc *xe_exec_queue_lrc(struct xe_exec_queue *q);
+/**
+ * xe_exec_queue_idle_skip_suspend() - Can exec queue skip suspend
+ * @q: The exec_queue
+ *
+ * If an exec queue is not parallel and is idle, the suspend steps can be
+ * skipped in the submission backend, immediately signaling the suspend fence.
+ * Parallel queues cannot skip this step due to limitations in the submission
+ * backend.
+ *
+ * Return: True if exec queue is idle and can skip suspend steps, False
+ * otherwise
+ */
+static inline bool xe_exec_queue_idle_skip_suspend(struct xe_exec_queue *q)
+{
+ return !xe_exec_queue_is_parallel(q) && xe_exec_queue_is_idle(q);
+}
+
#endif
diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
index 18cac5594d6a..8bab816da7fd 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit.c
+++ b/drivers/gpu/drm/xe/xe_guc_submit.c
@@ -75,6 +75,7 @@ exec_queue_to_guc(struct xe_exec_queue *q)
#define EXEC_QUEUE_STATE_EXTRA_REF (1 << 11)
#define EXEC_QUEUE_STATE_PENDING_RESUME (1 << 12)
#define EXEC_QUEUE_STATE_PENDING_TDR_EXIT (1 << 13)
+#define EXEC_QUEUE_STATE_IDLE_SKIP_SUSPEND (1 << 14)
static bool exec_queue_registered(struct xe_exec_queue *q)
{
@@ -266,6 +267,21 @@ static void clear_exec_queue_pending_tdr_exit(struct xe_exec_queue *q)
atomic_and(~EXEC_QUEUE_STATE_PENDING_TDR_EXIT, &q->guc->state);
}
+static bool exec_queue_idle_skip_suspend(struct xe_exec_queue *q)
+{
+ return atomic_read(&q->guc->state) & EXEC_QUEUE_STATE_IDLE_SKIP_SUSPEND;
+}
+
+static void set_exec_queue_idle_skip_suspend(struct xe_exec_queue *q)
+{
+ atomic_or(EXEC_QUEUE_STATE_IDLE_SKIP_SUSPEND, &q->guc->state);
+}
+
+static void clear_exec_queue_idle_skip_suspend(struct xe_exec_queue *q)
+{
+ atomic_and(~EXEC_QUEUE_STATE_IDLE_SKIP_SUSPEND, &q->guc->state);
+}
+
static bool exec_queue_killed_or_banned_or_wedged(struct xe_exec_queue *q)
{
return (atomic_read(&q->guc->state) &
@@ -1118,7 +1134,7 @@ static void submit_exec_queue(struct xe_exec_queue *q, struct xe_sched_job *job)
if (!job->restore_replay || job->last_replay) {
if (xe_exec_queue_is_parallel(q))
wq_item_append(q);
- else
+ else if (!exec_queue_idle_skip_suspend(q))
xe_lrc_set_ring_tail(lrc, lrc->ring.tail);
job->last_replay = false;
}
@@ -1906,9 +1922,10 @@ static void __guc_exec_queue_process_msg_suspend(struct xe_sched_msg *msg)
{
struct xe_exec_queue *q = msg->private_data;
struct xe_guc *guc = exec_queue_to_guc(q);
+ bool idle_skip_suspend = xe_exec_queue_idle_skip_suspend(q);
- if (guc_exec_queue_allowed_to_change_state(q) && !exec_queue_suspended(q) &&
- exec_queue_enabled(q)) {
+ if (!idle_skip_suspend && guc_exec_queue_allowed_to_change_state(q) &&
+ !exec_queue_suspended(q) && exec_queue_enabled(q)) {
wait_event(guc->ct.wq, vf_recovery(guc) ||
((q->guc->resume_time != RESUME_PENDING ||
xe_guc_read_stopped(guc)) && !exec_queue_pending_disable(q)));
@@ -1927,11 +1944,33 @@ static void __guc_exec_queue_process_msg_suspend(struct xe_sched_msg *msg)
disable_scheduling(q, false);
}
} else if (q->guc->suspend_pending) {
+ if (idle_skip_suspend)
+ set_exec_queue_idle_skip_suspend(q);
set_exec_queue_suspended(q);
suspend_fence_signal(q);
}
}
+static void sched_context(struct xe_exec_queue *q)
+{
+ struct xe_guc *guc = exec_queue_to_guc(q);
+ struct xe_lrc *lrc = q->lrc[0];
+ u32 action[] = {
+ XE_GUC_ACTION_SCHED_CONTEXT,
+ q->guc->id,
+ };
+
+ xe_gt_assert(guc_to_gt(guc), !xe_exec_queue_is_parallel(q));
+ xe_gt_assert(guc_to_gt(guc), !exec_queue_destroyed(q));
+ xe_gt_assert(guc_to_gt(guc), exec_queue_registered(q));
+ xe_gt_assert(guc_to_gt(guc), !exec_queue_pending_disable(q));
+
+ trace_xe_exec_queue_submit(q);
+
+ xe_lrc_set_ring_tail(lrc, lrc->ring.tail);
+ xe_guc_ct_send(&guc->ct, action, ARRAY_SIZE(action), 0, 0);
+}
+
static void __guc_exec_queue_process_msg_resume(struct xe_sched_msg *msg)
{
struct xe_exec_queue *q = msg->private_data;
@@ -1939,12 +1978,22 @@ static void __guc_exec_queue_process_msg_resume(struct xe_sched_msg *msg)
if (guc_exec_queue_allowed_to_change_state(q)) {
clear_exec_queue_suspended(q);
if (!exec_queue_enabled(q)) {
+ if (exec_queue_idle_skip_suspend(q)) {
+ struct xe_lrc *lrc = q->lrc[0];
+
+ clear_exec_queue_idle_skip_suspend(q);
+ xe_lrc_set_ring_tail(lrc, lrc->ring.tail);
+ }
q->guc->resume_time = RESUME_PENDING;
set_exec_queue_pending_resume(q);
enable_scheduling(q);
+ } else if (exec_queue_idle_skip_suspend(q)) {
+ clear_exec_queue_idle_skip_suspend(q);
+ sched_context(q);
}
} else {
clear_exec_queue_suspended(q);
+ clear_exec_queue_idle_skip_suspend(q);
}
}
diff --git a/drivers/gpu/drm/xe/xe_hw_engine_group.c b/drivers/gpu/drm/xe/xe_hw_engine_group.c
index 290205a266b8..4d9263a1a208 100644
--- a/drivers/gpu/drm/xe/xe_hw_engine_group.c
+++ b/drivers/gpu/drm/xe/xe_hw_engine_group.c
@@ -205,7 +205,7 @@ static int xe_hw_engine_group_suspend_faulting_lr_jobs(struct xe_hw_engine_group
continue;
xe_gt_stats_incr(q->gt, XE_GT_STATS_ID_HW_ENGINE_GROUP_SUSPEND_LR_QUEUE_COUNT, 1);
- need_resume = true;
+ need_resume |= !xe_exec_queue_idle_skip_suspend(q);
q->ops->suspend(q);
}
--
2.34.1
* Re: [PATCH v2 4/7] drm/xe: Skip exec queue schedule toggle if queue is idle during suspend
2025-12-12 18:28 ` [PATCH v2 4/7] drm/xe: Skip exec queue schedule toggle if queue is idle during suspend Matthew Brost
@ 2025-12-15 12:08 ` Thomas Hellström
0 siblings, 0 replies; 24+ messages in thread
From: Thomas Hellström @ 2025-12-15 12:08 UTC (permalink / raw)
To: Matthew Brost, intel-xe; +Cc: francois.dugast, michal.mrozek
On Fri, 2025-12-12 at 10:28 -0800, Matthew Brost wrote:
> If an exec queue is idle, there is no need to issue a schedule
> disable
> to the GuC when suspending the queue’s execution. Opportunistically
> skip
> this step if the queue is idle and not a parallel queue. Parallel
> queues
> must have their scheduling state flipped in the GuC due to
> limitations
> in how submission is implemented in run_job().
>
> Also if all pagefault queues can skip the schedule disable during a
> switch to dma-fence mode, do not schedule a resume for the pagefault
> queues after the next submission.
>
> v2:
> > - Don't touch the LRC tail if queue is suspended but enabled in
> run_job
> (CI)
>
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Not fully up-to-date with the GuC scheduling code, but changes look
sane to me.
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> ---
> drivers/gpu/drm/xe/xe_exec_queue.h | 17 ++++++++
> drivers/gpu/drm/xe/xe_guc_submit.c | 55
> +++++++++++++++++++++++--
> drivers/gpu/drm/xe/xe_hw_engine_group.c | 2 +-
> 3 files changed, 70 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_exec_queue.h
> b/drivers/gpu/drm/xe/xe_exec_queue.h
> index 10abed98fb6b..b5ad975d7e97 100644
> --- a/drivers/gpu/drm/xe/xe_exec_queue.h
> +++ b/drivers/gpu/drm/xe/xe_exec_queue.h
> @@ -162,4 +162,21 @@ int xe_exec_queue_contexts_hwsp_rebase(struct
> xe_exec_queue *q, void *scratch);
>
> struct xe_lrc *xe_exec_queue_lrc(struct xe_exec_queue *q);
>
> +/**
> + * xe_exec_queue_idle_skip_suspend() - Can exec queue skip suspend
> + * @q: The exec_queue
> + *
> + * If an exec queue is not parallel and is idle, the suspend steps
> can be
> + * skipped in the submission backend immediately signaling the
> suspend fence.
> + * Parallel queues cannot skip this step due to limitations in the
> submission
> + * backend.
> + *
> + * Return: True if exec queue is idle and can skip suspend steps,
> False
> + * otherwise
> + */
> +static inline bool xe_exec_queue_idle_skip_suspend(struct
> xe_exec_queue *q)
> +{
> + return !xe_exec_queue_is_parallel(q) &&
> xe_exec_queue_is_idle(q);
> +}
> +
> #endif
> diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c
> b/drivers/gpu/drm/xe/xe_guc_submit.c
> index 18cac5594d6a..8bab816da7fd 100644
> --- a/drivers/gpu/drm/xe/xe_guc_submit.c
> +++ b/drivers/gpu/drm/xe/xe_guc_submit.c
> @@ -75,6 +75,7 @@ exec_queue_to_guc(struct xe_exec_queue *q)
> #define EXEC_QUEUE_STATE_EXTRA_REF (1 << 11)
> #define EXEC_QUEUE_STATE_PENDING_RESUME (1 << 12)
> #define EXEC_QUEUE_STATE_PENDING_TDR_EXIT (1 << 13)
> +#define EXEC_QUEUE_STATE_IDLE_SKIP_SUSPEND (1 << 14)
>
> static bool exec_queue_registered(struct xe_exec_queue *q)
> {
> @@ -266,6 +267,21 @@ static void
> clear_exec_queue_pending_tdr_exit(struct xe_exec_queue *q)
> atomic_and(~EXEC_QUEUE_STATE_PENDING_TDR_EXIT, &q->guc-
> >state);
> }
>
> +static bool exec_queue_idle_skip_suspend(struct xe_exec_queue *q)
> +{
> + return atomic_read(&q->guc->state) &
> EXEC_QUEUE_STATE_IDLE_SKIP_SUSPEND;
> +}
> +
> +static void set_exec_queue_idle_skip_suspend(struct xe_exec_queue
> *q)
> +{
> + atomic_or(EXEC_QUEUE_STATE_IDLE_SKIP_SUSPEND, &q->guc-
> >state);
> +}
> +
> +static void clear_exec_queue_idle_skip_suspend(struct xe_exec_queue
> *q)
> +{
> + atomic_and(~EXEC_QUEUE_STATE_IDLE_SKIP_SUSPEND, &q->guc-
> >state);
> +}
> +
> static bool exec_queue_killed_or_banned_or_wedged(struct
> xe_exec_queue *q)
> {
> return (atomic_read(&q->guc->state) &
> @@ -1118,7 +1134,7 @@ static void submit_exec_queue(struct
> xe_exec_queue *q, struct xe_sched_job *job)
> if (!job->restore_replay || job->last_replay) {
> if (xe_exec_queue_is_parallel(q))
> wq_item_append(q);
> - else
> + else if (!exec_queue_idle_skip_suspend(q))
> xe_lrc_set_ring_tail(lrc, lrc->ring.tail);
> job->last_replay = false;
> }
> @@ -1906,9 +1922,10 @@ static void
> __guc_exec_queue_process_msg_suspend(struct xe_sched_msg *msg)
> {
> struct xe_exec_queue *q = msg->private_data;
> struct xe_guc *guc = exec_queue_to_guc(q);
> + bool idle_skip_suspend = xe_exec_queue_idle_skip_suspend(q);
>
> - if (guc_exec_queue_allowed_to_change_state(q) &&
> !exec_queue_suspended(q) &&
> - exec_queue_enabled(q)) {
> + if (!idle_skip_suspend &&
> guc_exec_queue_allowed_to_change_state(q) &&
> + !exec_queue_suspended(q) && exec_queue_enabled(q)) {
> wait_event(guc->ct.wq, vf_recovery(guc) ||
> ((q->guc->resume_time != RESUME_PENDING
> ||
> xe_guc_read_stopped(guc)) &&
> !exec_queue_pending_disable(q)));
> @@ -1927,11 +1944,33 @@ static void
> __guc_exec_queue_process_msg_suspend(struct xe_sched_msg *msg)
> disable_scheduling(q, false);
> }
> } else if (q->guc->suspend_pending) {
> + if (idle_skip_suspend)
> + set_exec_queue_idle_skip_suspend(q);
> set_exec_queue_suspended(q);
> suspend_fence_signal(q);
> }
> }
>
> +static void sched_context(struct xe_exec_queue *q)
> +{
> + struct xe_guc *guc = exec_queue_to_guc(q);
> + struct xe_lrc *lrc = q->lrc[0];
> + u32 action [] = {
> + XE_GUC_ACTION_SCHED_CONTEXT,
> + q->guc->id,
> + };
> +
> + xe_gt_assert(guc_to_gt(guc), !xe_exec_queue_is_parallel(q));
> + xe_gt_assert(guc_to_gt(guc), !exec_queue_destroyed(q));
> + xe_gt_assert(guc_to_gt(guc), exec_queue_registered(q));
> + xe_gt_assert(guc_to_gt(guc),
> !exec_queue_pending_disable(q));
> +
> + trace_xe_exec_queue_submit(q);
> +
> + xe_lrc_set_ring_tail(lrc, lrc->ring.tail);
> + xe_guc_ct_send(&guc->ct, action, ARRAY_SIZE(action), 0, 0);
> +}
> +
> static void __guc_exec_queue_process_msg_resume(struct xe_sched_msg
> *msg)
> {
> struct xe_exec_queue *q = msg->private_data;
> @@ -1939,12 +1978,22 @@ static void
> __guc_exec_queue_process_msg_resume(struct xe_sched_msg *msg)
> if (guc_exec_queue_allowed_to_change_state(q)) {
> clear_exec_queue_suspended(q);
> if (!exec_queue_enabled(q)) {
> + if (exec_queue_idle_skip_suspend(q)) {
> + struct xe_lrc *lrc = q->lrc[0];
> +
> + clear_exec_queue_idle_skip_suspend(q
> );
> + xe_lrc_set_ring_tail(lrc, lrc-
> >ring.tail);
> + }
> q->guc->resume_time = RESUME_PENDING;
> set_exec_queue_pending_resume(q);
> enable_scheduling(q);
> + } else if (exec_queue_idle_skip_suspend(q)) {
> + clear_exec_queue_idle_skip_suspend(q);
> + sched_context(q);
> }
> } else {
> clear_exec_queue_suspended(q);
> + clear_exec_queue_idle_skip_suspend(q);
> }
> }
>
> diff --git a/drivers/gpu/drm/xe/xe_hw_engine_group.c
> b/drivers/gpu/drm/xe/xe_hw_engine_group.c
> index 290205a266b8..4d9263a1a208 100644
> --- a/drivers/gpu/drm/xe/xe_hw_engine_group.c
> +++ b/drivers/gpu/drm/xe/xe_hw_engine_group.c
> @@ -205,7 +205,7 @@ static int
> xe_hw_engine_group_suspend_faulting_lr_jobs(struct xe_hw_engine_group
> continue;
>
> xe_gt_stats_incr(q->gt,
> XE_GT_STATS_ID_HW_ENGINE_GROUP_SUSPEND_LR_QUEUE_COUNT, 1);
> - need_resume = true;
> + need_resume |= !xe_exec_queue_idle_skip_suspend(q);
> q->ops->suspend(q);
> }
>
* [PATCH v2 5/7] drm/xe: Wait on in-syncs when switching to dma-fence mode
2025-12-12 18:28 [PATCH v2 0/7] Fix performance when pagefaults and 3d/display share resources Matthew Brost
` (3 preceding siblings ...)
2025-12-12 18:28 ` [PATCH v2 4/7] drm/xe: Skip exec queue schedule toggle if queue is idle during suspend Matthew Brost
@ 2025-12-12 18:28 ` Matthew Brost
2025-12-15 10:32 ` Thomas Hellström
2025-12-12 18:28 ` [PATCH v2 6/7] drm/xe: Add GT stats ktime helpers Matthew Brost
` (5 subsequent siblings)
10 siblings, 1 reply; 24+ messages in thread
From: Matthew Brost @ 2025-12-12 18:28 UTC (permalink / raw)
To: intel-xe; +Cc: francois.dugast, thomas.hellstrom, michal.mrozek
If a dma-fence submission has in-fences and pagefault queues are running
work, there is little incentive to kick the pagefault queues off the
hardware until the dma-fence submission is ready to run. Therefore, wait
on the in-fences of the dma-fence submission before removing the
pagefault queues from the hardware.
v2:
- Fix kernel doc (CI)
- Don't wait under lock (Thomas)
- Make wait interruptible
Suggested-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_exec.c | 9 +++--
drivers/gpu/drm/xe/xe_hw_engine_group.c | 44 +++++++++++++++++++++----
drivers/gpu/drm/xe/xe_hw_engine_group.h | 4 ++-
drivers/gpu/drm/xe/xe_sync.c | 29 ++++++++++++++++
drivers/gpu/drm/xe/xe_sync.h | 2 ++
5 files changed, 78 insertions(+), 10 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_exec.c b/drivers/gpu/drm/xe/xe_exec.c
index 4d81210e41f5..d462add2d005 100644
--- a/drivers/gpu/drm/xe/xe_exec.c
+++ b/drivers/gpu/drm/xe/xe_exec.c
@@ -121,7 +121,7 @@ int xe_exec_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
u64 addresses[XE_HW_ENGINE_MAX_INSTANCE];
struct drm_gpuvm_exec vm_exec = {.extra.fn = xe_exec_fn};
struct drm_exec *exec = &vm_exec.exec;
- u32 i, num_syncs, num_ufence = 0;
+ u32 i, num_syncs, num_in_sync = 0, num_ufence = 0;
struct xe_validation_ctx ctx;
struct xe_sched_job *job;
struct xe_vm *vm;
@@ -182,6 +182,9 @@ int xe_exec_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
if (xe_sync_is_ufence(&syncs[num_syncs]))
num_ufence++;
+
+ if (!num_in_sync && xe_sync_needs_wait(&syncs[num_syncs]))
+ num_in_sync++;
}
if (XE_IOCTL_DBG(xe, num_ufence > 1)) {
@@ -202,7 +205,9 @@ int xe_exec_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
mode = xe_hw_engine_group_find_exec_mode(q);
if (mode == EXEC_MODE_DMA_FENCE) {
- err = xe_hw_engine_group_get_mode(group, mode, &previous_mode);
+ err = xe_hw_engine_group_get_mode(group, mode, &previous_mode,
+ syncs, num_in_sync ?
+ num_syncs : 0);
if (err)
goto err_syncs;
}
diff --git a/drivers/gpu/drm/xe/xe_hw_engine_group.c b/drivers/gpu/drm/xe/xe_hw_engine_group.c
index 4d9263a1a208..022fc0c30d38 100644
--- a/drivers/gpu/drm/xe/xe_hw_engine_group.c
+++ b/drivers/gpu/drm/xe/xe_hw_engine_group.c
@@ -11,6 +11,7 @@
#include "xe_gt.h"
#include "xe_gt_stats.h"
#include "xe_hw_engine_group.h"
+#include "xe_sync.h"
#include "xe_vm.h"
static void
@@ -21,7 +22,8 @@ hw_engine_group_resume_lr_jobs_func(struct work_struct *w)
int err;
enum xe_hw_engine_group_execution_mode previous_mode;
- err = xe_hw_engine_group_get_mode(group, EXEC_MODE_LR, &previous_mode);
+ err = xe_hw_engine_group_get_mode(group, EXEC_MODE_LR, &previous_mode,
+ NULL, 0);
if (err)
return;
@@ -189,10 +191,12 @@ void xe_hw_engine_group_resume_faulting_lr_jobs(struct xe_hw_engine_group *group
/**
* xe_hw_engine_group_suspend_faulting_lr_jobs() - Suspend the faulting LR jobs of this group
* @group: The hw engine group
+ * @has_deps: dma-fence job triggering suspend has dependencies
*
* Return: 0 on success, negative error code on error.
*/
-static int xe_hw_engine_group_suspend_faulting_lr_jobs(struct xe_hw_engine_group *group)
+static int xe_hw_engine_group_suspend_faulting_lr_jobs(struct xe_hw_engine_group *group,
+ bool has_deps)
{
int err;
struct xe_exec_queue *q;
@@ -201,11 +205,19 @@ static int xe_hw_engine_group_suspend_faulting_lr_jobs(struct xe_hw_engine_group
lockdep_assert_held_write(&group->mode_sem);
list_for_each_entry(q, &group->exec_queue_list, hw_engine_group_link) {
+ bool idle_skip_suspend;
+
if (!xe_vm_in_fault_mode(q->vm))
continue;
+ idle_skip_suspend = xe_exec_queue_idle_skip_suspend(q);
+ if (!idle_skip_suspend && has_deps)
+ return -EAGAIN;
+
xe_gt_stats_incr(q->gt, XE_GT_STATS_ID_HW_ENGINE_GROUP_SUSPEND_LR_QUEUE_COUNT, 1);
- need_resume |= !xe_exec_queue_idle_skip_suspend(q);
+
+
+ need_resume |= !idle_skip_suspend;
q->ops->suspend(q);
}
@@ -258,7 +270,7 @@ static int xe_hw_engine_group_wait_for_dma_fence_jobs(struct xe_hw_engine_group
return 0;
}
-static int switch_mode(struct xe_hw_engine_group *group)
+static int switch_mode(struct xe_hw_engine_group *group, bool has_deps)
{
int err = 0;
enum xe_hw_engine_group_execution_mode new_mode;
@@ -268,7 +280,8 @@ static int switch_mode(struct xe_hw_engine_group *group)
switch (group->cur_mode) {
case EXEC_MODE_LR:
new_mode = EXEC_MODE_DMA_FENCE;
- err = xe_hw_engine_group_suspend_faulting_lr_jobs(group);
+ err = xe_hw_engine_group_suspend_faulting_lr_jobs(group,
+ has_deps);
break;
case EXEC_MODE_DMA_FENCE:
new_mode = EXEC_MODE_LR;
@@ -289,14 +302,18 @@ static int switch_mode(struct xe_hw_engine_group *group)
* @group: The hw engine group
* @new_mode: The new execution mode
* @previous_mode: Pointer to the previous mode provided for use by caller
+ * @syncs: Syncs from exec IOCTL
+ * @num_syncs: Number of syncs from exec IOCTL
*
* Return: 0 if successful, -EINTR if locking failed.
*/
int xe_hw_engine_group_get_mode(struct xe_hw_engine_group *group,
enum xe_hw_engine_group_execution_mode new_mode,
- enum xe_hw_engine_group_execution_mode *previous_mode)
+ enum xe_hw_engine_group_execution_mode *previous_mode,
+ struct xe_sync_entry *syncs, int num_syncs)
__acquires(&group->mode_sem)
{
+ bool has_deps = !!num_syncs;
int err = down_read_interruptible(&group->mode_sem);
if (err)
@@ -306,14 +323,27 @@ __acquires(&group->mode_sem)
if (new_mode != group->cur_mode) {
up_read(&group->mode_sem);
+retry:
err = down_write_killable(&group->mode_sem);
if (err)
return err;
if (new_mode != group->cur_mode) {
- err = switch_mode(group);
+ err = switch_mode(group, has_deps);
if (err) {
up_write(&group->mode_sem);
+ if (err == -EAGAIN) {
+ int i;
+
+ for (i = 0; i < num_syncs; ++i) {
+ err = xe_sync_entry_wait(syncs + i);
+ if (err)
+ return err;
+ }
+
+ has_deps = false;
+ goto retry;
+ }
return err;
}
}
diff --git a/drivers/gpu/drm/xe/xe_hw_engine_group.h b/drivers/gpu/drm/xe/xe_hw_engine_group.h
index 797ee81acbf2..8b17ccd30b70 100644
--- a/drivers/gpu/drm/xe/xe_hw_engine_group.h
+++ b/drivers/gpu/drm/xe/xe_hw_engine_group.h
@@ -11,6 +11,7 @@
struct drm_device;
struct xe_exec_queue;
struct xe_gt;
+struct xe_sync_entry;
int xe_hw_engine_setup_groups(struct xe_gt *gt);
@@ -19,7 +20,8 @@ void xe_hw_engine_group_del_exec_queue(struct xe_hw_engine_group *group, struct
int xe_hw_engine_group_get_mode(struct xe_hw_engine_group *group,
enum xe_hw_engine_group_execution_mode new_mode,
- enum xe_hw_engine_group_execution_mode *previous_mode);
+ enum xe_hw_engine_group_execution_mode *previous_mode,
+ struct xe_sync_entry *syncs, int num_syncs);
void xe_hw_engine_group_put(struct xe_hw_engine_group *group);
enum xe_hw_engine_group_execution_mode
diff --git a/drivers/gpu/drm/xe/xe_sync.c b/drivers/gpu/drm/xe/xe_sync.c
index 1fc4fa278b78..d970e11962ff 100644
--- a/drivers/gpu/drm/xe/xe_sync.c
+++ b/drivers/gpu/drm/xe/xe_sync.c
@@ -228,6 +228,35 @@ int xe_sync_entry_add_deps(struct xe_sync_entry *sync, struct xe_sched_job *job)
return 0;
}
+/**
+ * xe_sync_entry_wait() - Wait on in-sync
+ * @sync: Sync object
+ *
+ * If the sync is in an in-sync, wait on the sync to signal.
+ *
+ * Return: 0 on success, -ERESTARTSYS on failure (interruption)
+ */
+int xe_sync_entry_wait(struct xe_sync_entry *sync)
+{
+ if (sync->flags & DRM_XE_SYNC_FLAG_SIGNAL)
+ return 0;
+
+ return dma_fence_wait(sync->fence, true);
+}
+
+/**
+ * xe_sync_needs_wait() - Sync needs a wait (input dma-fence not signaled)
+ * @sync: Sync object
+ *
+ * Return: True if sync needs a wait, False otherwise
+ */
+bool xe_sync_needs_wait(struct xe_sync_entry *sync)
+{
+
+ return !(sync->flags & DRM_XE_SYNC_FLAG_SIGNAL) &&
+ !test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &sync->fence->flags);
+}
+
void xe_sync_entry_signal(struct xe_sync_entry *sync, struct dma_fence *fence)
{
if (!(sync->flags & DRM_XE_SYNC_FLAG_SIGNAL))
diff --git a/drivers/gpu/drm/xe/xe_sync.h b/drivers/gpu/drm/xe/xe_sync.h
index 51f2d803e977..6b949194acff 100644
--- a/drivers/gpu/drm/xe/xe_sync.h
+++ b/drivers/gpu/drm/xe/xe_sync.h
@@ -29,6 +29,8 @@ int xe_sync_entry_add_deps(struct xe_sync_entry *sync,
struct xe_sched_job *job);
void xe_sync_entry_signal(struct xe_sync_entry *sync,
struct dma_fence *fence);
+int xe_sync_entry_wait(struct xe_sync_entry *sync);
+bool xe_sync_needs_wait(struct xe_sync_entry *sync);
void xe_sync_entry_cleanup(struct xe_sync_entry *sync);
struct dma_fence *
xe_sync_in_fence_get(struct xe_sync_entry *sync, int num_sync,
--
2.34.1
* Re: [PATCH v2 5/7] drm/xe: Wait on in-syncs when switching to dma-fence mode
2025-12-12 18:28 ` [PATCH v2 5/7] drm/xe: Wait on in-syncs when switching to dma-fence mode Matthew Brost
@ 2025-12-15 10:32 ` Thomas Hellström
2025-12-15 21:46 ` Matthew Brost
0 siblings, 1 reply; 24+ messages in thread
From: Thomas Hellström @ 2025-12-15 10:32 UTC (permalink / raw)
To: Matthew Brost, intel-xe; +Cc: francois.dugast, michal.mrozek
On Fri, 2025-12-12 at 10:28 -0800, Matthew Brost wrote:
> If a dma-fence submission has in-fences and pagefault queues are
> running
> work, there is little incentive to kick the pagefault queues off the
> hardware until the dma-fence submission is ready to run. Therefore,
> wait
> on the in-fences of the dma-fence submission before removing the
> pagefault queues from the hardware.
>
> v2:
> - Fix kernel doc (CI)
> - Don't wait under lock (Thomas)
> - Make wait interruptible
>
> Suggested-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> ---
> drivers/gpu/drm/xe/xe_exec.c | 9 +++--
> drivers/gpu/drm/xe/xe_hw_engine_group.c | 44 +++++++++++++++++++++--
> --
> drivers/gpu/drm/xe/xe_hw_engine_group.h | 4 ++-
> drivers/gpu/drm/xe/xe_sync.c | 29 ++++++++++++++++
> drivers/gpu/drm/xe/xe_sync.h | 2 ++
> 5 files changed, 78 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_exec.c
> b/drivers/gpu/drm/xe/xe_exec.c
> index 4d81210e41f5..d462add2d005 100644
> --- a/drivers/gpu/drm/xe/xe_exec.c
> +++ b/drivers/gpu/drm/xe/xe_exec.c
> @@ -121,7 +121,7 @@ int xe_exec_ioctl(struct drm_device *dev, void
> *data, struct drm_file *file)
> u64 addresses[XE_HW_ENGINE_MAX_INSTANCE];
> struct drm_gpuvm_exec vm_exec = {.extra.fn = xe_exec_fn};
> struct drm_exec *exec = &vm_exec.exec;
> - u32 i, num_syncs, num_ufence = 0;
> + u32 i, num_syncs, num_in_sync = 0, num_ufence = 0;
> struct xe_validation_ctx ctx;
> struct xe_sched_job *job;
> struct xe_vm *vm;
> @@ -182,6 +182,9 @@ int xe_exec_ioctl(struct drm_device *dev, void
> *data, struct drm_file *file)
>
> if (xe_sync_is_ufence(&syncs[num_syncs]))
> num_ufence++;
> +
> + if (!num_in_sync &&
> xe_sync_needs_wait(&syncs[num_syncs]))
> + num_in_sync++;
> }
>
> if (XE_IOCTL_DBG(xe, num_ufence > 1)) {
> @@ -202,7 +205,9 @@ int xe_exec_ioctl(struct drm_device *dev, void
> *data, struct drm_file *file)
> mode = xe_hw_engine_group_find_exec_mode(q);
>
> if (mode == EXEC_MODE_DMA_FENCE) {
> - err = xe_hw_engine_group_get_mode(group, mode,
> &previous_mode);
> + err = xe_hw_engine_group_get_mode(group, mode,
> &previous_mode,
> + syncs, num_in_sync
> ?
> + num_syncs : 0);
> if (err)
> goto err_syncs;
> }
> diff --git a/drivers/gpu/drm/xe/xe_hw_engine_group.c
> b/drivers/gpu/drm/xe/xe_hw_engine_group.c
> index 4d9263a1a208..022fc0c30d38 100644
> --- a/drivers/gpu/drm/xe/xe_hw_engine_group.c
> +++ b/drivers/gpu/drm/xe/xe_hw_engine_group.c
> @@ -11,6 +11,7 @@
> #include "xe_gt.h"
> #include "xe_gt_stats.h"
> #include "xe_hw_engine_group.h"
> +#include "xe_sync.h"
> #include "xe_vm.h"
>
> static void
> @@ -21,7 +22,8 @@ hw_engine_group_resume_lr_jobs_func(struct
> work_struct *w)
> int err;
> enum xe_hw_engine_group_execution_mode previous_mode;
>
> - err = xe_hw_engine_group_get_mode(group, EXEC_MODE_LR,
> &previous_mode);
> + err = xe_hw_engine_group_get_mode(group, EXEC_MODE_LR,
> &previous_mode,
> + NULL, 0);
> if (err)
> return;
>
> @@ -189,10 +191,12 @@ void
> xe_hw_engine_group_resume_faulting_lr_jobs(struct xe_hw_engine_group
> *group
> /**
> * xe_hw_engine_group_suspend_faulting_lr_jobs() - Suspend the
> faulting LR jobs of this group
> * @group: The hw engine group
> + * @has_deps: dma-fence job triggering suspend has dependencies
> *
> * Return: 0 on success, negative error code on error.
> */
> -static int xe_hw_engine_group_suspend_faulting_lr_jobs(struct
> xe_hw_engine_group *group)
> +static int xe_hw_engine_group_suspend_faulting_lr_jobs(struct
> xe_hw_engine_group *group,
> + bool
> has_deps)
> {
> int err;
> struct xe_exec_queue *q;
> @@ -201,11 +205,19 @@ static int
> xe_hw_engine_group_suspend_faulting_lr_jobs(struct xe_hw_engine_group
> lockdep_assert_held_write(&group->mode_sem);
>
> list_for_each_entry(q, &group->exec_queue_list,
> hw_engine_group_link) {
> + bool idle_skip_suspend;
> +
> if (!xe_vm_in_fault_mode(q->vm))
> continue;
>
> + idle_skip_suspend =
> xe_exec_queue_idle_skip_suspend(q);
> + if (!idle_skip_suspend && has_deps)
> + return -EAGAIN;
> +
> xe_gt_stats_incr(q->gt,
> XE_GT_STATS_ID_HW_ENGINE_GROUP_SUSPEND_LR_QUEUE_COUNT, 1);
> - need_resume |= !xe_exec_queue_idle_skip_suspend(q);
> +
> +
> + need_resume |= !idle_skip_suspend;
> q->ops->suspend(q);
> }
>
> @@ -258,7 +270,7 @@ static int
> xe_hw_engine_group_wait_for_dma_fence_jobs(struct xe_hw_engine_group
> return 0;
> }
>
> -static int switch_mode(struct xe_hw_engine_group *group)
> +static int switch_mode(struct xe_hw_engine_group *group, bool
> has_deps)
> {
> int err = 0;
> enum xe_hw_engine_group_execution_mode new_mode;
> @@ -268,7 +280,8 @@ static int switch_mode(struct xe_hw_engine_group
> *group)
> switch (group->cur_mode) {
> case EXEC_MODE_LR:
> new_mode = EXEC_MODE_DMA_FENCE;
> - err =
> xe_hw_engine_group_suspend_faulting_lr_jobs(group);
> + err =
> xe_hw_engine_group_suspend_faulting_lr_jobs(group,
> +
> has_deps);
> break;
> case EXEC_MODE_DMA_FENCE:
> new_mode = EXEC_MODE_LR;
> @@ -289,14 +302,18 @@ static int switch_mode(struct
> xe_hw_engine_group *group)
> * @group: The hw engine group
> * @new_mode: The new execution mode
> * @previous_mode: Pointer to the previous mode provided for use by
> caller
> + * @syncs: Syncs from exec IOCTL
> + * @num_syncs: Number of syncs from exec IOCTL
> *
> * Return: 0 if successful, -EINTR if locking failed.
> */
> int xe_hw_engine_group_get_mode(struct xe_hw_engine_group *group,
> enum
> xe_hw_engine_group_execution_mode new_mode,
> - enum
> xe_hw_engine_group_execution_mode *previous_mode)
> + enum
> xe_hw_engine_group_execution_mode *previous_mode,
> + struct xe_sync_entry *syncs, int
> num_syncs)
> __acquires(&group->mode_sem)
> {
> + bool has_deps = !!num_syncs;
> int err = down_read_interruptible(&group->mode_sem);
>
> if (err)
> @@ -306,14 +323,27 @@ __acquires(&group->mode_sem)
>
> if (new_mode != group->cur_mode) {
> up_read(&group->mode_sem);
> +retry:
> err = down_write_killable(&group->mode_sem);
> if (err)
> return err;
>
> if (new_mode != group->cur_mode) {
> - err = switch_mode(group);
> + err = switch_mode(group, has_deps);
> if (err) {
> up_write(&group->mode_sem);
> + if (err == -EAGAIN) {
> + int i;
> +
> + for (i = 0; i < num_syncs;
> ++i) {
> + err =
> xe_sync_entry_wait(syncs + i);
> + if (err)
> + return err;
> + }
> +
> + has_deps = false;
> + goto retry;
> + }
> return err;
> }
> }
> diff --git a/drivers/gpu/drm/xe/xe_hw_engine_group.h
> b/drivers/gpu/drm/xe/xe_hw_engine_group.h
> index 797ee81acbf2..8b17ccd30b70 100644
> --- a/drivers/gpu/drm/xe/xe_hw_engine_group.h
> +++ b/drivers/gpu/drm/xe/xe_hw_engine_group.h
> @@ -11,6 +11,7 @@
> struct drm_device;
> struct xe_exec_queue;
> struct xe_gt;
> +struct xe_sync_entry;
>
> int xe_hw_engine_setup_groups(struct xe_gt *gt);
>
> @@ -19,7 +20,8 @@ void xe_hw_engine_group_del_exec_queue(struct
> xe_hw_engine_group *group, struct
>
> int xe_hw_engine_group_get_mode(struct xe_hw_engine_group *group,
> enum
> xe_hw_engine_group_execution_mode new_mode,
> - enum
> xe_hw_engine_group_execution_mode *previous_mode);
> + enum
> xe_hw_engine_group_execution_mode *previous_mode,
> + struct xe_sync_entry *syncs, int
> num_syncs);
> void xe_hw_engine_group_put(struct xe_hw_engine_group *group);
>
> enum xe_hw_engine_group_execution_mode
> diff --git a/drivers/gpu/drm/xe/xe_sync.c
> b/drivers/gpu/drm/xe/xe_sync.c
> index 1fc4fa278b78..d970e11962ff 100644
> --- a/drivers/gpu/drm/xe/xe_sync.c
> +++ b/drivers/gpu/drm/xe/xe_sync.c
> @@ -228,6 +228,35 @@ int xe_sync_entry_add_deps(struct xe_sync_entry
> *sync, struct xe_sched_job *job)
> return 0;
> }
>
> +/**
> + * xe_sync_entry_wait() - Wait on in-sync
> + * @sync: Sync object
> + *
> + * If the sync is in an in-sync, wait on the sync to signal.
> + *
> + * Return: 0 on success, -ERESTARTSYS on failure (interruption)
> + */
> +int xe_sync_entry_wait(struct xe_sync_entry *sync)
> +{
> + if (sync->flags & DRM_XE_SYNC_FLAG_SIGNAL)
> + return 0;
> +
> + return dma_fence_wait(sync->fence, true);
> +}
> +
> +/**
> + * xe_sync_needs_wait() - Sync needs a wait (input dma-fence not
> signaled)
> + * @sync: Sync object
> + *
> + * Return: True if sync needs a wait, False otherwise
> + */
> +bool xe_sync_needs_wait(struct xe_sync_entry *sync)
> +{
> +
> + return !(sync->flags & DRM_XE_SYNC_FLAG_SIGNAL) &&
> + !test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &sync->fence-
> >flags);
dma_fence_is_signaled() ?
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> +}
> +
> void xe_sync_entry_signal(struct xe_sync_entry *sync, struct
> dma_fence *fence)
> {
> if (!(sync->flags & DRM_XE_SYNC_FLAG_SIGNAL))
> diff --git a/drivers/gpu/drm/xe/xe_sync.h
> b/drivers/gpu/drm/xe/xe_sync.h
> index 51f2d803e977..6b949194acff 100644
> --- a/drivers/gpu/drm/xe/xe_sync.h
> +++ b/drivers/gpu/drm/xe/xe_sync.h
> @@ -29,6 +29,8 @@ int xe_sync_entry_add_deps(struct xe_sync_entry
> *sync,
> struct xe_sched_job *job);
> void xe_sync_entry_signal(struct xe_sync_entry *sync,
> struct dma_fence *fence);
> +int xe_sync_entry_wait(struct xe_sync_entry *sync);
> +bool xe_sync_needs_wait(struct xe_sync_entry *sync);
> void xe_sync_entry_cleanup(struct xe_sync_entry *sync);
> struct dma_fence *
> xe_sync_in_fence_get(struct xe_sync_entry *sync, int num_sync,
* Re: [PATCH v2 5/7] drm/xe: Wait on in-syncs when switching to dma-fence mode
2025-12-15 10:32 ` Thomas Hellström
@ 2025-12-15 21:46 ` Matthew Brost
2025-12-15 21:48 ` Thomas Hellström
0 siblings, 1 reply; 24+ messages in thread
From: Matthew Brost @ 2025-12-15 21:46 UTC (permalink / raw)
To: Thomas Hellström; +Cc: intel-xe, francois.dugast, michal.mrozek
On Mon, Dec 15, 2025 at 11:32:23AM +0100, Thomas Hellström wrote:
> On Fri, 2025-12-12 at 10:28 -0800, Matthew Brost wrote:
> > If a dma-fence submission has in-fences and pagefault queues are
> > running
> > work, there is little incentive to kick the pagefault queues off the
> > hardware until the dma-fence submission is ready to run. Therefore,
> > wait
> > on the in-fences of the dma-fence submission before removing the
> > pagefault queues from the hardware.
> >
> > v2:
> > - Fix kernel doc (CI)
> > - Don't wait under lock (Thomas)
> > - Make wait interruptible
> >
> > Suggested-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > ---
> > drivers/gpu/drm/xe/xe_exec.c | 9 +++--
> > drivers/gpu/drm/xe/xe_hw_engine_group.c | 44 +++++++++++++++++++++--
> > --
> > drivers/gpu/drm/xe/xe_hw_engine_group.h | 4 ++-
> > drivers/gpu/drm/xe/xe_sync.c | 29 ++++++++++++++++
> > drivers/gpu/drm/xe/xe_sync.h | 2 ++
> > 5 files changed, 78 insertions(+), 10 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/xe/xe_exec.c
> > b/drivers/gpu/drm/xe/xe_exec.c
> > index 4d81210e41f5..d462add2d005 100644
> > --- a/drivers/gpu/drm/xe/xe_exec.c
> > +++ b/drivers/gpu/drm/xe/xe_exec.c
> > @@ -121,7 +121,7 @@ int xe_exec_ioctl(struct drm_device *dev, void
> > *data, struct drm_file *file)
> > u64 addresses[XE_HW_ENGINE_MAX_INSTANCE];
> > struct drm_gpuvm_exec vm_exec = {.extra.fn = xe_exec_fn};
> > struct drm_exec *exec = &vm_exec.exec;
> > - u32 i, num_syncs, num_ufence = 0;
> > + u32 i, num_syncs, num_in_sync = 0, num_ufence = 0;
> > struct xe_validation_ctx ctx;
> > struct xe_sched_job *job;
> > struct xe_vm *vm;
> > @@ -182,6 +182,9 @@ int xe_exec_ioctl(struct drm_device *dev, void
> > *data, struct drm_file *file)
> >
> > if (xe_sync_is_ufence(&syncs[num_syncs]))
> > num_ufence++;
> > +
> > + if (!num_in_sync &&
> > xe_sync_needs_wait(&syncs[num_syncs]))
> > + num_in_sync++;
> > }
> >
> > if (XE_IOCTL_DBG(xe, num_ufence > 1)) {
> > @@ -202,7 +205,9 @@ int xe_exec_ioctl(struct drm_device *dev, void
> > *data, struct drm_file *file)
> > mode = xe_hw_engine_group_find_exec_mode(q);
> >
> > if (mode == EXEC_MODE_DMA_FENCE) {
> > - err = xe_hw_engine_group_get_mode(group, mode,
> > &previous_mode);
> > + err = xe_hw_engine_group_get_mode(group, mode,
> > &previous_mode,
> > + syncs, num_in_sync
> > ?
> > + num_syncs : 0);
> > if (err)
> > goto err_syncs;
> > }
> > diff --git a/drivers/gpu/drm/xe/xe_hw_engine_group.c
> > b/drivers/gpu/drm/xe/xe_hw_engine_group.c
> > index 4d9263a1a208..022fc0c30d38 100644
> > --- a/drivers/gpu/drm/xe/xe_hw_engine_group.c
> > +++ b/drivers/gpu/drm/xe/xe_hw_engine_group.c
> > @@ -11,6 +11,7 @@
> > #include "xe_gt.h"
> > #include "xe_gt_stats.h"
> > #include "xe_hw_engine_group.h"
> > +#include "xe_sync.h"
> > #include "xe_vm.h"
> >
> > static void
> > @@ -21,7 +22,8 @@ hw_engine_group_resume_lr_jobs_func(struct
> > work_struct *w)
> > int err;
> > enum xe_hw_engine_group_execution_mode previous_mode;
> >
> > - err = xe_hw_engine_group_get_mode(group, EXEC_MODE_LR,
> > &previous_mode);
> > + err = xe_hw_engine_group_get_mode(group, EXEC_MODE_LR,
> > &previous_mode,
> > + NULL, 0);
> > if (err)
> > return;
> >
> > @@ -189,10 +191,12 @@ void
> > xe_hw_engine_group_resume_faulting_lr_jobs(struct xe_hw_engine_group
> > *group
> > /**
> > * xe_hw_engine_group_suspend_faulting_lr_jobs() - Suspend the
> > faulting LR jobs of this group
> > * @group: The hw engine group
> > + * @has_deps: dma-fence job triggering suspend has dependencies
> > *
> > * Return: 0 on success, negative error code on error.
> > */
> > -static int xe_hw_engine_group_suspend_faulting_lr_jobs(struct
> > xe_hw_engine_group *group)
> > +static int xe_hw_engine_group_suspend_faulting_lr_jobs(struct
> > xe_hw_engine_group *group,
> > + bool
> > has_deps)
> > {
> > int err;
> > struct xe_exec_queue *q;
> > @@ -201,11 +205,19 @@ static int
> > xe_hw_engine_group_suspend_faulting_lr_jobs(struct xe_hw_engine_group
> > lockdep_assert_held_write(&group->mode_sem);
> >
> > list_for_each_entry(q, &group->exec_queue_list,
> > hw_engine_group_link) {
> > + bool idle_skip_suspend;
> > +
> > if (!xe_vm_in_fault_mode(q->vm))
> > continue;
> >
> > + idle_skip_suspend =
> > xe_exec_queue_idle_skip_suspend(q);
> > + if (!idle_skip_suspend && has_deps)
> > + return -EAGAIN;
> > +
> > xe_gt_stats_incr(q->gt,
> > XE_GT_STATS_ID_HW_ENGINE_GROUP_SUSPEND_LR_QUEUE_COUNT, 1);
> > - need_resume |= !xe_exec_queue_idle_skip_suspend(q);
> > +
> > +
> > + need_resume |= !idle_skip_suspend;
> > q->ops->suspend(q);
> > }
> >
> > @@ -258,7 +270,7 @@ static int
> > xe_hw_engine_group_wait_for_dma_fence_jobs(struct xe_hw_engine_group
> > return 0;
> > }
> >
> > -static int switch_mode(struct xe_hw_engine_group *group)
> > +static int switch_mode(struct xe_hw_engine_group *group, bool
> > has_deps)
> > {
> > int err = 0;
> > enum xe_hw_engine_group_execution_mode new_mode;
> > @@ -268,7 +280,8 @@ static int switch_mode(struct xe_hw_engine_group
> > *group)
> > switch (group->cur_mode) {
> > case EXEC_MODE_LR:
> > new_mode = EXEC_MODE_DMA_FENCE;
> > - err =
> > xe_hw_engine_group_suspend_faulting_lr_jobs(group);
> > + err =
> > xe_hw_engine_group_suspend_faulting_lr_jobs(group,
> > +
> > has_deps);
> > break;
> > case EXEC_MODE_DMA_FENCE:
> > new_mode = EXEC_MODE_LR;
> > @@ -289,14 +302,18 @@ static int switch_mode(struct
> > xe_hw_engine_group *group)
> > * @group: The hw engine group
> > * @new_mode: The new execution mode
> > * @previous_mode: Pointer to the previous mode provided for use by
> > caller
> > + * @syncs: Syncs from exec IOCTL
> > + * @num_syncs: Number of syncs from exec IOCTL
> > *
> > * Return: 0 if successful, -EINTR if locking failed.
> > */
> > int xe_hw_engine_group_get_mode(struct xe_hw_engine_group *group,
> > enum
> > xe_hw_engine_group_execution_mode new_mode,
> > - enum
> > xe_hw_engine_group_execution_mode *previous_mode)
> > + enum
> > xe_hw_engine_group_execution_mode *previous_mode,
> > + struct xe_sync_entry *syncs, int
> > num_syncs)
> > __acquires(&group->mode_sem)
> > {
> > + bool has_deps = !!num_syncs;
> > int err = down_read_interruptible(&group->mode_sem);
> >
> > if (err)
> > @@ -306,14 +323,27 @@ __acquires(&group->mode_sem)
> >
> > if (new_mode != group->cur_mode) {
> > up_read(&group->mode_sem);
> > +retry:
> > err = down_write_killable(&group->mode_sem);
> > if (err)
> > return err;
> >
> > if (new_mode != group->cur_mode) {
> > - err = switch_mode(group);
> > + err = switch_mode(group, has_deps);
> > if (err) {
> > up_write(&group->mode_sem);
> > + if (err == -EAGAIN) {
> > + int i;
> > +
> > +				for (i = 0; i < num_syncs; ++i) {
> > +					err = xe_sync_entry_wait(syncs + i);
> > +					if (err)
> > +						return err;
> > +				}
> > +
> > + has_deps = false;
> > + goto retry;
> > + }
> > return err;
> > }
> > }
> > diff --git a/drivers/gpu/drm/xe/xe_hw_engine_group.h
> > b/drivers/gpu/drm/xe/xe_hw_engine_group.h
> > index 797ee81acbf2..8b17ccd30b70 100644
> > --- a/drivers/gpu/drm/xe/xe_hw_engine_group.h
> > +++ b/drivers/gpu/drm/xe/xe_hw_engine_group.h
> > @@ -11,6 +11,7 @@
> > struct drm_device;
> > struct xe_exec_queue;
> > struct xe_gt;
> > +struct xe_sync_entry;
> >
> > int xe_hw_engine_setup_groups(struct xe_gt *gt);
> >
> > @@ -19,7 +20,8 @@ void xe_hw_engine_group_del_exec_queue(struct xe_hw_engine_group *group, struct
> >
> > int xe_hw_engine_group_get_mode(struct xe_hw_engine_group *group,
> > 				enum xe_hw_engine_group_execution_mode new_mode,
> > -				enum xe_hw_engine_group_execution_mode *previous_mode);
> > +				enum xe_hw_engine_group_execution_mode *previous_mode,
> > +				struct xe_sync_entry *syncs, int num_syncs);
> > void xe_hw_engine_group_put(struct xe_hw_engine_group *group);
> >
> > enum xe_hw_engine_group_execution_mode
> > diff --git a/drivers/gpu/drm/xe/xe_sync.c
> > b/drivers/gpu/drm/xe/xe_sync.c
> > index 1fc4fa278b78..d970e11962ff 100644
> > --- a/drivers/gpu/drm/xe/xe_sync.c
> > +++ b/drivers/gpu/drm/xe/xe_sync.c
> > @@ -228,6 +228,35 @@ int xe_sync_entry_add_deps(struct xe_sync_entry *sync, struct xe_sched_job *job)
> > return 0;
> > }
> >
> > +/**
> > + * xe_sync_entry_wait() - Wait on in-sync
> > + * @sync: Sync object
> > + *
> > + * If the sync is an in-sync, wait for it to signal.
> > + *
> > + * Return: 0 on success, -ERESTARTSYS on failure (interruption)
> > + */
> > +int xe_sync_entry_wait(struct xe_sync_entry *sync)
> > +{
> > + if (sync->flags & DRM_XE_SYNC_FLAG_SIGNAL)
> > + return 0;
> > +
> > + return dma_fence_wait(sync->fence, true);
> > +}
> > +
> > +/**
> > + * xe_sync_needs_wait() - Sync needs a wait (input dma-fence not signaled)
> > + * @sync: Sync object
> > + *
> > + * Return: True if sync needs a wait, False otherwise
> > + */
> > +bool xe_sync_needs_wait(struct xe_sync_entry *sync)
> > +{
> > +
> > +	return !(sync->flags & DRM_XE_SYNC_FLAG_SIGNAL) &&
> > +		!test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &sync->fence->flags);
>
> dma_fence_is_signaled() ?
>
I don't want to signal the fence here. Philipp Stanner merged a
dma-fence helper that does this check to drm-misc-next, but that change
hasn't made it to drm-xe-next yet. I have a patch built on top of his
series to convert Xe to use these helpers; when I rebase that patch
I'll fix up this code too.
Matt
> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>
> > +}
> > +
> > void xe_sync_entry_signal(struct xe_sync_entry *sync, struct dma_fence *fence)
> > {
> > if (!(sync->flags & DRM_XE_SYNC_FLAG_SIGNAL))
> > diff --git a/drivers/gpu/drm/xe/xe_sync.h
> > b/drivers/gpu/drm/xe/xe_sync.h
> > index 51f2d803e977..6b949194acff 100644
> > --- a/drivers/gpu/drm/xe/xe_sync.h
> > +++ b/drivers/gpu/drm/xe/xe_sync.h
> > @@ -29,6 +29,8 @@ int xe_sync_entry_add_deps(struct xe_sync_entry *sync,
> > struct xe_sched_job *job);
> > void xe_sync_entry_signal(struct xe_sync_entry *sync,
> > struct dma_fence *fence);
> > +int xe_sync_entry_wait(struct xe_sync_entry *sync);
> > +bool xe_sync_needs_wait(struct xe_sync_entry *sync);
> > void xe_sync_entry_cleanup(struct xe_sync_entry *sync);
> > struct dma_fence *
> > xe_sync_in_fence_get(struct xe_sync_entry *sync, int num_sync,
>
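The drop-the-lock, wait, then retry flow in the quoted xe_hw_engine_group_get_mode() hunk can be sketched as a stand-alone toy in plain C. All toy_* names below are invented stand-ins for illustration, not the driver's API; the real code holds group->mode_sem and waits on dma-fences.

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Toy stand-ins; none of these names exist in Xe. */
struct toy_group {
	bool lr_queue_busy;	/* a faulting LR queue still has work queued */
};

static bool toy_deps_signaled;	/* models the exec's in-fences */

/*
 * Models switch_mode(): refuse to preempt a busy LR queue for a job
 * that has unsignaled dependencies and cannot run yet anyway.
 */
static int toy_switch_mode(struct toy_group *g, bool has_deps)
{
	if (g->lr_queue_busy && has_deps)
		return -EAGAIN;
	return 0;
}

/* Models waiting on each in-sync (dma_fence_wait) outside the lock. */
static int toy_wait_in_fences(struct toy_group *g)
{
	toy_deps_signaled = true;
	g->lr_queue_busy = false;	/* LR work finished in the meantime */
	return 0;
}

/* Models the retry loop in xe_hw_engine_group_get_mode(). */
static int toy_get_mode(struct toy_group *g, bool has_deps)
{
	int err;

retry:
	/* down_write_killable(&group->mode_sem) would go here */
	err = toy_switch_mode(g, has_deps);
	if (err == -EAGAIN) {
		/* up_write(), then wait on the in-fences outside the lock */
		err = toy_wait_in_fences(g);
		if (err)
			return err;
		has_deps = false;
		goto retry;
	}
	return err;
}
```

The key property the patch relies on is visible here: the -EAGAIN path waits with no lock held, and the second pass runs with has_deps cleared, so it cannot loop forever.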
^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 5/7] drm/xe: Wait on in-syncs when switching to dma-fence mode
2025-12-15 21:46 ` Matthew Brost
@ 2025-12-15 21:48 ` Thomas Hellström
2025-12-16 1:12 ` Matthew Brost
0 siblings, 1 reply; 24+ messages in thread
From: Thomas Hellström @ 2025-12-15 21:48 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe, francois.dugast, michal.mrozek
On Mon, 2025-12-15 at 13:46 -0800, Matthew Brost wrote:
> On Mon, Dec 15, 2025 at 11:32:23AM +0100, Thomas Hellström wrote:
> > On Fri, 2025-12-12 at 10:28 -0800, Matthew Brost wrote:
> > > If a dma-fence submission has in-fences and pagefault queues are
> > > running
> > > work, there is little incentive to kick the pagefault queues off
> > > the
> > > hardware until the dma-fence submission is ready to run.
> > > Therefore,
> > > wait
> > > on the in-fences of the dma-fence submission before removing the
> > > pagefault queues from the hardware.
> > >
> > > v2:
> > > - Fix kernel doc (CI)
> > > - Don't wait under lock (Thomas)
> > > - Make wait interruptible
> > >
> > > Suggested-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> > > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > > ---
> > > drivers/gpu/drm/xe/xe_exec.c | 9 +++--
> > > drivers/gpu/drm/xe/xe_hw_engine_group.c | 44
> > > +++++++++++++++++++++--
> > > --
> > > drivers/gpu/drm/xe/xe_hw_engine_group.h | 4 ++-
> > > drivers/gpu/drm/xe/xe_sync.c | 29 ++++++++++++++++
> > > drivers/gpu/drm/xe/xe_sync.h | 2 ++
> > > 5 files changed, 78 insertions(+), 10 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/xe/xe_exec.c
> > > b/drivers/gpu/drm/xe/xe_exec.c
> > > index 4d81210e41f5..d462add2d005 100644
> > > --- a/drivers/gpu/drm/xe/xe_exec.c
> > > +++ b/drivers/gpu/drm/xe/xe_exec.c
> > > @@ -121,7 +121,7 @@ int xe_exec_ioctl(struct drm_device *dev,
> > > void
> > > *data, struct drm_file *file)
> > > u64 addresses[XE_HW_ENGINE_MAX_INSTANCE];
> > > struct drm_gpuvm_exec vm_exec = {.extra.fn =
> > > xe_exec_fn};
> > > struct drm_exec *exec = &vm_exec.exec;
> > > - u32 i, num_syncs, num_ufence = 0;
> > > + u32 i, num_syncs, num_in_sync = 0, num_ufence = 0;
> > > struct xe_validation_ctx ctx;
> > > struct xe_sched_job *job;
> > > struct xe_vm *vm;
> > > @@ -182,6 +182,9 @@ int xe_exec_ioctl(struct drm_device *dev,
> > > void
> > > *data, struct drm_file *file)
> > >
> > > if (xe_sync_is_ufence(&syncs[num_syncs]))
> > > num_ufence++;
> > > +
> > > + if (!num_in_sync &&
> > > xe_sync_needs_wait(&syncs[num_syncs]))
> > > + num_in_sync++;
> > > }
> > >
> > > if (XE_IOCTL_DBG(xe, num_ufence > 1)) {
> > > @@ -202,7 +205,9 @@ int xe_exec_ioctl(struct drm_device *dev,
> > > void
> > > *data, struct drm_file *file)
> > > mode = xe_hw_engine_group_find_exec_mode(q);
> > >
> > > if (mode == EXEC_MODE_DMA_FENCE) {
> > > - err = xe_hw_engine_group_get_mode(group, mode,
> > > &previous_mode);
> > > + err = xe_hw_engine_group_get_mode(group, mode,
> > > &previous_mode,
> > > + syncs,
> > > num_in_sync
> > > ?
> > > + num_syncs :
> > > 0);
> > > if (err)
> > > goto err_syncs;
> > > }
> > > diff --git a/drivers/gpu/drm/xe/xe_hw_engine_group.c
> > > b/drivers/gpu/drm/xe/xe_hw_engine_group.c
> > > index 4d9263a1a208..022fc0c30d38 100644
> > > --- a/drivers/gpu/drm/xe/xe_hw_engine_group.c
> > > +++ b/drivers/gpu/drm/xe/xe_hw_engine_group.c
> > > @@ -11,6 +11,7 @@
> > > #include "xe_gt.h"
> > > #include "xe_gt_stats.h"
> > > #include "xe_hw_engine_group.h"
> > > +#include "xe_sync.h"
> > > #include "xe_vm.h"
> > >
> > > static void
> > > @@ -21,7 +22,8 @@ hw_engine_group_resume_lr_jobs_func(struct
> > > work_struct *w)
> > > int err;
> > > enum xe_hw_engine_group_execution_mode previous_mode;
> > >
> > > - err = xe_hw_engine_group_get_mode(group, EXEC_MODE_LR,
> > > &previous_mode);
> > > + err = xe_hw_engine_group_get_mode(group, EXEC_MODE_LR,
> > > &previous_mode,
> > > + NULL, 0);
> > > if (err)
> > > return;
> > >
> > > @@ -189,10 +191,12 @@ void
> > > xe_hw_engine_group_resume_faulting_lr_jobs(struct
> > > xe_hw_engine_group
> > > *group
> > > /**
> > > * xe_hw_engine_group_suspend_faulting_lr_jobs() - Suspend the
> > > faulting LR jobs of this group
> > > * @group: The hw engine group
> > > + * @has_deps: dma-fence job triggering suspend has dependencies
> > > *
> > > * Return: 0 on success, negative error code on error.
> > > */
> > > -static int xe_hw_engine_group_suspend_faulting_lr_jobs(struct
> > > xe_hw_engine_group *group)
> > > +static int xe_hw_engine_group_suspend_faulting_lr_jobs(struct
> > > xe_hw_engine_group *group,
> > > + bool
> > > has_deps)
> > > {
> > > int err;
> > > struct xe_exec_queue *q;
> > > @@ -201,11 +205,19 @@ static int
> > > xe_hw_engine_group_suspend_faulting_lr_jobs(struct
> > > xe_hw_engine_group
> > > lockdep_assert_held_write(&group->mode_sem);
> > >
> > > list_for_each_entry(q, &group->exec_queue_list,
> > > hw_engine_group_link) {
> > > + bool idle_skip_suspend;
> > > +
> > > if (!xe_vm_in_fault_mode(q->vm))
> > > continue;
> > >
> > > + idle_skip_suspend =
> > > xe_exec_queue_idle_skip_suspend(q);
> > > + if (!idle_skip_suspend && has_deps)
> > > + return -EAGAIN;
> > > +
> > > xe_gt_stats_incr(q->gt,
> > > XE_GT_STATS_ID_HW_ENGINE_GROUP_SUSPEND_LR_QUEUE_COUNT, 1);
> > > - need_resume |=
> > > !xe_exec_queue_idle_skip_suspend(q);
> > > +
> > > +
> > > + need_resume |= !idle_skip_suspend;
> > > q->ops->suspend(q);
> > > }
> > >
> > > @@ -258,7 +270,7 @@ static int
> > > xe_hw_engine_group_wait_for_dma_fence_jobs(struct
> > > xe_hw_engine_group
> > > return 0;
> > > }
> > >
> > > -static int switch_mode(struct xe_hw_engine_group *group)
> > > +static int switch_mode(struct xe_hw_engine_group *group, bool
> > > has_deps)
> > > {
> > > int err = 0;
> > > enum xe_hw_engine_group_execution_mode new_mode;
> > > @@ -268,7 +280,8 @@ static int switch_mode(struct
> > > xe_hw_engine_group
> > > *group)
> > > switch (group->cur_mode) {
> > > case EXEC_MODE_LR:
> > > new_mode = EXEC_MODE_DMA_FENCE;
> > > - err =
> > > xe_hw_engine_group_suspend_faulting_lr_jobs(group);
> > > + err =
> > > xe_hw_engine_group_suspend_faulting_lr_jobs(group,
> > > +
> > > has_deps);
> > > break;
> > > case EXEC_MODE_DMA_FENCE:
> > > new_mode = EXEC_MODE_LR;
> > > @@ -289,14 +302,18 @@ static int switch_mode(struct
> > > xe_hw_engine_group *group)
> > > * @group: The hw engine group
> > > * @new_mode: The new execution mode
> > > * @previous_mode: Pointer to the previous mode provided for use
> > > by
> > > caller
> > > + * @syncs: Syncs from exec IOCTL
> > > + * @num_syncs: Number of syncs from exec IOCTL
> > > *
> > > * Return: 0 if successful, -EINTR if locking failed.
> > > */
> > > int xe_hw_engine_group_get_mode(struct xe_hw_engine_group
> > > *group,
> > > enum
> > > xe_hw_engine_group_execution_mode new_mode,
> > > - enum
> > > xe_hw_engine_group_execution_mode *previous_mode)
> > > + enum
> > > xe_hw_engine_group_execution_mode *previous_mode,
> > > + struct xe_sync_entry *syncs, int
> > > num_syncs)
> > > __acquires(&group->mode_sem)
> > > {
> > > + bool has_deps = !!num_syncs;
> > > int err = down_read_interruptible(&group->mode_sem);
> > >
> > > if (err)
> > > @@ -306,14 +323,27 @@ __acquires(&group->mode_sem)
> > >
> > > if (new_mode != group->cur_mode) {
> > > up_read(&group->mode_sem);
> > > +retry:
> > > err = down_write_killable(&group->mode_sem);
> > > if (err)
> > > return err;
> > >
> > > if (new_mode != group->cur_mode) {
> > > - err = switch_mode(group);
> > > + err = switch_mode(group, has_deps);
> > > if (err) {
> > > up_write(&group->mode_sem);
> > > + if (err == -EAGAIN) {
> > > + int i;
> > > +
> > > +				for (i = 0; i < num_syncs; ++i) {
> > > +					err = xe_sync_entry_wait(syncs + i);
> > > +					if (err)
> > > +						return err;
> > > +				}
> > > +
> > > + has_deps = false;
> > > + goto retry;
> > > + }
> > > return err;
> > > }
> > > }
> > > diff --git a/drivers/gpu/drm/xe/xe_hw_engine_group.h
> > > b/drivers/gpu/drm/xe/xe_hw_engine_group.h
> > > index 797ee81acbf2..8b17ccd30b70 100644
> > > --- a/drivers/gpu/drm/xe/xe_hw_engine_group.h
> > > +++ b/drivers/gpu/drm/xe/xe_hw_engine_group.h
> > > @@ -11,6 +11,7 @@
> > > struct drm_device;
> > > struct xe_exec_queue;
> > > struct xe_gt;
> > > +struct xe_sync_entry;
> > >
> > > int xe_hw_engine_setup_groups(struct xe_gt *gt);
> > >
> > > @@ -19,7 +20,8 @@ void xe_hw_engine_group_del_exec_queue(struct
> > > xe_hw_engine_group *group, struct
> > >
> > > int xe_hw_engine_group_get_mode(struct xe_hw_engine_group
> > > *group,
> > > enum
> > > xe_hw_engine_group_execution_mode new_mode,
> > > - enum
> > > xe_hw_engine_group_execution_mode *previous_mode);
> > > + enum
> > > xe_hw_engine_group_execution_mode *previous_mode,
> > > + struct xe_sync_entry *syncs, int
> > > num_syncs);
> > > void xe_hw_engine_group_put(struct xe_hw_engine_group *group);
> > >
> > > enum xe_hw_engine_group_execution_mode
> > > diff --git a/drivers/gpu/drm/xe/xe_sync.c
> > > b/drivers/gpu/drm/xe/xe_sync.c
> > > index 1fc4fa278b78..d970e11962ff 100644
> > > --- a/drivers/gpu/drm/xe/xe_sync.c
> > > +++ b/drivers/gpu/drm/xe/xe_sync.c
> > > @@ -228,6 +228,35 @@ int xe_sync_entry_add_deps(struct
> > > xe_sync_entry
> > > *sync, struct xe_sched_job *job)
> > > return 0;
> > > }
> > >
> > > +/**
> > > + * xe_sync_entry_wait() - Wait on in-sync
> > > + * @sync: Sync object
> > > + *
> > > + * If the sync is an in-sync, wait for it to signal.
> > > + *
> > > + * Return: 0 on success, -ERESTARTSYS on failure (interruption)
> > > + */
> > > +int xe_sync_entry_wait(struct xe_sync_entry *sync)
> > > +{
> > > + if (sync->flags & DRM_XE_SYNC_FLAG_SIGNAL)
> > > + return 0;
> > > +
> > > + return dma_fence_wait(sync->fence, true);
> > > +}
> > > +
> > > +/**
> > > + * xe_sync_needs_wait() - Sync needs a wait (input dma-fence not signaled)
> > > + * @sync: Sync object
> > > + *
> > > + * Return: True if sync needs a wait, False otherwise
> > > + */
> > > +bool xe_sync_needs_wait(struct xe_sync_entry *sync)
> > > +{
> > > +
> > > +	return !(sync->flags & DRM_XE_SYNC_FLAG_SIGNAL) &&
> > > +		!test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &sync->fence->flags);
> >
> > dma_fence_is_signaled() ?
> >
>
> I don't want to signal the fence here. Philipp Stanner merged a
> dma-fence helper that does this check to drm-misc-next, but that
> change hasn't made it to drm-xe-next yet. I have a patch built on top
> of his series to convert Xe to use these helpers; when I rebase that
> patch I'll fix up this code too.
OK. Just out of interest, why not signal the fence here?
/Thomas
>
> Matt
>
> > Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> >
> > > [snip]
^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 5/7] drm/xe: Wait on in-syncs when switching to dma-fence mode
2025-12-15 21:48 ` Thomas Hellström
@ 2025-12-16 1:12 ` Matthew Brost
0 siblings, 0 replies; 24+ messages in thread
From: Matthew Brost @ 2025-12-16 1:12 UTC (permalink / raw)
To: Thomas Hellström; +Cc: intel-xe, francois.dugast, michal.mrozek
On Mon, Dec 15, 2025 at 10:48:59PM +0100, Thomas Hellström wrote:
> On Mon, 2025-12-15 at 13:46 -0800, Matthew Brost wrote:
> > On Mon, Dec 15, 2025 at 11:32:23AM +0100, Thomas Hellström wrote:
> > > On Fri, 2025-12-12 at 10:28 -0800, Matthew Brost wrote:
> > > > [snip]
> > > > +/**
> > > > + * xe_sync_needs_wait() - Sync needs a wait (input dma-fence not signaled)
> > > > + * @sync: Sync object
> > > > + *
> > > > + * Return: True if sync needs a wait, False otherwise
> > > > + */
> > > > +bool xe_sync_needs_wait(struct xe_sync_entry *sync)
> > > > +{
> > > > +
> > > > +	return !(sync->flags & DRM_XE_SYNC_FLAG_SIGNAL) &&
> > > > +		!test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &sync->fence->flags);
> > >
> > > dma_fence_is_signaled() ?
> > >
> >
> > I don't want to signal the fence here. Philipp Stanner merged a
> > dma-fence helper that does this check to drm-misc-next, but that
> > change hasn't made it to drm-xe-next yet. I have a patch built on top
> > of his series to convert Xe to use these helpers; when I rebase that
> > patch I'll fix up this code too.
>
> OK. Just out of interest, why not signal the fence here?
> /Thomas
>
It’s probably fine to signal the fence. This is just a defensive
leftover from my early days in Xe, when I avoided signaling the Xe
hardware fence from anywhere other than a single location. This won’t
be a hardware fence here, though; it’s an eager check, and I don’t
think it’s worth taking the dma-fence spinlock for it.
Matt
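The distinction drawn above can be sketched in userspace C. The toy_* names are invented for illustration and are not the dma-fence API; the point is that an eager check only reads the signaled bit, while a dma_fence_is_signaled()-style check may poll the backend and latch the bit as a side effect.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

#define TOY_FENCE_SIGNALED_BIT 0UL

struct toy_fence {
	atomic_ulong flags;
	/* Optional poll hook, in the spirit of dma_fence_ops.signaled */
	bool (*signaled)(struct toy_fence *f);
};

/* Eager check: reads the bit only; no side effects, no locking. */
static bool toy_fence_bit_set(struct toy_fence *f)
{
	return atomic_load(&f->flags) & (1UL << TOY_FENCE_SIGNALED_BIT);
}

/*
 * dma_fence_is_signaled()-style check: may poll the backend and latch
 * the signaled bit as a side effect — exactly the extra work the eager
 * check above avoids.
 */
static bool toy_fence_is_signaled(struct toy_fence *f)
{
	if (toy_fence_bit_set(f))
		return true;
	if (f->signaled && f->signaled(f)) {
		atomic_fetch_or(&f->flags, 1UL << TOY_FENCE_SIGNALED_BIT);
		return true;
	}
	return false;
}

/* A backend whose work has already completed. */
static bool toy_backend_done(struct toy_fence *f)
{
	(void)f;
	return true;
}
```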
>
> >
> > Matt
> >
> > > Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> > >
> > > > [snip]
^ permalink raw reply [flat|nested] 24+ messages in thread
* [PATCH v2 6/7] drm/xe: Add GT stats ktime helpers
2025-12-12 18:28 [PATCH v2 0/7] Fix performance when pagefaults and 3d/display share resources Matthew Brost
` (4 preceding siblings ...)
2025-12-12 18:28 ` [PATCH v2 5/7] drm/xe: Wait on in-syncs when swicthing to dma-fence mode Matthew Brost
@ 2025-12-12 18:28 ` Matthew Brost
2025-12-15 10:17 ` Thomas Hellström
2025-12-12 18:28 ` [PATCH v2 7/7] drm/xe: Add more GT stats around pagefault mode switch flows Matthew Brost
` (4 subsequent siblings)
10 siblings, 1 reply; 24+ messages in thread
From: Matthew Brost @ 2025-12-12 18:28 UTC (permalink / raw)
To: intel-xe; +Cc: francois.dugast, thomas.hellstrom, michal.mrozek
Normalize GT stats that record execution periods in code paths by
adding helpers to perform the ktime calculation. Use these helpers in
the SVM code.
Suggested-by: Francois Dugast <francois.dugast@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_gt_stats.h | 33 +++++++++++++++++++++++++++++++-
drivers/gpu/drm/xe/xe_svm.c | 29 +++++++++-------------------
2 files changed, 41 insertions(+), 21 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_gt_stats.h b/drivers/gpu/drm/xe/xe_gt_stats.h
index e8aea32bc971..456782f23f39 100644
--- a/drivers/gpu/drm/xe/xe_gt_stats.h
+++ b/drivers/gpu/drm/xe/xe_gt_stats.h
@@ -6,6 +6,8 @@
#ifndef _XE_GT_STATS_H_
#define _XE_GT_STATS_H_
+#include <linux/ktime.h>
+
#include "xe_gt_stats_types.h"
struct xe_gt;
@@ -21,6 +23,35 @@ xe_gt_stats_incr(struct xe_gt *gt, const enum xe_gt_stats_id id,
int incr)
{
}
-
#endif
+
+/**
+ * xe_gt_stats_ktime_us_delta() - Get delta in microseconds between now and a
+ * start time
+ * @start: Start time
+ *
+ * Helper for GT stats to get delta in microseconds between now and a start
+ * time, compiles out if GT stats are disabled.
+ *
+ * Return: Delta in microseconds between now and a start time
+ */
+static inline s64 xe_gt_stats_ktime_us_delta(ktime_t start)
+{
+ return IS_ENABLED(CONFIG_DEBUG_FS) ?
+ ktime_us_delta(ktime_get(), start) : 0;
+}
+
+/**
+ * xe_gt_stats_ktime_get() - Get current ktime
+ *
+ * Helper for GT stats to get current ktime, compiles out if GT stats are
+ * disabled.
+ *
+ * Return: Get current ktime
+ */
+static inline ktime_t xe_gt_stats_ktime_get(void)
+{
+ return IS_ENABLED(CONFIG_DEBUG_FS) ? ktime_get() : 0;
+}
+
#endif
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index 46977ec1e0de..93550c7c84ac 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -176,24 +176,13 @@ xe_svm_range_notifier_event_end(struct xe_vm *vm, struct drm_gpusvm_range *r,
mmu_range);
}
-static s64 xe_svm_stats_ktime_us_delta(ktime_t start)
-{
- return IS_ENABLED(CONFIG_DEBUG_FS) ?
- ktime_us_delta(ktime_get(), start) : 0;
-}
-
static void xe_svm_tlb_inval_us_stats_incr(struct xe_gt *gt, ktime_t start)
{
- s64 us_delta = xe_svm_stats_ktime_us_delta(start);
+ s64 us_delta = xe_gt_stats_ktime_us_delta(start);
xe_gt_stats_incr(gt, XE_GT_STATS_ID_SVM_TLB_INVAL_US, us_delta);
}
-static ktime_t xe_svm_stats_ktime_get(void)
-{
- return IS_ENABLED(CONFIG_DEBUG_FS) ? ktime_get() : 0;
-}
-
static void xe_svm_invalidate(struct drm_gpusvm *gpusvm,
struct drm_gpusvm_notifier *notifier,
const struct mmu_notifier_range *mmu_range)
@@ -202,7 +191,7 @@ static void xe_svm_invalidate(struct drm_gpusvm *gpusvm,
struct xe_device *xe = vm->xe;
struct drm_gpusvm_range *r, *first;
struct xe_tile *tile;
- ktime_t start = xe_svm_stats_ktime_get();
+ ktime_t start = xe_gt_stats_ktime_get();
u64 adj_start = mmu_range->start, adj_end = mmu_range->end;
u8 tile_mask = 0, id;
long err;
@@ -442,7 +431,7 @@ static void xe_svm_copy_us_stats_incr(struct xe_gt *gt,
unsigned long npages,
ktime_t start)
{
- s64 us_delta = xe_svm_stats_ktime_us_delta(start);
+ s64 us_delta = xe_gt_stats_ktime_us_delta(start);
if (dir == XE_SVM_COPY_TO_VRAM) {
switch (npages) {
@@ -494,7 +483,7 @@ static int xe_svm_copy(struct page **pages,
u64 vram_addr = XE_VRAM_ADDR_INVALID;
int err = 0, pos = 0;
bool sram = dir == XE_SVM_COPY_TO_SRAM;
- ktime_t start = xe_svm_stats_ktime_get();
+ ktime_t start = xe_gt_stats_ktime_get();
/*
* This flow is complex: it locates physically contiguous device pages,
@@ -986,7 +975,7 @@ static void xe_svm_range_##elem##_us_stats_incr(struct xe_gt *gt, \
struct xe_svm_range *range, \
ktime_t start) \
{ \
- s64 us_delta = xe_svm_stats_ktime_us_delta(start); \
+ s64 us_delta = xe_gt_stats_ktime_us_delta(start); \
\
switch (xe_svm_range_size(range)) { \
case SZ_4K: \
@@ -1031,7 +1020,7 @@ static int __xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
struct drm_pagemap *dpagemap;
struct xe_tile *tile = gt_to_tile(gt);
int migrate_try_count = ctx.devmem_only ? 3 : 1;
- ktime_t start = xe_svm_stats_ktime_get(), bind_start, get_pages_start;
+ ktime_t start = xe_gt_stats_ktime_get(), bind_start, get_pages_start;
int err;
lockdep_assert_held_write(&vm->lock);
@@ -1070,7 +1059,7 @@ static int __xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
if (--migrate_try_count >= 0 &&
xe_svm_range_needs_migrate_to_vram(range, vma, !!dpagemap || ctx.devmem_only)) {
- ktime_t migrate_start = xe_svm_stats_ktime_get();
+ ktime_t migrate_start = xe_gt_stats_ktime_get();
/* TODO : For multi-device dpagemap will be used to find the
* remote tile and remote device. Will need to modify
@@ -1107,7 +1096,7 @@ static int __xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
}
get_pages:
- get_pages_start = xe_svm_stats_ktime_get();
+ get_pages_start = xe_gt_stats_ktime_get();
range_debug(range, "GET PAGES");
err = xe_svm_range_get_pages(vm, range, &ctx);
@@ -1134,7 +1123,7 @@ static int __xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
xe_svm_range_get_pages_us_stats_incr(gt, range, get_pages_start);
range_debug(range, "PAGE FAULT - BIND");
- bind_start = xe_svm_stats_ktime_get();
+ bind_start = xe_gt_stats_ktime_get();
xe_validation_guard(&vctx, &vm->xe->val, &exec, (struct xe_val_flags) {}, err) {
err = xe_vm_drm_exec_lock(vm, &exec);
drm_exec_retry_on_contention(&exec);
--
2.34.1
^ permalink raw reply related [flat|nested] 24+ messages in thread
* Re: [PATCH v2 6/7] drm/xe: Add GT stats ktime helpers
2025-12-12 18:28 ` [PATCH v2 6/7] drm/xe: Add GT stats ktime helpers Matthew Brost
@ 2025-12-15 10:17 ` Thomas Hellström
0 siblings, 0 replies; 24+ messages in thread
From: Thomas Hellström @ 2025-12-15 10:17 UTC (permalink / raw)
To: Matthew Brost, intel-xe; +Cc: francois.dugast, michal.mrozek
On Fri, 2025-12-12 at 10:28 -0800, Matthew Brost wrote:
> Normalize GT stats that record execution periods in code paths by
> adding helpers to perform the ktime calculation. Use these helpers in
> the SVM code.
>
> Suggested-by: Francois Dugast <francois.dugast@intel.com>
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> ---
> drivers/gpu/drm/xe/xe_gt_stats.h | 33
> +++++++++++++++++++++++++++++++-
> drivers/gpu/drm/xe/xe_svm.c | 29 +++++++++-------------------
> 2 files changed, 41 insertions(+), 21 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_gt_stats.h
> b/drivers/gpu/drm/xe/xe_gt_stats.h
> index e8aea32bc971..456782f23f39 100644
> --- a/drivers/gpu/drm/xe/xe_gt_stats.h
> +++ b/drivers/gpu/drm/xe/xe_gt_stats.h
> @@ -6,6 +6,8 @@
> #ifndef _XE_GT_STATS_H_
> #define _XE_GT_STATS_H_
>
> +#include <linux/ktime.h>
> +
> #include "xe_gt_stats_types.h"
>
> struct xe_gt;
> @@ -21,6 +23,35 @@ xe_gt_stats_incr(struct xe_gt *gt, const enum
> xe_gt_stats_id id,
> int incr)
> {
> }
> -
Unrelated change
With that fixed,
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> #endif
> +
> +/**
> + * xe_gt_stats_ktime_us_delta() - Get delta in microseconds between
> now and a
> + * start time
> + * @start: Start time
> + *
> + * Helper for GT stats to get delta in microseconds between now and
> a start
> + * time, compiles out if GT stats are disabled.
> + *
> + * Return: Delta in microseconds between now and a start time
> + */
> +static inline s64 xe_gt_stats_ktime_us_delta(ktime_t start)
> +{
> + return IS_ENABLED(CONFIG_DEBUG_FS) ?
> + ktime_us_delta(ktime_get(), start) : 0;
> +}
> +
> +/**
> + * xe_gt_stats_ktime_get() - Get current ktime
> + *
> + * Helper for GT stats to get current ktime, compiles out if GT
> stats are
> + * disabled.
> + *
> + * Return: Get current ktime
> + */
> +static inline ktime_t xe_gt_stats_ktime_get(void)
> +{
> + return IS_ENABLED(CONFIG_DEBUG_FS) ? ktime_get() : 0;
> +}
> +
> #endif
> diff --git a/drivers/gpu/drm/xe/xe_svm.c
> b/drivers/gpu/drm/xe/xe_svm.c
> index 46977ec1e0de..93550c7c84ac 100644
> --- a/drivers/gpu/drm/xe/xe_svm.c
> +++ b/drivers/gpu/drm/xe/xe_svm.c
> @@ -176,24 +176,13 @@ xe_svm_range_notifier_event_end(struct xe_vm
> *vm, struct drm_gpusvm_range *r,
> mmu_range);
> }
>
> -static s64 xe_svm_stats_ktime_us_delta(ktime_t start)
> -{
> - return IS_ENABLED(CONFIG_DEBUG_FS) ?
> - ktime_us_delta(ktime_get(), start) : 0;
> -}
> -
> static void xe_svm_tlb_inval_us_stats_incr(struct xe_gt *gt, ktime_t
> start)
> {
> - s64 us_delta = xe_svm_stats_ktime_us_delta(start);
> + s64 us_delta = xe_gt_stats_ktime_us_delta(start);
>
> xe_gt_stats_incr(gt, XE_GT_STATS_ID_SVM_TLB_INVAL_US,
> us_delta);
> }
>
> -static ktime_t xe_svm_stats_ktime_get(void)
> -{
> - return IS_ENABLED(CONFIG_DEBUG_FS) ? ktime_get() : 0;
> -}
> -
> static void xe_svm_invalidate(struct drm_gpusvm *gpusvm,
> struct drm_gpusvm_notifier *notifier,
> const struct mmu_notifier_range
> *mmu_range)
> @@ -202,7 +191,7 @@ static void xe_svm_invalidate(struct drm_gpusvm
> *gpusvm,
> struct xe_device *xe = vm->xe;
> struct drm_gpusvm_range *r, *first;
> struct xe_tile *tile;
> - ktime_t start = xe_svm_stats_ktime_get();
> + ktime_t start = xe_gt_stats_ktime_get();
> u64 adj_start = mmu_range->start, adj_end = mmu_range->end;
> u8 tile_mask = 0, id;
> long err;
> @@ -442,7 +431,7 @@ static void xe_svm_copy_us_stats_incr(struct
> xe_gt *gt,
> unsigned long npages,
> ktime_t start)
> {
> - s64 us_delta = xe_svm_stats_ktime_us_delta(start);
> + s64 us_delta = xe_gt_stats_ktime_us_delta(start);
>
> if (dir == XE_SVM_COPY_TO_VRAM) {
> switch (npages) {
> @@ -494,7 +483,7 @@ static int xe_svm_copy(struct page **pages,
> u64 vram_addr = XE_VRAM_ADDR_INVALID;
> int err = 0, pos = 0;
> bool sram = dir == XE_SVM_COPY_TO_SRAM;
> - ktime_t start = xe_svm_stats_ktime_get();
> + ktime_t start = xe_gt_stats_ktime_get();
>
> /*
> * This flow is complex: it locates physically contiguous
> device pages,
> @@ -986,7 +975,7 @@ static void
> xe_svm_range_##elem##_us_stats_incr(struct xe_gt *gt, \
> struct xe_svm_range
> *range, \
> ktime_t start) \
> { \
> - s64 us_delta = xe_svm_stats_ktime_us_delta(start); \
> + s64 us_delta = xe_gt_stats_ktime_us_delta(start); \
> \
> switch (xe_svm_range_size(range)) { \
> case SZ_4K: \
> @@ -1031,7 +1020,7 @@ static int __xe_svm_handle_pagefault(struct
> xe_vm *vm, struct xe_vma *vma,
> struct drm_pagemap *dpagemap;
> struct xe_tile *tile = gt_to_tile(gt);
> int migrate_try_count = ctx.devmem_only ? 3 : 1;
> - ktime_t start = xe_svm_stats_ktime_get(), bind_start,
> get_pages_start;
> + ktime_t start = xe_gt_stats_ktime_get(), bind_start,
> get_pages_start;
> int err;
>
> lockdep_assert_held_write(&vm->lock);
> @@ -1070,7 +1059,7 @@ static int __xe_svm_handle_pagefault(struct
> xe_vm *vm, struct xe_vma *vma,
>
> if (--migrate_try_count >= 0 &&
> xe_svm_range_needs_migrate_to_vram(range, vma,
> !!dpagemap || ctx.devmem_only)) {
> - ktime_t migrate_start = xe_svm_stats_ktime_get();
> + ktime_t migrate_start = xe_gt_stats_ktime_get();
>
> /* TODO : For multi-device dpagemap will be used to
> find the
> * remote tile and remote device. Will need to
> modify
> @@ -1107,7 +1096,7 @@ static int __xe_svm_handle_pagefault(struct
> xe_vm *vm, struct xe_vma *vma,
> }
>
> get_pages:
> - get_pages_start = xe_svm_stats_ktime_get();
> + get_pages_start = xe_gt_stats_ktime_get();
>
> range_debug(range, "GET PAGES");
> err = xe_svm_range_get_pages(vm, range, &ctx);
> @@ -1134,7 +1123,7 @@ static int __xe_svm_handle_pagefault(struct
> xe_vm *vm, struct xe_vma *vma,
> xe_svm_range_get_pages_us_stats_incr(gt, range,
> get_pages_start);
> range_debug(range, "PAGE FAULT - BIND");
>
> - bind_start = xe_svm_stats_ktime_get();
> + bind_start = xe_gt_stats_ktime_get();
> xe_validation_guard(&vctx, &vm->xe->val, &exec, (struct
> xe_val_flags) {}, err) {
> err = xe_vm_drm_exec_lock(vm, &exec);
> drm_exec_retry_on_contention(&exec);
^ permalink raw reply [flat|nested] 24+ messages in thread
* [PATCH v2 7/7] drm/xe: Add more GT stats around pagefault mode switch flows
2025-12-12 18:28 [PATCH v2 0/7] Fix performance when pagefaults and 3d/display share resources Matthew Brost
` (5 preceding siblings ...)
2025-12-12 18:28 ` [PATCH v2 6/7] drm/xe: Add GT stats ktime helpers Matthew Brost
@ 2025-12-12 18:28 ` Matthew Brost
2025-12-15 11:00 ` Thomas Hellström
2025-12-15 13:05 ` Francois Dugast
2025-12-12 22:37 ` ✗ CI.checkpatch: warning for Fix performance when pagefaults and 3d/display share resources (rev2) Patchwork
` (3 subsequent siblings)
10 siblings, 2 replies; 24+ messages in thread
From: Matthew Brost @ 2025-12-12 18:28 UTC (permalink / raw)
To: intel-xe; +Cc: francois.dugast, thomas.hellstrom, michal.mrozek
Add GT stats to measure the time spent switching between pagefault mode
and dma-fence mode. Also add a GT stat to indicate when pagefault
suspend is skipped because the system is idle. These metrics will help
profile pagefault workloads while 3D and display are enabled.
v2:
- Use GT stats helper functions (Francois)
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_gt_stats.c | 6 ++++++
drivers/gpu/drm/xe/xe_gt_stats_types.h | 3 +++
drivers/gpu/drm/xe/xe_hw_engine_group.c | 22 +++++++++++++++++++++-
3 files changed, 30 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/xe/xe_gt_stats.c b/drivers/gpu/drm/xe/xe_gt_stats.c
index 714045ad9354..fb2904bd0abd 100644
--- a/drivers/gpu/drm/xe/xe_gt_stats.c
+++ b/drivers/gpu/drm/xe/xe_gt_stats.c
@@ -68,8 +68,14 @@ static const char *const stat_description[__XE_GT_STATS_NUM_IDS] = {
DEF_STAT_STR(SVM_2M_BIND_US, "svm_2M_bind_us"),
DEF_STAT_STR(HW_ENGINE_GROUP_SUSPEND_LR_QUEUE_COUNT,
"hw_engine_group_suspend_lr_queue_count"),
+ DEF_STAT_STR(HW_ENGINE_GROUP_SKIP_LR_QUEUE_COUNT,
+ "hw_engine_group_skip_lr_queue_count"),
DEF_STAT_STR(HW_ENGINE_GROUP_WAIT_DMA_QUEUE_COUNT,
"hw_engine_group_wait_dma_queue_count"),
+ DEF_STAT_STR(HW_ENGINE_GROUP_SUSPEND_LR_QUEUE_US,
+ "hw_engine_group_suspend_lr_queue_us"),
+ DEF_STAT_STR(HW_ENGINE_GROUP_WAIT_DMA_QUEUE_US,
+ "hw_engine_group_wait_dma_queue_us"),
};
/**
diff --git a/drivers/gpu/drm/xe/xe_gt_stats_types.h b/drivers/gpu/drm/xe/xe_gt_stats_types.h
index aada5df421e5..b92d013091d5 100644
--- a/drivers/gpu/drm/xe/xe_gt_stats_types.h
+++ b/drivers/gpu/drm/xe/xe_gt_stats_types.h
@@ -45,7 +45,10 @@ enum xe_gt_stats_id {
XE_GT_STATS_ID_SVM_64K_BIND_US,
XE_GT_STATS_ID_SVM_2M_BIND_US,
XE_GT_STATS_ID_HW_ENGINE_GROUP_SUSPEND_LR_QUEUE_COUNT,
+ XE_GT_STATS_ID_HW_ENGINE_GROUP_SKIP_LR_QUEUE_COUNT,
XE_GT_STATS_ID_HW_ENGINE_GROUP_WAIT_DMA_QUEUE_COUNT,
+ XE_GT_STATS_ID_HW_ENGINE_GROUP_SUSPEND_LR_QUEUE_US,
+ XE_GT_STATS_ID_HW_ENGINE_GROUP_WAIT_DMA_QUEUE_US,
/* must be the last entry */
__XE_GT_STATS_NUM_IDS,
};
diff --git a/drivers/gpu/drm/xe/xe_hw_engine_group.c b/drivers/gpu/drm/xe/xe_hw_engine_group.c
index 022fc0c30d38..9a53021bbfa7 100644
--- a/drivers/gpu/drm/xe/xe_hw_engine_group.c
+++ b/drivers/gpu/drm/xe/xe_hw_engine_group.c
@@ -200,7 +200,9 @@ static int xe_hw_engine_group_suspend_faulting_lr_jobs(struct xe_hw_engine_group
{
int err;
struct xe_exec_queue *q;
+ struct xe_gt *gt = NULL;
bool need_resume = false;
+ ktime_t start = xe_gt_stats_ktime_get();
lockdep_assert_held_write(&group->mode_sem);
@@ -215,10 +217,13 @@ static int xe_hw_engine_group_suspend_faulting_lr_jobs(struct xe_hw_engine_group
return -EAGAIN;
xe_gt_stats_incr(q->gt, XE_GT_STATS_ID_HW_ENGINE_GROUP_SUSPEND_LR_QUEUE_COUNT, 1);
-
+ if (idle_skip_suspend)
+ xe_gt_stats_incr(q->gt,
+ XE_GT_STATS_ID_HW_ENGINE_GROUP_SKIP_LR_QUEUE_COUNT, 1);
need_resume |= !idle_skip_suspend;
q->ops->suspend(q);
+ gt = q->gt;
}
list_for_each_entry(q, &group->exec_queue_list, hw_engine_group_link) {
@@ -230,6 +235,12 @@ static int xe_hw_engine_group_suspend_faulting_lr_jobs(struct xe_hw_engine_group
return err;
}
+ if (gt) {
+ xe_gt_stats_incr(gt,
+ XE_GT_STATS_ID_HW_ENGINE_GROUP_SUSPEND_LR_QUEUE_US,
+ xe_gt_stats_ktime_us_delta(start));
+ }
+
if (need_resume)
xe_hw_engine_group_resume_faulting_lr_jobs(group);
@@ -250,7 +261,9 @@ static int xe_hw_engine_group_wait_for_dma_fence_jobs(struct xe_hw_engine_group
{
long timeout;
struct xe_exec_queue *q;
+ struct xe_gt *gt = NULL;
struct dma_fence *fence;
+ ktime_t start = xe_gt_stats_ktime_get();
lockdep_assert_held_write(&group->mode_sem);
@@ -262,11 +275,18 @@ static int xe_hw_engine_group_wait_for_dma_fence_jobs(struct xe_hw_engine_group
fence = xe_exec_queue_last_fence_get_for_resume(q, q->vm);
timeout = dma_fence_wait(fence, false);
dma_fence_put(fence);
+ gt = q->gt;
if (timeout < 0)
return -ETIME;
}
+ if (gt) {
+ xe_gt_stats_incr(gt,
+ XE_GT_STATS_ID_HW_ENGINE_GROUP_WAIT_DMA_QUEUE_US,
+ xe_gt_stats_ktime_us_delta(start));
+ }
+
return 0;
}
--
2.34.1
^ permalink raw reply related [flat|nested] 24+ messages in thread
* Re: [PATCH v2 7/7] drm/xe: Add more GT stats around pagefault mode switch flows
2025-12-12 18:28 ` [PATCH v2 7/7] drm/xe: Add more GT stats around pagefault mode switch flows Matthew Brost
@ 2025-12-15 11:00 ` Thomas Hellström
2025-12-15 13:05 ` Francois Dugast
1 sibling, 0 replies; 24+ messages in thread
From: Thomas Hellström @ 2025-12-15 11:00 UTC (permalink / raw)
To: Matthew Brost, intel-xe; +Cc: francois.dugast, michal.mrozek
On Fri, 2025-12-12 at 10:28 -0800, Matthew Brost wrote:
> Add GT stats to measure the time spent switching between pagefault
> mode
> and dma-fence mode. Also add a GT stat to indicate when pagefault
> suspend is skipped because the system is idle. These metrics will
> help
> profile pagefault workloads while 3D and display are enabled.
>
> v2:
> - Use GT stats helper functions (Francois)
>
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> ---
> drivers/gpu/drm/xe/xe_gt_stats.c | 6 ++++++
> drivers/gpu/drm/xe/xe_gt_stats_types.h | 3 +++
> drivers/gpu/drm/xe/xe_hw_engine_group.c | 22 +++++++++++++++++++++-
> 3 files changed, 30 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_gt_stats.c
> b/drivers/gpu/drm/xe/xe_gt_stats.c
> index 714045ad9354..fb2904bd0abd 100644
> --- a/drivers/gpu/drm/xe/xe_gt_stats.c
> +++ b/drivers/gpu/drm/xe/xe_gt_stats.c
> @@ -68,8 +68,14 @@ static const char *const
> stat_description[__XE_GT_STATS_NUM_IDS] = {
> DEF_STAT_STR(SVM_2M_BIND_US, "svm_2M_bind_us"),
> DEF_STAT_STR(HW_ENGINE_GROUP_SUSPEND_LR_QUEUE_COUNT,
> "hw_engine_group_suspend_lr_queue_count"),
> + DEF_STAT_STR(HW_ENGINE_GROUP_SKIP_LR_QUEUE_COUNT,
> + "hw_engine_group_skip_lr_queue_count"),
> DEF_STAT_STR(HW_ENGINE_GROUP_WAIT_DMA_QUEUE_COUNT,
> "hw_engine_group_wait_dma_queue_count"),
> + DEF_STAT_STR(HW_ENGINE_GROUP_SUSPEND_LR_QUEUE_US,
> + "hw_engine_group_suspend_lr_queue_us"),
> + DEF_STAT_STR(HW_ENGINE_GROUP_WAIT_DMA_QUEUE_US,
> + "hw_engine_group_wait_dma_queue_us"),
> };
>
> /**
> diff --git a/drivers/gpu/drm/xe/xe_gt_stats_types.h
> b/drivers/gpu/drm/xe/xe_gt_stats_types.h
> index aada5df421e5..b92d013091d5 100644
> --- a/drivers/gpu/drm/xe/xe_gt_stats_types.h
> +++ b/drivers/gpu/drm/xe/xe_gt_stats_types.h
> @@ -45,7 +45,10 @@ enum xe_gt_stats_id {
> XE_GT_STATS_ID_SVM_64K_BIND_US,
> XE_GT_STATS_ID_SVM_2M_BIND_US,
> XE_GT_STATS_ID_HW_ENGINE_GROUP_SUSPEND_LR_QUEUE_COUNT,
> + XE_GT_STATS_ID_HW_ENGINE_GROUP_SKIP_LR_QUEUE_COUNT,
> XE_GT_STATS_ID_HW_ENGINE_GROUP_WAIT_DMA_QUEUE_COUNT,
> + XE_GT_STATS_ID_HW_ENGINE_GROUP_SUSPEND_LR_QUEUE_US,
> + XE_GT_STATS_ID_HW_ENGINE_GROUP_WAIT_DMA_QUEUE_US,
> /* must be the last entry */
> __XE_GT_STATS_NUM_IDS,
> };
> diff --git a/drivers/gpu/drm/xe/xe_hw_engine_group.c
> b/drivers/gpu/drm/xe/xe_hw_engine_group.c
> index 022fc0c30d38..9a53021bbfa7 100644
> --- a/drivers/gpu/drm/xe/xe_hw_engine_group.c
> +++ b/drivers/gpu/drm/xe/xe_hw_engine_group.c
> @@ -200,7 +200,9 @@ static int
> xe_hw_engine_group_suspend_faulting_lr_jobs(struct xe_hw_engine_group
> {
> int err;
> struct xe_exec_queue *q;
> + struct xe_gt *gt = NULL;
> bool need_resume = false;
> + ktime_t start = xe_gt_stats_ktime_get();
>
> lockdep_assert_held_write(&group->mode_sem);
>
> @@ -215,10 +217,13 @@ static int
> xe_hw_engine_group_suspend_faulting_lr_jobs(struct xe_hw_engine_group
> return -EAGAIN;
>
> xe_gt_stats_incr(q->gt,
> XE_GT_STATS_ID_HW_ENGINE_GROUP_SUSPEND_LR_QUEUE_COUNT, 1);
> -
> + if (idle_skip_suspend)
> + xe_gt_stats_incr(q->gt,
> +
> XE_GT_STATS_ID_HW_ENGINE_GROUP_SKIP_LR_QUEUE_COUNT, 1);
>
> need_resume |= !idle_skip_suspend;
> q->ops->suspend(q);
> + gt = q->gt;
> }
>
> list_for_each_entry(q, &group->exec_queue_list,
> hw_engine_group_link) {
> @@ -230,6 +235,12 @@ static int
> xe_hw_engine_group_suspend_faulting_lr_jobs(struct xe_hw_engine_group
> return err;
> }
>
> + if (gt) {
> + xe_gt_stats_incr(gt,
> +
> XE_GT_STATS_ID_HW_ENGINE_GROUP_SUSPEND_LR_QUEUE_US,
> + xe_gt_stats_ktime_us_delta(start));
> + }
> +
> if (need_resume)
> xe_hw_engine_group_resume_faulting_lr_jobs(group);
>
> @@ -250,7 +261,9 @@ static int
> xe_hw_engine_group_wait_for_dma_fence_jobs(struct xe_hw_engine_group
> {
> long timeout;
> struct xe_exec_queue *q;
> + struct xe_gt *gt = NULL;
> struct dma_fence *fence;
> + ktime_t start = xe_gt_stats_ktime_get();
>
> lockdep_assert_held_write(&group->mode_sem);
>
> @@ -262,11 +275,18 @@ static int
> xe_hw_engine_group_wait_for_dma_fence_jobs(struct xe_hw_engine_group
> fence = xe_exec_queue_last_fence_get_for_resume(q,
> q->vm);
> timeout = dma_fence_wait(fence, false);
> dma_fence_put(fence);
> + gt = q->gt;
>
> if (timeout < 0)
> return -ETIME;
> }
>
> + if (gt) {
> + xe_gt_stats_incr(gt,
> +
> XE_GT_STATS_ID_HW_ENGINE_GROUP_WAIT_DMA_QUEUE_US,
> + xe_gt_stats_ktime_us_delta(start));
> + }
> +
> return 0;
> }
>
^ permalink raw reply [flat|nested] 24+ messages in thread* Re: [PATCH v2 7/7] drm/xe: Add more GT stats around pagefault mode switch flows
2025-12-12 18:28 ` [PATCH v2 7/7] drm/xe: Add more GT stats around pagefault mode switch flows Matthew Brost
2025-12-15 11:00 ` Thomas Hellström
@ 2025-12-15 13:05 ` Francois Dugast
1 sibling, 0 replies; 24+ messages in thread
From: Francois Dugast @ 2025-12-15 13:05 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe, thomas.hellstrom, michal.mrozek
On Fri, Dec 12, 2025 at 10:28:47AM -0800, Matthew Brost wrote:
> Add GT stats to measure the time spent switching between pagefault mode
> and dma-fence mode. Also add a GT stat to indicate when pagefault
> suspend is skipped because the system is idle. These metrics will help
> profile pagefault workloads while 3D and display are enabled.
>
> v2:
> - Use GT stats helper functions (Francois)
>
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Francois Dugast <francois.dugast@intel.com>
> ---
> drivers/gpu/drm/xe/xe_gt_stats.c | 6 ++++++
> drivers/gpu/drm/xe/xe_gt_stats_types.h | 3 +++
> drivers/gpu/drm/xe/xe_hw_engine_group.c | 22 +++++++++++++++++++++-
> 3 files changed, 30 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_gt_stats.c b/drivers/gpu/drm/xe/xe_gt_stats.c
> index 714045ad9354..fb2904bd0abd 100644
> --- a/drivers/gpu/drm/xe/xe_gt_stats.c
> +++ b/drivers/gpu/drm/xe/xe_gt_stats.c
> @@ -68,8 +68,14 @@ static const char *const stat_description[__XE_GT_STATS_NUM_IDS] = {
> DEF_STAT_STR(SVM_2M_BIND_US, "svm_2M_bind_us"),
> DEF_STAT_STR(HW_ENGINE_GROUP_SUSPEND_LR_QUEUE_COUNT,
> "hw_engine_group_suspend_lr_queue_count"),
> + DEF_STAT_STR(HW_ENGINE_GROUP_SKIP_LR_QUEUE_COUNT,
> + "hw_engine_group_skip_lr_queue_count"),
> DEF_STAT_STR(HW_ENGINE_GROUP_WAIT_DMA_QUEUE_COUNT,
> "hw_engine_group_wait_dma_queue_count"),
> + DEF_STAT_STR(HW_ENGINE_GROUP_SUSPEND_LR_QUEUE_US,
> + "hw_engine_group_suspend_lr_queue_us"),
> + DEF_STAT_STR(HW_ENGINE_GROUP_WAIT_DMA_QUEUE_US,
> + "hw_engine_group_wait_dma_queue_us"),
> };
>
> /**
> diff --git a/drivers/gpu/drm/xe/xe_gt_stats_types.h b/drivers/gpu/drm/xe/xe_gt_stats_types.h
> index aada5df421e5..b92d013091d5 100644
> --- a/drivers/gpu/drm/xe/xe_gt_stats_types.h
> +++ b/drivers/gpu/drm/xe/xe_gt_stats_types.h
> @@ -45,7 +45,10 @@ enum xe_gt_stats_id {
> XE_GT_STATS_ID_SVM_64K_BIND_US,
> XE_GT_STATS_ID_SVM_2M_BIND_US,
> XE_GT_STATS_ID_HW_ENGINE_GROUP_SUSPEND_LR_QUEUE_COUNT,
> + XE_GT_STATS_ID_HW_ENGINE_GROUP_SKIP_LR_QUEUE_COUNT,
> XE_GT_STATS_ID_HW_ENGINE_GROUP_WAIT_DMA_QUEUE_COUNT,
> + XE_GT_STATS_ID_HW_ENGINE_GROUP_SUSPEND_LR_QUEUE_US,
> + XE_GT_STATS_ID_HW_ENGINE_GROUP_WAIT_DMA_QUEUE_US,
> /* must be the last entry */
> __XE_GT_STATS_NUM_IDS,
> };
> diff --git a/drivers/gpu/drm/xe/xe_hw_engine_group.c b/drivers/gpu/drm/xe/xe_hw_engine_group.c
> index 022fc0c30d38..9a53021bbfa7 100644
> --- a/drivers/gpu/drm/xe/xe_hw_engine_group.c
> +++ b/drivers/gpu/drm/xe/xe_hw_engine_group.c
> @@ -200,7 +200,9 @@ static int xe_hw_engine_group_suspend_faulting_lr_jobs(struct xe_hw_engine_group
> {
> int err;
> struct xe_exec_queue *q;
> + struct xe_gt *gt = NULL;
> bool need_resume = false;
> + ktime_t start = xe_gt_stats_ktime_get();
>
> lockdep_assert_held_write(&group->mode_sem);
>
> @@ -215,10 +217,13 @@ static int xe_hw_engine_group_suspend_faulting_lr_jobs(struct xe_hw_engine_group
> return -EAGAIN;
>
> xe_gt_stats_incr(q->gt, XE_GT_STATS_ID_HW_ENGINE_GROUP_SUSPEND_LR_QUEUE_COUNT, 1);
> -
> + if (idle_skip_suspend)
> + xe_gt_stats_incr(q->gt,
> + XE_GT_STATS_ID_HW_ENGINE_GROUP_SKIP_LR_QUEUE_COUNT, 1);
>
> need_resume |= !idle_skip_suspend;
> q->ops->suspend(q);
> + gt = q->gt;
> }
>
> list_for_each_entry(q, &group->exec_queue_list, hw_engine_group_link) {
> @@ -230,6 +235,12 @@ static int xe_hw_engine_group_suspend_faulting_lr_jobs(struct xe_hw_engine_group
> return err;
> }
>
> + if (gt) {
> + xe_gt_stats_incr(gt,
> + XE_GT_STATS_ID_HW_ENGINE_GROUP_SUSPEND_LR_QUEUE_US,
> + xe_gt_stats_ktime_us_delta(start));
> + }
> +
> if (need_resume)
> xe_hw_engine_group_resume_faulting_lr_jobs(group);
>
> @@ -250,7 +261,9 @@ static int xe_hw_engine_group_wait_for_dma_fence_jobs(struct xe_hw_engine_group
> {
> long timeout;
> struct xe_exec_queue *q;
> + struct xe_gt *gt = NULL;
> struct dma_fence *fence;
> + ktime_t start = xe_gt_stats_ktime_get();
>
> lockdep_assert_held_write(&group->mode_sem);
>
> @@ -262,11 +275,18 @@ static int xe_hw_engine_group_wait_for_dma_fence_jobs(struct xe_hw_engine_group
> fence = xe_exec_queue_last_fence_get_for_resume(q, q->vm);
> timeout = dma_fence_wait(fence, false);
> dma_fence_put(fence);
> + gt = q->gt;
>
> if (timeout < 0)
> return -ETIME;
> }
>
> + if (gt) {
> + xe_gt_stats_incr(gt,
> + XE_GT_STATS_ID_HW_ENGINE_GROUP_WAIT_DMA_QUEUE_US,
> + xe_gt_stats_ktime_us_delta(start));
> + }
> +
> return 0;
> }
>
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 24+ messages in thread
* ✗ CI.checkpatch: warning for Fix performance when pagefaults and 3d/display share resources (rev2)
2025-12-12 18:28 [PATCH v2 0/7] Fix performance when pagefaults and 3d/display share resources Matthew Brost
` (6 preceding siblings ...)
2025-12-12 18:28 ` [PATCH v2 7/7] drm/xe: Add more GT stats around pagefault mode switch flows Matthew Brost
@ 2025-12-12 22:37 ` Patchwork
2025-12-12 22:38 ` ✓ CI.KUnit: success " Patchwork
` (2 subsequent siblings)
10 siblings, 0 replies; 24+ messages in thread
From: Patchwork @ 2025-12-12 22:37 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe
== Series Details ==
Series: Fix performance when pagefaults and 3d/display share resources (rev2)
URL : https://patchwork.freedesktop.org/series/158833/
State : warning
== Summary ==
+ KERNEL=/kernel
+ git clone https://gitlab.freedesktop.org/drm/maintainer-tools mt
Cloning into 'mt'...
warning: redirecting to https://gitlab.freedesktop.org/drm/maintainer-tools.git/
+ git -C mt rev-list -n1 origin/master
8f50e69d0ce3656564bbdf8b3e213d61470d463f
+ cd /kernel
+ git config --global --add safe.directory /kernel
+ git log -n1
commit 14faf1fcddbb4e3727dabe0e69db1e556ef4e256
Author: Matthew Brost <matthew.brost@intel.com>
Date: Fri Dec 12 10:28:47 2025 -0800
drm/xe: Add more GT stats around pagefault mode switch flows
Add GT stats to measure the time spent switching between pagefault mode
and dma-fence mode. Also add a GT stat to indicate when pagefault
suspend is skipped because the system is idle. These metrics will help
profile pagefault workloads while 3D and display are enabled.
v2:
- Use GT stats helper functions (Francois)
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
+ /mt/dim checkpatch 54801dbae9da83dcad253f7fbe0e09864bfc028f drm-intel
29407e0d3416 drm/xe: Adjust long-running workload timeslices to reasonable values
82d936acadbb drm/xe: Use usleep_range for accurate long-running workload timeslicing
6d7a37623b4d drm/xe: Add debugfs knobs to control long running workload timeslicing
1bff613f0a2a drm/xe: Skip exec queue schedule toggle if queue is idle during suspend
-:123: ERROR:BRACKET_SPACE: space prohibited before open square bracket '['
#123: FILE: drivers/gpu/drm/xe/xe_guc_submit.c:1958:
+ u32 action [] = {
total: 1 errors, 0 warnings, 0 checks, 132 lines checked
5c053558870d drm/xe: Wait on in-syncs when swicthing to dma-fence mode
-:109: CHECK:LINE_SPACING: Please don't use multiple blank lines
#109: FILE: drivers/gpu/drm/xe/xe_hw_engine_group.c:219:
+
+
-:172: WARNING:DEEP_INDENTATION: Too many leading tabs - consider code refactoring
#172: FILE: drivers/gpu/drm/xe/xe_hw_engine_group.c:340:
+ if (err)
-:236: CHECK:BRACES: Blank lines aren't necessary after an open brace '{'
#236: FILE: drivers/gpu/drm/xe/xe_sync.c:255:
+{
+
total: 0 errors, 1 warnings, 2 checks, 199 lines checked
5b9ba0a64bd3 drm/xe: Add GT stats ktime helpers
14faf1fcddbb drm/xe: Add more GT stats around pagefault mode switch flows
^ permalink raw reply [flat|nested] 24+ messages in thread
* ✓ CI.KUnit: success for Fix performance when pagefaults and 3d/display share resources (rev2)
2025-12-12 18:28 [PATCH v2 0/7] Fix performance when pagefaults and 3d/display share resources Matthew Brost
` (7 preceding siblings ...)
2025-12-12 22:37 ` ✗ CI.checkpatch: warning for Fix performance when pagefaults and 3d/display share resources (rev2) Patchwork
@ 2025-12-12 22:38 ` Patchwork
2025-12-12 23:33 ` ✓ Xe.CI.BAT: " Patchwork
2025-12-13 19:27 ` ✗ Xe.CI.Full: failure " Patchwork
10 siblings, 0 replies; 24+ messages in thread
From: Patchwork @ 2025-12-12 22:38 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe
== Series Details ==
Series: Fix performance when pagefaults and 3d/display share resources (rev2)
URL : https://patchwork.freedesktop.org/series/158833/
State : success
== Summary ==
+ trap cleanup EXIT
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/xe/.kunitconfig
[22:37:23] Configuring KUnit Kernel ...
Generating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[22:37:27] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=25
[22:38:05] Starting KUnit Kernel (1/1)...
[22:38:05] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[22:38:05] ================== guc_buf (11 subtests) ===================
[22:38:05] [PASSED] test_smallest
[22:38:05] [PASSED] test_largest
[22:38:05] [PASSED] test_granular
[22:38:05] [PASSED] test_unique
[22:38:05] [PASSED] test_overlap
[22:38:05] [PASSED] test_reusable
[22:38:05] [PASSED] test_too_big
[22:38:05] [PASSED] test_flush
[22:38:05] [PASSED] test_lookup
[22:38:05] [PASSED] test_data
[22:38:05] [PASSED] test_class
[22:38:05] ===================== [PASSED] guc_buf =====================
[22:38:05] =================== guc_dbm (7 subtests) ===================
[22:38:05] [PASSED] test_empty
[22:38:05] [PASSED] test_default
[22:38:05] ======================== test_size ========================
[22:38:05] [PASSED] 4
[22:38:05] [PASSED] 8
[22:38:05] [PASSED] 32
[22:38:05] [PASSED] 256
[22:38:05] ==================== [PASSED] test_size ====================
[22:38:05] ======================= test_reuse ========================
[22:38:05] [PASSED] 4
[22:38:05] [PASSED] 8
[22:38:05] [PASSED] 32
[22:38:05] [PASSED] 256
[22:38:05] =================== [PASSED] test_reuse ====================
[22:38:05] =================== test_range_overlap ====================
[22:38:05] [PASSED] 4
[22:38:05] [PASSED] 8
[22:38:05] [PASSED] 32
[22:38:05] [PASSED] 256
[22:38:05] =============== [PASSED] test_range_overlap ================
[22:38:05] =================== test_range_compact ====================
[22:38:05] [PASSED] 4
[22:38:05] [PASSED] 8
[22:38:05] [PASSED] 32
[22:38:05] [PASSED] 256
[22:38:05] =============== [PASSED] test_range_compact ================
[22:38:05] ==================== test_range_spare =====================
[22:38:05] [PASSED] 4
[22:38:05] [PASSED] 8
[22:38:05] [PASSED] 32
[22:38:05] [PASSED] 256
[22:38:05] ================ [PASSED] test_range_spare =================
[22:38:05] ===================== [PASSED] guc_dbm =====================
[22:38:05] =================== guc_idm (6 subtests) ===================
[22:38:05] [PASSED] bad_init
[22:38:05] [PASSED] no_init
[22:38:05] [PASSED] init_fini
[22:38:05] [PASSED] check_used
[22:38:05] [PASSED] check_quota
[22:38:05] [PASSED] check_all
[22:38:05] ===================== [PASSED] guc_idm =====================
[22:38:05] ================== no_relay (3 subtests) ===================
[22:38:05] [PASSED] xe_drops_guc2pf_if_not_ready
[22:38:05] [PASSED] xe_drops_guc2vf_if_not_ready
[22:38:05] [PASSED] xe_rejects_send_if_not_ready
[22:38:05] ==================== [PASSED] no_relay =====================
[22:38:05] ================== pf_relay (14 subtests) ==================
[22:38:05] [PASSED] pf_rejects_guc2pf_too_short
[22:38:05] [PASSED] pf_rejects_guc2pf_too_long
[22:38:05] [PASSED] pf_rejects_guc2pf_no_payload
[22:38:05] [PASSED] pf_fails_no_payload
[22:38:05] [PASSED] pf_fails_bad_origin
[22:38:05] [PASSED] pf_fails_bad_type
[22:38:05] [PASSED] pf_txn_reports_error
[22:38:05] [PASSED] pf_txn_sends_pf2guc
[22:38:05] [PASSED] pf_sends_pf2guc
[22:38:05] [SKIPPED] pf_loopback_nop
[22:38:05] [SKIPPED] pf_loopback_echo
[22:38:05] [SKIPPED] pf_loopback_fail
[22:38:05] [SKIPPED] pf_loopback_busy
[22:38:05] [SKIPPED] pf_loopback_retry
[22:38:05] ==================== [PASSED] pf_relay =====================
[22:38:05] ================== vf_relay (3 subtests) ===================
[22:38:05] [PASSED] vf_rejects_guc2vf_too_short
[22:38:05] [PASSED] vf_rejects_guc2vf_too_long
[22:38:05] [PASSED] vf_rejects_guc2vf_no_payload
[22:38:05] ==================== [PASSED] vf_relay =====================
[22:38:05] ================ pf_gt_config (6 subtests) =================
[22:38:05] [PASSED] fair_contexts_1vf
[22:38:05] [PASSED] fair_doorbells_1vf
[22:38:05] [PASSED] fair_ggtt_1vf
[22:38:05] ====================== fair_contexts ======================
[22:38:05] [PASSED] 1 VF
[22:38:05] [PASSED] 2 VFs
[22:38:05] [PASSED] 3 VFs
[22:38:05] [PASSED] 4 VFs
[22:38:05] [PASSED] 5 VFs
[22:38:05] [PASSED] 6 VFs
[22:38:05] [PASSED] 7 VFs
[22:38:05] [PASSED] 8 VFs
[22:38:05] [PASSED] 9 VFs
[22:38:05] [PASSED] 10 VFs
[22:38:05] [PASSED] 11 VFs
[22:38:05] [PASSED] 12 VFs
[22:38:05] [PASSED] 13 VFs
[22:38:05] [PASSED] 14 VFs
[22:38:05] [PASSED] 15 VFs
[22:38:05] [PASSED] 16 VFs
[22:38:05] [PASSED] 17 VFs
[22:38:05] [PASSED] 18 VFs
[22:38:05] [PASSED] 19 VFs
[22:38:05] [PASSED] 20 VFs
[22:38:05] [PASSED] 21 VFs
[22:38:05] [PASSED] 22 VFs
[22:38:05] [PASSED] 23 VFs
[22:38:05] [PASSED] 24 VFs
[22:38:05] [PASSED] 25 VFs
[22:38:05] [PASSED] 26 VFs
[22:38:05] [PASSED] 27 VFs
[22:38:05] [PASSED] 28 VFs
[22:38:05] [PASSED] 29 VFs
[22:38:05] [PASSED] 30 VFs
[22:38:05] [PASSED] 31 VFs
[22:38:05] [PASSED] 32 VFs
[22:38:05] [PASSED] 33 VFs
[22:38:05] [PASSED] 34 VFs
[22:38:05] [PASSED] 35 VFs
[22:38:05] [PASSED] 36 VFs
[22:38:05] [PASSED] 37 VFs
[22:38:05] [PASSED] 38 VFs
[22:38:05] [PASSED] 39 VFs
[22:38:05] [PASSED] 40 VFs
[22:38:05] [PASSED] 41 VFs
[22:38:05] [PASSED] 42 VFs
[22:38:05] [PASSED] 43 VFs
[22:38:05] [PASSED] 44 VFs
[22:38:05] [PASSED] 45 VFs
[22:38:05] [PASSED] 46 VFs
[22:38:05] [PASSED] 47 VFs
[22:38:05] [PASSED] 48 VFs
[22:38:05] [PASSED] 49 VFs
[22:38:05] [PASSED] 50 VFs
[22:38:05] [PASSED] 51 VFs
[22:38:05] [PASSED] 52 VFs
[22:38:05] [PASSED] 53 VFs
[22:38:05] [PASSED] 54 VFs
[22:38:05] [PASSED] 55 VFs
[22:38:05] [PASSED] 56 VFs
[22:38:05] [PASSED] 57 VFs
[22:38:05] [PASSED] 58 VFs
[22:38:05] [PASSED] 59 VFs
[22:38:05] [PASSED] 60 VFs
[22:38:05] [PASSED] 61 VFs
[22:38:05] [PASSED] 62 VFs
[22:38:05] [PASSED] 63 VFs
[22:38:05] ================== [PASSED] fair_contexts ==================
[22:38:05] ===================== fair_doorbells ======================
[22:38:05] [PASSED] 1 VF
[22:38:05] [PASSED] 2 VFs
[22:38:05] [PASSED] 3 VFs
[22:38:05] [PASSED] 4 VFs
[22:38:05] [PASSED] 5 VFs
[22:38:05] [PASSED] 6 VFs
[22:38:05] [PASSED] 7 VFs
[22:38:05] [PASSED] 8 VFs
[22:38:05] [PASSED] 9 VFs
[22:38:05] [PASSED] 10 VFs
[22:38:05] [PASSED] 11 VFs
[22:38:05] [PASSED] 12 VFs
[22:38:05] [PASSED] 13 VFs
[22:38:05] [PASSED] 14 VFs
[22:38:05] [PASSED] 15 VFs
[22:38:05] [PASSED] 16 VFs
[22:38:05] [PASSED] 17 VFs
[22:38:05] [PASSED] 18 VFs
[22:38:05] [PASSED] 19 VFs
[22:38:05] [PASSED] 20 VFs
[22:38:05] [PASSED] 21 VFs
[22:38:05] [PASSED] 22 VFs
[22:38:05] [PASSED] 23 VFs
[22:38:05] [PASSED] 24 VFs
[22:38:05] [PASSED] 25 VFs
[22:38:05] [PASSED] 26 VFs
[22:38:05] [PASSED] 27 VFs
[22:38:05] [PASSED] 28 VFs
[22:38:05] [PASSED] 29 VFs
[22:38:05] [PASSED] 30 VFs
[22:38:05] [PASSED] 31 VFs
[22:38:05] [PASSED] 32 VFs
[22:38:05] [PASSED] 33 VFs
[22:38:05] [PASSED] 34 VFs
[22:38:05] [PASSED] 35 VFs
[22:38:05] [PASSED] 36 VFs
[22:38:05] [PASSED] 37 VFs
[22:38:05] [PASSED] 38 VFs
[22:38:05] [PASSED] 39 VFs
[22:38:05] [PASSED] 40 VFs
[22:38:05] [PASSED] 41 VFs
[22:38:05] [PASSED] 42 VFs
[22:38:05] [PASSED] 43 VFs
[22:38:05] [PASSED] 44 VFs
[22:38:05] [PASSED] 45 VFs
[22:38:05] [PASSED] 46 VFs
[22:38:05] [PASSED] 47 VFs
[22:38:05] [PASSED] 48 VFs
[22:38:05] [PASSED] 49 VFs
[22:38:05] [PASSED] 50 VFs
[22:38:05] [PASSED] 51 VFs
[22:38:05] [PASSED] 52 VFs
[22:38:05] [PASSED] 53 VFs
[22:38:05] [PASSED] 54 VFs
[22:38:05] [PASSED] 55 VFs
[22:38:05] [PASSED] 56 VFs
[22:38:05] [PASSED] 57 VFs
[22:38:05] [PASSED] 58 VFs
[22:38:05] [PASSED] 59 VFs
[22:38:05] [PASSED] 60 VFs
[22:38:05] [PASSED] 61 VFs
[22:38:05] [PASSED] 62 VFs
[22:38:05] [PASSED] 63 VFs
[22:38:05] ================= [PASSED] fair_doorbells ==================
[22:38:05] ======================== fair_ggtt ========================
[22:38:05] [PASSED] 1 VF
[22:38:05] [PASSED] 2 VFs
[22:38:05] [PASSED] 3 VFs
[22:38:05] [PASSED] 4 VFs
[22:38:05] [PASSED] 5 VFs
[22:38:05] [PASSED] 6 VFs
[22:38:05] [PASSED] 7 VFs
[22:38:05] [PASSED] 8 VFs
[22:38:05] [PASSED] 9 VFs
[22:38:05] [PASSED] 10 VFs
[22:38:05] [PASSED] 11 VFs
[22:38:05] [PASSED] 12 VFs
[22:38:05] [PASSED] 13 VFs
[22:38:05] [PASSED] 14 VFs
[22:38:05] [PASSED] 15 VFs
[22:38:05] [PASSED] 16 VFs
[22:38:05] [PASSED] 17 VFs
[22:38:05] [PASSED] 18 VFs
[22:38:05] [PASSED] 19 VFs
[22:38:05] [PASSED] 20 VFs
[22:38:05] [PASSED] 21 VFs
[22:38:05] [PASSED] 22 VFs
[22:38:05] [PASSED] 23 VFs
[22:38:05] [PASSED] 24 VFs
[22:38:05] [PASSED] 25 VFs
[22:38:05] [PASSED] 26 VFs
[22:38:05] [PASSED] 27 VFs
[22:38:05] [PASSED] 28 VFs
[22:38:05] [PASSED] 29 VFs
[22:38:05] [PASSED] 30 VFs
[22:38:05] [PASSED] 31 VFs
[22:38:05] [PASSED] 32 VFs
[22:38:05] [PASSED] 33 VFs
[22:38:05] [PASSED] 34 VFs
[22:38:05] [PASSED] 35 VFs
[22:38:05] [PASSED] 36 VFs
[22:38:05] [PASSED] 37 VFs
[22:38:05] [PASSED] 38 VFs
[22:38:05] [PASSED] 39 VFs
[22:38:05] [PASSED] 40 VFs
[22:38:05] [PASSED] 41 VFs
[22:38:05] [PASSED] 42 VFs
[22:38:05] [PASSED] 43 VFs
[22:38:05] [PASSED] 44 VFs
[22:38:05] [PASSED] 45 VFs
[22:38:05] [PASSED] 46 VFs
[22:38:05] [PASSED] 47 VFs
[22:38:05] [PASSED] 48 VFs
[22:38:05] [PASSED] 49 VFs
[22:38:05] [PASSED] 50 VFs
[22:38:05] [PASSED] 51 VFs
[22:38:05] [PASSED] 52 VFs
[22:38:05] [PASSED] 53 VFs
[22:38:05] [PASSED] 54 VFs
[22:38:05] [PASSED] 55 VFs
[22:38:05] [PASSED] 56 VFs
[22:38:05] [PASSED] 57 VFs
[22:38:05] [PASSED] 58 VFs
[22:38:05] [PASSED] 59 VFs
[22:38:05] [PASSED] 60 VFs
[22:38:05] [PASSED] 61 VFs
[22:38:05] [PASSED] 62 VFs
[22:38:05] [PASSED] 63 VFs
[22:38:05] ==================== [PASSED] fair_ggtt ====================
[22:38:05] ================== [PASSED] pf_gt_config ===================
[22:38:05] ===================== lmtt (1 subtest) =====================
[22:38:05] ======================== test_ops =========================
[22:38:05] [PASSED] 2-level
[22:38:05] [PASSED] multi-level
[22:38:05] ==================== [PASSED] test_ops =====================
[22:38:05] ====================== [PASSED] lmtt =======================
[22:38:05] ================= pf_service (11 subtests) =================
[22:38:05] [PASSED] pf_negotiate_any
[22:38:05] [PASSED] pf_negotiate_base_match
[22:38:05] [PASSED] pf_negotiate_base_newer
[22:38:05] [PASSED] pf_negotiate_base_next
[22:38:05] [SKIPPED] pf_negotiate_base_older
[22:38:05] [PASSED] pf_negotiate_base_prev
[22:38:05] [PASSED] pf_negotiate_latest_match
[22:38:05] [PASSED] pf_negotiate_latest_newer
[22:38:05] [PASSED] pf_negotiate_latest_next
[22:38:05] [SKIPPED] pf_negotiate_latest_older
[22:38:05] [SKIPPED] pf_negotiate_latest_prev
[22:38:05] =================== [PASSED] pf_service ====================
[22:38:05] ================= xe_guc_g2g (2 subtests) ==================
[22:38:05] ============== xe_live_guc_g2g_kunit_default ==============
[22:38:05] ========= [SKIPPED] xe_live_guc_g2g_kunit_default ==========
[22:38:05] ============== xe_live_guc_g2g_kunit_allmem ===============
[22:38:05] ========== [SKIPPED] xe_live_guc_g2g_kunit_allmem ==========
[22:38:05] =================== [SKIPPED] xe_guc_g2g ===================
[22:38:05] =================== xe_mocs (2 subtests) ===================
[22:38:05] ================ xe_live_mocs_kernel_kunit ================
[22:38:05] =========== [SKIPPED] xe_live_mocs_kernel_kunit ============
[22:38:05] ================ xe_live_mocs_reset_kunit =================
[22:38:05] ============ [SKIPPED] xe_live_mocs_reset_kunit ============
[22:38:05] ==================== [SKIPPED] xe_mocs =====================
[22:38:05] ================= xe_migrate (2 subtests) ==================
[22:38:05] ================= xe_migrate_sanity_kunit =================
[22:38:05] ============ [SKIPPED] xe_migrate_sanity_kunit =============
[22:38:05] ================== xe_validate_ccs_kunit ==================
[22:38:05] ============= [SKIPPED] xe_validate_ccs_kunit ==============
[22:38:05] =================== [SKIPPED] xe_migrate ===================
[22:38:05] ================== xe_dma_buf (1 subtest) ==================
[22:38:05] ==================== xe_dma_buf_kunit =====================
[22:38:05] ================ [SKIPPED] xe_dma_buf_kunit ================
[22:38:05] =================== [SKIPPED] xe_dma_buf ===================
[22:38:05] ================= xe_bo_shrink (1 subtest) =================
[22:38:05] =================== xe_bo_shrink_kunit ====================
[22:38:05] =============== [SKIPPED] xe_bo_shrink_kunit ===============
[22:38:05] ================== [SKIPPED] xe_bo_shrink ==================
[22:38:05] ==================== xe_bo (2 subtests) ====================
[22:38:05] ================== xe_ccs_migrate_kunit ===================
[22:38:05] ============== [SKIPPED] xe_ccs_migrate_kunit ==============
[22:38:05] ==================== xe_bo_evict_kunit ====================
[22:38:05] =============== [SKIPPED] xe_bo_evict_kunit ================
[22:38:05] ===================== [SKIPPED] xe_bo ======================
[22:38:05] ==================== args (11 subtests) ====================
[22:38:05] [PASSED] count_args_test
[22:38:05] [PASSED] call_args_example
[22:38:05] [PASSED] call_args_test
[22:38:05] [PASSED] drop_first_arg_example
[22:38:05] [PASSED] drop_first_arg_test
[22:38:05] [PASSED] first_arg_example
[22:38:05] [PASSED] first_arg_test
[22:38:05] [PASSED] last_arg_example
[22:38:05] [PASSED] last_arg_test
[22:38:05] [PASSED] pick_arg_example
[22:38:05] [PASSED] sep_comma_example
[22:38:05] ====================== [PASSED] args =======================
[22:38:05] =================== xe_pci (3 subtests) ====================
[22:38:05] ==================== check_graphics_ip ====================
[22:38:05] [PASSED] 12.00 Xe_LP
[22:38:05] [PASSED] 12.10 Xe_LP+
[22:38:05] [PASSED] 12.55 Xe_HPG
[22:38:05] [PASSED] 12.60 Xe_HPC
[22:38:05] [PASSED] 12.70 Xe_LPG
[22:38:05] [PASSED] 12.71 Xe_LPG
[22:38:05] [PASSED] 12.74 Xe_LPG+
[22:38:05] [PASSED] 20.01 Xe2_HPG
[22:38:05] [PASSED] 20.02 Xe2_HPG
[22:38:05] [PASSED] 20.04 Xe2_LPG
[22:38:05] [PASSED] 30.00 Xe3_LPG
[22:38:05] [PASSED] 30.01 Xe3_LPG
[22:38:05] [PASSED] 30.03 Xe3_LPG
[22:38:05] [PASSED] 30.04 Xe3_LPG
[22:38:05] [PASSED] 30.05 Xe3_LPG
[22:38:05] [PASSED] 35.11 Xe3p_XPC
[22:38:05] ================ [PASSED] check_graphics_ip ================
[22:38:05] ===================== check_media_ip ======================
[22:38:05] [PASSED] 12.00 Xe_M
[22:38:05] [PASSED] 12.55 Xe_HPM
[22:38:05] [PASSED] 13.00 Xe_LPM+
[22:38:05] [PASSED] 13.01 Xe2_HPM
[22:38:05] [PASSED] 20.00 Xe2_LPM
[22:38:05] [PASSED] 30.00 Xe3_LPM
[22:38:05] [PASSED] 30.02 Xe3_LPM
[22:38:05] [PASSED] 35.00 Xe3p_LPM
[22:38:05] [PASSED] 35.03 Xe3p_HPM
[22:38:05] ================= [PASSED] check_media_ip ==================
[22:38:05] =================== check_platform_desc ===================
[22:38:05] [PASSED] 0x9A60 (TIGERLAKE)
[22:38:05] [PASSED] 0x9A68 (TIGERLAKE)
[22:38:05] [PASSED] 0x9A70 (TIGERLAKE)
[22:38:05] [PASSED] 0x9A40 (TIGERLAKE)
[22:38:05] [PASSED] 0x9A49 (TIGERLAKE)
[22:38:05] [PASSED] 0x9A59 (TIGERLAKE)
[22:38:05] [PASSED] 0x9A78 (TIGERLAKE)
[22:38:05] [PASSED] 0x9AC0 (TIGERLAKE)
[22:38:05] [PASSED] 0x9AC9 (TIGERLAKE)
[22:38:05] [PASSED] 0x9AD9 (TIGERLAKE)
[22:38:05] [PASSED] 0x9AF8 (TIGERLAKE)
[22:38:05] [PASSED] 0x4C80 (ROCKETLAKE)
[22:38:05] [PASSED] 0x4C8A (ROCKETLAKE)
[22:38:05] [PASSED] 0x4C8B (ROCKETLAKE)
[22:38:05] [PASSED] 0x4C8C (ROCKETLAKE)
[22:38:05] [PASSED] 0x4C90 (ROCKETLAKE)
[22:38:05] [PASSED] 0x4C9A (ROCKETLAKE)
[22:38:05] [PASSED] 0x4680 (ALDERLAKE_S)
[22:38:05] [PASSED] 0x4682 (ALDERLAKE_S)
[22:38:05] [PASSED] 0x4688 (ALDERLAKE_S)
[22:38:05] [PASSED] 0x468A (ALDERLAKE_S)
[22:38:05] [PASSED] 0x468B (ALDERLAKE_S)
[22:38:05] [PASSED] 0x4690 (ALDERLAKE_S)
[22:38:05] [PASSED] 0x4692 (ALDERLAKE_S)
[22:38:05] [PASSED] 0x4693 (ALDERLAKE_S)
[22:38:05] [PASSED] 0x46A0 (ALDERLAKE_P)
[22:38:05] [PASSED] 0x46A1 (ALDERLAKE_P)
[22:38:05] [PASSED] 0x46A2 (ALDERLAKE_P)
[22:38:05] [PASSED] 0x46A3 (ALDERLAKE_P)
[22:38:05] [PASSED] 0x46A6 (ALDERLAKE_P)
[22:38:05] [PASSED] 0x46A8 (ALDERLAKE_P)
[22:38:05] [PASSED] 0x46AA (ALDERLAKE_P)
[22:38:05] [PASSED] 0x462A (ALDERLAKE_P)
[22:38:05] [PASSED] 0x4626 (ALDERLAKE_P)
[22:38:05] [PASSED] 0x4628 (ALDERLAKE_P)
[22:38:05] [PASSED] 0x46B0 (ALDERLAKE_P)
[22:38:05] [PASSED] 0x46B1 (ALDERLAKE_P)
[22:38:05] [PASSED] 0x46B2 (ALDERLAKE_P)
[22:38:05] [PASSED] 0x46B3 (ALDERLAKE_P)
[22:38:05] [PASSED] 0x46C0 (ALDERLAKE_P)
[22:38:05] [PASSED] 0x46C1 (ALDERLAKE_P)
[22:38:05] [PASSED] 0x46C2 (ALDERLAKE_P)
[22:38:05] [PASSED] 0x46C3 (ALDERLAKE_P)
[22:38:05] [PASSED] 0x46D0 (ALDERLAKE_N)
[22:38:05] [PASSED] 0x46D1 (ALDERLAKE_N)
[22:38:05] [PASSED] 0x46D2 (ALDERLAKE_N)
[22:38:05] [PASSED] 0x46D3 (ALDERLAKE_N)
[22:38:05] [PASSED] 0x46D4 (ALDERLAKE_N)
[22:38:05] [PASSED] 0xA721 (ALDERLAKE_P)
[22:38:05] [PASSED] 0xA7A1 (ALDERLAKE_P)
[22:38:05] [PASSED] 0xA7A9 (ALDERLAKE_P)
[22:38:05] [PASSED] 0xA7AC (ALDERLAKE_P)
[22:38:05] [PASSED] 0xA7AD (ALDERLAKE_P)
[22:38:05] [PASSED] 0xA720 (ALDERLAKE_P)
[22:38:05] [PASSED] 0xA7A0 (ALDERLAKE_P)
[22:38:05] [PASSED] 0xA7A8 (ALDERLAKE_P)
[22:38:05] [PASSED] 0xA7AA (ALDERLAKE_P)
[22:38:05] [PASSED] 0xA7AB (ALDERLAKE_P)
[22:38:05] [PASSED] 0xA780 (ALDERLAKE_S)
[22:38:05] [PASSED] 0xA781 (ALDERLAKE_S)
[22:38:05] [PASSED] 0xA782 (ALDERLAKE_S)
[22:38:05] [PASSED] 0xA783 (ALDERLAKE_S)
[22:38:05] [PASSED] 0xA788 (ALDERLAKE_S)
[22:38:05] [PASSED] 0xA789 (ALDERLAKE_S)
[22:38:05] [PASSED] 0xA78A (ALDERLAKE_S)
[22:38:05] [PASSED] 0xA78B (ALDERLAKE_S)
[22:38:05] [PASSED] 0x4905 (DG1)
[22:38:05] [PASSED] 0x4906 (DG1)
[22:38:05] [PASSED] 0x4907 (DG1)
[22:38:05] [PASSED] 0x4908 (DG1)
[22:38:05] [PASSED] 0x4909 (DG1)
[22:38:05] [PASSED] 0x56C0 (DG2)
[22:38:05] [PASSED] 0x56C2 (DG2)
[22:38:05] [PASSED] 0x56C1 (DG2)
[22:38:05] [PASSED] 0x7D51 (METEORLAKE)
[22:38:05] [PASSED] 0x7DD1 (METEORLAKE)
[22:38:05] [PASSED] 0x7D41 (METEORLAKE)
[22:38:05] [PASSED] 0x7D67 (METEORLAKE)
[22:38:05] [PASSED] 0xB640 (METEORLAKE)
[22:38:05] [PASSED] 0x56A0 (DG2)
[22:38:05] [PASSED] 0x56A1 (DG2)
[22:38:05] [PASSED] 0x56A2 (DG2)
[22:38:05] [PASSED] 0x56BE (DG2)
[22:38:05] [PASSED] 0x56BF (DG2)
[22:38:05] [PASSED] 0x5690 (DG2)
[22:38:05] [PASSED] 0x5691 (DG2)
[22:38:05] [PASSED] 0x5692 (DG2)
[22:38:05] [PASSED] 0x56A5 (DG2)
[22:38:05] [PASSED] 0x56A6 (DG2)
[22:38:05] [PASSED] 0x56B0 (DG2)
[22:38:05] [PASSED] 0x56B1 (DG2)
[22:38:05] [PASSED] 0x56BA (DG2)
[22:38:05] [PASSED] 0x56BB (DG2)
[22:38:05] [PASSED] 0x56BC (DG2)
[22:38:05] [PASSED] 0x56BD (DG2)
[22:38:05] [PASSED] 0x5693 (DG2)
[22:38:05] [PASSED] 0x5694 (DG2)
[22:38:05] [PASSED] 0x5695 (DG2)
[22:38:05] [PASSED] 0x56A3 (DG2)
[22:38:05] [PASSED] 0x56A4 (DG2)
[22:38:05] [PASSED] 0x56B2 (DG2)
[22:38:05] [PASSED] 0x56B3 (DG2)
[22:38:05] [PASSED] 0x5696 (DG2)
[22:38:05] [PASSED] 0x5697 (DG2)
[22:38:05] [PASSED] 0xB69 (PVC)
[22:38:05] [PASSED] 0xB6E (PVC)
[22:38:05] [PASSED] 0xBD4 (PVC)
[22:38:05] [PASSED] 0xBD5 (PVC)
[22:38:05] [PASSED] 0xBD6 (PVC)
[22:38:05] [PASSED] 0xBD7 (PVC)
[22:38:05] [PASSED] 0xBD8 (PVC)
[22:38:05] [PASSED] 0xBD9 (PVC)
[22:38:05] [PASSED] 0xBDA (PVC)
[22:38:05] [PASSED] 0xBDB (PVC)
[22:38:05] [PASSED] 0xBE0 (PVC)
[22:38:05] [PASSED] 0xBE1 (PVC)
[22:38:05] [PASSED] 0xBE5 (PVC)
[22:38:05] [PASSED] 0x7D40 (METEORLAKE)
[22:38:05] [PASSED] 0x7D45 (METEORLAKE)
[22:38:05] [PASSED] 0x7D55 (METEORLAKE)
[22:38:05] [PASSED] 0x7D60 (METEORLAKE)
[22:38:05] [PASSED] 0x7DD5 (METEORLAKE)
[22:38:05] [PASSED] 0x6420 (LUNARLAKE)
[22:38:05] [PASSED] 0x64A0 (LUNARLAKE)
[22:38:05] [PASSED] 0x64B0 (LUNARLAKE)
[22:38:05] [PASSED] 0xE202 (BATTLEMAGE)
[22:38:05] [PASSED] 0xE209 (BATTLEMAGE)
[22:38:05] [PASSED] 0xE20B (BATTLEMAGE)
[22:38:05] [PASSED] 0xE20C (BATTLEMAGE)
[22:38:05] [PASSED] 0xE20D (BATTLEMAGE)
[22:38:05] [PASSED] 0xE210 (BATTLEMAGE)
[22:38:05] [PASSED] 0xE211 (BATTLEMAGE)
[22:38:05] [PASSED] 0xE212 (BATTLEMAGE)
[22:38:05] [PASSED] 0xE216 (BATTLEMAGE)
[22:38:05] [PASSED] 0xE220 (BATTLEMAGE)
[22:38:05] [PASSED] 0xE221 (BATTLEMAGE)
[22:38:05] [PASSED] 0xE222 (BATTLEMAGE)
[22:38:05] [PASSED] 0xE223 (BATTLEMAGE)
[22:38:05] [PASSED] 0xB080 (PANTHERLAKE)
[22:38:05] [PASSED] 0xB081 (PANTHERLAKE)
[22:38:05] [PASSED] 0xB082 (PANTHERLAKE)
[22:38:05] [PASSED] 0xB083 (PANTHERLAKE)
[22:38:05] [PASSED] 0xB084 (PANTHERLAKE)
[22:38:05] [PASSED] 0xB085 (PANTHERLAKE)
[22:38:05] [PASSED] 0xB086 (PANTHERLAKE)
[22:38:05] [PASSED] 0xB087 (PANTHERLAKE)
[22:38:05] [PASSED] 0xB08F (PANTHERLAKE)
[22:38:05] [PASSED] 0xB090 (PANTHERLAKE)
[22:38:05] [PASSED] 0xB0A0 (PANTHERLAKE)
[22:38:05] [PASSED] 0xB0B0 (PANTHERLAKE)
[22:38:05] [PASSED] 0xD740 (NOVALAKE_S)
[22:38:05] [PASSED] 0xD741 (NOVALAKE_S)
[22:38:05] [PASSED] 0xD742 (NOVALAKE_S)
[22:38:05] [PASSED] 0xD743 (NOVALAKE_S)
[22:38:05] [PASSED] 0xD744 (NOVALAKE_S)
[22:38:05] [PASSED] 0xD745 (NOVALAKE_S)
[22:38:05] [PASSED] 0x674C (CRESCENTISLAND)
[22:38:05] [PASSED] 0xFD80 (PANTHERLAKE)
[22:38:05] [PASSED] 0xFD81 (PANTHERLAKE)
[22:38:05] =============== [PASSED] check_platform_desc ===============
[22:38:05] ===================== [PASSED] xe_pci ======================
[22:38:05] =================== xe_rtp (2 subtests) ====================
[22:38:05] =============== xe_rtp_process_to_sr_tests ================
[22:38:05] [PASSED] coalesce-same-reg
[22:38:05] [PASSED] no-match-no-add
[22:38:05] [PASSED] match-or
[22:38:05] [PASSED] match-or-xfail
[22:38:05] [PASSED] no-match-no-add-multiple-rules
[22:38:05] [PASSED] two-regs-two-entries
[22:38:05] [PASSED] clr-one-set-other
[22:38:05] [PASSED] set-field
[22:38:05] [PASSED] conflict-duplicate
[22:38:05] [PASSED] conflict-not-disjoint
[22:38:05] [PASSED] conflict-reg-type
[22:38:05] =========== [PASSED] xe_rtp_process_to_sr_tests ============
[22:38:05] ================== xe_rtp_process_tests ===================
[22:38:05] [PASSED] active1
[22:38:05] [PASSED] active2
[22:38:05] [PASSED] active-inactive
[22:38:05] [PASSED] inactive-active
[22:38:05] [PASSED] inactive-1st_or_active-inactive
[22:38:05] [PASSED] inactive-2nd_or_active-inactive
[22:38:05] [PASSED] inactive-last_or_active-inactive
[22:38:05] [PASSED] inactive-no_or_active-inactive
[22:38:05] ============== [PASSED] xe_rtp_process_tests ===============
[22:38:05] ===================== [PASSED] xe_rtp ======================
[22:38:05] ==================== xe_wa (1 subtest) =====================
[22:38:05] ======================== xe_wa_gt =========================
[22:38:05] [PASSED] TIGERLAKE B0
[22:38:05] [PASSED] DG1 A0
[22:38:05] [PASSED] DG1 B0
[22:38:05] [PASSED] ALDERLAKE_S A0
[22:38:05] [PASSED] ALDERLAKE_S B0
[22:38:05] [PASSED] ALDERLAKE_S C0
[22:38:05] [PASSED] ALDERLAKE_S D0
[22:38:05] [PASSED] ALDERLAKE_P A0
[22:38:05] [PASSED] ALDERLAKE_P B0
[22:38:05] [PASSED] ALDERLAKE_P C0
[22:38:05] [PASSED] ALDERLAKE_S RPLS D0
[22:38:05] [PASSED] ALDERLAKE_P RPLU E0
[22:38:05] [PASSED] DG2 G10 C0
[22:38:05] [PASSED] DG2 G11 B1
[22:38:05] [PASSED] DG2 G12 A1
[22:38:05] [PASSED] METEORLAKE 12.70(Xe_LPG) A0 13.00(Xe_LPM+) A0
[22:38:05] [PASSED] METEORLAKE 12.71(Xe_LPG) A0 13.00(Xe_LPM+) A0
[22:38:05] [PASSED] METEORLAKE 12.74(Xe_LPG+) A0 13.00(Xe_LPM+) A0
[22:38:05] [PASSED] LUNARLAKE 20.04(Xe2_LPG) A0 20.00(Xe2_LPM) A0
[22:38:05] [PASSED] LUNARLAKE 20.04(Xe2_LPG) B0 20.00(Xe2_LPM) A0
[22:38:05] [PASSED] BATTLEMAGE 20.01(Xe2_HPG) A0 13.01(Xe2_HPM) A1
[22:38:05] [PASSED] PANTHERLAKE 30.00(Xe3_LPG) A0 30.00(Xe3_LPM) A0
[22:38:05] ==================== [PASSED] xe_wa_gt =====================
[22:38:05] ====================== [PASSED] xe_wa ======================
[22:38:05] ============================================================
[22:38:05] Testing complete. Ran 510 tests: passed: 492, skipped: 18
[22:38:05] Elapsed time: 42.447s total, 4.374s configuring, 37.555s building, 0.493s running
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/tests/.kunitconfig
[22:38:06] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[22:38:07] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=25
[22:38:37] Starting KUnit Kernel (1/1)...
[22:38:37] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[22:38:37] ============ drm_test_pick_cmdline (2 subtests) ============
[22:38:37] [PASSED] drm_test_pick_cmdline_res_1920_1080_60
[22:38:37] =============== drm_test_pick_cmdline_named ===============
[22:38:37] [PASSED] NTSC
[22:38:37] [PASSED] NTSC-J
[22:38:37] [PASSED] PAL
[22:38:37] [PASSED] PAL-M
[22:38:37] =========== [PASSED] drm_test_pick_cmdline_named ===========
[22:38:37] ============== [PASSED] drm_test_pick_cmdline ==============
[22:38:37] == drm_test_atomic_get_connector_for_encoder (1 subtest) ===
[22:38:37] [PASSED] drm_test_drm_atomic_get_connector_for_encoder
[22:38:37] ==== [PASSED] drm_test_atomic_get_connector_for_encoder ====
[22:38:37] =========== drm_validate_clone_mode (2 subtests) ===========
[22:38:37] ============== drm_test_check_in_clone_mode ===============
[22:38:37] [PASSED] in_clone_mode
[22:38:37] [PASSED] not_in_clone_mode
[22:38:37] ========== [PASSED] drm_test_check_in_clone_mode ===========
[22:38:37] =============== drm_test_check_valid_clones ===============
[22:38:37] [PASSED] not_in_clone_mode
[22:38:37] [PASSED] valid_clone
[22:38:37] [PASSED] invalid_clone
[22:38:37] =========== [PASSED] drm_test_check_valid_clones ===========
[22:38:37] ============= [PASSED] drm_validate_clone_mode =============
[22:38:37] ============= drm_validate_modeset (1 subtest) =============
[22:38:37] [PASSED] drm_test_check_connector_changed_modeset
[22:38:37] ============== [PASSED] drm_validate_modeset ===============
[22:38:37] ====== drm_test_bridge_get_current_state (2 subtests) ======
[22:38:37] [PASSED] drm_test_drm_bridge_get_current_state_atomic
[22:38:37] [PASSED] drm_test_drm_bridge_get_current_state_legacy
[22:38:37] ======== [PASSED] drm_test_bridge_get_current_state ========
[22:38:37] ====== drm_test_bridge_helper_reset_crtc (3 subtests) ======
[22:38:37] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic
[22:38:37] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic_disabled
[22:38:37] [PASSED] drm_test_drm_bridge_helper_reset_crtc_legacy
[22:38:37] ======== [PASSED] drm_test_bridge_helper_reset_crtc ========
[22:38:37] ============== drm_bridge_alloc (2 subtests) ===============
[22:38:37] [PASSED] drm_test_drm_bridge_alloc_basic
[22:38:37] [PASSED] drm_test_drm_bridge_alloc_get_put
[22:38:37] ================ [PASSED] drm_bridge_alloc =================
[22:38:37] ================== drm_buddy (8 subtests) ==================
[22:38:37] [PASSED] drm_test_buddy_alloc_limit
[22:38:37] [PASSED] drm_test_buddy_alloc_optimistic
[22:38:37] [PASSED] drm_test_buddy_alloc_pessimistic
[22:38:37] [PASSED] drm_test_buddy_alloc_pathological
[22:38:37] [PASSED] drm_test_buddy_alloc_contiguous
[22:38:37] [PASSED] drm_test_buddy_alloc_clear
[22:38:37] [PASSED] drm_test_buddy_alloc_range_bias
[22:38:37] [PASSED] drm_test_buddy_fragmentation_performance
[22:38:37] ==================== [PASSED] drm_buddy ====================
[22:38:37] ============= drm_cmdline_parser (40 subtests) =============
[22:38:37] [PASSED] drm_test_cmdline_force_d_only
[22:38:37] [PASSED] drm_test_cmdline_force_D_only_dvi
[22:38:37] [PASSED] drm_test_cmdline_force_D_only_hdmi
[22:38:37] [PASSED] drm_test_cmdline_force_D_only_not_digital
[22:38:37] [PASSED] drm_test_cmdline_force_e_only
[22:38:37] [PASSED] drm_test_cmdline_res
[22:38:37] [PASSED] drm_test_cmdline_res_vesa
[22:38:37] [PASSED] drm_test_cmdline_res_vesa_rblank
[22:38:37] [PASSED] drm_test_cmdline_res_rblank
[22:38:37] [PASSED] drm_test_cmdline_res_bpp
[22:38:37] [PASSED] drm_test_cmdline_res_refresh
[22:38:37] [PASSED] drm_test_cmdline_res_bpp_refresh
[22:38:37] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced
[22:38:37] [PASSED] drm_test_cmdline_res_bpp_refresh_margins
[22:38:37] [PASSED] drm_test_cmdline_res_bpp_refresh_force_off
[22:38:37] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on
[22:38:37] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_analog
[22:38:37] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_digital
[22:38:37] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced_margins_force_on
[22:38:37] [PASSED] drm_test_cmdline_res_margins_force_on
[22:38:37] [PASSED] drm_test_cmdline_res_vesa_margins
[22:38:37] [PASSED] drm_test_cmdline_name
[22:38:37] [PASSED] drm_test_cmdline_name_bpp
[22:38:37] [PASSED] drm_test_cmdline_name_option
[22:38:37] [PASSED] drm_test_cmdline_name_bpp_option
[22:38:37] [PASSED] drm_test_cmdline_rotate_0
[22:38:37] [PASSED] drm_test_cmdline_rotate_90
[22:38:37] [PASSED] drm_test_cmdline_rotate_180
[22:38:37] [PASSED] drm_test_cmdline_rotate_270
[22:38:37] [PASSED] drm_test_cmdline_hmirror
[22:38:37] [PASSED] drm_test_cmdline_vmirror
[22:38:37] [PASSED] drm_test_cmdline_margin_options
[22:38:37] [PASSED] drm_test_cmdline_multiple_options
[22:38:37] [PASSED] drm_test_cmdline_bpp_extra_and_option
[22:38:37] [PASSED] drm_test_cmdline_extra_and_option
[22:38:37] [PASSED] drm_test_cmdline_freestanding_options
[22:38:37] [PASSED] drm_test_cmdline_freestanding_force_e_and_options
[22:38:37] [PASSED] drm_test_cmdline_panel_orientation
[22:38:37] ================ drm_test_cmdline_invalid =================
[22:38:37] [PASSED] margin_only
[22:38:37] [PASSED] interlace_only
[22:38:37] [PASSED] res_missing_x
[22:38:37] [PASSED] res_missing_y
[22:38:37] [PASSED] res_bad_y
[22:38:37] [PASSED] res_missing_y_bpp
[22:38:37] [PASSED] res_bad_bpp
[22:38:37] [PASSED] res_bad_refresh
[22:38:37] [PASSED] res_bpp_refresh_force_on_off
[22:38:37] [PASSED] res_invalid_mode
[22:38:37] [PASSED] res_bpp_wrong_place_mode
[22:38:37] [PASSED] name_bpp_refresh
[22:38:37] [PASSED] name_refresh
[22:38:37] [PASSED] name_refresh_wrong_mode
[22:38:37] [PASSED] name_refresh_invalid_mode
[22:38:37] [PASSED] rotate_multiple
[22:38:37] [PASSED] rotate_invalid_val
[22:38:37] [PASSED] rotate_truncated
[22:38:37] [PASSED] invalid_option
[22:38:37] [PASSED] invalid_tv_option
[22:38:37] [PASSED] truncated_tv_option
[22:38:37] ============ [PASSED] drm_test_cmdline_invalid =============
[22:38:37] =============== drm_test_cmdline_tv_options ===============
[22:38:37] [PASSED] NTSC
[22:38:37] [PASSED] NTSC_443
[22:38:37] [PASSED] NTSC_J
[22:38:37] [PASSED] PAL
[22:38:37] [PASSED] PAL_M
[22:38:37] [PASSED] PAL_N
[22:38:37] [PASSED] SECAM
[22:38:37] [PASSED] MONO_525
[22:38:37] [PASSED] MONO_625
[22:38:37] =========== [PASSED] drm_test_cmdline_tv_options ===========
[22:38:37] =============== [PASSED] drm_cmdline_parser ================
[22:38:37] ========== drmm_connector_hdmi_init (20 subtests) ==========
[22:38:37] [PASSED] drm_test_connector_hdmi_init_valid
[22:38:37] [PASSED] drm_test_connector_hdmi_init_bpc_8
[22:38:37] [PASSED] drm_test_connector_hdmi_init_bpc_10
[22:38:37] [PASSED] drm_test_connector_hdmi_init_bpc_12
[22:38:37] [PASSED] drm_test_connector_hdmi_init_bpc_invalid
[22:38:37] [PASSED] drm_test_connector_hdmi_init_bpc_null
[22:38:37] [PASSED] drm_test_connector_hdmi_init_formats_empty
[22:38:37] [PASSED] drm_test_connector_hdmi_init_formats_no_rgb
[22:38:37] === drm_test_connector_hdmi_init_formats_yuv420_allowed ===
[22:38:37] [PASSED] supported_formats=0x9 yuv420_allowed=1
[22:38:37] [PASSED] supported_formats=0x9 yuv420_allowed=0
[22:38:37] [PASSED] supported_formats=0x3 yuv420_allowed=1
[22:38:37] [PASSED] supported_formats=0x3 yuv420_allowed=0
[22:38:37] === [PASSED] drm_test_connector_hdmi_init_formats_yuv420_allowed ===
[22:38:37] [PASSED] drm_test_connector_hdmi_init_null_ddc
[22:38:37] [PASSED] drm_test_connector_hdmi_init_null_product
[22:38:37] [PASSED] drm_test_connector_hdmi_init_null_vendor
[22:38:37] [PASSED] drm_test_connector_hdmi_init_product_length_exact
[22:38:37] [PASSED] drm_test_connector_hdmi_init_product_length_too_long
[22:38:37] [PASSED] drm_test_connector_hdmi_init_product_valid
[22:38:37] [PASSED] drm_test_connector_hdmi_init_vendor_length_exact
[22:38:37] [PASSED] drm_test_connector_hdmi_init_vendor_length_too_long
[22:38:37] [PASSED] drm_test_connector_hdmi_init_vendor_valid
[22:38:37] ========= drm_test_connector_hdmi_init_type_valid =========
[22:38:37] [PASSED] HDMI-A
[22:38:37] [PASSED] HDMI-B
[22:38:37] ===== [PASSED] drm_test_connector_hdmi_init_type_valid =====
[22:38:37] ======== drm_test_connector_hdmi_init_type_invalid ========
[22:38:37] [PASSED] Unknown
[22:38:37] [PASSED] VGA
[22:38:37] [PASSED] DVI-I
[22:38:37] [PASSED] DVI-D
[22:38:37] [PASSED] DVI-A
[22:38:37] [PASSED] Composite
[22:38:37] [PASSED] SVIDEO
[22:38:37] [PASSED] LVDS
[22:38:37] [PASSED] Component
[22:38:37] [PASSED] DIN
[22:38:37] [PASSED] DP
[22:38:37] [PASSED] TV
[22:38:37] [PASSED] eDP
[22:38:37] [PASSED] Virtual
[22:38:37] [PASSED] DSI
[22:38:37] [PASSED] DPI
[22:38:37] [PASSED] Writeback
[22:38:37] [PASSED] SPI
[22:38:37] [PASSED] USB
[22:38:37] ==== [PASSED] drm_test_connector_hdmi_init_type_invalid ====
[22:38:37] ============ [PASSED] drmm_connector_hdmi_init =============
[22:38:37] ============= drmm_connector_init (3 subtests) =============
[22:38:37] [PASSED] drm_test_drmm_connector_init
[22:38:37] [PASSED] drm_test_drmm_connector_init_null_ddc
[22:38:37] ========= drm_test_drmm_connector_init_type_valid =========
[22:38:37] [PASSED] Unknown
[22:38:37] [PASSED] VGA
[22:38:37] [PASSED] DVI-I
[22:38:37] [PASSED] DVI-D
[22:38:37] [PASSED] DVI-A
[22:38:37] [PASSED] Composite
[22:38:37] [PASSED] SVIDEO
[22:38:37] [PASSED] LVDS
[22:38:37] [PASSED] Component
[22:38:37] [PASSED] DIN
[22:38:37] [PASSED] DP
[22:38:37] [PASSED] HDMI-A
[22:38:37] [PASSED] HDMI-B
[22:38:37] [PASSED] TV
[22:38:37] [PASSED] eDP
[22:38:37] [PASSED] Virtual
[22:38:37] [PASSED] DSI
[22:38:37] [PASSED] DPI
[22:38:37] [PASSED] Writeback
[22:38:37] [PASSED] SPI
[22:38:37] [PASSED] USB
[22:38:37] ===== [PASSED] drm_test_drmm_connector_init_type_valid =====
[22:38:37] =============== [PASSED] drmm_connector_init ===============
[22:38:37] ========= drm_connector_dynamic_init (6 subtests) ==========
[22:38:37] [PASSED] drm_test_drm_connector_dynamic_init
[22:38:37] [PASSED] drm_test_drm_connector_dynamic_init_null_ddc
[22:38:37] [PASSED] drm_test_drm_connector_dynamic_init_not_added
[22:38:37] [PASSED] drm_test_drm_connector_dynamic_init_properties
[22:38:37] ===== drm_test_drm_connector_dynamic_init_type_valid ======
[22:38:37] [PASSED] Unknown
[22:38:37] [PASSED] VGA
[22:38:37] [PASSED] DVI-I
[22:38:37] [PASSED] DVI-D
[22:38:37] [PASSED] DVI-A
[22:38:37] [PASSED] Composite
[22:38:37] [PASSED] SVIDEO
[22:38:37] [PASSED] LVDS
[22:38:37] [PASSED] Component
[22:38:37] [PASSED] DIN
[22:38:37] [PASSED] DP
[22:38:37] [PASSED] HDMI-A
[22:38:37] [PASSED] HDMI-B
[22:38:37] [PASSED] TV
[22:38:37] [PASSED] eDP
[22:38:37] [PASSED] Virtual
[22:38:37] [PASSED] DSI
[22:38:37] [PASSED] DPI
[22:38:37] [PASSED] Writeback
[22:38:37] [PASSED] SPI
[22:38:37] [PASSED] USB
[22:38:37] = [PASSED] drm_test_drm_connector_dynamic_init_type_valid ==
[22:38:37] ======== drm_test_drm_connector_dynamic_init_name =========
[22:38:37] [PASSED] Unknown
[22:38:37] [PASSED] VGA
[22:38:37] [PASSED] DVI-I
[22:38:37] [PASSED] DVI-D
[22:38:37] [PASSED] DVI-A
[22:38:37] [PASSED] Composite
[22:38:37] [PASSED] SVIDEO
[22:38:37] [PASSED] LVDS
[22:38:37] [PASSED] Component
[22:38:37] [PASSED] DIN
[22:38:37] [PASSED] DP
[22:38:37] [PASSED] HDMI-A
[22:38:37] [PASSED] HDMI-B
[22:38:37] [PASSED] TV
[22:38:37] [PASSED] eDP
[22:38:37] [PASSED] Virtual
[22:38:37] [PASSED] DSI
[22:38:37] [PASSED] DPI
[22:38:37] [PASSED] Writeback
[22:38:37] [PASSED] SPI
[22:38:37] [PASSED] USB
[22:38:37] ==== [PASSED] drm_test_drm_connector_dynamic_init_name =====
[22:38:37] =========== [PASSED] drm_connector_dynamic_init ============
[22:38:37] ==== drm_connector_dynamic_register_early (4 subtests) =====
[22:38:37] [PASSED] drm_test_drm_connector_dynamic_register_early_on_list
[22:38:37] [PASSED] drm_test_drm_connector_dynamic_register_early_defer
[22:38:37] [PASSED] drm_test_drm_connector_dynamic_register_early_no_init
[22:38:37] [PASSED] drm_test_drm_connector_dynamic_register_early_no_mode_object
[22:38:37] ====== [PASSED] drm_connector_dynamic_register_early =======
[22:38:37] ======= drm_connector_dynamic_register (7 subtests) ========
[22:38:37] [PASSED] drm_test_drm_connector_dynamic_register_on_list
[22:38:37] [PASSED] drm_test_drm_connector_dynamic_register_no_defer
[22:38:37] [PASSED] drm_test_drm_connector_dynamic_register_no_init
[22:38:37] [PASSED] drm_test_drm_connector_dynamic_register_mode_object
[22:38:37] [PASSED] drm_test_drm_connector_dynamic_register_sysfs
[22:38:37] [PASSED] drm_test_drm_connector_dynamic_register_sysfs_name
[22:38:37] [PASSED] drm_test_drm_connector_dynamic_register_debugfs
[22:38:37] ========= [PASSED] drm_connector_dynamic_register ==========
[22:38:37] = drm_connector_attach_broadcast_rgb_property (2 subtests) =
[22:38:37] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property
[22:38:37] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property_hdmi_connector
[22:38:37] === [PASSED] drm_connector_attach_broadcast_rgb_property ===
[22:38:37] ========== drm_get_tv_mode_from_name (2 subtests) ==========
[22:38:37] ========== drm_test_get_tv_mode_from_name_valid ===========
[22:38:37] [PASSED] NTSC
[22:38:37] [PASSED] NTSC-443
[22:38:37] [PASSED] NTSC-J
[22:38:37] [PASSED] PAL
[22:38:37] [PASSED] PAL-M
[22:38:37] [PASSED] PAL-N
[22:38:37] [PASSED] SECAM
[22:38:37] [PASSED] Mono
[22:38:37] ====== [PASSED] drm_test_get_tv_mode_from_name_valid =======
[22:38:37] [PASSED] drm_test_get_tv_mode_from_name_truncated
[22:38:37] ============ [PASSED] drm_get_tv_mode_from_name ============
[22:38:37] = drm_test_connector_hdmi_compute_mode_clock (12 subtests) =
[22:38:37] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb
[22:38:37] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc
[22:38:37] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc_vic_1
[22:38:37] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc
[22:38:37] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc_vic_1
[22:38:37] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_double
[22:38:37] = drm_test_connector_hdmi_compute_mode_clock_yuv420_valid =
[22:38:37] [PASSED] VIC 96
[22:38:37] [PASSED] VIC 97
[22:38:37] [PASSED] VIC 101
[22:38:37] [PASSED] VIC 102
[22:38:37] [PASSED] VIC 106
[22:38:37] [PASSED] VIC 107
[22:38:37] === [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_valid ===
[22:38:37] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_10_bpc
[22:38:37] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_12_bpc
[22:38:37] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_8_bpc
[22:38:37] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_10_bpc
[22:38:37] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_12_bpc
[22:38:37] === [PASSED] drm_test_connector_hdmi_compute_mode_clock ====
[22:38:37] == drm_hdmi_connector_get_broadcast_rgb_name (2 subtests) ==
[22:38:37] === drm_test_drm_hdmi_connector_get_broadcast_rgb_name ====
[22:38:37] [PASSED] Automatic
[22:38:37] [PASSED] Full
[22:38:37] [PASSED] Limited 16:235
[22:38:37] === [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name ===
[22:38:37] [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name_invalid
[22:38:37] ==== [PASSED] drm_hdmi_connector_get_broadcast_rgb_name ====
[22:38:37] == drm_hdmi_connector_get_output_format_name (2 subtests) ==
[22:38:37] === drm_test_drm_hdmi_connector_get_output_format_name ====
[22:38:37] [PASSED] RGB
[22:38:37] [PASSED] YUV 4:2:0
[22:38:37] [PASSED] YUV 4:2:2
[22:38:37] [PASSED] YUV 4:4:4
[22:38:37] === [PASSED] drm_test_drm_hdmi_connector_get_output_format_name ===
[22:38:37] [PASSED] drm_test_drm_hdmi_connector_get_output_format_name_invalid
[22:38:37] ==== [PASSED] drm_hdmi_connector_get_output_format_name ====
[22:38:37] ============= drm_damage_helper (21 subtests) ==============
[22:38:37] [PASSED] drm_test_damage_iter_no_damage
[22:38:37] [PASSED] drm_test_damage_iter_no_damage_fractional_src
[22:38:37] [PASSED] drm_test_damage_iter_no_damage_src_moved
[22:38:37] [PASSED] drm_test_damage_iter_no_damage_fractional_src_moved
[22:38:37] [PASSED] drm_test_damage_iter_no_damage_not_visible
[22:38:37] [PASSED] drm_test_damage_iter_no_damage_no_crtc
[22:38:37] [PASSED] drm_test_damage_iter_no_damage_no_fb
[22:38:37] [PASSED] drm_test_damage_iter_simple_damage
[22:38:37] [PASSED] drm_test_damage_iter_single_damage
[22:38:37] [PASSED] drm_test_damage_iter_single_damage_intersect_src
[22:38:37] [PASSED] drm_test_damage_iter_single_damage_outside_src
[22:38:37] [PASSED] drm_test_damage_iter_single_damage_fractional_src
[22:38:37] [PASSED] drm_test_damage_iter_single_damage_intersect_fractional_src
[22:38:37] [PASSED] drm_test_damage_iter_single_damage_outside_fractional_src
[22:38:37] [PASSED] drm_test_damage_iter_single_damage_src_moved
[22:38:37] [PASSED] drm_test_damage_iter_single_damage_fractional_src_moved
[22:38:37] [PASSED] drm_test_damage_iter_damage
[22:38:37] [PASSED] drm_test_damage_iter_damage_one_intersect
[22:38:37] [PASSED] drm_test_damage_iter_damage_one_outside
[22:38:37] [PASSED] drm_test_damage_iter_damage_src_moved
[22:38:37] [PASSED] drm_test_damage_iter_damage_not_visible
[22:38:37] ================ [PASSED] drm_damage_helper ================
[22:38:37] ============== drm_dp_mst_helper (3 subtests) ==============
[22:38:37] ============== drm_test_dp_mst_calc_pbn_mode ==============
[22:38:37] [PASSED] Clock 154000 BPP 30 DSC disabled
[22:38:37] [PASSED] Clock 234000 BPP 30 DSC disabled
[22:38:37] [PASSED] Clock 297000 BPP 24 DSC disabled
[22:38:37] [PASSED] Clock 332880 BPP 24 DSC enabled
[22:38:37] [PASSED] Clock 324540 BPP 24 DSC enabled
[22:38:37] ========== [PASSED] drm_test_dp_mst_calc_pbn_mode ==========
[22:38:37] ============== drm_test_dp_mst_calc_pbn_div ===============
[22:38:37] [PASSED] Link rate 2000000 lane count 4
[22:38:37] [PASSED] Link rate 2000000 lane count 2
[22:38:37] [PASSED] Link rate 2000000 lane count 1
[22:38:37] [PASSED] Link rate 1350000 lane count 4
[22:38:37] [PASSED] Link rate 1350000 lane count 2
[22:38:37] [PASSED] Link rate 1350000 lane count 1
[22:38:37] [PASSED] Link rate 1000000 lane count 4
[22:38:37] [PASSED] Link rate 1000000 lane count 2
[22:38:37] [PASSED] Link rate 1000000 lane count 1
[22:38:37] [PASSED] Link rate 810000 lane count 4
[22:38:37] [PASSED] Link rate 810000 lane count 2
[22:38:37] [PASSED] Link rate 810000 lane count 1
[22:38:37] [PASSED] Link rate 540000 lane count 4
[22:38:37] [PASSED] Link rate 540000 lane count 2
[22:38:37] [PASSED] Link rate 540000 lane count 1
[22:38:37] [PASSED] Link rate 270000 lane count 4
[22:38:37] [PASSED] Link rate 270000 lane count 2
[22:38:37] [PASSED] Link rate 270000 lane count 1
[22:38:37] [PASSED] Link rate 162000 lane count 4
[22:38:37] [PASSED] Link rate 162000 lane count 2
[22:38:37] [PASSED] Link rate 162000 lane count 1
[22:38:37] ========== [PASSED] drm_test_dp_mst_calc_pbn_div ===========
[22:38:37] ========= drm_test_dp_mst_sideband_msg_req_decode =========
[22:38:37] [PASSED] DP_ENUM_PATH_RESOURCES with port number
[22:38:37] [PASSED] DP_POWER_UP_PHY with port number
[22:38:37] [PASSED] DP_POWER_DOWN_PHY with port number
[22:38:37] [PASSED] DP_ALLOCATE_PAYLOAD with SDP stream sinks
[22:38:37] [PASSED] DP_ALLOCATE_PAYLOAD with port number
[22:38:37] [PASSED] DP_ALLOCATE_PAYLOAD with VCPI
[22:38:37] [PASSED] DP_ALLOCATE_PAYLOAD with PBN
[22:38:37] [PASSED] DP_QUERY_PAYLOAD with port number
[22:38:37] [PASSED] DP_QUERY_PAYLOAD with VCPI
[22:38:37] [PASSED] DP_REMOTE_DPCD_READ with port number
[22:38:37] [PASSED] DP_REMOTE_DPCD_READ with DPCD address
[22:38:37] [PASSED] DP_REMOTE_DPCD_READ with max number of bytes
[22:38:37] [PASSED] DP_REMOTE_DPCD_WRITE with port number
[22:38:37] [PASSED] DP_REMOTE_DPCD_WRITE with DPCD address
[22:38:37] [PASSED] DP_REMOTE_DPCD_WRITE with data array
[22:38:37] [PASSED] DP_REMOTE_I2C_READ with port number
[22:38:37] [PASSED] DP_REMOTE_I2C_READ with I2C device ID
[22:38:37] [PASSED] DP_REMOTE_I2C_READ with transactions array
[22:38:37] [PASSED] DP_REMOTE_I2C_WRITE with port number
[22:38:37] [PASSED] DP_REMOTE_I2C_WRITE with I2C device ID
[22:38:37] [PASSED] DP_REMOTE_I2C_WRITE with data array
[22:38:37] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream ID
[22:38:37] [PASSED] DP_QUERY_STREAM_ENC_STATUS with client ID
[22:38:37] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream event
[22:38:37] [PASSED] DP_QUERY_STREAM_ENC_STATUS with valid stream event
[22:38:37] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream behavior
[22:38:37] [PASSED] DP_QUERY_STREAM_ENC_STATUS with a valid stream behavior
[22:38:37] ===== [PASSED] drm_test_dp_mst_sideband_msg_req_decode =====
[22:38:37] ================ [PASSED] drm_dp_mst_helper ================
[22:38:37] ================== drm_exec (7 subtests) ===================
[22:38:37] [PASSED] sanitycheck
[22:38:37] [PASSED] test_lock
[22:38:37] [PASSED] test_lock_unlock
[22:38:37] [PASSED] test_duplicates
[22:38:37] [PASSED] test_prepare
[22:38:37] [PASSED] test_prepare_array
[22:38:37] [PASSED] test_multiple_loops
[22:38:37] ==================== [PASSED] drm_exec =====================
[22:38:37] =========== drm_format_helper_test (17 subtests) ===========
[22:38:37] ============== drm_test_fb_xrgb8888_to_gray8 ==============
[22:38:37] [PASSED] single_pixel_source_buffer
[22:38:37] [PASSED] single_pixel_clip_rectangle
[22:38:37] [PASSED] well_known_colors
[22:38:37] [PASSED] destination_pitch
[22:38:37] ========== [PASSED] drm_test_fb_xrgb8888_to_gray8 ==========
[22:38:37] ============= drm_test_fb_xrgb8888_to_rgb332 ==============
[22:38:37] [PASSED] single_pixel_source_buffer
[22:38:37] [PASSED] single_pixel_clip_rectangle
[22:38:37] [PASSED] well_known_colors
[22:38:37] [PASSED] destination_pitch
[22:38:37] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb332 ==========
[22:38:37] ============= drm_test_fb_xrgb8888_to_rgb565 ==============
[22:38:37] [PASSED] single_pixel_source_buffer
[22:38:37] [PASSED] single_pixel_clip_rectangle
[22:38:37] [PASSED] well_known_colors
[22:38:37] [PASSED] destination_pitch
[22:38:37] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb565 ==========
[22:38:37] ============ drm_test_fb_xrgb8888_to_xrgb1555 =============
[22:38:37] [PASSED] single_pixel_source_buffer
[22:38:37] [PASSED] single_pixel_clip_rectangle
[22:38:37] [PASSED] well_known_colors
[22:38:37] [PASSED] destination_pitch
[22:38:37] ======== [PASSED] drm_test_fb_xrgb8888_to_xrgb1555 =========
[22:38:37] ============ drm_test_fb_xrgb8888_to_argb1555 =============
[22:38:37] [PASSED] single_pixel_source_buffer
[22:38:37] [PASSED] single_pixel_clip_rectangle
[22:38:37] [PASSED] well_known_colors
[22:38:37] [PASSED] destination_pitch
[22:38:37] ======== [PASSED] drm_test_fb_xrgb8888_to_argb1555 =========
[22:38:37] ============ drm_test_fb_xrgb8888_to_rgba5551 =============
[22:38:37] [PASSED] single_pixel_source_buffer
[22:38:37] [PASSED] single_pixel_clip_rectangle
[22:38:37] [PASSED] well_known_colors
[22:38:37] [PASSED] destination_pitch
[22:38:37] ======== [PASSED] drm_test_fb_xrgb8888_to_rgba5551 =========
[22:38:37] ============= drm_test_fb_xrgb8888_to_rgb888 ==============
[22:38:37] [PASSED] single_pixel_source_buffer
[22:38:37] [PASSED] single_pixel_clip_rectangle
[22:38:37] [PASSED] well_known_colors
[22:38:37] [PASSED] destination_pitch
[22:38:37] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb888 ==========
[22:38:37] ============= drm_test_fb_xrgb8888_to_bgr888 ==============
[22:38:37] [PASSED] single_pixel_source_buffer
[22:38:37] [PASSED] single_pixel_clip_rectangle
[22:38:37] [PASSED] well_known_colors
[22:38:37] [PASSED] destination_pitch
[22:38:37] ========= [PASSED] drm_test_fb_xrgb8888_to_bgr888 ==========
[22:38:37] ============ drm_test_fb_xrgb8888_to_argb8888 =============
[22:38:37] [PASSED] single_pixel_source_buffer
[22:38:37] [PASSED] single_pixel_clip_rectangle
[22:38:37] [PASSED] well_known_colors
[22:38:37] [PASSED] destination_pitch
[22:38:37] ======== [PASSED] drm_test_fb_xrgb8888_to_argb8888 =========
[22:38:37] =========== drm_test_fb_xrgb8888_to_xrgb2101010 ===========
[22:38:37] [PASSED] single_pixel_source_buffer
[22:38:37] [PASSED] single_pixel_clip_rectangle
[22:38:37] [PASSED] well_known_colors
[22:38:37] [PASSED] destination_pitch
[22:38:37] ======= [PASSED] drm_test_fb_xrgb8888_to_xrgb2101010 =======
[22:38:37] =========== drm_test_fb_xrgb8888_to_argb2101010 ===========
[22:38:37] [PASSED] single_pixel_source_buffer
[22:38:37] [PASSED] single_pixel_clip_rectangle
[22:38:37] [PASSED] well_known_colors
[22:38:37] [PASSED] destination_pitch
[22:38:37] ======= [PASSED] drm_test_fb_xrgb8888_to_argb2101010 =======
[22:38:37] ============== drm_test_fb_xrgb8888_to_mono ===============
[22:38:37] [PASSED] single_pixel_source_buffer
[22:38:37] [PASSED] single_pixel_clip_rectangle
[22:38:37] [PASSED] well_known_colors
[22:38:37] [PASSED] destination_pitch
[22:38:37] ========== [PASSED] drm_test_fb_xrgb8888_to_mono ===========
[22:38:37] ==================== drm_test_fb_swab =====================
[22:38:37] [PASSED] single_pixel_source_buffer
[22:38:37] [PASSED] single_pixel_clip_rectangle
[22:38:37] [PASSED] well_known_colors
[22:38:37] [PASSED] destination_pitch
[22:38:37] ================ [PASSED] drm_test_fb_swab =================
[22:38:37] ============ drm_test_fb_xrgb8888_to_xbgr8888 =============
[22:38:37] [PASSED] single_pixel_source_buffer
[22:38:37] [PASSED] single_pixel_clip_rectangle
[22:38:37] [PASSED] well_known_colors
[22:38:37] [PASSED] destination_pitch
[22:38:37] ======== [PASSED] drm_test_fb_xrgb8888_to_xbgr8888 =========
[22:38:37] ============ drm_test_fb_xrgb8888_to_abgr8888 =============
[22:38:37] [PASSED] single_pixel_source_buffer
[22:38:37] [PASSED] single_pixel_clip_rectangle
[22:38:37] [PASSED] well_known_colors
[22:38:37] [PASSED] destination_pitch
[22:38:37] ======== [PASSED] drm_test_fb_xrgb8888_to_abgr8888 =========
[22:38:37] ================= drm_test_fb_clip_offset =================
[22:38:37] [PASSED] pass through
[22:38:37] [PASSED] horizontal offset
[22:38:37] [PASSED] vertical offset
[22:38:37] [PASSED] horizontal and vertical offset
[22:38:37] [PASSED] horizontal offset (custom pitch)
[22:38:37] [PASSED] vertical offset (custom pitch)
[22:38:37] [PASSED] horizontal and vertical offset (custom pitch)
[22:38:37] ============= [PASSED] drm_test_fb_clip_offset =============
[22:38:37] =================== drm_test_fb_memcpy ====================
[22:38:37] [PASSED] single_pixel_source_buffer: XR24 little-endian (0x34325258)
[22:38:37] [PASSED] single_pixel_source_buffer: XRA8 little-endian (0x38415258)
[22:38:37] [PASSED] single_pixel_source_buffer: YU24 little-endian (0x34325559)
[22:38:37] [PASSED] single_pixel_clip_rectangle: XB24 little-endian (0x34324258)
[22:38:37] [PASSED] single_pixel_clip_rectangle: XRA8 little-endian (0x38415258)
[22:38:37] [PASSED] single_pixel_clip_rectangle: YU24 little-endian (0x34325559)
[22:38:37] [PASSED] well_known_colors: XB24 little-endian (0x34324258)
[22:38:37] [PASSED] well_known_colors: XRA8 little-endian (0x38415258)
[22:38:37] [PASSED] well_known_colors: YU24 little-endian (0x34325559)
[22:38:37] [PASSED] destination_pitch: XB24 little-endian (0x34324258)
[22:38:37] [PASSED] destination_pitch: XRA8 little-endian (0x38415258)
[22:38:37] [PASSED] destination_pitch: YU24 little-endian (0x34325559)
[22:38:37] =============== [PASSED] drm_test_fb_memcpy ================
[22:38:37] ============= [PASSED] drm_format_helper_test ==============
[22:38:37] ================= drm_format (18 subtests) =================
[22:38:37] [PASSED] drm_test_format_block_width_invalid
[22:38:37] [PASSED] drm_test_format_block_width_one_plane
[22:38:37] [PASSED] drm_test_format_block_width_two_plane
[22:38:37] [PASSED] drm_test_format_block_width_three_plane
[22:38:37] [PASSED] drm_test_format_block_width_tiled
[22:38:37] [PASSED] drm_test_format_block_height_invalid
[22:38:37] [PASSED] drm_test_format_block_height_one_plane
[22:38:37] [PASSED] drm_test_format_block_height_two_plane
[22:38:37] [PASSED] drm_test_format_block_height_three_plane
[22:38:37] [PASSED] drm_test_format_block_height_tiled
[22:38:37] [PASSED] drm_test_format_min_pitch_invalid
[22:38:37] [PASSED] drm_test_format_min_pitch_one_plane_8bpp
[22:38:37] [PASSED] drm_test_format_min_pitch_one_plane_16bpp
[22:38:37] [PASSED] drm_test_format_min_pitch_one_plane_24bpp
[22:38:37] [PASSED] drm_test_format_min_pitch_one_plane_32bpp
[22:38:37] [PASSED] drm_test_format_min_pitch_two_plane
[22:38:37] [PASSED] drm_test_format_min_pitch_three_plane_8bpp
[22:38:37] [PASSED] drm_test_format_min_pitch_tiled
[22:38:37] =================== [PASSED] drm_format ====================
[22:38:37] ============== drm_framebuffer (10 subtests) ===============
[22:38:37] ========== drm_test_framebuffer_check_src_coords ==========
[22:38:37] [PASSED] Success: source fits into fb
[22:38:37] [PASSED] Fail: overflowing fb with x-axis coordinate
[22:38:37] [PASSED] Fail: overflowing fb with y-axis coordinate
[22:38:37] [PASSED] Fail: overflowing fb with source width
[22:38:37] [PASSED] Fail: overflowing fb with source height
[22:38:37] ====== [PASSED] drm_test_framebuffer_check_src_coords ======
[22:38:37] [PASSED] drm_test_framebuffer_cleanup
[22:38:37] =============== drm_test_framebuffer_create ===============
[22:38:37] [PASSED] ABGR8888 normal sizes
[22:38:37] [PASSED] ABGR8888 max sizes
[22:38:37] [PASSED] ABGR8888 pitch greater than min required
[22:38:37] [PASSED] ABGR8888 pitch less than min required
[22:38:37] [PASSED] ABGR8888 Invalid width
[22:38:37] [PASSED] ABGR8888 Invalid buffer handle
[22:38:37] [PASSED] No pixel format
[22:38:37] [PASSED] ABGR8888 Width 0
[22:38:37] [PASSED] ABGR8888 Height 0
[22:38:37] [PASSED] ABGR8888 Out of bound height * pitch combination
[22:38:37] [PASSED] ABGR8888 Large buffer offset
[22:38:37] [PASSED] ABGR8888 Buffer offset for inexistent plane
[22:38:37] [PASSED] ABGR8888 Invalid flag
[22:38:37] [PASSED] ABGR8888 Set DRM_MODE_FB_MODIFIERS without modifiers
[22:38:37] [PASSED] ABGR8888 Valid buffer modifier
[22:38:37] [PASSED] ABGR8888 Invalid buffer modifier(DRM_FORMAT_MOD_SAMSUNG_64_32_TILE)
[22:38:37] [PASSED] ABGR8888 Extra pitches without DRM_MODE_FB_MODIFIERS
[22:38:37] [PASSED] ABGR8888 Extra pitches with DRM_MODE_FB_MODIFIERS
[22:38:37] [PASSED] NV12 Normal sizes
[22:38:37] [PASSED] NV12 Max sizes
[22:38:37] [PASSED] NV12 Invalid pitch
[22:38:37] [PASSED] NV12 Invalid modifier/missing DRM_MODE_FB_MODIFIERS flag
[22:38:37] [PASSED] NV12 different modifier per-plane
[22:38:37] [PASSED] NV12 with DRM_FORMAT_MOD_SAMSUNG_64_32_TILE
[22:38:37] [PASSED] NV12 Valid modifiers without DRM_MODE_FB_MODIFIERS
[22:38:37] [PASSED] NV12 Modifier for inexistent plane
[22:38:37] [PASSED] NV12 Handle for inexistent plane
[22:38:37] [PASSED] NV12 Handle for inexistent plane without DRM_MODE_FB_MODIFIERS
[22:38:37] [PASSED] YVU420 DRM_MODE_FB_MODIFIERS set without modifier
[22:38:37] [PASSED] YVU420 Normal sizes
[22:38:37] [PASSED] YVU420 Max sizes
[22:38:37] [PASSED] YVU420 Invalid pitch
[22:38:37] [PASSED] YVU420 Different pitches
[22:38:37] [PASSED] YVU420 Different buffer offsets/pitches
[22:38:37] [PASSED] YVU420 Modifier set just for plane 0, without DRM_MODE_FB_MODIFIERS
[22:38:37] [PASSED] YVU420 Modifier set just for planes 0, 1, without DRM_MODE_FB_MODIFIERS
[22:38:37] [PASSED] YVU420 Modifier set just for plane 0, 1, with DRM_MODE_FB_MODIFIERS
[22:38:37] [PASSED] YVU420 Valid modifier
[22:38:37] [PASSED] YVU420 Different modifiers per plane
[22:38:37] [PASSED] YVU420 Modifier for inexistent plane
[22:38:37] [PASSED] YUV420_10BIT Invalid modifier(DRM_FORMAT_MOD_LINEAR)
[22:38:37] [PASSED] X0L2 Normal sizes
[22:38:37] [PASSED] X0L2 Max sizes
[22:38:37] [PASSED] X0L2 Invalid pitch
[22:38:37] [PASSED] X0L2 Pitch greater than minimum required
[22:38:37] [PASSED] X0L2 Handle for inexistent plane
[22:38:37] [PASSED] X0L2 Offset for inexistent plane, without DRM_MODE_FB_MODIFIERS set
[22:38:37] [PASSED] X0L2 Modifier without DRM_MODE_FB_MODIFIERS set
[22:38:37] [PASSED] X0L2 Valid modifier
[22:38:37] [PASSED] X0L2 Modifier for inexistent plane
[22:38:37] =========== [PASSED] drm_test_framebuffer_create ===========
[22:38:37] [PASSED] drm_test_framebuffer_free
[22:38:37] [PASSED] drm_test_framebuffer_init
[22:38:37] [PASSED] drm_test_framebuffer_init_bad_format
[22:38:37] [PASSED] drm_test_framebuffer_init_dev_mismatch
[22:38:37] [PASSED] drm_test_framebuffer_lookup
[22:38:37] [PASSED] drm_test_framebuffer_lookup_inexistent
[22:38:37] [PASSED] drm_test_framebuffer_modifiers_not_supported
[22:38:37] ================= [PASSED] drm_framebuffer =================
[22:38:37] ================ drm_gem_shmem (8 subtests) ================
[22:38:37] [PASSED] drm_gem_shmem_test_obj_create
[22:38:37] [PASSED] drm_gem_shmem_test_obj_create_private
[22:38:37] [PASSED] drm_gem_shmem_test_pin_pages
[22:38:37] [PASSED] drm_gem_shmem_test_vmap
[22:38:37] [PASSED] drm_gem_shmem_test_get_pages_sgt
[22:38:37] [PASSED] drm_gem_shmem_test_get_sg_table
[22:38:37] [PASSED] drm_gem_shmem_test_madvise
[22:38:37] [PASSED] drm_gem_shmem_test_purge
[22:38:37] ================== [PASSED] drm_gem_shmem ==================
[22:38:37] === drm_atomic_helper_connector_hdmi_check (27 subtests) ===
[22:38:37] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode
[22:38:37] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode_vic_1
[22:38:37] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode
[22:38:37] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode_vic_1
[22:38:37] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode
[22:38:37] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode_vic_1
[22:38:37] ====== drm_test_check_broadcast_rgb_cea_mode_yuv420 =======
[22:38:37] [PASSED] Automatic
[22:38:37] [PASSED] Full
[22:38:37] [PASSED] Limited 16:235
[22:38:37] == [PASSED] drm_test_check_broadcast_rgb_cea_mode_yuv420 ===
[22:38:37] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_changed
[22:38:37] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_not_changed
[22:38:37] [PASSED] drm_test_check_disable_connector
[22:38:37] [PASSED] drm_test_check_hdmi_funcs_reject_rate
[22:38:37] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_rgb
[22:38:37] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_yuv420
[22:38:37] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv422
[22:38:37] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv420
[22:38:37] [PASSED] drm_test_check_driver_unsupported_fallback_yuv420
[22:38:37] [PASSED] drm_test_check_output_bpc_crtc_mode_changed
[22:38:37] [PASSED] drm_test_check_output_bpc_crtc_mode_not_changed
[22:38:37] [PASSED] drm_test_check_output_bpc_dvi
[22:38:37] [PASSED] drm_test_check_output_bpc_format_vic_1
[22:38:37] [PASSED] drm_test_check_output_bpc_format_display_8bpc_only
[22:38:37] [PASSED] drm_test_check_output_bpc_format_display_rgb_only
[22:38:37] [PASSED] drm_test_check_output_bpc_format_driver_8bpc_only
[22:38:37] [PASSED] drm_test_check_output_bpc_format_driver_rgb_only
[22:38:37] [PASSED] drm_test_check_tmds_char_rate_rgb_8bpc
[22:38:37] [PASSED] drm_test_check_tmds_char_rate_rgb_10bpc
[22:38:37] [PASSED] drm_test_check_tmds_char_rate_rgb_12bpc
[22:38:37] ===== [PASSED] drm_atomic_helper_connector_hdmi_check ======
[22:38:37] === drm_atomic_helper_connector_hdmi_reset (6 subtests) ====
[22:38:37] [PASSED] drm_test_check_broadcast_rgb_value
[22:38:37] [PASSED] drm_test_check_bpc_8_value
[22:38:37] [PASSED] drm_test_check_bpc_10_value
[22:38:37] [PASSED] drm_test_check_bpc_12_value
[22:38:37] [PASSED] drm_test_check_format_value
[22:38:37] [PASSED] drm_test_check_tmds_char_value
[22:38:37] ===== [PASSED] drm_atomic_helper_connector_hdmi_reset ======
[22:38:37] = drm_atomic_helper_connector_hdmi_mode_valid (4 subtests) =
[22:38:37] [PASSED] drm_test_check_mode_valid
[22:38:37] [PASSED] drm_test_check_mode_valid_reject
[22:38:37] [PASSED] drm_test_check_mode_valid_reject_rate
[22:38:37] [PASSED] drm_test_check_mode_valid_reject_max_clock
[22:38:37] === [PASSED] drm_atomic_helper_connector_hdmi_mode_valid ===
[22:38:37] ================= drm_managed (2 subtests) =================
[22:38:37] [PASSED] drm_test_managed_release_action
[22:38:37] [PASSED] drm_test_managed_run_action
[22:38:37] =================== [PASSED] drm_managed ===================
[22:38:37] =================== drm_mm (6 subtests) ====================
[22:38:37] [PASSED] drm_test_mm_init
[22:38:37] [PASSED] drm_test_mm_debug
[22:38:37] [PASSED] drm_test_mm_align32
[22:38:37] [PASSED] drm_test_mm_align64
[22:38:37] [PASSED] drm_test_mm_lowest
[22:38:37] [PASSED] drm_test_mm_highest
[22:38:37] ===================== [PASSED] drm_mm ======================
[22:38:37] ============= drm_modes_analog_tv (5 subtests) =============
[22:38:37] [PASSED] drm_test_modes_analog_tv_mono_576i
[22:38:37] [PASSED] drm_test_modes_analog_tv_ntsc_480i
[22:38:37] [PASSED] drm_test_modes_analog_tv_ntsc_480i_inlined
[22:38:37] [PASSED] drm_test_modes_analog_tv_pal_576i
[22:38:37] [PASSED] drm_test_modes_analog_tv_pal_576i_inlined
[22:38:37] =============== [PASSED] drm_modes_analog_tv ===============
[22:38:37] ============== drm_plane_helper (2 subtests) ===============
[22:38:37] =============== drm_test_check_plane_state ================
[22:38:37] [PASSED] clipping_simple
[22:38:37] [PASSED] clipping_rotate_reflect
[22:38:37] [PASSED] positioning_simple
[22:38:37] [PASSED] upscaling
[22:38:37] [PASSED] downscaling
[22:38:37] [PASSED] rounding1
[22:38:37] [PASSED] rounding2
[22:38:37] [PASSED] rounding3
[22:38:37] [PASSED] rounding4
[22:38:37] =========== [PASSED] drm_test_check_plane_state ============
[22:38:37] =========== drm_test_check_invalid_plane_state ============
[22:38:37] [PASSED] positioning_invalid
[22:38:37] [PASSED] upscaling_invalid
[22:38:37] [PASSED] downscaling_invalid
[22:38:37] ======= [PASSED] drm_test_check_invalid_plane_state ========
[22:38:37] ================ [PASSED] drm_plane_helper =================
[22:38:37] ====== drm_connector_helper_tv_get_modes (1 subtest) =======
[22:38:37] ====== drm_test_connector_helper_tv_get_modes_check =======
[22:38:37] [PASSED] None
[22:38:37] [PASSED] PAL
[22:38:37] [PASSED] NTSC
[22:38:37] [PASSED] Both, NTSC Default
[22:38:37] [PASSED] Both, PAL Default
[22:38:37] [PASSED] Both, NTSC Default, with PAL on command-line
[22:38:37] [PASSED] Both, PAL Default, with NTSC on command-line
[22:38:37] == [PASSED] drm_test_connector_helper_tv_get_modes_check ===
[22:38:37] ======== [PASSED] drm_connector_helper_tv_get_modes ========
[22:38:37] ================== drm_rect (9 subtests) ===================
[22:38:37] [PASSED] drm_test_rect_clip_scaled_div_by_zero
[22:38:37] [PASSED] drm_test_rect_clip_scaled_not_clipped
[22:38:37] [PASSED] drm_test_rect_clip_scaled_clipped
[22:38:37] [PASSED] drm_test_rect_clip_scaled_signed_vs_unsigned
[22:38:37] ================= drm_test_rect_intersect =================
[22:38:37] [PASSED] top-left x bottom-right: 2x2+1+1 x 2x2+0+0
[22:38:37] [PASSED] top-right x bottom-left: 2x2+0+0 x 2x2+1-1
[22:38:37] [PASSED] bottom-left x top-right: 2x2+1-1 x 2x2+0+0
[22:38:37] [PASSED] bottom-right x top-left: 2x2+0+0 x 2x2+1+1
[22:38:37] [PASSED] right x left: 2x1+0+0 x 3x1+1+0
[22:38:37] [PASSED] left x right: 3x1+1+0 x 2x1+0+0
[22:38:37] [PASSED] up x bottom: 1x2+0+0 x 1x3+0-1
[22:38:37] [PASSED] bottom x up: 1x3+0-1 x 1x2+0+0
[22:38:37] [PASSED] touching corner: 1x1+0+0 x 2x2+1+1
[22:38:37] [PASSED] touching side: 1x1+0+0 x 1x1+1+0
[22:38:37] [PASSED] equal rects: 2x2+0+0 x 2x2+0+0
[22:38:37] [PASSED] inside another: 2x2+0+0 x 1x1+1+1
[22:38:37] [PASSED] far away: 1x1+0+0 x 1x1+3+6
[22:38:37] [PASSED] points intersecting: 0x0+5+10 x 0x0+5+10
[22:38:37] [PASSED] points not intersecting: 0x0+0+0 x 0x0+5+10
[22:38:37] ============= [PASSED] drm_test_rect_intersect =============
[22:38:37] ================ drm_test_rect_calc_hscale ================
[22:38:37] [PASSED] normal use
[22:38:37] [PASSED] out of max range
[22:38:37] [PASSED] out of min range
[22:38:37] [PASSED] zero dst
[22:38:37] [PASSED] negative src
[22:38:37] [PASSED] negative dst
[22:38:37] ============ [PASSED] drm_test_rect_calc_hscale ============
[22:38:37] ================ drm_test_rect_calc_vscale ================
[22:38:37] [PASSED] normal use
[22:38:37] [PASSED] out of max range
[22:38:37] [PASSED] out of min range
[22:38:37] [PASSED] zero dst
[22:38:37] [PASSED] negative src
[22:38:37] [PASSED] negative dst
[22:38:37] ============ [PASSED] drm_test_rect_calc_vscale ============
[22:38:37] ================== drm_test_rect_rotate ===================
[22:38:37] [PASSED] reflect-x
[22:38:37] [PASSED] reflect-y
[22:38:37] [PASSED] rotate-0
[22:38:37] [PASSED] rotate-90
[22:38:37] [PASSED] rotate-180
[22:38:37] [PASSED] rotate-270
[22:38:37] ============== [PASSED] drm_test_rect_rotate ===============
[22:38:37] ================ drm_test_rect_rotate_inv =================
[22:38:37] [PASSED] reflect-x
[22:38:37] [PASSED] reflect-y
[22:38:37] [PASSED] rotate-0
[22:38:37] [PASSED] rotate-90
[22:38:37] [PASSED] rotate-180
[22:38:37] [PASSED] rotate-270
[22:38:37] ============ [PASSED] drm_test_rect_rotate_inv =============
[22:38:37] ==================== [PASSED] drm_rect =====================
[22:38:37] ============ drm_sysfb_modeset_test (1 subtest) ============
[22:38:37] ============ drm_test_sysfb_build_fourcc_list =============
[22:38:37] [PASSED] no native formats
[22:38:37] [PASSED] XRGB8888 as native format
[22:38:37] [PASSED] remove duplicates
[22:38:37] [PASSED] convert alpha formats
[22:38:37] [PASSED] random formats
[22:38:37] ======== [PASSED] drm_test_sysfb_build_fourcc_list =========
[22:38:37] ============= [PASSED] drm_sysfb_modeset_test ==============
[22:38:37] ================== drm_fixp (2 subtests) ===================
[22:38:37] [PASSED] drm_test_int2fixp
[22:38:37] [PASSED] drm_test_sm2fixp
[22:38:37] ==================== [PASSED] drm_fixp =====================
[22:38:37] ============================================================
[22:38:37] Testing complete. Ran 624 tests: passed: 624
[22:38:37] Elapsed time: 31.842s total, 1.656s configuring, 29.668s building, 0.470s running
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/ttm/tests/.kunitconfig
[22:38:38] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[22:38:39] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=25
[22:38:48] Starting KUnit Kernel (1/1)...
[22:38:48] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[22:38:48] ================= ttm_device (5 subtests) ==================
[22:38:48] [PASSED] ttm_device_init_basic
[22:38:48] [PASSED] ttm_device_init_multiple
[22:38:48] [PASSED] ttm_device_fini_basic
[22:38:48] [PASSED] ttm_device_init_no_vma_man
[22:38:48] ================== ttm_device_init_pools ==================
[22:38:48] [PASSED] No DMA allocations, no DMA32 required
[22:38:48] [PASSED] DMA allocations, DMA32 required
[22:38:48] [PASSED] No DMA allocations, DMA32 required
[22:38:48] [PASSED] DMA allocations, no DMA32 required
[22:38:48] ============== [PASSED] ttm_device_init_pools ==============
[22:38:48] =================== [PASSED] ttm_device ====================
[22:38:48] ================== ttm_pool (8 subtests) ===================
[22:38:48] ================== ttm_pool_alloc_basic ===================
[22:38:48] [PASSED] One page
[22:38:48] [PASSED] More than one page
[22:38:48] [PASSED] Above the allocation limit
[22:38:48] [PASSED] One page, with coherent DMA mappings enabled
[22:38:48] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[22:38:48] ============== [PASSED] ttm_pool_alloc_basic ===============
[22:38:48] ============== ttm_pool_alloc_basic_dma_addr ==============
[22:38:48] [PASSED] One page
[22:38:48] [PASSED] More than one page
[22:38:48] [PASSED] Above the allocation limit
[22:38:48] [PASSED] One page, with coherent DMA mappings enabled
[22:38:48] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[22:38:48] ========== [PASSED] ttm_pool_alloc_basic_dma_addr ==========
[22:38:48] [PASSED] ttm_pool_alloc_order_caching_match
[22:38:48] [PASSED] ttm_pool_alloc_caching_mismatch
[22:38:48] [PASSED] ttm_pool_alloc_order_mismatch
[22:38:48] [PASSED] ttm_pool_free_dma_alloc
[22:38:48] [PASSED] ttm_pool_free_no_dma_alloc
[22:38:48] [PASSED] ttm_pool_fini_basic
[22:38:48] ==================== [PASSED] ttm_pool =====================
[22:38:48] ================ ttm_resource (8 subtests) =================
[22:38:48] ================= ttm_resource_init_basic =================
[22:38:48] [PASSED] Init resource in TTM_PL_SYSTEM
[22:38:48] [PASSED] Init resource in TTM_PL_VRAM
[22:38:48] [PASSED] Init resource in a private placement
[22:38:48] [PASSED] Init resource in TTM_PL_SYSTEM, set placement flags
[22:38:48] ============= [PASSED] ttm_resource_init_basic =============
[22:38:48] [PASSED] ttm_resource_init_pinned
[22:38:48] [PASSED] ttm_resource_fini_basic
[22:38:48] [PASSED] ttm_resource_manager_init_basic
[22:38:48] [PASSED] ttm_resource_manager_usage_basic
[22:38:48] [PASSED] ttm_resource_manager_set_used_basic
[22:38:48] [PASSED] ttm_sys_man_alloc_basic
[22:38:48] [PASSED] ttm_sys_man_free_basic
[22:38:48] ================== [PASSED] ttm_resource ===================
[22:38:48] =================== ttm_tt (15 subtests) ===================
[22:38:48] ==================== ttm_tt_init_basic ====================
[22:38:48] [PASSED] Page-aligned size
[22:38:48] [PASSED] Extra pages requested
[22:38:48] ================ [PASSED] ttm_tt_init_basic ================
[22:38:48] [PASSED] ttm_tt_init_misaligned
[22:38:48] [PASSED] ttm_tt_fini_basic
[22:38:48] [PASSED] ttm_tt_fini_sg
[22:38:48] [PASSED] ttm_tt_fini_shmem
[22:38:48] [PASSED] ttm_tt_create_basic
[22:38:48] [PASSED] ttm_tt_create_invalid_bo_type
[22:38:48] [PASSED] ttm_tt_create_ttm_exists
[22:38:48] [PASSED] ttm_tt_create_failed
[22:38:48] [PASSED] ttm_tt_destroy_basic
[22:38:48] [PASSED] ttm_tt_populate_null_ttm
[22:38:48] [PASSED] ttm_tt_populate_populated_ttm
[22:38:48] [PASSED] ttm_tt_unpopulate_basic
[22:38:48] [PASSED] ttm_tt_unpopulate_empty_ttm
[22:38:48] [PASSED] ttm_tt_swapin_basic
[22:38:48] ===================== [PASSED] ttm_tt ======================
[22:38:48] =================== ttm_bo (14 subtests) ===================
[22:38:48] =========== ttm_bo_reserve_optimistic_no_ticket ===========
[22:38:48] [PASSED] Cannot be interrupted and sleeps
[22:38:48] [PASSED] Cannot be interrupted, locks straight away
[22:38:48] [PASSED] Can be interrupted, sleeps
[22:38:48] ======= [PASSED] ttm_bo_reserve_optimistic_no_ticket =======
[22:38:48] [PASSED] ttm_bo_reserve_locked_no_sleep
[22:38:48] [PASSED] ttm_bo_reserve_no_wait_ticket
[22:38:48] [PASSED] ttm_bo_reserve_double_resv
[22:38:48] [PASSED] ttm_bo_reserve_interrupted
[22:38:48] [PASSED] ttm_bo_reserve_deadlock
[22:38:49] [PASSED] ttm_bo_unreserve_basic
[22:38:49] [PASSED] ttm_bo_unreserve_pinned
[22:38:49] [PASSED] ttm_bo_unreserve_bulk
[22:38:49] [PASSED] ttm_bo_fini_basic
[22:38:49] [PASSED] ttm_bo_fini_shared_resv
[22:38:49] [PASSED] ttm_bo_pin_basic
[22:38:49] [PASSED] ttm_bo_pin_unpin_resource
[22:38:49] [PASSED] ttm_bo_multiple_pin_one_unpin
[22:38:49] ===================== [PASSED] ttm_bo ======================
[22:38:49] ============== ttm_bo_validate (21 subtests) ===============
[22:38:49] ============== ttm_bo_init_reserved_sys_man ===============
[22:38:49] [PASSED] Buffer object for userspace
[22:38:49] [PASSED] Kernel buffer object
[22:38:49] [PASSED] Shared buffer object
[22:38:49] ========== [PASSED] ttm_bo_init_reserved_sys_man ===========
[22:38:49] ============== ttm_bo_init_reserved_mock_man ==============
[22:38:49] [PASSED] Buffer object for userspace
[22:38:49] [PASSED] Kernel buffer object
[22:38:49] [PASSED] Shared buffer object
[22:38:49] ========== [PASSED] ttm_bo_init_reserved_mock_man ==========
[22:38:49] [PASSED] ttm_bo_init_reserved_resv
[22:38:49] ================== ttm_bo_validate_basic ==================
[22:38:49] [PASSED] Buffer object for userspace
[22:38:49] [PASSED] Kernel buffer object
[22:38:49] [PASSED] Shared buffer object
[22:38:49] ============== [PASSED] ttm_bo_validate_basic ==============
[22:38:49] [PASSED] ttm_bo_validate_invalid_placement
[22:38:49] ============= ttm_bo_validate_same_placement ==============
[22:38:49] [PASSED] System manager
[22:38:49] [PASSED] VRAM manager
[22:38:49] ========= [PASSED] ttm_bo_validate_same_placement ==========
[22:38:49] [PASSED] ttm_bo_validate_failed_alloc
[22:38:49] [PASSED] ttm_bo_validate_pinned
[22:38:49] [PASSED] ttm_bo_validate_busy_placement
[22:38:49] ================ ttm_bo_validate_multihop =================
[22:38:49] [PASSED] Buffer object for userspace
[22:38:49] [PASSED] Kernel buffer object
[22:38:49] [PASSED] Shared buffer object
[22:38:49] ============ [PASSED] ttm_bo_validate_multihop =============
[22:38:49] ========== ttm_bo_validate_no_placement_signaled ==========
[22:38:49] [PASSED] Buffer object in system domain, no page vector
[22:38:49] [PASSED] Buffer object in system domain with an existing page vector
[22:38:49] ====== [PASSED] ttm_bo_validate_no_placement_signaled ======
[22:38:49] ======== ttm_bo_validate_no_placement_not_signaled ========
[22:38:49] [PASSED] Buffer object for userspace
[22:38:49] [PASSED] Kernel buffer object
[22:38:49] [PASSED] Shared buffer object
[22:38:49] ==== [PASSED] ttm_bo_validate_no_placement_not_signaled ====
[22:38:49] [PASSED] ttm_bo_validate_move_fence_signaled
[22:38:49] ========= ttm_bo_validate_move_fence_not_signaled =========
[22:38:49] [PASSED] Waits for GPU
[22:38:49] [PASSED] Tries to lock straight away
[22:38:49] ===== [PASSED] ttm_bo_validate_move_fence_not_signaled =====
[22:38:49] [PASSED] ttm_bo_validate_happy_evict
[22:38:49] [PASSED] ttm_bo_validate_all_pinned_evict
[22:38:49] [PASSED] ttm_bo_validate_allowed_only_evict
[22:38:49] [PASSED] ttm_bo_validate_deleted_evict
[22:38:49] [PASSED] ttm_bo_validate_busy_domain_evict
[22:38:49] [PASSED] ttm_bo_validate_evict_gutting
[22:38:49] [PASSED] ttm_bo_validate_recrusive_evict
[22:38:49] ================= [PASSED] ttm_bo_validate =================
[22:38:49] ============================================================
[22:38:49] Testing complete. Ran 101 tests: passed: 101
[22:38:49] Elapsed time: 11.085s total, 1.649s configuring, 9.169s building, 0.226s running
+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel
^ permalink raw reply	[flat|nested] 24+ messages in thread

* ✓ Xe.CI.BAT: success for Fix performance when pagefaults and 3d/display share resources (rev2)
2025-12-12 18:28 [PATCH v2 0/7] Fix performance when pagefaults and 3d/display share resources Matthew Brost
` (8 preceding siblings ...)
2025-12-12 22:38 ` ✓ CI.KUnit: success " Patchwork
@ 2025-12-12 23:33 ` Patchwork
2025-12-13 19:27 ` ✗ Xe.CI.Full: failure " Patchwork
10 siblings, 0 replies; 24+ messages in thread
From: Patchwork @ 2025-12-12 23:33 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe
[-- Attachment #1: Type: text/plain, Size: 1527 bytes --]
== Series Details ==
Series: Fix performance when pagefaults and 3d/display share resources (rev2)
URL : https://patchwork.freedesktop.org/series/158833/
State : success
== Summary ==
CI Bug Log - changes from xe-4234-90eba5e4087d6932c174f97637833862c9f9ec25_BAT -> xe-pw-158833v2_BAT
====================================================
Summary
-------
**SUCCESS**
No regressions found.
Participating hosts (12 -> 12)
------------------------------
No changes in participating hosts
Known issues
------------
Here are the changes found in xe-pw-158833v2_BAT that come from known issues:
### IGT changes ###
#### Issues hit ####
* igt@xe_waitfence@abstime:
- bat-dg2-oem2: [PASS][1] -> [TIMEOUT][2] ([Intel XE#6506])
[1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4234-90eba5e4087d6932c174f97637833862c9f9ec25/bat-dg2-oem2/igt@xe_waitfence@abstime.html
[2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/bat-dg2-oem2/igt@xe_waitfence@abstime.html
[Intel XE#6506]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6506
Build changes
-------------
* Linux: xe-4234-90eba5e4087d6932c174f97637833862c9f9ec25 -> xe-pw-158833v2
IGT_8665: 1806ab9c982ccaaa9d60cdde16bc1dc3bb250654 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
xe-4234-90eba5e4087d6932c174f97637833862c9f9ec25: 90eba5e4087d6932c174f97637833862c9f9ec25
xe-pw-158833v2: 158833v2
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/index.html
[-- Attachment #2: Type: text/html, Size: 2092 bytes --]
^ permalink raw reply	[flat|nested] 24+ messages in thread

* ✗ Xe.CI.Full: failure for Fix performance when pagefaults and 3d/display share resources (rev2)
2025-12-12 18:28 [PATCH v2 0/7] Fix performance when pagefaults and 3d/display share resources Matthew Brost
` (9 preceding siblings ...)
2025-12-12 23:33 ` ✓ Xe.CI.BAT: " Patchwork
@ 2025-12-13 19:27 ` Patchwork
10 siblings, 0 replies; 24+ messages in thread
From: Patchwork @ 2025-12-13 19:27 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe
[-- Attachment #1: Type: text/plain, Size: 30146 bytes --]
== Series Details ==
Series: Fix performance when pagefaults and 3d/display share resources (rev2)
URL : https://patchwork.freedesktop.org/series/158833/
State : failure
== Summary ==
CI Bug Log - changes from xe-4234-90eba5e4087d6932c174f97637833862c9f9ec25_FULL -> xe-pw-158833v2_FULL
====================================================
Summary
-------
**FAILURE**
Serious unknown changes coming with xe-pw-158833v2_FULL absolutely need to be
verified manually.
If you think the reported changes have nothing to do with the changes
introduced in xe-pw-158833v2_FULL, please notify your bug team (I915-ci-infra@lists.freedesktop.org) to allow them
to document this new failure mode, which will reduce false positives in CI.
Participating hosts (2 -> 2)
------------------------------
No changes in participating hosts
Possible new issues
-------------------
Here are the unknown changes that may have been introduced in xe-pw-158833v2_FULL:
### IGT changes ###
#### Possible regressions ####
* igt@kms_plane_multiple@2x-tiling-none@pipe-c-hdmi-a-3-pipe-b-dp-2:
- shard-bmg: [PASS][1] -> [INCOMPLETE][2]
[1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4234-90eba5e4087d6932c174f97637833862c9f9ec25/shard-bmg-8/igt@kms_plane_multiple@2x-tiling-none@pipe-c-hdmi-a-3-pipe-b-dp-2.html
[2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-2/igt@kms_plane_multiple@2x-tiling-none@pipe-c-hdmi-a-3-pipe-b-dp-2.html
* igt@xe_exec_multi_queue@two-queues-priority:
- shard-bmg: NOTRUN -> [SKIP][3] +35 other tests skip
[3]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-8/igt@xe_exec_multi_queue@two-queues-priority.html
Known issues
------------
Here are the changes found in xe-pw-158833v2_FULL that come from known issues:
### IGT changes ###
#### Issues hit ####
* igt@kms_addfb_basic@addfb25-y-tiled-small-legacy:
- shard-bmg: NOTRUN -> [SKIP][4] ([Intel XE#2233])
[4]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-8/igt@kms_addfb_basic@addfb25-y-tiled-small-legacy.html
* igt@kms_big_fb@linear-32bpp-rotate-90:
- shard-bmg: NOTRUN -> [SKIP][5] ([Intel XE#2327]) +3 other tests skip
[5]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-5/igt@kms_big_fb@linear-32bpp-rotate-90.html
* igt@kms_big_fb@y-tiled-addfb-size-offset-overflow:
- shard-bmg: NOTRUN -> [SKIP][6] ([Intel XE#607])
[6]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-5/igt@kms_big_fb@y-tiled-addfb-size-offset-overflow.html
* igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-180-hflip-async-flip:
- shard-bmg: NOTRUN -> [SKIP][7] ([Intel XE#1124]) +12 other tests skip
[7]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-5/igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-180-hflip-async-flip.html
* igt@kms_bw@connected-linear-tiling-4-displays-2160x1440p:
- shard-bmg: NOTRUN -> [SKIP][8] ([Intel XE#2314] / [Intel XE#2894])
[8]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-8/igt@kms_bw@connected-linear-tiling-4-displays-2160x1440p.html
* igt@kms_bw@linear-tiling-2-displays-2560x1440p:
- shard-bmg: NOTRUN -> [SKIP][9] ([Intel XE#367]) +2 other tests skip
[9]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-4/igt@kms_bw@linear-tiling-2-displays-2560x1440p.html
* igt@kms_ccs@crc-primary-basic-4-tiled-mtl-mc-ccs:
- shard-bmg: NOTRUN -> [SKIP][10] ([Intel XE#2887]) +16 other tests skip
[10]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-5/igt@kms_ccs@crc-primary-basic-4-tiled-mtl-mc-ccs.html
* igt@kms_ccs@crc-primary-suspend-4-tiled-mtl-rc-ccs-cc:
- shard-bmg: NOTRUN -> [SKIP][11] ([Intel XE#3432])
[11]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-5/igt@kms_ccs@crc-primary-suspend-4-tiled-mtl-rc-ccs-cc.html
* igt@kms_chamelium_color@degamma:
- shard-bmg: NOTRUN -> [SKIP][12] ([Intel XE#2325])
[12]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-8/igt@kms_chamelium_color@degamma.html
* igt@kms_chamelium_hpd@common-hpd-after-suspend:
- shard-bmg: NOTRUN -> [SKIP][13] ([Intel XE#2252]) +8 other tests skip
[13]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-5/igt@kms_chamelium_hpd@common-hpd-after-suspend.html
* igt@kms_chamelium_sharpness_filter@filter-basic:
- shard-bmg: NOTRUN -> [SKIP][14] ([Intel XE#6507])
[14]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-8/igt@kms_chamelium_sharpness_filter@filter-basic.html
* igt@kms_content_protection@type1:
- shard-bmg: NOTRUN -> [SKIP][15] ([Intel XE#2341])
[15]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-4/igt@kms_content_protection@type1.html
* igt@kms_cursor_crc@cursor-offscreen-256x85:
- shard-bmg: NOTRUN -> [SKIP][16] ([Intel XE#2320]) +3 other tests skip
[16]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-5/igt@kms_cursor_crc@cursor-offscreen-256x85.html
* igt@kms_cursor_crc@cursor-onscreen-512x512:
- shard-bmg: NOTRUN -> [SKIP][17] ([Intel XE#2321]) +1 other test skip
[17]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-8/igt@kms_cursor_crc@cursor-onscreen-512x512.html
* igt@kms_cursor_legacy@flip-vs-cursor-legacy:
- shard-bmg: [PASS][18] -> [FAIL][19] ([Intel XE#5299])
[18]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4234-90eba5e4087d6932c174f97637833862c9f9ec25/shard-bmg-4/igt@kms_cursor_legacy@flip-vs-cursor-legacy.html
[19]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-1/igt@kms_cursor_legacy@flip-vs-cursor-legacy.html
* igt@kms_cursor_legacy@short-busy-flip-before-cursor-atomic-transitions:
- shard-bmg: NOTRUN -> [SKIP][20] ([Intel XE#2286])
[20]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-8/igt@kms_cursor_legacy@short-busy-flip-before-cursor-atomic-transitions.html
* igt@kms_dsc@dsc-with-bpc-formats:
- shard-bmg: NOTRUN -> [SKIP][21] ([Intel XE#2244])
[21]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-8/igt@kms_dsc@dsc-with-bpc-formats.html
* igt@kms_fbc_dirty_rect@fbc-dirty-rectangle-different-formats:
- shard-bmg: NOTRUN -> [SKIP][22] ([Intel XE#4422])
[22]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-5/igt@kms_fbc_dirty_rect@fbc-dirty-rectangle-different-formats.html
* igt@kms_fbcon_fbt@psr-suspend:
- shard-bmg: NOTRUN -> [SKIP][23] ([Intel XE#776])
[23]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-5/igt@kms_fbcon_fbt@psr-suspend.html
* igt@kms_flip@flip-vs-suspend-interruptible:
- shard-bmg: NOTRUN -> [INCOMPLETE][24] ([Intel XE#2049] / [Intel XE#2597]) +1 other test incomplete
[24]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-3/igt@kms_flip@flip-vs-suspend-interruptible.html
* igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-16bpp-ytile-downscaling:
- shard-bmg: NOTRUN -> [SKIP][25] ([Intel XE#2293] / [Intel XE#2380]) +4 other tests skip
[25]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-5/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-16bpp-ytile-downscaling.html
* igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-16bpp-ytile-downscaling@pipe-a-valid-mode:
- shard-bmg: NOTRUN -> [SKIP][26] ([Intel XE#2293]) +4 other tests skip
[26]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-5/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-16bpp-ytile-downscaling@pipe-a-valid-mode.html
* igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-spr-indfb-draw-blt:
- shard-bmg: NOTRUN -> [SKIP][27] ([Intel XE#2311]) +31 other tests skip
[27]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-5/igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-spr-indfb-draw-blt.html
* igt@kms_frontbuffer_tracking@fbc-1p-offscreen-pri-indfb-draw-render:
- shard-bmg: NOTRUN -> [SKIP][28] ([Intel XE#4141]) +19 other tests skip
[28]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-5/igt@kms_frontbuffer_tracking@fbc-1p-offscreen-pri-indfb-draw-render.html
* igt@kms_frontbuffer_tracking@psr-2p-primscrn-cur-indfb-draw-mmap-wc:
- shard-bmg: NOTRUN -> [SKIP][29] ([Intel XE#2313]) +22 other tests skip
[29]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-8/igt@kms_frontbuffer_tracking@psr-2p-primscrn-cur-indfb-draw-mmap-wc.html
* igt@kms_joiner@basic-big-joiner:
- shard-bmg: NOTRUN -> [SKIP][30] ([Intel XE#346] / [Intel XE#6590])
[30]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-3/igt@kms_joiner@basic-big-joiner.html
* igt@kms_panel_fitting@atomic-fastset:
- shard-bmg: NOTRUN -> [SKIP][31] ([Intel XE#2486])
[31]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-5/igt@kms_panel_fitting@atomic-fastset.html
* igt@kms_pipe_stress@stress-xrgb8888-ytiled:
- shard-bmg: NOTRUN -> [SKIP][32] ([Intel XE#4329])
[32]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-4/igt@kms_pipe_stress@stress-xrgb8888-ytiled.html
* igt@kms_plane_multiple@2x-tiling-none:
- shard-bmg: [PASS][33] -> [INCOMPLETE][34] ([Intel XE#5175])
[33]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4234-90eba5e4087d6932c174f97637833862c9f9ec25/shard-bmg-8/igt@kms_plane_multiple@2x-tiling-none.html
[34]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-2/igt@kms_plane_multiple@2x-tiling-none.html
* igt@kms_plane_scaling@planes-upscale-20x20-downscale-factor-0-5@pipe-b:
- shard-bmg: NOTRUN -> [SKIP][35] ([Intel XE#5825]) +4 other tests skip
[35]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-5/igt@kms_plane_scaling@planes-upscale-20x20-downscale-factor-0-5@pipe-b.html
* igt@kms_pm_backlight@basic-brightness:
- shard-bmg: NOTRUN -> [SKIP][36] ([Intel XE#870])
[36]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-5/igt@kms_pm_backlight@basic-brightness.html
* igt@kms_psr2_sf@psr2-cursor-plane-move-continuous-exceed-fully-sf:
- shard-bmg: NOTRUN -> [SKIP][37] ([Intel XE#1406] / [Intel XE#1489]) +6 other tests skip
[37]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-5/igt@kms_psr2_sf@psr2-cursor-plane-move-continuous-exceed-fully-sf.html
* igt@kms_psr2_su@page_flip-p010:
- shard-bmg: NOTRUN -> [SKIP][38] ([Intel XE#1406] / [Intel XE#2387])
[38]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-8/igt@kms_psr2_su@page_flip-p010.html
* igt@kms_psr@psr2-no-drrs:
- shard-bmg: NOTRUN -> [SKIP][39] ([Intel XE#1406] / [Intel XE#2234] / [Intel XE#2850]) +14 other tests skip
[39]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-8/igt@kms_psr@psr2-no-drrs.html
* igt@kms_rotation_crc@primary-rotation-270:
- shard-bmg: NOTRUN -> [SKIP][40] ([Intel XE#3414] / [Intel XE#3904]) +1 other test skip
[40]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-4/igt@kms_rotation_crc@primary-rotation-270.html
* igt@kms_rotation_crc@primary-yf-tiled-reflect-x-180:
- shard-bmg: NOTRUN -> [SKIP][41] ([Intel XE#2330])
[41]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-5/igt@kms_rotation_crc@primary-yf-tiled-reflect-x-180.html
* igt@kms_scaling_modes@scaling-mode-full:
- shard-bmg: NOTRUN -> [SKIP][42] ([Intel XE#2413])
[42]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-8/igt@kms_scaling_modes@scaling-mode-full.html
* igt@kms_sharpness_filter@filter-suspend:
- shard-bmg: NOTRUN -> [SKIP][43] ([Intel XE#6503]) +2 other tests skip
[43]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-4/igt@kms_sharpness_filter@filter-suspend.html
* igt@kms_tiled_display@basic-test-pattern-with-chamelium:
- shard-bmg: NOTRUN -> [SKIP][44] ([Intel XE#2426])
[44]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-8/igt@kms_tiled_display@basic-test-pattern-with-chamelium.html
* igt@kms_vrr@cmrr:
- shard-bmg: NOTRUN -> [SKIP][45] ([Intel XE#2168])
[45]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-5/igt@kms_vrr@cmrr.html
* igt@kms_vrr@flip-suspend:
- shard-bmg: NOTRUN -> [SKIP][46] ([Intel XE#1499]) +1 other test skip
[46]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-8/igt@kms_vrr@flip-suspend.html
* igt@kms_vrr@seamless-rr-switch-virtual@pipe-a-edp-1:
- shard-lnl: [PASS][47] -> [FAIL][48] ([Intel XE#2142]) +1 other test fail
[47]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4234-90eba5e4087d6932c174f97637833862c9f9ec25/shard-lnl-5/igt@kms_vrr@seamless-rr-switch-virtual@pipe-a-edp-1.html
[48]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-lnl-5/igt@kms_vrr@seamless-rr-switch-virtual@pipe-a-edp-1.html
* igt@xe_eudebug@basic-vm-bind-metadata-discovery:
- shard-bmg: NOTRUN -> [SKIP][49] ([Intel XE#4837]) +8 other tests skip
[49]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-4/igt@xe_eudebug@basic-vm-bind-metadata-discovery.html
* igt@xe_eudebug_online@set-breakpoint-sigint-debugger:
- shard-bmg: NOTRUN -> [SKIP][50] ([Intel XE#4837] / [Intel XE#6665]) +7 other tests skip
[50]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-5/igt@xe_eudebug_online@set-breakpoint-sigint-debugger.html
* igt@xe_exec_basic@multigpu-many-execqueues-many-vm-bindexecqueue:
- shard-bmg: NOTRUN -> [SKIP][51] ([Intel XE#2322]) +9 other tests skip
[51]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-5/igt@xe_exec_basic@multigpu-many-execqueues-many-vm-bindexecqueue.html
* igt@xe_exec_system_allocator@many-execqueues-mmap-huge-nomemset:
- shard-bmg: NOTRUN -> [SKIP][52] ([Intel XE#4943]) +37 other tests skip
[52]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-5/igt@xe_exec_system_allocator@many-execqueues-mmap-huge-nomemset.html
* igt@xe_exec_system_allocator@threads-shared-vm-many-large-execqueues-mmap-prefetch:
- shard-bmg: NOTRUN -> [INCOMPLETE][53] ([Intel XE#6480])
[53]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-5/igt@xe_exec_system_allocator@threads-shared-vm-many-large-execqueues-mmap-prefetch.html
* igt@xe_module_load@force-load:
- shard-bmg: NOTRUN -> [SKIP][54] ([Intel XE#2457])
[54]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-5/igt@xe_module_load@force-load.html
* igt@xe_oa@oa-tlb-invalidate:
- shard-bmg: NOTRUN -> [SKIP][55] ([Intel XE#2248])
[55]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-8/igt@xe_oa@oa-tlb-invalidate.html
* igt@xe_pat@pat-index-xelp:
- shard-bmg: NOTRUN -> [SKIP][56] ([Intel XE#2245])
[56]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-5/igt@xe_pat@pat-index-xelp.html
* igt@xe_pm@d3cold-mocs:
- shard-bmg: NOTRUN -> [SKIP][57] ([Intel XE#2284]) +1 other test skip
[57]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-5/igt@xe_pm@d3cold-mocs.html
* igt@xe_pm@d3hot-i2c:
- shard-bmg: NOTRUN -> [SKIP][58] ([Intel XE#5742])
[58]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-8/igt@xe_pm@d3hot-i2c.html
* igt@xe_pmu@engine-activity-accuracy-50:
- shard-lnl: [PASS][59] -> [FAIL][60] ([Intel XE#6251]) +3 other tests fail
[59]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4234-90eba5e4087d6932c174f97637833862c9f9ec25/shard-lnl-5/igt@xe_pmu@engine-activity-accuracy-50.html
[60]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-lnl-8/igt@xe_pmu@engine-activity-accuracy-50.html
* igt@xe_pxp@pxp-termination-key-update-post-suspend:
- shard-bmg: NOTRUN -> [SKIP][61] ([Intel XE#4733]) +4 other tests skip
[61]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-8/igt@xe_pxp@pxp-termination-key-update-post-suspend.html
* igt@xe_query@multigpu-query-hwconfig:
- shard-bmg: NOTRUN -> [SKIP][62] ([Intel XE#944]) +1 other test skip
[62]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-4/igt@xe_query@multigpu-query-hwconfig.html
#### Possible fixes ####
* igt@kms_async_flips@alternate-sync-async-flip-atomic:
- shard-bmg: [FAIL][63] ([Intel XE#3718] / [Intel XE#6078]) -> [PASS][64] +1 other test pass
[63]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4234-90eba5e4087d6932c174f97637833862c9f9ec25/shard-bmg-4/igt@kms_async_flips@alternate-sync-async-flip-atomic.html
[64]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-4/igt@kms_async_flips@alternate-sync-async-flip-atomic.html
* igt@kms_flip@flip-vs-suspend:
- shard-bmg: [INCOMPLETE][65] ([Intel XE#2049] / [Intel XE#2597]) -> [PASS][66] +1 other test pass
[65]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4234-90eba5e4087d6932c174f97637833862c9f9ec25/shard-bmg-2/igt@kms_flip@flip-vs-suspend.html
[66]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-5/igt@kms_flip@flip-vs-suspend.html
* igt@xe_evict@evict-beng-mixed-many-threads-small:
- shard-bmg: [INCOMPLETE][67] ([Intel XE#6321]) -> [PASS][68]
[67]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4234-90eba5e4087d6932c174f97637833862c9f9ec25/shard-bmg-2/igt@xe_evict@evict-beng-mixed-many-threads-small.html
[68]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-2/igt@xe_evict@evict-beng-mixed-many-threads-small.html
* igt@xe_module_load@load:
- shard-bmg: ([PASS][69], [SKIP][70], [PASS][71], [PASS][72], [PASS][73], [PASS][74], [PASS][75], [PASS][76], [PASS][77], [PASS][78], [PASS][79], [PASS][80], [PASS][81], [PASS][82], [PASS][83], [PASS][84], [PASS][85], [PASS][86], [PASS][87], [PASS][88]) ([Intel XE#2457]) -> ([PASS][89], [PASS][90], [PASS][91], [PASS][92], [PASS][93], [PASS][94], [PASS][95], [PASS][96], [PASS][97], [PASS][98], [PASS][99], [PASS][100], [PASS][101], [PASS][102], [PASS][103], [PASS][104], [PASS][105], [PASS][106], [PASS][107], [PASS][108])
[69]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4234-90eba5e4087d6932c174f97637833862c9f9ec25/shard-bmg-1/igt@xe_module_load@load.html
[70]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4234-90eba5e4087d6932c174f97637833862c9f9ec25/shard-bmg-2/igt@xe_module_load@load.html
[71]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4234-90eba5e4087d6932c174f97637833862c9f9ec25/shard-bmg-5/igt@xe_module_load@load.html
[72]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4234-90eba5e4087d6932c174f97637833862c9f9ec25/shard-bmg-5/igt@xe_module_load@load.html
[73]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4234-90eba5e4087d6932c174f97637833862c9f9ec25/shard-bmg-4/igt@xe_module_load@load.html
[74]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4234-90eba5e4087d6932c174f97637833862c9f9ec25/shard-bmg-2/igt@xe_module_load@load.html
[75]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4234-90eba5e4087d6932c174f97637833862c9f9ec25/shard-bmg-4/igt@xe_module_load@load.html
[76]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4234-90eba5e4087d6932c174f97637833862c9f9ec25/shard-bmg-5/igt@xe_module_load@load.html
[77]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4234-90eba5e4087d6932c174f97637833862c9f9ec25/shard-bmg-4/igt@xe_module_load@load.html
[78]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4234-90eba5e4087d6932c174f97637833862c9f9ec25/shard-bmg-2/igt@xe_module_load@load.html
[79]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4234-90eba5e4087d6932c174f97637833862c9f9ec25/shard-bmg-2/igt@xe_module_load@load.html
[80]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4234-90eba5e4087d6932c174f97637833862c9f9ec25/shard-bmg-8/igt@xe_module_load@load.html
[81]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4234-90eba5e4087d6932c174f97637833862c9f9ec25/shard-bmg-8/igt@xe_module_load@load.html
[82]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4234-90eba5e4087d6932c174f97637833862c9f9ec25/shard-bmg-3/igt@xe_module_load@load.html
[83]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4234-90eba5e4087d6932c174f97637833862c9f9ec25/shard-bmg-3/igt@xe_module_load@load.html
[84]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4234-90eba5e4087d6932c174f97637833862c9f9ec25/shard-bmg-3/igt@xe_module_load@load.html
[85]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4234-90eba5e4087d6932c174f97637833862c9f9ec25/shard-bmg-1/igt@xe_module_load@load.html
[86]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4234-90eba5e4087d6932c174f97637833862c9f9ec25/shard-bmg-8/igt@xe_module_load@load.html
[87]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4234-90eba5e4087d6932c174f97637833862c9f9ec25/shard-bmg-5/igt@xe_module_load@load.html
[88]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4234-90eba5e4087d6932c174f97637833862c9f9ec25/shard-bmg-1/igt@xe_module_load@load.html
[89]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-2/igt@xe_module_load@load.html
[90]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-3/igt@xe_module_load@load.html
[91]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-3/igt@xe_module_load@load.html
[92]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-4/igt@xe_module_load@load.html
[93]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-5/igt@xe_module_load@load.html
[94]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-1/igt@xe_module_load@load.html
[95]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-3/igt@xe_module_load@load.html
[96]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-3/igt@xe_module_load@load.html
[97]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-8/igt@xe_module_load@load.html
[98]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-5/igt@xe_module_load@load.html
[99]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-8/igt@xe_module_load@load.html
[100]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-5/igt@xe_module_load@load.html
[101]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-8/igt@xe_module_load@load.html
[102]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-4/igt@xe_module_load@load.html
[103]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-4/igt@xe_module_load@load.html
[104]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-2/igt@xe_module_load@load.html
[105]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-1/igt@xe_module_load@load.html
[106]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-1/igt@xe_module_load@load.html
[107]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-1/igt@xe_module_load@load.html
[108]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-1/igt@xe_module_load@load.html
#### Warnings ####
* igt@kms_hdr@brightness-with-hdr:
- shard-bmg: [SKIP][109] ([Intel XE#3544]) -> [SKIP][110] ([Intel XE#3374] / [Intel XE#3544])
[109]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4234-90eba5e4087d6932c174f97637833862c9f9ec25/shard-bmg-4/igt@kms_hdr@brightness-with-hdr.html
[110]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-1/igt@kms_hdr@brightness-with-hdr.html
* igt@kms_tiled_display@basic-test-pattern:
- shard-bmg: [SKIP][111] ([Intel XE#2426]) -> [FAIL][112] ([Intel XE#1729])
[111]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4234-90eba5e4087d6932c174f97637833862c9f9ec25/shard-bmg-8/igt@kms_tiled_display@basic-test-pattern.html
[112]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/shard-bmg-2/igt@kms_tiled_display@basic-test-pattern.html
[Intel XE#1124]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1124
[Intel XE#1406]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1406
[Intel XE#1489]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1489
[Intel XE#1499]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1499
[Intel XE#1729]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1729
[Intel XE#2049]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2049
[Intel XE#2142]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2142
[Intel XE#2168]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2168
[Intel XE#2233]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2233
[Intel XE#2234]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2234
[Intel XE#2244]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2244
[Intel XE#2245]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2245
[Intel XE#2248]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2248
[Intel XE#2252]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2252
[Intel XE#2284]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2284
[Intel XE#2286]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2286
[Intel XE#2293]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2293
[Intel XE#2311]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2311
[Intel XE#2313]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2313
[Intel XE#2314]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2314
[Intel XE#2320]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2320
[Intel XE#2321]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2321
[Intel XE#2322]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2322
[Intel XE#2325]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2325
[Intel XE#2327]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2327
[Intel XE#2330]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2330
[Intel XE#2341]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2341
[Intel XE#2380]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2380
[Intel XE#2387]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2387
[Intel XE#2413]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2413
[Intel XE#2426]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2426
[Intel XE#2457]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2457
[Intel XE#2486]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2486
[Intel XE#2597]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2597
[Intel XE#2850]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2850
[Intel XE#2887]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2887
[Intel XE#2894]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2894
[Intel XE#3374]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3374
[Intel XE#3414]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3414
[Intel XE#3432]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3432
[Intel XE#346]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/346
[Intel XE#3544]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3544
[Intel XE#367]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/367
[Intel XE#3718]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3718
[Intel XE#3904]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3904
[Intel XE#4141]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4141
[Intel XE#4329]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4329
[Intel XE#4422]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4422
[Intel XE#4733]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4733
[Intel XE#4837]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4837
[Intel XE#4943]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4943
[Intel XE#5175]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5175
[Intel XE#5299]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5299
[Intel XE#5742]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5742
[Intel XE#5825]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5825
[Intel XE#607]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/607
[Intel XE#6078]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6078
[Intel XE#6251]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6251
[Intel XE#6321]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6321
[Intel XE#6480]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6480
[Intel XE#6503]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6503
[Intel XE#6507]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6507
[Intel XE#6590]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6590
[Intel XE#6665]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6665
[Intel XE#776]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/776
[Intel XE#870]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/870
[Intel XE#944]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/944
Build changes
-------------
* Linux: xe-4234-90eba5e4087d6932c174f97637833862c9f9ec25 -> xe-pw-158833v2
IGT_8665: 1806ab9c982ccaaa9d60cdde16bc1dc3bb250654 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
xe-4234-90eba5e4087d6932c174f97637833862c9f9ec25: 90eba5e4087d6932c174f97637833862c9f9ec25
xe-pw-158833v2: 158833v2
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158833v2/index.html