* [PATCH 0/2] tests/intel/xe_exec_multi_queue: Replace sleep with deterministic wait
@ 2026-04-29 2:08 Niranjana Vishwanathapura
2026-04-29 2:08 ` [PATCH 1/2] tests/intel/xe_exec_multi_queue: use timestamp to check job start Niranjana Vishwanathapura
` (4 more replies)
0 siblings, 5 replies; 15+ messages in thread
From: Niranjana Vishwanathapura @ 2026-04-29 2:08 UTC (permalink / raw)
To: igt-dev
In the 'priority' subtest, sleep() is used to ensure some submissions have
reached the hardware before the test proceeds. But the sleep duration is
indeterminate. Instead, use a more deterministic method to ensure those
submissions have reached the hardware.
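For readers unfamiliar with the pattern, the idea is to poll a value the GPU
job itself writes, with a timeout, rather than sleeping for a fixed period.
A minimal, self-contained sketch with hypothetical names (not the exact IGT
helpers used in these patches):

        #include <stdbool.h>
        #include <stdint.h>
        #include <unistd.h>

        /*
         * Poll a location the GPU job writes (e.g. a timestamp in the spinner
         * page) until it becomes non-zero, instead of sleep()ing and hoping
         * the job has reached the hardware. Returning false on timeout lets a
         * stuck job fail the test deterministically.
         */
        static bool wait_for_gpu_write(volatile uint64_t *val, unsigned int timeout_ms)
        {
                while (timeout_ms--) {
                        if (*val)
                                return true;
                        usleep(1000);
                }
                return false;
        }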
Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Niranjana Vishwanathapura (2):
tests/intel/xe_exec_multi_queue: use timestamp to check job start
tests/intel/xe_exec_multi_queue: replace sleep with barrier queue
tests/intel/xe_exec_multi_queue.c | 88 +++++++++++++++++++++++--------
1 file changed, 67 insertions(+), 21 deletions(-)
--
2.43.0
^ permalink raw reply [flat|nested] 15+ messages in thread* [PATCH 1/2] tests/intel/xe_exec_multi_queue: use timestamp to check job start 2026-04-29 2:08 [PATCH 0/2] tests/intel/xe_exec_multi_queue: Replace sleep with deterministic wait Niranjana Vishwanathapura @ 2026-04-29 2:08 ` Niranjana Vishwanathapura 2026-04-29 19:18 ` Summers, Stuart 2026-04-29 2:08 ` [PATCH 2/2] tests/intel/xe_exec_multi_queue: replace sleep with barrier queue Niranjana Vishwanathapura ` (3 subsequent siblings) 4 siblings, 1 reply; 15+ messages in thread From: Niranjana Vishwanathapura @ 2026-04-29 2:08 UTC (permalink / raw) To: igt-dev In __test_priority(), enable write_timestamp in xe_spin_init_opts() and use timestamp value to determine the queue switch order. This replaces the indeterminate sleep() with a more deterministic wait based on the GPU timestamp. Pre-set all spinners to preempt-wait before submission so each queue, once scheduled by HW, blocks at the semaphore. This ensures all queues are running on HW before testing priority-based scheduling. Assisted-by: GitHub Copilot:claude-sonnet-4.6 Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com> --- tests/intel/xe_exec_multi_queue.c | 51 ++++++++++++++++++++++--------- 1 file changed, 37 insertions(+), 14 deletions(-) diff --git a/tests/intel/xe_exec_multi_queue.c b/tests/intel/xe_exec_multi_queue.c index ca96099d36..382705d065 100644 --- a/tests/intel/xe_exec_multi_queue.c +++ b/tests/intel/xe_exec_multi_queue.c @@ -454,25 +454,24 @@ __test_priority(int fd, struct drm_xe_engine_class_instance *eci, for (i = 0; i < num_queues; i++) { uint64_t spin_addr = addr + i * sizeof(struct xe_spin); - xe_spin_init_opts(spin[i], .addr = spin_addr, .multi_queue_switch = true); + xe_spin_init_opts(spin[i], .addr = spin_addr, .multi_queue_switch = true, + .write_timestamp = true); + /* + * Pre-set all spinners to preempt-wait so each queue, once + * scheduled, immediately blocks at the QUEUE_SWITCH_MODE semaphore + * after writing its timestamp. The HW switches between queues at + * this point, allowing all of them to schedule deterministically. + */ + xe_spin_preempt_wait(spin[i]); sync.addr = spin_addr + (char *)&spin[i]->exec_sync - (char *)spin[i]; exec.exec_queue_id = exec_queues[i]; exec.address = spin_addr; xe_exec(fd, &exec); - - /* Wait for job on Q0 to start, other queues block behind Q0 */ - if (!i) - xe_spin_wait_started(spin[i]); } - sleep(sleep_duration); - - /* - * Expect the job on other queue to not get scheduled while the spinner - * on q0 is not waiting on preempt condition. - */ - for (i = 1; i < num_queues; i++) - igt_assert(!xe_spin_started(spin[i])); + /* Wait for all queues to start */ + for (i = 0; i < num_queues; i++) + xe_spin_wait_started(spin[i]); if (flags & DYN_PRIORITY) { /* Assign increasing order of priority for secondary queues */ @@ -485,6 +484,30 @@ __test_priority(int fd, struct drm_xe_engine_class_instance *eci, sleep(sleep_duration); } + /* + * Clear timestamps and release all queues from the semaphore wait. + * The order in which they next write a timestamp reveals the + * priority-based scheduling order. + */ + for (i = 0; i < num_queues; i++) { + WRITE_ONCE(spin[i]->timestamp, 0); + xe_spin_preempt_nowait(spin[i]); + + /* + * For Q0, wait until it is running again to ensure it holds the engine + * when priority arbitration is triggered. + */ + if (!i) + while (!READ_ONCE(spin[i]->timestamp)); + } + + /* + * Verify that secondary queues have not been scheduled while Q0 + * holds the engine. 
+ */ + for (i = 1; i < num_queues; i++) + igt_assert(!READ_ONCE(spin[i]->timestamp)); + /* * Trigger a queue switch by making the spinner on q0 to wait on preempt * condition, allowing job on q1 to get scheduled and finish. When we end @@ -501,7 +524,7 @@ __test_priority(int fd, struct drm_xe_engine_class_instance *eci, i = 1; while (i < num_queues) { for (j = 1; j < num_queues; j++) { - if (xe_spin_started(spin[j]) && ((already_in_order & (1 << j)) == 0)) { + if (READ_ONCE(spin[j]->timestamp) && ((already_in_order & (1 << j)) == 0)) { start_order[i] = j; xe_spin_end(spin[j]); xe_wait_ufence(fd, &spin[j]->exec_sync, USER_FENCE_VALUE, -- 2.43.0 ^ permalink raw reply related [flat|nested] 15+ messages in thread
* Re: [PATCH 1/2] tests/intel/xe_exec_multi_queue: use timestamp to check job start 2026-04-29 2:08 ` [PATCH 1/2] tests/intel/xe_exec_multi_queue: use timestamp to check job start Niranjana Vishwanathapura @ 2026-04-29 19:18 ` Summers, Stuart 2026-04-29 19:24 ` Summers, Stuart 0 siblings, 1 reply; 15+ messages in thread From: Summers, Stuart @ 2026-04-29 19:18 UTC (permalink / raw) To: igt-dev@lists.freedesktop.org, Vishwanathapura, Niranjana On Tue, 2026-04-28 at 19:08 -0700, Niranjana Vishwanathapura wrote: > In __test_priority(), enable write_timestamp in xe_spin_init_opts() > and use timestamp value to determine the queue switch order. This > replaces the indeterminate sleep() with a more deterministic wait > based on the GPU timestamp. > > Pre-set all spinners to preempt-wait before submission so each > queue, once scheduled by HW, blocks at the semaphore. This ensures > all queues are running on HW before testing priority-based > scheduling. > > Assisted-by: GitHub Copilot:claude-sonnet-4.6 > Signed-off-by: Niranjana Vishwanathapura > <niranjana.vishwanathapura@intel.com> > --- > tests/intel/xe_exec_multi_queue.c | 51 ++++++++++++++++++++++------- > -- > 1 file changed, 37 insertions(+), 14 deletions(-) > > diff --git a/tests/intel/xe_exec_multi_queue.c > b/tests/intel/xe_exec_multi_queue.c > index ca96099d36..382705d065 100644 > --- a/tests/intel/xe_exec_multi_queue.c > +++ b/tests/intel/xe_exec_multi_queue.c > @@ -454,25 +454,24 @@ __test_priority(int fd, struct > drm_xe_engine_class_instance *eci, > for (i = 0; i < num_queues; i++) { > uint64_t spin_addr = addr + i * sizeof(struct > xe_spin); > > - xe_spin_init_opts(spin[i], .addr = spin_addr, > .multi_queue_switch = true); > + xe_spin_init_opts(spin[i], .addr = spin_addr, > .multi_queue_switch = true, > + .write_timestamp = true); > + /* > + * Pre-set all spinners to preempt-wait so each > queue, once > + * scheduled, immediately blocks at the > QUEUE_SWITCH_MODE semaphore > + * after writing its timestamp. The HW switches > between queues at > + * this point, allowing all of them to schedule > deterministically. > + */ > + xe_spin_preempt_wait(spin[i]); > sync.addr = spin_addr + (char *)&spin[i]->exec_sync - > (char *)spin[i]; > exec.exec_queue_id = exec_queues[i]; > exec.address = spin_addr; > xe_exec(fd, &exec); > - > - /* Wait for job on Q0 to start, other queues block > behind Q0 */ > - if (!i) > - xe_spin_wait_started(spin[i]); > } > > - sleep(sleep_duration); > - > - /* > - * Expect the job on other queue to not get scheduled while > the spinner > - * on q0 is not waiting on preempt condition. > - */ > - for (i = 1; i < num_queues; i++) > - igt_assert(!xe_spin_started(spin[i])); > + /* Wait for all queues to start */ > + for (i = 0; i < num_queues; i++) > + xe_spin_wait_started(spin[i]); So I see in the spinner batch that the c0ffee value is written before the timestamp... should we change that order in the batch to account for this scenario? And should we have a check before we go into the for loop to clear the timestamp below that all of the timestamps are non- zero first? I'm wondering if we can hit some race condition where we mark everything as started, but some of the queues (well.. at least one of hte queues) hasn't actually hit the semaphore wait yet. That's the only issue I see here really, everything else looks ok to me. And maybe I'm just being paranoid about the case above, just trying to see where we could have some hole here if things are running sufficiently slow for some reason. 
Thanks, Stuart > > if (flags & DYN_PRIORITY) { > /* Assign increasing order of priority for secondary > queues */ > @@ -485,6 +484,30 @@ __test_priority(int fd, struct > drm_xe_engine_class_instance *eci, > sleep(sleep_duration); > } > > + /* > + * Clear timestamps and release all queues from the semaphore > wait. > + * The order in which they next write a timestamp reveals the > + * priority-based scheduling order. > + */ > + for (i = 0; i < num_queues; i++) { > + WRITE_ONCE(spin[i]->timestamp, 0); > + xe_spin_preempt_nowait(spin[i]); > + > + /* > + * For Q0, wait until it is running again to ensure > it holds the engine > + * when priority arbitration is triggered. > + */ > + if (!i) > + while (!READ_ONCE(spin[i]->timestamp)); > + } > + > + /* > + * Verify that secondary queues have not been scheduled while > Q0 > + * holds the engine. > + */ > + for (i = 1; i < num_queues; i++) > + igt_assert(!READ_ONCE(spin[i]->timestamp)); > + > /* > * Trigger a queue switch by making the spinner on q0 to wait > on preempt > * condition, allowing job on q1 to get scheduled and finish. > When we end > @@ -501,7 +524,7 @@ __test_priority(int fd, struct > drm_xe_engine_class_instance *eci, > i = 1; > while (i < num_queues) { > for (j = 1; j < num_queues; j++) { > - if (xe_spin_started(spin[j]) && > ((already_in_order & (1 << j)) == 0)) { > + if (READ_ONCE(spin[j]->timestamp) && > ((already_in_order & (1 << j)) == 0)) { > start_order[i] = j; > xe_spin_end(spin[j]); > xe_wait_ufence(fd, &spin[j]- > >exec_sync, USER_FENCE_VALUE, ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH 1/2] tests/intel/xe_exec_multi_queue: use timestamp to check job start 2026-04-29 19:18 ` Summers, Stuart @ 2026-04-29 19:24 ` Summers, Stuart 2026-04-30 4:04 ` Niranjana Vishwanathapura 0 siblings, 1 reply; 15+ messages in thread From: Summers, Stuart @ 2026-04-29 19:24 UTC (permalink / raw) To: igt-dev@lists.freedesktop.org, Vishwanathapura, Niranjana On Wed, 2026-04-29 at 19:18 +0000, Summers, Stuart wrote: > On Tue, 2026-04-28 at 19:08 -0700, Niranjana Vishwanathapura wrote: > > In __test_priority(), enable write_timestamp in xe_spin_init_opts() > > and use timestamp value to determine the queue switch order. This > > replaces the indeterminate sleep() with a more deterministic wait > > based on the GPU timestamp. > > > > Pre-set all spinners to preempt-wait before submission so each > > queue, once scheduled by HW, blocks at the semaphore. This ensures > > all queues are running on HW before testing priority-based > > scheduling. > > > > Assisted-by: GitHub Copilot:claude-sonnet-4.6 > > Signed-off-by: Niranjana Vishwanathapura > > <niranjana.vishwanathapura@intel.com> > > --- > > tests/intel/xe_exec_multi_queue.c | 51 ++++++++++++++++++++++----- > > -- > > -- > > 1 file changed, 37 insertions(+), 14 deletions(-) > > > > diff --git a/tests/intel/xe_exec_multi_queue.c > > b/tests/intel/xe_exec_multi_queue.c > > index ca96099d36..382705d065 100644 > > --- a/tests/intel/xe_exec_multi_queue.c > > +++ b/tests/intel/xe_exec_multi_queue.c > > @@ -454,25 +454,24 @@ __test_priority(int fd, struct > > drm_xe_engine_class_instance *eci, > > for (i = 0; i < num_queues; i++) { > > uint64_t spin_addr = addr + i * sizeof(struct > > xe_spin); > > > > - xe_spin_init_opts(spin[i], .addr = spin_addr, > > .multi_queue_switch = true); > > + xe_spin_init_opts(spin[i], .addr = spin_addr, > > .multi_queue_switch = true, > > + .write_timestamp = true); > > + /* > > + * Pre-set all spinners to preempt-wait so each > > queue, once > > + * scheduled, immediately blocks at the > > QUEUE_SWITCH_MODE semaphore > > + * after writing its timestamp. The HW switches > > between queues at > > + * this point, allowing all of them to schedule > > deterministically. > > + */ > > + xe_spin_preempt_wait(spin[i]); > > sync.addr = spin_addr + (char *)&spin[i]->exec_sync > > - > > (char *)spin[i]; > > exec.exec_queue_id = exec_queues[i]; > > exec.address = spin_addr; > > xe_exec(fd, &exec); > > - > > - /* Wait for job on Q0 to start, other queues block > > behind Q0 */ > > - if (!i) > > - xe_spin_wait_started(spin[i]); > > } > > > > - sleep(sleep_duration); > > - > > - /* > > - * Expect the job on other queue to not get scheduled while > > the spinner > > - * on q0 is not waiting on preempt condition. > > - */ > > - for (i = 1; i < num_queues; i++) > > - igt_assert(!xe_spin_started(spin[i])); > > + /* Wait for all queues to start */ > > + for (i = 0; i < num_queues; i++) > > + xe_spin_wait_started(spin[i]); > > So I see in the spinner batch that the c0ffee value is written before > the timestamp... should we change that order in the batch to account > for this scenario? And should we have a check before we go into the > for > loop to clear the timestamp below that all of the timestamps are non- > zero first? > > I'm wondering if we can hit some race condition where we mark > everything as started, but some of the queues (well.. at least one of > hte queues) hasn't actually hit the semaphore wait yet. > > That's the only issue I see here really, everything else looks ok to > me. 
And maybe I'm just being paranoid about the case above, just > trying > to see where we could have some hole here if things are running > sufficiently slow for some reason. And it could also be, of course, that the timestamp gets written but the semaphore wait hasn't been seen yet... so maybe in addition to the non-zero check, we should make sure it increments at least once somehow? -Stuart > > Thanks, > Stuart > > > > > if (flags & DYN_PRIORITY) { > > /* Assign increasing order of priority for > > secondary > > queues */ > > @@ -485,6 +484,30 @@ __test_priority(int fd, struct > > drm_xe_engine_class_instance *eci, > > sleep(sleep_duration); > > } > > > > + /* > > + * Clear timestamps and release all queues from the > > semaphore > > wait. > > + * The order in which they next write a timestamp reveals > > the > > + * priority-based scheduling order. > > + */ > > + for (i = 0; i < num_queues; i++) { > > + WRITE_ONCE(spin[i]->timestamp, 0); > > + xe_spin_preempt_nowait(spin[i]); > > + > > + /* > > + * For Q0, wait until it is running again to ensure > > it holds the engine > > + * when priority arbitration is triggered. > > + */ > > + if (!i) > > + while (!READ_ONCE(spin[i]->timestamp)); > > + } > > + > > + /* > > + * Verify that secondary queues have not been scheduled > > while > > Q0 > > + * holds the engine. > > + */ > > + for (i = 1; i < num_queues; i++) > > + igt_assert(!READ_ONCE(spin[i]->timestamp)); > > + > > /* > > * Trigger a queue switch by making the spinner on q0 to > > wait > > on preempt > > * condition, allowing job on q1 to get scheduled and > > finish. > > When we end > > @@ -501,7 +524,7 @@ __test_priority(int fd, struct > > drm_xe_engine_class_instance *eci, > > i = 1; > > while (i < num_queues) { > > for (j = 1; j < num_queues; j++) { > > - if (xe_spin_started(spin[j]) && > > ((already_in_order & (1 << j)) == 0)) { > > + if (READ_ONCE(spin[j]->timestamp) && > > ((already_in_order & (1 << j)) == 0)) { > > start_order[i] = j; > > xe_spin_end(spin[j]); > > xe_wait_ufence(fd, &spin[j]- > > > exec_sync, USER_FENCE_VALUE, > ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH 1/2] tests/intel/xe_exec_multi_queue: use timestamp to check job start 2026-04-29 19:24 ` Summers, Stuart @ 2026-04-30 4:04 ` Niranjana Vishwanathapura 0 siblings, 0 replies; 15+ messages in thread From: Niranjana Vishwanathapura @ 2026-04-30 4:04 UTC (permalink / raw) To: Summers, Stuart; +Cc: igt-dev@lists.freedesktop.org On Wed, Apr 29, 2026 at 12:24:31PM -0700, Summers, Stuart wrote: >On Wed, 2026-04-29 at 19:18 +0000, Summers, Stuart wrote: >> On Tue, 2026-04-28 at 19:08 -0700, Niranjana Vishwanathapura wrote: >> > In __test_priority(), enable write_timestamp in xe_spin_init_opts() >> > and use timestamp value to determine the queue switch order. This >> > replaces the indeterminate sleep() with a more deterministic wait >> > based on the GPU timestamp. >> > >> > Pre-set all spinners to preempt-wait before submission so each >> > queue, once scheduled by HW, blocks at the semaphore. This ensures >> > all queues are running on HW before testing priority-based >> > scheduling. >> > >> > Assisted-by: GitHub Copilot:claude-sonnet-4.6 >> > Signed-off-by: Niranjana Vishwanathapura >> > <niranjana.vishwanathapura@intel.com> >> > --- >> > tests/intel/xe_exec_multi_queue.c | 51 ++++++++++++++++++++++----- >> > -- >> > -- >> > 1 file changed, 37 insertions(+), 14 deletions(-) >> > >> > diff --git a/tests/intel/xe_exec_multi_queue.c >> > b/tests/intel/xe_exec_multi_queue.c >> > index ca96099d36..382705d065 100644 >> > --- a/tests/intel/xe_exec_multi_queue.c >> > +++ b/tests/intel/xe_exec_multi_queue.c >> > @@ -454,25 +454,24 @@ __test_priority(int fd, struct >> > drm_xe_engine_class_instance *eci, >> > for (i = 0; i < num_queues; i++) { >> > uint64_t spin_addr = addr + i * sizeof(struct >> > xe_spin); >> > >> > - xe_spin_init_opts(spin[i], .addr = spin_addr, >> > .multi_queue_switch = true); >> > + xe_spin_init_opts(spin[i], .addr = spin_addr, >> > .multi_queue_switch = true, >> > + .write_timestamp = true); >> > + /* >> > + * Pre-set all spinners to preempt-wait so each >> > queue, once >> > + * scheduled, immediately blocks at the >> > QUEUE_SWITCH_MODE semaphore >> > + * after writing its timestamp. The HW switches >> > between queues at >> > + * this point, allowing all of them to schedule >> > deterministically. >> > + */ >> > + xe_spin_preempt_wait(spin[i]); >> > sync.addr = spin_addr + (char *)&spin[i]->exec_sync >> > - >> > (char *)spin[i]; >> > exec.exec_queue_id = exec_queues[i]; >> > exec.address = spin_addr; >> > xe_exec(fd, &exec); >> > - >> > - /* Wait for job on Q0 to start, other queues block >> > behind Q0 */ >> > - if (!i) >> > - xe_spin_wait_started(spin[i]); >> > } >> > >> > - sleep(sleep_duration); >> > - >> > - /* >> > - * Expect the job on other queue to not get scheduled while >> > the spinner >> > - * on q0 is not waiting on preempt condition. >> > - */ >> > - for (i = 1; i < num_queues; i++) >> > - igt_assert(!xe_spin_started(spin[i])); >> > + /* Wait for all queues to start */ >> > + for (i = 0; i < num_queues; i++) >> > + xe_spin_wait_started(spin[i]); >> >> So I see in the spinner batch that the c0ffee value is written before >> the timestamp... should we change that order in the batch to account >> for this scenario? And should we have a check before we go into the >> for >> loop to clear the timestamp below that all of the timestamps are non- >> zero first? >> >> I'm wondering if we can hit some race condition where we mark >> everything as started, but some of the queues (well.. 
at least one of >> hte queues) hasn't actually hit the semaphore wait yet. >> >> That's the only issue I see here really, everything else looks ok to >> me. And maybe I'm just being paranoid about the case above, just >> trying >> to see where we could have some hole here if things are running >> sufficiently slow for some reason. > I understand your concern that GPU job might have started, and have written to start_addr, but not updated the timestamp yet by the time we allow switching to happen down below. But I am not sure if that happening is a real concern here as GPU runs much faster than the test here. But I will change the code to read the timestamp until it is non-zero here instead of xe_spin_wait_started(). That will ensure job has run past writing the timestamp before waiting on semaphore. Claude generated code like that to begin with but I changed it to use xe_spin_wait_started() thinking it will be less confusing for the reader. >And it could also be, of course, that the timestamp gets written but >the semaphore wait hasn't been seen yet... so maybe in addition to the >non-zero check, we should make sure it increments at least once >somehow? That should be fine as we clear the timestamp below and ensure Q0 is running before we allow queue-switch to happen. That ensures all queues are waiting on the semaphore at that point. All we need is to ensure Q0 is running and timestamp of all queues are cleared. Niranjana > >-Stuart > >> >> Thanks, >> Stuart >> >> > >> > if (flags & DYN_PRIORITY) { >> > /* Assign increasing order of priority for >> > secondary >> > queues */ >> > @@ -485,6 +484,30 @@ __test_priority(int fd, struct >> > drm_xe_engine_class_instance *eci, >> > sleep(sleep_duration); >> > } >> > >> > + /* >> > + * Clear timestamps and release all queues from the >> > semaphore >> > wait. >> > + * The order in which they next write a timestamp reveals >> > the >> > + * priority-based scheduling order. >> > + */ >> > + for (i = 0; i < num_queues; i++) { >> > + WRITE_ONCE(spin[i]->timestamp, 0); >> > + xe_spin_preempt_nowait(spin[i]); >> > + >> > + /* >> > + * For Q0, wait until it is running again to ensure >> > it holds the engine >> > + * when priority arbitration is triggered. >> > + */ >> > + if (!i) >> > + while (!READ_ONCE(spin[i]->timestamp)); >> > + } >> > + >> > + /* >> > + * Verify that secondary queues have not been scheduled >> > while >> > Q0 >> > + * holds the engine. >> > + */ >> > + for (i = 1; i < num_queues; i++) >> > + igt_assert(!READ_ONCE(spin[i]->timestamp)); >> > + >> > /* >> > * Trigger a queue switch by making the spinner on q0 to >> > wait >> > on preempt >> > * condition, allowing job on q1 to get scheduled and >> > finish. >> > When we end >> > @@ -501,7 +524,7 @@ __test_priority(int fd, struct >> > drm_xe_engine_class_instance *eci, >> > i = 1; >> > while (i < num_queues) { >> > for (j = 1; j < num_queues; j++) { >> > - if (xe_spin_started(spin[j]) && >> > ((already_in_order & (1 << j)) == 0)) { >> > + if (READ_ONCE(spin[j]->timestamp) && >> > ((already_in_order & (1 << j)) == 0)) { >> > start_order[i] = j; >> > xe_spin_end(spin[j]); >> > xe_wait_ufence(fd, &spin[j]- >> > > exec_sync, USER_FENCE_VALUE, >> > ^ permalink raw reply [flat|nested] 15+ messages in thread
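As a rough illustration of the change Niranjana describes above (a fragment
reusing the test's existing spin[]/num_queues variables; the timeout handling
is an assumption, not the exact code of the follow-up revision):

        /*
         * Wait until each spinner has written its timestamp, rather than only
         * checking the "started" marker. A non-zero timestamp proves the job
         * executed past the timestamp write before parking on the semaphore.
         */
        for (i = 0; i < num_queues; i++) {
                struct timespec tv = {};

                while (!READ_ONCE(spin[i]->timestamp))
                        igt_assert_f(igt_nsec_elapsed(&tv) < NSEC_PER_SEC,
                                     "queue %d did not write a timestamp\n", i);
        }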
* [PATCH 2/2] tests/intel/xe_exec_multi_queue: replace sleep with barrier queue 2026-04-29 2:08 [PATCH 0/2] tests/intel/xe_exec_multi_queue: Replace sleep with deterministic wait Niranjana Vishwanathapura 2026-04-29 2:08 ` [PATCH 1/2] tests/intel/xe_exec_multi_queue: use timestamp to check job start Niranjana Vishwanathapura @ 2026-04-29 2:08 ` Niranjana Vishwanathapura 2026-04-29 18:27 ` Summers, Stuart 2026-04-29 19:28 ` Summers, Stuart 2026-04-29 3:16 ` ✓ Xe.CI.BAT: success for tests/intel/xe_exec_multi_queue: Replace sleep with deterministic wait Patchwork ` (2 subsequent siblings) 4 siblings, 2 replies; 15+ messages in thread From: Niranjana Vishwanathapura @ 2026-04-29 2:08 UTC (permalink / raw) To: igt-dev In __test_priority() DYN_PRIORITY case, replace sleep() with a deterministic barrier using an extra queue in the same multi-queue group. After assigning priorities, submit a spinner to the extra queue, end it immediately and wait for its user fence to signal. This guarantees a full scheduler round-trip confirming the priority updates have taken effect before releasing the other queues. Increase exec_queues[] and spin[] array sizes by 1 to accommodate the extra barrier queue slot at index num_queues. Assisted-by: GitHub Copilot:claude-sonnet-4.6 Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com> --- tests/intel/xe_exec_multi_queue.c | 37 +++++++++++++++++++++++++------ 1 file changed, 30 insertions(+), 7 deletions(-) diff --git a/tests/intel/xe_exec_multi_queue.c b/tests/intel/xe_exec_multi_queue.c index 382705d065..8c6fbb2d18 100644 --- a/tests/intel/xe_exec_multi_queue.c +++ b/tests/intel/xe_exec_multi_queue.c @@ -381,8 +381,8 @@ __test_priority(int fd, struct drm_xe_engine_class_instance *eci, .syncs = to_user_pointer(&sync), }; uint64_t vm_sync = 0, addr = BASE_ADDRESS; - uint32_t exec_queues[XE_EXEC_QUEUE_PRIORITY_N]; - struct xe_spin *spin[XE_EXEC_QUEUE_PRIORITY_N]; + uint32_t exec_queues[XE_EXEC_QUEUE_PRIORITY_N + 1]; + struct xe_spin *spin[XE_EXEC_QUEUE_PRIORITY_N + 1]; uint32_t vm, num_queues, num_queue_priorities, bo = 0; uint32_t start_order[XE_EXEC_QUEUE_PRIORITY_N] = { 0 }; int64_t fence_timeout = NSEC_PER_SEC; @@ -403,7 +403,7 @@ __test_priority(int fd, struct drm_xe_engine_class_instance *eci, .value = DRM_XE_MULTI_GROUP_CREATE, }; uint64_t ext = to_user_pointer(&multi_queue); - int i, j, sleep_duration = 1; + int i, j; void *bo_map; num_queue_priorities = XE_EXEC_QUEUE_NUM_PRIORITIES; @@ -415,12 +415,12 @@ __test_priority(int fd, struct drm_xe_engine_class_instance *eci, eci[0].engine_class, eci[0].engine_instance); vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_LR_MODE, 0); - bo_size = xe_bb_size(fd, sizeof(*spin[0]) * num_queues); + bo_size = xe_bb_size(fd, sizeof(*spin[0]) * (num_queues + 1)); bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, eci[0].gt_id), DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM); bo_map = xe_bo_map(fd, bo, bo_size); - for (i = 0; i < num_queues; i++) + for (i = 0; i < num_queues + 1; i++) spin[i] = bo_map + i * sizeof(*spin[0]); /* Use the default priority for Q0 because we are explicitly waiting for it below */ @@ -430,6 +430,11 @@ __test_priority(int fd, struct drm_xe_engine_class_instance *eci, if (flags & DYN_PRIORITY) { for (i = 1; i < num_queues; i++) exec_queues[i] = xe_exec_queue_create(fd, vm, eci, ext); + /* + * Create an extra queue in the same multi-queue group, used as + * a barrier to confirm priority updates have taken effect. 
+ */ + exec_queues[num_queues] = xe_exec_queue_create(fd, vm, eci, ext); } else { struct drm_xe_ext_set_property mq_priority = { .base.name = DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY, @@ -474,14 +479,28 @@ __test_priority(int fd, struct drm_xe_engine_class_instance *eci, xe_spin_wait_started(spin[i]); if (flags & DYN_PRIORITY) { + uint64_t barrier_spin_addr = addr + num_queues * sizeof(struct xe_spin); + /* Assign increasing order of priority for secondary queues */ for (i = 1; i < num_queues; i++) xe_exec_queue_set_property(fd, exec_queues[i], DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_QUEUE_PRIORITY, i % num_queue_priorities); - /* Wait for priorities to take effect */ - sleep(sleep_duration); + /* + * Submit a barrier job on the extra queue to ensure priority + * updates have taken effect before releasing the other queues. + */ + xe_spin_init_opts(spin[num_queues], .addr = barrier_spin_addr, + .preempt = true); + sync.addr = barrier_spin_addr + + ((char *)&spin[num_queues]->exec_sync - (char *)spin[num_queues]); + exec.exec_queue_id = exec_queues[num_queues]; + exec.address = barrier_spin_addr; + xe_exec(fd, &exec); + xe_spin_end(spin[num_queues]); + xe_wait_ufence(fd, &spin[num_queues]->exec_sync, USER_FENCE_VALUE, + exec_queues[num_queues], fence_timeout); } /* @@ -566,6 +585,10 @@ __test_priority(int fd, struct drm_xe_engine_class_instance *eci, for (i = 0; i < num_queues; i++) xe_exec_queue_destroy(fd, exec_queues[i]); + /* Destroy the extra queue */ + if (flags & DYN_PRIORITY) + xe_exec_queue_destroy(fd, exec_queues[num_queues]); + munmap(bo_map, bo_size); gem_close(fd, bo); -- 2.43.0 ^ permalink raw reply related [flat|nested] 15+ messages in thread
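The barrier in this patch is essentially a synchronous round trip through the
same scheduler group: submit a job on the extra queue, end it immediately, and
wait for its user fence. A condensed sketch of just that pattern as a
hypothetical helper (the ufence address setup is assumed to have been done by
the caller, as in the diff above):

        static void flush_multi_queue_group(int fd, uint32_t barrier_queue,
                                            struct xe_spin *barrier_spin,
                                            uint64_t barrier_addr,
                                            struct drm_xe_exec *exec,
                                            int64_t timeout)
        {
                /* assumes exec->syncs already points at barrier_spin->exec_sync */
                exec->exec_queue_id = barrier_queue;
                exec->address = barrier_addr;
                xe_exec(fd, exec);              /* submit the barrier job */
                xe_spin_end(barrier_spin);      /* let it complete immediately */
                xe_wait_ufence(fd, &barrier_spin->exec_sync, USER_FENCE_VALUE,
                               barrier_queue, timeout);
        }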
* Re: [PATCH 2/2] tests/intel/xe_exec_multi_queue: replace sleep with barrier queue 2026-04-29 2:08 ` [PATCH 2/2] tests/intel/xe_exec_multi_queue: replace sleep with barrier queue Niranjana Vishwanathapura @ 2026-04-29 18:27 ` Summers, Stuart 2026-04-29 20:52 ` Wang, X 2026-04-29 19:28 ` Summers, Stuart 1 sibling, 1 reply; 15+ messages in thread From: Summers, Stuart @ 2026-04-29 18:27 UTC (permalink / raw) To: igt-dev@lists.freedesktop.org, Vishwanathapura, Niranjana On Tue, 2026-04-28 at 19:08 -0700, Niranjana Vishwanathapura wrote: > In __test_priority() DYN_PRIORITY case, replace sleep() with a > deterministic barrier using an extra queue in the same multi-queue > group. After assigning priorities, submit a spinner to the extra > queue, end it immediately and wait for its user fence to signal. > This guarantees a full scheduler round-trip confirming the priority > updates have taken effect before releasing the other queues. > > Increase exec_queues[] and spin[] array sizes by 1 to accommodate > the extra barrier queue slot at index num_queues. > > Assisted-by: GitHub Copilot:claude-sonnet-4.6 > Signed-off-by: Niranjana Vishwanathapura > <niranjana.vishwanathapura@intel.com> > --- > tests/intel/xe_exec_multi_queue.c | 37 +++++++++++++++++++++++++---- > -- > 1 file changed, 30 insertions(+), 7 deletions(-) > > diff --git a/tests/intel/xe_exec_multi_queue.c > b/tests/intel/xe_exec_multi_queue.c > index 382705d065..8c6fbb2d18 100644 > --- a/tests/intel/xe_exec_multi_queue.c > +++ b/tests/intel/xe_exec_multi_queue.c > @@ -381,8 +381,8 @@ __test_priority(int fd, struct > drm_xe_engine_class_instance *eci, > .syncs = to_user_pointer(&sync), > }; > uint64_t vm_sync = 0, addr = BASE_ADDRESS; > - uint32_t exec_queues[XE_EXEC_QUEUE_PRIORITY_N]; > - struct xe_spin *spin[XE_EXEC_QUEUE_PRIORITY_N]; > + uint32_t exec_queues[XE_EXEC_QUEUE_PRIORITY_N + 1]; > + struct xe_spin *spin[XE_EXEC_QUEUE_PRIORITY_N + 1]; Since we're only really making use of this in the dynamic case, should we have "+ !!DYNAMIC" instead of "+ 1" here? I.e. we only care about the extra barrier one in the dynamic case? 
Thanks, Stuart > uint32_t vm, num_queues, num_queue_priorities, bo = 0; > uint32_t start_order[XE_EXEC_QUEUE_PRIORITY_N] = { 0 }; > int64_t fence_timeout = NSEC_PER_SEC; > @@ -403,7 +403,7 @@ __test_priority(int fd, struct > drm_xe_engine_class_instance *eci, > .value = DRM_XE_MULTI_GROUP_CREATE, > }; > uint64_t ext = to_user_pointer(&multi_queue); > - int i, j, sleep_duration = 1; > + int i, j; > void *bo_map; > > num_queue_priorities = XE_EXEC_QUEUE_NUM_PRIORITIES; > @@ -415,12 +415,12 @@ __test_priority(int fd, struct > drm_xe_engine_class_instance *eci, > eci[0].engine_class, eci[0].engine_instance); > > vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_LR_MODE, 0); > - bo_size = xe_bb_size(fd, sizeof(*spin[0]) * num_queues); > + bo_size = xe_bb_size(fd, sizeof(*spin[0]) * (num_queues + > 1)); > > bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, > eci[0].gt_id), > DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM); > bo_map = xe_bo_map(fd, bo, bo_size); > - for (i = 0; i < num_queues; i++) > + for (i = 0; i < num_queues + 1; i++) > spin[i] = bo_map + i * sizeof(*spin[0]); > > /* Use the default priority for Q0 because we are explicitly > waiting for it below */ > @@ -430,6 +430,11 @@ __test_priority(int fd, struct > drm_xe_engine_class_instance *eci, > if (flags & DYN_PRIORITY) { > for (i = 1; i < num_queues; i++) > exec_queues[i] = xe_exec_queue_create(fd, vm, > eci, ext); > + /* > + * Create an extra queue in the same multi-queue > group, used as > + * a barrier to confirm priority updates have taken > effect. > + */ > + exec_queues[num_queues] = xe_exec_queue_create(fd, > vm, eci, ext); > } else { > struct drm_xe_ext_set_property mq_priority = { > .base.name = > DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY, > @@ -474,14 +479,28 @@ __test_priority(int fd, struct > drm_xe_engine_class_instance *eci, > xe_spin_wait_started(spin[i]); > > if (flags & DYN_PRIORITY) { > + uint64_t barrier_spin_addr = addr + num_queues * > sizeof(struct xe_spin); > + > /* Assign increasing order of priority for secondary > queues */ > for (i = 1; i < num_queues; i++) > xe_exec_queue_set_property(fd, > exec_queues[i], > > DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_QUEUE_PRIORITY, > i % > num_queue_priorities); > > - /* Wait for priorities to take effect */ > - sleep(sleep_duration); > + /* > + * Submit a barrier job on the extra queue to ensure > priority > + * updates have taken effect before releasing the > other queues. > + */ > + xe_spin_init_opts(spin[num_queues], .addr = > barrier_spin_addr, > + .preempt = true); > + sync.addr = barrier_spin_addr + > + ((char *)&spin[num_queues]->exec_sync - (char > *)spin[num_queues]); > + exec.exec_queue_id = exec_queues[num_queues]; > + exec.address = barrier_spin_addr; > + xe_exec(fd, &exec); > + xe_spin_end(spin[num_queues]); > + xe_wait_ufence(fd, &spin[num_queues]->exec_sync, > USER_FENCE_VALUE, > + exec_queues[num_queues], > fence_timeout); > } > > /* > @@ -566,6 +585,10 @@ __test_priority(int fd, struct > drm_xe_engine_class_instance *eci, > for (i = 0; i < num_queues; i++) > xe_exec_queue_destroy(fd, exec_queues[i]); > > + /* Destroy the extra queue */ > + if (flags & DYN_PRIORITY) > + xe_exec_queue_destroy(fd, exec_queues[num_queues]); > + > munmap(bo_map, bo_size); > gem_close(fd, bo); > ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH 2/2] tests/intel/xe_exec_multi_queue: replace sleep with barrier queue 2026-04-29 18:27 ` Summers, Stuart @ 2026-04-29 20:52 ` Wang, X 2026-04-30 4:06 ` Niranjana Vishwanathapura 0 siblings, 1 reply; 15+ messages in thread From: Wang, X @ 2026-04-29 20:52 UTC (permalink / raw) To: Summers, Stuart, igt-dev@lists.freedesktop.org, Vishwanathapura, Niranjana On 4/29/2026 11:27, Summers, Stuart wrote: > On Tue, 2026-04-28 at 19:08 -0700, Niranjana Vishwanathapura wrote: >> In __test_priority() DYN_PRIORITY case, replace sleep() with a >> deterministic barrier using an extra queue in the same multi-queue >> group. After assigning priorities, submit a spinner to the extra >> queue, end it immediately and wait for its user fence to signal. >> This guarantees a full scheduler round-trip confirming the priority >> updates have taken effect before releasing the other queues. >> >> Increase exec_queues[] and spin[] array sizes by 1 to accommodate >> the extra barrier queue slot at index num_queues. >> >> Assisted-by: GitHub Copilot:claude-sonnet-4.6 >> Signed-off-by: Niranjana Vishwanathapura >> <niranjana.vishwanathapura@intel.com> >> --- >> tests/intel/xe_exec_multi_queue.c | 37 +++++++++++++++++++++++++---- >> -- >> 1 file changed, 30 insertions(+), 7 deletions(-) >> >> diff --git a/tests/intel/xe_exec_multi_queue.c >> b/tests/intel/xe_exec_multi_queue.c >> index 382705d065..8c6fbb2d18 100644 >> --- a/tests/intel/xe_exec_multi_queue.c >> +++ b/tests/intel/xe_exec_multi_queue.c >> @@ -381,8 +381,8 @@ __test_priority(int fd, struct >> drm_xe_engine_class_instance *eci, >> .syncs = to_user_pointer(&sync), >> }; >> uint64_t vm_sync = 0, addr = BASE_ADDRESS; >> - uint32_t exec_queues[XE_EXEC_QUEUE_PRIORITY_N]; >> - struct xe_spin *spin[XE_EXEC_QUEUE_PRIORITY_N]; >> + uint32_t exec_queues[XE_EXEC_QUEUE_PRIORITY_N + 1]; >> + struct xe_spin *spin[XE_EXEC_QUEUE_PRIORITY_N + 1]; > Since we're only really making use of this in the dynamic case, should > we have "+ !!DYNAMIC" instead of "+ 1" here? I.e. we only care about > the extra barrier one in the dynamic case? > > Thanks, > Stuart flags is a runtime parameter, so + !!(flags & DYN_PRIORITY) would make the array size runtime-determined — effectively a VLA. Even in userspace, VLAs are generally discouraged due to unpredictable stack usage. The cost of one extra slot is negligible, so always using + 1 is simpler and avoids introducing a VLA. 
Thanks, Xin >> uint32_t vm, num_queues, num_queue_priorities, bo = 0; >> uint32_t start_order[XE_EXEC_QUEUE_PRIORITY_N] = { 0 }; >> int64_t fence_timeout = NSEC_PER_SEC; >> @@ -403,7 +403,7 @@ __test_priority(int fd, struct >> drm_xe_engine_class_instance *eci, >> .value = DRM_XE_MULTI_GROUP_CREATE, >> }; >> uint64_t ext = to_user_pointer(&multi_queue); >> - int i, j, sleep_duration = 1; >> + int i, j; >> void *bo_map; >> >> num_queue_priorities = XE_EXEC_QUEUE_NUM_PRIORITIES; >> @@ -415,12 +415,12 @@ __test_priority(int fd, struct >> drm_xe_engine_class_instance *eci, >> eci[0].engine_class, eci[0].engine_instance); >> >> vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_LR_MODE, 0); >> - bo_size = xe_bb_size(fd, sizeof(*spin[0]) * num_queues); >> + bo_size = xe_bb_size(fd, sizeof(*spin[0]) * (num_queues + >> 1)); >> >> bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, >> eci[0].gt_id), >> DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM); >> bo_map = xe_bo_map(fd, bo, bo_size); >> - for (i = 0; i < num_queues; i++) >> + for (i = 0; i < num_queues + 1; i++) >> spin[i] = bo_map + i * sizeof(*spin[0]); >> >> /* Use the default priority for Q0 because we are explicitly >> waiting for it below */ >> @@ -430,6 +430,11 @@ __test_priority(int fd, struct >> drm_xe_engine_class_instance *eci, >> if (flags & DYN_PRIORITY) { >> for (i = 1; i < num_queues; i++) >> exec_queues[i] = xe_exec_queue_create(fd, vm, >> eci, ext); >> + /* >> + * Create an extra queue in the same multi-queue >> group, used as >> + * a barrier to confirm priority updates have taken >> effect. >> + */ >> + exec_queues[num_queues] = xe_exec_queue_create(fd, >> vm, eci, ext); >> } else { >> struct drm_xe_ext_set_property mq_priority = { >> .base.name = >> DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY, >> @@ -474,14 +479,28 @@ __test_priority(int fd, struct >> drm_xe_engine_class_instance *eci, >> xe_spin_wait_started(spin[i]); >> >> if (flags & DYN_PRIORITY) { >> + uint64_t barrier_spin_addr = addr + num_queues * >> sizeof(struct xe_spin); >> + >> /* Assign increasing order of priority for secondary >> queues */ >> for (i = 1; i < num_queues; i++) >> xe_exec_queue_set_property(fd, >> exec_queues[i], >> >> DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_QUEUE_PRIORITY, >> i % >> num_queue_priorities); >> >> - /* Wait for priorities to take effect */ >> - sleep(sleep_duration); >> + /* >> + * Submit a barrier job on the extra queue to ensure >> priority >> + * updates have taken effect before releasing the >> other queues. >> + */ >> + xe_spin_init_opts(spin[num_queues], .addr = >> barrier_spin_addr, >> + .preempt = true); >> + sync.addr = barrier_spin_addr + >> + ((char *)&spin[num_queues]->exec_sync - (char >> *)spin[num_queues]); >> + exec.exec_queue_id = exec_queues[num_queues]; >> + exec.address = barrier_spin_addr; >> + xe_exec(fd, &exec); >> + xe_spin_end(spin[num_queues]); >> + xe_wait_ufence(fd, &spin[num_queues]->exec_sync, >> USER_FENCE_VALUE, >> + exec_queues[num_queues], >> fence_timeout); >> } >> >> /* >> @@ -566,6 +585,10 @@ __test_priority(int fd, struct >> drm_xe_engine_class_instance *eci, >> for (i = 0; i < num_queues; i++) >> xe_exec_queue_destroy(fd, exec_queues[i]); >> >> + /* Destroy the extra queue */ >> + if (flags & DYN_PRIORITY) >> + xe_exec_queue_destroy(fd, exec_queues[num_queues]); >> + >> munmap(bo_map, bo_size); >> gem_close(fd, bo); >> ^ permalink raw reply [flat|nested] 15+ messages in thread
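To make the VLA point concrete, a tiny hypothetical illustration (not code
from the patch):

        void example(unsigned int flags)
        {
                /* size is a compile-time constant: an ordinary array */
                uint32_t a[XE_EXEC_QUEUE_PRIORITY_N + 1];

                /*
                 * 'flags' is only known at runtime, so the size is not a
                 * constant expression and this becomes a C99 VLA, with
                 * stack usage decided at runtime.
                 */
                uint32_t b[XE_EXEC_QUEUE_PRIORITY_N + !!(flags & DYN_PRIORITY)];

                (void)a;
                (void)b;
        }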
* Re: [PATCH 2/2] tests/intel/xe_exec_multi_queue: replace sleep with barrier queue 2026-04-29 20:52 ` Wang, X @ 2026-04-30 4:06 ` Niranjana Vishwanathapura 2026-04-30 21:53 ` Summers, Stuart 0 siblings, 1 reply; 15+ messages in thread From: Niranjana Vishwanathapura @ 2026-04-30 4:06 UTC (permalink / raw) To: Wang, X; +Cc: Summers, Stuart, igt-dev@lists.freedesktop.org On Wed, Apr 29, 2026 at 01:52:05PM -0700, Wang, X wrote: > > >On 4/29/2026 11:27, Summers, Stuart wrote: >>On Tue, 2026-04-28 at 19:08 -0700, Niranjana Vishwanathapura wrote: >>>In __test_priority() DYN_PRIORITY case, replace sleep() with a >>>deterministic barrier using an extra queue in the same multi-queue >>>group. After assigning priorities, submit a spinner to the extra >>>queue, end it immediately and wait for its user fence to signal. >>>This guarantees a full scheduler round-trip confirming the priority >>>updates have taken effect before releasing the other queues. >>> >>>Increase exec_queues[] and spin[] array sizes by 1 to accommodate >>>the extra barrier queue slot at index num_queues. >>> >>>Assisted-by: GitHub Copilot:claude-sonnet-4.6 >>>Signed-off-by: Niranjana Vishwanathapura >>><niranjana.vishwanathapura@intel.com> >>>--- >>> tests/intel/xe_exec_multi_queue.c | 37 +++++++++++++++++++++++++---- >>>-- >>> 1 file changed, 30 insertions(+), 7 deletions(-) >>> >>>diff --git a/tests/intel/xe_exec_multi_queue.c >>>b/tests/intel/xe_exec_multi_queue.c >>>index 382705d065..8c6fbb2d18 100644 >>>--- a/tests/intel/xe_exec_multi_queue.c >>>+++ b/tests/intel/xe_exec_multi_queue.c >>>@@ -381,8 +381,8 @@ __test_priority(int fd, struct >>>drm_xe_engine_class_instance *eci, >>> .syncs = to_user_pointer(&sync), >>> }; >>> uint64_t vm_sync = 0, addr = BASE_ADDRESS; >>>- uint32_t exec_queues[XE_EXEC_QUEUE_PRIORITY_N]; >>>- struct xe_spin *spin[XE_EXEC_QUEUE_PRIORITY_N]; >>>+ uint32_t exec_queues[XE_EXEC_QUEUE_PRIORITY_N + 1]; >>>+ struct xe_spin *spin[XE_EXEC_QUEUE_PRIORITY_N + 1]; >>Since we're only really making use of this in the dynamic case, should >>we have "+ !!DYNAMIC" instead of "+ 1" here? I.e. we only care about >>the extra barrier one in the dynamic case? >> >>Thanks, >>Stuart >flags is a runtime parameter, so + !!(flags & DYN_PRIORITY) >would make the array size runtime-determined — effectively a VLA. >Even in userspace, VLAs are generally discouraged due to >unpredictable stack usage. The cost of one extra slot is negligible, >so always using + 1 is simpler and avoids introducing a VLA. > Yes, we allocate enough space required to handle any scenario. That is much better than making the code complex to save an array element. 
Niranjana >Thanks, >Xin >>> uint32_t vm, num_queues, num_queue_priorities, bo = 0; >>> uint32_t start_order[XE_EXEC_QUEUE_PRIORITY_N] = { 0 }; >>> int64_t fence_timeout = NSEC_PER_SEC; >>>@@ -403,7 +403,7 @@ __test_priority(int fd, struct >>>drm_xe_engine_class_instance *eci, >>> .value = DRM_XE_MULTI_GROUP_CREATE, >>> }; >>> uint64_t ext = to_user_pointer(&multi_queue); >>>- int i, j, sleep_duration = 1; >>>+ int i, j; >>> void *bo_map; >>> num_queue_priorities = XE_EXEC_QUEUE_NUM_PRIORITIES; >>>@@ -415,12 +415,12 @@ __test_priority(int fd, struct >>>drm_xe_engine_class_instance *eci, >>> eci[0].engine_class, eci[0].engine_instance); >>> vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_LR_MODE, 0); >>>- bo_size = xe_bb_size(fd, sizeof(*spin[0]) * num_queues); >>>+ bo_size = xe_bb_size(fd, sizeof(*spin[0]) * (num_queues + >>>1)); >>> bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, >>>eci[0].gt_id), >>> DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM); >>> bo_map = xe_bo_map(fd, bo, bo_size); >>>- for (i = 0; i < num_queues; i++) >>>+ for (i = 0; i < num_queues + 1; i++) >>> spin[i] = bo_map + i * sizeof(*spin[0]); >>> /* Use the default priority for Q0 because we are explicitly >>>waiting for it below */ >>>@@ -430,6 +430,11 @@ __test_priority(int fd, struct >>>drm_xe_engine_class_instance *eci, >>> if (flags & DYN_PRIORITY) { >>> for (i = 1; i < num_queues; i++) >>> exec_queues[i] = xe_exec_queue_create(fd, vm, >>>eci, ext); >>>+ /* >>>+ * Create an extra queue in the same multi-queue >>>group, used as >>>+ * a barrier to confirm priority updates have taken >>>effect. >>>+ */ >>>+ exec_queues[num_queues] = xe_exec_queue_create(fd, >>>vm, eci, ext); >>> } else { >>> struct drm_xe_ext_set_property mq_priority = { >>> .base.name = >>>DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY, >>>@@ -474,14 +479,28 @@ __test_priority(int fd, struct >>>drm_xe_engine_class_instance *eci, >>> xe_spin_wait_started(spin[i]); >>> if (flags & DYN_PRIORITY) { >>>+ uint64_t barrier_spin_addr = addr + num_queues * >>>sizeof(struct xe_spin); >>>+ >>> /* Assign increasing order of priority for secondary >>>queues */ >>> for (i = 1; i < num_queues; i++) >>> xe_exec_queue_set_property(fd, >>>exec_queues[i], >>>DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_QUEUE_PRIORITY, >>> i % >>>num_queue_priorities); >>>- /* Wait for priorities to take effect */ >>>- sleep(sleep_duration); >>>+ /* >>>+ * Submit a barrier job on the extra queue to ensure >>>priority >>>+ * updates have taken effect before releasing the >>>other queues. >>>+ */ >>>+ xe_spin_init_opts(spin[num_queues], .addr = >>>barrier_spin_addr, >>>+ .preempt = true); >>>+ sync.addr = barrier_spin_addr + >>>+ ((char *)&spin[num_queues]->exec_sync - (char >>>*)spin[num_queues]); >>>+ exec.exec_queue_id = exec_queues[num_queues]; >>>+ exec.address = barrier_spin_addr; >>>+ xe_exec(fd, &exec); >>>+ xe_spin_end(spin[num_queues]); >>>+ xe_wait_ufence(fd, &spin[num_queues]->exec_sync, >>>USER_FENCE_VALUE, >>>+ exec_queues[num_queues], >>>fence_timeout); >>> } >>> /* >>>@@ -566,6 +585,10 @@ __test_priority(int fd, struct >>>drm_xe_engine_class_instance *eci, >>> for (i = 0; i < num_queues; i++) >>> xe_exec_queue_destroy(fd, exec_queues[i]); >>>+ /* Destroy the extra queue */ >>>+ if (flags & DYN_PRIORITY) >>>+ xe_exec_queue_destroy(fd, exec_queues[num_queues]); >>>+ >>> munmap(bo_map, bo_size); >>> gem_close(fd, bo); > ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH 2/2] tests/intel/xe_exec_multi_queue: replace sleep with barrier queue 2026-04-30 4:06 ` Niranjana Vishwanathapura @ 2026-04-30 21:53 ` Summers, Stuart 0 siblings, 0 replies; 15+ messages in thread From: Summers, Stuart @ 2026-04-30 21:53 UTC (permalink / raw) To: Wang, X, Vishwanathapura, Niranjana; +Cc: igt-dev@lists.freedesktop.org On Wed, 2026-04-29 at 21:06 -0700, Niranjana Vishwanathapura wrote: > On Wed, Apr 29, 2026 at 01:52:05PM -0700, Wang, X wrote: > > > > > > On 4/29/2026 11:27, Summers, Stuart wrote: > > > On Tue, 2026-04-28 at 19:08 -0700, Niranjana Vishwanathapura > > > wrote: > > > > In __test_priority() DYN_PRIORITY case, replace sleep() with a > > > > deterministic barrier using an extra queue in the same multi- > > > > queue > > > > group. After assigning priorities, submit a spinner to the > > > > extra > > > > queue, end it immediately and wait for its user fence to > > > > signal. > > > > This guarantees a full scheduler round-trip confirming the > > > > priority > > > > updates have taken effect before releasing the other queues. > > > > > > > > Increase exec_queues[] and spin[] array sizes by 1 to > > > > accommodate > > > > the extra barrier queue slot at index num_queues. > > > > > > > > Assisted-by: GitHub Copilot:claude-sonnet-4.6 > > > > Signed-off-by: Niranjana Vishwanathapura > > > > <niranjana.vishwanathapura@intel.com> > > > > --- > > > > tests/intel/xe_exec_multi_queue.c | 37 > > > > +++++++++++++++++++++++++---- > > > > -- > > > > 1 file changed, 30 insertions(+), 7 deletions(-) > > > > > > > > diff --git a/tests/intel/xe_exec_multi_queue.c > > > > b/tests/intel/xe_exec_multi_queue.c > > > > index 382705d065..8c6fbb2d18 100644 > > > > --- a/tests/intel/xe_exec_multi_queue.c > > > > +++ b/tests/intel/xe_exec_multi_queue.c > > > > @@ -381,8 +381,8 @@ __test_priority(int fd, struct > > > > drm_xe_engine_class_instance *eci, > > > > .syncs = to_user_pointer(&sync), > > > > }; > > > > uint64_t vm_sync = 0, addr = BASE_ADDRESS; > > > > - uint32_t exec_queues[XE_EXEC_QUEUE_PRIORITY_N]; > > > > - struct xe_spin *spin[XE_EXEC_QUEUE_PRIORITY_N]; > > > > + uint32_t exec_queues[XE_EXEC_QUEUE_PRIORITY_N + 1]; > > > > + struct xe_spin *spin[XE_EXEC_QUEUE_PRIORITY_N + 1]; > > > Since we're only really making use of this in the dynamic case, > > > should > > > we have "+ !!DYNAMIC" instead of "+ 1" here? I.e. we only care > > > about > > > the extra barrier one in the dynamic case? > > > > > > Thanks, > > > Stuart > > flags is a runtime parameter, so + !!(flags & DYN_PRIORITY) > > would make the array size runtime-determined — effectively a VLA. > > Even in userspace, VLAs are generally discouraged due to > > unpredictable stack usage. The cost of one extra slot is > > negligible, > > so always using + 1 is simpler and avoids introducing a VLA. > > > > Yes, we allocate enough space required to handle any scenario. > That is much better than making the code complex to save an array > element. Yeah ok makes sense to me and I agree with the explanations. 
-Stuart > > Niranjana > > > Thanks, > > Xin > > > > uint32_t vm, num_queues, num_queue_priorities, bo = 0; > > > > uint32_t start_order[XE_EXEC_QUEUE_PRIORITY_N] = { 0 }; > > > > int64_t fence_timeout = NSEC_PER_SEC; > > > > @@ -403,7 +403,7 @@ __test_priority(int fd, struct > > > > drm_xe_engine_class_instance *eci, > > > > .value = DRM_XE_MULTI_GROUP_CREATE, > > > > }; > > > > uint64_t ext = to_user_pointer(&multi_queue); > > > > - int i, j, sleep_duration = 1; > > > > + int i, j; > > > > void *bo_map; > > > > num_queue_priorities = XE_EXEC_QUEUE_NUM_PRIORITIES; > > > > @@ -415,12 +415,12 @@ __test_priority(int fd, struct > > > > drm_xe_engine_class_instance *eci, > > > > eci[0].engine_class, eci[0].engine_instance); > > > > vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_LR_MODE, > > > > 0); > > > > - bo_size = xe_bb_size(fd, sizeof(*spin[0]) * > > > > num_queues); > > > > + bo_size = xe_bb_size(fd, sizeof(*spin[0]) * (num_queues > > > > + > > > > 1)); > > > > bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, > > > > eci[0].gt_id), > > > > > > > > DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM); > > > > bo_map = xe_bo_map(fd, bo, bo_size); > > > > - for (i = 0; i < num_queues; i++) > > > > + for (i = 0; i < num_queues + 1; i++) > > > > spin[i] = bo_map + i * sizeof(*spin[0]); > > > > /* Use the default priority for Q0 because we are > > > > explicitly > > > > waiting for it below */ > > > > @@ -430,6 +430,11 @@ __test_priority(int fd, struct > > > > drm_xe_engine_class_instance *eci, > > > > if (flags & DYN_PRIORITY) { > > > > for (i = 1; i < num_queues; i++) > > > > exec_queues[i] = > > > > xe_exec_queue_create(fd, vm, > > > > eci, ext); > > > > + /* > > > > + * Create an extra queue in the same multi- > > > > queue > > > > group, used as > > > > + * a barrier to confirm priority updates have > > > > taken > > > > effect. > > > > + */ > > > > + exec_queues[num_queues] = > > > > xe_exec_queue_create(fd, > > > > vm, eci, ext); > > > > } else { > > > > struct drm_xe_ext_set_property mq_priority = { > > > > .base.name = > > > > DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY, > > > > @@ -474,14 +479,28 @@ __test_priority(int fd, struct > > > > drm_xe_engine_class_instance *eci, > > > > xe_spin_wait_started(spin[i]); > > > > if (flags & DYN_PRIORITY) { > > > > + uint64_t barrier_spin_addr = addr + num_queues > > > > * > > > > sizeof(struct xe_spin); > > > > + > > > > /* Assign increasing order of priority for > > > > secondary > > > > queues */ > > > > for (i = 1; i < num_queues; i++) > > > > xe_exec_queue_set_property(fd, > > > > exec_queues[i], > > > > DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_QUEUE_PRIORITY, > > > > i % > > > > num_queue_priorities); > > > > - /* Wait for priorities to take effect */ > > > > - sleep(sleep_duration); > > > > + /* > > > > + * Submit a barrier job on the extra queue to > > > > ensure > > > > priority > > > > + * updates have taken effect before releasing > > > > the > > > > other queues. 
> > > > + */ > > > > + xe_spin_init_opts(spin[num_queues], .addr = > > > > barrier_spin_addr, > > > > + .preempt = true); > > > > + sync.addr = barrier_spin_addr + > > > > + ((char *)&spin[num_queues]->exec_sync - > > > > (char > > > > *)spin[num_queues]); > > > > + exec.exec_queue_id = exec_queues[num_queues]; > > > > + exec.address = barrier_spin_addr; > > > > + xe_exec(fd, &exec); > > > > + xe_spin_end(spin[num_queues]); > > > > + xe_wait_ufence(fd, &spin[num_queues]- > > > > >exec_sync, > > > > USER_FENCE_VALUE, > > > > + exec_queues[num_queues], > > > > fence_timeout); > > > > } > > > > /* > > > > @@ -566,6 +585,10 @@ __test_priority(int fd, struct > > > > drm_xe_engine_class_instance *eci, > > > > for (i = 0; i < num_queues; i++) > > > > xe_exec_queue_destroy(fd, exec_queues[i]); > > > > + /* Destroy the extra queue */ > > > > + if (flags & DYN_PRIORITY) > > > > + xe_exec_queue_destroy(fd, > > > > exec_queues[num_queues]); > > > > + > > > > munmap(bo_map, bo_size); > > > > gem_close(fd, bo); > > ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH 2/2] tests/intel/xe_exec_multi_queue: replace sleep with barrier queue 2026-04-29 2:08 ` [PATCH 2/2] tests/intel/xe_exec_multi_queue: replace sleep with barrier queue Niranjana Vishwanathapura 2026-04-29 18:27 ` Summers, Stuart @ 2026-04-29 19:28 ` Summers, Stuart 2026-04-30 4:09 ` Niranjana Vishwanathapura 1 sibling, 1 reply; 15+ messages in thread From: Summers, Stuart @ 2026-04-29 19:28 UTC (permalink / raw) To: igt-dev@lists.freedesktop.org, Vishwanathapura, Niranjana On Tue, 2026-04-28 at 19:08 -0700, Niranjana Vishwanathapura wrote: > In __test_priority() DYN_PRIORITY case, replace sleep() with a > deterministic barrier using an extra queue in the same multi-queue > group. After assigning priorities, submit a spinner to the extra > queue, end it immediately and wait for its user fence to signal. > This guarantees a full scheduler round-trip confirming the priority > updates have taken effect before releasing the other queues. > > Increase exec_queues[] and spin[] array sizes by 1 to accommodate > the extra barrier queue slot at index num_queues. > > Assisted-by: GitHub Copilot:claude-sonnet-4.6 > Signed-off-by: Niranjana Vishwanathapura > <niranjana.vishwanathapura@intel.com> > --- > tests/intel/xe_exec_multi_queue.c | 37 +++++++++++++++++++++++++---- > -- > 1 file changed, 30 insertions(+), 7 deletions(-) > > diff --git a/tests/intel/xe_exec_multi_queue.c > b/tests/intel/xe_exec_multi_queue.c > index 382705d065..8c6fbb2d18 100644 > --- a/tests/intel/xe_exec_multi_queue.c > +++ b/tests/intel/xe_exec_multi_queue.c > @@ -381,8 +381,8 @@ __test_priority(int fd, struct > drm_xe_engine_class_instance *eci, > .syncs = to_user_pointer(&sync), > }; > uint64_t vm_sync = 0, addr = BASE_ADDRESS; > - uint32_t exec_queues[XE_EXEC_QUEUE_PRIORITY_N]; > - struct xe_spin *spin[XE_EXEC_QUEUE_PRIORITY_N]; > + uint32_t exec_queues[XE_EXEC_QUEUE_PRIORITY_N + 1]; > + struct xe_spin *spin[XE_EXEC_QUEUE_PRIORITY_N + 1]; > uint32_t vm, num_queues, num_queue_priorities, bo = 0; > uint32_t start_order[XE_EXEC_QUEUE_PRIORITY_N] = { 0 }; > int64_t fence_timeout = NSEC_PER_SEC; > @@ -403,7 +403,7 @@ __test_priority(int fd, struct > drm_xe_engine_class_instance *eci, > .value = DRM_XE_MULTI_GROUP_CREATE, > }; > uint64_t ext = to_user_pointer(&multi_queue); > - int i, j, sleep_duration = 1; > + int i, j; > void *bo_map; > > num_queue_priorities = XE_EXEC_QUEUE_NUM_PRIORITIES; > @@ -415,12 +415,12 @@ __test_priority(int fd, struct > drm_xe_engine_class_instance *eci, > eci[0].engine_class, eci[0].engine_instance); > > vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_LR_MODE, 0); > - bo_size = xe_bb_size(fd, sizeof(*spin[0]) * num_queues); > + bo_size = xe_bb_size(fd, sizeof(*spin[0]) * (num_queues + > 1)); > > bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, > eci[0].gt_id), > DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM); > bo_map = xe_bo_map(fd, bo, bo_size); > - for (i = 0; i < num_queues; i++) > + for (i = 0; i < num_queues + 1; i++) > spin[i] = bo_map + i * sizeof(*spin[0]); > > /* Use the default priority for Q0 because we are explicitly > waiting for it below */ > @@ -430,6 +430,11 @@ __test_priority(int fd, struct > drm_xe_engine_class_instance *eci, > if (flags & DYN_PRIORITY) { > for (i = 1; i < num_queues; i++) > exec_queues[i] = xe_exec_queue_create(fd, vm, > eci, ext); > + /* > + * Create an extra queue in the same multi-queue > group, used as > + * a barrier to confirm priority updates have taken > effect. 
> + */ > + exec_queues[num_queues] = xe_exec_queue_create(fd, > vm, eci, ext); Sorry for the multiple responses here... I realize you're doing this separate line explicitly so it's clear what and why, etc, but we're really just duplicating code here when we could have a num_queues + 1 in the for loop here. The comment here is the interesting part that will let us know in the future why we have that extra one (in addition to the commit message of course). Not a blocker, but I'd prefer to not change the inner portion of the for loop and just add the + 1 plus the comment above the loop... > } else { > struct drm_xe_ext_set_property mq_priority = { > .base.name = > DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY, > @@ -474,14 +479,28 @@ __test_priority(int fd, struct > drm_xe_engine_class_instance *eci, > xe_spin_wait_started(spin[i]); > > if (flags & DYN_PRIORITY) { > + uint64_t barrier_spin_addr = addr + num_queues * > sizeof(struct xe_spin); > + > /* Assign increasing order of priority for secondary > queues */ > for (i = 1; i < num_queues; i++) > xe_exec_queue_set_property(fd, > exec_queues[i], > > DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_QUEUE_PRIORITY, > i % > num_queue_priorities); > > - /* Wait for priorities to take effect */ > - sleep(sleep_duration); > + /* > + * Submit a barrier job on the extra queue to ensure > priority > + * updates have taken effect before releasing the > other queues. > + */ > + xe_spin_init_opts(spin[num_queues], .addr = > barrier_spin_addr, > + .preempt = true); Why are you setting preempt mode here? -Stuart > + sync.addr = barrier_spin_addr + > + ((char *)&spin[num_queues]->exec_sync - (char > *)spin[num_queues]); > + exec.exec_queue_id = exec_queues[num_queues]; > + exec.address = barrier_spin_addr; > + xe_exec(fd, &exec); > + xe_spin_end(spin[num_queues]); > + xe_wait_ufence(fd, &spin[num_queues]->exec_sync, > USER_FENCE_VALUE, > + exec_queues[num_queues], > fence_timeout); > } > > /* > @@ -566,6 +585,10 @@ __test_priority(int fd, struct > drm_xe_engine_class_instance *eci, > for (i = 0; i < num_queues; i++) > xe_exec_queue_destroy(fd, exec_queues[i]); > > + /* Destroy the extra queue */ > + if (flags & DYN_PRIORITY) > + xe_exec_queue_destroy(fd, exec_queues[num_queues]); > + > munmap(bo_map, bo_size); > gem_close(fd, bo); > ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH 2/2] tests/intel/xe_exec_multi_queue: replace sleep with barrier queue 2026-04-29 19:28 ` Summers, Stuart @ 2026-04-30 4:09 ` Niranjana Vishwanathapura 0 siblings, 0 replies; 15+ messages in thread From: Niranjana Vishwanathapura @ 2026-04-30 4:09 UTC (permalink / raw) To: Summers, Stuart; +Cc: igt-dev@lists.freedesktop.org On Wed, Apr 29, 2026 at 12:28:09PM -0700, Summers, Stuart wrote: >On Tue, 2026-04-28 at 19:08 -0700, Niranjana Vishwanathapura wrote: >> In __test_priority() DYN_PRIORITY case, replace sleep() with a >> deterministic barrier using an extra queue in the same multi-queue >> group. After assigning priorities, submit a spinner to the extra >> queue, end it immediately and wait for its user fence to signal. >> This guarantees a full scheduler round-trip confirming the priority >> updates have taken effect before releasing the other queues. >> >> Increase exec_queues[] and spin[] array sizes by 1 to accommodate >> the extra barrier queue slot at index num_queues. >> >> Assisted-by: GitHub Copilot:claude-sonnet-4.6 >> Signed-off-by: Niranjana Vishwanathapura >> <niranjana.vishwanathapura@intel.com> >> --- >> tests/intel/xe_exec_multi_queue.c | 37 +++++++++++++++++++++++++---- >> -- >> 1 file changed, 30 insertions(+), 7 deletions(-) >> >> diff --git a/tests/intel/xe_exec_multi_queue.c >> b/tests/intel/xe_exec_multi_queue.c >> index 382705d065..8c6fbb2d18 100644 >> --- a/tests/intel/xe_exec_multi_queue.c >> +++ b/tests/intel/xe_exec_multi_queue.c >> @@ -381,8 +381,8 @@ __test_priority(int fd, struct >> drm_xe_engine_class_instance *eci, >> .syncs = to_user_pointer(&sync), >> }; >> uint64_t vm_sync = 0, addr = BASE_ADDRESS; >> - uint32_t exec_queues[XE_EXEC_QUEUE_PRIORITY_N]; >> - struct xe_spin *spin[XE_EXEC_QUEUE_PRIORITY_N]; >> + uint32_t exec_queues[XE_EXEC_QUEUE_PRIORITY_N + 1]; >> + struct xe_spin *spin[XE_EXEC_QUEUE_PRIORITY_N + 1]; >> uint32_t vm, num_queues, num_queue_priorities, bo = 0; >> uint32_t start_order[XE_EXEC_QUEUE_PRIORITY_N] = { 0 }; >> int64_t fence_timeout = NSEC_PER_SEC; >> @@ -403,7 +403,7 @@ __test_priority(int fd, struct >> drm_xe_engine_class_instance *eci, >> .value = DRM_XE_MULTI_GROUP_CREATE, >> }; >> uint64_t ext = to_user_pointer(&multi_queue); >> - int i, j, sleep_duration = 1; >> + int i, j; >> void *bo_map; >> >> num_queue_priorities = XE_EXEC_QUEUE_NUM_PRIORITIES; >> @@ -415,12 +415,12 @@ __test_priority(int fd, struct >> drm_xe_engine_class_instance *eci, >> eci[0].engine_class, eci[0].engine_instance); >> >> vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_LR_MODE, 0); >> - bo_size = xe_bb_size(fd, sizeof(*spin[0]) * num_queues); >> + bo_size = xe_bb_size(fd, sizeof(*spin[0]) * (num_queues + >> 1)); >> >> bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, >> eci[0].gt_id), >> DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM); >> bo_map = xe_bo_map(fd, bo, bo_size); >> - for (i = 0; i < num_queues; i++) >> + for (i = 0; i < num_queues + 1; i++) >> spin[i] = bo_map + i * sizeof(*spin[0]); >> >> /* Use the default priority for Q0 because we are explicitly >> waiting for it below */ >> @@ -430,6 +430,11 @@ __test_priority(int fd, struct >> drm_xe_engine_class_instance *eci, >> if (flags & DYN_PRIORITY) { >> for (i = 1; i < num_queues; i++) >> exec_queues[i] = xe_exec_queue_create(fd, vm, >> eci, ext); >> + /* >> + * Create an extra queue in the same multi-queue >> group, used as >> + * a barrier to confirm priority updates have taken >> effect. 
>> + */ >> + exec_queues[num_queues] = xe_exec_queue_create(fd, >> vm, eci, ext); > >Sorry for the multiple responses here... > >I realize you're doing this separate line explicitly so it's clear what >and why, etc, but we're really just duplicating code here when we could >have a num_queues + 1 in the for loop here. The comment here is the >interesting part that will let us know in the future why we have that >extra one (in addition to the commit message of course). > >Not a blocker, but I'd prefer to not change the inner portion of the >for loop and just add the + 1 plus the comment above the loop... Claude generated it this way and I kind of like it, as it has the required comment above and a matching xe_exec_queue_destroy() below. > >> } else { >> struct drm_xe_ext_set_property mq_priority = { >> .base.name = >> DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY, >> @@ -474,14 +479,28 @@ __test_priority(int fd, struct >> drm_xe_engine_class_instance *eci, >> xe_spin_wait_started(spin[i]); >> >> if (flags & DYN_PRIORITY) { >> + uint64_t barrier_spin_addr = addr + num_queues * >> sizeof(struct xe_spin); >> + >> /* Assign increasing order of priority for secondary >> queues */ >> for (i = 1; i < num_queues; i++) >> xe_exec_queue_set_property(fd, >> exec_queues[i], >> >> DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_QUEUE_PRIORITY, >> i % >> num_queue_priorities); >> >> - /* Wait for priorities to take effect */ >> - sleep(sleep_duration); >> + /* >> + * Submit a barrier job on the extra queue to ensure >> priority >> + * updates have taken effect before releasing the >> other queues. >> + */ >> + xe_spin_init_opts(spin[num_queues], .addr = >> barrier_spin_addr, >> + .preempt = true); > >Why are you setting preempt mode here? > Claude added it and I thought I removed it, but obviously I did not. Let me remove it. Niranjana >-Stuart > >> + sync.addr = barrier_spin_addr + >> + ((char *)&spin[num_queues]->exec_sync - (char >> *)spin[num_queues]); >> + exec.exec_queue_id = exec_queues[num_queues]; >> + exec.address = barrier_spin_addr; >> + xe_exec(fd, &exec); >> + xe_spin_end(spin[num_queues]); >> + xe_wait_ufence(fd, &spin[num_queues]->exec_sync, >> USER_FENCE_VALUE, >> + exec_queues[num_queues], >> fence_timeout); >> } >> >> /* >> @@ -566,6 +585,10 @@ __test_priority(int fd, struct >> drm_xe_engine_class_instance *eci, >> for (i = 0; i < num_queues; i++) >> xe_exec_queue_destroy(fd, exec_queues[i]); >> >> + /* Destroy the extra queue */ >> + if (flags & DYN_PRIORITY) >> + xe_exec_queue_destroy(fd, exec_queues[num_queues]); >> + >> munmap(bo_map, bo_size); >> gem_close(fd, bo); >> > ^ permalink raw reply [flat|nested] 15+ messages in thread
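With the .preempt flag dropped, as agreed above, the barrier hunk would presumably reduce to something like the sketch below. It reuses only the names already quoted in this thread (spin, sync, exec, barrier_spin_addr, USER_FENCE_VALUE, fence_timeout from __test_priority()) and is illustrative rather than the posted code:

	/*
	 * Submit a barrier job on the extra queue, end it immediately and
	 * wait for its user fence. The completed round trip through the
	 * scheduler confirms the priority updates have taken effect before
	 * the other queues are released.
	 */
	xe_spin_init_opts(spin[num_queues], .addr = barrier_spin_addr);
	sync.addr = barrier_spin_addr +
		    ((char *)&spin[num_queues]->exec_sync - (char *)spin[num_queues]);
	exec.exec_queue_id = exec_queues[num_queues];
	exec.address = barrier_spin_addr;
	xe_exec(fd, &exec);
	xe_spin_end(spin[num_queues]);
	xe_wait_ufence(fd, &spin[num_queues]->exec_sync, USER_FENCE_VALUE,
		       exec_queues[num_queues], fence_timeout);

Ending the spinner before waiting keeps the barrier cheap: the user fence can only signal after the scheduler has processed the barrier queue, which is what orders it after the preceding priority updates.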
* ✓ Xe.CI.BAT: success for tests/intel/xe_exec_multi_queue: Replace sleep with deterministic wait 2026-04-29 2:08 [PATCH 0/2] tests/intel/xe_exec_multi_queue: Replace sleep with deterministic wait Niranjana Vishwanathapura 2026-04-29 2:08 ` [PATCH 1/2] tests/intel/xe_exec_multi_queue: use timestamp to check job start Niranjana Vishwanathapura 2026-04-29 2:08 ` [PATCH 2/2] tests/intel/xe_exec_multi_queue: replace sleep with barrier queue Niranjana Vishwanathapura @ 2026-04-29 3:16 ` Patchwork 2026-04-29 3:21 ` ✗ i915.CI.BAT: failure " Patchwork 2026-04-29 12:54 ` ✗ Xe.CI.FULL: " Patchwork 4 siblings, 0 replies; 15+ messages in thread From: Patchwork @ 2026-04-29 3:16 UTC (permalink / raw) To: Niranjana Vishwanathapura; +Cc: igt-dev [-- Attachment #1: Type: text/plain, Size: 1186 bytes --] == Series Details == Series: tests/intel/xe_exec_multi_queue: Replace sleep with deterministic wait URL : https://patchwork.freedesktop.org/series/165673/ State : success == Summary == CI Bug Log - changes from XEIGT_8877_BAT -> XEIGTPW_15078_BAT ==================================================== Summary ------- **SUCCESS** No regressions found. Participating hosts (13 -> 13) ------------------------------ No changes in participating hosts Changes ------- No changes found Build changes ------------- * IGT: IGT_8877 -> IGTPW_15078 * Linux: xe-4947-41542c1ef015c1907cd9a9785c8c2453f4fa2877 -> xe-4948-a53aafc879e9c52b2776089762591d2766a27f0a IGTPW_15078: dc51a2f859cf0dae0498243600cc4bc75c957376 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git IGT_8877: 1749e432cd72ef2c99f1b4e9d6f24411f1161901 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git xe-4947-41542c1ef015c1907cd9a9785c8c2453f4fa2877: 41542c1ef015c1907cd9a9785c8c2453f4fa2877 xe-4948-a53aafc879e9c52b2776089762591d2766a27f0a: a53aafc879e9c52b2776089762591d2766a27f0a == Logs == For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/index.html [-- Attachment #2: Type: text/html, Size: 1745 bytes --] ^ permalink raw reply [flat|nested] 15+ messages in thread
* ✗ i915.CI.BAT: failure for tests/intel/xe_exec_multi_queue: Replace sleep with deterministic wait 2026-04-29 2:08 [PATCH 0/2] tests/intel/xe_exec_multi_queue: Replace sleep with deterministic wait Niranjana Vishwanathapura ` (2 preceding siblings ...) 2026-04-29 3:16 ` ✓ Xe.CI.BAT: success for tests/intel/xe_exec_multi_queue: Replace sleep with deterministic wait Patchwork @ 2026-04-29 3:21 ` Patchwork 2026-04-29 12:54 ` ✗ Xe.CI.FULL: " Patchwork 4 siblings, 0 replies; 15+ messages in thread From: Patchwork @ 2026-04-29 3:21 UTC (permalink / raw) To: Niranjana Vishwanathapura; +Cc: igt-dev [-- Attachment #1: Type: text/plain, Size: 8640 bytes --] == Series Details == Series: tests/intel/xe_exec_multi_queue: Replace sleep with deterministic wait URL : https://patchwork.freedesktop.org/series/165673/ State : failure == Summary == CI Bug Log - changes from IGT_8877 -> IGTPW_15078 ==================================================== Summary ------- **FAILURE** Serious unknown changes coming with IGTPW_15078 absolutely need to be verified manually. If you think the reported changes have nothing to do with the changes introduced in IGTPW_15078, please notify your bug team (I915-ci-infra@lists.freedesktop.org) to allow them to document this new failure mode, which will reduce false positives in CI. External URL: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_15078/index.html Participating hosts (39 -> 39) ------------------------------ Additional (3): bat-apl-1 bat-atsm-1 fi-pnv-d510 Missing (3): bat-dg2-13 fi-glk-j4005 fi-snb-2520m Possible new issues ------------------- Here are the unknown changes that may have been introduced in IGTPW_15078: ### IGT changes ### #### Possible regressions #### * igt@i915_selftest@live@perf: - bat-dg1-7: [PASS][1] -> [DMESG-FAIL][2] +1 other test dmesg-fail [1]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_8877/bat-dg1-7/igt@i915_selftest@live@perf.html [2]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_15078/bat-dg1-7/igt@i915_selftest@live@perf.html Known issues ------------ Here are the changes found in IGTPW_15078 that come from known issues: ### IGT changes ### #### Issues hit #### * igt@dmabuf@all-tests: - fi-pnv-d510: NOTRUN -> [SKIP][3] +35 other tests skip [3]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_15078/fi-pnv-d510/igt@dmabuf@all-tests.html - bat-atsm-1: NOTRUN -> [SKIP][4] ([i915#15931]) [4]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_15078/bat-atsm-1/igt@dmabuf@all-tests.html * igt@fbdev@info: - bat-atsm-1: NOTRUN -> [SKIP][5] ([i915#1849] / [i915#2582]) [5]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_15078/bat-atsm-1/igt@fbdev@info.html * igt@fbdev@read: - bat-atsm-1: NOTRUN -> [SKIP][6] ([i915#2582]) +3 other tests skip [6]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_15078/bat-atsm-1/igt@fbdev@read.html * igt@gem_huc_copy@huc-copy: - bat-apl-1: NOTRUN -> [SKIP][7] +25 other tests skip [7]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_15078/bat-apl-1/igt@gem_huc_copy@huc-copy.html * igt@gem_mmap@basic: - bat-atsm-1: NOTRUN -> [SKIP][8] ([i915#4083]) [8]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_15078/bat-atsm-1/igt@gem_mmap@basic.html * igt@gem_render_tiled_blits@basic: - bat-atsm-1: NOTRUN -> [SKIP][9] ([i915#4079]) [9]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_15078/bat-atsm-1/igt@gem_render_tiled_blits@basic.html * igt@gem_tiled_fence_blits@basic: - bat-atsm-1: NOTRUN -> [SKIP][10] ([i915#4077]) +4 other tests skip [10]: 
https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_15078/bat-atsm-1/igt@gem_tiled_fence_blits@basic.html * igt@gem_tiled_pread_basic@basic: - bat-atsm-1: NOTRUN -> [SKIP][11] ([i915#15657]) [11]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_15078/bat-atsm-1/igt@gem_tiled_pread_basic@basic.html * igt@i915_pm_rps@basic-api: - bat-atsm-1: NOTRUN -> [SKIP][12] ([i915#6621]) [12]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_15078/bat-atsm-1/igt@i915_pm_rps@basic-api.html * igt@i915_selftest@live@workarounds: - bat-arlh-2: [PASS][13] -> [DMESG-FAIL][14] ([i915#12061]) +1 other test dmesg-fail [13]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_8877/bat-arlh-2/igt@i915_selftest@live@workarounds.html [14]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_15078/bat-arlh-2/igt@i915_selftest@live@workarounds.html - bat-dg2-14: [PASS][15] -> [DMESG-FAIL][16] ([i915#12061]) +1 other test dmesg-fail [15]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_8877/bat-dg2-14/igt@i915_selftest@live@workarounds.html [16]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_15078/bat-dg2-14/igt@i915_selftest@live@workarounds.html - bat-atsm-1: NOTRUN -> [DMESG-FAIL][17] ([i915#12061]) +1 other test dmesg-fail [17]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_15078/bat-atsm-1/igt@i915_selftest@live@workarounds.html * igt@kms_addfb_basic@size-max: - bat-atsm-1: NOTRUN -> [SKIP][18] ([i915#6077]) +37 other tests skip [18]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_15078/bat-atsm-1/igt@kms_addfb_basic@size-max.html * igt@kms_cursor_legacy@basic-flip-after-cursor-atomic: - bat-atsm-1: NOTRUN -> [SKIP][19] ([i915#6078]) +22 other tests skip [19]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_15078/bat-atsm-1/igt@kms_cursor_legacy@basic-flip-after-cursor-atomic.html * igt@kms_force_connector_basic@force-load-detect: - bat-atsm-1: NOTRUN -> [SKIP][20] ([i915#6093]) +4 other tests skip [20]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_15078/bat-atsm-1/igt@kms_force_connector_basic@force-load-detect.html * igt@kms_hdmi_inject@inject-audio: - fi-tgl-1115g4: [PASS][21] -> [FAIL][22] ([i915#14867]) [21]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_8877/fi-tgl-1115g4/igt@kms_hdmi_inject@inject-audio.html [22]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_15078/fi-tgl-1115g4/igt@kms_hdmi_inject@inject-audio.html * igt@kms_pipe_crc_basic@read-crc-frame-sequence: - bat-atsm-1: NOTRUN -> [SKIP][23] ([i915#1836]) +6 other tests skip [23]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_15078/bat-atsm-1/igt@kms_pipe_crc_basic@read-crc-frame-sequence.html * igt@kms_prop_blob@basic: - bat-atsm-1: NOTRUN -> [SKIP][24] ([i915#7357]) [24]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_15078/bat-atsm-1/igt@kms_prop_blob@basic.html * igt@kms_setmode@basic-clone-single-crtc: - bat-atsm-1: NOTRUN -> [SKIP][25] ([i915#6094]) [25]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_15078/bat-atsm-1/igt@kms_setmode@basic-clone-single-crtc.html * igt@prime_vgem@basic-write: - bat-atsm-1: NOTRUN -> [SKIP][26] +2 other tests skip [26]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_15078/bat-atsm-1/igt@prime_vgem@basic-write.html #### Possible fixes #### * igt@i915_selftest@live: - bat-dg2-8: [DMESG-FAIL][27] ([i915#12061]) -> [PASS][28] +1 other test pass [27]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_8877/bat-dg2-8/igt@i915_selftest@live.html [28]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_15078/bat-dg2-8/igt@i915_selftest@live.html [i915#12061]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/12061 [i915#14867]: 
https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/14867 [i915#15657]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/15657 [i915#15931]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/15931 [i915#1836]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/1836 [i915#1849]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/1849 [i915#2582]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/2582 [i915#4077]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4077 [i915#4079]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4079 [i915#4083]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4083 [i915#6077]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/6077 [i915#6078]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/6078 [i915#6093]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/6093 [i915#6094]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/6094 [i915#6621]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/6621 [i915#7357]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/7357 Build changes ------------- * CI: CI-20190529 -> None * IGT: IGT_8877 -> IGTPW_15078 * Linux: CI_DRM_18376 -> CI_DRM_18377 CI-20190529: 20190529 CI_DRM_18376: 41542c1ef015c1907cd9a9785c8c2453f4fa2877 @ git://anongit.freedesktop.org/gfx-ci/linux CI_DRM_18377: a53aafc879e9c52b2776089762591d2766a27f0a @ git://anongit.freedesktop.org/gfx-ci/linux IGTPW_15078: dc51a2f859cf0dae0498243600cc4bc75c957376 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git IGT_8877: 1749e432cd72ef2c99f1b4e9d6f24411f1161901 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git == Logs == For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_15078/index.html [-- Attachment #2: Type: text/html, Size: 10084 bytes --] ^ permalink raw reply [flat|nested] 15+ messages in thread
* ✗ Xe.CI.FULL: failure for tests/intel/xe_exec_multi_queue: Replace sleep with deterministic wait 2026-04-29 2:08 [PATCH 0/2] tests/intel/xe_exec_multi_queue: Replace sleep with deterministic wait Niranjana Vishwanathapura ` (3 preceding siblings ...) 2026-04-29 3:21 ` ✗ i915.CI.BAT: failure " Patchwork @ 2026-04-29 12:54 ` Patchwork 4 siblings, 0 replies; 15+ messages in thread From: Patchwork @ 2026-04-29 12:54 UTC (permalink / raw) To: Niranjana Vishwanathapura; +Cc: igt-dev [-- Attachment #1: Type: text/plain, Size: 47062 bytes --] == Series Details == Series: tests/intel/xe_exec_multi_queue: Replace sleep with deterministic wait URL : https://patchwork.freedesktop.org/series/165673/ State : failure == Summary == CI Bug Log - changes from XEIGT_8877_FULL -> XEIGTPW_15078_FULL ==================================================== Summary ------- **FAILURE** Serious unknown changes coming with XEIGTPW_15078_FULL absolutely need to be verified manually. If you think the reported changes have nothing to do with the changes introduced in XEIGTPW_15078_FULL, please notify your bug team (I915-ci-infra@lists.freedesktop.org) to allow them to document this new failure mode, which will reduce false positives in CI. Participating hosts (2 -> 2) ------------------------------ No changes in participating hosts Possible new issues ------------------- Here are the unknown changes that may have been introduced in XEIGTPW_15078_FULL: ### IGT changes ### #### Possible regressions #### * igt@xe_exec_reset@multi-queue-gt-reset: - shard-bmg: NOTRUN -> [SKIP][1] +1 other test skip [1]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-2/igt@xe_exec_reset@multi-queue-gt-reset.html * igt@xe_vm@overcommit-nonfault-vram-lr-external-nodefer: - shard-lnl: NOTRUN -> [SKIP][2] [2]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-1/igt@xe_vm@overcommit-nonfault-vram-lr-external-nodefer.html Known issues ------------ Here are the changes found in XEIGTPW_15078_FULL that come from known issues: ### IGT changes ### #### Issues hit #### * igt@kms_addfb_basic@addfb25-y-tiled-small-legacy: - shard-bmg: NOTRUN -> [SKIP][3] ([Intel XE#2233]) [3]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-6/igt@kms_addfb_basic@addfb25-y-tiled-small-legacy.html * igt@kms_big_fb@linear-32bpp-rotate-270: - shard-bmg: NOTRUN -> [SKIP][4] ([Intel XE#2327]) +3 other tests skip [4]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-9/igt@kms_big_fb@linear-32bpp-rotate-270.html * igt@kms_big_fb@x-tiled-16bpp-rotate-270: - shard-lnl: NOTRUN -> [SKIP][5] ([Intel XE#1407]) +1 other test skip [5]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-8/igt@kms_big_fb@x-tiled-16bpp-rotate-270.html * igt@kms_big_fb@y-tiled-64bpp-rotate-90: - shard-bmg: NOTRUN -> [SKIP][6] ([Intel XE#1124]) +10 other tests skip [6]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-6/igt@kms_big_fb@y-tiled-64bpp-rotate-90.html * igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-0: - shard-lnl: NOTRUN -> [SKIP][7] ([Intel XE#1124]) +5 other tests skip [7]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-4/igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-0.html * igt@kms_big_fb@yf-tiled-addfb-size-offset-overflow: - shard-lnl: NOTRUN -> [SKIP][8] ([Intel XE#1477] / [Intel XE#7361]) [8]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-2/igt@kms_big_fb@yf-tiled-addfb-size-offset-overflow.html * igt@kms_ccs@bad-pixel-format-yf-tiled-ccs: - shard-lnl: NOTRUN 
-> [SKIP][9] ([Intel XE#2887]) +11 other tests skip [9]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-2/igt@kms_ccs@bad-pixel-format-yf-tiled-ccs.html * igt@kms_ccs@crc-sprite-planes-basic-4-tiled-lnl-ccs@pipe-a-hdmi-a-3: - shard-bmg: NOTRUN -> [SKIP][10] ([Intel XE#2652]) +8 other tests skip [10]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-7/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-lnl-ccs@pipe-a-hdmi-a-3.html * igt@kms_ccs@crc-sprite-planes-basic-4-tiled-mtl-rc-ccs: - shard-bmg: NOTRUN -> [SKIP][11] ([Intel XE#2887]) +11 other tests skip [11]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-4/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-mtl-rc-ccs.html * igt@kms_chamelium_color@ctm-negative: - shard-bmg: NOTRUN -> [SKIP][12] ([Intel XE#2325] / [Intel XE#7358]) [12]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-2/igt@kms_chamelium_color@ctm-negative.html * igt@kms_chamelium_hpd@hdmi-hpd-fast: - shard-lnl: NOTRUN -> [SKIP][13] ([Intel XE#373]) +4 other tests skip [13]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-2/igt@kms_chamelium_hpd@hdmi-hpd-fast.html * igt@kms_chamelium_hpd@hdmi-hpd-storm-disable: - shard-bmg: NOTRUN -> [SKIP][14] ([Intel XE#2252]) +8 other tests skip [14]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-4/igt@kms_chamelium_hpd@hdmi-hpd-storm-disable.html * igt@kms_content_protection@atomic-dpms-hdcp14@pipe-a-dp-2: - shard-bmg: NOTRUN -> [FAIL][15] ([Intel XE#3304] / [Intel XE#7374]) +1 other test fail [15]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-10/igt@kms_content_protection@atomic-dpms-hdcp14@pipe-a-dp-2.html * igt@kms_content_protection@dp-mst-type-0-hdcp14: - shard-lnl: NOTRUN -> [SKIP][16] ([Intel XE#6974]) [16]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-2/igt@kms_content_protection@dp-mst-type-0-hdcp14.html * igt@kms_content_protection@dp-mst-type-1: - shard-bmg: NOTRUN -> [SKIP][17] ([Intel XE#2390] / [Intel XE#6974]) [17]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-2/igt@kms_content_protection@dp-mst-type-1.html * igt@kms_content_protection@lic-type-0-hdcp14: - shard-lnl: NOTRUN -> [SKIP][18] ([Intel XE#7642]) [18]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-3/igt@kms_content_protection@lic-type-0-hdcp14.html * igt@kms_content_protection@lic-type-1: - shard-bmg: NOTRUN -> [SKIP][19] ([Intel XE#7642]) [19]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-4/igt@kms_content_protection@lic-type-1.html * igt@kms_content_protection@uevent-hdcp14: - shard-bmg: NOTRUN -> [FAIL][20] ([Intel XE#6707] / [Intel XE#7439]) +1 other test fail [20]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-1/igt@kms_content_protection@uevent-hdcp14.html * igt@kms_cursor_crc@cursor-offscreen-512x170: - shard-lnl: NOTRUN -> [SKIP][21] ([Intel XE#2321] / [Intel XE#7355]) +1 other test skip [21]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-4/igt@kms_cursor_crc@cursor-offscreen-512x170.html * igt@kms_cursor_crc@cursor-onscreen-512x170: - shard-bmg: NOTRUN -> [SKIP][22] ([Intel XE#2321] / [Intel XE#7355]) +2 other tests skip [22]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-9/igt@kms_cursor_crc@cursor-onscreen-512x170.html * igt@kms_cursor_crc@cursor-random-32x32: - shard-bmg: NOTRUN -> [SKIP][23] ([Intel XE#2320]) +3 other tests skip [23]: 
https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-10/igt@kms_cursor_crc@cursor-random-32x32.html * igt@kms_cursor_legacy@cursora-vs-flipb-legacy: - shard-lnl: NOTRUN -> [SKIP][24] ([Intel XE#309] / [Intel XE#7343]) +1 other test skip [24]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-1/igt@kms_cursor_legacy@cursora-vs-flipb-legacy.html * igt@kms_cursor_legacy@short-busy-flip-before-cursor-atomic-transitions-varying-size: - shard-bmg: NOTRUN -> [SKIP][25] ([Intel XE#2286] / [Intel XE#6035]) [25]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-6/igt@kms_cursor_legacy@short-busy-flip-before-cursor-atomic-transitions-varying-size.html * igt@kms_dsc@dsc-basic: - shard-lnl: NOTRUN -> [SKIP][26] ([Intel XE#2244]) +1 other test skip [26]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-5/igt@kms_dsc@dsc-basic.html * igt@kms_dsc@dsc-fractional-bpp-with-bpc: - shard-bmg: NOTRUN -> [SKIP][27] ([Intel XE#2244]) [27]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-4/igt@kms_dsc@dsc-fractional-bpp-with-bpc.html * igt@kms_fbc_dirty_rect@fbc-dirty-rectangle-different-formats: - shard-bmg: NOTRUN -> [SKIP][28] ([Intel XE#4422] / [Intel XE#7442]) [28]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-6/igt@kms_fbc_dirty_rect@fbc-dirty-rectangle-different-formats.html * igt@kms_fbcon_fbt@fbc-suspend: - shard-bmg: NOTRUN -> [SKIP][29] ([Intel XE#4156] / [Intel XE#7425]) [29]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-1/igt@kms_fbcon_fbt@fbc-suspend.html * igt@kms_feature_discovery@display-4x: - shard-bmg: NOTRUN -> [SKIP][30] ([Intel XE#1138] / [Intel XE#7344]) [30]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-7/igt@kms_feature_discovery@display-4x.html * igt@kms_flip@2x-flip-vs-rmfb-interruptible: - shard-lnl: NOTRUN -> [SKIP][31] ([Intel XE#1421]) +3 other tests skip [31]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-4/igt@kms_flip@2x-flip-vs-rmfb-interruptible.html * igt@kms_flip@flip-vs-expired-vblank@c-edp1: - shard-lnl: [PASS][32] -> [FAIL][33] ([Intel XE#301] / [Intel XE#3149]) [32]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8877/shard-lnl-7/igt@kms_flip@flip-vs-expired-vblank@c-edp1.html [33]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-6/igt@kms_flip@flip-vs-expired-vblank@c-edp1.html * igt@kms_flip_scaled_crc@flip-32bpp-yftileccs-to-64bpp-yftile-downscaling: - shard-bmg: NOTRUN -> [SKIP][34] ([Intel XE#7178] / [Intel XE#7351]) +3 other tests skip [34]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-4/igt@kms_flip_scaled_crc@flip-32bpp-yftileccs-to-64bpp-yftile-downscaling.html * igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling: - shard-lnl: NOTRUN -> [SKIP][35] ([Intel XE#7178] / [Intel XE#7351]) +1 other test skip [35]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-8/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling.html * igt@kms_flip_scaled_crc@flip-nv12-linear-to-nv12-linear-reflect-x: - shard-bmg: NOTRUN -> [SKIP][36] ([Intel XE#7179]) +1 other test skip [36]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-3/igt@kms_flip_scaled_crc@flip-nv12-linear-to-nv12-linear-reflect-x.html * igt@kms_frontbuffer_tracking@drrs-1p-pri-indfb-multidraw: - shard-lnl: NOTRUN -> [SKIP][37] ([Intel XE#6312] / [Intel XE#651]) +6 other tests skip [37]: 
https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-4/igt@kms_frontbuffer_tracking@drrs-1p-pri-indfb-multidraw.html * igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-spr-indfb-draw-mmap-wc: - shard-lnl: NOTRUN -> [SKIP][38] ([Intel XE#656]) +16 other tests skip [38]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-5/igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-spr-indfb-draw-mmap-wc.html * igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-cur-indfb-draw-render: - shard-bmg: NOTRUN -> [SKIP][39] ([Intel XE#4141]) +14 other tests skip [39]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-3/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-cur-indfb-draw-render.html * igt@kms_frontbuffer_tracking@fbc-argb161616f-draw-render: - shard-bmg: NOTRUN -> [SKIP][40] ([Intel XE#7061] / [Intel XE#7356]) +3 other tests skip [40]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-8/igt@kms_frontbuffer_tracking@fbc-argb161616f-draw-render.html * igt@kms_frontbuffer_tracking@fbcdrrs-2p-primscrn-shrfb-plflip-blt: - shard-bmg: NOTRUN -> [SKIP][41] ([Intel XE#2311]) +25 other tests skip [41]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-2/igt@kms_frontbuffer_tracking@fbcdrrs-2p-primscrn-shrfb-plflip-blt.html * igt@kms_frontbuffer_tracking@fbcdrrs-abgr161616f-draw-blt: - shard-lnl: NOTRUN -> [SKIP][42] ([Intel XE#7061] / [Intel XE#7356]) +1 other test skip [42]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-4/igt@kms_frontbuffer_tracking@fbcdrrs-abgr161616f-draw-blt.html * igt@kms_frontbuffer_tracking@fbcpsr-tiling-y: - shard-bmg: NOTRUN -> [SKIP][43] ([Intel XE#2352] / [Intel XE#7399]) [43]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-3/igt@kms_frontbuffer_tracking@fbcpsr-tiling-y.html * igt@kms_frontbuffer_tracking@psr-2p-scndscrn-shrfb-pgflip-blt: - shard-bmg: NOTRUN -> [SKIP][44] ([Intel XE#2313]) +24 other tests skip [44]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-3/igt@kms_frontbuffer_tracking@psr-2p-scndscrn-shrfb-pgflip-blt.html * igt@kms_hdmi_inject@inject-audio: - shard-bmg: NOTRUN -> [SKIP][45] ([Intel XE#7308]) [45]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-8/igt@kms_hdmi_inject@inject-audio.html * igt@kms_joiner@invalid-modeset-force-ultra-joiner: - shard-bmg: NOTRUN -> [SKIP][46] ([Intel XE#6911] / [Intel XE#7466]) [46]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-6/igt@kms_joiner@invalid-modeset-force-ultra-joiner.html * igt@kms_joiner@invalid-modeset-ultra-joiner: - shard-bmg: NOTRUN -> [SKIP][47] ([Intel XE#6911] / [Intel XE#7378]) [47]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-1/igt@kms_joiner@invalid-modeset-ultra-joiner.html * igt@kms_joiner@switch-modeset-ultra-joiner-big-joiner: - shard-lnl: NOTRUN -> [SKIP][48] ([Intel XE#7173] / [Intel XE#7294]) [48]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-7/igt@kms_joiner@switch-modeset-ultra-joiner-big-joiner.html * igt@kms_multipipe_modeset@basic-max-pipe-crc-check: - shard-bmg: NOTRUN -> [SKIP][49] ([Intel XE#7591]) [49]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-7/igt@kms_multipipe_modeset@basic-max-pipe-crc-check.html * igt@kms_plane@pixel-format-4-tiled-dg2-rc-ccs-cc-modifier-source-clamping: - shard-bmg: NOTRUN -> [SKIP][50] ([Intel XE#7283]) +4 other tests skip [50]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-8/igt@kms_plane@pixel-format-4-tiled-dg2-rc-ccs-cc-modifier-source-clamping.html 
* igt@kms_plane@pixel-format-4-tiled-lnl-ccs-modifier@pipe-a-plane-5: - shard-lnl: NOTRUN -> [SKIP][51] ([Intel XE#7130]) +1 other test skip [51]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-1/igt@kms_plane@pixel-format-4-tiled-lnl-ccs-modifier@pipe-a-plane-5.html * igt@kms_plane@pixel-format-y-tiled-gen12-rc-ccs-modifier-source-clamping: - shard-lnl: NOTRUN -> [SKIP][52] ([Intel XE#7283]) +2 other tests skip [52]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-8/igt@kms_plane@pixel-format-y-tiled-gen12-rc-ccs-modifier-source-clamping.html * igt@kms_plane_multiple@2x-tiling-yf: - shard-lnl: NOTRUN -> [SKIP][53] ([Intel XE#4596] / [Intel XE#5854]) [53]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-7/igt@kms_plane_multiple@2x-tiling-yf.html * igt@kms_plane_scaling@planes-upscale-20x20-downscale-factor-0-5@pipe-a: - shard-lnl: NOTRUN -> [SKIP][54] ([Intel XE#2763] / [Intel XE#6886]) +7 other tests skip [54]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-8/igt@kms_plane_scaling@planes-upscale-20x20-downscale-factor-0-5@pipe-a.html * igt@kms_plane_scaling@planes-upscale-20x20-downscale-factor-0-75@pipe-a: - shard-bmg: NOTRUN -> [SKIP][55] ([Intel XE#2763] / [Intel XE#6886]) +4 other tests skip [55]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-10/igt@kms_plane_scaling@planes-upscale-20x20-downscale-factor-0-75@pipe-a.html * igt@kms_pm_dc@dc5-dpms: - shard-lnl: [PASS][56] -> [FAIL][57] ([Intel XE#7340] / [Intel XE#7504]) [56]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8877/shard-lnl-3/igt@kms_pm_dc@dc5-dpms.html [57]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-3/igt@kms_pm_dc@dc5-dpms.html * igt@kms_pm_dc@dc6-dpms: - shard-lnl: [PASS][58] -> [FAIL][59] ([Intel XE#7340]) [58]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8877/shard-lnl-2/igt@kms_pm_dc@dc6-dpms.html [59]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-8/igt@kms_pm_dc@dc6-dpms.html * igt@kms_pm_dc@dc6-psr: - shard-bmg: NOTRUN -> [SKIP][60] ([Intel XE#7794]) +1 other test skip [60]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-3/igt@kms_pm_dc@dc6-psr.html * igt@kms_pm_rpm@modeset-lpsp: - shard-bmg: NOTRUN -> [SKIP][61] ([Intel XE#1439] / [Intel XE#3141] / [Intel XE#7383] / [Intel XE#836]) [61]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-6/igt@kms_pm_rpm@modeset-lpsp.html * igt@kms_psr2_sf@fbc-psr2-cursor-plane-move-continuous-sf: - shard-bmg: NOTRUN -> [SKIP][62] ([Intel XE#1489]) +7 other tests skip [62]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-4/igt@kms_psr2_sf@fbc-psr2-cursor-plane-move-continuous-sf.html * igt@kms_psr2_sf@pr-primary-plane-update-sf-dmg-area-big-fb: - shard-lnl: NOTRUN -> [SKIP][63] ([Intel XE#2893] / [Intel XE#7304]) +2 other tests skip [63]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-6/igt@kms_psr2_sf@pr-primary-plane-update-sf-dmg-area-big-fb.html * igt@kms_psr2_su@page_flip-xrgb8888: - shard-bmg: NOTRUN -> [SKIP][64] ([Intel XE#2387] / [Intel XE#7429]) [64]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-7/igt@kms_psr2_su@page_flip-xrgb8888.html * igt@kms_psr@fbc-psr-primary-render: - shard-bmg: NOTRUN -> [SKIP][65] ([Intel XE#2234] / [Intel XE#2850]) +11 other tests skip [65]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-1/igt@kms_psr@fbc-psr-primary-render.html * igt@kms_psr@fbc-psr2-basic: - shard-lnl: NOTRUN -> [SKIP][66] ([Intel XE#1406] / [Intel 
XE#7345]) [66]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-4/igt@kms_psr@fbc-psr2-basic.html * igt@kms_psr@fbc-psr2-basic@edp-1: - shard-lnl: NOTRUN -> [SKIP][67] ([Intel XE#1406] / [Intel XE#4609]) [67]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-4/igt@kms_psr@fbc-psr2-basic@edp-1.html * igt@kms_psr@pr-dpms: - shard-lnl: NOTRUN -> [SKIP][68] ([Intel XE#1406]) +2 other tests skip [68]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-3/igt@kms_psr@pr-dpms.html * igt@kms_rotation_crc@bad-tiling: - shard-bmg: NOTRUN -> [SKIP][69] ([Intel XE#3904] / [Intel XE#7342]) [69]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-1/igt@kms_rotation_crc@bad-tiling.html * igt@kms_rotation_crc@primary-y-tiled-reflect-x-270: - shard-lnl: NOTRUN -> [SKIP][70] ([Intel XE#3414] / [Intel XE#3904] / [Intel XE#7342]) +1 other test skip [70]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-1/igt@kms_rotation_crc@primary-y-tiled-reflect-x-270.html * igt@kms_sharpness_filter@filter-strength: - shard-bmg: NOTRUN -> [SKIP][71] ([Intel XE#6503]) [71]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-3/igt@kms_sharpness_filter@filter-strength.html * igt@kms_tiled_display@basic-test-pattern: - shard-bmg: NOTRUN -> [FAIL][72] ([Intel XE#1729] / [Intel XE#7424]) [72]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-4/igt@kms_tiled_display@basic-test-pattern.html * igt@kms_tiled_display@basic-test-pattern-with-chamelium: - shard-lnl: NOTRUN -> [SKIP][73] ([Intel XE#362] / [Intel XE#5848]) [73]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-1/igt@kms_tiled_display@basic-test-pattern-with-chamelium.html * igt@kms_tv_load_detect@load-detect: - shard-bmg: NOTRUN -> [SKIP][74] ([Intel XE#2450] / [Intel XE#5857]) [74]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-4/igt@kms_tv_load_detect@load-detect.html * igt@kms_vrr@seamless-rr-switch-vrr: - shard-bmg: NOTRUN -> [SKIP][75] ([Intel XE#1499]) [75]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-6/igt@kms_vrr@seamless-rr-switch-vrr.html * igt@xe_ccs@vm-bind-fault-mode-decompress: - shard-lnl: NOTRUN -> [SKIP][76] ([Intel XE#7644]) [76]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-5/igt@xe_ccs@vm-bind-fault-mode-decompress.html * igt@xe_compute_preempt@compute-preempt-many-vram: - shard-lnl: NOTRUN -> [SKIP][77] ([Intel XE#5191] / [Intel XE#7316] / [Intel XE#7346]) [77]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-1/igt@xe_compute_preempt@compute-preempt-many-vram.html * igt@xe_eudebug@basic-vm-access-parameters-userptr: - shard-lnl: NOTRUN -> [SKIP][78] ([Intel XE#7636]) +5 other tests skip [78]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-7/igt@xe_eudebug@basic-vm-access-parameters-userptr.html * igt@xe_eudebug@vma-ufence: - shard-bmg: NOTRUN -> [SKIP][79] ([Intel XE#7636]) +10 other tests skip [79]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-2/igt@xe_eudebug@vma-ufence.html * igt@xe_eudebug_sriov@deny-eudebug: - shard-lnl: NOTRUN -> [SKIP][80] ([Intel XE#4518] / [Intel XE#7404]) [80]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-7/igt@xe_eudebug_sriov@deny-eudebug.html * igt@xe_evict@evict-cm-threads-small: - shard-lnl: NOTRUN -> [SKIP][81] ([Intel XE#6540] / [Intel XE#688]) +5 other tests skip [81]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-7/igt@xe_evict@evict-cm-threads-small.html * 
igt@xe_evict@evict-mixed-many-threads-small: - shard-bmg: NOTRUN -> [INCOMPLETE][82] ([Intel XE#6321]) [82]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-6/igt@xe_evict@evict-mixed-many-threads-small.html * igt@xe_evict@evict-small-multi-queue-priority-cm: - shard-bmg: NOTRUN -> [SKIP][83] ([Intel XE#7140]) [83]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-1/igt@xe_evict@evict-small-multi-queue-priority-cm.html * igt@xe_exec_balancer@once-virtual-userptr-invalidate: - shard-lnl: NOTRUN -> [SKIP][84] ([Intel XE#7482]) +12 other tests skip [84]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-2/igt@xe_exec_balancer@once-virtual-userptr-invalidate.html * igt@xe_exec_basic@multigpu-once-basic-defer-mmap: - shard-lnl: NOTRUN -> [SKIP][85] ([Intel XE#1392]) +5 other tests skip [85]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-5/igt@xe_exec_basic@multigpu-once-basic-defer-mmap.html * igt@xe_exec_basic@multigpu-once-null-rebind: - shard-bmg: NOTRUN -> [SKIP][86] ([Intel XE#2322] / [Intel XE#7372]) +7 other tests skip [86]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-9/igt@xe_exec_basic@multigpu-once-null-rebind.html * igt@xe_exec_fault_mode@many-multi-queue-userptr-prefetch: - shard-lnl: NOTRUN -> [SKIP][87] ([Intel XE#7136]) +3 other tests skip [87]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-4/igt@xe_exec_fault_mode@many-multi-queue-userptr-prefetch.html * igt@xe_exec_fault_mode@once-multi-queue-userptr-invalidate-imm: - shard-bmg: NOTRUN -> [SKIP][88] ([Intel XE#7136]) +7 other tests skip [88]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-4/igt@xe_exec_fault_mode@once-multi-queue-userptr-invalidate-imm.html * igt@xe_exec_multi_queue@many-execs-preempt-mode-userptr: - shard-lnl: NOTRUN -> [SKIP][89] ([Intel XE#6874]) +16 other tests skip [89]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-1/igt@xe_exec_multi_queue@many-execs-preempt-mode-userptr.html * igt@xe_exec_multi_queue@one-queue-preempt-mode-fault-userptr: - shard-bmg: NOTRUN -> [SKIP][90] ([Intel XE#6874]) +24 other tests skip [90]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-6/igt@xe_exec_multi_queue@one-queue-preempt-mode-fault-userptr.html * igt@xe_exec_threads@threads-multi-queue-cm-fd-userptr-rebind: - shard-bmg: NOTRUN -> [SKIP][91] ([Intel XE#7138]) +6 other tests skip [91]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-4/igt@xe_exec_threads@threads-multi-queue-cm-fd-userptr-rebind.html * igt@xe_exec_threads@threads-multi-queue-rebind-err: - shard-lnl: NOTRUN -> [SKIP][92] ([Intel XE#7138]) +4 other tests skip [92]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-4/igt@xe_exec_threads@threads-multi-queue-rebind-err.html * igt@xe_media_fill@media-fill: - shard-lnl: NOTRUN -> [SKIP][93] ([Intel XE#560] / [Intel XE#7321] / [Intel XE#7453]) [93]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-5/igt@xe_media_fill@media-fill.html * igt@xe_module_load@load: - shard-bmg: ([PASS][94], [PASS][95], [PASS][96], [PASS][97], [PASS][98], [PASS][99], [PASS][100], [PASS][101], [PASS][102], [PASS][103], [PASS][104], [PASS][105], [PASS][106], [PASS][107], [PASS][108], [PASS][109], [PASS][110], [PASS][111], [PASS][112], [PASS][113], [PASS][114], [PASS][115]) -> ([PASS][116], [PASS][117], [PASS][118], [PASS][119], [PASS][120], [PASS][121], [PASS][122], [PASS][123], [PASS][124], [PASS][125], [PASS][126], [PASS][127], [PASS][128], 
[PASS][129], [PASS][130], [PASS][131], [PASS][132], [PASS][133], [SKIP][134], [PASS][135], [PASS][136]) ([Intel XE#2457] / [Intel XE#7405]) [94]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8877/shard-bmg-7/igt@xe_module_load@load.html [95]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8877/shard-bmg-9/igt@xe_module_load@load.html [96]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8877/shard-bmg-9/igt@xe_module_load@load.html [97]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8877/shard-bmg-2/igt@xe_module_load@load.html [98]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8877/shard-bmg-2/igt@xe_module_load@load.html [99]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8877/shard-bmg-3/igt@xe_module_load@load.html [100]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8877/shard-bmg-10/igt@xe_module_load@load.html [101]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8877/shard-bmg-10/igt@xe_module_load@load.html [102]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8877/shard-bmg-7/igt@xe_module_load@load.html [103]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8877/shard-bmg-1/igt@xe_module_load@load.html [104]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8877/shard-bmg-1/igt@xe_module_load@load.html [105]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8877/shard-bmg-1/igt@xe_module_load@load.html [106]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8877/shard-bmg-8/igt@xe_module_load@load.html [107]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8877/shard-bmg-8/igt@xe_module_load@load.html [108]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8877/shard-bmg-8/igt@xe_module_load@load.html [109]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8877/shard-bmg-3/igt@xe_module_load@load.html [110]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8877/shard-bmg-10/igt@xe_module_load@load.html [111]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8877/shard-bmg-4/igt@xe_module_load@load.html [112]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8877/shard-bmg-3/igt@xe_module_load@load.html [113]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8877/shard-bmg-6/igt@xe_module_load@load.html [114]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8877/shard-bmg-4/igt@xe_module_load@load.html [115]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8877/shard-bmg-6/igt@xe_module_load@load.html [116]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-9/igt@xe_module_load@load.html [117]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-3/igt@xe_module_load@load.html [118]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-10/igt@xe_module_load@load.html [119]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-1/igt@xe_module_load@load.html [120]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-8/igt@xe_module_load@load.html [121]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-2/igt@xe_module_load@load.html [122]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-2/igt@xe_module_load@load.html [123]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-4/igt@xe_module_load@load.html [124]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-4/igt@xe_module_load@load.html [125]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-6/igt@xe_module_load@load.html [126]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-1/igt@xe_module_load@load.html [127]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-10/igt@xe_module_load@load.html [128]: 
https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-10/igt@xe_module_load@load.html [129]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-6/igt@xe_module_load@load.html [130]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-3/igt@xe_module_load@load.html [131]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-3/igt@xe_module_load@load.html [132]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-4/igt@xe_module_load@load.html [133]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-9/igt@xe_module_load@load.html [134]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-1/igt@xe_module_load@load.html [135]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-7/igt@xe_module_load@load.html [136]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-7/igt@xe_module_load@load.html * igt@xe_multigpu_svm@mgpu-latency-basic: - shard-lnl: NOTRUN -> [SKIP][137] ([Intel XE#6964]) [137]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-2/igt@xe_multigpu_svm@mgpu-latency-basic.html * igt@xe_multigpu_svm@mgpu-pagefault-basic: - shard-bmg: NOTRUN -> [SKIP][138] ([Intel XE#6964]) +1 other test skip [138]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-1/igt@xe_multigpu_svm@mgpu-pagefault-basic.html * igt@xe_non_msix@walker-interrupt-notification-non-msix: - shard-lnl: NOTRUN -> [SKIP][139] ([Intel XE#7622]) [139]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-5/igt@xe_non_msix@walker-interrupt-notification-non-msix.html * igt@xe_page_reclaim@binds-null-vma: - shard-lnl: NOTRUN -> [SKIP][140] ([Intel XE#7793]) +1 other test skip [140]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-4/igt@xe_page_reclaim@binds-null-vma.html * igt@xe_page_reclaim@pde-vs-pd: - shard-bmg: NOTRUN -> [SKIP][141] ([Intel XE#7793]) +2 other tests skip [141]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-8/igt@xe_page_reclaim@pde-vs-pd.html * igt@xe_pat@pat-index-xehpc: - shard-lnl: NOTRUN -> [SKIP][142] ([Intel XE#1420] / [Intel XE#2838] / [Intel XE#7590]) [142]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-2/igt@xe_pat@pat-index-xehpc.html * igt@xe_pm@d3hot-mmap-vram: - shard-lnl: NOTRUN -> [SKIP][143] ([Intel XE#1948]) [143]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-4/igt@xe_pm@d3hot-mmap-vram.html * igt@xe_pm@s2idle-d3cold-basic-exec: - shard-bmg: NOTRUN -> [SKIP][144] ([Intel XE#2284] / [Intel XE#7370]) +2 other tests skip [144]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-1/igt@xe_pm@s2idle-d3cold-basic-exec.html * igt@xe_pm@s3-d3cold-basic-exec: - shard-lnl: NOTRUN -> [SKIP][145] ([Intel XE#2284] / [Intel XE#366] / [Intel XE#7370]) [145]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-4/igt@xe_pm@s3-d3cold-basic-exec.html * igt@xe_pxp@pxp-optout: - shard-bmg: NOTRUN -> [SKIP][146] ([Intel XE#4733] / [Intel XE#7417]) +2 other tests skip [146]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-4/igt@xe_pxp@pxp-optout.html * igt@xe_query@multigpu-query-invalid-cs-cycles: - shard-lnl: NOTRUN -> [SKIP][147] ([Intel XE#944]) [147]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-5/igt@xe_query@multigpu-query-invalid-cs-cycles.html * igt@xe_query@multigpu-query-mem-usage: - shard-bmg: NOTRUN -> [SKIP][148] ([Intel XE#944]) +3 other tests skip [148]: 
https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-3/igt@xe_query@multigpu-query-mem-usage.html * igt@xe_sriov_auto_provisioning@fair-allocation: - shard-lnl: NOTRUN -> [SKIP][149] ([Intel XE#4130] / [Intel XE#7366]) [149]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-5/igt@xe_sriov_auto_provisioning@fair-allocation.html * igt@xe_sriov_flr@flr-vf1-clear: - shard-lnl: NOTRUN -> [SKIP][150] ([Intel XE#3342]) [150]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-8/igt@xe_sriov_flr@flr-vf1-clear.html #### Possible fixes #### * igt@kms_hdr@invalid-hdr: - shard-bmg: [SKIP][151] ([Intel XE#1503]) -> [PASS][152] [151]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8877/shard-bmg-10/igt@kms_hdr@invalid-hdr.html [152]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-7/igt@kms_hdr@invalid-hdr.html * igt@kms_psr_stress_test@flip-primary-invalidate-overlay: - shard-lnl: [SKIP][153] ([Intel XE#4692] / [Intel XE#7508]) -> [PASS][154] [153]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8877/shard-lnl-7/igt@kms_psr_stress_test@flip-primary-invalidate-overlay.html [154]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-5/igt@kms_psr_stress_test@flip-primary-invalidate-overlay.html * igt@xe_sriov_auto_provisioning@exclusive-ranges@numvfs-random: - shard-bmg: [FAIL][155] ([Intel XE#5937]) -> [PASS][156] +1 other test pass [155]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8877/shard-bmg-1/igt@xe_sriov_auto_provisioning@exclusive-ranges@numvfs-random.html [156]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-2/igt@xe_sriov_auto_provisioning@exclusive-ranges@numvfs-random.html * igt@xe_sriov_flr@flr-twice: - shard-bmg: [FAIL][157] ([Intel XE#6569]) -> [PASS][158] [157]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8877/shard-bmg-7/igt@xe_sriov_flr@flr-twice.html [158]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-bmg-8/igt@xe_sriov_flr@flr-twice.html #### Warnings #### * igt@kms_flip@flip-vs-expired-vblank: - shard-lnl: [FAIL][159] ([Intel XE#301]) -> [FAIL][160] ([Intel XE#301] / [Intel XE#3149]) [159]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8877/shard-lnl-7/igt@kms_flip@flip-vs-expired-vblank.html [160]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/shard-lnl-6/igt@kms_flip@flip-vs-expired-vblank.html {name}: This element is suppressed. This means it is ignored when computing the status of the difference (SUCCESS, WARNING, or FAILURE). 
[Intel XE#1124]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1124 [Intel XE#1138]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1138 [Intel XE#1392]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1392 [Intel XE#1406]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1406 [Intel XE#1407]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1407 [Intel XE#1420]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1420 [Intel XE#1421]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1421 [Intel XE#1439]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1439 [Intel XE#1477]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1477 [Intel XE#1489]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1489 [Intel XE#1499]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1499 [Intel XE#1503]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1503 [Intel XE#1729]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1729 [Intel XE#1948]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1948 [Intel XE#2233]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2233 [Intel XE#2234]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2234 [Intel XE#2244]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2244 [Intel XE#2252]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2252 [Intel XE#2284]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2284 [Intel XE#2286]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2286 [Intel XE#2311]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2311 [Intel XE#2313]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2313 [Intel XE#2320]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2320 [Intel XE#2321]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2321 [Intel XE#2322]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2322 [Intel XE#2325]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2325 [Intel XE#2327]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2327 [Intel XE#2352]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2352 [Intel XE#2387]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2387 [Intel XE#2390]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2390 [Intel XE#2450]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2450 [Intel XE#2457]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2457 [Intel XE#2652]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2652 [Intel XE#2763]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2763 [Intel XE#2838]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2838 [Intel XE#2850]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2850 [Intel XE#2887]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2887 [Intel XE#2893]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2893 [Intel XE#301]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/301 [Intel XE#309]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/309 [Intel XE#3141]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3141 [Intel XE#3149]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3149 [Intel XE#3304]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3304 [Intel XE#3342]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3342 [Intel XE#3414]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3414 [Intel XE#362]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/362 [Intel XE#366]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/366 [Intel XE#367]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/367 [Intel 
XE#373]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/373 [Intel XE#3904]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3904 [Intel XE#4130]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4130 [Intel XE#4141]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4141 [Intel XE#4156]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4156 [Intel XE#4422]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4422 [Intel XE#4518]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4518 [Intel XE#4596]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4596 [Intel XE#4609]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4609 [Intel XE#4692]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4692 [Intel XE#4733]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4733 [Intel XE#5191]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5191 [Intel XE#560]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/560 [Intel XE#5848]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5848 [Intel XE#5854]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5854 [Intel XE#5857]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5857 [Intel XE#5937]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5937 [Intel XE#6035]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6035 [Intel XE#6312]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6312 [Intel XE#6321]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6321 [Intel XE#6503]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6503 [Intel XE#651]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/651 [Intel XE#6540]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6540 [Intel XE#656]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/656 [Intel XE#6569]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6569 [Intel XE#6707]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6707 [Intel XE#6874]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6874 [Intel XE#688]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/688 [Intel XE#6886]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6886 [Intel XE#6911]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6911 [Intel XE#6964]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6964 [Intel XE#6974]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6974 [Intel XE#7061]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7061 [Intel XE#7130]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7130 [Intel XE#7136]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7136 [Intel XE#7138]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7138 [Intel XE#7140]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7140 [Intel XE#7173]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7173 [Intel XE#7178]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7178 [Intel XE#7179]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7179 [Intel XE#7283]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7283 [Intel XE#7294]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7294 [Intel XE#7304]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7304 [Intel XE#7308]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7308 [Intel XE#7316]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7316 [Intel XE#7321]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7321 [Intel XE#7340]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7340 [Intel XE#7342]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7342 [Intel XE#7343]: 
Build changes
-------------

  * IGT: IGT_8877 -> IGTPW_15078
  * Linux: xe-4947-41542c1ef015c1907cd9a9785c8c2453f4fa2877 -> xe-4948-a53aafc879e9c52b2776089762591d2766a27f0a

  IGTPW_15078: dc51a2f859cf0dae0498243600cc4bc75c957376 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  IGT_8877: 1749e432cd72ef2c99f1b4e9d6f24411f1161901 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  xe-4947-41542c1ef015c1907cd9a9785c8c2453f4fa2877: 41542c1ef015c1907cd9a9785c8c2453f4fa2877
  xe-4948-a53aafc879e9c52b2776089762591d2766a27f0a: a53aafc879e9c52b2776089762591d2766a27f0a

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_15078/index.html

^ permalink raw reply	[flat|nested] 15+ messages in thread
end of thread, other threads:[~2026-04-30 21:54 UTC | newest]

Thread overview: 15+ messages
2026-04-29  2:08 [PATCH 0/2] tests/intel/xe_exec_multi_queue: Replace sleep with deterministic wait Niranjana Vishwanathapura
2026-04-29  2:08 ` [PATCH 1/2] tests/intel/xe_exec_multi_queue: use timestamp to check job start Niranjana Vishwanathapura
2026-04-29 19:18 ` Summers, Stuart
2026-04-29 19:24 ` Summers, Stuart
2026-04-30  4:04 ` Niranjana Vishwanathapura
2026-04-29  2:08 ` [PATCH 2/2] tests/intel/xe_exec_multi_queue: replace sleep with barrier queue Niranjana Vishwanathapura
2026-04-29 18:27 ` Summers, Stuart
2026-04-29 20:52 ` Wang, X
2026-04-30  4:06 ` Niranjana Vishwanathapura
2026-04-30 21:53 ` Summers, Stuart
2026-04-29 19:28 ` Summers, Stuart
2026-04-30  4:09 ` Niranjana Vishwanathapura
2026-04-29  3:16 ` ✓ Xe.CI.BAT: success for tests/intel/xe_exec_multi_queue: Replace sleep with deterministic wait Patchwork
2026-04-29  3:21 ` ✗ i915.CI.BAT: failure " Patchwork
2026-04-29 12:54 ` ✗ Xe.CI.FULL: " Patchwork