From: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
To: "Wang, X" <x.wang@intel.com>
Cc: "Summers, Stuart" <stuart.summers@intel.com>,
"igt-dev@lists.freedesktop.org" <igt-dev@lists.freedesktop.org>
Subject: Re: [PATCH 2/2] tests/intel/xe_exec_multi_queue: replace sleep with barrier queue
Date: Wed, 29 Apr 2026 21:06:57 -0700
Message-ID: <afLVYeTiGaeiLCD0@nvishwa1-desk>
In-Reply-To: <34b7c49d-18d2-4f8a-8afd-0487b1c72c4c@intel.com>
On Wed, Apr 29, 2026 at 01:52:05PM -0700, Wang, X wrote:
>
>
>On 4/29/2026 11:27, Summers, Stuart wrote:
>>On Tue, 2026-04-28 at 19:08 -0700, Niranjana Vishwanathapura wrote:
>>>In __test_priority() DYN_PRIORITY case, replace sleep() with a
>>>deterministic barrier using an extra queue in the same multi-queue
>>>group. After assigning priorities, submit a spinner to the extra
>>>queue, end it immediately and wait for its user fence to signal.
>>>This guarantees a full scheduler round-trip confirming the priority
>>>updates have taken effect before releasing the other queues.
>>>
>>>Increase exec_queues[] and spin[] array sizes by 1 to accommodate
>>>the extra barrier queue slot at index num_queues.
>>>
>>>Assisted-by: GitHub Copilot:claude-sonnet-4.6
>>>Signed-off-by: Niranjana Vishwanathapura
>>><niranjana.vishwanathapura@intel.com>
>>>---
>>> tests/intel/xe_exec_multi_queue.c | 37 ++++++++++++++++++++++++++++++-------
>>> 1 file changed, 30 insertions(+), 7 deletions(-)
>>>
>>>diff --git a/tests/intel/xe_exec_multi_queue.c b/tests/intel/xe_exec_multi_queue.c
>>>index 382705d065..8c6fbb2d18 100644
>>>--- a/tests/intel/xe_exec_multi_queue.c
>>>+++ b/tests/intel/xe_exec_multi_queue.c
>>>@@ -381,8 +381,8 @@ __test_priority(int fd, struct drm_xe_engine_class_instance *eci,
>>> 		.syncs = to_user_pointer(&sync),
>>> 	};
>>> 	uint64_t vm_sync = 0, addr = BASE_ADDRESS;
>>>-	uint32_t exec_queues[XE_EXEC_QUEUE_PRIORITY_N];
>>>-	struct xe_spin *spin[XE_EXEC_QUEUE_PRIORITY_N];
>>>+	uint32_t exec_queues[XE_EXEC_QUEUE_PRIORITY_N + 1];
>>>+	struct xe_spin *spin[XE_EXEC_QUEUE_PRIORITY_N + 1];
>>Since we're only really making use of this in the dynamic case, should
>>we have "+ !!DYNAMIC" instead of "+ 1" here? I.e. we only care about
>>the extra barrier one in the dynamic case?
>>
>>Thanks,
>>Stuart
>flags is a runtime parameter, so + !!(flags & DYN_PRIORITY)
>would make the array size runtime-determined, effectively a VLA.
>Even in userspace, VLAs are generally discouraged due to
>unpredictable stack usage. The cost of one extra slot is negligible,
>so always using + 1 is simpler and avoids introducing a VLA.
>
Yes, we allocate enough space up front to handle either scenario.
That is much better than complicating the code just to save one
array element.
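
To make the trade-off concrete, here is a minimal standalone sketch
(the macro values below are placeholders for illustration, not the
actual IGT definitions):

	#include <stdint.h>

	#define XE_EXEC_QUEUE_PRIORITY_N 3	/* placeholder for illustration */
	#define DYN_PRIORITY (1 << 0)		/* placeholder for illustration */

	void sketch(unsigned int flags)
	{
		/* Size is an integer constant expression: ordinary array. */
		uint32_t fixed[XE_EXEC_QUEUE_PRIORITY_N + 1];

		/*
		 * flags is only known at runtime, so this declares a VLA:
		 * the stack footprint now depends on a runtime value.
		 */
		uint32_t vla[XE_EXEC_QUEUE_PRIORITY_N + !!(flags & DYN_PRIORITY)];

		(void)fixed;
		(void)vla;
	}

Compilers warn on the second declaration with -Wvla, which is why the
constant + 1 is the simpler choice.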
Niranjana
>Thanks,
>Xin
>>> 	uint32_t vm, num_queues, num_queue_priorities, bo = 0;
>>> 	uint32_t start_order[XE_EXEC_QUEUE_PRIORITY_N] = { 0 };
>>> 	int64_t fence_timeout = NSEC_PER_SEC;
>>>@@ -403,7 +403,7 @@ __test_priority(int fd, struct drm_xe_engine_class_instance *eci,
>>> 		.value = DRM_XE_MULTI_GROUP_CREATE,
>>> 	};
>>> 	uint64_t ext = to_user_pointer(&multi_queue);
>>>-	int i, j, sleep_duration = 1;
>>>+	int i, j;
>>> 	void *bo_map;
>>>
>>> 	num_queue_priorities = XE_EXEC_QUEUE_NUM_PRIORITIES;
>>>@@ -415,12 +415,12 @@ __test_priority(int fd, struct drm_xe_engine_class_instance *eci,
>>> 		 eci[0].engine_class, eci[0].engine_instance);
>>>
>>> 	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_LR_MODE, 0);
>>>-	bo_size = xe_bb_size(fd, sizeof(*spin[0]) * num_queues);
>>>+	bo_size = xe_bb_size(fd, sizeof(*spin[0]) * (num_queues + 1));
>>> 	bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, eci[0].gt_id),
>>> 			  DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
>>> 	bo_map = xe_bo_map(fd, bo, bo_size);
>>>
>>>-	for (i = 0; i < num_queues; i++)
>>>+	for (i = 0; i < num_queues + 1; i++)
>>> 		spin[i] = bo_map + i * sizeof(*spin[0]);
>>>
>>> 	/* Use the default priority for Q0 because we are explicitly waiting for it below */
>>>@@ -430,6 +430,11 @@ __test_priority(int fd, struct drm_xe_engine_class_instance *eci,
>>> 	if (flags & DYN_PRIORITY) {
>>> 		for (i = 1; i < num_queues; i++)
>>> 			exec_queues[i] = xe_exec_queue_create(fd, vm, eci, ext);
>>>+		/*
>>>+		 * Create an extra queue in the same multi-queue group, used as
>>>+		 * a barrier to confirm priority updates have taken effect.
>>>+		 */
>>>+		exec_queues[num_queues] = xe_exec_queue_create(fd, vm, eci, ext);
>>> 	} else {
>>> 		struct drm_xe_ext_set_property mq_priority = {
>>> 			.base.name = DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
>>>@@ -474,14 +479,28 @@ __test_priority(int fd, struct drm_xe_engine_class_instance *eci,
>>> 		xe_spin_wait_started(spin[i]);
>>>
>>> 	if (flags & DYN_PRIORITY) {
>>>+		uint64_t barrier_spin_addr = addr + num_queues * sizeof(struct xe_spin);
>>>+
>>> 		/* Assign increasing order of priority for secondary queues */
>>> 		for (i = 1; i < num_queues; i++)
>>> 			xe_exec_queue_set_property(fd, exec_queues[i],
>>> 						   DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_QUEUE_PRIORITY,
>>> 						   i % num_queue_priorities);
>>>
>>>-		/* Wait for priorities to take effect */
>>>-		sleep(sleep_duration);
>>>+		/*
>>>+		 * Submit a barrier job on the extra queue to ensure priority
>>>+		 * updates have taken effect before releasing the other queues.
>>>+		 */
>>>+		xe_spin_init_opts(spin[num_queues], .addr = barrier_spin_addr,
>>>+				  .preempt = true);
>>>+		sync.addr = barrier_spin_addr +
>>>+			    ((char *)&spin[num_queues]->exec_sync - (char *)spin[num_queues]);
>>>+		exec.exec_queue_id = exec_queues[num_queues];
>>>+		exec.address = barrier_spin_addr;
>>>+		xe_exec(fd, &exec);
>>>+		xe_spin_end(spin[num_queues]);
>>>+		xe_wait_ufence(fd, &spin[num_queues]->exec_sync, USER_FENCE_VALUE,
>>>+			       exec_queues[num_queues], fence_timeout);
>>> 	}
>>>
>>> 	/*
>>>@@ -566,6 +585,10 @@ __test_priority(int fd, struct drm_xe_engine_class_instance *eci,
>>> 	for (i = 0; i < num_queues; i++)
>>> 		xe_exec_queue_destroy(fd, exec_queues[i]);
>>>+	/* Destroy the extra queue */
>>>+	if (flags & DYN_PRIORITY)
>>>+		xe_exec_queue_destroy(fd, exec_queues[num_queues]);
>>>+
>>> 	munmap(bo_map, bo_size);
>>> 	gem_close(fd, bo);
>
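
P.S. For anyone reading along: the sync.addr arithmetic in the barrier
hunk is just an open-coded offsetof(). A sketch of the equivalence
(illustration only, not part of the patch):

	#include <stddef.h>	/* offsetof */

	/* Byte offset of exec_sync within the spinner, by pointer math: */
	sync.addr = barrier_spin_addr +
		    ((char *)&spin[num_queues]->exec_sync -
		     (char *)spin[num_queues]);

	/* The same offset, written with offsetof(): */
	sync.addr = barrier_spin_addr + offsetof(struct xe_spin, exec_sync);

Both compute the GPU address of the exec_sync user-fence word inside
the barrier spinner, which is signaled when the barrier job completes.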