* [PATCH 0/3] tests/intel/xe_exec_reset: Validate multi-queue timestamping
@ 2026-05-06 18:31 Niranjana Vishwanathapura
2026-05-06 18:31 ` [PATCH 1/3] lib/xe/xe_spin: Enhance multi-queue switch option Niranjana Vishwanathapura
` (2 more replies)
0 siblings, 3 replies; 13+ messages in thread
From: Niranjana Vishwanathapura @ 2026-05-06 18:31 UTC (permalink / raw)
To: igt-dev
Exec queues in a multi-queue group should use QUEUE_TIMESTAMP instead of
CTX_TIMESTAMP for the run ticks. Add the required support and the tests
to validate this.
This patch series depends on the XeKMD patch series
https://patchwork.freedesktop.org/series/164654/
Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
gta (3):
lib/xe/xe_spin: Enhance multi-queue switch option
lib/xe/xe_spin: Add option for QUEUE_TIMESTAMP
tests/intel/xe_exec_reset: Add multi-queue long spin tests
lib/xe/xe_legacy.c | 2 ++
lib/xe/xe_spin.c | 22 ++++++++++++----------
lib/xe/xe_spin.h | 8 +++++++-
tests/intel/xe_exec_multi_queue.c | 2 +-
tests/intel/xe_exec_reset.c | 17 +++++++++++++++++
5 files changed, 39 insertions(+), 12 deletions(-)
--
2.43.0
^ permalink raw reply [flat|nested] 13+ messages in thread
* [PATCH 1/3] lib/xe/xe_spin: Enhance multi-queue switch option
2026-05-06 18:31 [PATCH 0/3] tests/intel/xe_exec_reset: Validate multi-queue timestamping Niranjana Vishwanathapura
@ 2026-05-06 18:31 ` Niranjana Vishwanathapura
2026-05-07 0:50 ` Wang, X
2026-05-07 19:15 ` Summers, Stuart
2026-05-06 18:31 ` [PATCH 2/3] lib/xe/xe_spin: Add option for QUEUE_TIMESTAMP Niranjana Vishwanathapura
2026-05-06 18:31 ` [PATCH 3/3] tests/intel/xe_exec_reset: Add multi-queue long spin tests Niranjana Vishwanathapura
2 siblings, 2 replies; 13+ messages in thread
From: Niranjana Vishwanathapura @ 2026-05-06 18:31 UTC (permalink / raw)
To: igt-dev
From: gta <gta@DUT4637NVLP.fm.intel.com>
Allow the user to control whether the multi-queue switch should
happen right after the MI_SEMAPHORE_WAIT instruction is parsed,
or only when the instruction fails to acquire the semaphore and
thus has to wait.
Ensure that in xe_exec_multi_queue@priority test, the multi-queue
switch happens only when the spinner has to wait for the semaphore.
Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
---
lib/xe/xe_spin.c | 13 +++++++------
lib/xe/xe_spin.h | 6 +++++-
tests/intel/xe_exec_multi_queue.c | 2 +-
3 files changed, 13 insertions(+), 8 deletions(-)
diff --git a/lib/xe/xe_spin.c b/lib/xe/xe_spin.c
index 4dc110c222..14952ca90e 100644
--- a/lib/xe/xe_spin.c
+++ b/lib/xe/xe_spin.c
@@ -179,14 +179,15 @@ void xe_spin_init(struct xe_spin *spin, struct xe_spin_opts *opts)
* Insert a MI_SEMAPHORE_WAIT_CMD instruction with condition controlled
* by the user which acts as a queue switch point in multi queue mode.
*/
- if (opts->multi_queue_switch) {
+ if (opts->multi_queue_switch || opts->multi_queue_switch_on_wait) {
uint64_t wait_addr = opts->addr + offsetof(struct xe_spin, wait_cond);
+ uint32_t sema_cmd = MI_SEMAPHORE_WAIT_CMD | MI_SEMAPHORE_POLL |
+ MI_SEMAPHORE_SAD_EQ_SDD | 3;
- spin->batch[b++] = MI_SEMAPHORE_WAIT_CMD |
- MI_SEMAPHORE_POLL |
- MI_SEMAPHORE_QUEUE_SWITCH_MODE |
- MI_SEMAPHORE_SAD_EQ_SDD |
- 3;
+ if (opts->multi_queue_switch_on_wait)
+ sema_cmd |= MI_SEMAPHORE_QUEUE_SWITCH_MODE;
+
+ spin->batch[b++] = sema_cmd;
spin->batch[b++] = 0;
spin->batch[b++] = wait_addr;
spin->batch[b++] = wait_addr >> 32;
diff --git a/lib/xe/xe_spin.h b/lib/xe/xe_spin.h
index 31154997b9..db0febd8ab 100644
--- a/lib/xe/xe_spin.h
+++ b/lib/xe/xe_spin.h
@@ -46,7 +46,10 @@ struct xe_spin_mem_copy {
* struct xe_spin_opts
* @addr: offset of spinner within vm
* @preempt: allow spinner to be preempted or not
- * @multi_queue_switch: Add a multi-queue switch point
+ * @multi_queue_switch: Add a SEMAPHORE_WAIT multi-queue switch point
+ * and have the queue switch happen after command is parsed.
+ * @multi_queue_switch_on_wait: Add a SEMAPHORE_WAIT multi-queue switch point
+ * and have the queue switch only happen if waiting on the semaphore.
* @ctx_ticks: number of ticks after which spinner is stopped, applied if > 0
* @mem_copy: container of objects used for memory copy (optional)
*
@@ -56,6 +59,7 @@ struct xe_spin_opts {
uint64_t addr;
bool preempt;
bool multi_queue_switch;
+ bool multi_queue_switch_on_wait;
uint32_t ctx_ticks;
bool write_timestamp;
struct xe_spin_mem_copy *mem_copy;
diff --git a/tests/intel/xe_exec_multi_queue.c b/tests/intel/xe_exec_multi_queue.c
index 0479554bb6..6f90c3e4e8 100644
--- a/tests/intel/xe_exec_multi_queue.c
+++ b/tests/intel/xe_exec_multi_queue.c
@@ -459,7 +459,7 @@ __test_priority(int fd, struct drm_xe_engine_class_instance *eci,
for (i = 0; i < num_queues; i++) {
uint64_t spin_addr = addr + i * sizeof(struct xe_spin);
- xe_spin_init_opts(spin[i], .addr = spin_addr, .multi_queue_switch = true,
+ xe_spin_init_opts(spin[i], .addr = spin_addr, .multi_queue_switch_on_wait = true,
.write_timestamp = true);
/*
* Pre-set all spinners to preempt-wait so each queue, once
--
2.43.0
* [PATCH 2/3] lib/xe/xe_spin: Add option for QUEUE_TIMESTAMP
2026-05-06 18:31 [PATCH 0/3] tests/intel/xe_exec_reset: Validate multi-queue timestamping Niranjana Vishwanathapura
2026-05-06 18:31 ` [PATCH 1/3] lib/xe/xe_spin: Enhance multi-queue switch option Niranjana Vishwanathapura
@ 2026-05-06 18:31 ` Niranjana Vishwanathapura
2026-05-07 19:23 ` Summers, Stuart
2026-05-06 18:31 ` [PATCH 3/3] tests/intel/xe_exec_reset: Add multi-queue long spin tests Niranjana Vishwanathapura
2 siblings, 1 reply; 13+ messages in thread
From: Niranjana Vishwanathapura @ 2026-05-06 18:31 UTC (permalink / raw)
To: igt-dev
From: gta <gta@DUT4637NVLP.fm.intel.com>
In the multi-queue case, the CTX_TIMESTAMP register does not
provide the running time of the individual queues of a group.
Provide an option for the spinner to read the timestamp from
the QUEUE_TIMESTAMP register instead, which is useful in
multi-queue scenarios.
Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
---
lib/xe/xe_spin.c | 9 +++++----
lib/xe/xe_spin.h | 2 ++
2 files changed, 7 insertions(+), 4 deletions(-)
diff --git a/lib/xe/xe_spin.c b/lib/xe/xe_spin.c
index 14952ca90e..8743107898 100644
--- a/lib/xe/xe_spin.c
+++ b/lib/xe/xe_spin.c
@@ -25,7 +25,8 @@
#define MI_LRI_CS_MMIO (1 << 19)
#define MI_LRR_DST_CS_MMIO (1 << 19)
#define MI_LRR_SRC_CS_MMIO (1 << 18)
-#define CTX_TIMESTAMP 0x3a8
+#define CTX_TIMESTAMP 0x3a8
+#define QUEUE_TIMESTAMP 0x4c0
#define CS_GPR(x) (0x600 + 8 * (x))
enum { START_TS, NOW_TS };
@@ -67,7 +68,7 @@ void xe_spin_init(struct xe_spin *spin, struct xe_spin_opts *opts)
spin->batch[b++] = CS_GPR(START_TS) + 4;
spin->batch[b++] = 0;
spin->batch[b++] = MI_LOAD_REGISTER_REG | MI_LRR_DST_CS_MMIO | MI_LRR_SRC_CS_MMIO;
- spin->batch[b++] = CTX_TIMESTAMP;
+ spin->batch[b++] = opts->use_queue_timestamp ? QUEUE_TIMESTAMP : CTX_TIMESTAMP;
spin->batch[b++] = CS_GPR(START_TS);
}
@@ -83,7 +84,7 @@ void xe_spin_init(struct xe_spin *spin, struct xe_spin_opts *opts)
if (opts->write_timestamp) {
spin->batch[b++] = MI_LOAD_REGISTER_REG | MI_LRR_DST_CS_MMIO | MI_LRR_SRC_CS_MMIO;
- spin->batch[b++] = CTX_TIMESTAMP;
+ spin->batch[b++] = opts->use_queue_timestamp ? QUEUE_TIMESTAMP : CTX_TIMESTAMP;
spin->batch[b++] = CS_GPR(NOW_TS);
spin->batch[b++] = MI_STORE_REGISTER_MEM_GEN8 | MI_SRM_CS_MMIO;
@@ -97,7 +98,7 @@ void xe_spin_init(struct xe_spin *spin, struct xe_spin_opts *opts)
spin->batch[b++] = CS_GPR(NOW_TS) + 4;
spin->batch[b++] = 0;
spin->batch[b++] = MI_LOAD_REGISTER_REG | MI_LRR_DST_CS_MMIO | MI_LRR_SRC_CS_MMIO;
- spin->batch[b++] = CTX_TIMESTAMP;
+ spin->batch[b++] = opts->use_queue_timestamp ? QUEUE_TIMESTAMP : CTX_TIMESTAMP;
spin->batch[b++] = CS_GPR(NOW_TS);
/* delta = now - start; inverted to match COND_BBE */
diff --git a/lib/xe/xe_spin.h b/lib/xe/xe_spin.h
index db0febd8ab..75a6ada420 100644
--- a/lib/xe/xe_spin.h
+++ b/lib/xe/xe_spin.h
@@ -51,6 +51,7 @@ struct xe_spin_mem_copy {
* @multi_queue_switch_on_wait: Add a SEMAPHORE_WAIT multi-queue switch point
* and have the queue switch only happen if waiting on the semaphore.
* @ctx_ticks: number of ticks after which spinner is stopped, applied if > 0
+ * @use_queue_timestamp: Use QUEUE_TIMESTAMP register instead of CTX_TIMESTAMP
* @mem_copy: container of objects used for memory copy (optional)
*
* Used to initialize struct xe_spin spinner behavior.
@@ -62,6 +63,7 @@ struct xe_spin_opts {
bool multi_queue_switch_on_wait;
uint32_t ctx_ticks;
bool write_timestamp;
+ bool use_queue_timestamp;
struct xe_spin_mem_copy *mem_copy;
};
--
2.43.0
* [PATCH 3/3] tests/intel/xe_exec_reset: Add multi-queue long spin tests
2026-05-06 18:31 [PATCH 0/3] tests/intel/xe_exec_reset: Validate multi-queue timestamping Niranjana Vishwanathapura
2026-05-06 18:31 ` [PATCH 1/3] lib/xe/xe_spin: Enhance multi-queue switch option Niranjana Vishwanathapura
2026-05-06 18:31 ` [PATCH 2/3] lib/xe/xe_spin: Add option for QUEUE_TIMESTAMP Niranjana Vishwanathapura
@ 2026-05-06 18:31 ` Niranjana Vishwanathapura
2026-05-07 19:37 ` Summers, Stuart
2 siblings, 1 reply; 13+ messages in thread
From: Niranjana Vishwanathapura @ 2026-05-06 18:31 UTC (permalink / raw)
To: igt-dev
From: gta <gta@DUT4637NVLP.fm.intel.com>
Add the following multi-queue tests and update the
xe_legacy_test_mode() function to use the use_queue_timestamp and
multi_queue_switch spinner options in multi-queue scenarios.
multi-queue-long-spin-many-preempt
multi-queue-long-spin-reuse-many-preempt
Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
---
lib/xe/xe_legacy.c | 2 ++
tests/intel/xe_exec_reset.c | 17 +++++++++++++++++
2 files changed, 19 insertions(+)
diff --git a/lib/xe/xe_legacy.c b/lib/xe/xe_legacy.c
index f9bd5bcb61..b7054756cd 100644
--- a/lib/xe/xe_legacy.c
+++ b/lib/xe/xe_legacy.c
@@ -67,6 +67,8 @@ xe_legacy_test_mode(int fd, struct drm_xe_engine_class_instance *eci,
} *data;
struct xe_spin_opts spin_opts = {
.preempt = flags & PREEMPT,
+ .multi_queue_switch = flags & MULTI_QUEUE,
+ .use_queue_timestamp = flags & MULTI_QUEUE,
#define THREE_SEC (3 * 1000000000ull)
.ctx_ticks = flags & LONG_SPIN ?
xe_spin_nsec_to_ticks(fd, 0, THREE_SEC) : 0,
diff --git a/tests/intel/xe_exec_reset.c b/tests/intel/xe_exec_reset.c
index a3cf290abf..f7ae94a424 100644
--- a/tests/intel/xe_exec_reset.c
+++ b/tests/intel/xe_exec_reset.c
@@ -1179,6 +1179,23 @@ int igt_main()
MULTI_QUEUE);
}
+ igt_subtest("multi-queue-long-spin-many-preempt")
+ xe_for_each_multi_queue_engine(fd, hwe) {
+ xe_legacy_test_mode(fd, hwe, 4, 8,
+ LONG_SPIN | MULTI_QUEUE,
+ LEGACY_MODE_ADDR, false);
+ break;
+ }
+
+ igt_subtest("multi-queue-long-spin-reuse-many-preempt")
+ xe_for_each_multi_queue_engine(fd, hwe) {
+ xe_legacy_test_mode(fd, hwe, 4, 8,
+ LONG_SPIN | MULTI_QUEUE |
+ LONG_SPIN_REUSE_QUEUE,
+ LEGACY_MODE_ADDR, false);
+ break;
+ }
+
igt_fixture()
drm_close_driver(fd);
}
--
2.43.0
* Re: [PATCH 1/3] lib/xe/xe_spin: Enhance multi-queue switch option
2026-05-06 18:31 ` [PATCH 1/3] lib/xe/xe_spin: Enhance multi-queue switch option Niranjana Vishwanathapura
@ 2026-05-07 0:50 ` Wang, X
2026-05-07 19:15 ` Summers, Stuart
1 sibling, 0 replies; 13+ messages in thread
From: Wang, X @ 2026-05-07 0:50 UTC (permalink / raw)
To: Niranjana Vishwanathapura, igt-dev
On 5/6/2026 11:31, Niranjana Vishwanathapura wrote:
> From: gta <gta@DUT4637NVLP.fm.intel.com>
Hi Niranjana,
It looks like each patch has a stray "From: gta
<gta@DUT4637NVLP.fm.intel.com>"
line in the commit message body, likely due to a local git config mismatch.
Please fix and resend as v2.
Xin
> Allow user to control whether the multi-queue switch should
> happen after parsing the MI_SEMAPHORE_WAIT instruction or
> only if the instruction is unsuccessful in getting the
> semaphore, thus having to wait.
>
> Ensure that in xe_exec_multi_queue@priority test, the multi-queue
> switch happens only when the spinner has to wait for the semaphore.
>
> Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
> ---
> lib/xe/xe_spin.c | 13 +++++++------
> lib/xe/xe_spin.h | 6 +++++-
> tests/intel/xe_exec_multi_queue.c | 2 +-
> 3 files changed, 13 insertions(+), 8 deletions(-)
>
> diff --git a/lib/xe/xe_spin.c b/lib/xe/xe_spin.c
> index 4dc110c222..14952ca90e 100644
> --- a/lib/xe/xe_spin.c
> +++ b/lib/xe/xe_spin.c
> @@ -179,14 +179,15 @@ void xe_spin_init(struct xe_spin *spin, struct xe_spin_opts *opts)
> * Insert a MI_SEMAPHORE_WAIT_CMD instruction with condition controlled
> * by the user which acts as a queue switch point in multi queue mode.
> */
> - if (opts->multi_queue_switch) {
> + if (opts->multi_queue_switch || opts->multi_queue_switch_on_wait) {
> uint64_t wait_addr = opts->addr + offsetof(struct xe_spin, wait_cond);
> + uint32_t sema_cmd = MI_SEMAPHORE_WAIT_CMD | MI_SEMAPHORE_POLL |
> + MI_SEMAPHORE_SAD_EQ_SDD | 3;
>
> - spin->batch[b++] = MI_SEMAPHORE_WAIT_CMD |
> - MI_SEMAPHORE_POLL |
> - MI_SEMAPHORE_QUEUE_SWITCH_MODE |
> - MI_SEMAPHORE_SAD_EQ_SDD |
> - 3;
> + if (opts->multi_queue_switch_on_wait)
> + sema_cmd |= MI_SEMAPHORE_QUEUE_SWITCH_MODE;
> +
> + spin->batch[b++] = sema_cmd;
> spin->batch[b++] = 0;
> spin->batch[b++] = wait_addr;
> spin->batch[b++] = wait_addr >> 32;
> diff --git a/lib/xe/xe_spin.h b/lib/xe/xe_spin.h
> index 31154997b9..db0febd8ab 100644
> --- a/lib/xe/xe_spin.h
> +++ b/lib/xe/xe_spin.h
> @@ -46,7 +46,10 @@ struct xe_spin_mem_copy {
> * struct xe_spin_opts
> * @addr: offset of spinner within vm
> * @preempt: allow spinner to be preempted or not
> - * @multi_queue_switch: Add a multi-queue switch point
> + * @multi_queue_switch: Add a SEMAPHORE_WAIT multi-queue switch point
> + * and have the queue switch happen after command is parsed.
> + * @multi_queue_switch_on_wait: Add a SEMAPHORE_WAIT multi-queue switch point
> + * and have the queue switch only happen if waiting on the semaphore.
> * @ctx_ticks: number of ticks after which spinner is stopped, applied if > 0
> * @mem_copy: container of objects used for memory copy (optional)
> *
> @@ -56,6 +59,7 @@ struct xe_spin_opts {
> uint64_t addr;
> bool preempt;
> bool multi_queue_switch;
> + bool multi_queue_switch_on_wait;
> uint32_t ctx_ticks;
> bool write_timestamp;
> struct xe_spin_mem_copy *mem_copy;
> diff --git a/tests/intel/xe_exec_multi_queue.c b/tests/intel/xe_exec_multi_queue.c
> index 0479554bb6..6f90c3e4e8 100644
> --- a/tests/intel/xe_exec_multi_queue.c
> +++ b/tests/intel/xe_exec_multi_queue.c
> @@ -459,7 +459,7 @@ __test_priority(int fd, struct drm_xe_engine_class_instance *eci,
> for (i = 0; i < num_queues; i++) {
> uint64_t spin_addr = addr + i * sizeof(struct xe_spin);
>
> - xe_spin_init_opts(spin[i], .addr = spin_addr, .multi_queue_switch = true,
> + xe_spin_init_opts(spin[i], .addr = spin_addr, .multi_queue_switch_on_wait = true,
> .write_timestamp = true);
> /*
> * Pre-set all spinners to preempt-wait so each queue, once
* Re: [PATCH 1/3] lib/xe/xe_spin: Enhance multi-queue switch option
2026-05-06 18:31 ` [PATCH 1/3] lib/xe/xe_spin: Enhance multi-queue switch option Niranjana Vishwanathapura
2026-05-07 0:50 ` Wang, X
@ 2026-05-07 19:15 ` Summers, Stuart
2026-05-07 20:11 ` Niranjana Vishwanathapura
1 sibling, 1 reply; 13+ messages in thread
From: Summers, Stuart @ 2026-05-07 19:15 UTC (permalink / raw)
To: igt-dev@lists.freedesktop.org, Vishwanathapura, Niranjana
On Wed, 2026-05-06 at 11:31 -0700, Niranjana Vishwanathapura wrote:
> From: gta <gta@DUT4637NVLP.fm.intel.com>
>
> Allow user to control whether the multi-queue switch should
> happen after parsing the MI_SEMAPHORE_WAIT instruction or
> only if the instruction is unsuccessful in getting the
> semaphore, thus having to wait.
>
> Ensure that in xe_exec_multi_queue@priority test, the multi-queue
> switch happens only when the spinner has to wait for the semaphore.
>
> Signed-off-by: Niranjana Vishwanathapura
> <niranjana.vishwanathapura@intel.com>
> ---
> lib/xe/xe_spin.c | 13 +++++++------
> lib/xe/xe_spin.h | 6 +++++-
> tests/intel/xe_exec_multi_queue.c | 2 +-
> 3 files changed, 13 insertions(+), 8 deletions(-)
>
> diff --git a/lib/xe/xe_spin.c b/lib/xe/xe_spin.c
> index 4dc110c222..14952ca90e 100644
> --- a/lib/xe/xe_spin.c
> +++ b/lib/xe/xe_spin.c
> @@ -179,14 +179,15 @@ void xe_spin_init(struct xe_spin *spin, struct
> xe_spin_opts *opts)
> * Insert a MI_SEMAPHORE_WAIT_CMD instruction with condition
> controlled
> * by the user which acts as a queue switch point in multi
> queue mode.
> */
> - if (opts->multi_queue_switch) {
> + if (opts->multi_queue_switch || opts-
> >multi_queue_switch_on_wait) {
I don't have any problem with the way you're doing this. I can see we
might want to use this without multi queue though at some point, so it
might be nice to have the switch_on_wait as an option on top of
multi_queue_switch - basically rename multi_queue_switch to
semaphore_wait or something and then add the multi_queue_switch_on_wait
as an additional option. But that also means we have to keep track of a
separate parameter in xe_spin_init_opts() and since multi queue is the
only user right now, it might be overkill.
Otherwise LGTM. Please apply the author-related changes Xin had
requested before merging:
Reviewed-by: Stuart Summers <stuart.summers@intel.com>
> uint64_t wait_addr = opts->addr + offsetof(struct
> xe_spin, wait_cond);
> + uint32_t sema_cmd = MI_SEMAPHORE_WAIT_CMD |
> MI_SEMAPHORE_POLL |
> + MI_SEMAPHORE_SAD_EQ_SDD | 3;
>
> - spin->batch[b++] = MI_SEMAPHORE_WAIT_CMD |
> - MI_SEMAPHORE_POLL |
> - MI_SEMAPHORE_QUEUE_SWITCH_MODE |
> - MI_SEMAPHORE_SAD_EQ_SDD |
> - 3;
> + if (opts->multi_queue_switch_on_wait)
> + sema_cmd |= MI_SEMAPHORE_QUEUE_SWITCH_MODE;
> +
> + spin->batch[b++] = sema_cmd;
> spin->batch[b++] = 0;
> spin->batch[b++] = wait_addr;
> spin->batch[b++] = wait_addr >> 32;
> diff --git a/lib/xe/xe_spin.h b/lib/xe/xe_spin.h
> index 31154997b9..db0febd8ab 100644
> --- a/lib/xe/xe_spin.h
> +++ b/lib/xe/xe_spin.h
> @@ -46,7 +46,10 @@ struct xe_spin_mem_copy {
> * struct xe_spin_opts
> * @addr: offset of spinner within vm
> * @preempt: allow spinner to be preempted or not
> - * @multi_queue_switch: Add a multi-queue switch point
> + * @multi_queue_switch: Add a SEMAPHORE_WAIT multi-queue switch
> point
> + * and have the queue switch happen after command is parsed.
> + * @multi_queue_switch_on_wait: Add a SEMAPHORE_WAIT multi-queue
> switch point
> + * and have the queue switch only happen if waiting on the
> semaphore.
> * @ctx_ticks: number of ticks after which spinner is stopped,
> applied if > 0
> * @mem_copy: container of objects used for memory copy (optional)
> *
> @@ -56,6 +59,7 @@ struct xe_spin_opts {
> uint64_t addr;
> bool preempt;
> bool multi_queue_switch;
> + bool multi_queue_switch_on_wait;
> uint32_t ctx_ticks;
> bool write_timestamp;
> struct xe_spin_mem_copy *mem_copy;
> diff --git a/tests/intel/xe_exec_multi_queue.c
> b/tests/intel/xe_exec_multi_queue.c
> index 0479554bb6..6f90c3e4e8 100644
> --- a/tests/intel/xe_exec_multi_queue.c
> +++ b/tests/intel/xe_exec_multi_queue.c
> @@ -459,7 +459,7 @@ __test_priority(int fd, struct
> drm_xe_engine_class_instance *eci,
> for (i = 0; i < num_queues; i++) {
> uint64_t spin_addr = addr + i * sizeof(struct
> xe_spin);
>
> - xe_spin_init_opts(spin[i], .addr = spin_addr,
> .multi_queue_switch = true,
> + xe_spin_init_opts(spin[i], .addr = spin_addr,
> .multi_queue_switch_on_wait = true,
> .write_timestamp = true);
> /*
> * Pre-set all spinners to preempt-wait so each
> queue, once
* Re: [PATCH 2/3] lib/xe/xe_spin: Add option for QUEUE_TIMESTAMP
2026-05-06 18:31 ` [PATCH 2/3] lib/xe/xe_spin: Add option for QUEUE_TIMESTAMP Niranjana Vishwanathapura
@ 2026-05-07 19:23 ` Summers, Stuart
2026-05-07 19:41 ` Umesh Nerlige Ramappa
0 siblings, 1 reply; 13+ messages in thread
From: Summers, Stuart @ 2026-05-07 19:23 UTC (permalink / raw)
To: igt-dev@lists.freedesktop.org, Vishwanathapura, Niranjana
On Wed, 2026-05-06 at 11:31 -0700, Niranjana Vishwanathapura wrote:
> From: gta <gta@DUT4637NVLP.fm.intel.com>
>
> In multi-queue case, CTX_TIMESTAMP register does not
> provide the running time of individual queues of a group.
> Provide the option for spinner to read timestamp from the
> QUEUE_TIMESTAMP registers instead which is useful in the
> multi-queue scenarios.
>
> Signed-off-by: Niranjana Vishwanathapura
> <niranjana.vishwanathapura@intel.com>
> ---
> lib/xe/xe_spin.c | 9 +++++----
> lib/xe/xe_spin.h | 2 ++
> 2 files changed, 7 insertions(+), 4 deletions(-)
>
> diff --git a/lib/xe/xe_spin.c b/lib/xe/xe_spin.c
> index 14952ca90e..8743107898 100644
> --- a/lib/xe/xe_spin.c
> +++ b/lib/xe/xe_spin.c
> @@ -25,7 +25,8 @@
> #define MI_LRI_CS_MMIO (1 << 19)
> #define MI_LRR_DST_CS_MMIO (1 << 19)
> #define MI_LRR_SRC_CS_MMIO (1 << 18)
> -#define CTX_TIMESTAMP 0x3a8
> +#define CTX_TIMESTAMP 0x3a8
> +#define QUEUE_TIMESTAMP 0x4c0
> #define CS_GPR(x) (0x600 + 8 * (x))
>
> enum { START_TS, NOW_TS };
> @@ -67,7 +68,7 @@ void xe_spin_init(struct xe_spin *spin, struct
> xe_spin_opts *opts)
> spin->batch[b++] = CS_GPR(START_TS) + 4;
> spin->batch[b++] = 0;
> spin->batch[b++] = MI_LOAD_REGISTER_REG |
> MI_LRR_DST_CS_MMIO | MI_LRR_SRC_CS_MMIO;
> - spin->batch[b++] = CTX_TIMESTAMP;
> + spin->batch[b++] = opts->use_queue_timestamp ?
> QUEUE_TIMESTAMP : CTX_TIMESTAMP;
I see in [1] we are programming this register in the ring also. Will
this cause issues if we overwrite this here? I see we do the same thing
for the context timestamp, so probably no issue here. But it would be
nice to document somehow the expected interaction between the kernel
ring programming and the user batch programming if any.
[1]:
https://patchwork.freedesktop.org/patch/723502/?series=164654&rev=5
> spin->batch[b++] = CS_GPR(START_TS);
> }
>
> @@ -83,7 +84,7 @@ void xe_spin_init(struct xe_spin *spin, struct
> xe_spin_opts *opts)
>
> if (opts->write_timestamp) {
> spin->batch[b++] = MI_LOAD_REGISTER_REG |
> MI_LRR_DST_CS_MMIO | MI_LRR_SRC_CS_MMIO;
> - spin->batch[b++] = CTX_TIMESTAMP;
> + spin->batch[b++] = opts->use_queue_timestamp ?
> QUEUE_TIMESTAMP : CTX_TIMESTAMP;
> spin->batch[b++] = CS_GPR(NOW_TS);
>
> spin->batch[b++] = MI_STORE_REGISTER_MEM_GEN8 |
> MI_SRM_CS_MMIO;
> @@ -97,7 +98,7 @@ void xe_spin_init(struct xe_spin *spin, struct
> xe_spin_opts *opts)
> spin->batch[b++] = CS_GPR(NOW_TS) + 4;
> spin->batch[b++] = 0;
> spin->batch[b++] = MI_LOAD_REGISTER_REG |
> MI_LRR_DST_CS_MMIO | MI_LRR_SRC_CS_MMIO;
> - spin->batch[b++] = CTX_TIMESTAMP;
> + spin->batch[b++] = opts->use_queue_timestamp ?
> QUEUE_TIMESTAMP : CTX_TIMESTAMP;
> spin->batch[b++] = CS_GPR(NOW_TS);
>
> /* delta = now - start; inverted to match COND_BBE */
> diff --git a/lib/xe/xe_spin.h b/lib/xe/xe_spin.h
> index db0febd8ab..75a6ada420 100644
> --- a/lib/xe/xe_spin.h
> +++ b/lib/xe/xe_spin.h
> @@ -51,6 +51,7 @@ struct xe_spin_mem_copy {
> * @multi_queue_switch_on_wait: Add a SEMAPHORE_WAIT multi-queue
> switch point
> * and have the queue switch only happen if waiting on the
> semaphore.
> * @ctx_ticks: number of ticks after which spinner is stopped,
> applied if > 0
> + * @use_queue_timestamp: Use QUEUE_TIMESTAMP register instead of
> CTX_TIMESTAMP
> * @mem_copy: container of objects used for memory copy (optional)
> *
> * Used to initialize struct xe_spin spinner behavior.
> @@ -62,6 +63,7 @@ struct xe_spin_opts {
> bool multi_queue_switch_on_wait;
> uint32_t ctx_ticks;
> bool write_timestamp;
> + bool use_queue_timestamp;
> struct xe_spin_mem_copy *mem_copy;
> };
>
* Re: [PATCH 3/3] tests/intel/xe_exec_reset: Add multi-queue long spin tests
2026-05-06 18:31 ` [PATCH 3/3] tests/intel/xe_exec_reset: Add multi-queue long spin tests Niranjana Vishwanathapura
@ 2026-05-07 19:37 ` Summers, Stuart
2026-05-07 20:11 ` Niranjana Vishwanathapura
0 siblings, 1 reply; 13+ messages in thread
From: Summers, Stuart @ 2026-05-07 19:37 UTC (permalink / raw)
To: igt-dev@lists.freedesktop.org, Vishwanathapura, Niranjana
On Wed, 2026-05-06 at 11:31 -0700, Niranjana Vishwanathapura wrote:
> From: gta <gta@DUT4637NVLP.fm.intel.com>
>
> Add the following multi-queue tests and update xe_legacy_test_mode()
> function to use queue_timestamp and multi_queue_switch spinner
> options
> for the multi-queue scenarios.
>
> multi-queue-long-spin-many-preempt
> multi-queue-long-spin-reuse-many-preempt
>
> Signed-off-by: Niranjana Vishwanathapura
> <niranjana.vishwanathapura@intel.com>
> ---
> lib/xe/xe_legacy.c | 2 ++
> tests/intel/xe_exec_reset.c | 17 +++++++++++++++++
> 2 files changed, 19 insertions(+)
>
> diff --git a/lib/xe/xe_legacy.c b/lib/xe/xe_legacy.c
> index f9bd5bcb61..b7054756cd 100644
> --- a/lib/xe/xe_legacy.c
> +++ b/lib/xe/xe_legacy.c
> @@ -67,6 +67,8 @@ xe_legacy_test_mode(int fd, struct
> drm_xe_engine_class_instance *eci,
> } *data;
> struct xe_spin_opts spin_opts = {
> .preempt = flags & PREEMPT,
> + .multi_queue_switch = flags & MULTI_QUEUE,
> + .use_queue_timestamp = flags & MULTI_QUEUE,
> #define THREE_SEC (3 * 1000000000ull)
> .ctx_ticks = flags & LONG_SPIN ?
> xe_spin_nsec_to_ticks(fd, 0, THREE_SEC) : 0,
> diff --git a/tests/intel/xe_exec_reset.c
> b/tests/intel/xe_exec_reset.c
> index a3cf290abf..f7ae94a424 100644
> --- a/tests/intel/xe_exec_reset.c
> +++ b/tests/intel/xe_exec_reset.c
> @@ -1179,6 +1179,23 @@ int igt_main()
> MULTI_QUEUE);
> }
>
> + igt_subtest("multi-queue-long-spin-many-preempt")
These need to be documented now. The existing non-multi-queue tests
talk about preemption being the deciding factor in ensuring we aren't
getting engine resets. So here it would be nice to call out lite
restore explicitly.
Other than documentation the changes look good.
Thanks,
Stuart
> + xe_for_each_multi_queue_engine(fd, hwe) {
> + xe_legacy_test_mode(fd, hwe, 4, 8,
> + LONG_SPIN | MULTI_QUEUE,
> + LEGACY_MODE_ADDR, false);
> + break;
> + }
> +
> + igt_subtest("multi-queue-long-spin-reuse-many-preempt")
> + xe_for_each_multi_queue_engine(fd, hwe) {
> + xe_legacy_test_mode(fd, hwe, 4, 8,
> + LONG_SPIN | MULTI_QUEUE |
> + LONG_SPIN_REUSE_QUEUE,
> + LEGACY_MODE_ADDR, false);
> + break;
> + }
> +
> igt_fixture()
> drm_close_driver(fd);
> }
* Re: [PATCH 2/3] lib/xe/xe_spin: Add option for QUEUE_TIMESTAMP
2026-05-07 19:23 ` Summers, Stuart
@ 2026-05-07 19:41 ` Umesh Nerlige Ramappa
2026-05-07 20:05 ` Summers, Stuart
2026-05-07 20:07 ` Niranjana Vishwanathapura
0 siblings, 2 replies; 13+ messages in thread
From: Umesh Nerlige Ramappa @ 2026-05-07 19:41 UTC (permalink / raw)
To: Summers, Stuart; +Cc: igt-dev@lists.freedesktop.org, Vishwanathapura, Niranjana
On Thu, May 07, 2026 at 07:23:50PM +0000, Summers, Stuart wrote:
>On Wed, 2026-05-06 at 11:31 -0700, Niranjana Vishwanathapura wrote:
>> From: gta <gta@DUT4637NVLP.fm.intel.com>
>>
>> In multi-queue case, CTX_TIMESTAMP register does not
>> provide the running time of individual queues of a group.
>> Provide the option for spinner to read timestamp from the
>> QUEUE_TIMESTAMP registers instead which is useful in the
>> multi-queue scenarios.
>>
>> Signed-off-by: Niranjana Vishwanathapura
>> <niranjana.vishwanathapura@intel.com>
>> ---
>> lib/xe/xe_spin.c | 9 +++++----
>> lib/xe/xe_spin.h | 2 ++
>> 2 files changed, 7 insertions(+), 4 deletions(-)
>>
>> diff --git a/lib/xe/xe_spin.c b/lib/xe/xe_spin.c
>> index 14952ca90e..8743107898 100644
>> --- a/lib/xe/xe_spin.c
>> +++ b/lib/xe/xe_spin.c
>> @@ -25,7 +25,8 @@
>> #define MI_LRI_CS_MMIO (1 << 19)
>> #define MI_LRR_DST_CS_MMIO (1 << 19)
>> #define MI_LRR_SRC_CS_MMIO (1 << 18)
>> -#define CTX_TIMESTAMP 0x3a8
>> +#define CTX_TIMESTAMP 0x3a8
>> +#define QUEUE_TIMESTAMP 0x4c0
>> #define CS_GPR(x) (0x600 + 8 * (x))
>>
>> enum { START_TS, NOW_TS };
>> @@ -67,7 +68,7 @@ void xe_spin_init(struct xe_spin *spin, struct
>> xe_spin_opts *opts)
>> spin->batch[b++] = CS_GPR(START_TS) + 4;
>> spin->batch[b++] = 0;
>> spin->batch[b++] = MI_LOAD_REGISTER_REG |
>> MI_LRR_DST_CS_MMIO | MI_LRR_SRC_CS_MMIO;
>> - spin->batch[b++] = CTX_TIMESTAMP;
>> + spin->batch[b++] = opts->use_queue_timestamp ?
>> QUEUE_TIMESTAMP : CTX_TIMESTAMP;
>
>I see in [1] we are programming this register in the ring also. Will
>this cause issues if we overwrite this here? I see we do the same thing
>for the context timestamp, so probably no issue here. But it would be
>nice to document somehow the expected interaction between the kernel
>ring programming and the user batch programming if any.
>
>[1]:
>https://patchwork.freedesktop.org/patch/723502/?series=164654&rev=5
The MI_LOAD_REGISTER_REG source is TIMESTAMP register and destination is
the GPR reg, so we should be fine. If we were writing to the TIMESTAMP
registers, then that's going to mess up counts.
Regards,
Umesh
>
>> spin->batch[b++] = CS_GPR(START_TS);
>> }
>>
>> @@ -83,7 +84,7 @@ void xe_spin_init(struct xe_spin *spin, struct
>> xe_spin_opts *opts)
>>
>> if (opts->write_timestamp) {
>> spin->batch[b++] = MI_LOAD_REGISTER_REG |
>> MI_LRR_DST_CS_MMIO | MI_LRR_SRC_CS_MMIO;
>> - spin->batch[b++] = CTX_TIMESTAMP;
>> + spin->batch[b++] = opts->use_queue_timestamp ?
>> QUEUE_TIMESTAMP : CTX_TIMESTAMP;
>> spin->batch[b++] = CS_GPR(NOW_TS);
>>
>> spin->batch[b++] = MI_STORE_REGISTER_MEM_GEN8 |
>> MI_SRM_CS_MMIO;
>> @@ -97,7 +98,7 @@ void xe_spin_init(struct xe_spin *spin, struct
>> xe_spin_opts *opts)
>> spin->batch[b++] = CS_GPR(NOW_TS) + 4;
>> spin->batch[b++] = 0;
>> spin->batch[b++] = MI_LOAD_REGISTER_REG |
>> MI_LRR_DST_CS_MMIO | MI_LRR_SRC_CS_MMIO;
>> - spin->batch[b++] = CTX_TIMESTAMP;
>> + spin->batch[b++] = opts->use_queue_timestamp ?
>> QUEUE_TIMESTAMP : CTX_TIMESTAMP;
>> spin->batch[b++] = CS_GPR(NOW_TS);
>>
>> /* delta = now - start; inverted to match COND_BBE */
>> diff --git a/lib/xe/xe_spin.h b/lib/xe/xe_spin.h
>> index db0febd8ab..75a6ada420 100644
>> --- a/lib/xe/xe_spin.h
>> +++ b/lib/xe/xe_spin.h
>> @@ -51,6 +51,7 @@ struct xe_spin_mem_copy {
>> * @multi_queue_switch_on_wait: Add a SEMAPHORE_WAIT multi-queue
>> switch point
>> * and have the queue switch only happen if waiting on the
>> semaphore.
>> * @ctx_ticks: number of ticks after which spinner is stopped,
>> applied if > 0
>> + * @use_queue_timestamp: Use QUEUE_TIMESTAMP register instead of
>> CTX_TIMESTAMP
>> * @mem_copy: container of objects used for memory copy (optional)
>> *
>> * Used to initialize struct xe_spin spinner behavior.
>> @@ -62,6 +63,7 @@ struct xe_spin_opts {
>> bool multi_queue_switch_on_wait;
>> uint32_t ctx_ticks;
>> bool write_timestamp;
>> + bool use_queue_timestamp;
>> struct xe_spin_mem_copy *mem_copy;
>> };
>>
>
* Re: [PATCH 2/3] lib/xe/xe_spin: Add option for QUEUE_TIMESTAMP
2026-05-07 19:41 ` Umesh Nerlige Ramappa
@ 2026-05-07 20:05 ` Summers, Stuart
2026-05-07 20:07 ` Niranjana Vishwanathapura
1 sibling, 0 replies; 13+ messages in thread
From: Summers, Stuart @ 2026-05-07 20:05 UTC (permalink / raw)
To: Nerlige Ramappa, Umesh
Cc: igt-dev@lists.freedesktop.org, Vishwanathapura, Niranjana
On Thu, 2026-05-07 at 12:41 -0700, Umesh Nerlige Ramappa wrote:
> On Thu, May 07, 2026 at 07:23:50PM +0000, Summers, Stuart wrote:
> > On Wed, 2026-05-06 at 11:31 -0700, Niranjana Vishwanathapura wrote:
> > > From: gta <gta@DUT4637NVLP.fm.intel.com>
> > >
> > > In multi-queue case, CTX_TIMESTAMP register does not
> > > provide the running time of individual queues of a group.
> > > Provide the option for spinner to read timestamp from the
> > > QUEUE_TIMESTAMP registers instead which is useful in the
> > > multi-queue scenarios.
> > >
> > > Signed-off-by: Niranjana Vishwanathapura
> > > <niranjana.vishwanathapura@intel.com>
> > > ---
> > > lib/xe/xe_spin.c | 9 +++++----
> > > lib/xe/xe_spin.h | 2 ++
> > > 2 files changed, 7 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/lib/xe/xe_spin.c b/lib/xe/xe_spin.c
> > > index 14952ca90e..8743107898 100644
> > > --- a/lib/xe/xe_spin.c
> > > +++ b/lib/xe/xe_spin.c
> > > @@ -25,7 +25,8 @@
> > > #define MI_LRI_CS_MMIO (1 << 19)
> > > #define MI_LRR_DST_CS_MMIO (1 << 19)
> > > #define MI_LRR_SRC_CS_MMIO (1 << 18)
> > > -#define CTX_TIMESTAMP 0x3a8
> > > +#define CTX_TIMESTAMP 0x3a8
> > > +#define QUEUE_TIMESTAMP 0x4c0
> > > #define CS_GPR(x) (0x600 + 8 * (x))
> > >
> > > enum { START_TS, NOW_TS };
> > > @@ -67,7 +68,7 @@ void xe_spin_init(struct xe_spin *spin, struct
> > > xe_spin_opts *opts)
> > > spin->batch[b++] = CS_GPR(START_TS) + 4;
> > > spin->batch[b++] = 0;
> > > spin->batch[b++] = MI_LOAD_REGISTER_REG |
> > > MI_LRR_DST_CS_MMIO | MI_LRR_SRC_CS_MMIO;
> > > - spin->batch[b++] = CTX_TIMESTAMP;
> > > + spin->batch[b++] = opts->use_queue_timestamp ?
> > > QUEUE_TIMESTAMP : CTX_TIMESTAMP;
> >
> > I see in [1] we are programming this register in the ring also.
> > Will
> > this cause issues if we overwrite this here? I see we do the same
> > thing
> > for the context timestamp, so probably no issue here. But it would
> > be
> > nice to document somehow the expected interaction between the
> > kernel
> > ring programming and the user batch programming if any.
> >
> > [1]:
> > https://patchwork.freedesktop.org/patch/723502/?series=164654&rev=5
>
> The MI_LOAD_REGISTER_REG source is TIMESTAMP register and destination
> is
> the GPR reg, so we should be fine. If we were writing to the
> TIMESTAMP
> registers, then that's going to mess up counts.
Ok got it. So basically we're only ever reading the timestamp and using
the GPR to update for the test specifically. Yeah makes sense.
Reviewed-by: Stuart Summers <stuart.summers@intel.com>
One other thought I have is if we could make use of both the context
and multi queue timestamps together in some cases - like if we want to
test interaction between context switch and lite restore. Right now we
leave that kind of testing to the UMD side and don't cover it in IGT. So
we don't really have any need to do that in the IGT library. And anyway
it would take more than just this one change to implement something
like that. But something that might be interesting to consider for the
future...
-Stuart
>
> Regards,
> Umesh
>
>
> >
> > > spin->batch[b++] = CS_GPR(START_TS);
> > > }
> > >
> > > @@ -83,7 +84,7 @@ void xe_spin_init(struct xe_spin *spin, struct
> > > xe_spin_opts *opts)
> > >
> > > if (opts->write_timestamp) {
> > > spin->batch[b++] = MI_LOAD_REGISTER_REG |
> > > MI_LRR_DST_CS_MMIO | MI_LRR_SRC_CS_MMIO;
> > > - spin->batch[b++] = CTX_TIMESTAMP;
> > > + spin->batch[b++] = opts->use_queue_timestamp ?
> > > QUEUE_TIMESTAMP : CTX_TIMESTAMP;
> > > spin->batch[b++] = CS_GPR(NOW_TS);
> > >
> > > spin->batch[b++] = MI_STORE_REGISTER_MEM_GEN8 |
> > > MI_SRM_CS_MMIO;
> > > @@ -97,7 +98,7 @@ void xe_spin_init(struct xe_spin *spin, struct
> > > xe_spin_opts *opts)
> > > spin->batch[b++] = CS_GPR(NOW_TS) + 4;
> > > spin->batch[b++] = 0;
> > > spin->batch[b++] = MI_LOAD_REGISTER_REG |
> > > MI_LRR_DST_CS_MMIO | MI_LRR_SRC_CS_MMIO;
> > > - spin->batch[b++] = CTX_TIMESTAMP;
> > > + spin->batch[b++] = opts->use_queue_timestamp ?
> > > QUEUE_TIMESTAMP : CTX_TIMESTAMP;
> > > spin->batch[b++] = CS_GPR(NOW_TS);
> > >
> > > /* delta = now - start; inverted to match
> > > COND_BBE */
> > > diff --git a/lib/xe/xe_spin.h b/lib/xe/xe_spin.h
> > > index db0febd8ab..75a6ada420 100644
> > > --- a/lib/xe/xe_spin.h
> > > +++ b/lib/xe/xe_spin.h
> > > @@ -51,6 +51,7 @@ struct xe_spin_mem_copy {
> > > * @multi_queue_switch_on_wait: Add a SEMAPHORE_WAIT multi-queue
> > > switch point
> > > * and have the queue switch only happen if waiting on the
> > > semaphore.
> > > * @ctx_ticks: number of ticks after which spinner is stopped,
> > > applied if > 0
> > > + * @use_queue_timestamp: Use QUEUE_TIMESTAMP register instead of
> > > CTX_TIMESTAMP
> > > * @mem_copy: container of objects used for memory copy
> > > (optional)
> > > *
> > > * Used to initialize struct xe_spin spinner behavior.
> > > @@ -62,6 +63,7 @@ struct xe_spin_opts {
> > > bool multi_queue_switch_on_wait;
> > > uint32_t ctx_ticks;
> > > bool write_timestamp;
> > > + bool use_queue_timestamp;
> > > struct xe_spin_mem_copy *mem_copy;
> > > };
> > >
> >
* Re: [PATCH 2/3] lib/xe/xe_spin: Add option for QUEUE_TIMESTAMP
2026-05-07 19:41 ` Umesh Nerlige Ramappa
2026-05-07 20:05 ` Summers, Stuart
@ 2026-05-07 20:07 ` Niranjana Vishwanathapura
1 sibling, 0 replies; 13+ messages in thread
From: Niranjana Vishwanathapura @ 2026-05-07 20:07 UTC (permalink / raw)
To: Umesh Nerlige Ramappa; +Cc: Summers, Stuart, igt-dev@lists.freedesktop.org
On Thu, May 07, 2026 at 12:41:37PM -0700, Umesh Nerlige Ramappa wrote:
>On Thu, May 07, 2026 at 07:23:50PM +0000, Summers, Stuart wrote:
>>On Wed, 2026-05-06 at 11:31 -0700, Niranjana Vishwanathapura wrote:
>>>From: gta <gta@DUT4637NVLP.fm.intel.com>
>>>
>>>In multi-queue case, CTX_TIMESTAMP register does not
>>>provide the running time of individual queues of a group.
>>>Provide the option for spinner to read timestamp from the
>>>QUEUE_TIMESTAMP registers instead which is useful in the
>>>multi-queue scenarios.
>>>
>>>Signed-off-by: Niranjana Vishwanathapura
>>><niranjana.vishwanathapura@intel.com>
>>>---
>>> lib/xe/xe_spin.c | 9 +++++----
>>> lib/xe/xe_spin.h | 2 ++
>>> 2 files changed, 7 insertions(+), 4 deletions(-)
>>>
>>>diff --git a/lib/xe/xe_spin.c b/lib/xe/xe_spin.c
>>>index 14952ca90e..8743107898 100644
>>>--- a/lib/xe/xe_spin.c
>>>+++ b/lib/xe/xe_spin.c
>>>@@ -25,7 +25,8 @@
>>> #define MI_LRI_CS_MMIO (1 << 19)
>>> #define MI_LRR_DST_CS_MMIO (1 << 19)
>>> #define MI_LRR_SRC_CS_MMIO (1 << 18)
>>>-#define CTX_TIMESTAMP 0x3a8
>>>+#define CTX_TIMESTAMP 0x3a8
>>>+#define QUEUE_TIMESTAMP 0x4c0
>>> #define CS_GPR(x) (0x600 + 8 * (x))
>>>
>>> enum { START_TS, NOW_TS };
>>>@@ -67,7 +68,7 @@ void xe_spin_init(struct xe_spin *spin, struct
>>>xe_spin_opts *opts)
>>> spin->batch[b++] = CS_GPR(START_TS) + 4;
>>> spin->batch[b++] = 0;
>>> spin->batch[b++] = MI_LOAD_REGISTER_REG |
>>>MI_LRR_DST_CS_MMIO | MI_LRR_SRC_CS_MMIO;
>>>- spin->batch[b++] = CTX_TIMESTAMP;
>>>+ spin->batch[b++] = opts->use_queue_timestamp ?
>>>QUEUE_TIMESTAMP : CTX_TIMESTAMP;
>>
>>I see in [1] we are programming this register in the ring also. Will
>>this cause issues if we overwrite this here? I see we do the same thing
>>for the context timestamp, so probably no issue here. But it would be
>>nice to document somehow the expected interaction between the kernel
>>ring programming and the user batch programming if any.
>>
>>[1]:
>>https://patchwork.freedesktop.org/patch/723502/?series=164654&rev=5
>
>The MI_LOAD_REGISTER_REG source is TIMESTAMP register and destination
>is the GPR reg, so we should be fine. If we were writing to the
>TIMESTAMP registers, then that's going to mess up counts.
>
Yes. Note that we program the batch buffer only to read from this
register here. The KMD whitelists these registers with read-only
permission. We are not changing any of that, only using a different
timestamp register.
Niranjana
>Regards,
>Umesh
>
>
>>
>>> spin->batch[b++] = CS_GPR(START_TS);
>>> }
>>>
>>>@@ -83,7 +84,7 @@ void xe_spin_init(struct xe_spin *spin, struct
>>>xe_spin_opts *opts)
>>>
>>> if (opts->write_timestamp) {
>>> spin->batch[b++] = MI_LOAD_REGISTER_REG |
>>>MI_LRR_DST_CS_MMIO | MI_LRR_SRC_CS_MMIO;
>>>- spin->batch[b++] = CTX_TIMESTAMP;
>>>+ spin->batch[b++] = opts->use_queue_timestamp ?
>>>QUEUE_TIMESTAMP : CTX_TIMESTAMP;
>>> spin->batch[b++] = CS_GPR(NOW_TS);
>>>
>>> spin->batch[b++] = MI_STORE_REGISTER_MEM_GEN8 |
>>>MI_SRM_CS_MMIO;
>>>@@ -97,7 +98,7 @@ void xe_spin_init(struct xe_spin *spin, struct
>>>xe_spin_opts *opts)
>>> spin->batch[b++] = CS_GPR(NOW_TS) + 4;
>>> spin->batch[b++] = 0;
>>> spin->batch[b++] = MI_LOAD_REGISTER_REG |
>>>MI_LRR_DST_CS_MMIO | MI_LRR_SRC_CS_MMIO;
>>>- spin->batch[b++] = CTX_TIMESTAMP;
>>>+ spin->batch[b++] = opts->use_queue_timestamp ?
>>>QUEUE_TIMESTAMP : CTX_TIMESTAMP;
>>> spin->batch[b++] = CS_GPR(NOW_TS);
>>>
>>> /* delta = now - start; inverted to match COND_BBE */
>>>diff --git a/lib/xe/xe_spin.h b/lib/xe/xe_spin.h
>>>index db0febd8ab..75a6ada420 100644
>>>--- a/lib/xe/xe_spin.h
>>>+++ b/lib/xe/xe_spin.h
>>>@@ -51,6 +51,7 @@ struct xe_spin_mem_copy {
>>> * @multi_queue_switch_on_wait: Add a SEMAPHORE_WAIT multi-queue
>>>switch point
>>> * and have the queue switch only happen if waiting on the
>>>semaphore.
>>> * @ctx_ticks: number of ticks after which spinner is stopped,
>>>applied if > 0
>>>+ * @use_queue_timestamp: Use QUEUE_TIMESTAMP register instead of
>>>CTX_TIMESTAMP
>>> * @mem_copy: container of objects used for memory copy (optional)
>>> *
>>> * Used to initialize struct xe_spin spinner behavior.
>>>@@ -62,6 +63,7 @@ struct xe_spin_opts {
>>> bool multi_queue_switch_on_wait;
>>> uint32_t ctx_ticks;
>>> bool write_timestamp;
>>>+ bool use_queue_timestamp;
>>> struct xe_spin_mem_copy *mem_copy;
>>> };
>>>
>>
* Re: [PATCH 1/3] lib/xe/xe_spin: Enhance multi-queue switch option
2026-05-07 19:15 ` Summers, Stuart
@ 2026-05-07 20:11 ` Niranjana Vishwanathapura
0 siblings, 0 replies; 13+ messages in thread
From: Niranjana Vishwanathapura @ 2026-05-07 20:11 UTC (permalink / raw)
To: Summers, Stuart; +Cc: igt-dev@lists.freedesktop.org
On Thu, May 07, 2026 at 12:15:10PM -0700, Summers, Stuart wrote:
>On Wed, 2026-05-06 at 11:31 -0700, Niranjana Vishwanathapura wrote:
>> From: gta <gta@DUT4637NVLP.fm.intel.com>
>>
>> Allow user to control whether the multi-queue switch should
>> happen after parsing the MI_SEMAPHORE_WAIT instruction or
>> only if the instruction is unsuccessful in getting the
>> semaphore, thus having to wait.
>>
>> Ensure that in xe_exec_multi_queue@priority test, the multi-queue
>> switch happens only when the spinner has to wait for the semaphore.
>>
>> Signed-off-by: Niranjana Vishwanathapura
>> <niranjana.vishwanathapura@intel.com>
>> ---
>> lib/xe/xe_spin.c | 13 +++++++------
>> lib/xe/xe_spin.h | 6 +++++-
>> tests/intel/xe_exec_multi_queue.c | 2 +-
>> 3 files changed, 13 insertions(+), 8 deletions(-)
>>
>> diff --git a/lib/xe/xe_spin.c b/lib/xe/xe_spin.c
>> index 4dc110c222..14952ca90e 100644
>> --- a/lib/xe/xe_spin.c
>> +++ b/lib/xe/xe_spin.c
>> @@ -179,14 +179,15 @@ void xe_spin_init(struct xe_spin *spin, struct
>> xe_spin_opts *opts)
>> * Insert a MI_SEMAPHORE_WAIT_CMD instruction with condition
>> controlled
>> * by the user which acts as a queue switch point in multi
>> queue mode.
>> */
>> - if (opts->multi_queue_switch) {
>> + if (opts->multi_queue_switch || opts-
>> >multi_queue_switch_on_wait) {
>
>I don't have any problem with the way you're doing this. I can see we
>might want to use this without multi queue though at some point, so it
>might be nice to have the switch_on_wait as an option on top of
>multi_queue_switch - basically rename multi_queue_switch to
>semaphore_wait or something and then add the multi_queue_switch_on_wait
>as an additional option. But that also means we have to keep track of a
>separate parameter in xe_spin_init_opts() and since multi queue is the
>only user right now, it might be overkill.
>
Note that we use the MI_SEMAPHORE_QUEUE_SWITCH_MODE bit to indicate
whether the multi-queue switch should happen on wait or after parsing
the command. It is a multi-queue specific bit, and I don't see that
option being available in the non-multi-queue case. Yeah, we can always
revisit if we get that option for the non-multi-queue case as well.
>Otherwise LGTM. Please apply the author related changes Xin had
>requested before merging:
>Reviewed-by: Stuart Summers <stuart.summers@intel.com>
Sure, Thanks,
Niranjana
>
>> uint64_t wait_addr = opts->addr + offsetof(struct
>> xe_spin, wait_cond);
>> + uint32_t sema_cmd = MI_SEMAPHORE_WAIT_CMD |
>> MI_SEMAPHORE_POLL |
>> + MI_SEMAPHORE_SAD_EQ_SDD | 3;
>>
>> - spin->batch[b++] = MI_SEMAPHORE_WAIT_CMD |
>> - MI_SEMAPHORE_POLL |
>> - MI_SEMAPHORE_QUEUE_SWITCH_MODE |
>> - MI_SEMAPHORE_SAD_EQ_SDD |
>> - 3;
>> + if (opts->multi_queue_switch_on_wait)
>> + sema_cmd |= MI_SEMAPHORE_QUEUE_SWITCH_MODE;
>> +
>> + spin->batch[b++] = sema_cmd;
>> spin->batch[b++] = 0;
>> spin->batch[b++] = wait_addr;
>> spin->batch[b++] = wait_addr >> 32;
>> diff --git a/lib/xe/xe_spin.h b/lib/xe/xe_spin.h
>> index 31154997b9..db0febd8ab 100644
>> --- a/lib/xe/xe_spin.h
>> +++ b/lib/xe/xe_spin.h
>> @@ -46,7 +46,10 @@ struct xe_spin_mem_copy {
>> * struct xe_spin_opts
>> * @addr: offset of spinner within vm
>> * @preempt: allow spinner to be preempted or not
>> - * @multi_queue_switch: Add a multi-queue switch point
>> + * @multi_queue_switch: Add a SEMAPHORE_WAIT multi-queue switch
>> point
>> + * and have the queue switch happen after command is parsed.
>> + * @multi_queue_switch_on_wait: Add a SEMAPHORE_WAIT multi-queue
>> switch point
>> + * and have the queue switch only happen if waiting on the
>> semaphore.
>> * @ctx_ticks: number of ticks after which spinner is stopped,
>> applied if > 0
>> * @mem_copy: container of objects used for memory copy (optional)
>> *
>> @@ -56,6 +59,7 @@ struct xe_spin_opts {
>> uint64_t addr;
>> bool preempt;
>> bool multi_queue_switch;
>> + bool multi_queue_switch_on_wait;
>> uint32_t ctx_ticks;
>> bool write_timestamp;
>> struct xe_spin_mem_copy *mem_copy;
>> diff --git a/tests/intel/xe_exec_multi_queue.c
>> b/tests/intel/xe_exec_multi_queue.c
>> index 0479554bb6..6f90c3e4e8 100644
>> --- a/tests/intel/xe_exec_multi_queue.c
>> +++ b/tests/intel/xe_exec_multi_queue.c
>> @@ -459,7 +459,7 @@ __test_priority(int fd, struct
>> drm_xe_engine_class_instance *eci,
>> for (i = 0; i < num_queues; i++) {
>> uint64_t spin_addr = addr + i * sizeof(struct
>> xe_spin);
>>
>> - xe_spin_init_opts(spin[i], .addr = spin_addr,
>> .multi_queue_switch = true,
>> + xe_spin_init_opts(spin[i], .addr = spin_addr,
>> .multi_queue_switch_on_wait = true,
>> .write_timestamp = true);
>> /*
>> * Pre-set all spinners to preempt-wait so each
>> queue, once
>
* Re: [PATCH 3/3] tests/intel/xe_exec_reset: Add multi-queue long spin tests
2026-05-07 19:37 ` Summers, Stuart
@ 2026-05-07 20:11 ` Niranjana Vishwanathapura
0 siblings, 0 replies; 13+ messages in thread
From: Niranjana Vishwanathapura @ 2026-05-07 20:11 UTC (permalink / raw)
To: Summers, Stuart; +Cc: igt-dev@lists.freedesktop.org
On Thu, May 07, 2026 at 12:37:23PM -0700, Summers, Stuart wrote:
>On Wed, 2026-05-06 at 11:31 -0700, Niranjana Vishwanathapura wrote:
>> From: gta <gta@DUT4637NVLP.fm.intel.com>
>>
>> Add the following multi-queue tests and update xe_legacy_test_mode()
>> function to use queue_timestamp and multi_queue_switch spinner
>> options
>> for the multi-queue scenarios.
>>
>> multi-queue-long-spin-many-preempt
>> multi-queue-long-spin-reuse-many-preempt
>>
>> Signed-off-by: Niranjana Vishwanathapura
>> <niranjana.vishwanathapura@intel.com>
>> ---
>> lib/xe/xe_legacy.c | 2 ++
>> tests/intel/xe_exec_reset.c | 17 +++++++++++++++++
>> 2 files changed, 19 insertions(+)
>>
>> diff --git a/lib/xe/xe_legacy.c b/lib/xe/xe_legacy.c
>> index f9bd5bcb61..b7054756cd 100644
>> --- a/lib/xe/xe_legacy.c
>> +++ b/lib/xe/xe_legacy.c
>> @@ -67,6 +67,8 @@ xe_legacy_test_mode(int fd, struct
>> drm_xe_engine_class_instance *eci,
>> } *data;
>> struct xe_spin_opts spin_opts = {
>> .preempt = flags & PREEMPT,
>> + .multi_queue_switch = flags & MULTI_QUEUE,
>> + .use_queue_timestamp = flags & MULTI_QUEUE,
>> #define THREE_SEC (3 * 1000000000ull)
>> .ctx_ticks = flags & LONG_SPIN ?
>> xe_spin_nsec_to_ticks(fd, 0, THREE_SEC) : 0,
>> diff --git a/tests/intel/xe_exec_reset.c
>> b/tests/intel/xe_exec_reset.c
>> index a3cf290abf..f7ae94a424 100644
>> --- a/tests/intel/xe_exec_reset.c
>> +++ b/tests/intel/xe_exec_reset.c
>> @@ -1179,6 +1179,23 @@ int igt_main()
>> MULTI_QUEUE);
>> }
>>
>> + igt_subtest("multi-queue-long-spin-many-preempt")
>
>These need to be documented now. The existing non-multi queue tests
>talk about preemption being the deciding factor in ensuring we aren't
>getting engine resets. So here it would be nice to call out lite restore
>explicitly.
>
>Other than documentation the changes look good.
>
Sure, will add.
Niranjana
>Thanks,
>Stuart
>
>> + xe_for_each_multi_queue_engine(fd, hwe) {
>> + xe_legacy_test_mode(fd, hwe, 4, 8,
>> + LONG_SPIN | MULTI_QUEUE,
>> + LEGACY_MODE_ADDR, false);
>> + break;
>> + }
>> +
>> + igt_subtest("multi-queue-long-spin-reuse-many-preempt")
>> + xe_for_each_multi_queue_engine(fd, hwe) {
>> + xe_legacy_test_mode(fd, hwe, 4, 8,
>> + LONG_SPIN | MULTI_QUEUE |
>> + LONG_SPIN_REUSE_QUEUE,
>> + LEGACY_MODE_ADDR, false);
>> + break;
>> + }
>> +
>> igt_fixture()
>> drm_close_driver(fd);
>> }
>
end of thread, other threads:[~2026-05-07 20:12 UTC | newest]
Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-05-06 18:31 [PATCH 0/3] tests/intel/xe_exec_reset: Validate multi-queue timestamping Niranjana Vishwanathapura
2026-05-06 18:31 ` [PATCH 1/3] lib/xe/xe_spin: Enhance multi-queue switch option Niranjana Vishwanathapura
2026-05-07 0:50 ` Wang, X
2026-05-07 19:15 ` Summers, Stuart
2026-05-07 20:11 ` Niranjana Vishwanathapura
2026-05-06 18:31 ` [PATCH 2/3] lib/xe/xe_spin: Add option for QUEUE_TIMESTAMP Niranjana Vishwanathapura
2026-05-07 19:23 ` Summers, Stuart
2026-05-07 19:41 ` Umesh Nerlige Ramappa
2026-05-07 20:05 ` Summers, Stuart
2026-05-07 20:07 ` Niranjana Vishwanathapura
2026-05-06 18:31 ` [PATCH 3/3] tests/intel/xe_exec_reset: Add multi-queue long spin tests Niranjana Vishwanathapura
2026-05-07 19:37 ` Summers, Stuart
2026-05-07 20:11 ` Niranjana Vishwanathapura