Igt-dev Archive on lore.kernel.org
* [igt-dev] [PATCH v3 0/2] RFC: drm-uapi/xe: add exec_queue_id member to drm_xe_wait_user_fence structure
@ 2023-12-06 11:47 Bommu Krishnaiah
  2023-12-06 11:47 ` [igt-dev] [PATCH v3 1/2] " Bommu Krishnaiah
                   ` (2 more replies)
  0 siblings, 3 replies; 4+ messages in thread
From: Bommu Krishnaiah @ 2023-12-06 11:47 UTC (permalink / raw)
  To: igt-dev; +Cc: Bommu Krishnaiah

Remove the num_engines/instances members from the drm_xe_wait_user_fence
structure and add an exec_queue_id member.

Add an invalid-exec_queue-wait subtest to exercise the behaviour when an
exec_queue reset happens.

About the test: the GPU mapping (vm_bind) for the object is skipped, so that
an exec_queue reset happens and xe_wait_ufence returns EIO instead of ETIME.

The exec_queue reset is observed as expected and xe_wait_user_fence_ioctl
returns EIO.

Test result:
root@DUT7075PVC:/home/gta# LD_LIBRARY_PATH=/home/gta/ ./xe_waitfence --r invalid-exec_queue-wait
IGT-Version: 1.28-g3c0162fc4 (x86_64) (Linux: 6.6.0-rc3-xe x86_64)
Opened device: /dev/dri/card0
Starting subtest: invalid-exec_queue-wait
Subtest invalid-exec_queue-wait: SUCCESS (0.993s)

dmesg logs:
[  807.680378] [IGT] xe_waitfence: executing
[  807.699796] [drm:drm_stub_open [drm]]
[  807.704536] xe 0000:51:00.0: [drm:drm_open_helper [drm]] comm="xe_waitfence", pid=2952, minor=0
[  807.715155] xe 0000:51:00.0: [drm:drm_ioctl [drm]] comm="xe_waitfence" pid=2952, dev=0xe200, auth=1, DRM_IOCTL_VERSION
[  807.727328] xe 0000:51:00.0: [drm:drm_ioctl [drm]] comm="xe_waitfence" pid=2952, dev=0xe200, auth=1, DRM_IOCTL_VERSION
[  807.739580] xe 0000:51:00.0: [drm:drm_ioctl [drm]] comm="xe_waitfence" pid=2952, dev=0xe200, auth=1, XE_DEVICE_QUERY
[  807.751518] xe 0000:51:00.0: [drm:drm_ioctl [drm]] comm="xe_waitfence" pid=2952, dev=0xe200, auth=1, XE_DEVICE_QUERY
[  807.763550] xe 0000:51:00.0: [drm:drm_ioctl [drm]] comm="xe_waitfence" pid=2952, dev=0xe200, auth=1, XE_DEVICE_QUERY
[  807.775525] xe 0000:51:00.0: [drm:drm_ioctl [drm]] comm="xe_waitfence" pid=2952, dev=0xe200, auth=1, XE_DEVICE_QUERY
[  807.787556] xe 0000:51:00.0: [drm:drm_ioctl [drm]] comm="xe_waitfence" pid=2952, dev=0xe200, auth=1, XE_DEVICE_QUERY
[  807.799494] xe 0000:51:00.0: [drm:drm_ioctl [drm]] comm="xe_waitfence" pid=2952, dev=0xe200, auth=1, XE_DEVICE_QUERY
[  807.811531] xe 0000:51:00.0: [drm:drm_ioctl [drm]] comm="xe_waitfence" pid=2952, dev=0xe200, auth=1, XE_DEVICE_QUERY
[  807.823476] xe 0000:51:00.0: [drm:drm_ioctl [drm]] comm="xe_waitfence" pid=2952, dev=0xe200, auth=1, XE_DEVICE_QUERY
[  807.835577] xe 0000:51:00.0: [drm:drm_ioctl [drm]] comm="xe_waitfence" pid=2952, dev=0xe200, auth=1, DRM_IOCTL_VERSION
[  807.847921] [IGT] xe_waitfence: starting subtest invalid-exec_queue-wait
[  807.855528] xe 0000:51:00.0: [drm:drm_ioctl [drm]] comm="xe_waitfence" pid=2952, dev=0xe200, auth=1, XE_VM_CREATE
[  807.891346] xe 0000:51:00.0: [drm:xe_reg_sr_apply_mmio [xe]] GT0: Applying GT save-restore MMIOs
[  807.901602] xe 0000:51:00.0: [drm:xe_reg_sr_apply_mmio [xe]] GT0: REG[0x9424] = 0x7ffffffc
-----------------
-----------------
[  808.560967] xe REG[0x4500-0x45ff]: deny rw access
[  808.566292] xe REG[0x1e3a8-0x1e3af]: allow read access
[  808.572161] xe 0000:51:00.0: [drm:xe_reg_sr_apply_mmio [xe]] GT0: Applying ccs3 save-restore MMIOs
[  808.582462] xe 0000:51:00.0: [drm:xe_reg_sr_apply_mmio [xe]] GT0: REG[0x260c4] = 0x3f7e0104
[  808.592096] xe 0000:51:00.0: [drm:xe_reg_sr_apply_whitelist [xe]] Whitelisting ccs3 registers
[  808.601962] xe REG[0x4400-0x45ff]: deny rw access
[  808.607281] xe REG[0x4500-0x45ff]: deny rw access
[  808.612608] xe REG[0x263a8-0x263af]: allow read access
[  808.618477] xe 0000:51:00.0: [drm] GT0: resumed
[  808.626283] xe 0000:51:00.0: [drm:drm_ioctl [drm]] comm="xe_waitfence" pid=2952, dev=0xe200, auth=1, XE_EXEC_QUEUE_CREATE
[  808.638765] krishna xe_exec_queue_create_ioctl
[  808.645592] krishna args->exec_queue_id = 1
[  808.650328] xe 0000:51:00.0: [drm:drm_ioctl [drm]] comm="xe_waitfence" pid=2952, dev=0xe200, auth=1, XE_GEM_CREATE
[  808.662621] xe 0000:51:00.0: [drm:xe_migrate_clear [xe]] Pass 0, size: 262144
[  808.672889] xe 0000:51:00.0: [drm:drm_ioctl [drm]] comm="xe_waitfence" pid=2952, dev=0xe200, auth=1, XE_GEM_MMAP_OFFSET
[  808.685733] xe 0000:51:00.0: [drm:drm_ioctl [drm]] comm="xe_waitfence" pid=2952, dev=0xe200, auth=1, XE_EXEC
[  808.696900] krishna args->exec_queue_id = 1
[  808.702700] xe 0000:51:00.0: [drm:drm_ioctl [drm]] comm="xe_waitfence" pid=2952, dev=0xe200, auth=1, XE_WAIT_USER_FENCE
[  808.704620] xe 0000:51:00.0: [drm:pf_queue_work_func [xe]]
                ASID: 1048575
                VFID: 0
                PDATA: 0x00a3
                Faulted Address: 0x00000000001a0000
                FaultType: 0
                AccessType: 0
                FaultLevel: 4
                EngineClass: 3
                EngineInstance: 0
[  808.750685] xe 0000:51:00.0: [drm:pf_queue_work_func [xe]] Fault response: Unsuccessful -22
[  808.760519] xe 0000:51:00.0: [drm:xe_guc_exec_queue_memory_cat_error_handler [xe]] Engine memory cat error: guc_id=2
[  808.773237] xe 0000:51:00.0: [drm] exec gueue reset detected
[  808.773965] xe 0000:51:00.0: [drm] Timedout job: seqno=4294967169, guc_id=2, flags=0x8
[  808.779632] xe 0000:51:00.0: [drm:xe_wait_user_fence_ioctl [xe]] Ioctl argument check failed at drivers/gpu/drm/xe/xe_wait_user_fence.c:174: err < 0
[  808.789655] xe 0000:51:00.0: [drm] Xe device coredump has been created
[  808.803796] xe 0000:51:00.0: [drm:drm_ioctl [drm]] comm="xe_waitfence", pid=2952, ret=-5
[  808.811133] xe 0000:51:00.0: [drm] Check your /sys/class/drm/card0/device/devcoredump/data
[  808.811220] xe 0000:51:00.0: [drm:drm_ioctl [drm]] comm="xe_waitfence" pid=2952, dev=0xe200, auth=1, XE_EXEC_QUEUE_DESTROY
[  808.823605] xe 0000:51:00.0: [drm] Engine reset: guc_id=2
[  808.829862] xe 0000:51:00.0: [drm:drm_ioctl [drm]] comm="xe_waitfence" pid=2952, dev=0xe200, auth=1, DRM_IOCTL_GEM_CLOSE
[  808.843312] xe 0000:51:00.0: [drm:guc_exec_queue_timedout_job [xe]] Timedout signaled job: seqno=4294967169, guc_id=2, flags=0x9
[  808.848707] [IGT] xe_waitfence: finished subtest invalid-exec_queue-wait, SUCCESS
[  808.882255] xe 0000:51:00.0: [drm:drm_ioctl [drm]] comm="xe_waitfence" pid=2952, dev=0xe200, auth=1, DRM_IOCTL_VERSION
[  808.894404] xe 0000:51:00.0: [drm:drm_file_free.part.0 [drm]] comm="xe_waitfence", pid=2952, dev=0xe200, open_count=1
[  808.907374] xe 0000:51:00.0: [drm:drm_lastclose [drm]]
[  808.913594] xe 0000:51:00.0: [drm:drm_lastclose [drm]] driver lastclose completed

The following tests still need to be validated:
xe_exec_balancer.c
xe_exec_compute_mode.c
xe_exec_fault_mode.c
xe_exec_reset.c
xe_exec_threads.c
xe_waitfence.c


Bommu Krishnaiah (2):
  drm-uapi/xe: add exec_queue_id member to drm_xe_wait_user_fence
    structure
  drm-uapi/xe: kill xe_wait_user_fence_ioctl when exec_queue reset
    happen

 include/drm-uapi/xe_drm.h          |  16 ++--
 lib/xe/xe_ioctl.c                  |  42 +--------
 lib/xe/xe_ioctl.h                  |   6 +-
 tests/intel/xe_evict.c             |   2 +-
 tests/intel/xe_exec_balancer.c     |   6 +-
 tests/intel/xe_exec_compute_mode.c |   6 +-
 tests/intel/xe_exec_fault_mode.c   |   6 +-
 tests/intel/xe_exec_reset.c        |   2 +-
 tests/intel/xe_exec_threads.c      |   6 +-
 tests/intel/xe_waitfence.c         | 135 ++++++++++++++++++++++-------
 10 files changed, 131 insertions(+), 96 deletions(-)

-- 
2.25.1


* [igt-dev] [PATCH v3 1/2] drm-uapi/xe: add exec_queue_id member to drm_xe_wait_user_fence structure
  2023-12-06 11:47 [igt-dev] [PATCH v3 0/2] RFC: drm-uapi/xe: add exec_queue_id member to drm_xe_wait_user_fence structure Bommu Krishnaiah
@ 2023-12-06 11:47 ` Bommu Krishnaiah
  2023-12-06 11:47 ` [igt-dev] [PATCH v3 2/2] drm-uapi/xe: kill xe_wait_user_fence_ioctl when exec_queue reset happen Bommu Krishnaiah
  2023-12-06 12:27 ` [igt-dev] ✗ Fi.CI.BUILD: failure for RFC: drm-uapi/xe: add exec_queue_id member to drm_xe_wait_user_fence structure (rev3) Patchwork
  2 siblings, 0 replies; 4+ messages in thread
From: Bommu Krishnaiah @ 2023-12-06 11:47 UTC (permalink / raw)
  To: igt-dev; +Cc: Bommu Krishnaiah, Rodrigo Vivi

Remove the num_engines/instances members from the drm_xe_wait_user_fence
structure and add an exec_queue_id member.

Right now this is only checking if the engine list is sane and nothing
else. In the end every operation with this IOCTL is a soft check.
So, let's formalize that and only use this IOCTL to wait on the fence.

The exec_queue_id member helps user space get the proper error code from the
kernel when an exec_queue reset occurs.

Signed-off-by: Bommu Krishnaiah <krishnaiah.bommu@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Francois Dugast <francois.dugast@intel.com>
---
 include/drm-uapi/xe_drm.h          | 16 +++---
 lib/xe/xe_ioctl.c                  | 42 ++--------------
 lib/xe/xe_ioctl.h                  |  6 +--
 tests/intel/xe_evict.c             |  2 +-
 tests/intel/xe_exec_balancer.c     |  6 +--
 tests/intel/xe_exec_compute_mode.c |  6 +--
 tests/intel/xe_exec_fault_mode.c   |  6 +--
 tests/intel/xe_exec_reset.c        |  2 +-
 tests/intel/xe_exec_threads.c      |  6 +--
 tests/intel/xe_waitfence.c         | 81 ++++++++++++++----------------
 10 files changed, 64 insertions(+), 109 deletions(-)

diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index 590f7b7af..ceaca3991 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -1024,8 +1024,7 @@ struct drm_xe_wait_user_fence {
 	/** @op: wait operation (type of comparison) */
 	__u16 op;
 
-#define DRM_XE_UFENCE_WAIT_FLAG_SOFT_OP	(1 << 0)	/* e.g. Wait on VM bind */
-#define DRM_XE_UFENCE_WAIT_FLAG_ABSTIME	(1 << 1)
+#define DRM_XE_UFENCE_WAIT_FLAG_ABSTIME	(1 << 0)
 	/** @flags: wait flags */
 	__u16 flags;
 
@@ -1059,16 +1058,13 @@ struct drm_xe_wait_user_fence {
 	__s64 timeout;
 
 	/**
-	 * @num_engines: number of engine instances to wait on, must be zero
-	 * when DRM_XE_UFENCE_WAIT_FLAG_SOFT_OP set
+	 * @exec_queue_id: exec_queue_id returned from xe_exec_queue_create_ioctl
+	 * and is used to query the exec_queue reset status
 	 */
-	__u64 num_engines;
+	__u32 exec_queue_id;
 
-	/**
-	 * @instances: user pointer to array of drm_xe_engine_class_instance to
-	 * wait on, must be NULL when DRM_XE_UFENCE_WAIT_FLAG_SOFT_OP set
-	 */
-	__u64 instances;
+	/** @pad2: MBZ */
+	__u32 pad2;
 
 	/** @reserved: Reserved */
 	__u64 reserved[2];
diff --git a/lib/xe/xe_ioctl.c b/lib/xe/xe_ioctl.c
index c91bf25c4..87e2db5b4 100644
--- a/lib/xe/xe_ioctl.c
+++ b/lib/xe/xe_ioctl.c
@@ -458,18 +458,16 @@ void xe_exec_wait(int fd, uint32_t exec_queue, uint64_t addr)
 }
 
 int64_t xe_wait_ufence(int fd, uint64_t *addr, uint64_t value,
-		       struct drm_xe_engine_class_instance *eci,
-		       int64_t timeout)
+		       uint32_t exec_queue, int64_t timeout)
 {
 	struct drm_xe_wait_user_fence wait = {
 		.addr = to_user_pointer(addr),
 		.op = DRM_XE_UFENCE_WAIT_OP_EQ,
-		.flags = !eci ? DRM_XE_UFENCE_WAIT_FLAG_SOFT_OP : 0,
+		.flags = 0,
 		.value = value,
 		.mask = DRM_XE_UFENCE_WAIT_MASK_U64,
 		.timeout = timeout,
-		.num_engines = eci ? 1 :0,
-		.instances = eci ? to_user_pointer(eci) : 0,
+		.exec_queue_id = exec_queue,
 	};
 
 	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_WAIT_USER_FENCE, &wait), 0);
@@ -477,40 +475,6 @@ int64_t xe_wait_ufence(int fd, uint64_t *addr, uint64_t value,
 	return wait.timeout;
 }
 
-/**
- * xe_wait_ufence_abstime:
- * @fd: xe device fd
- * @addr: address of value to compare
- * @value: expected value (equal) in @address
- * @eci: engine class instance
- * @timeout: absolute time when wait expire
- *
- * Function compares @value with memory pointed by @addr until they are equal.
- *
- * Returns elapsed time in nanoseconds if user fence was signalled.
- */
-int64_t xe_wait_ufence_abstime(int fd, uint64_t *addr, uint64_t value,
-			       struct drm_xe_engine_class_instance *eci,
-			       int64_t timeout)
-{
-	struct drm_xe_wait_user_fence wait = {
-		.addr = to_user_pointer(addr),
-		.op = DRM_XE_UFENCE_WAIT_OP_EQ,
-		.flags = !eci ? DRM_XE_UFENCE_WAIT_FLAG_SOFT_OP | DRM_XE_UFENCE_WAIT_FLAG_ABSTIME : 0,
-		.value = value,
-		.mask = DRM_XE_UFENCE_WAIT_MASK_U64,
-		.timeout = timeout,
-		.num_engines = eci ? 1 : 0,
-		.instances = eci ? to_user_pointer(eci) : 0,
-	};
-	struct timespec ts;
-
-	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_WAIT_USER_FENCE, &wait), 0);
-	igt_assert_eq(clock_gettime(CLOCK_MONOTONIC, &ts), 0);
-
-	return ts.tv_sec * 1e9 + ts.tv_nsec;
-}
-
 void xe_force_gt_reset(int fd, int gt)
 {
 	char reset_string[128];
diff --git a/lib/xe/xe_ioctl.h b/lib/xe/xe_ioctl.h
index 32b34b15a..186544a34 100644
--- a/lib/xe/xe_ioctl.h
+++ b/lib/xe/xe_ioctl.h
@@ -89,11 +89,9 @@ void xe_exec_sync(int fd, uint32_t exec_queue, uint64_t addr,
 		  struct drm_xe_sync *sync, uint32_t num_syncs);
 void xe_exec_wait(int fd, uint32_t exec_queue, uint64_t addr);
 int64_t xe_wait_ufence(int fd, uint64_t *addr, uint64_t value,
-		       struct drm_xe_engine_class_instance *eci,
-		       int64_t timeout);
+		       uint32_t exec_queue,int64_t timeout);
 int64_t xe_wait_ufence_abstime(int fd, uint64_t *addr, uint64_t value,
-			       struct drm_xe_engine_class_instance *eci,
-			       int64_t timeout);
+			       uint32_t exec_queue, int64_t timeout);
 void xe_force_gt_reset(int fd, int gt);
 
 #endif /* XE_IOCTL_H */
diff --git a/tests/intel/xe_evict.c b/tests/intel/xe_evict.c
index 89dc46fae..d15598c54 100644
--- a/tests/intel/xe_evict.c
+++ b/tests/intel/xe_evict.c
@@ -352,7 +352,7 @@ test_evict_cm(int fd, struct drm_xe_engine_class_instance *eci,
 		data = xe_bo_map(fd, __bo,
 				 ALIGN(sizeof(*data) * n_execs, 0x1000));
 		xe_wait_ufence(fd, &data[i].exec_sync, USER_FENCE_VALUE,
-			       NULL, TWENTY_SEC);
+			       exec_queues[i], TWENTY_SEC);
 		igt_assert_eq(data[i].data, 0xc0ffee);
 	}
 	munmap(data, ALIGN(sizeof(*data) * n_execs, 0x1000));
diff --git a/tests/intel/xe_exec_balancer.c b/tests/intel/xe_exec_balancer.c
index 79ff65e89..283370c45 100644
--- a/tests/intel/xe_exec_balancer.c
+++ b/tests/intel/xe_exec_balancer.c
@@ -514,7 +514,7 @@ test_cm(int fd, int gt, int class, int n_exec_queues, int n_execs,
 
 		if (flags & REBIND && i + 1 != n_execs) {
 			xe_wait_ufence(fd, &data[i].exec_sync, USER_FENCE_VALUE,
-				       NULL, ONE_SEC);
+				        exec_queues[e], ONE_SEC);
 			xe_vm_unbind_async(fd, vm, 0, 0, addr, bo_size, NULL,
 					   0);
 
@@ -542,7 +542,7 @@ test_cm(int fd, int gt, int class, int n_exec_queues, int n_execs,
 				 * an invalidate.
 				 */
 				xe_wait_ufence(fd, &data[i].exec_sync,
-					       USER_FENCE_VALUE, NULL, ONE_SEC);
+					       USER_FENCE_VALUE, exec_queues[e], ONE_SEC);
 				igt_assert_eq(data[i].data, 0xc0ffee);
 			} else if (i * 2 != n_execs) {
 				/*
@@ -571,7 +571,7 @@ test_cm(int fd, int gt, int class, int n_exec_queues, int n_execs,
 
 	j = flags & INVALIDATE && n_execs ? n_execs - 1 : 0;
 	for (i = j; i < n_execs; i++)
-		xe_wait_ufence(fd, &data[i].exec_sync, USER_FENCE_VALUE, NULL,
+		xe_wait_ufence(fd, &data[i].exec_sync, USER_FENCE_VALUE, exec_queues[i],
 			       ONE_SEC);
 
 	/* Wait for all execs to complete */
diff --git a/tests/intel/xe_exec_compute_mode.c b/tests/intel/xe_exec_compute_mode.c
index 7d3004d65..5efe6bd00 100644
--- a/tests/intel/xe_exec_compute_mode.c
+++ b/tests/intel/xe_exec_compute_mode.c
@@ -198,7 +198,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 
 		if (flags & REBIND && i + 1 != n_execs) {
 			xe_wait_ufence(fd, &data[i].exec_sync, USER_FENCE_VALUE,
-				       NULL, fence_timeout);
+				       exec_queues[e], fence_timeout);
 			xe_vm_unbind_async(fd, vm, bind_exec_queues[e], 0,
 					   addr, bo_size, NULL, 0);
 
@@ -227,7 +227,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 				 * an invalidate.
 				 */
 				xe_wait_ufence(fd, &data[i].exec_sync,
-					       USER_FENCE_VALUE, NULL,
+					       USER_FENCE_VALUE, exec_queues[e],
 					       fence_timeout);
 				igt_assert_eq(data[i].data, 0xc0ffee);
 			} else if (i * 2 != n_execs) {
@@ -257,7 +257,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 
 	j = flags & INVALIDATE ? n_execs - 1 : 0;
 	for (i = j; i < n_execs; i++)
-		xe_wait_ufence(fd, &data[i].exec_sync, USER_FENCE_VALUE, NULL,
+		xe_wait_ufence(fd, &data[i].exec_sync, USER_FENCE_VALUE, exec_queues[i],
 			       fence_timeout);
 
 	/* Wait for all execs to complete */
diff --git a/tests/intel/xe_exec_fault_mode.c b/tests/intel/xe_exec_fault_mode.c
index ee7cbb604..f61248384 100644
--- a/tests/intel/xe_exec_fault_mode.c
+++ b/tests/intel/xe_exec_fault_mode.c
@@ -230,7 +230,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 
 		if (flags & REBIND && i + 1 != n_execs) {
 			xe_wait_ufence(fd, &data[i].exec_sync, USER_FENCE_VALUE,
-				       NULL, ONE_SEC);
+				       exec_queues[e], ONE_SEC);
 			xe_vm_unbind_async(fd, vm, bind_exec_queues[e], 0,
 					   addr, bo_size, NULL, 0);
 
@@ -259,7 +259,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 				 * an invalidate.
 				 */
 				xe_wait_ufence(fd, &data[i].exec_sync,
-					       USER_FENCE_VALUE, NULL, ONE_SEC);
+					       USER_FENCE_VALUE, exec_queues[e], ONE_SEC);
 				igt_assert_eq(data[i].data, 0xc0ffee);
 			} else if (i * 2 != n_execs) {
 				/*
@@ -290,7 +290,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 		j = flags & INVALIDATE ? n_execs - 1 : 0;
 		for (i = j; i < n_execs; i++)
 			xe_wait_ufence(fd, &data[i].exec_sync,
-				       USER_FENCE_VALUE, NULL, ONE_SEC);
+				       USER_FENCE_VALUE, exec_queues[i], ONE_SEC);
 	}
 
 	sync[0].addr = to_user_pointer(&data[0].vm_sync);
diff --git a/tests/intel/xe_exec_reset.c b/tests/intel/xe_exec_reset.c
index edfd27fe0..221138eb0 100644
--- a/tests/intel/xe_exec_reset.c
+++ b/tests/intel/xe_exec_reset.c
@@ -618,7 +618,7 @@ test_compute_mode(int fd, struct drm_xe_engine_class_instance *eci,
 
 	for (i = 1; i < n_execs; i++)
 		xe_wait_ufence(fd, &data[i].exec_sync, USER_FENCE_VALUE,
-			       NULL, THREE_SEC);
+			       exec_queues[i], THREE_SEC);
 
 	sync[0].addr = to_user_pointer(&data[0].vm_sync);
 	xe_vm_unbind_async(fd, vm, 0, 0, addr, bo_size, sync, 1);
diff --git a/tests/intel/xe_exec_threads.c b/tests/intel/xe_exec_threads.c
index fcb926698..f0c6116d1 100644
--- a/tests/intel/xe_exec_threads.c
+++ b/tests/intel/xe_exec_threads.c
@@ -359,7 +359,7 @@ test_compute_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
 			for (j = i - 0x20; j <= i; ++j)
 				xe_wait_ufence(fd, &data[j].exec_sync,
 					       USER_FENCE_VALUE,
-					       NULL, fence_timeout);
+					       exec_queues[e], fence_timeout);
 			xe_vm_unbind_async(fd, vm, 0, 0, addr, bo_size,
 					   NULL, 0);
 
@@ -389,7 +389,7 @@ test_compute_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
 				for (j = i == 0x20 ? 0 : i - 0x1f; j <= i; ++j)
 					xe_wait_ufence(fd, &data[j].exec_sync,
 						       USER_FENCE_VALUE,
-						       NULL, fence_timeout);
+						       exec_queues[e], fence_timeout);
 				igt_assert_eq(data[i].data, 0xc0ffee);
 			} else if (i * 2 != n_execs) {
 				/*
@@ -421,7 +421,7 @@ test_compute_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
 	j = flags & INVALIDATE ?
 		(flags & RACE ? n_execs / 2 + 1 : n_execs - 1) : 0;
 	for (i = j; i < n_execs; i++)
-		xe_wait_ufence(fd, &data[i].exec_sync, USER_FENCE_VALUE, NULL,
+		xe_wait_ufence(fd, &data[i].exec_sync, USER_FENCE_VALUE, exec_queues[i],
 			       fence_timeout);
 
 	/* Wait for all execs to complete */
diff --git a/tests/intel/xe_waitfence.c b/tests/intel/xe_waitfence.c
index 3be987954..b724eaa0e 100644
--- a/tests/intel/xe_waitfence.c
+++ b/tests/intel/xe_waitfence.c
@@ -37,22 +37,51 @@ static void do_bind(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
 }
 
 static int64_t wait_with_eci_abstime(int fd, uint64_t *addr, uint64_t value,
-				     struct drm_xe_engine_class_instance *eci,
-				     int64_t timeout)
+				     uint32_t exec_queue, int64_t timeout)
 {
 	struct drm_xe_wait_user_fence wait = {
 		.addr = to_user_pointer(addr),
 		.op = DRM_XE_UFENCE_WAIT_OP_EQ,
-		.flags = !eci ? 0 : DRM_XE_UFENCE_WAIT_FLAG_ABSTIME,
+		.flags = !exec_queue ? 0 : DRM_XE_UFENCE_WAIT_FLAG_ABSTIME,
 		.value = value,
 		.mask = DRM_XE_UFENCE_WAIT_MASK_U64,
 		.timeout = timeout,
-		.num_engines = eci ? 1 : 0,
-		.instances = eci ? to_user_pointer(eci) : 0,
+		.exec_queue_id = exec_queue,
+	};
+	struct timespec ts;
+
+	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_WAIT_USER_FENCE, &wait), 0);
+	igt_assert_eq(clock_gettime(CLOCK_MONOTONIC, &ts), 0);
+
+	return ts.tv_sec * 1e9 + ts.tv_nsec;
+}
+
+/**
+ * xe_wait_ufence_abstime:
+ * @fd: xe device fd
+ * @addr: address of value to compare
+ * @value: expected value (equal) in @address
+ * @exec_queue: exec_queue id
+ * @timeout: absolute time when wait expire
+ *
+ * Function compares @value with memory pointed by @addr until they are equal.
+ *
+ * Returns elapsed time in nanoseconds if user fence was signalled.
+ */
+int64_t xe_wait_ufence_abstime(int fd, uint64_t *addr, uint64_t value,
+			       uint32_t exec_queue, int64_t timeout)
+{
+	struct drm_xe_wait_user_fence wait = {
+		.addr = to_user_pointer(addr),
+		.op = DRM_XE_UFENCE_WAIT_OP_EQ,
+		.flags = !exec_queue ? DRM_XE_UFENCE_WAIT_FLAG_ABSTIME : 0,
+		.value = value,
+		.mask = DRM_XE_UFENCE_WAIT_MASK_U64,
+		.timeout = timeout,
+		.exec_queue_id = exec_queue,
 	};
 	struct timespec ts;
 
-	igt_assert(eci);
 	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_WAIT_USER_FENCE, &wait), 0);
 	igt_assert_eq(clock_gettime(CLOCK_MONOTONIC, &ts), 0);
 
@@ -82,7 +111,6 @@ enum waittype {
 static void
 waitfence(int fd, enum waittype wt)
 {
-	struct drm_xe_engine *engine = NULL;
 	struct timespec ts;
 	int64_t current, signalled;
 	uint32_t bo_1;
@@ -115,11 +143,11 @@ waitfence(int fd, enum waittype wt)
 		igt_debug("wait type: RELTIME - timeout: %ld, timeout left: %ld\n",
 			  MS_TO_NS(10), timeout);
 	} else if (wt == ENGINE) {
-		engine = xe_engine(fd, 1);
+		uint32_t exec_queue = xe_exec_queue_create_class(fd, vm, DRM_XE_ENGINE_CLASS_COPY);
 		clock_gettime(CLOCK_MONOTONIC, &ts);
 		current = ts.tv_sec * 1e9 + ts.tv_nsec;
 		timeout = current + MS_TO_NS(10);
-		signalled = wait_with_eci_abstime(fd, &wait_fence, 7, &engine->instance, timeout);
+		signalled = wait_with_eci_abstime(fd, &wait_fence, 7, exec_queue, timeout);
 		igt_debug("wait type: ENGINE ABSTIME - timeout: %" PRId64
 			  ", signalled: %" PRId64
 			  ", elapsed: %" PRId64 "\n",
@@ -166,8 +194,7 @@ invalid_flag(int fd)
 		.value = 1,
 		.mask = DRM_XE_UFENCE_WAIT_MASK_U64,
 		.timeout = -1,
-		.num_engines = 0,
-		.instances = 0,
+		.exec_queue_id = exec_queue,
 	};
 
 	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
@@ -191,8 +218,7 @@ invalid_ops(int fd)
 		.value = 1,
 		.mask = DRM_XE_UFENCE_WAIT_MASK_U64,
 		.timeout = 1,
-		.num_engines = 0,
-		.instances = 0,
+		.exec_queue_id = exec_queue,
 	};
 
 	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
@@ -204,32 +230,6 @@ invalid_ops(int fd)
 	do_ioctl_err(fd, DRM_IOCTL_XE_WAIT_USER_FENCE, &wait, EINVAL);
 }
 
-static void
-invalid_engine(int fd)
-{
-	uint32_t bo;
-
-	struct drm_xe_wait_user_fence wait = {
-		.addr = to_user_pointer(&wait_fence),
-		.op = DRM_XE_UFENCE_WAIT_OP_EQ,
-		.flags = 0,
-		.value = 1,
-		.mask = DRM_XE_UFENCE_WAIT_MASK_U64,
-		.timeout = -1,
-		.num_engines = 1,
-		.instances = 0,
-	};
-
-	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
-
-	bo = xe_bo_create(fd, vm, 0x40000, vram_if_possible(fd, 0), 0);
-
-	do_bind(fd, vm, bo, 0, 0x200000, 0x40000, 1);
-
-	do_ioctl_err(fd, DRM_IOCTL_XE_WAIT_USER_FENCE, &wait, EFAULT);
-}
-
-
 igt_main
 {
 	int fd;
@@ -252,9 +252,6 @@ igt_main
 	igt_subtest("invalid-ops")
 		invalid_ops(fd);
 
-	igt_subtest("invalid-engine")
-		invalid_engine(fd);
-
 	igt_fixture
 		drm_close_driver(fd);
 }
-- 
2.25.1


* [igt-dev] [PATCH v3 2/2] drm-uapi/xe: kill xe_wait_user_fence_ioctl when exec_queue reset happen
  2023-12-06 11:47 [igt-dev] [PATCH v3 0/2] RFC: drm-uapi/xe: add exec_queue_id member to drm_xe_wait_user_fence structure Bommu Krishnaiah
  2023-12-06 11:47 ` [igt-dev] [PATCH v3 1/2] " Bommu Krishnaiah
@ 2023-12-06 11:47 ` Bommu Krishnaiah
  2023-12-06 12:27 ` [igt-dev] ✗ Fi.CI.BUILD: failure for RFC: drm-uapi/xe: add exec_queue_id member to drm_xe_wait_user_fence structure (rev3) Patchwork
  2 siblings, 0 replies; 4+ messages in thread
From: Bommu Krishnaiah @ 2023-12-06 11:47 UTC (permalink / raw)
  To: igt-dev; +Cc: Bommu Krishnaiah, Rodrigo Vivi

Skip the GPU mapping (vm_bind) for the object, so that an exec_queue reset
happens and xe_wait_ufence returns EIO instead of ETIME.

Signed-off-by: Bommu Krishnaiah <krishnaiah.bommu@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Francois Dugast <francois.dugast@intel.com>
---
 tests/intel/xe_waitfence.c | 84 +++++++++++++++++++++++++++++++++++++-
 1 file changed, 82 insertions(+), 2 deletions(-)

diff --git a/tests/intel/xe_waitfence.c b/tests/intel/xe_waitfence.c
index b724eaa0e..1e2fd1661 100644
--- a/tests/intel/xe_waitfence.c
+++ b/tests/intel/xe_waitfence.c
@@ -178,8 +178,8 @@ waitfence(int fd, enum waittype wt)
  * SUBTEST: invalid-ops
  * Description: Check query with invalid ops returns expected error code
  *
- * SUBTEST: invalid-engine
- * Description: Check query with invalid engine info returns expected error code
+ * SUBTEST: invalid-exec_queue-wait
+ * Description: Check xe_wait_ufence returns the expected error code when an exec_queue reset happens
  */
 
 static void
@@ -230,6 +230,83 @@ invalid_ops(int fd)
 	do_ioctl_err(fd, DRM_IOCTL_XE_WAIT_USER_FENCE, &wait, EINVAL);
 }
 
+static void
+invalid_exec_queue_wait(int fd)
+{
+	uint32_t bo, b;
+	uint64_t batch_offset;
+	uint64_t batch_addr;
+	uint64_t sdi_offset;
+	uint64_t sdi_addr;
+	uint64_t addr = 0x1a0000;
+
+	struct {
+		uint32_t batch[16];
+		uint64_t pad;
+		uint64_t vm_sync;
+		uint64_t exec_sync;
+		uint32_t data;
+	} *data;
+
+#define USER_FENCE_VALUE        0xdeadbeefdeadbeefull
+	struct drm_xe_sync sync[1] = {
+		{ .flags = DRM_XE_SYNC_FLAG_USER_FENCE | DRM_XE_SYNC_FLAG_SIGNAL,
+			.timeline_value = USER_FENCE_VALUE },
+	};
+
+	struct drm_xe_exec exec = {
+		.num_batch_buffer = 1,
+		.num_syncs = 1,
+		.syncs = to_user_pointer(sync),
+	};
+
+	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
+	uint32_t exec_queue = xe_exec_queue_create_class(fd, vm, DRM_XE_ENGINE_CLASS_COPY);
+	struct drm_xe_wait_user_fence wait = {
+		.op = DRM_XE_UFENCE_WAIT_OP_EQ,
+		.flags = 0,
+		.value = 0xaabbaa,
+		.mask = DRM_XE_UFENCE_WAIT_MASK_U64,
+		.timeout = -1,
+		.exec_queue_id = exec_queue,
+	};
+
+	bo = xe_bo_create(fd, vm, 0x40000, vram_if_possible(fd, 0), 0);
+	data = xe_bo_map(fd, bo, 0x40000);
+
+	batch_offset = (char *)&data[0].batch - (char *)data;
+	batch_addr = addr + batch_offset;
+	sdi_offset = (char *)&data[0].data - (char *)data;
+	sdi_addr = addr + sdi_offset;
+
+	b = 0;
+	data[0].batch[b++] = MI_STORE_DWORD_IMM_GEN4;
+	data[0].batch[b++] = sdi_addr;
+	data[0].batch[b++] = sdi_addr >> 32;
+	data[0].batch[b++] = 0xaabbcc;
+	data[0].batch[b++] = MI_BATCH_BUFFER_END;
+	igt_assert(b <= ARRAY_SIZE(data[0].batch));
+
+	wait.addr = to_user_pointer(&data[0].exec_sync);
+	exec.exec_queue_id = exec_queue;
+	exec.address = batch_addr;
+
+	xe_exec(fd, &exec);
+
+	/*
+	 * Skip the GPU mapping (vm_bind) for the object, so that an
+	 * exec_queue reset happens and xe_wait_ufence returns EIO not ETIME.
+	 */
+	do_ioctl_err(fd, DRM_IOCTL_XE_WAIT_USER_FENCE, &wait, EIO);
+
+	xe_exec_queue_destroy(fd, exec_queue);
+
+	if (bo) {
+		 munmap(data, 0x40000);
+		 gem_close(fd, bo);
+	}
+}
+
 igt_main
 {
 	int fd;
@@ -252,6 +329,9 @@ igt_main
 	igt_subtest("invalid-ops")
 		invalid_ops(fd);
 
+	igt_subtest("invalid-exec_queue-wait")
+		invalid_exec_queue_wait(fd);
+
 	igt_fixture
 		drm_close_driver(fd);
 }
-- 
2.25.1


* [igt-dev] ✗ Fi.CI.BUILD: failure for RFC: drm-uapi/xe: add exec_queue_id member to drm_xe_wait_user_fence structure (rev3)
  2023-12-06 11:47 [igt-dev] [PATCH v3 0/2] RFC: drm-uapi/xe: add exec_queue_id member to drm_xe_wait_user_fence structure Bommu Krishnaiah
  2023-12-06 11:47 ` [igt-dev] [PATCH v3 1/2] " Bommu Krishnaiah
  2023-12-06 11:47 ` [igt-dev] [PATCH v3 2/2] drm-uapi/xe: kill xe_wait_user_fence_ioctl when exec_queue reset happen Bommu Krishnaiah
@ 2023-12-06 12:27 ` Patchwork
  2 siblings, 0 replies; 4+ messages in thread
From: Patchwork @ 2023-12-06 12:27 UTC (permalink / raw)
  To: Bommu Krishnaiah; +Cc: igt-dev

== Series Details ==

Series: RFC: drm-uapi/xe: add exec_queue_id member to drm_xe_wait_user_fence structure (rev3)
URL   : https://patchwork.freedesktop.org/series/127364/
State : failure

== Summary ==

IGT patchset build failed on latest successful build
f42cda2a9ad74846fb59eea436cc98c8789849f9 lib/igt_core: initialize srandom seed on startup

Tail of build.log:
      |            void *
In file included from ../../../usr/src/igt-gpu-tools/tests/intel/xe_exec_threads.c:21:
../../../usr/src/igt-gpu-tools/lib/xe/xe_ioctl.h:92:19: note: expected ‘uint32_t’ {aka ‘unsigned int’} but argument is of type ‘void *’
   92 |          uint32_t exec_queue,int64_t timeout);
      |          ~~~~~~~~~^~~~~~~~~~
../../../usr/src/igt-gpu-tools/tests/intel/xe_exec_threads.c:433:57: warning: passing argument 4 of ‘xe_wait_ufence’ makes integer from pointer without a cast [-Wint-conversion]
  433 |  xe_wait_ufence(fd, &data[0].vm_sync, USER_FENCE_VALUE, NULL, fence_timeout);
      |                                                         ^~~~
      |                                                         |
      |                                                         void *
In file included from ../../../usr/src/igt-gpu-tools/tests/intel/xe_exec_threads.c:21:
../../../usr/src/igt-gpu-tools/lib/xe/xe_ioctl.h:92:19: note: expected ‘uint32_t’ {aka ‘unsigned int’} but argument is of type ‘void *’
   92 |          uint32_t exec_queue,int64_t timeout);
      |          ~~~~~~~~~^~~~~~~~~~
[690/1662] Compiling C object 'lib/76b5a35@@i915_perf@sha/meson-generated_.._i915_perf_registers_dg1.c.o'.
[691/1662] Compiling C object 'tests/59830eb@@kms_chamelium_hpd@exe/chamelium_kms_chamelium_hpd.c.o'.
[692/1662] Compiling C object 'lib/76b5a35@@i915_perf@sha/meson-generated_.._i915_perf_registers_adl.c.o'.
[693/1662] Compiling C object 'lib/76b5a35@@i915_perf@sha/meson-generated_.._i915_perf_metrics_chv.c.o'.
[694/1662] Compiling C object 'tests/59830eb@@kms_chamelium_frames@exe/chamelium_kms_chamelium_frames.c.o'.
[695/1662] Compiling C object 'lib/76b5a35@@i915_perf@sha/meson-generated_.._i915_perf_metrics_kblgt3.c.o'.
[696/1662] Compiling C object 'lib/76b5a35@@i915_perf@sha/meson-generated_.._i915_perf_metrics_cflgt3.c.o'.
[697/1662] Compiling C object 'lib/76b5a35@@i915_perf@sha/meson-generated_.._i915_perf_metrics_tglgt1.c.o'.
[698/1662] Compiling C object 'lib/76b5a35@@i915_perf@sha/meson-generated_.._i915_perf_metrics_sklgt3.c.o'.
[699/1662] Compiling C object 'lib/76b5a35@@i915_perf@sha/meson-generated_.._i915_perf_metrics_sklgt2.c.o'.
[700/1662] Compiling C object 'lib/76b5a35@@i915_perf@sha/meson-generated_.._i915_perf_metrics_sklgt4.c.o'.
[701/1662] Compiling C object 'lib/76b5a35@@i915_perf@sha/meson-generated_.._i915_perf_metrics_glk.c.o'.
[702/1662] Compiling C object 'lib/76b5a35@@i915_perf@sha/meson-generated_.._i915_perf_metrics_cnl.c.o'.
[703/1662] Compiling C object 'lib/76b5a35@@i915_perf@sha/meson-generated_.._i915_perf_metrics_bxt.c.o'.
[704/1662] Compiling C object 'lib/76b5a35@@i915_perf@sha/meson-generated_.._i915_perf_metrics_cflgt2.c.o'.
[705/1662] Compiling C object 'lib/76b5a35@@i915_perf@sha/meson-generated_.._i915_perf_metrics_kblgt2.c.o'.
[706/1662] Compiling C object 'lib/76b5a35@@i915_perf@sha/meson-generated_.._i915_perf_metrics_bdw.c.o'.
[707/1662] Compiling C object 'lib/76b5a35@@i915_perf@sha/meson-generated_.._i915_perf_metrics_rkl.c.o'.
[708/1662] Compiling C object 'lib/76b5a35@@i915_perf@sha/meson-generated_.._i915_perf_metrics_dg1.c.o'.
[709/1662] Compiling C object 'lib/76b5a35@@i915_perf@sha/meson-generated_.._i915_perf_metrics_icl.c.o'.
[710/1662] Compiling C object 'lib/76b5a35@@i915_perf@sha/meson-generated_.._i915_perf_metrics_tglgt2.c.o'.
[711/1662] Compiling C object 'lib/76b5a35@@i915_perf@sha/meson-generated_.._i915_perf_metrics_adl.c.o'.
[712/1662] Compiling C object 'lib/76b5a35@@i915_perf@sha/meson-generated_.._i915_perf_metrics_ehl.c.o'.
[713/1662] Compiling C object 'tests/59830eb@@gem_exec_fence@exe/intel_gem_exec_fence.c.o'.
[714/1662] Compiling C object 'tests/59830eb@@xe_vm@exe/intel_xe_vm.c.o'.
[715/1662] Compiling C object 'tests/59830eb@@kms_frontbuffer_tracking@exe/intel_kms_frontbuffer_tracking.c.o'.
[716/1662] Compiling C object 'lib/76b5a35@@i915_perf@sha/meson-generated_.._i915_perf_registers_acmgt1.c.o'.
[717/1662] Compiling C object 'lib/76b5a35@@i915_perf@sha/meson-generated_.._i915_perf_registers_acmgt2.c.o'.
[718/1662] Compiling C object 'tests/59830eb@@perf_pmu@exe/intel_perf_pmu.c.o'.
[719/1662] Compiling C object 'tests/59830eb@@perf@exe/intel_perf.c.o'.
[720/1662] Compiling C object 'tests/59830eb@@gem_exec_schedule@exe/intel_gem_exec_schedule.c.o'.
[721/1662] Linking target lib/libigt.so.0.
[722/1662] Compiling C object 'lib/76b5a35@@i915_perf@sha/meson-generated_.._i915_perf_equations.c.o'.
[723/1662] Compiling C object 'lib/76b5a35@@i915_perf@sha/meson-generated_.._i915_perf_metrics_acmgt1.c.o'.
[724/1662] Compiling C object 'lib/76b5a35@@i915_perf@sha/meson-generated_.._i915_perf_metrics_acmgt2.c.o'.
ninja: build stopped: subcommand failed.


