* [igt-dev] [PATCH 0/2] RFC: drm-uapi/xe: add exec_queue_id member to drm_xe_wait_user_fence structure
@ 2023-12-05 14:36 Bommu Krishnaiah
2023-12-05 14:36 ` [igt-dev] [PATCH 1/2] drm-uapi/xe: Kill exec_queue_set_property Bommu Krishnaiah
` (2 more replies)
0 siblings, 3 replies; 4+ messages in thread
From: Bommu Krishnaiah @ 2023-12-05 14:36 UTC (permalink / raw)
To: igt-dev; +Cc: Bommu Krishnaiah
Remove the num_engines/instances members from the drm_xe_wait_user_fence structure
and add an exec_queue_id member.
This test exercises the behaviour when an exec_queue reset happens.
About the test:
Skip the GPU mapping (vm_bind) for the object, so that an exec_queue
reset happens and xe_wait_ufence returns EIO instead of ETIME.
I am able to see that the exec_queue reset happened and xe_wait_user_fence_ioctl returned EIO.
Test result:
root@DUT7075PVC:/home/gta# LD_LIBRARY_PATH=/home/gta/ ./xe_waitfence --r invalid-exec_queue-wait
IGT-Version: 1.28-g3c0162fc4 (x86_64) (Linux: 6.6.0-rc3-xe x86_64)
Opened device: /dev/dri/card0
Starting subtest: invalid-exec_queue-wait
Subtest invalid-exec_queue-wait: SUCCESS (0.993s)
Dmesg logs:
[ 807.680378] [IGT] xe_waitfence: executing
[ 807.699796] [drm:drm_stub_open [drm]]
[ 807.704536] xe 0000:51:00.0: [drm:drm_open_helper [drm]] comm="xe_waitfence", pid=2952, minor=0
[ 807.715155] xe 0000:51:00.0: [drm:drm_ioctl [drm]] comm="xe_waitfence" pid=2952, dev=0xe200, auth=1, DRM_IOCTL_VERSION
[ 807.727328] xe 0000:51:00.0: [drm:drm_ioctl [drm]] comm="xe_waitfence" pid=2952, dev=0xe200, auth=1, DRM_IOCTL_VERSION
[ 807.739580] xe 0000:51:00.0: [drm:drm_ioctl [drm]] comm="xe_waitfence" pid=2952, dev=0xe200, auth=1, XE_DEVICE_QUERY
[ 807.751518] xe 0000:51:00.0: [drm:drm_ioctl [drm]] comm="xe_waitfence" pid=2952, dev=0xe200, auth=1, XE_DEVICE_QUERY
[ 807.763550] xe 0000:51:00.0: [drm:drm_ioctl [drm]] comm="xe_waitfence" pid=2952, dev=0xe200, auth=1, XE_DEVICE_QUERY
[ 807.775525] xe 0000:51:00.0: [drm:drm_ioctl [drm]] comm="xe_waitfence" pid=2952, dev=0xe200, auth=1, XE_DEVICE_QUERY
[ 807.787556] xe 0000:51:00.0: [drm:drm_ioctl [drm]] comm="xe_waitfence" pid=2952, dev=0xe200, auth=1, XE_DEVICE_QUERY
[ 807.799494] xe 0000:51:00.0: [drm:drm_ioctl [drm]] comm="xe_waitfence" pid=2952, dev=0xe200, auth=1, XE_DEVICE_QUERY
[ 807.811531] xe 0000:51:00.0: [drm:drm_ioctl [drm]] comm="xe_waitfence" pid=2952, dev=0xe200, auth=1, XE_DEVICE_QUERY
[ 807.823476] xe 0000:51:00.0: [drm:drm_ioctl [drm]] comm="xe_waitfence" pid=2952, dev=0xe200, auth=1, XE_DEVICE_QUERY
[ 807.835577] xe 0000:51:00.0: [drm:drm_ioctl [drm]] comm="xe_waitfence" pid=2952, dev=0xe200, auth=1, DRM_IOCTL_VERSION
[ 807.847921] [IGT] xe_waitfence: starting subtest invalid-exec_queue-wait
[ 807.855528] xe 0000:51:00.0: [drm:drm_ioctl [drm]] comm="xe_waitfence" pid=2952, dev=0xe200, auth=1, XE_VM_CREATE
[ 807.891346] xe 0000:51:00.0: [drm:xe_reg_sr_apply_mmio [xe]] GT0: Applying GT save-restore MMIOs
[ 807.901602] xe 0000:51:00.0: [drm:xe_reg_sr_apply_mmio [xe]] GT0: REG[0x9424] = 0x7ffffffc
-----------------
-----------------
[ 808.560967] xe REG[0x4500-0x45ff]: deny rw access
[ 808.566292] xe REG[0x1e3a8-0x1e3af]: allow read access
[ 808.572161] xe 0000:51:00.0: [drm:xe_reg_sr_apply_mmio [xe]] GT0: Applying ccs3 save-restore MMIOs
[ 808.582462] xe 0000:51:00.0: [drm:xe_reg_sr_apply_mmio [xe]] GT0: REG[0x260c4] = 0x3f7e0104
[ 808.592096] xe 0000:51:00.0: [drm:xe_reg_sr_apply_whitelist [xe]] Whitelisting ccs3 registers
[ 808.601962] xe REG[0x4400-0x45ff]: deny rw access
[ 808.607281] xe REG[0x4500-0x45ff]: deny rw access
[ 808.612608] xe REG[0x263a8-0x263af]: allow read access
[ 808.618477] xe 0000:51:00.0: [drm] GT0: resumed
[ 808.626283] xe 0000:51:00.0: [drm:drm_ioctl [drm]] comm="xe_waitfence" pid=2952, dev=0xe200, auth=1, XE_EXEC_QUEUE_CREATE
[ 808.638765] krishna xe_exec_queue_create_ioctl
[ 808.645592] krishna args->exec_queue_id = 1
[ 808.650328] xe 0000:51:00.0: [drm:drm_ioctl [drm]] comm="xe_waitfence" pid=2952, dev=0xe200, auth=1, XE_GEM_CREATE
[ 808.662621] xe 0000:51:00.0: [drm:xe_migrate_clear [xe]] Pass 0, size: 262144
[ 808.672889] xe 0000:51:00.0: [drm:drm_ioctl [drm]] comm="xe_waitfence" pid=2952, dev=0xe200, auth=1, XE_GEM_MMAP_OFFSET
[ 808.685733] xe 0000:51:00.0: [drm:drm_ioctl [drm]] comm="xe_waitfence" pid=2952, dev=0xe200, auth=1, XE_EXEC
[ 808.696900] krishna args->exec_queue_id = 1
[ 808.702700] xe 0000:51:00.0: [drm:drm_ioctl [drm]] comm="xe_waitfence" pid=2952, dev=0xe200, auth=1, XE_WAIT_USER_FENCE
[ 808.704620] xe 0000:51:00.0: [drm:pf_queue_work_func [xe]]
ASID: 1048575
VFID: 0
PDATA: 0x00a3
Faulted Address: 0x00000000001a0000
FaultType: 0
AccessType: 0
FaultLevel: 4
EngineClass: 3
EngineInstance: 0
[ 808.750685] xe 0000:51:00.0: [drm:pf_queue_work_func [xe]] Fault response: Unsuccessful -22
[ 808.760519] xe 0000:51:00.0: [drm:xe_guc_exec_queue_memory_cat_error_handler [xe]] Engine memory cat error: guc_id=2
[ 808.773237] xe 0000:51:00.0: [drm] exec gueue reset detected
[ 808.773965] xe 0000:51:00.0: [drm] Timedout job: seqno=4294967169, guc_id=2, flags=0x8
[ 808.779632] xe 0000:51:00.0: [drm:xe_wait_user_fence_ioctl [xe]] Ioctl argument check failed at drivers/gpu/drm/xe/xe_wait_user_fence.c:174: err < 0
[ 808.789655] xe 0000:51:00.0: [drm] Xe device coredump has been created
[ 808.803796] xe 0000:51:00.0: [drm:drm_ioctl [drm]] comm="xe_waitfence", pid=2952, ret=-5
[ 808.811133] xe 0000:51:00.0: [drm] Check your /sys/class/drm/card0/device/devcoredump/data
[ 808.811220] xe 0000:51:00.0: [drm:drm_ioctl [drm]] comm="xe_waitfence" pid=2952, dev=0xe200, auth=1, XE_EXEC_QUEUE_DESTROY
[ 808.823605] xe 0000:51:00.0: [drm] Engine reset: guc_id=2
[ 808.829862] xe 0000:51:00.0: [drm:drm_ioctl [drm]] comm="xe_waitfence" pid=2952, dev=0xe200, auth=1, DRM_IOCTL_GEM_CLOSE
[ 808.843312] xe 0000:51:00.0: [drm:guc_exec_queue_timedout_job [xe]] Timedout signaled job: seqno=4294967169, guc_id=2, flags=0x9
[ 808.848707] [IGT] xe_waitfence: finished subtest invalid-exec_queue-wait, SUCCESS
[ 808.882255] xe 0000:51:00.0: [drm:drm_ioctl [drm]] comm="xe_waitfence" pid=2952, dev=0xe200, auth=1, DRM_IOCTL_VERSION
[ 808.894404] xe 0000:51:00.0: [drm:drm_file_free.part.0 [drm]] comm="xe_waitfence", pid=2952, dev=0xe200, open_count=1
[ 808.907374] xe 0000:51:00.0: [drm:drm_lastclose [drm]]
[ 808.913594] xe 0000:51:00.0: [drm:drm_lastclose [drm]] driver lastclose completed
[ 808.922416] [IGT] xe_waitfence: exiting, ret=0
The tests below need to be modified for the uapi changes; with the current patch they will fail:
xe_exec_balancer.c
xe_exec_compute_mode.c
xe_exec_fault_mode.c
xe_exec_reset.c
xe_exec_threads.c
xe_waitfence.c
Bommu Krishnaiah (1):
drm-uapi/xe: kill xe_wait_user_fence_ioctl when exec_queue reset
Francois Dugast (1):
drm-uapi/xe: Kill exec_queue_set_property
include/drm-uapi/xe_drm.h | 48 ++++++----------------
tests/intel/xe_waitfence.c | 83 ++++++++++++++++++++++++++++++++++++++
2 files changed, 96 insertions(+), 35 deletions(-)
--
2.25.1
* [igt-dev] [PATCH 1/2] drm-uapi/xe: Kill exec_queue_set_property
2023-12-05 14:36 [igt-dev] [PATCH 0/2] RFC: drm-uapi/xe: add exec_queue_id member to drm_xe_wait_user_fence structure Bommu Krishnaiah
@ 2023-12-05 14:36 ` Bommu Krishnaiah
2023-12-05 14:36 ` [igt-dev] [PATCH 2/2] drm-uapi/xe: kill xe_wait_user_fence_ioctl when exec_queue reset Bommu Krishnaiah
2023-12-05 15:32 ` [igt-dev] ✗ Fi.CI.BUILD: failure for RFC: drm-uapi/xe: add exec_queue_id member to drm_xe_wait_user_fence structure Patchwork
2 siblings, 0 replies; 4+ messages in thread
From: Bommu Krishnaiah @ 2023-12-05 14:36 UTC (permalink / raw)
To: igt-dev; +Cc: Rodrigo Vivi
From: Francois Dugast <francois.dugast@intel.com>
Align with commit ("drm/xe/uapi: Kill exec_queue_set_property")
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
include/drm-uapi/xe_drm.h | 48 +++++++++++----------------------------
1 file changed, 13 insertions(+), 35 deletions(-)
diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index e2cce951c..590f7b7af 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -105,10 +105,9 @@ struct xe_user_extension {
#define DRM_XE_VM_BIND 0x05
#define DRM_XE_EXEC_QUEUE_CREATE 0x06
#define DRM_XE_EXEC_QUEUE_DESTROY 0x07
-#define DRM_XE_EXEC_QUEUE_SET_PROPERTY 0x08
-#define DRM_XE_EXEC_QUEUE_GET_PROPERTY 0x09
-#define DRM_XE_EXEC 0x0a
-#define DRM_XE_WAIT_USER_FENCE 0x0b
+#define DRM_XE_EXEC_QUEUE_GET_PROPERTY 0x08
+#define DRM_XE_EXEC 0x09
+#define DRM_XE_WAIT_USER_FENCE 0x0a
/* Must be kept compact -- no holes */
#define DRM_IOCTL_XE_DEVICE_QUERY DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_DEVICE_QUERY, struct drm_xe_device_query)
@@ -860,38 +859,17 @@ struct drm_xe_vm_bind {
/* Monitor 64MB contiguous region with 2M sub-granularity */
#define DRM_XE_ACC_GRANULARITY_64M 3
-/**
- * struct drm_xe_exec_queue_set_property - exec queue set property
- *
- * Same namespace for extensions as drm_xe_exec_queue_create
- */
-struct drm_xe_exec_queue_set_property {
- /** @extensions: Pointer to the first extension struct, if any */
- __u64 extensions;
-
- /** @exec_queue_id: Exec queue ID */
- __u32 exec_queue_id;
-
-#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY 0
-#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE 1
-#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT 2
-#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_PERSISTENCE 3
-#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_JOB_TIMEOUT 4
-#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_ACC_TRIGGER 5
-#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_ACC_NOTIFY 6
-#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_ACC_GRANULARITY 7
- /** @property: property to set */
- __u32 property;
-
- /** @value: property value */
- __u64 value;
-
- /** @reserved: Reserved */
- __u64 reserved[2];
-};
-
struct drm_xe_exec_queue_create {
-#define DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY 0
+#define DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY 0
+#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY 0
+#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE 1
+#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT 2
+#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_PERSISTENCE 3
+#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_JOB_TIMEOUT 4
+#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_ACC_TRIGGER 5
+#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_ACC_NOTIFY 6
+#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_ACC_GRANULARITY 7
+
/** @extensions: Pointer to the first extension struct, if any */
__u64 extensions;
--
2.25.1
* [igt-dev] [PATCH 2/2] drm-uapi/xe: kill xe_wait_user_fence_ioctl when exec_queue reset
2023-12-05 14:36 [igt-dev] [PATCH 0/2] RFC: drm-uapi/xe: add exec_queue_id member to drm_xe_wait_user_fence structure Bommu Krishnaiah
2023-12-05 14:36 ` [igt-dev] [PATCH 1/2] drm-uapi/xe: Kill exec_queue_set_property Bommu Krishnaiah
@ 2023-12-05 14:36 ` Bommu Krishnaiah
2023-12-05 15:32 ` [igt-dev] ✗ Fi.CI.BUILD: failure for RFC: drm-uapi/xe: add exec_queue_id member to drm_xe_wait_user_fence structure Patchwork
2 siblings, 0 replies; 4+ messages in thread
From: Bommu Krishnaiah @ 2023-12-05 14:36 UTC (permalink / raw)
To: igt-dev; +Cc: Bommu Krishnaiah, Rodrigo Vivi
Skip the GPU mapping (vm_bind) for the object, so that an exec_queue
reset happens and xe_wait_ufence returns EIO instead of ETIME.
Signed-off-by: Bommu Krishnaiah <krishnaiah.bommu@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
tests/intel/xe_waitfence.c | 83 ++++++++++++++++++++++++++++++++++++++
1 file changed, 83 insertions(+)
diff --git a/tests/intel/xe_waitfence.c b/tests/intel/xe_waitfence.c
index 3be987954..ac3c64652 100644
--- a/tests/intel/xe_waitfence.c
+++ b/tests/intel/xe_waitfence.c
@@ -152,6 +152,9 @@ waitfence(int fd, enum waittype wt)
*
* SUBTEST: invalid-engine
* Description: Check query with invalid engine info returns expected error code
+ *
+ * SUBTEST: invalid-exec_queue-wait
+ * Description: Check that xe_wait_ufence returns the expected error code when an exec_queue reset happens
*/
static void
@@ -229,6 +232,83 @@ invalid_engine(int fd)
do_ioctl_err(fd, DRM_IOCTL_XE_WAIT_USER_FENCE, &wait, EFAULT);
}
+static void
+invalid_exec_queue_wait(int fd)
+{
+ uint32_t bo, b;
+ uint64_t batch_offset;
+ uint64_t batch_addr;
+ uint64_t sdi_offset;
+ uint64_t sdi_addr;
+ uint64_t addr = 0x1a0000;
+
+ struct {
+ uint32_t batch[16];
+ uint64_t pad;
+ uint64_t vm_sync;
+ uint64_t exec_sync;
+ uint32_t data;
+ } *data;
+
+#define USER_FENCE_VALUE 0xdeadbeefdeadbeefull
+ struct drm_xe_sync sync[1] = {
+ { .flags = DRM_XE_SYNC_FLAG_USER_FENCE | DRM_XE_SYNC_FLAG_SIGNAL,
+ .timeline_value = USER_FENCE_VALUE },
+ };
+
+ struct drm_xe_exec exec = {
+ .num_batch_buffer = 1,
+ .num_syncs = 1,
+ .syncs = to_user_pointer(sync),
+ };
+
+ uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
+ uint32_t exec_queue = xe_exec_queue_create_class(fd, vm, DRM_XE_ENGINE_CLASS_COPY);
+ struct drm_xe_wait_user_fence wait = {
+ .op = DRM_XE_UFENCE_WAIT_OP_EQ,
+ .flags = 0,
+ .value = 0xaabbaa,
+ .mask = DRM_XE_UFENCE_WAIT_MASK_U64,
+ .timeout = -1,
+ .exec_queue_id = exec_queue,
+ };
+
+ bo = xe_bo_create_flags(fd, vm, 0x40000, MY_FLAG);
+ data = xe_bo_map(fd, bo, 0x40000);
+
+ batch_offset = (char *)&data[0].batch - (char *)data;
+ batch_addr = addr + batch_offset;
+ sdi_offset = (char *)&data[0].data - (char *)data;
+ sdi_addr = addr + sdi_offset;
+
+ b = 0;
+ data[0].batch[b++] = MI_STORE_DWORD_IMM_GEN4;
+ data[0].batch[b++] = sdi_addr;
+ data[0].batch[b++] = sdi_addr >> 32;
+ data[0].batch[b++] = 0xaabbcc;
+ data[0].batch[b++] = MI_BATCH_BUFFER_END;
+ igt_assert(b <= ARRAY_SIZE(data[0].batch));
+
+ wait.addr = to_user_pointer(&data[0].exec_sync);
+ exec.exec_queue_id = exec_queue;
+ exec.address = batch_addr;
+
+ xe_exec(fd, &exec);
+
+ /*
+ * Skip the GPU mapping (vm_bind) for the object, so that an
+ * exec_queue reset happens and xe_wait_ufence returns EIO
+ * instead of ETIME.
+ */
+ do_ioctl_err(fd, DRM_IOCTL_XE_WAIT_USER_FENCE, &wait, EIO);
+
+ xe_exec_queue_destroy(fd, exec_queue);
+
+ if (bo) {
+ munmap(data, 0x40000);
+ gem_close(fd, bo);
+ }
+}
+
igt_main
{
@@ -255,6 +335,9 @@ igt_main
igt_subtest("invalid-engine")
invalid_engine(fd);
+ igt_subtest("invalid-exec_queue-wait")
+ invalid_exec_queue_wait(fd);
+
igt_fixture
drm_close_driver(fd);
}
--
2.25.1
* [igt-dev] ✗ Fi.CI.BUILD: failure for RFC: drm-uapi/xe: add exec_queue_id member to drm_xe_wait_user_fence structure
2023-12-05 14:36 [igt-dev] [PATCH 0/2] RFC: drm-uapi/xe: add exec_queue_id member to drm_xe_wait_user_fence structure Bommu Krishnaiah
2023-12-05 14:36 ` [igt-dev] [PATCH 1/2] drm-uapi/xe: Kill exec_queue_set_property Bommu Krishnaiah
2023-12-05 14:36 ` [igt-dev] [PATCH 2/2] drm-uapi/xe: kill xe_wait_user_fence_ioctl when exec_queue reset Bommu Krishnaiah
@ 2023-12-05 15:32 ` Patchwork
2 siblings, 0 replies; 4+ messages in thread
From: Patchwork @ 2023-12-05 15:32 UTC (permalink / raw)
To: Bommu Krishnaiah; +Cc: igt-dev
== Series Details ==
Series: RFC: drm-uapi/xe: add exec_queue_id member to drm_xe_wait_user_fence structure
URL : https://patchwork.freedesktop.org/series/127364/
State : failure
== Summary ==
IGT patchset build failed on latest successful build
48a47d91b7727215b965690c69d84159c8fb1aa2 Revert "test/xe_spin_batch: Add spin-fixed-duration-with-preempter"
Tail of build.log:
[650/1662] Compiling C object 'tests/v3d/cad21b8@@v3d_job_submission@exe/v3d_job_submission.c.o'.
[651/1662] Compiling C object 'benchmarks/cb1d2fd@@gem_prw@exe/gem_prw.c.o'.
[652/1662] Compiling C object 'benchmarks/cb1d2fd@@gem_set_domain@exe/gem_set_domain.c.o'.
[653/1662] Compiling C object 'tests/59830eb@@xe_exec_reset@exe/intel_xe_exec_reset.c.o'.
[654/1662] Compiling C object 'tests/vc4/e4667e8@@vc4_wait_bo@exe/vc4_wait_bo.c.o'.
[655/1662] Compiling C object 'tests/59830eb@@xe_evict@exe/intel_xe_evict.c.o'.
[656/1662] Compiling C object 'tests/vc4/e4667e8@@vc4_perfmon@exe/vc4_perfmon.c.o'.
[657/1662] Compiling C object 'tests/vmwgfx/776e741@@vmw_mob_stress@exe/vmw_mob_stress.c.o'.
[658/1662] Compiling C object 'tests/59830eb@@xe_query@exe/intel_xe_query.c.o'.
[659/1662] Compiling C object 'benchmarks/cb1d2fd@@gem_exec_ctx@exe/gem_exec_ctx.c.o'.
[660/1662] Compiling C object 'benchmarks/cb1d2fd@@gem_create@exe/gem_create.c.o'.
[661/1662] Compiling C object 'tests/vmwgfx/776e741@@vmw_tri@exe/vmw_tri.c.o'.
[662/1662] Compiling C object 'tests/vmwgfx/776e741@@vmw_ref_count@exe/vmw_ref_count.c.o'.
[663/1662] Compiling C object 'benchmarks/cb1d2fd@@gem_exec_nop@exe/gem_exec_nop.c.o'.
[664/1662] Compiling C object 'tests/59830eb@@kms_chamelium_color@exe/chamelium_kms_chamelium_color.c.o'.
[665/1662] Compiling C object 'tests/v3d/cad21b8@@v3d_submit_csd@exe/v3d_submit_csd.c.o'.
[666/1662] Compiling C object 'benchmarks/cb1d2fd@@gem_exec_fault@exe/gem_exec_fault.c.o'.
[667/1662] Compiling C object 'benchmarks/cb1d2fd@@gem_exec_reloc@exe/gem_exec_reloc.c.o'.
[668/1662] Compiling C object 'tests/vmwgfx/776e741@@vmw_execution_buffer@exe/vmw_execution_buffer.c.o'.
[669/1662] Compiling C object 'tests/59830eb@@kms_chamelium_edid@exe/chamelium_kms_chamelium_edid.c.o'.
[670/1662] Compiling C object 'tests/59830eb@@kms_chamelium_hpd@exe/chamelium_kms_chamelium_hpd.c.o'.
[671/1662] Compiling C object 'tests/vmwgfx/776e741@@vmw_surface_copy@exe/vmw_surface_copy.c.o'.
[672/1662] Compiling C object 'benchmarks/cb1d2fd@@gem_busy@exe/gem_busy.c.o'.
[673/1662] Compiling C object 'tests/59830eb@@xe_pat@exe/intel_xe_pat.c.o'.
[674/1662] Compiling C object 'benchmarks/cb1d2fd@@gem_exec_trace@exe/gem_exec_trace.c.o'.
[675/1662] Compiling C object 'tests/59830eb@@kms_psr2_sf@exe/intel_kms_psr2_sf.c.o'.
[676/1662] Compiling C object 'tests/v3d/cad21b8@@v3d_submit_cl@exe/v3d_submit_cl.c.o'.
[677/1662] Compiling C object 'benchmarks/cb1d2fd@@gem_userptr_benchmark@exe/gem_userptr_benchmark.c.o'.
[678/1662] Compiling C object 'benchmarks/cb1d2fd@@gem_blt@exe/gem_blt.c.o'.
[679/1662] Compiling C object 'tests/59830eb@@kms_chamelium_audio@exe/chamelium_kms_chamelium_audio.c.o'.
[680/1662] Compiling C object 'tests/59830eb@@kms_pm_rpm@exe/intel_kms_pm_rpm.c.o'.
[681/1662] Compiling C object 'tests/59830eb@@i915_query@exe/intel_i915_query.c.o'.
[682/1662] Compiling C object 'tests/59830eb@@xe_intel_bb@exe/intel_xe_intel_bb.c.o'.
[683/1662] Compiling C object 'benchmarks/cb1d2fd@@gem_syslatency@exe/gem_syslatency.c.o'.
[684/1662] Compiling C object 'benchmarks/cb1d2fd@@gem_latency@exe/gem_latency.c.o'.
[685/1662] Compiling C object 'tests/59830eb@@kms_chamelium_frames@exe/chamelium_kms_chamelium_frames.c.o'.
[686/1662] Compiling C object 'tests/59830eb@@gem_exec_balancer@exe/intel_gem_exec_balancer.c.o'.
[687/1662] Compiling C object 'tests/59830eb@@xe_exec_threads@exe/intel_xe_exec_threads.c.o'.
[688/1662] Compiling C object 'tests/59830eb@@gem_stress@exe/intel_gem_stress.c.o'.
[689/1662] Compiling C object 'tests/59830eb@@xe_vm@exe/intel_xe_vm.c.o'.
[690/1662] Generating i915-perf-equations with a custom command.
[691/1662] Compiling C object 'tests/59830eb@@gem_exec_fence@exe/intel_gem_exec_fence.c.o'.
[692/1662] Compiling C object 'tests/59830eb@@gem_concurrent_all@exe/intel_gem_concurrent_all.c.o'.
[693/1662] Compiling C object 'tests/59830eb@@perf_pmu@exe/intel_perf_pmu.c.o'.
[694/1662] Compiling C object 'tests/59830eb@@kms_frontbuffer_tracking@exe/intel_kms_frontbuffer_tracking.c.o'.
[695/1662] Compiling C object 'benchmarks/cb1d2fd@@gem_wsim@exe/gem_wsim.c.o'.
[696/1662] Compiling C object 'tests/59830eb@@perf@exe/intel_perf.c.o'.
[697/1662] Compiling C object 'tests/59830eb@@gem_exec_schedule@exe/intel_gem_exec_schedule.c.o'.
[698/1662] Linking target lib/libigt.so.0.
ninja: build stopped: subcommand failed.