* [igt-dev] [PATCH v4 01/14] drm-uapi/xe_drm: Align with new PMU interface
2023-09-28 11:05 [igt-dev] [PATCH v4 00/14] uAPI Alignment - take 1 v4 Francois Dugast
@ 2023-09-28 11:05 ` Francois Dugast
2023-09-28 11:33 ` Francois Dugast
2023-09-28 11:05 ` [igt-dev] [PATCH v4 02/14] tests/intel/xe_query: Add a test for querying engine cycles Francois Dugast
` (15 subsequent siblings)
16 siblings, 1 reply; 31+ messages in thread
From: Francois Dugast @ 2023-09-28 11:05 UTC (permalink / raw)
To: igt-dev; +Cc: Rodrigo Vivi
From: Rodrigo Vivi <rodrigo.vivi@intel.com>
Align with commit ("drm/xe/pmu: Enable PMU interface")
Cc: Francois Dugast <francois.dugast@intel.com>
Cc: Aravind Iddamsetty <aravind.iddamsetty@linux.intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
include/drm-uapi/xe_drm.h | 40 +++++++++++++++++++++++++++++++++++++++
1 file changed, 40 insertions(+)
diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index 804c02270..13cd6a73d 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -1053,6 +1053,46 @@ struct drm_xe_vm_madvise {
__u64 reserved[2];
};
+/**
+ * DOC: XE PMU event config IDs
+ *
+ * Check 'man perf_event_open' to use the ID's XE_PMU_XXXX listed in xe_drm.h
+ * in 'struct perf_event_attr' as part of perf_event_open syscall to read a
+ * particular event.
+ *
+ * For example to open the XE_PMU_INTERRUPTS(0):
+ *
+ * .. code-block:: C
+ *
+ * struct perf_event_attr attr;
+ * long long count;
+ * int cpu = 0;
+ * int fd;
+ *
+ * memset(&attr, 0, sizeof(struct perf_event_attr));
+ * attr.type = type; // eg: /sys/bus/event_source/devices/xe_0000_56_00.0/type
+ * attr.read_format = PERF_FORMAT_TOTAL_TIME_ENABLED;
+ * attr.use_clockid = 1;
+ * attr.clockid = CLOCK_MONOTONIC;
+ * attr.config = XE_PMU_INTERRUPTS(0);
+ *
+ * fd = syscall(__NR_perf_event_open, &attr, -1, cpu, -1, 0);
+ */
+
+/*
+ * Top bits of every counter are GT id.
+ */
+#define __XE_PMU_GT_SHIFT (56)
+
+#define ___XE_PMU_OTHER(gt, x) \
+ (((__u64)(x)) | ((__u64)(gt) << __XE_PMU_GT_SHIFT))
+
+#define XE_PMU_INTERRUPTS(gt) ___XE_PMU_OTHER(gt, 0)
+#define XE_PMU_RENDER_GROUP_BUSY(gt) ___XE_PMU_OTHER(gt, 1)
+#define XE_PMU_COPY_GROUP_BUSY(gt) ___XE_PMU_OTHER(gt, 2)
+#define XE_PMU_MEDIA_GROUP_BUSY(gt) ___XE_PMU_OTHER(gt, 3)
+#define XE_PMU_ANY_ENGINE_GROUP_BUSY(gt) ___XE_PMU_OTHER(gt, 4)
+
#if defined(__cplusplus)
}
#endif
--
2.34.1
^ permalink raw reply related [flat|nested] 31+ messages in thread

* Re: [igt-dev] [PATCH v4 01/14] drm-uapi/xe_drm: Align with new PMU interface
2023-09-28 11:05 ` [igt-dev] [PATCH v4 01/14] drm-uapi/xe_drm: Align with new PMU interface Francois Dugast
@ 2023-09-28 11:33 ` Francois Dugast
0 siblings, 0 replies; 31+ messages in thread
From: Francois Dugast @ 2023-09-28 11:33 UTC (permalink / raw)
To: igt-dev; +Cc: Rodrigo Vivi
On Thu, Sep 28, 2023 at 11:05:03AM +0000, Francois Dugast wrote:
> From: Rodrigo Vivi <rodrigo.vivi@intel.com>
>
> Align with commit ("drm/xe/pmu: Enable PMU interface")
>
> Cc: Francois Dugast <francois.dugast@intel.com>
> Cc: Aravind Iddamsetty <aravind.iddamsetty@linux.intel.com>
> Signed-off-by: Francois Dugast <francois.dugast@intel.com>
> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
It seems this one does not really belong to this series, since the uAPI commit
has already been merged. But as it is missing here anyway:
Reviewed-by: Francois Dugast <francois.dugast@intel.com>
> ---
> include/drm-uapi/xe_drm.h | 40 +++++++++++++++++++++++++++++++++++++++
> 1 file changed, 40 insertions(+)
>
> diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
> index 804c02270..13cd6a73d 100644
> --- a/include/drm-uapi/xe_drm.h
> +++ b/include/drm-uapi/xe_drm.h
> @@ -1053,6 +1053,46 @@ struct drm_xe_vm_madvise {
> __u64 reserved[2];
> };
>
> +/**
> + * DOC: XE PMU event config IDs
> + *
> + * Check 'man perf_event_open' to use the ID's XE_PMU_XXXX listed in xe_drm.h
> + * in 'struct perf_event_attr' as part of perf_event_open syscall to read a
> + * particular event.
> + *
> + * For example to open the XE_PMU_INTERRUPTS(0):
> + *
> + * .. code-block:: C
> + *
> + * struct perf_event_attr attr;
> + * long long count;
> + * int cpu = 0;
> + * int fd;
> + *
> + * memset(&attr, 0, sizeof(struct perf_event_attr));
> + * attr.type = type; // eg: /sys/bus/event_source/devices/xe_0000_56_00.0/type
> + * attr.read_format = PERF_FORMAT_TOTAL_TIME_ENABLED;
> + * attr.use_clockid = 1;
> + * attr.clockid = CLOCK_MONOTONIC;
> + * attr.config = XE_PMU_INTERRUPTS(0);
> + *
> + * fd = syscall(__NR_perf_event_open, &attr, -1, cpu, -1, 0);
> + */
> +
> +/*
> + * Top bits of every counter are GT id.
> + */
> +#define __XE_PMU_GT_SHIFT (56)
> +
> +#define ___XE_PMU_OTHER(gt, x) \
> + (((__u64)(x)) | ((__u64)(gt) << __XE_PMU_GT_SHIFT))
> +
> +#define XE_PMU_INTERRUPTS(gt) ___XE_PMU_OTHER(gt, 0)
> +#define XE_PMU_RENDER_GROUP_BUSY(gt) ___XE_PMU_OTHER(gt, 1)
> +#define XE_PMU_COPY_GROUP_BUSY(gt) ___XE_PMU_OTHER(gt, 2)
> +#define XE_PMU_MEDIA_GROUP_BUSY(gt) ___XE_PMU_OTHER(gt, 3)
> +#define XE_PMU_ANY_ENGINE_GROUP_BUSY(gt) ___XE_PMU_OTHER(gt, 4)
> +
> #if defined(__cplusplus)
> }
> #endif
> --
> 2.34.1
>
* [igt-dev] [PATCH v4 02/14] tests/intel/xe_query: Add a test for querying engine cycles
2023-09-28 11:05 [igt-dev] [PATCH v4 00/14] uAPI Alignment - take 1 v4 Francois Dugast
2023-09-28 11:05 ` [igt-dev] [PATCH v4 01/14] drm-uapi/xe_drm: Align with new PMU interface Francois Dugast
@ 2023-09-28 11:05 ` Francois Dugast
2023-09-28 14:33 ` Rodrigo Vivi
2023-09-28 11:05 ` [igt-dev] [PATCH v4 03/14] drm-uapi/xe_drm: Separate VM_BIND's operation and flag, align with latest uapi Francois Dugast
` (14 subsequent siblings)
16 siblings, 1 reply; 31+ messages in thread
From: Francois Dugast @ 2023-09-28 11:05 UTC (permalink / raw)
To: igt-dev; +Cc: Rodrigo Vivi
From: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
The DRM_XE_QUERY_ENGINE_CYCLES query provides a way for the user to obtain
CPU and GPU timestamps as close to each other as possible.
Add a test to query engine cycles and GPU/CPU time correlation as well as
validate the parameters.
Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
[Rodrigo rebased after s/cs/engine]
---
include/drm-uapi/xe_drm.h | 104 +++++++++++++++-----
tests/intel/xe_query.c | 195 ++++++++++++++++++++++++++++++++++++++
2 files changed, 275 insertions(+), 24 deletions(-)
diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index 13cd6a73d..8a702e6f4 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -128,6 +128,25 @@ struct xe_user_extension {
#define DRM_IOCTL_XE_WAIT_USER_FENCE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_WAIT_USER_FENCE, struct drm_xe_wait_user_fence)
#define DRM_IOCTL_XE_VM_MADVISE DRM_IOW(DRM_COMMAND_BASE + DRM_XE_VM_MADVISE, struct drm_xe_vm_madvise)
+/** struct drm_xe_engine_class_instance - instance of an engine class */
+struct drm_xe_engine_class_instance {
+#define DRM_XE_ENGINE_CLASS_RENDER 0
+#define DRM_XE_ENGINE_CLASS_COPY 1
+#define DRM_XE_ENGINE_CLASS_VIDEO_DECODE 2
+#define DRM_XE_ENGINE_CLASS_VIDEO_ENHANCE 3
+#define DRM_XE_ENGINE_CLASS_COMPUTE 4
+ /*
+ * Kernel only class (not actual hardware engine class). Used for
+ * creating ordered queues of VM bind operations.
+ */
+#define DRM_XE_ENGINE_CLASS_VM_BIND 5
+ __u16 engine_class;
+
+ __u16 engine_instance;
+ __u16 gt_id;
+ __u16 rsvd;
+};
+
/**
* enum drm_xe_memory_class - Supported memory classes.
*/
@@ -219,6 +238,60 @@ struct drm_xe_query_mem_region {
__u64 reserved[6];
};
+/**
+ * struct drm_xe_query_engine_cycles - correlate CPU and GPU timestamps
+ *
+ * If a query is made with a struct drm_xe_device_query where .query is equal to
+ * DRM_XE_DEVICE_QUERY_ENGINE_CYCLES, then the reply uses struct drm_xe_query_engine_cycles
+ * in .data. struct drm_xe_query_engine_cycles is allocated by the user and
+ * .data points to this allocated structure.
+ *
+ * The query returns the engine cycles and the frequency that can
+ * be used to calculate the engine timestamp. In addition the
+ * query returns a set of cpu timestamps that indicate when the command
+ * streamer cycle count was captured.
+ */
+struct drm_xe_query_engine_cycles {
+ /**
+ * @eci: This is input by the user and is the engine for which command
+ * streamer cycles is queried.
+ */
+ struct drm_xe_engine_class_instance eci;
+
+ /**
+ * @clockid: This is input by the user and is the reference clock id for
+ * CPU timestamp. For definition, see clock_gettime(2) and
+ * perf_event_open(2). Supported clock ids are CLOCK_MONOTONIC,
+ * CLOCK_MONOTONIC_RAW, CLOCK_REALTIME, CLOCK_BOOTTIME, CLOCK_TAI.
+ */
+ __s32 clockid;
+
+ /** @width: Width of the engine cycle counter in bits. */
+ __u32 width;
+
+ /**
+ * @engine_cycles: Engine cycles as read from its register
+ * at 0x358 offset.
+ */
+ __u64 engine_cycles;
+
+ /** @engine_frequency: Frequency of the engine cycles in Hz. */
+ __u64 engine_frequency;
+
+ /**
+ * @cpu_timestamp: CPU timestamp in ns. The timestamp is captured before
+ * reading the engine_cycles register using the reference clockid set by the
+ * user.
+ */
+ __u64 cpu_timestamp;
+
+ /**
+ * @cpu_delta: Time delta in ns captured around reading the lower dword
+ * of the engine_cycles register.
+ */
+ __u64 cpu_delta;
+};
+
/**
* struct drm_xe_query_mem_usage - describe memory regions and usage
*
@@ -385,12 +458,13 @@ struct drm_xe_device_query {
/** @extensions: Pointer to the first extension struct, if any */
__u64 extensions;
-#define DRM_XE_DEVICE_QUERY_ENGINES 0
-#define DRM_XE_DEVICE_QUERY_MEM_USAGE 1
-#define DRM_XE_DEVICE_QUERY_CONFIG 2
-#define DRM_XE_DEVICE_QUERY_GTS 3
-#define DRM_XE_DEVICE_QUERY_HWCONFIG 4
-#define DRM_XE_DEVICE_QUERY_GT_TOPOLOGY 5
+#define DRM_XE_DEVICE_QUERY_ENGINES 0
+#define DRM_XE_DEVICE_QUERY_MEM_USAGE 1
+#define DRM_XE_DEVICE_QUERY_CONFIG 2
+#define DRM_XE_DEVICE_QUERY_GTS 3
+#define DRM_XE_DEVICE_QUERY_HWCONFIG 4
+#define DRM_XE_DEVICE_QUERY_GT_TOPOLOGY 5
+#define DRM_XE_DEVICE_QUERY_ENGINE_CYCLES 6
/** @query: The type of data to query */
__u32 query;
@@ -732,24 +806,6 @@ struct drm_xe_exec_queue_set_property {
__u64 reserved[2];
};
-/** struct drm_xe_engine_class_instance - instance of an engine class */
-struct drm_xe_engine_class_instance {
-#define DRM_XE_ENGINE_CLASS_RENDER 0
-#define DRM_XE_ENGINE_CLASS_COPY 1
-#define DRM_XE_ENGINE_CLASS_VIDEO_DECODE 2
-#define DRM_XE_ENGINE_CLASS_VIDEO_ENHANCE 3
-#define DRM_XE_ENGINE_CLASS_COMPUTE 4
- /*
- * Kernel only class (not actual hardware engine class). Used for
- * creating ordered queues of VM bind operations.
- */
-#define DRM_XE_ENGINE_CLASS_VM_BIND 5
- __u16 engine_class;
-
- __u16 engine_instance;
- __u16 gt_id;
-};
-
struct drm_xe_exec_queue_create {
#define XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY 0
/** @extensions: Pointer to the first extension struct, if any */
diff --git a/tests/intel/xe_query.c b/tests/intel/xe_query.c
index 5966968d3..3e7460ff4 100644
--- a/tests/intel/xe_query.c
+++ b/tests/intel/xe_query.c
@@ -476,6 +476,195 @@ test_query_invalid_extension(int fd)
do_ioctl_err(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query, EINVAL);
}
+static bool
+query_engine_cycles_supported(int fd)
+{
+ struct drm_xe_device_query query = {
+ .extensions = 0,
+ .query = DRM_XE_DEVICE_QUERY_ENGINE_CYCLES,
+ .size = 0,
+ .data = 0,
+ };
+
+ return igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query) == 0;
+}
+
+static void
+query_engine_cycles(int fd, struct drm_xe_query_engine_cycles *resp)
+{
+ struct drm_xe_device_query query = {
+ .extensions = 0,
+ .query = DRM_XE_DEVICE_QUERY_ENGINE_CYCLES,
+ .size = sizeof(*resp),
+ .data = to_user_pointer(resp),
+ };
+
+ do_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query);
+ igt_assert(query.size);
+}
+
+static void
+__engine_cycles(int fd, struct drm_xe_engine_class_instance *hwe)
+{
+ struct drm_xe_query_engine_cycles ts1 = {};
+ struct drm_xe_query_engine_cycles ts2 = {};
+ uint64_t delta_cpu, delta_cs, delta_delta;
+ unsigned int exec_queue;
+ int i, usable = 0;
+ igt_spin_t *spin;
+ uint64_t ahnd;
+ uint32_t vm;
+ struct {
+ int32_t id;
+ const char *name;
+ } clock[] = {
+ { CLOCK_MONOTONIC, "CLOCK_MONOTONIC" },
+ { CLOCK_MONOTONIC_RAW, "CLOCK_MONOTONIC_RAW" },
+ { CLOCK_REALTIME, "CLOCK_REALTIME" },
+ { CLOCK_BOOTTIME, "CLOCK_BOOTTIME" },
+ { CLOCK_TAI, "CLOCK_TAI" },
+ };
+
+ igt_debug("engine[%u:%u]\n",
+ hwe->engine_class,
+ hwe->engine_instance);
+
+ vm = xe_vm_create(fd, 0, 0);
+ exec_queue = xe_exec_queue_create(fd, vm, hwe, 0);
+ ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_RELOC);
+ spin = igt_spin_new(fd, .ahnd = ahnd, .engine = exec_queue, .vm = vm);
+
+ /* Try a new clock every 10 iterations. */
+#define NUM_SNAPSHOTS 10
+ for (i = 0; i < NUM_SNAPSHOTS * ARRAY_SIZE(clock); i++) {
+ int index = i / NUM_SNAPSHOTS;
+
+ ts1.eci = *hwe;
+ ts1.clockid = clock[index].id;
+
+ ts2.eci = *hwe;
+ ts2.clockid = clock[index].id;
+
+ query_engine_cycles(fd, &ts1);
+ query_engine_cycles(fd, &ts2);
+
+ igt_debug("[1] cpu_ts before %llu, reg read time %llu\n",
+ ts1.cpu_timestamp,
+ ts1.cpu_delta);
+ igt_debug("[1] engine_ts %llu, freq %llu Hz, width %u\n",
+ ts1.engine_cycles, ts1.engine_frequency, ts1.width);
+
+ igt_debug("[2] cpu_ts before %llu, reg read time %llu\n",
+ ts2.cpu_timestamp,
+ ts2.cpu_delta);
+ igt_debug("[2] engine_ts %llu, freq %llu Hz, width %u\n",
+ ts2.engine_cycles, ts2.engine_frequency, ts2.width);
+
+ delta_cpu = ts2.cpu_timestamp - ts1.cpu_timestamp;
+
+ if (ts2.engine_cycles >= ts1.engine_cycles)
+ delta_cs = (ts2.engine_cycles - ts1.engine_cycles) *
+ NSEC_PER_SEC / ts1.engine_frequency;
+ else
+ delta_cs = (((1 << ts2.width) - ts2.engine_cycles) + ts1.engine_cycles) *
+ NSEC_PER_SEC / ts1.engine_frequency;
+
+ igt_debug("delta_cpu[%lu], delta_cs[%lu]\n",
+ delta_cpu, delta_cs);
+
+ delta_delta = delta_cpu > delta_cs ?
+ delta_cpu - delta_cs :
+ delta_cs - delta_cpu;
+ igt_debug("delta_delta %lu\n", delta_delta);
+
+ if (delta_delta < 5000)
+ usable++;
+
+ /*
+ * User needs few good snapshots of the timestamps to
+ * synchronize cpu time with cs time. Check if we have enough
+ * usable values before moving to the next clockid.
+ */
+ if (!((i + 1) % NUM_SNAPSHOTS)) {
+ igt_debug("clock %s\n", clock[index].name);
+ igt_debug("usable %d\n", usable);
+ igt_assert(usable > 2);
+ usable = 0;
+ }
+ }
+
+ igt_spin_free(fd, spin);
+ xe_exec_queue_destroy(fd, exec_queue);
+ xe_vm_destroy(fd, vm);
+ put_ahnd(ahnd);
+}
+
+/**
+ * SUBTEST: query-cs-cycles
+ * Description: Query CPU-GPU timestamp correlation
+ */
+static void test_query_engine_cycles(int fd)
+{
+ struct drm_xe_engine_class_instance *hwe;
+
+ igt_require(query_engine_cycles_supported(fd));
+
+ xe_for_each_hw_engine(fd, hwe) {
+ igt_assert(hwe);
+ __engine_cycles(fd, hwe);
+ }
+}
+
+/**
+ * SUBTEST: query-invalid-cs-cycles
+ * Description: Check query with invalid arguments returns expected error code.
+ */
+static void test_engine_cycles_invalid(int fd)
+{
+ struct drm_xe_engine_class_instance *hwe;
+ struct drm_xe_query_engine_cycles ts = {};
+ struct drm_xe_device_query query = {
+ .extensions = 0,
+ .query = DRM_XE_DEVICE_QUERY_ENGINE_CYCLES,
+ .size = sizeof(ts),
+ .data = to_user_pointer(&ts),
+ };
+
+ igt_require(query_engine_cycles_supported(fd));
+
+ /* get one engine */
+ xe_for_each_hw_engine(fd, hwe)
+ break;
+
+ /* sanity check engine selection is valid */
+ ts.eci = *hwe;
+ query_engine_cycles(fd, &ts);
+
+ /* bad instance */
+ ts.eci = *hwe;
+ ts.eci.engine_instance = 0xffff;
+ do_ioctl_err(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query, EINVAL);
+ ts.eci = *hwe;
+
+ /* bad class */
+ ts.eci.engine_class = 0xffff;
+ do_ioctl_err(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query, EINVAL);
+ ts.eci = *hwe;
+
+ /* bad gt */
+ ts.eci.gt_id = 0xffff;
+ do_ioctl_err(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query, EINVAL);
+ ts.eci = *hwe;
+
+ /* bad clockid */
+ ts.clockid = -1;
+ do_ioctl_err(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query, EINVAL);
+ ts.clockid = 0;
+
+ /* sanity check */
+ query_engine_cycles(fd, &ts);
+}
+
igt_main
{
int xe;
@@ -501,6 +690,12 @@ igt_main
igt_subtest("query-topology")
test_query_gt_topology(xe);
+ igt_subtest("query-cs-cycles")
+ test_query_engine_cycles(xe);
+
+ igt_subtest("query-invalid-cs-cycles")
+ test_engine_cycles_invalid(xe);
+
igt_subtest("query-invalid-query")
test_query_invalid_query(xe);
--
2.34.1
* Re: [igt-dev] [PATCH v4 02/14] tests/intel/xe_query: Add a test for querying engine cycles
2023-09-28 11:05 ` [igt-dev] [PATCH v4 02/14] tests/intel/xe_query: Add a test for querying engine cycles Francois Dugast
@ 2023-09-28 14:33 ` Rodrigo Vivi
0 siblings, 0 replies; 31+ messages in thread
From: Rodrigo Vivi @ 2023-09-28 14:33 UTC (permalink / raw)
To: Francois Dugast; +Cc: igt-dev
On Thu, Sep 28, 2023 at 11:05:04AM +0000, Francois Dugast wrote:
> From: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
>
> The DRM_XE_QUERY_ENGINE_CYCLES query provides a way for the user to obtain
> CPU and GPU timestamps as close to each other as possible.
>
> Add a test to query engine cycles and GPU/CPU time correlation as well as
> validate the parameters.
>
> Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
> Signed-off-by: Francois Dugast <francois.dugast@intel.com>
> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> [Rodrigo rebased after s/cs/engine]
While fixing the naming here and on the kernel side, I became confident that
this is the right test for that uAPI and that the patch is correct:
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> ---
> include/drm-uapi/xe_drm.h | 104 +++++++++++++++-----
> tests/intel/xe_query.c | 195 ++++++++++++++++++++++++++++++++++++++
> 2 files changed, 275 insertions(+), 24 deletions(-)
>
> diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
> index 13cd6a73d..8a702e6f4 100644
> --- a/include/drm-uapi/xe_drm.h
> +++ b/include/drm-uapi/xe_drm.h
> @@ -128,6 +128,25 @@ struct xe_user_extension {
> #define DRM_IOCTL_XE_WAIT_USER_FENCE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_WAIT_USER_FENCE, struct drm_xe_wait_user_fence)
> #define DRM_IOCTL_XE_VM_MADVISE DRM_IOW(DRM_COMMAND_BASE + DRM_XE_VM_MADVISE, struct drm_xe_vm_madvise)
>
> +/** struct drm_xe_engine_class_instance - instance of an engine class */
> +struct drm_xe_engine_class_instance {
> +#define DRM_XE_ENGINE_CLASS_RENDER 0
> +#define DRM_XE_ENGINE_CLASS_COPY 1
> +#define DRM_XE_ENGINE_CLASS_VIDEO_DECODE 2
> +#define DRM_XE_ENGINE_CLASS_VIDEO_ENHANCE 3
> +#define DRM_XE_ENGINE_CLASS_COMPUTE 4
> + /*
> + * Kernel only class (not actual hardware engine class). Used for
> + * creating ordered queues of VM bind operations.
> + */
> +#define DRM_XE_ENGINE_CLASS_VM_BIND 5
> + __u16 engine_class;
> +
> + __u16 engine_instance;
> + __u16 gt_id;
> + __u16 rsvd;
> +};
> +
> /**
> * enum drm_xe_memory_class - Supported memory classes.
> */
> @@ -219,6 +238,60 @@ struct drm_xe_query_mem_region {
> __u64 reserved[6];
> };
>
> +/**
> + * struct drm_xe_query_engine_cycles - correlate CPU and GPU timestamps
> + *
> + * If a query is made with a struct drm_xe_device_query where .query is equal to
> + * DRM_XE_DEVICE_QUERY_ENGINE_CYCLES, then the reply uses struct drm_xe_query_engine_cycles
> + * in .data. struct drm_xe_query_engine_cycles is allocated by the user and
> + * .data points to this allocated structure.
> + *
> + * The query returns the engine cycles and the frequency that can
> + * be used to calculate the engine timestamp. In addition the
> + * query returns a set of cpu timestamps that indicate when the command
> + * streamer cycle count was captured.
> + */
> +struct drm_xe_query_engine_cycles {
> + /**
> + * @eci: This is input by the user and is the engine for which command
> + * streamer cycles is queried.
> + */
> + struct drm_xe_engine_class_instance eci;
> +
> + /**
> + * @clockid: This is input by the user and is the reference clock id for
> + * CPU timestamp. For definition, see clock_gettime(2) and
> + * perf_event_open(2). Supported clock ids are CLOCK_MONOTONIC,
> + * CLOCK_MONOTONIC_RAW, CLOCK_REALTIME, CLOCK_BOOTTIME, CLOCK_TAI.
> + */
> + __s32 clockid;
> +
> + /** @width: Width of the engine cycle counter in bits. */
> + __u32 width;
> +
> + /**
> + * @engine_cycles: Engine cycles as read from its register
> + * at 0x358 offset.
> + */
> + __u64 engine_cycles;
> +
> + /** @engine_frequency: Frequency of the engine cycles in Hz. */
> + __u64 engine_frequency;
> +
> + /**
> + * @cpu_timestamp: CPU timestamp in ns. The timestamp is captured before
> + * reading the engine_cycles register using the reference clockid set by the
> + * user.
> + */
> + __u64 cpu_timestamp;
> +
> + /**
> + * @cpu_delta: Time delta in ns captured around reading the lower dword
> + * of the engine_cycles register.
> + */
> + __u64 cpu_delta;
> +};
> +
> /**
> * struct drm_xe_query_mem_usage - describe memory regions and usage
> *
> @@ -385,12 +458,13 @@ struct drm_xe_device_query {
> /** @extensions: Pointer to the first extension struct, if any */
> __u64 extensions;
>
> -#define DRM_XE_DEVICE_QUERY_ENGINES 0
> -#define DRM_XE_DEVICE_QUERY_MEM_USAGE 1
> -#define DRM_XE_DEVICE_QUERY_CONFIG 2
> -#define DRM_XE_DEVICE_QUERY_GTS 3
> -#define DRM_XE_DEVICE_QUERY_HWCONFIG 4
> -#define DRM_XE_DEVICE_QUERY_GT_TOPOLOGY 5
> +#define DRM_XE_DEVICE_QUERY_ENGINES 0
> +#define DRM_XE_DEVICE_QUERY_MEM_USAGE 1
> +#define DRM_XE_DEVICE_QUERY_CONFIG 2
> +#define DRM_XE_DEVICE_QUERY_GTS 3
> +#define DRM_XE_DEVICE_QUERY_HWCONFIG 4
> +#define DRM_XE_DEVICE_QUERY_GT_TOPOLOGY 5
> +#define DRM_XE_DEVICE_QUERY_ENGINE_CYCLES 6
> /** @query: The type of data to query */
> __u32 query;
>
> @@ -732,24 +806,6 @@ struct drm_xe_exec_queue_set_property {
> __u64 reserved[2];
> };
>
> -/** struct drm_xe_engine_class_instance - instance of an engine class */
> -struct drm_xe_engine_class_instance {
> -#define DRM_XE_ENGINE_CLASS_RENDER 0
> -#define DRM_XE_ENGINE_CLASS_COPY 1
> -#define DRM_XE_ENGINE_CLASS_VIDEO_DECODE 2
> -#define DRM_XE_ENGINE_CLASS_VIDEO_ENHANCE 3
> -#define DRM_XE_ENGINE_CLASS_COMPUTE 4
> - /*
> - * Kernel only class (not actual hardware engine class). Used for
> - * creating ordered queues of VM bind operations.
> - */
> -#define DRM_XE_ENGINE_CLASS_VM_BIND 5
> - __u16 engine_class;
> -
> - __u16 engine_instance;
> - __u16 gt_id;
> -};
> -
> struct drm_xe_exec_queue_create {
> #define XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY 0
> /** @extensions: Pointer to the first extension struct, if any */
> diff --git a/tests/intel/xe_query.c b/tests/intel/xe_query.c
> index 5966968d3..3e7460ff4 100644
> --- a/tests/intel/xe_query.c
> +++ b/tests/intel/xe_query.c
> @@ -476,6 +476,195 @@ test_query_invalid_extension(int fd)
> do_ioctl_err(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query, EINVAL);
> }
>
> +static bool
> +query_engine_cycles_supported(int fd)
> +{
> + struct drm_xe_device_query query = {
> + .extensions = 0,
> + .query = DRM_XE_DEVICE_QUERY_ENGINE_CYCLES,
> + .size = 0,
> + .data = 0,
> + };
> +
> + return igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query) == 0;
> +}
> +
> +static void
> +query_engine_cycles(int fd, struct drm_xe_query_engine_cycles *resp)
> +{
> + struct drm_xe_device_query query = {
> + .extensions = 0,
> + .query = DRM_XE_DEVICE_QUERY_ENGINE_CYCLES,
> + .size = sizeof(*resp),
> + .data = to_user_pointer(resp),
> + };
> +
> + do_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query);
> + igt_assert(query.size);
> +}
> +
> +static void
> +__engine_cycles(int fd, struct drm_xe_engine_class_instance *hwe)
> +{
> + struct drm_xe_query_engine_cycles ts1 = {};
> + struct drm_xe_query_engine_cycles ts2 = {};
> + uint64_t delta_cpu, delta_cs, delta_delta;
> + unsigned int exec_queue;
> + int i, usable = 0;
> + igt_spin_t *spin;
> + uint64_t ahnd;
> + uint32_t vm;
> + struct {
> + int32_t id;
> + const char *name;
> + } clock[] = {
> + { CLOCK_MONOTONIC, "CLOCK_MONOTONIC" },
> + { CLOCK_MONOTONIC_RAW, "CLOCK_MONOTONIC_RAW" },
> + { CLOCK_REALTIME, "CLOCK_REALTIME" },
> + { CLOCK_BOOTTIME, "CLOCK_BOOTTIME" },
> + { CLOCK_TAI, "CLOCK_TAI" },
> + };
> +
> + igt_debug("engine[%u:%u]\n",
> + hwe->engine_class,
> + hwe->engine_instance);
> +
> + vm = xe_vm_create(fd, 0, 0);
> + exec_queue = xe_exec_queue_create(fd, vm, hwe, 0);
> + ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_RELOC);
> + spin = igt_spin_new(fd, .ahnd = ahnd, .engine = exec_queue, .vm = vm);
> +
> + /* Try a new clock every 10 iterations. */
> +#define NUM_SNAPSHOTS 10
> + for (i = 0; i < NUM_SNAPSHOTS * ARRAY_SIZE(clock); i++) {
> + int index = i / NUM_SNAPSHOTS;
> +
> + ts1.eci = *hwe;
> + ts1.clockid = clock[index].id;
> +
> + ts2.eci = *hwe;
> + ts2.clockid = clock[index].id;
> +
> + query_engine_cycles(fd, &ts1);
> + query_engine_cycles(fd, &ts2);
> +
> + igt_debug("[1] cpu_ts before %llu, reg read time %llu\n",
> + ts1.cpu_timestamp,
> + ts1.cpu_delta);
> + igt_debug("[1] engine_ts %llu, freq %llu Hz, width %u\n",
> + ts1.engine_cycles, ts1.engine_frequency, ts1.width);
> +
> + igt_debug("[2] cpu_ts before %llu, reg read time %llu\n",
> + ts2.cpu_timestamp,
> + ts2.cpu_delta);
> + igt_debug("[2] engine_ts %llu, freq %llu Hz, width %u\n",
> + ts2.engine_cycles, ts2.engine_frequency, ts2.width);
> +
> + delta_cpu = ts2.cpu_timestamp - ts1.cpu_timestamp;
> +
> + if (ts2.engine_cycles >= ts1.engine_cycles)
> + delta_cs = (ts2.engine_cycles - ts1.engine_cycles) *
> + NSEC_PER_SEC / ts1.engine_frequency;
> + else
> + delta_cs = (((1 << ts2.width) - ts2.engine_cycles) + ts1.engine_cycles) *
> + NSEC_PER_SEC / ts1.engine_frequency;
> +
> + igt_debug("delta_cpu[%lu], delta_cs[%lu]\n",
> + delta_cpu, delta_cs);
> +
> + delta_delta = delta_cpu > delta_cs ?
> + delta_cpu - delta_cs :
> + delta_cs - delta_cpu;
> + igt_debug("delta_delta %lu\n", delta_delta);
> +
> + if (delta_delta < 5000)
> + usable++;
> +
> + /*
> + * User needs few good snapshots of the timestamps to
> + * synchronize cpu time with cs time. Check if we have enough
> + * usable values before moving to the next clockid.
> + */
> + if (!((i + 1) % NUM_SNAPSHOTS)) {
> + igt_debug("clock %s\n", clock[index].name);
> + igt_debug("usable %d\n", usable);
> + igt_assert(usable > 2);
> + usable = 0;
> + }
> + }
> +
> + igt_spin_free(fd, spin);
> + xe_exec_queue_destroy(fd, exec_queue);
> + xe_vm_destroy(fd, vm);
> + put_ahnd(ahnd);
> +}
> +
> +/**
> + * SUBTEST: query-cs-cycles
> + * Description: Query CPU-GPU timestamp correlation
> + */
> +static void test_query_engine_cycles(int fd)
> +{
> + struct drm_xe_engine_class_instance *hwe;
> +
> + igt_require(query_engine_cycles_supported(fd));
> +
> + xe_for_each_hw_engine(fd, hwe) {
> + igt_assert(hwe);
> + __engine_cycles(fd, hwe);
> + }
> +}
> +
> +/**
> + * SUBTEST: query-invalid-cs-cycles
> + * Description: Check query with invalid arguments returns expected error code.
> + */
> +static void test_engine_cycles_invalid(int fd)
> +{
> + struct drm_xe_engine_class_instance *hwe;
> + struct drm_xe_query_engine_cycles ts = {};
> + struct drm_xe_device_query query = {
> + .extensions = 0,
> + .query = DRM_XE_DEVICE_QUERY_ENGINE_CYCLES,
> + .size = sizeof(ts),
> + .data = to_user_pointer(&ts),
> + };
> +
> + igt_require(query_engine_cycles_supported(fd));
> +
> + /* get one engine */
> + xe_for_each_hw_engine(fd, hwe)
> + break;
> +
> + /* sanity check engine selection is valid */
> + ts.eci = *hwe;
> + query_engine_cycles(fd, &ts);
> +
> + /* bad instance */
> + ts.eci = *hwe;
> + ts.eci.engine_instance = 0xffff;
> + do_ioctl_err(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query, EINVAL);
> + ts.eci = *hwe;
> +
> + /* bad class */
> + ts.eci.engine_class = 0xffff;
> + do_ioctl_err(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query, EINVAL);
> + ts.eci = *hwe;
> +
> + /* bad gt */
> + ts.eci.gt_id = 0xffff;
> + do_ioctl_err(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query, EINVAL);
> + ts.eci = *hwe;
> +
> + /* bad clockid */
> + ts.clockid = -1;
> + do_ioctl_err(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query, EINVAL);
> + ts.clockid = 0;
> +
> + /* sanity check */
> + query_engine_cycles(fd, &ts);
> +}
> +
> igt_main
> {
> int xe;
> @@ -501,6 +690,12 @@ igt_main
> igt_subtest("query-topology")
> test_query_gt_topology(xe);
>
> + igt_subtest("query-cs-cycles")
> + test_query_engine_cycles(xe);
> +
> + igt_subtest("query-invalid-cs-cycles")
> + test_engine_cycles_invalid(xe);
> +
> igt_subtest("query-invalid-query")
> test_query_invalid_query(xe);
>
> --
> 2.34.1
>
* [igt-dev] [PATCH v4 03/14] drm-uapi/xe_drm: Separate VM_BIND's operation and flag, align with latest uapi
2023-09-28 11:05 [igt-dev] [PATCH v4 00/14] uAPI Alignment - take 1 v4 Francois Dugast
2023-09-28 11:05 ` [igt-dev] [PATCH v4 01/14] drm-uapi/xe_drm: Align with new PMU interface Francois Dugast
2023-09-28 11:05 ` [igt-dev] [PATCH v4 02/14] tests/intel/xe_query: Add a test for querying engine cycles Francois Dugast
@ 2023-09-28 11:05 ` Francois Dugast
2023-09-28 11:05 ` [igt-dev] [PATCH v4 04/14] drm-uapi/xe_drm: Remove MMIO ioctl and " Francois Dugast
` (13 subsequent siblings)
16 siblings, 0 replies; 31+ messages in thread
From: Francois Dugast @ 2023-09-28 11:05 UTC (permalink / raw)
To: igt-dev; +Cc: Rodrigo Vivi
Align with commit ("drm/xe/uapi: Separate VM_BIND's operation and flag")
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
include/drm-uapi/xe_drm.h | 14 ++++++++------
lib/intel_batchbuffer.c | 11 +++++++----
lib/xe/xe_ioctl.c | 31 ++++++++++++++++---------------
lib/xe/xe_ioctl.h | 6 +++---
lib/xe/xe_util.c | 9 ++++++---
tests/intel/xe_exec_basic.c | 2 +-
tests/intel/xe_exec_threads.c | 2 +-
tests/intel/xe_vm.c | 16 +++++++++-------
8 files changed, 51 insertions(+), 40 deletions(-)
diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index 8a702e6f4..807d8ac2c 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -660,8 +660,10 @@ struct drm_xe_vm_bind_op {
#define XE_VM_BIND_OP_RESTART 0x3
#define XE_VM_BIND_OP_UNMAP_ALL 0x4
#define XE_VM_BIND_OP_PREFETCH 0x5
+ /** @op: Bind operation to perform */
+ __u32 op;
-#define XE_VM_BIND_FLAG_READONLY (0x1 << 16)
+#define XE_VM_BIND_FLAG_READONLY (0x1 << 0)
/*
* A bind ops completions are always async, hence the support for out
* sync. This flag indicates the allocation of the memory for new page
@@ -686,12 +688,12 @@ struct drm_xe_vm_bind_op {
* configured in the VM and must be set if the VM is configured with
* DRM_XE_VM_CREATE_ASYNC_BIND_OPS and not in an error state.
*/
-#define XE_VM_BIND_FLAG_ASYNC (0x1 << 17)
+#define XE_VM_BIND_FLAG_ASYNC (0x1 << 1)
/*
* Valid on a faulting VM only, do the MAP operation immediately rather
* than deferring the MAP to the page fault handler.
*/
-#define XE_VM_BIND_FLAG_IMMEDIATE (0x1 << 18)
+#define XE_VM_BIND_FLAG_IMMEDIATE (0x1 << 2)
/*
* When the NULL flag is set, the page tables are setup with a special
* bit which indicates writes are dropped and all reads return zero. In
@@ -699,9 +701,9 @@ struct drm_xe_vm_bind_op {
* operations, the BO handle MBZ, and the BO offset MBZ. This flag is
* intended to implement VK sparse bindings.
*/
-#define XE_VM_BIND_FLAG_NULL (0x1 << 19)
- /** @op: Operation to perform (lower 16 bits) and flags (upper 16 bits) */
- __u32 op;
+#define XE_VM_BIND_FLAG_NULL (0x1 << 3)
+ /** @flags: Bind flags */
+ __u32 flags;
/** @mem_region: Memory region to prefetch VMA to, instance not a mask */
__u32 region;
diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
index e7b1b755f..6e668d28c 100644
--- a/lib/intel_batchbuffer.c
+++ b/lib/intel_batchbuffer.c
@@ -1281,7 +1281,8 @@ void intel_bb_destroy(struct intel_bb *ibb)
}
static struct drm_xe_vm_bind_op *xe_alloc_bind_ops(struct intel_bb *ibb,
- uint32_t op, uint32_t region)
+ uint32_t op, uint32_t flags,
+ uint32_t region)
{
struct drm_i915_gem_exec_object2 **objects = ibb->objects;
struct drm_xe_vm_bind_op *bind_ops, *ops;
@@ -1298,6 +1299,7 @@ static struct drm_xe_vm_bind_op *xe_alloc_bind_ops(struct intel_bb *ibb,
ops->obj = objects[i]->handle;
ops->op = op;
+ ops->flags = flags;
ops->obj_offset = 0;
ops->addr = objects[i]->offset;
ops->range = objects[i]->rsvd1;
@@ -1323,9 +1325,10 @@ static void __unbind_xe_objects(struct intel_bb *ibb)
if (ibb->num_objects > 1) {
struct drm_xe_vm_bind_op *bind_ops;
- uint32_t op = XE_VM_BIND_OP_UNMAP | XE_VM_BIND_FLAG_ASYNC;
+ uint32_t op = XE_VM_BIND_OP_UNMAP;
+ uint32_t flags = XE_VM_BIND_FLAG_ASYNC;
- bind_ops = xe_alloc_bind_ops(ibb, op, 0);
+ bind_ops = xe_alloc_bind_ops(ibb, op, flags, 0);
xe_vm_bind_array(ibb->fd, ibb->vm_id, 0, bind_ops,
ibb->num_objects, syncs, 2);
free(bind_ops);
@@ -2354,7 +2357,7 @@ __xe_bb_exec(struct intel_bb *ibb, uint64_t flags, bool sync)
syncs[0].handle = syncobj_create(ibb->fd, 0);
if (ibb->num_objects > 1) {
- bind_ops = xe_alloc_bind_ops(ibb, XE_VM_BIND_OP_MAP | XE_VM_BIND_FLAG_ASYNC, 0);
+ bind_ops = xe_alloc_bind_ops(ibb, XE_VM_BIND_OP_MAP, XE_VM_BIND_FLAG_ASYNC, 0);
xe_vm_bind_array(ibb->fd, ibb->vm_id, 0, bind_ops,
ibb->num_objects, syncs, 1);
free(bind_ops);
diff --git a/lib/xe/xe_ioctl.c b/lib/xe/xe_ioctl.c
index 730dcfd16..48cd185de 100644
--- a/lib/xe/xe_ioctl.c
+++ b/lib/xe/xe_ioctl.c
@@ -67,7 +67,7 @@ void xe_vm_unbind_all_async(int fd, uint32_t vm, uint32_t exec_queue,
uint32_t num_syncs)
{
__xe_vm_bind_assert(fd, vm, exec_queue, bo, 0, 0, 0,
- XE_VM_BIND_OP_UNMAP_ALL | XE_VM_BIND_FLAG_ASYNC,
+ XE_VM_BIND_OP_UNMAP_ALL, XE_VM_BIND_FLAG_ASYNC,
sync, num_syncs, 0, 0);
}
@@ -91,8 +91,8 @@ void xe_vm_bind_array(int fd, uint32_t vm, uint32_t exec_queue,
int __xe_vm_bind(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
uint64_t offset, uint64_t addr, uint64_t size, uint32_t op,
- struct drm_xe_sync *sync, uint32_t num_syncs, uint32_t region,
- uint64_t ext)
+ uint32_t flags, struct drm_xe_sync *sync, uint32_t num_syncs,
+ uint32_t region, uint64_t ext)
{
struct drm_xe_vm_bind bind = {
.extensions = ext,
@@ -103,6 +103,7 @@ int __xe_vm_bind(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
.bind.range = size,
.bind.addr = addr,
.bind.op = op,
+ .bind.flags = flags,
.bind.region = region,
.num_syncs = num_syncs,
.syncs = (uintptr_t)sync,
@@ -117,11 +118,11 @@ int __xe_vm_bind(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
void __xe_vm_bind_assert(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
uint64_t offset, uint64_t addr, uint64_t size,
- uint32_t op, struct drm_xe_sync *sync,
+ uint32_t op, uint32_t flags, struct drm_xe_sync *sync,
uint32_t num_syncs, uint32_t region, uint64_t ext)
{
igt_assert_eq(__xe_vm_bind(fd, vm, exec_queue, bo, offset, addr, size,
- op, sync, num_syncs, region, ext), 0);
+ op, flags, sync, num_syncs, region, ext), 0);
}
void xe_vm_bind(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
@@ -129,7 +130,7 @@ void xe_vm_bind(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
struct drm_xe_sync *sync, uint32_t num_syncs)
{
__xe_vm_bind_assert(fd, vm, 0, bo, offset, addr, size,
- XE_VM_BIND_OP_MAP, sync, num_syncs, 0, 0);
+ XE_VM_BIND_OP_MAP, 0, sync, num_syncs, 0, 0);
}
void xe_vm_unbind(int fd, uint32_t vm, uint64_t offset,
@@ -137,7 +138,7 @@ void xe_vm_unbind(int fd, uint32_t vm, uint64_t offset,
struct drm_xe_sync *sync, uint32_t num_syncs)
{
__xe_vm_bind_assert(fd, vm, 0, 0, offset, addr, size,
- XE_VM_BIND_OP_UNMAP, sync, num_syncs, 0, 0);
+ XE_VM_BIND_OP_UNMAP, 0, sync, num_syncs, 0, 0);
}
void xe_vm_prefetch_async(int fd, uint32_t vm, uint32_t exec_queue, uint64_t offset,
@@ -146,7 +147,7 @@ void xe_vm_prefetch_async(int fd, uint32_t vm, uint32_t exec_queue, uint64_t off
uint32_t region)
{
__xe_vm_bind_assert(fd, vm, exec_queue, 0, offset, addr, size,
- XE_VM_BIND_OP_PREFETCH | XE_VM_BIND_FLAG_ASYNC,
+ XE_VM_BIND_OP_PREFETCH, XE_VM_BIND_FLAG_ASYNC,
sync, num_syncs, region, 0);
}
@@ -155,7 +156,7 @@ void xe_vm_bind_async(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
struct drm_xe_sync *sync, uint32_t num_syncs)
{
__xe_vm_bind_assert(fd, vm, exec_queue, bo, offset, addr, size,
- XE_VM_BIND_OP_MAP | XE_VM_BIND_FLAG_ASYNC, sync,
+ XE_VM_BIND_OP_MAP, XE_VM_BIND_FLAG_ASYNC, sync,
num_syncs, 0, 0);
}
@@ -165,7 +166,7 @@ void xe_vm_bind_async_flags(int fd, uint32_t vm, uint32_t exec_queue, uint32_t b
uint32_t flags)
{
__xe_vm_bind_assert(fd, vm, exec_queue, bo, offset, addr, size,
- XE_VM_BIND_OP_MAP | XE_VM_BIND_FLAG_ASYNC | flags,
+ XE_VM_BIND_OP_MAP, XE_VM_BIND_FLAG_ASYNC | flags,
sync, num_syncs, 0, 0);
}
@@ -174,7 +175,7 @@ void xe_vm_bind_userptr_async(int fd, uint32_t vm, uint32_t exec_queue,
struct drm_xe_sync *sync, uint32_t num_syncs)
{
__xe_vm_bind_assert(fd, vm, exec_queue, 0, userptr, addr, size,
- XE_VM_BIND_OP_MAP_USERPTR | XE_VM_BIND_FLAG_ASYNC,
+ XE_VM_BIND_OP_MAP_USERPTR, XE_VM_BIND_FLAG_ASYNC,
sync, num_syncs, 0, 0);
}
@@ -184,7 +185,7 @@ void xe_vm_bind_userptr_async_flags(int fd, uint32_t vm, uint32_t exec_queue,
uint32_t num_syncs, uint32_t flags)
{
__xe_vm_bind_assert(fd, vm, exec_queue, 0, userptr, addr, size,
- XE_VM_BIND_OP_MAP_USERPTR | XE_VM_BIND_FLAG_ASYNC |
+ XE_VM_BIND_OP_MAP_USERPTR, XE_VM_BIND_FLAG_ASYNC |
flags, sync, num_syncs, 0, 0);
}
@@ -193,7 +194,7 @@ void xe_vm_unbind_async(int fd, uint32_t vm, uint32_t exec_queue,
struct drm_xe_sync *sync, uint32_t num_syncs)
{
__xe_vm_bind_assert(fd, vm, exec_queue, 0, offset, addr, size,
- XE_VM_BIND_OP_UNMAP | XE_VM_BIND_FLAG_ASYNC, sync,
+ XE_VM_BIND_OP_UNMAP, XE_VM_BIND_FLAG_ASYNC, sync,
num_syncs, 0, 0);
}
@@ -205,8 +206,8 @@ static void __xe_vm_bind_sync(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
.handle = syncobj_create(fd, 0),
};
- __xe_vm_bind_assert(fd, vm, 0, bo, offset, addr, size, op, &sync, 1, 0,
- 0);
+ __xe_vm_bind_assert(fd, vm, 0, bo, offset, addr, size, op, 0, &sync, 1,
+ 0, 0);
igt_assert(syncobj_wait(fd, &sync.handle, 1, INT64_MAX, 0, NULL));
syncobj_destroy(fd, sync.handle);
diff --git a/lib/xe/xe_ioctl.h b/lib/xe/xe_ioctl.h
index 6c281b3bf..f0e4109dc 100644
--- a/lib/xe/xe_ioctl.h
+++ b/lib/xe/xe_ioctl.h
@@ -19,11 +19,11 @@ uint32_t xe_cs_prefetch_size(int fd);
uint32_t xe_vm_create(int fd, uint32_t flags, uint64_t ext);
int __xe_vm_bind(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
uint64_t offset, uint64_t addr, uint64_t size, uint32_t op,
- struct drm_xe_sync *sync, uint32_t num_syncs, uint32_t region,
- uint64_t ext);
+ uint32_t flags, struct drm_xe_sync *sync, uint32_t num_syncs,
+ uint32_t region, uint64_t ext);
void __xe_vm_bind_assert(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
uint64_t offset, uint64_t addr, uint64_t size,
- uint32_t op, struct drm_xe_sync *sync,
+ uint32_t op, uint32_t flags, struct drm_xe_sync *sync,
uint32_t num_syncs, uint32_t region, uint64_t ext);
void xe_vm_bind(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
uint64_t addr, uint64_t size,
diff --git a/lib/xe/xe_util.c b/lib/xe/xe_util.c
index 2f9ffe2f1..5fa4d4610 100644
--- a/lib/xe/xe_util.c
+++ b/lib/xe/xe_util.c
@@ -116,7 +116,7 @@ static struct drm_xe_vm_bind_op *xe_alloc_bind_ops(struct igt_list_head *obj_lis
{
struct drm_xe_vm_bind_op *bind_ops, *ops;
struct xe_object *obj;
- uint32_t num_objects = 0, i = 0, op;
+ uint32_t num_objects = 0, i = 0, op, flags;
igt_list_for_each_entry(obj, obj_list, link)
num_objects++;
@@ -134,13 +134,16 @@ static struct drm_xe_vm_bind_op *xe_alloc_bind_ops(struct igt_list_head *obj_lis
ops = &bind_ops[i];
if (obj->bind_op == XE_OBJECT_BIND) {
- op = XE_VM_BIND_OP_MAP | XE_VM_BIND_FLAG_ASYNC;
+ op = XE_VM_BIND_OP_MAP;
+ flags = XE_VM_BIND_FLAG_ASYNC;
ops->obj = obj->handle;
} else {
- op = XE_VM_BIND_OP_UNMAP | XE_VM_BIND_FLAG_ASYNC;
+ op = XE_VM_BIND_OP_UNMAP;
+ flags = XE_VM_BIND_FLAG_ASYNC;
}
ops->op = op;
+ ops->flags = flags;
ops->obj_offset = 0;
ops->addr = obj->offset;
ops->range = obj->size;
diff --git a/tests/intel/xe_exec_basic.c b/tests/intel/xe_exec_basic.c
index a4414e052..e29398aaa 100644
--- a/tests/intel/xe_exec_basic.c
+++ b/tests/intel/xe_exec_basic.c
@@ -170,7 +170,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
if (flags & SPARSE)
__xe_vm_bind_assert(fd, vm[i], bind_exec_queues[i],
0, 0, sparse_addr[i], bo_size,
- XE_VM_BIND_OP_MAP |
+ XE_VM_BIND_OP_MAP,
XE_VM_BIND_FLAG_ASYNC |
XE_VM_BIND_FLAG_NULL, sync,
1, 0, 0);
diff --git a/tests/intel/xe_exec_threads.c b/tests/intel/xe_exec_threads.c
index 12e76874e..1f9af894f 100644
--- a/tests/intel/xe_exec_threads.c
+++ b/tests/intel/xe_exec_threads.c
@@ -609,7 +609,7 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
if (rebind_error_inject == i)
__xe_vm_bind_assert(fd, vm, bind_exec_queues[e],
0, 0, addr, bo_size,
- XE_VM_BIND_OP_UNMAP |
+ XE_VM_BIND_OP_UNMAP,
XE_VM_BIND_FLAG_ASYNC |
INJECT_ERROR, sync_all,
n_exec_queues, 0, 0);
diff --git a/tests/intel/xe_vm.c b/tests/intel/xe_vm.c
index 4952ea786..f96305851 100644
--- a/tests/intel/xe_vm.c
+++ b/tests/intel/xe_vm.c
@@ -316,7 +316,7 @@ static void userptr_invalid(int fd)
vm = xe_vm_create(fd, 0, 0);
munmap(data, size);
ret = __xe_vm_bind(fd, vm, 0, 0, to_user_pointer(data), 0x40000,
- size, XE_VM_BIND_OP_MAP_USERPTR, NULL, 0, 0, 0);
+ size, XE_VM_BIND_OP_MAP_USERPTR, 0, NULL, 0, 0, 0);
igt_assert(ret == -EFAULT);
xe_vm_destroy(fd, vm);
@@ -437,7 +437,7 @@ static void vm_async_ops_err(int fd, bool destroy)
if (i == N_BINDS / 8) /* Inject error on this bind */
__xe_vm_bind_assert(fd, vm, 0, bo, 0,
addr + i * bo_size * 2,
- bo_size, XE_VM_BIND_OP_MAP |
+ bo_size, XE_VM_BIND_OP_MAP,
XE_VM_BIND_FLAG_ASYNC |
INJECT_ERROR, &sync, 1, 0, 0);
else
@@ -451,7 +451,7 @@ static void vm_async_ops_err(int fd, bool destroy)
if (i == N_BINDS / 8)
__xe_vm_bind_assert(fd, vm, 0, 0, 0,
addr + i * bo_size * 2,
- bo_size, XE_VM_BIND_OP_UNMAP |
+ bo_size, XE_VM_BIND_OP_UNMAP,
XE_VM_BIND_FLAG_ASYNC |
INJECT_ERROR, &sync, 1, 0, 0);
else
@@ -465,7 +465,7 @@ static void vm_async_ops_err(int fd, bool destroy)
if (i == N_BINDS / 8)
__xe_vm_bind_assert(fd, vm, 0, bo, 0,
addr + i * bo_size * 2,
- bo_size, XE_VM_BIND_OP_MAP |
+ bo_size, XE_VM_BIND_OP_MAP,
XE_VM_BIND_FLAG_ASYNC |
INJECT_ERROR, &sync, 1, 0, 0);
else
@@ -479,7 +479,7 @@ static void vm_async_ops_err(int fd, bool destroy)
if (i == N_BINDS / 8)
__xe_vm_bind_assert(fd, vm, 0, 0, 0,
addr + i * bo_size * 2,
- bo_size, XE_VM_BIND_OP_UNMAP |
+ bo_size, XE_VM_BIND_OP_UNMAP,
XE_VM_BIND_FLAG_ASYNC |
INJECT_ERROR, &sync, 1, 0, 0);
else
@@ -928,7 +928,8 @@ test_bind_array(int fd, struct drm_xe_engine_class_instance *eci, int n_execs,
bind_ops[i].range = bo_size;
bind_ops[i].addr = addr;
bind_ops[i].tile_mask = 0x1 << eci->gt_id;
- bind_ops[i].op = XE_VM_BIND_OP_MAP | XE_VM_BIND_FLAG_ASYNC;
+ bind_ops[i].op = XE_VM_BIND_OP_MAP;
+ bind_ops[i].flags = XE_VM_BIND_FLAG_ASYNC;
bind_ops[i].region = 0;
bind_ops[i].reserved[0] = 0;
bind_ops[i].reserved[1] = 0;
@@ -972,7 +973,8 @@ test_bind_array(int fd, struct drm_xe_engine_class_instance *eci, int n_execs,
for (i = 0; i < n_execs; ++i) {
bind_ops[i].obj = 0;
- bind_ops[i].op = XE_VM_BIND_OP_UNMAP | XE_VM_BIND_FLAG_ASYNC;
+ bind_ops[i].op = XE_VM_BIND_OP_UNMAP;
+ bind_ops[i].flags = XE_VM_BIND_FLAG_ASYNC;
}
syncobj_reset(fd, &sync[0].handle, 1);
--
2.34.1
^ permalink raw reply related [flat|nested] 31+ messages in thread

* [igt-dev] [PATCH v4 04/14] drm-uapi/xe_drm: Remove MMIO ioctl and align with latest uapi
2023-09-28 11:05 [igt-dev] [PATCH v4 00/14] uAPI Alignment - take 1 v4 Francois Dugast
` (2 preceding siblings ...)
2023-09-28 11:05 ` [igt-dev] [PATCH v4 03/14] drm-uapi/xe_drm: Separate VM_BIND's operation and flag, align with latest uapi Francois Dugast
@ 2023-09-28 11:05 ` Francois Dugast
2023-09-28 14:36 ` Rodrigo Vivi
2023-09-28 11:05 ` [igt-dev] [PATCH v4 05/14] xe_exec_balancer: Enable parallel submission and compute mode Francois Dugast
` (12 subsequent siblings)
16 siblings, 1 reply; 31+ messages in thread
From: Francois Dugast @ 2023-09-28 11:05 UTC (permalink / raw)
To: igt-dev; +Cc: Rodrigo Vivi
Align with commit ("drm/xe/uapi: Remove MMIO ioctl")
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
include/drm-uapi/xe_drm.h | 31 +-
tests/intel-ci/xe-fast-feedback.testlist | 2 -
tests/intel/xe_mmio.c | 91 ------
tests/meson.build | 1 -
tools/meson.build | 1 -
tools/xe_reg.c | 366 -----------------------
6 files changed, 4 insertions(+), 488 deletions(-)
delete mode 100644 tests/intel/xe_mmio.c
delete mode 100644 tools/xe_reg.c
diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index 807d8ac2c..143918b9e 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -106,11 +106,10 @@ struct xe_user_extension {
#define DRM_XE_EXEC_QUEUE_CREATE 0x06
#define DRM_XE_EXEC_QUEUE_DESTROY 0x07
#define DRM_XE_EXEC 0x08
-#define DRM_XE_MMIO 0x09
-#define DRM_XE_EXEC_QUEUE_SET_PROPERTY 0x0a
-#define DRM_XE_WAIT_USER_FENCE 0x0b
-#define DRM_XE_VM_MADVISE 0x0c
-#define DRM_XE_EXEC_QUEUE_GET_PROPERTY 0x0d
+#define DRM_XE_EXEC_QUEUE_SET_PROPERTY 0x09
+#define DRM_XE_WAIT_USER_FENCE 0x0a
+#define DRM_XE_VM_MADVISE 0x0b
+#define DRM_XE_EXEC_QUEUE_GET_PROPERTY 0x0c
/* Must be kept compact -- no holes */
#define DRM_IOCTL_XE_DEVICE_QUERY DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_DEVICE_QUERY, struct drm_xe_device_query)
@@ -123,7 +122,6 @@ struct xe_user_extension {
#define DRM_IOCTL_XE_EXEC_QUEUE_GET_PROPERTY DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_EXEC_QUEUE_GET_PROPERTY, struct drm_xe_exec_queue_get_property)
#define DRM_IOCTL_XE_EXEC_QUEUE_DESTROY DRM_IOW(DRM_COMMAND_BASE + DRM_XE_EXEC_QUEUE_DESTROY, struct drm_xe_exec_queue_destroy)
#define DRM_IOCTL_XE_EXEC DRM_IOW(DRM_COMMAND_BASE + DRM_XE_EXEC, struct drm_xe_exec)
-#define DRM_IOCTL_XE_MMIO DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_MMIO, struct drm_xe_mmio)
#define DRM_IOCTL_XE_EXEC_QUEUE_SET_PROPERTY DRM_IOW(DRM_COMMAND_BASE + DRM_XE_EXEC_QUEUE_SET_PROPERTY, struct drm_xe_exec_queue_set_property)
#define DRM_IOCTL_XE_WAIT_USER_FENCE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_WAIT_USER_FENCE, struct drm_xe_wait_user_fence)
#define DRM_IOCTL_XE_VM_MADVISE DRM_IOW(DRM_COMMAND_BASE + DRM_XE_VM_MADVISE, struct drm_xe_vm_madvise)
@@ -936,27 +934,6 @@ struct drm_xe_exec {
__u64 reserved[2];
};
-struct drm_xe_mmio {
- /** @extensions: Pointer to the first extension struct, if any */
- __u64 extensions;
-
- __u32 addr;
-
-#define DRM_XE_MMIO_8BIT 0x0
-#define DRM_XE_MMIO_16BIT 0x1
-#define DRM_XE_MMIO_32BIT 0x2
-#define DRM_XE_MMIO_64BIT 0x3
-#define DRM_XE_MMIO_BITS_MASK 0x3
-#define DRM_XE_MMIO_READ 0x4
-#define DRM_XE_MMIO_WRITE 0x8
- __u32 flags;
-
- __u64 value;
-
- /** @reserved: Reserved */
- __u64 reserved[2];
-};
-
/**
* struct drm_xe_wait_user_fence - wait user fence
*
diff --git a/tests/intel-ci/xe-fast-feedback.testlist b/tests/intel-ci/xe-fast-feedback.testlist
index 610cc958c..a9fe43b08 100644
--- a/tests/intel-ci/xe-fast-feedback.testlist
+++ b/tests/intel-ci/xe-fast-feedback.testlist
@@ -141,8 +141,6 @@ igt@xe_mmap@bad-object
igt@xe_mmap@system
igt@xe_mmap@vram
igt@xe_mmap@vram-system
-igt@xe_mmio@mmio-timestamp
-igt@xe_mmio@mmio-invalid
igt@xe_pm_residency@gt-c6-on-idle
igt@xe_prime_self_import@basic-with_one_bo
igt@xe_prime_self_import@basic-with_fd_dup
diff --git a/tests/intel/xe_mmio.c b/tests/intel/xe_mmio.c
deleted file mode 100644
index 9ac544770..000000000
--- a/tests/intel/xe_mmio.c
+++ /dev/null
@@ -1,91 +0,0 @@
-// SPDX-License-Identifier: MIT
-/*
- * Copyright © 2023 Intel Corporation
- */
-
-/**
- * TEST: Test if mmio feature
- * Category: Software building block
- * Sub-category: mmio
- * Functionality: mmap
- */
-
-#include "igt.h"
-
-#include "xe_drm.h"
-#include "xe/xe_ioctl.h"
-#include "xe/xe_query.h"
-
-#include <string.h>
-
-#define RCS_TIMESTAMP 0x2358
-
-/**
- * SUBTEST: mmio-timestamp
- * Test category: functionality test
- * Description:
- * Try to run mmio ioctl with 32 and 64 bits and check it a timestamp
- * matches
- */
-
-static void test_xe_mmio_timestamp(int fd)
-{
- int ret;
- struct drm_xe_mmio mmio = {
- .addr = RCS_TIMESTAMP,
- .flags = DRM_XE_MMIO_READ | DRM_XE_MMIO_64BIT,
- };
- ret = igt_ioctl(fd, DRM_IOCTL_XE_MMIO, &mmio);
- if (!ret)
- igt_debug("RCS_TIMESTAMP 64b = 0x%llx\n", mmio.value);
- igt_assert(!ret);
- mmio.flags = DRM_XE_MMIO_READ | DRM_XE_MMIO_32BIT;
- mmio.value = 0;
- ret = igt_ioctl(fd, DRM_IOCTL_XE_MMIO, &mmio);
- if (!ret)
- igt_debug("RCS_TIMESTAMP 32b = 0x%llx\n", mmio.value);
- igt_assert(!ret);
-}
-
-
-/**
- * SUBTEST: mmio-invalid
- * Test category: negative test
- * Description: Try to run mmio ioctl with 8, 16 and 32 and 64 bits mmio
- */
-
-static void test_xe_mmio_invalid(int fd)
-{
- int ret;
- struct drm_xe_mmio mmio = {
- .addr = RCS_TIMESTAMP,
- .flags = DRM_XE_MMIO_READ | DRM_XE_MMIO_8BIT,
- };
- ret = igt_ioctl(fd, DRM_IOCTL_XE_MMIO, &mmio);
- igt_assert(ret);
- mmio.flags = DRM_XE_MMIO_READ | DRM_XE_MMIO_16BIT;
- mmio.value = 0;
- ret = igt_ioctl(fd, DRM_IOCTL_XE_MMIO, &mmio);
- igt_assert(ret);
- mmio.addr = RCS_TIMESTAMP;
- mmio.flags = DRM_XE_MMIO_READ | DRM_XE_MMIO_64BIT;
- mmio.value = 0x1;
- ret = igt_ioctl(fd, DRM_IOCTL_XE_MMIO, &mmio);
- igt_assert(ret);
-}
-
-igt_main
-{
- int fd;
-
- igt_fixture
- fd = drm_open_driver(DRIVER_XE);
-
- igt_subtest("mmio-timestamp")
- test_xe_mmio_timestamp(fd);
- igt_subtest("mmio-invalid")
- test_xe_mmio_invalid(fd);
-
- igt_fixture
- drm_close_driver(fd);
-}
diff --git a/tests/meson.build b/tests/meson.build
index 974cb433b..c3de337c8 100644
--- a/tests/meson.build
+++ b/tests/meson.build
@@ -293,7 +293,6 @@ intel_xe_progs = [
'xe_live_ktest',
'xe_media_fill',
'xe_mmap',
- 'xe_mmio',
'xe_module_load',
'xe_noexec_ping_pong',
'xe_pm',
diff --git a/tools/meson.build b/tools/meson.build
index 21e244c24..ac79d8b58 100644
--- a/tools/meson.build
+++ b/tools/meson.build
@@ -42,7 +42,6 @@ tools_progs = [
'intel_gvtg_test',
'dpcd_reg',
'lsgpu',
- 'xe_reg',
]
tool_deps = igt_deps
tool_deps += zlib
diff --git a/tools/xe_reg.c b/tools/xe_reg.c
deleted file mode 100644
index 1f7b384d3..000000000
--- a/tools/xe_reg.c
+++ /dev/null
@@ -1,366 +0,0 @@
-// SPDX-License-Identifier: MIT
-/*
- * Copyright © 2021 Intel Corporation
- */
-
-#include "igt.h"
-#include "igt_device_scan.h"
-
-#include "xe_drm.h"
-
-#include <stdlib.h>
-#include <stdio.h>
-#include <string.h>
-
-#define DECL_XE_MMIO_READ_FN(bits) \
-static inline uint##bits##_t \
-xe_mmio_read##bits(int fd, uint32_t reg) \
-{ \
- struct drm_xe_mmio mmio = { \
- .addr = reg, \
- .flags = DRM_XE_MMIO_READ | DRM_XE_MMIO_##bits##BIT, \
- }; \
-\
- igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_MMIO, &mmio), 0); \
-\
- return mmio.value;\
-}\
-static inline void \
-xe_mmio_write##bits(int fd, uint32_t reg, uint##bits##_t value) \
-{ \
- struct drm_xe_mmio mmio = { \
- .addr = reg, \
- .flags = DRM_XE_MMIO_WRITE | DRM_XE_MMIO_##bits##BIT, \
- .value = value, \
- }; \
-\
- igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_MMIO, &mmio), 0); \
-}
-
-DECL_XE_MMIO_READ_FN(8)
-DECL_XE_MMIO_READ_FN(16)
-DECL_XE_MMIO_READ_FN(32)
-DECL_XE_MMIO_READ_FN(64)
-
-static void print_help(FILE *fp)
-{
- fprintf(fp, "usage: xe_reg read REG1 [REG2]...\n");
- fprintf(fp, " xe_reg write REG VALUE\n");
-}
-
-enum ring {
- RING_UNKNOWN = -1,
- RING_RCS0,
- RING_BCS0,
-};
-
-static const struct ring_info {
- enum ring ring;
- const char *name;
- uint32_t mmio_base;
-} ring_info[] = {
- {RING_RCS0, "rcs0", 0x02000, },
- {RING_BCS0, "bcs0", 0x22000, },
-};
-
-static const struct ring_info *ring_info_for_name(const char *name)
-{
- int i;
-
- for (i = 0; i < ARRAY_SIZE(ring_info); i++)
- if (strcmp(name, ring_info[i].name) == 0)
- return &ring_info[i];
-
- return NULL;
-}
-
-struct reg_info {
- const char *name;
- bool is_ring;
- uint32_t addr_low;
- uint32_t addr_high;
-} reg_info[] = {
-#define REG32(name, addr) { #name, false, addr }
-#define REG64(name, low, high) { #name, false, low, high }
-#define RING_REG32(name, addr) { #name, true, addr }
-#define RING_REG64(name, low, high) { #name, true, low, high }
-
- RING_REG64(ACTHD, 0x74, 0x5c),
- RING_REG32(BB_ADDR_DIFF, 0x154),
- RING_REG64(BB_ADDR, 0x140, 0x168),
- RING_REG32(BB_PER_CTX_PTR, 0x2c0),
- RING_REG64(EXECLIST_STATUS, 0x234, 0x238),
- RING_REG64(EXECLIST_SQ0, 0x510, 0x514),
- RING_REG64(EXECLIST_SQ1, 0x518, 0x51c),
- RING_REG32(HWS_PGA, 0x80),
- RING_REG32(INDIRECT_CTX, 0x1C4),
- RING_REG32(INDIRECT_CTX_OFFSET, 0x1C8),
- RING_REG32(NOPID, 0x94),
- RING_REG64(PML4E, 0x270, 0x274),
- RING_REG32(RING_BUFFER_CTL, 0x3c),
- RING_REG32(RING_BUFFER_HEAD, 0x34),
- RING_REG32(RING_BUFFER_START, 0x38),
- RING_REG32(RING_BUFFER_TAIL, 0x30),
- RING_REG64(SBB_ADDR, 0x114, 0x11c),
- RING_REG32(SBB_STATE, 0x118),
-
-#undef REG32
-#undef REG64
-#undef RING_REG32
-#undef RING_REG64
-};
-
-static const struct reg_info *reg_info_for_name(const char *name)
-{
- int i;
-
- for (i = 0; i < ARRAY_SIZE(reg_info); i++)
- if (strcmp(name, reg_info[i].name) == 0)
- return &reg_info[i];
-
- return NULL;
-}
-
-static int print_reg_for_info(int xe, FILE *fp, const struct reg_info *reg,
- const struct ring_info *ring)
-{
- if (reg->is_ring) {
- if (!ring) {
- fprintf(stderr, "%s is a ring register but --ring "
- "not set\n", reg->name);
- return EXIT_FAILURE;
- }
-
- if (reg->addr_high) {
- uint32_t low = xe_mmio_read32(xe, reg->addr_low +
- ring->mmio_base);
- uint32_t high = xe_mmio_read32(xe, reg->addr_high +
- ring->mmio_base);
-
- fprintf(fp, "%s[%s] = 0x%08x %08x\n", reg->name,
- ring->name, high, low);
- } else {
- uint32_t value = xe_mmio_read32(xe, reg->addr_low +
- ring->mmio_base);
-
- fprintf(fp, "%s[%s] = 0x%08x\n", reg->name,
- ring->name, value);
- }
- } else {
- if (reg->addr_high) {
- uint32_t low = xe_mmio_read32(xe, reg->addr_low);
- uint32_t high = xe_mmio_read32(xe, reg->addr_high);
-
- fprintf(fp, "%s = 0x%08x %08x\n", reg->name, high, low);
- } else {
- uint32_t value = xe_mmio_read32(xe, reg->addr_low);
-
- fprintf(fp, "%s = 0x%08x\n", reg->name, value);
- }
- }
-
- return 0;
-}
-
-static void print_reg_for_addr(int xe, FILE *fp, uint32_t addr)
-{
- uint32_t value = xe_mmio_read32(xe, addr);
-
- fprintf(fp, "MMIO[0x%05x] = 0x%08x\n", addr, value);
-}
-
-enum opt {
- OPT_UNKNOWN = '?',
- OPT_END = -1,
- OPT_DEVICE,
- OPT_RING,
- OPT_ALL,
-};
-
-static int read_reg(int argc, char *argv[])
-{
- int xe, i, err, index;
- unsigned long reg_addr;
- char *endp = NULL;
- const struct ring_info *ring = NULL;
- enum opt opt;
- bool dump_all = false;
-
- static struct option options[] = {
- { "device", required_argument, NULL, OPT_DEVICE },
- { "ring", required_argument, NULL, OPT_RING },
- { "all", no_argument, NULL, OPT_ALL },
- };
-
- for (opt = 0; opt != OPT_END; ) {
- opt = getopt_long(argc, argv, "", options, &index);
-
- switch (opt) {
- case OPT_DEVICE:
- igt_device_filter_add(optarg);
- break;
- case OPT_RING:
- ring = ring_info_for_name(optarg);
- if (!ring) {
- fprintf(stderr, "invalid ring: %s\n", optarg);
- return EXIT_FAILURE;
- }
- break;
- case OPT_ALL:
- dump_all = true;
- break;
- case OPT_END:
- break;
- case OPT_UNKNOWN:
- return EXIT_FAILURE;
- }
- }
-
- argc -= optind;
- argv += optind;
-
- xe = drm_open_driver(DRIVER_XE);
- if (dump_all) {
- for (i = 0; i < ARRAY_SIZE(reg_info); i++) {
- if (reg_info[i].is_ring != !!ring)
- continue;
-
- print_reg_for_info(xe, stdout, &reg_info[i], ring);
- }
- } else {
- for (i = 0; i < argc; i++) {
- const struct reg_info *reg = reg_info_for_name(argv[i]);
- if (reg) {
- err = print_reg_for_info(xe, stdout, reg, ring);
- if (err)
- return err;
- continue;
- }
- reg_addr = strtoul(argv[i], &endp, 16);
- if (!reg_addr || reg_addr >= (4 << 20) || *endp) {
- fprintf(stderr, "invalid reg address '%s'\n",
- argv[i]);
- return EXIT_FAILURE;
- }
- print_reg_for_addr(xe, stdout, reg_addr);
- }
- }
-
- return 0;
-}
-
-static int write_reg_for_info(int xe, const struct reg_info *reg,
- const struct ring_info *ring,
- uint64_t value)
-{
- if (reg->is_ring) {
- if (!ring) {
- fprintf(stderr, "%s is a ring register but --ring "
- "not set\n", reg->name);
- return EXIT_FAILURE;
- }
-
- xe_mmio_write32(xe, reg->addr_low + ring->mmio_base, value);
- if (reg->addr_high) {
- xe_mmio_write32(xe, reg->addr_high + ring->mmio_base,
- value >> 32);
- }
- } else {
- xe_mmio_write32(xe, reg->addr_low, value);
- if (reg->addr_high)
- xe_mmio_write32(xe, reg->addr_high, value >> 32);
- }
-
- return 0;
-}
-
-static void write_reg_for_addr(int xe, uint32_t addr, uint32_t value)
-{
- xe_mmio_write32(xe, addr, value);
-}
-
-static int write_reg(int argc, char *argv[])
-{
- int xe, index;
- unsigned long reg_addr;
- char *endp = NULL;
- const struct ring_info *ring = NULL;
- enum opt opt;
- const char *reg_name;
- const struct reg_info *reg;
- uint64_t value;
-
- static struct option options[] = {
- { "device", required_argument, NULL, OPT_DEVICE },
- { "ring", required_argument, NULL, OPT_RING },
- };
-
- for (opt = 0; opt != OPT_END; ) {
- opt = getopt_long(argc, argv, "", options, &index);
-
- switch (opt) {
- case OPT_DEVICE:
- igt_device_filter_add(optarg);
- break;
- case OPT_RING:
- ring = ring_info_for_name(optarg);
- if (!ring) {
- fprintf(stderr, "invalid ring: %s\n", optarg);
- return EXIT_FAILURE;
- }
- break;
- case OPT_END:
- break;
- case OPT_UNKNOWN:
- return EXIT_FAILURE;
- default:
- break;
- }
- }
-
- argc -= optind;
- argv += optind;
-
- if (argc != 2) {
- print_help(stderr);
- return EXIT_FAILURE;
- }
-
- reg_name = argv[0];
- value = strtoull(argv[1], &endp, 0);
- if (*endp) {
- fprintf(stderr, "Invalid register value: %s\n", argv[1]);
- return EXIT_FAILURE;
- }
-
- xe = drm_open_driver(DRIVER_XE);
-
- reg = reg_info_for_name(reg_name);
- if (reg)
- return write_reg_for_info(xe, reg, ring, value);
-
- reg_addr = strtoul(reg_name, &endp, 16);
- if (!reg_addr || reg_addr >= (4 << 20) || *endp) {
- fprintf(stderr, "invalid reg address '%s'\n", reg_name);
- return EXIT_FAILURE;
- }
- write_reg_for_addr(xe, reg_addr, value);
-
- return 0;
-}
-
-int main(int argc, char *argv[])
-{
- if (argc < 2) {
- print_help(stderr);
- return EXIT_FAILURE;
- }
-
- if (strcmp(argv[1], "read") == 0)
- return read_reg(argc - 1, argv + 1);
- else if (strcmp(argv[1], "write") == 0)
- return write_reg(argc - 1, argv + 1);
-
- fprintf(stderr, "invalid sub-command: %s", argv[1]);
- return EXIT_FAILURE;
-}
--
2.34.1
^ permalink raw reply related [flat|nested] 31+ messages in thread

* Re: [igt-dev] [PATCH v4 04/14] drm-uapi/xe_drm: Remove MMIO ioctl and align with latest uapi
2023-09-28 11:05 ` [igt-dev] [PATCH v4 04/14] drm-uapi/xe_drm: Remove MMIO ioctl and " Francois Dugast
@ 2023-09-28 14:36 ` Rodrigo Vivi
0 siblings, 0 replies; 31+ messages in thread
From: Rodrigo Vivi @ 2023-09-28 14:36 UTC (permalink / raw)
To: Francois Dugast; +Cc: igt-dev
On Thu, Sep 28, 2023 at 11:05:06AM +0000, Francois Dugast wrote:
> Align with commit ("drm/xe/uapi: Remove MMIO ioctl")
It is probably worth mentioning that the tools/xe_reg this
patch removes is redundant, since tools/intel_reg also works
on Xe, and that the tile addition to tools/intel_reg should
happen regardless of Xe, as follow-up work.
With this msg,
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> [snip]
> - .addr = reg, \
> - .flags = DRM_XE_MMIO_READ | DRM_XE_MMIO_##bits##BIT, \
> - }; \
> -\
> - igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_MMIO, &mmio), 0); \
> -\
> - return mmio.value;\
> -}\
> -static inline void \
> -xe_mmio_write##bits(int fd, uint32_t reg, uint##bits##_t value) \
> -{ \
> - struct drm_xe_mmio mmio = { \
> - .addr = reg, \
> - .flags = DRM_XE_MMIO_WRITE | DRM_XE_MMIO_##bits##BIT, \
> - .value = value, \
> - }; \
> -\
> - igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_MMIO, &mmio), 0); \
> -}
> -
> -DECL_XE_MMIO_READ_FN(8)
> -DECL_XE_MMIO_READ_FN(16)
> -DECL_XE_MMIO_READ_FN(32)
> -DECL_XE_MMIO_READ_FN(64)
> -
> -static void print_help(FILE *fp)
> -{
> - fprintf(fp, "usage: xe_reg read REG1 [REG2]...\n");
> - fprintf(fp, " xe_reg write REG VALUE\n");
> -}
> -
> -enum ring {
> - RING_UNKNOWN = -1,
> - RING_RCS0,
> - RING_BCS0,
> -};
> -
> -static const struct ring_info {
> - enum ring ring;
> - const char *name;
> - uint32_t mmio_base;
> -} ring_info[] = {
> - {RING_RCS0, "rcs0", 0x02000, },
> - {RING_BCS0, "bcs0", 0x22000, },
> -};
> -
> -static const struct ring_info *ring_info_for_name(const char *name)
> -{
> - int i;
> -
> - for (i = 0; i < ARRAY_SIZE(ring_info); i++)
> - if (strcmp(name, ring_info[i].name) == 0)
> - return &ring_info[i];
> -
> - return NULL;
> -}
> -
> -struct reg_info {
> - const char *name;
> - bool is_ring;
> - uint32_t addr_low;
> - uint32_t addr_high;
> -} reg_info[] = {
> -#define REG32(name, addr) { #name, false, addr }
> -#define REG64(name, low, high) { #name, false, low, high }
> -#define RING_REG32(name, addr) { #name, true, addr }
> -#define RING_REG64(name, low, high) { #name, true, low, high }
> -
> - RING_REG64(ACTHD, 0x74, 0x5c),
> - RING_REG32(BB_ADDR_DIFF, 0x154),
> - RING_REG64(BB_ADDR, 0x140, 0x168),
> - RING_REG32(BB_PER_CTX_PTR, 0x2c0),
> - RING_REG64(EXECLIST_STATUS, 0x234, 0x238),
> - RING_REG64(EXECLIST_SQ0, 0x510, 0x514),
> - RING_REG64(EXECLIST_SQ1, 0x518, 0x51c),
> - RING_REG32(HWS_PGA, 0x80),
> - RING_REG32(INDIRECT_CTX, 0x1C4),
> - RING_REG32(INDIRECT_CTX_OFFSET, 0x1C8),
> - RING_REG32(NOPID, 0x94),
> - RING_REG64(PML4E, 0x270, 0x274),
> - RING_REG32(RING_BUFFER_CTL, 0x3c),
> - RING_REG32(RING_BUFFER_HEAD, 0x34),
> - RING_REG32(RING_BUFFER_START, 0x38),
> - RING_REG32(RING_BUFFER_TAIL, 0x30),
> - RING_REG64(SBB_ADDR, 0x114, 0x11c),
> - RING_REG32(SBB_STATE, 0x118),
> -
> -#undef REG32
> -#undef REG64
> -#undef RING_REG32
> -#undef RING_REG64
> -};
> -
> -static const struct reg_info *reg_info_for_name(const char *name)
> -{
> - int i;
> -
> - for (i = 0; i < ARRAY_SIZE(reg_info); i++)
> - if (strcmp(name, reg_info[i].name) == 0)
> - return &reg_info[i];
> -
> - return NULL;
> -}
> -
> -static int print_reg_for_info(int xe, FILE *fp, const struct reg_info *reg,
> - const struct ring_info *ring)
> -{
> - if (reg->is_ring) {
> - if (!ring) {
> - fprintf(stderr, "%s is a ring register but --ring "
> - "not set\n", reg->name);
> - return EXIT_FAILURE;
> - }
> -
> - if (reg->addr_high) {
> - uint32_t low = xe_mmio_read32(xe, reg->addr_low +
> - ring->mmio_base);
> - uint32_t high = xe_mmio_read32(xe, reg->addr_high +
> - ring->mmio_base);
> -
> - fprintf(fp, "%s[%s] = 0x%08x %08x\n", reg->name,
> - ring->name, high, low);
> - } else {
> - uint32_t value = xe_mmio_read32(xe, reg->addr_low +
> - ring->mmio_base);
> -
> - fprintf(fp, "%s[%s] = 0x%08x\n", reg->name,
> - ring->name, value);
> - }
> - } else {
> - if (reg->addr_high) {
> - uint32_t low = xe_mmio_read32(xe, reg->addr_low);
> - uint32_t high = xe_mmio_read32(xe, reg->addr_high);
> -
> - fprintf(fp, "%s = 0x%08x %08x\n", reg->name, high, low);
> - } else {
> - uint32_t value = xe_mmio_read32(xe, reg->addr_low);
> -
> - fprintf(fp, "%s = 0x%08x\n", reg->name, value);
> - }
> - }
> -
> - return 0;
> -}
> -
> -static void print_reg_for_addr(int xe, FILE *fp, uint32_t addr)
> -{
> - uint32_t value = xe_mmio_read32(xe, addr);
> -
> - fprintf(fp, "MMIO[0x%05x] = 0x%08x\n", addr, value);
> -}
> -
> -enum opt {
> - OPT_UNKNOWN = '?',
> - OPT_END = -1,
> - OPT_DEVICE,
> - OPT_RING,
> - OPT_ALL,
> -};
> -
> -static int read_reg(int argc, char *argv[])
> -{
> - int xe, i, err, index;
> - unsigned long reg_addr;
> - char *endp = NULL;
> - const struct ring_info *ring = NULL;
> - enum opt opt;
> - bool dump_all = false;
> -
> - static struct option options[] = {
> - { "device", required_argument, NULL, OPT_DEVICE },
> - { "ring", required_argument, NULL, OPT_RING },
> - { "all", no_argument, NULL, OPT_ALL },
> - };
> -
> - for (opt = 0; opt != OPT_END; ) {
> - opt = getopt_long(argc, argv, "", options, &index);
> -
> - switch (opt) {
> - case OPT_DEVICE:
> - igt_device_filter_add(optarg);
> - break;
> - case OPT_RING:
> - ring = ring_info_for_name(optarg);
> - if (!ring) {
> - fprintf(stderr, "invalid ring: %s\n", optarg);
> - return EXIT_FAILURE;
> - }
> - break;
> - case OPT_ALL:
> - dump_all = true;
> - break;
> - case OPT_END:
> - break;
> - case OPT_UNKNOWN:
> - return EXIT_FAILURE;
> - }
> - }
> -
> - argc -= optind;
> - argv += optind;
> -
> - xe = drm_open_driver(DRIVER_XE);
> - if (dump_all) {
> - for (i = 0; i < ARRAY_SIZE(reg_info); i++) {
> - if (reg_info[i].is_ring != !!ring)
> - continue;
> -
> - print_reg_for_info(xe, stdout, &reg_info[i], ring);
> - }
> - } else {
> - for (i = 0; i < argc; i++) {
> - const struct reg_info *reg = reg_info_for_name(argv[i]);
> - if (reg) {
> - err = print_reg_for_info(xe, stdout, reg, ring);
> - if (err)
> - return err;
> - continue;
> - }
> - reg_addr = strtoul(argv[i], &endp, 16);
> - if (!reg_addr || reg_addr >= (4 << 20) || *endp) {
> - fprintf(stderr, "invalid reg address '%s'\n",
> - argv[i]);
> - return EXIT_FAILURE;
> - }
> - print_reg_for_addr(xe, stdout, reg_addr);
> - }
> - }
> -
> - return 0;
> -}
> -
> -static int write_reg_for_info(int xe, const struct reg_info *reg,
> - const struct ring_info *ring,
> - uint64_t value)
> -{
> - if (reg->is_ring) {
> - if (!ring) {
> - fprintf(stderr, "%s is a ring register but --ring "
> - "not set\n", reg->name);
> - return EXIT_FAILURE;
> - }
> -
> - xe_mmio_write32(xe, reg->addr_low + ring->mmio_base, value);
> - if (reg->addr_high) {
> - xe_mmio_write32(xe, reg->addr_high + ring->mmio_base,
> - value >> 32);
> - }
> - } else {
> - xe_mmio_write32(xe, reg->addr_low, value);
> - if (reg->addr_high)
> - xe_mmio_write32(xe, reg->addr_high, value >> 32);
> - }
> -
> - return 0;
> -}
> -
> -static void write_reg_for_addr(int xe, uint32_t addr, uint32_t value)
> -{
> - xe_mmio_write32(xe, addr, value);
> -}
> -
> -static int write_reg(int argc, char *argv[])
> -{
> - int xe, index;
> - unsigned long reg_addr;
> - char *endp = NULL;
> - const struct ring_info *ring = NULL;
> - enum opt opt;
> - const char *reg_name;
> - const struct reg_info *reg;
> - uint64_t value;
> -
> - static struct option options[] = {
> - { "device", required_argument, NULL, OPT_DEVICE },
> - { "ring", required_argument, NULL, OPT_RING },
> - };
> -
> - for (opt = 0; opt != OPT_END; ) {
> - opt = getopt_long(argc, argv, "", options, &index);
> -
> - switch (opt) {
> - case OPT_DEVICE:
> - igt_device_filter_add(optarg);
> - break;
> - case OPT_RING:
> - ring = ring_info_for_name(optarg);
> - if (!ring) {
> - fprintf(stderr, "invalid ring: %s\n", optarg);
> - return EXIT_FAILURE;
> - }
> - break;
> - case OPT_END:
> - break;
> - case OPT_UNKNOWN:
> - return EXIT_FAILURE;
> - default:
> - break;
> - }
> - }
> -
> - argc -= optind;
> - argv += optind;
> -
> - if (argc != 2) {
> - print_help(stderr);
> - return EXIT_FAILURE;
> - }
> -
> - reg_name = argv[0];
> - value = strtoull(argv[1], &endp, 0);
> - if (*endp) {
> - fprintf(stderr, "Invalid register value: %s\n", argv[1]);
> - return EXIT_FAILURE;
> - }
> -
> - xe = drm_open_driver(DRIVER_XE);
> -
> - reg = reg_info_for_name(reg_name);
> - if (reg)
> - return write_reg_for_info(xe, reg, ring, value);
> -
> - reg_addr = strtoul(reg_name, &endp, 16);
> - if (!reg_addr || reg_addr >= (4 << 20) || *endp) {
> - fprintf(stderr, "invalid reg address '%s'\n", reg_name);
> - return EXIT_FAILURE;
> - }
> - write_reg_for_addr(xe, reg_addr, value);
> -
> - return 0;
> -}
> -
> -int main(int argc, char *argv[])
> -{
> - if (argc < 2) {
> - print_help(stderr);
> - return EXIT_FAILURE;
> - }
> -
> - if (strcmp(argv[1], "read") == 0)
> - return read_reg(argc - 1, argv + 1);
> - else if (strcmp(argv[1], "write") == 0)
> - return write_reg(argc - 1, argv + 1);
> -
> - fprintf(stderr, "invalid sub-command: %s", argv[1]);
> - return EXIT_FAILURE;
> -}
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 31+ messages in thread
* [igt-dev] [PATCH v4 05/14] xe_exec_balancer: Enable parallel submission and compute mode
2023-09-28 11:05 [igt-dev] [PATCH v4 00/14] uAPI Alignment - take 1 v4 Francois Dugast
` (3 preceding siblings ...)
2023-09-28 11:05 ` [igt-dev] [PATCH v4 04/14] drm-uapi/xe_drm: Remove MMIO ioctl and " Francois Dugast
@ 2023-09-28 11:05 ` Francois Dugast
2023-09-29 16:27 ` Souza, Jose
2023-09-28 11:05 ` [igt-dev] [PATCH v4 06/14] xe_exec_threads: Use DRM_XE_VM_CREATE_COMPUTE_MODE when creating a compute VM Francois Dugast
` (11 subsequent siblings)
16 siblings, 1 reply; 31+ messages in thread
From: Francois Dugast @ 2023-09-28 11:05 UTC (permalink / raw)
To: igt-dev; +Cc: Rodrigo Vivi
From: Matthew Brost <matthew.brost@intel.com>
This is now supported. Test it.
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
tests/intel/xe_exec_balancer.c | 21 +++++++++++++++------
1 file changed, 15 insertions(+), 6 deletions(-)
diff --git a/tests/intel/xe_exec_balancer.c b/tests/intel/xe_exec_balancer.c
index 0314b4cd2..a4a438db7 100644
--- a/tests/intel/xe_exec_balancer.c
+++ b/tests/intel/xe_exec_balancer.c
@@ -383,6 +383,12 @@ test_exec(int fd, int gt, int class, int n_exec_queues, int n_execs,
* @virtual-userptr-rebind: virtual userptr rebind
* @virtual-userptr-invalidate: virtual userptr invalidate
* @virtual-userptr-invalidate-race: virtual userptr invalidate racy
+ * @parallel-basic: parallel basic
+ * @parallel-userptr: parallel userptr
+ * @parallel-rebind: parallel rebind
+ * @parallel-userptr-rebind: parallel userptr rebind
+ * @parallel-userptr-invalidate: parallel userptr invalidate
+ * @parallel-userptr-invalidate-race: parallel userptr invalidate racy
*/
static void
@@ -460,8 +466,8 @@ test_cm(int fd, int gt, int class, int n_exec_queues, int n_execs,
};
struct drm_xe_exec_queue_create create = {
.vm_id = vm,
- .width = 1,
- .num_placements = num_placements,
+ .width = flags & PARALLEL ? num_placements : 1,
+ .num_placements = flags & PARALLEL ? 1 : num_placements,
.instances = to_user_pointer(eci),
.extensions = to_user_pointer(&ext),
};
@@ -470,6 +476,7 @@ test_cm(int fd, int gt, int class, int n_exec_queues, int n_execs,
&create), 0);
exec_queues[i] = create.exec_queue_id;
}
+ exec.num_batch_buffer = flags & PARALLEL ? num_placements : 1;
sync[0].addr = to_user_pointer(&data[0].vm_sync);
if (bo)
@@ -487,8 +494,12 @@ test_cm(int fd, int gt, int class, int n_exec_queues, int n_execs,
uint64_t batch_addr = addr + batch_offset;
uint64_t sdi_offset = (char *)&data[i].data - (char *)data;
uint64_t sdi_addr = addr + sdi_offset;
+ uint64_t batches[MAX_INSTANCE];
int e = i % n_exec_queues;
+ for (j = 0; j < num_placements && flags & PARALLEL; ++j)
+ batches[j] = batch_addr;
+
b = 0;
data[i].batch[b++] = MI_STORE_DWORD_IMM_GEN4;
data[i].batch[b++] = sdi_addr;
@@ -500,7 +511,8 @@ test_cm(int fd, int gt, int class, int n_exec_queues, int n_execs,
sync[0].addr = addr + (char *)&data[i].exec_sync - (char *)data;
exec.exec_queue_id = exec_queues[e];
- exec.address = batch_addr;
+ exec.address = flags & PARALLEL ?
+ to_user_pointer(batches) : batch_addr;
xe_exec(fd, &exec);
if (flags & REBIND && i + 1 != n_execs) {
@@ -661,9 +673,6 @@ igt_main
test_exec(fd, gt, class, 1, 0,
s->flags);
- if (s->flags & PARALLEL)
- continue;
-
igt_subtest_f("once-cm-%s", s->name)
xe_for_each_gt(fd, gt)
xe_for_each_hw_engine_class(class)
--
2.34.1
* Re: [igt-dev] [PATCH v4 05/14] xe_exec_balancer: Enable parallel submission and compute mode
2023-09-28 11:05 ` [igt-dev] [PATCH v4 05/14] xe_exec_balancer: Enable parallel submission and compute mode Francois Dugast
@ 2023-09-29 16:27 ` Souza, Jose
0 siblings, 0 replies; 31+ messages in thread
From: Souza, Jose @ 2023-09-29 16:27 UTC (permalink / raw)
To: igt-dev@lists.freedesktop.org, Dugast, Francois; +Cc: Vivi, Rodrigo
On Thu, 2023-09-28 at 11:05 +0000, Francois Dugast wrote:
> From: Matthew Brost <matthew.brost@intel.com>
>
> This is now supported. Test it.
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
>
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> ---
> tests/intel/xe_exec_balancer.c | 21 +++++++++++++++------
> 1 file changed, 15 insertions(+), 6 deletions(-)
>
> diff --git a/tests/intel/xe_exec_balancer.c b/tests/intel/xe_exec_balancer.c
> index 0314b4cd2..a4a438db7 100644
> --- a/tests/intel/xe_exec_balancer.c
> +++ b/tests/intel/xe_exec_balancer.c
> @@ -383,6 +383,12 @@ test_exec(int fd, int gt, int class, int n_exec_queues, int n_execs,
> * @virtual-userptr-rebind: virtual userptr rebind
> * @virtual-userptr-invalidate: virtual userptr invalidate
> * @virtual-userptr-invalidate-race: virtual userptr invalidate racy
> + * @parallel-basic: parallel basic
> + * @parallel-userptr: parallel userptr
> + * @parallel-rebind: parallel rebind
> + * @parallel-userptr-rebind: parallel userptr rebind
> + * @parallel-userptr-invalidate: parallel userptr invalidate
> + * @parallel-userptr-invalidate-race: parallel userptr invalidate racy
> */
>
> static void
> @@ -460,8 +466,8 @@ test_cm(int fd, int gt, int class, int n_exec_queues, int n_execs,
> };
> struct drm_xe_exec_queue_create create = {
> .vm_id = vm,
> - .width = 1,
> - .num_placements = num_placements,
> + .width = flags & PARALLEL ? num_placements : 1,
> + .num_placements = flags & PARALLEL ? 1 : num_placements,
> .instances = to_user_pointer(eci),
> .extensions = to_user_pointer(&ext),
> };
> @@ -470,6 +476,7 @@ test_cm(int fd, int gt, int class, int n_exec_queues, int n_execs,
> &create), 0);
> exec_queues[i] = create.exec_queue_id;
> }
> + exec.num_batch_buffer = flags & PARALLEL ? num_placements : 1;
>
> sync[0].addr = to_user_pointer(&data[0].vm_sync);
> if (bo)
> @@ -487,8 +494,12 @@ test_cm(int fd, int gt, int class, int n_exec_queues, int n_execs,
> uint64_t batch_addr = addr + batch_offset;
> uint64_t sdi_offset = (char *)&data[i].data - (char *)data;
> uint64_t sdi_addr = addr + sdi_offset;
> + uint64_t batches[MAX_INSTANCE];
> int e = i % n_exec_queues;
>
> + for (j = 0; j < num_placements && flags & PARALLEL; ++j)
> + batches[j] = batch_addr;
> +
> b = 0;
> data[i].batch[b++] = MI_STORE_DWORD_IMM_GEN4;
> data[i].batch[b++] = sdi_addr;
> @@ -500,7 +511,8 @@ test_cm(int fd, int gt, int class, int n_exec_queues, int n_execs,
> sync[0].addr = addr + (char *)&data[i].exec_sync - (char *)data;
>
> exec.exec_queue_id = exec_queues[e];
> - exec.address = batch_addr;
> + exec.address = flags & PARALLEL ?
> + to_user_pointer(batches) : batch_addr;
> xe_exec(fd, &exec);
>
> if (flags & REBIND && i + 1 != n_execs) {
> @@ -661,9 +673,6 @@ igt_main
> test_exec(fd, gt, class, 1, 0,
> s->flags);
>
> - if (s->flags & PARALLEL)
> - continue;
> -
> igt_subtest_f("once-cm-%s", s->name)
> xe_for_each_gt(fd, gt)
> xe_for_each_hw_engine_class(class)
* [igt-dev] [PATCH v4 06/14] xe_exec_threads: Use DRM_XE_VM_CREATE_COMPUTE_MODE when creating a compute VM
2023-09-28 11:05 [igt-dev] [PATCH v4 00/14] uAPI Alignment - take 1 v4 Francois Dugast
` (4 preceding siblings ...)
2023-09-28 11:05 ` [igt-dev] [PATCH v4 05/14] xe_exec_balancer: Enable parallel submission and compute mode Francois Dugast
@ 2023-09-28 11:05 ` Francois Dugast
2023-09-28 11:05 ` [igt-dev] [PATCH v4 07/14] xe: Update uAPI and remove XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE Francois Dugast
` (10 subsequent siblings)
16 siblings, 0 replies; 31+ messages in thread
From: Francois Dugast @ 2023-09-28 11:05 UTC (permalink / raw)
To: igt-dev; +Cc: Rodrigo Vivi
From: Matthew Brost <matthew.brost@intel.com>
XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE was used when creating a compute VM.
This just happened to work because it has the same value as
DRM_XE_VM_CREATE_COMPUTE_MODE. Fix this and use the correct flag,
DRM_XE_VM_CREATE_COMPUTE_MODE.
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
tests/intel/xe_exec_threads.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/tests/intel/xe_exec_threads.c b/tests/intel/xe_exec_threads.c
index 1f9af894f..d19708f80 100644
--- a/tests/intel/xe_exec_threads.c
+++ b/tests/intel/xe_exec_threads.c
@@ -286,7 +286,7 @@ test_compute_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
if (!vm) {
vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
- XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE, 0);
+ DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
owns_vm = true;
}
@@ -1076,7 +1076,7 @@ static void threads(int fd, int flags)
to_user_pointer(&ext));
vm_compute_mode = xe_vm_create(fd,
DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
- XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE,
+ DRM_XE_VM_CREATE_COMPUTE_MODE,
0);
vm_err_thread.capture = &capture;
--
2.34.1
2023-09-28 11:05 [igt-dev] [PATCH v4 00/14] uAPI Alignment - take 1 v4 Francois Dugast
` (5 preceding siblings ...)
2023-09-28 11:05 ` [igt-dev] [PATCH v4 06/14] xe_exec_threads: Use DRM_XE_VM_CREATE_COMPUTE_MODE when creating a compute VM Francois Dugast
@ 2023-09-28 11:05 ` Francois Dugast
2023-09-28 11:05 ` [igt-dev] [PATCH v4 08/14] drm-uapi/xe: Use common drm_xe_ext_set_property extension Francois Dugast
` (9 subsequent siblings)
16 siblings, 0 replies; 31+ messages in thread
From: Francois Dugast @ 2023-09-28 11:05 UTC (permalink / raw)
To: igt-dev; +Cc: Rodrigo Vivi
From: Matthew Brost <matthew.brost@intel.com>
XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE has been removed from the uAPI,
so remove all references to it in the Xe tests.
Align with commits
("drm/xe: Remove XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE from uAPI") and
("drm/xe: Deprecate XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE implementation")
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
[Rodrigo updated header with built version from make header_install]
[Rodrigo added the commit subjects of the kernel uapi changes]
---
include/drm-uapi/xe_drm.h | 19 ++++++-------------
tests/intel/xe_evict.c | 14 +++-----------
tests/intel/xe_exec_balancer.c | 8 +-------
tests/intel/xe_exec_compute_mode.c | 20 ++------------------
tests/intel/xe_exec_reset.c | 10 ++--------
tests/intel/xe_exec_threads.c | 13 ++-----------
tests/intel/xe_noexec_ping_pong.c | 10 +---------
7 files changed, 17 insertions(+), 77 deletions(-)
diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index 143918b9e..734af1b62 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -781,21 +781,14 @@ struct drm_xe_exec_queue_set_property {
/** @exec_queue_id: Exec queue ID */
__u32 exec_queue_id;
-#define XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY 0
+#define XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY 0
#define XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE 1
#define XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT 2
- /*
- * Long running or ULLS engine mode. DMA fences not allowed in this
- * mode. Must match the value of DRM_XE_VM_CREATE_COMPUTE_MODE, serves
- * as a sanity check the UMD knows what it is doing. Can only be set at
- * engine create time.
- */
-#define XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE 3
-#define XE_EXEC_QUEUE_SET_PROPERTY_PERSISTENCE 4
-#define XE_EXEC_QUEUE_SET_PROPERTY_JOB_TIMEOUT 5
-#define XE_EXEC_QUEUE_SET_PROPERTY_ACC_TRIGGER 6
-#define XE_EXEC_QUEUE_SET_PROPERTY_ACC_NOTIFY 7
-#define XE_EXEC_QUEUE_SET_PROPERTY_ACC_GRANULARITY 8
+#define XE_EXEC_QUEUE_SET_PROPERTY_PERSISTENCE 3
+#define XE_EXEC_QUEUE_SET_PROPERTY_JOB_TIMEOUT 4
+#define XE_EXEC_QUEUE_SET_PROPERTY_ACC_TRIGGER 5
+#define XE_EXEC_QUEUE_SET_PROPERTY_ACC_NOTIFY 6
+#define XE_EXEC_QUEUE_SET_PROPERTY_ACC_GRANULARITY 7
/** @property: property to set */
__u32 property;
diff --git a/tests/intel/xe_evict.c b/tests/intel/xe_evict.c
index 5b64e56b4..5d8981f8d 100644
--- a/tests/intel/xe_evict.c
+++ b/tests/intel/xe_evict.c
@@ -252,19 +252,11 @@ test_evict_cm(int fd, struct drm_xe_engine_class_instance *eci,
}
for (i = 0; i < n_exec_queues; i++) {
- struct drm_xe_ext_exec_queue_set_property ext = {
- .base.next_extension = 0,
- .base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
- .property = XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE,
- .value = 1,
- };
-
if (flags & MULTI_VM)
- exec_queues[i] = xe_exec_queue_create(fd, i & 1 ? vm2 : vm, eci,
- to_user_pointer(&ext));
+ exec_queues[i] = xe_exec_queue_create(fd, i & 1 ? vm2 :
+ vm, eci, 0);
else
- exec_queues[i] = xe_exec_queue_create(fd, vm, eci,
- to_user_pointer(&ext));
+ exec_queues[i] = xe_exec_queue_create(fd, vm, eci, 0);
}
for (i = 0; i < n_execs; i++) {
diff --git a/tests/intel/xe_exec_balancer.c b/tests/intel/xe_exec_balancer.c
index a4a438db7..f4f5440f4 100644
--- a/tests/intel/xe_exec_balancer.c
+++ b/tests/intel/xe_exec_balancer.c
@@ -458,18 +458,12 @@ test_cm(int fd, int gt, int class, int n_exec_queues, int n_execs,
memset(data, 0, bo_size);
for (i = 0; i < n_exec_queues; i++) {
- struct drm_xe_ext_exec_queue_set_property ext = {
- .base.next_extension = 0,
- .base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
- .property = XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE,
- .value = 1,
- };
struct drm_xe_exec_queue_create create = {
.vm_id = vm,
.width = flags & PARALLEL ? num_placements : 1,
.num_placements = flags & PARALLEL ? 1 : num_placements,
.instances = to_user_pointer(eci),
- .extensions = to_user_pointer(&ext),
+ .extensions = 0,
};
igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_EXEC_QUEUE_CREATE,
diff --git a/tests/intel/xe_exec_compute_mode.c b/tests/intel/xe_exec_compute_mode.c
index 6d1084727..02e7ef201 100644
--- a/tests/intel/xe_exec_compute_mode.c
+++ b/tests/intel/xe_exec_compute_mode.c
@@ -120,15 +120,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
xe_get_default_alignment(fd));
for (i = 0; (flags & EXEC_QUEUE_EARLY) && i < n_exec_queues; i++) {
- struct drm_xe_ext_exec_queue_set_property ext = {
- .base.next_extension = 0,
- .base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
- .property = XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE,
- .value = 1,
- };
-
- exec_queues[i] = xe_exec_queue_create(fd, vm, eci,
- to_user_pointer(&ext));
+ exec_queues[i] = xe_exec_queue_create(fd, vm, eci, 0);
if (flags & BIND_EXECQUEUE)
bind_exec_queues[i] =
xe_bind_exec_queue_create(fd, vm, 0);
@@ -156,15 +148,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
memset(data, 0, bo_size);
for (i = 0; !(flags & EXEC_QUEUE_EARLY) && i < n_exec_queues; i++) {
- struct drm_xe_ext_exec_queue_set_property ext = {
- .base.next_extension = 0,
- .base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
- .property = XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE,
- .value = 1,
- };
-
- exec_queues[i] = xe_exec_queue_create(fd, vm, eci,
- to_user_pointer(&ext));
+ exec_queues[i] = xe_exec_queue_create(fd, vm, eci, 0);
if (flags & BIND_EXECQUEUE)
bind_exec_queues[i] =
xe_bind_exec_queue_create(fd, vm, 0);
diff --git a/tests/intel/xe_exec_reset.c b/tests/intel/xe_exec_reset.c
index 6e3f0aa4b..68e17cc98 100644
--- a/tests/intel/xe_exec_reset.c
+++ b/tests/intel/xe_exec_reset.c
@@ -540,14 +540,8 @@ test_compute_mode(int fd, struct drm_xe_engine_class_instance *eci,
memset(data, 0, bo_size);
for (i = 0; i < n_exec_queues; i++) {
- struct drm_xe_ext_exec_queue_set_property compute = {
- .base.next_extension = 0,
- .base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
- .property = XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE,
- .value = 1,
- };
struct drm_xe_ext_exec_queue_set_property preempt_timeout = {
- .base.next_extension = to_user_pointer(&compute),
+ .base.next_extension = 0,
.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
.property = XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT,
.value = 1000,
@@ -557,7 +551,7 @@ test_compute_mode(int fd, struct drm_xe_engine_class_instance *eci,
if (flags & EXEC_QUEUE_RESET)
ext = to_user_pointer(&preempt_timeout);
else
- ext = to_user_pointer(&compute);
+ ext = 0;
exec_queues[i] = xe_exec_queue_create(fd, vm, eci, ext);
};
diff --git a/tests/intel/xe_exec_threads.c b/tests/intel/xe_exec_threads.c
index d19708f80..306d8113d 100644
--- a/tests/intel/xe_exec_threads.c
+++ b/tests/intel/xe_exec_threads.c
@@ -313,17 +313,8 @@ test_compute_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
}
memset(data, 0, bo_size);
- for (i = 0; i < n_exec_queues; i++) {
- struct drm_xe_ext_exec_queue_set_property ext = {
- .base.next_extension = 0,
- .base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
- .property = XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE,
- .value = 1,
- };
-
- exec_queues[i] = xe_exec_queue_create(fd, vm, eci,
- to_user_pointer(&ext));
- };
+ for (i = 0; i < n_exec_queues; i++)
+ exec_queues[i] = xe_exec_queue_create(fd, vm, eci, 0);
pthread_barrier_wait(&barrier);
diff --git a/tests/intel/xe_noexec_ping_pong.c b/tests/intel/xe_noexec_ping_pong.c
index 3f486adf9..88b22ed11 100644
--- a/tests/intel/xe_noexec_ping_pong.c
+++ b/tests/intel/xe_noexec_ping_pong.c
@@ -64,13 +64,6 @@ static void test_ping_pong(int fd, struct drm_xe_engine_class_instance *eci)
* stats.
*/
for (i = 0; i < NUM_VMS; ++i) {
- struct drm_xe_ext_exec_queue_set_property ext = {
- .base.next_extension = 0,
- .base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
- .property = XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE,
- .value = 1,
- };
-
vm[i] = xe_vm_create(fd, DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
for (j = 0; j < NUM_BOS; ++j) {
igt_debug("Creating bo size %lu for vm %u\n",
@@ -82,8 +75,7 @@ static void test_ping_pong(int fd, struct drm_xe_engine_class_instance *eci)
xe_vm_bind(fd, vm[i], bo[i][j], 0, 0x40000 + j*bo_size,
bo_size, NULL, 0);
}
- exec_queues[i] = xe_exec_queue_create(fd, vm[i], eci,
- to_user_pointer(&ext));
+ exec_queues[i] = xe_exec_queue_create(fd, vm[i], eci, 0);
}
igt_info("Now sleeping for %ds.\n", SECONDS_TO_WAIT);
--
2.34.1
* [igt-dev] [PATCH v4 08/14] drm-uapi/xe: Use common drm_xe_ext_set_property extension
2023-09-28 11:05 [igt-dev] [PATCH v4 00/14] uAPI Alignment - take 1 v4 Francois Dugast
` (6 preceding siblings ...)
2023-09-28 11:05 ` [igt-dev] [PATCH v4 07/14] xe: Update uAPI and remove XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE Francois Dugast
@ 2023-09-28 11:05 ` Francois Dugast
2023-09-28 12:19 ` Francois Dugast
2023-09-28 11:05 ` [igt-dev] [PATCH v4 09/14] drm-uapi: Kill XE_VM_PROPERTY_BIND_OP_ERROR_CAPTURE_ADDRESS extension Francois Dugast
` (8 subsequent siblings)
16 siblings, 1 reply; 31+ messages in thread
From: Francois Dugast @ 2023-09-28 11:05 UTC (permalink / raw)
To: igt-dev; +Cc: Rodrigo Vivi
From: Rodrigo Vivi <rodrigo.vivi@intel.com>
Align with commit ("drm/xe/uapi: Use common drm_xe_ext_set_property extension")
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
include/drm-uapi/xe_drm.h | 21 +++------------------
tests/intel/xe_exec_reset.c | 10 +++++-----
tests/intel/xe_exec_threads.c | 4 ++--
tests/intel/xe_vm.c | 2 +-
4 files changed, 11 insertions(+), 26 deletions(-)
diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index 734af1b62..a2dc80727 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -569,12 +569,11 @@ struct drm_xe_vm_bind_op_error_capture {
__u64 size;
};
-/** struct drm_xe_ext_vm_set_property - VM set property extension */
-struct drm_xe_ext_vm_set_property {
+/** struct drm_xe_ext_set_property - XE set property extension */
+struct drm_xe_ext_set_property {
/** @base: base user extension */
struct xe_user_extension base;
-#define XE_VM_PROPERTY_BIND_OP_ERROR_CAPTURE_ADDRESS 0
/** @property: property to set */
__u32 property;
@@ -590,6 +589,7 @@ struct drm_xe_ext_vm_set_property {
struct drm_xe_vm_create {
#define XE_VM_EXTENSION_SET_PROPERTY 0
+#define XE_VM_PROPERTY_BIND_OP_ERROR_CAPTURE_ADDRESS 0
/** @extensions: Pointer to the first extension struct, if any */
__u64 extensions;
@@ -754,21 +754,6 @@ struct drm_xe_vm_bind {
__u64 reserved[2];
};
-/** struct drm_xe_ext_exec_queue_set_property - exec queue set property extension */
-struct drm_xe_ext_exec_queue_set_property {
- /** @base: base user extension */
- struct xe_user_extension base;
-
- /** @property: property to set */
- __u32 property;
-
- /** @pad: MBZ */
- __u32 pad;
-
- /** @value: property value */
- __u64 value;
-};
-
/**
* struct drm_xe_exec_queue_set_property - exec queue set property
*
diff --git a/tests/intel/xe_exec_reset.c b/tests/intel/xe_exec_reset.c
index 68e17cc98..ca8d7cc13 100644
--- a/tests/intel/xe_exec_reset.c
+++ b/tests/intel/xe_exec_reset.c
@@ -185,13 +185,13 @@ test_balancer(int fd, int gt, int class, int n_exec_queues, int n_execs,
data = xe_bo_map(fd, bo, bo_size);
for (i = 0; i < n_exec_queues; i++) {
- struct drm_xe_ext_exec_queue_set_property job_timeout = {
+ struct drm_xe_ext_set_property job_timeout = {
.base.next_extension = 0,
.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
.property = XE_EXEC_QUEUE_SET_PROPERTY_JOB_TIMEOUT,
.value = 50,
};
- struct drm_xe_ext_exec_queue_set_property preempt_timeout = {
+ struct drm_xe_ext_set_property preempt_timeout = {
.base.next_extension = 0,
.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
.property = XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT,
@@ -372,13 +372,13 @@ test_legacy_mode(int fd, struct drm_xe_engine_class_instance *eci,
data = xe_bo_map(fd, bo, bo_size);
for (i = 0; i < n_exec_queues; i++) {
- struct drm_xe_ext_exec_queue_set_property job_timeout = {
+ struct drm_xe_ext_set_property job_timeout = {
.base.next_extension = 0,
.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
.property = XE_EXEC_QUEUE_SET_PROPERTY_JOB_TIMEOUT,
.value = 50,
};
- struct drm_xe_ext_exec_queue_set_property preempt_timeout = {
+ struct drm_xe_ext_set_property preempt_timeout = {
.base.next_extension = 0,
.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
.property = XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT,
@@ -540,7 +540,7 @@ test_compute_mode(int fd, struct drm_xe_engine_class_instance *eci,
memset(data, 0, bo_size);
for (i = 0; i < n_exec_queues; i++) {
- struct drm_xe_ext_exec_queue_set_property preempt_timeout = {
+ struct drm_xe_ext_set_property preempt_timeout = {
.base.next_extension = 0,
.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
.property = XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT,
diff --git a/tests/intel/xe_exec_threads.c b/tests/intel/xe_exec_threads.c
index 306d8113d..b22c9c052 100644
--- a/tests/intel/xe_exec_threads.c
+++ b/tests/intel/xe_exec_threads.c
@@ -518,7 +518,7 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
memset(sync_all, 0, sizeof(sync_all));
for (i = 0; i < n_exec_queues; i++) {
- struct drm_xe_ext_exec_queue_set_property preempt_timeout = {
+ struct drm_xe_ext_set_property preempt_timeout = {
.base.next_extension = 0,
.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
.property = XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT,
@@ -1054,7 +1054,7 @@ static void threads(int fd, int flags)
pthread_cond_init(&cond, 0);
if (flags & SHARED_VM) {
- struct drm_xe_ext_vm_set_property ext = {
+ struct drm_xe_ext_set_property ext = {
.base.next_extension = 0,
.base.name = XE_VM_EXTENSION_SET_PROPERTY,
.property =
diff --git a/tests/intel/xe_vm.c b/tests/intel/xe_vm.c
index f96305851..75e7a384b 100644
--- a/tests/intel/xe_vm.c
+++ b/tests/intel/xe_vm.c
@@ -404,7 +404,7 @@ static void vm_async_ops_err(int fd, bool destroy)
};
#define N_BINDS 32
struct drm_xe_vm_bind_op_error_capture capture = {};
- struct drm_xe_ext_vm_set_property ext = {
+ struct drm_xe_ext_set_property ext = {
.base.next_extension = 0,
.base.name = XE_VM_EXTENSION_SET_PROPERTY,
.property = XE_VM_PROPERTY_BIND_OP_ERROR_CAPTURE_ADDRESS,
--
2.34.1
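For readers following the rename, the unified extension this patch introduces is filled in the same way for VM and exec queue properties. A minimal sketch of building one such extension (struct layout mirrored from the header changes above for illustration only; real code should include xe_drm.h rather than redefining these, and the property ID and value passed below are placeholders):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Mirrored from xe_drm.h as modified above, for illustration only. */
struct xe_user_extension {
	uint64_t next_extension;	/* pointer to next extension, 0 if last */
	uint32_t name;			/* extension ID, e.g. *_EXTENSION_SET_PROPERTY */
	uint32_t pad;			/* MBZ */
};

struct drm_xe_ext_set_property {
	struct xe_user_extension base;
	uint32_t property;		/* property to set */
	uint32_t pad;			/* MBZ */
	uint64_t value;			/* property value */
};

#define XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY 0

/*
 * Build one set-property extension, as the updated tests do for the
 * job and preemption timeouts.  'property' is whichever
 * XE_EXEC_QUEUE_SET_PROPERTY_* ID the caller wants.
 */
static struct drm_xe_ext_set_property
xe_set_property_ext(uint64_t next, uint32_t property, uint64_t value)
{
	struct drm_xe_ext_set_property ext;

	memset(&ext, 0, sizeof(ext));
	ext.base.next_extension = next;
	ext.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY;
	ext.property = property;
	ext.value = value;
	return ext;
}
```

The same struct now serves VM properties by setting base.name to XE_VM_EXTENSION_SET_PROPERTY instead, which is the point of the consolidation.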
^ permalink raw reply related [flat|nested] 31+ messages in thread
* Re: [igt-dev] [PATCH v4 08/14] drm-uapi/xe: Use common drm_xe_ext_set_property extension
2023-09-28 11:05 ` [igt-dev] [PATCH v4 08/14] drm-uapi/xe: Use common drm_xe_ext_set_property extension Francois Dugast
@ 2023-09-28 12:19 ` Francois Dugast
0 siblings, 0 replies; 31+ messages in thread
From: Francois Dugast @ 2023-09-28 12:19 UTC (permalink / raw)
To: igt-dev; +Cc: Rodrigo Vivi
On Thu, Sep 28, 2023 at 11:05:10AM +0000, Francois Dugast wrote:
> From: Rodrigo Vivi <rodrigo.vivi@intel.com>
>
> Align with commit ("drm/xe/uapi: Use common drm_xe_ext_set_property extension")
>
> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> Signed-off-by: Francois Dugast <francois.dugast@intel.com>
Reviewed-by: Francois Dugast <francois.dugast@intel.com>
> ---
> include/drm-uapi/xe_drm.h | 21 +++------------------
> tests/intel/xe_exec_reset.c | 10 +++++-----
> tests/intel/xe_exec_threads.c | 4 ++--
> tests/intel/xe_vm.c | 2 +-
> 4 files changed, 11 insertions(+), 26 deletions(-)
>
> diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
> index 734af1b62..a2dc80727 100644
> --- a/include/drm-uapi/xe_drm.h
> +++ b/include/drm-uapi/xe_drm.h
> @@ -569,12 +569,11 @@ struct drm_xe_vm_bind_op_error_capture {
> __u64 size;
> };
>
> -/** struct drm_xe_ext_vm_set_property - VM set property extension */
> -struct drm_xe_ext_vm_set_property {
> +/** struct drm_xe_ext_set_property - XE set property extension */
> +struct drm_xe_ext_set_property {
> /** @base: base user extension */
> struct xe_user_extension base;
>
> -#define XE_VM_PROPERTY_BIND_OP_ERROR_CAPTURE_ADDRESS 0
> /** @property: property to set */
> __u32 property;
>
> @@ -590,6 +589,7 @@ struct drm_xe_ext_vm_set_property {
>
> struct drm_xe_vm_create {
> #define XE_VM_EXTENSION_SET_PROPERTY 0
> +#define XE_VM_PROPERTY_BIND_OP_ERROR_CAPTURE_ADDRESS 0
> /** @extensions: Pointer to the first extension struct, if any */
> __u64 extensions;
>
> @@ -754,21 +754,6 @@ struct drm_xe_vm_bind {
> __u64 reserved[2];
> };
>
> -/** struct drm_xe_ext_exec_queue_set_property - exec queue set property extension */
> -struct drm_xe_ext_exec_queue_set_property {
> - /** @base: base user extension */
> - struct xe_user_extension base;
> -
> - /** @property: property to set */
> - __u32 property;
> -
> - /** @pad: MBZ */
> - __u32 pad;
> -
> - /** @value: property value */
> - __u64 value;
> -};
> -
> /**
> * struct drm_xe_exec_queue_set_property - exec queue set property
> *
> diff --git a/tests/intel/xe_exec_reset.c b/tests/intel/xe_exec_reset.c
> index 68e17cc98..ca8d7cc13 100644
> --- a/tests/intel/xe_exec_reset.c
> +++ b/tests/intel/xe_exec_reset.c
> @@ -185,13 +185,13 @@ test_balancer(int fd, int gt, int class, int n_exec_queues, int n_execs,
> data = xe_bo_map(fd, bo, bo_size);
>
> for (i = 0; i < n_exec_queues; i++) {
> - struct drm_xe_ext_exec_queue_set_property job_timeout = {
> + struct drm_xe_ext_set_property job_timeout = {
> .base.next_extension = 0,
> .base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
> .property = XE_EXEC_QUEUE_SET_PROPERTY_JOB_TIMEOUT,
> .value = 50,
> };
> - struct drm_xe_ext_exec_queue_set_property preempt_timeout = {
> + struct drm_xe_ext_set_property preempt_timeout = {
> .base.next_extension = 0,
> .base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
> .property = XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT,
> @@ -372,13 +372,13 @@ test_legacy_mode(int fd, struct drm_xe_engine_class_instance *eci,
> data = xe_bo_map(fd, bo, bo_size);
>
> for (i = 0; i < n_exec_queues; i++) {
> - struct drm_xe_ext_exec_queue_set_property job_timeout = {
> + struct drm_xe_ext_set_property job_timeout = {
> .base.next_extension = 0,
> .base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
> .property = XE_EXEC_QUEUE_SET_PROPERTY_JOB_TIMEOUT,
> .value = 50,
> };
> - struct drm_xe_ext_exec_queue_set_property preempt_timeout = {
> + struct drm_xe_ext_set_property preempt_timeout = {
> .base.next_extension = 0,
> .base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
> .property = XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT,
> @@ -540,7 +540,7 @@ test_compute_mode(int fd, struct drm_xe_engine_class_instance *eci,
> memset(data, 0, bo_size);
>
> for (i = 0; i < n_exec_queues; i++) {
> - struct drm_xe_ext_exec_queue_set_property preempt_timeout = {
> + struct drm_xe_ext_set_property preempt_timeout = {
> .base.next_extension = 0,
> .base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
> .property = XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT,
> diff --git a/tests/intel/xe_exec_threads.c b/tests/intel/xe_exec_threads.c
> index 306d8113d..b22c9c052 100644
> --- a/tests/intel/xe_exec_threads.c
> +++ b/tests/intel/xe_exec_threads.c
> @@ -518,7 +518,7 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
>
> memset(sync_all, 0, sizeof(sync_all));
> for (i = 0; i < n_exec_queues; i++) {
> - struct drm_xe_ext_exec_queue_set_property preempt_timeout = {
> + struct drm_xe_ext_set_property preempt_timeout = {
> .base.next_extension = 0,
> .base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
> .property = XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT,
> @@ -1054,7 +1054,7 @@ static void threads(int fd, int flags)
> pthread_cond_init(&cond, 0);
>
> if (flags & SHARED_VM) {
> - struct drm_xe_ext_vm_set_property ext = {
> + struct drm_xe_ext_set_property ext = {
> .base.next_extension = 0,
> .base.name = XE_VM_EXTENSION_SET_PROPERTY,
> .property =
> diff --git a/tests/intel/xe_vm.c b/tests/intel/xe_vm.c
> index f96305851..75e7a384b 100644
> --- a/tests/intel/xe_vm.c
> +++ b/tests/intel/xe_vm.c
> @@ -404,7 +404,7 @@ static void vm_async_ops_err(int fd, bool destroy)
> };
> #define N_BINDS 32
> struct drm_xe_vm_bind_op_error_capture capture = {};
> - struct drm_xe_ext_vm_set_property ext = {
> + struct drm_xe_ext_set_property ext = {
> .base.next_extension = 0,
> .base.name = XE_VM_EXTENSION_SET_PROPERTY,
> .property = XE_VM_PROPERTY_BIND_OP_ERROR_CAPTURE_ADDRESS,
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 31+ messages in thread
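Since user extensions form a linked list through base.next_extension, several drm_xe_ext_set_property instances can be attached to a single create ioctl. A hedged sketch of the chaining itself (struct definitions again mirrored from the header for illustration; to_user_pointer() behaves like IGT's helper of that name, a cast to the __u64 the uAPI expects):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Mirrored from xe_drm.h, for illustration only. */
struct xe_user_extension {
	uint64_t next_extension;
	uint32_t name;
	uint32_t pad;
};

struct drm_xe_ext_set_property {
	struct xe_user_extension base;
	uint32_t property;
	uint32_t pad;
	uint64_t value;
};

/* Cast a pointer to __u64, as IGT's to_user_pointer() does. */
static uint64_t to_user_pointer(const void *ptr)
{
	return (uintptr_t)ptr;
}

/*
 * Chain two set-property extensions: the second hangs off the first via
 * next_extension, and the head pointer would then be placed in the
 * 'extensions' field of the create struct.
 */
static void chain_properties(struct drm_xe_ext_set_property *first,
			     struct drm_xe_ext_set_property *second)
{
	first->base.next_extension = to_user_pointer(second);
	second->base.next_extension = 0;	/* end of list */
}
```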
* [igt-dev] [PATCH v4 09/14] drm-uapi: Kill XE_VM_PROPERTY_BIND_OP_ERROR_CAPTURE_ADDRESS extension
2023-09-28 11:05 [igt-dev] [PATCH v4 00/14] uAPI Alignment - take 1 v4 Francois Dugast
` (7 preceding siblings ...)
2023-09-28 11:05 ` [igt-dev] [PATCH v4 08/14] drm-uapi/xe: Use common drm_xe_ext_set_property extension Francois Dugast
@ 2023-09-28 11:05 ` Francois Dugast
2023-09-28 13:36 ` Francois Dugast
2023-09-28 11:05 ` [igt-dev] [PATCH v4 10/14] xe: Update to new VM bind uAPI Francois Dugast
` (7 subsequent siblings)
16 siblings, 1 reply; 31+ messages in thread
From: Francois Dugast @ 2023-09-28 11:05 UTC (permalink / raw)
To: igt-dev; +Cc: Rodrigo Vivi
From: Rodrigo Vivi <rodrigo.vivi@intel.com>
Align with commit ("drm/xe: Kill XE_VM_PROPERTY_BIND_OP_ERROR_CAPTURE_ADDRESS extension")
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
include/drm-uapi/xe_drm.h | 23 +----------------------
tests/intel/xe_exec_threads.c | 14 +-------------
tests/intel/xe_vm.c | 13 +------------
3 files changed, 3 insertions(+), 47 deletions(-)
diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index a2dc80727..0a05a12b2 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -552,23 +552,6 @@ struct drm_xe_gem_mmap_offset {
__u64 reserved[2];
};
-/**
- * struct drm_xe_vm_bind_op_error_capture - format of VM bind op error capture
- */
-struct drm_xe_vm_bind_op_error_capture {
- /** @error: errno that occurred */
- __s32 error;
-
- /** @op: operation that encounter an error */
- __u32 op;
-
- /** @addr: address of bind op */
- __u64 addr;
-
- /** @size: size of bind */
- __u64 size;
-};
-
/** struct drm_xe_ext_set_property - XE set property extension */
struct drm_xe_ext_set_property {
/** @base: base user extension */
@@ -589,7 +572,6 @@ struct drm_xe_ext_set_property {
struct drm_xe_vm_create {
#define XE_VM_EXTENSION_SET_PROPERTY 0
-#define XE_VM_PROPERTY_BIND_OP_ERROR_CAPTURE_ADDRESS 0
/** @extensions: Pointer to the first extension struct, if any */
__u64 extensions;
@@ -674,10 +656,7 @@ struct drm_xe_vm_bind_op {
* practice the bind op is good and will complete.
*
* If this flag is set and doesn't return an error, the bind op can
- * still fail and recovery is needed. If configured, the bind op that
- * caused the error will be captured in drm_xe_vm_bind_op_error_capture.
- * Once the user sees the error (via a ufence +
- * XE_VM_PROPERTY_BIND_OP_ERROR_CAPTURE_ADDRESS), it should free memory
+ * still fail and recovery is needed. It should free memory
* via non-async unbinds, and then restart all queued async binds op via
* XE_VM_BIND_OP_RESTART. Or alternatively the user should destroy the
* VM.
diff --git a/tests/intel/xe_exec_threads.c b/tests/intel/xe_exec_threads.c
index b22c9c052..c9a51fc00 100644
--- a/tests/intel/xe_exec_threads.c
+++ b/tests/intel/xe_exec_threads.c
@@ -740,7 +740,6 @@ static void *thread(void *data)
struct vm_thread_data {
pthread_t thread;
- struct drm_xe_vm_bind_op_error_capture *capture;
int fd;
int vm;
};
@@ -772,7 +771,6 @@ static void *vm_async_ops_err_thread(void *data)
/* Restart and wait for next error */
igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_VM_BIND,
&bind), 0);
- args->capture->error = 0;
ret = igt_ioctl(fd, DRM_IOCTL_XE_WAIT_USER_FENCE, &wait);
}
@@ -1021,7 +1019,6 @@ static void threads(int fd, int flags)
int n_hw_engines = 0, class;
uint64_t i = 0;
uint32_t vm_legacy_mode = 0, vm_compute_mode = 0;
- struct drm_xe_vm_bind_op_error_capture capture = {};
struct vm_thread_data vm_err_thread = {};
bool go = false;
int n_threads = 0;
@@ -1054,23 +1051,14 @@ static void threads(int fd, int flags)
pthread_cond_init(&cond, 0);
if (flags & SHARED_VM) {
- struct drm_xe_ext_set_property ext = {
- .base.next_extension = 0,
- .base.name = XE_VM_EXTENSION_SET_PROPERTY,
- .property =
- XE_VM_PROPERTY_BIND_OP_ERROR_CAPTURE_ADDRESS,
- .value = to_user_pointer(&capture),
- };
-
vm_legacy_mode = xe_vm_create(fd,
DRM_XE_VM_CREATE_ASYNC_BIND_OPS,
- to_user_pointer(&ext));
+ 0);
vm_compute_mode = xe_vm_create(fd,
DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
DRM_XE_VM_CREATE_COMPUTE_MODE,
0);
- vm_err_thread.capture = &capture;
vm_err_thread.fd = fd;
vm_err_thread.vm = vm_legacy_mode;
pthread_create(&vm_err_thread.thread, 0,
diff --git a/tests/intel/xe_vm.c b/tests/intel/xe_vm.c
index 75e7a384b..89df6149a 100644
--- a/tests/intel/xe_vm.c
+++ b/tests/intel/xe_vm.c
@@ -324,7 +324,6 @@ static void userptr_invalid(int fd)
struct vm_thread_data {
pthread_t thread;
- struct drm_xe_vm_bind_op_error_capture *capture;
int fd;
int vm;
uint32_t bo;
@@ -388,7 +387,6 @@ static void *vm_async_ops_err_thread(void *data)
/* Restart and wait for next error */
igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_VM_BIND,
&bind), 0);
- args->capture->error = 0;
ret = igt_ioctl(fd, DRM_IOCTL_XE_WAIT_USER_FENCE, &wait);
}
@@ -403,24 +401,15 @@ static void vm_async_ops_err(int fd, bool destroy)
.flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL,
};
#define N_BINDS 32
- struct drm_xe_vm_bind_op_error_capture capture = {};
- struct drm_xe_ext_set_property ext = {
- .base.next_extension = 0,
- .base.name = XE_VM_EXTENSION_SET_PROPERTY,
- .property = XE_VM_PROPERTY_BIND_OP_ERROR_CAPTURE_ADDRESS,
- .value = to_user_pointer(&capture),
- };
struct vm_thread_data thread = {};
uint32_t syncobjs[N_BINDS];
size_t bo_size = 0x1000 * 32;
uint32_t bo;
int i, j;
- vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS,
- to_user_pointer(&ext));
+ vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
bo = xe_bo_create(fd, 0, vm, bo_size);
- thread.capture = &capture;
thread.fd = fd;
thread.vm = vm;
thread.bo = bo;
--
2.34.1
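With the capture extension gone, the error path in the tests reduces to waiting on a user fence word via DRM_IOCTL_XE_WAIT_USER_FENCE. A userspace sketch of the EQ/NEQ comparison that wait performs on the fence value (op values mirrored from the header; this illustrates the wait condition only, not the ioctl, and the masked-compare form is an assumption about the kernel's check):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Wait ops mirrored from xe_drm.h, for illustration only. */
#define DRM_XE_UFENCE_WAIT_EQ	0
#define DRM_XE_UFENCE_WAIT_NEQ	1

/*
 * Evaluate the wait condition against the fence word read from 'addr':
 * compare the masked fence value with the masked target value.
 */
static bool ufence_condition_met(uint16_t op, uint64_t fence,
				 uint64_t value, uint64_t mask)
{
	switch (op) {
	case DRM_XE_UFENCE_WAIT_EQ:
		return (fence & mask) == (value & mask);
	case DRM_XE_UFENCE_WAIT_NEQ:
		return (fence & mask) != (value & mask);
	default:
		return false;	/* other ops elided in this sketch */
	}
}
```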
^ permalink raw reply related [flat|nested] 31+ messages in thread
* Re: [igt-dev] [PATCH v4 09/14] drm-uapi: Kill XE_VM_PROPERTY_BIND_OP_ERROR_CAPTURE_ADDRESS extension
2023-09-28 11:05 ` [igt-dev] [PATCH v4 09/14] drm-uapi: Kill XE_VM_PROPERTY_BIND_OP_ERROR_CAPTURE_ADDRESS extension Francois Dugast
@ 2023-09-28 13:36 ` Francois Dugast
0 siblings, 0 replies; 31+ messages in thread
From: Francois Dugast @ 2023-09-28 13:36 UTC (permalink / raw)
To: igt-dev; +Cc: Rodrigo Vivi
On Thu, Sep 28, 2023 at 11:05:11AM +0000, Francois Dugast wrote:
> From: Rodrigo Vivi <rodrigo.vivi@intel.com>
>
> Align with commit ("drm/xe: Kill XE_VM_PROPERTY_BIND_OP_ERROR_CAPTURE_ADDRESS extension")
>
> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> Signed-off-by: Francois Dugast <francois.dugast@intel.com>
Reviewed-by: Francois Dugast <francois.dugast@intel.com>
> ---
> include/drm-uapi/xe_drm.h | 23 +----------------------
> tests/intel/xe_exec_threads.c | 14 +-------------
> tests/intel/xe_vm.c | 13 +------------
> 3 files changed, 3 insertions(+), 47 deletions(-)
>
> diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
> index a2dc80727..0a05a12b2 100644
> --- a/include/drm-uapi/xe_drm.h
> +++ b/include/drm-uapi/xe_drm.h
> @@ -552,23 +552,6 @@ struct drm_xe_gem_mmap_offset {
> __u64 reserved[2];
> };
>
> -/**
> - * struct drm_xe_vm_bind_op_error_capture - format of VM bind op error capture
> - */
> -struct drm_xe_vm_bind_op_error_capture {
> - /** @error: errno that occurred */
> - __s32 error;
> -
> - /** @op: operation that encounter an error */
> - __u32 op;
> -
> - /** @addr: address of bind op */
> - __u64 addr;
> -
> - /** @size: size of bind */
> - __u64 size;
> -};
> -
> /** struct drm_xe_ext_set_property - XE set property extension */
> struct drm_xe_ext_set_property {
> /** @base: base user extension */
> @@ -589,7 +572,6 @@ struct drm_xe_ext_set_property {
>
> struct drm_xe_vm_create {
> #define XE_VM_EXTENSION_SET_PROPERTY 0
> -#define XE_VM_PROPERTY_BIND_OP_ERROR_CAPTURE_ADDRESS 0
> /** @extensions: Pointer to the first extension struct, if any */
> __u64 extensions;
>
> @@ -674,10 +656,7 @@ struct drm_xe_vm_bind_op {
> * practice the bind op is good and will complete.
> *
> * If this flag is set and doesn't return an error, the bind op can
> - * still fail and recovery is needed. If configured, the bind op that
> - * caused the error will be captured in drm_xe_vm_bind_op_error_capture.
> - * Once the user sees the error (via a ufence +
> - * XE_VM_PROPERTY_BIND_OP_ERROR_CAPTURE_ADDRESS), it should free memory
> + * still fail and recovery is needed. It should free memory
> * via non-async unbinds, and then restart all queued async binds op via
> * XE_VM_BIND_OP_RESTART. Or alternatively the user should destroy the
> * VM.
> diff --git a/tests/intel/xe_exec_threads.c b/tests/intel/xe_exec_threads.c
> index b22c9c052..c9a51fc00 100644
> --- a/tests/intel/xe_exec_threads.c
> +++ b/tests/intel/xe_exec_threads.c
> @@ -740,7 +740,6 @@ static void *thread(void *data)
>
> struct vm_thread_data {
> pthread_t thread;
> - struct drm_xe_vm_bind_op_error_capture *capture;
> int fd;
> int vm;
> };
> @@ -772,7 +771,6 @@ static void *vm_async_ops_err_thread(void *data)
> /* Restart and wait for next error */
> igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_VM_BIND,
> &bind), 0);
> - args->capture->error = 0;
> ret = igt_ioctl(fd, DRM_IOCTL_XE_WAIT_USER_FENCE, &wait);
> }
>
> @@ -1021,7 +1019,6 @@ static void threads(int fd, int flags)
> int n_hw_engines = 0, class;
> uint64_t i = 0;
> uint32_t vm_legacy_mode = 0, vm_compute_mode = 0;
> - struct drm_xe_vm_bind_op_error_capture capture = {};
> struct vm_thread_data vm_err_thread = {};
> bool go = false;
> int n_threads = 0;
> @@ -1054,23 +1051,14 @@ static void threads(int fd, int flags)
> pthread_cond_init(&cond, 0);
>
> if (flags & SHARED_VM) {
> - struct drm_xe_ext_set_property ext = {
> - .base.next_extension = 0,
> - .base.name = XE_VM_EXTENSION_SET_PROPERTY,
> - .property =
> - XE_VM_PROPERTY_BIND_OP_ERROR_CAPTURE_ADDRESS,
> - .value = to_user_pointer(&capture),
> - };
> -
> vm_legacy_mode = xe_vm_create(fd,
> DRM_XE_VM_CREATE_ASYNC_BIND_OPS,
> - to_user_pointer(&ext));
> + 0);
> vm_compute_mode = xe_vm_create(fd,
> DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
> DRM_XE_VM_CREATE_COMPUTE_MODE,
> 0);
>
> - vm_err_thread.capture = &capture;
> vm_err_thread.fd = fd;
> vm_err_thread.vm = vm_legacy_mode;
> pthread_create(&vm_err_thread.thread, 0,
> diff --git a/tests/intel/xe_vm.c b/tests/intel/xe_vm.c
> index 75e7a384b..89df6149a 100644
> --- a/tests/intel/xe_vm.c
> +++ b/tests/intel/xe_vm.c
> @@ -324,7 +324,6 @@ static void userptr_invalid(int fd)
>
> struct vm_thread_data {
> pthread_t thread;
> - struct drm_xe_vm_bind_op_error_capture *capture;
> int fd;
> int vm;
> uint32_t bo;
> @@ -388,7 +387,6 @@ static void *vm_async_ops_err_thread(void *data)
> /* Restart and wait for next error */
> igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_VM_BIND,
> &bind), 0);
> - args->capture->error = 0;
> ret = igt_ioctl(fd, DRM_IOCTL_XE_WAIT_USER_FENCE, &wait);
> }
>
> @@ -403,24 +401,15 @@ static void vm_async_ops_err(int fd, bool destroy)
> .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL,
> };
> #define N_BINDS 32
> - struct drm_xe_vm_bind_op_error_capture capture = {};
> - struct drm_xe_ext_set_property ext = {
> - .base.next_extension = 0,
> - .base.name = XE_VM_EXTENSION_SET_PROPERTY,
> - .property = XE_VM_PROPERTY_BIND_OP_ERROR_CAPTURE_ADDRESS,
> - .value = to_user_pointer(&capture),
> - };
> struct vm_thread_data thread = {};
> uint32_t syncobjs[N_BINDS];
> size_t bo_size = 0x1000 * 32;
> uint32_t bo;
> int i, j;
>
> - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS,
> - to_user_pointer(&ext));
> + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> bo = xe_bo_create(fd, 0, vm, bo_size);
>
> - thread.capture = &capture;
> thread.fd = fd;
> thread.vm = vm;
> thread.bo = bo;
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 31+ messages in thread
* [igt-dev] [PATCH v4 10/14] xe: Update to new VM bind uAPI
2023-09-28 11:05 [igt-dev] [PATCH v4 00/14] uAPI Alignment - take 1 v4 Francois Dugast
` (8 preceding siblings ...)
2023-09-28 11:05 ` [igt-dev] [PATCH v4 09/14] drm-uapi: Kill XE_VM_PROPERTY_BIND_OP_ERROR_CAPTURE_ADDRESS extension Francois Dugast
@ 2023-09-28 11:05 ` Francois Dugast
2023-09-29 16:32 ` Souza, Jose
2023-09-28 11:05 ` [igt-dev] [PATCH v4 11/14] drm-uapi/xe: Replace useless 'instance' per unique gt_id Francois Dugast
` (6 subsequent siblings)
16 siblings, 1 reply; 31+ messages in thread
From: Francois Dugast @ 2023-09-28 11:05 UTC (permalink / raw)
To: igt-dev; +Cc: Rodrigo Vivi
From: Matthew Brost <matthew.brost@intel.com>
Sync vs. async changes and new error handling.
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
[Rodrigo rebased and fixed conflicts]
---
include/drm-uapi/xe_drm.h | 50 ++------
lib/igt_fb.c | 2 +-
lib/intel_batchbuffer.c | 2 +-
lib/intel_compute.c | 2 +-
lib/xe/xe_ioctl.c | 15 +--
lib/xe/xe_ioctl.h | 3 +-
lib/xe/xe_query.c | 2 +-
tests/intel/xe_ccs.c | 4 +-
tests/intel/xe_create.c | 6 +-
tests/intel/xe_drm_fdinfo.c | 4 +-
tests/intel/xe_evict.c | 23 ++--
tests/intel/xe_exec_balancer.c | 6 +-
tests/intel/xe_exec_basic.c | 6 +-
tests/intel/xe_exec_compute_mode.c | 6 +-
tests/intel/xe_exec_fault_mode.c | 6 +-
tests/intel/xe_exec_reset.c | 8 +-
tests/intel/xe_exec_store.c | 4 +-
tests/intel/xe_exec_threads.c | 112 +++++------------
tests/intel/xe_exercise_blt.c | 2 +-
tests/intel/xe_guc_pc.c | 2 +-
tests/intel/xe_huc_copy.c | 2 +-
tests/intel/xe_intel_bb.c | 2 +-
tests/intel/xe_pm.c | 2 +-
tests/intel/xe_vm.c | 189 ++---------------------------
tests/intel/xe_waitfence.c | 19 +--
25 files changed, 102 insertions(+), 377 deletions(-)
diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index 0a05a12b2..80b4c76f3 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -134,10 +134,11 @@ struct drm_xe_engine_class_instance {
#define DRM_XE_ENGINE_CLASS_VIDEO_ENHANCE 3
#define DRM_XE_ENGINE_CLASS_COMPUTE 4
/*
- * Kernel only class (not actual hardware engine class). Used for
+ * Kernel only classes (not actual hardware engine class). Used for
* creating ordered queues of VM bind operations.
*/
-#define DRM_XE_ENGINE_CLASS_VM_BIND 5
+#define DRM_XE_ENGINE_CLASS_VM_BIND_ASYNC 5
+#define DRM_XE_ENGINE_CLASS_VM_BIND_SYNC 6
__u16 engine_class;
__u16 engine_instance;
@@ -577,7 +578,7 @@ struct drm_xe_vm_create {
#define DRM_XE_VM_CREATE_SCRATCH_PAGE (0x1 << 0)
#define DRM_XE_VM_CREATE_COMPUTE_MODE (0x1 << 1)
-#define DRM_XE_VM_CREATE_ASYNC_BIND_OPS (0x1 << 2)
+#define DRM_XE_VM_CREATE_ASYNC_DEFAULT (0x1 << 2)
#define DRM_XE_VM_CREATE_FAULT_MODE (0x1 << 3)
/** @flags: Flags */
__u32 flags;
@@ -637,34 +638,12 @@ struct drm_xe_vm_bind_op {
#define XE_VM_BIND_OP_MAP 0x0
#define XE_VM_BIND_OP_UNMAP 0x1
#define XE_VM_BIND_OP_MAP_USERPTR 0x2
-#define XE_VM_BIND_OP_RESTART 0x3
-#define XE_VM_BIND_OP_UNMAP_ALL 0x4
-#define XE_VM_BIND_OP_PREFETCH 0x5
+#define XE_VM_BIND_OP_UNMAP_ALL 0x3
+#define XE_VM_BIND_OP_PREFETCH 0x4
/** @op: Bind operation to perform */
__u32 op;
#define XE_VM_BIND_FLAG_READONLY (0x1 << 0)
- /*
- * A bind ops completions are always async, hence the support for out
- * sync. This flag indicates the allocation of the memory for new page
- * tables and the job to program the pages tables is asynchronous
- * relative to the IOCTL. That part of a bind operation can fail under
- * memory pressure, the job in practice can't fail unless the system is
- * totally shot.
- *
- * If this flag is clear and the IOCTL doesn't return an error, in
- * practice the bind op is good and will complete.
- *
- * If this flag is set and doesn't return an error, the bind op can
- * still fail and recovery is needed. It should free memory
- * via non-async unbinds, and then restart all queued async binds op via
- * XE_VM_BIND_OP_RESTART. Or alternatively the user should destroy the
- * VM.
- *
- * This flag is only allowed when DRM_XE_VM_CREATE_ASYNC_BIND_OPS is
- * configured in the VM and must be set if the VM is configured with
- * DRM_XE_VM_CREATE_ASYNC_BIND_OPS and not in an error state.
- */
#define XE_VM_BIND_FLAG_ASYNC (0x1 << 1)
/*
* Valid on a faulting VM only, do the MAP operation immediately rather
@@ -905,18 +884,10 @@ struct drm_xe_wait_user_fence {
/** @extensions: Pointer to the first extension struct, if any */
__u64 extensions;
- union {
- /**
- * @addr: user pointer address to wait on, must qword aligned
- */
- __u64 addr;
-
- /**
- * @vm_id: The ID of the VM which encounter an error used with
- * DRM_XE_UFENCE_WAIT_VM_ERROR. Upper 32 bits must be clear.
- */
- __u64 vm_id;
- };
+ /**
+ * @addr: user pointer address to wait on, must qword aligned
+ */
+ __u64 addr;
#define DRM_XE_UFENCE_WAIT_EQ 0
#define DRM_XE_UFENCE_WAIT_NEQ 1
@@ -929,7 +900,6 @@ struct drm_xe_wait_user_fence {
#define DRM_XE_UFENCE_WAIT_SOFT_OP (1 << 0) /* e.g. Wait on VM bind */
#define DRM_XE_UFENCE_WAIT_ABSTIME (1 << 1)
-#define DRM_XE_UFENCE_WAIT_VM_ERROR (1 << 2)
/** @flags: wait flags */
__u16 flags;
diff --git a/lib/igt_fb.c b/lib/igt_fb.c
index f0c0681ab..34934855a 100644
--- a/lib/igt_fb.c
+++ b/lib/igt_fb.c
@@ -2892,7 +2892,7 @@ static void blitcopy(const struct igt_fb *dst_fb,
&bb_size,
mem_region) == 0);
} else if (is_xe) {
- vm = xe_vm_create(dst_fb->fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+ vm = xe_vm_create(dst_fb->fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
exec_queue = xe_exec_queue_create(dst_fb->fd, vm, &inst, 0);
xe_ctx = intel_ctx_xe(dst_fb->fd, vm, exec_queue, 0, 0, 0);
mem_region = vram_if_possible(dst_fb->fd, 0);
diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
index 6e668d28c..df82ef5f5 100644
--- a/lib/intel_batchbuffer.c
+++ b/lib/intel_batchbuffer.c
@@ -953,7 +953,7 @@ __intel_bb_create(int fd, uint32_t ctx, uint32_t vm, const intel_ctx_cfg_t *cfg,
if (!vm) {
igt_assert_f(!ctx, "No vm provided for engine");
- vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+ vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
}
ibb->uses_full_ppgtt = true;
diff --git a/lib/intel_compute.c b/lib/intel_compute.c
index 0c30f39c1..1ae33cdfc 100644
--- a/lib/intel_compute.c
+++ b/lib/intel_compute.c
@@ -79,7 +79,7 @@ static void bo_execenv_create(int fd, struct bo_execenv *execenv)
else
engine_class = DRM_XE_ENGINE_CLASS_COMPUTE;
- execenv->vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+ execenv->vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
execenv->exec_queue = xe_exec_queue_create_class(fd, execenv->vm,
engine_class);
}
diff --git a/lib/xe/xe_ioctl.c b/lib/xe/xe_ioctl.c
index 48cd185de..895e3bd4e 100644
--- a/lib/xe/xe_ioctl.c
+++ b/lib/xe/xe_ioctl.c
@@ -201,16 +201,8 @@ void xe_vm_unbind_async(int fd, uint32_t vm, uint32_t exec_queue,
static void __xe_vm_bind_sync(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
uint64_t addr, uint64_t size, uint32_t op)
{
- struct drm_xe_sync sync = {
- .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL,
- .handle = syncobj_create(fd, 0),
- };
-
- __xe_vm_bind_assert(fd, vm, 0, bo, offset, addr, size, op, 0, &sync, 1,
+ __xe_vm_bind_assert(fd, vm, 0, bo, offset, addr, size, op, 0, NULL, 0,
0, 0);
-
- igt_assert(syncobj_wait(fd, &sync.handle, 1, INT64_MAX, 0, NULL));
- syncobj_destroy(fd, sync.handle);
}
void xe_vm_bind_sync(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
@@ -276,10 +268,11 @@ uint32_t xe_bo_create(int fd, int gt, uint32_t vm, uint64_t size)
return create.handle;
}
-uint32_t xe_bind_exec_queue_create(int fd, uint32_t vm, uint64_t ext)
+uint32_t xe_bind_exec_queue_create(int fd, uint32_t vm, uint64_t ext, bool async)
{
struct drm_xe_engine_class_instance instance = {
- .engine_class = DRM_XE_ENGINE_CLASS_VM_BIND,
+ .engine_class = async ? DRM_XE_ENGINE_CLASS_VM_BIND_ASYNC :
+ DRM_XE_ENGINE_CLASS_VM_BIND_SYNC,
};
struct drm_xe_exec_queue_create create = {
.extensions = ext,
diff --git a/lib/xe/xe_ioctl.h b/lib/xe/xe_ioctl.h
index f0e4109dc..a8dbcf376 100644
--- a/lib/xe/xe_ioctl.h
+++ b/lib/xe/xe_ioctl.h
@@ -71,7 +71,8 @@ uint32_t xe_bo_create(int fd, int gt, uint32_t vm, uint64_t size);
uint32_t xe_exec_queue_create(int fd, uint32_t vm,
struct drm_xe_engine_class_instance *instance,
uint64_t ext);
-uint32_t xe_bind_exec_queue_create(int fd, uint32_t vm, uint64_t ext);
+uint32_t xe_bind_exec_queue_create(int fd, uint32_t vm, uint64_t ext,
+ bool async);
uint32_t xe_exec_queue_create_class(int fd, uint32_t vm, uint16_t class);
void xe_exec_queue_destroy(int fd, uint32_t exec_queue);
uint64_t xe_bo_mmap_offset(int fd, uint32_t bo);
diff --git a/lib/xe/xe_query.c b/lib/xe/xe_query.c
index c356abe1e..ab7b31188 100644
--- a/lib/xe/xe_query.c
+++ b/lib/xe/xe_query.c
@@ -316,7 +316,7 @@ bool xe_supports_faults(int fd)
bool supports_faults;
struct drm_xe_vm_create create = {
- .flags = DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
+ .flags = DRM_XE_VM_CREATE_ASYNC_DEFAULT |
DRM_XE_VM_CREATE_FAULT_MODE,
};
diff --git a/tests/intel/xe_ccs.c b/tests/intel/xe_ccs.c
index 20bbc4448..300b734c8 100644
--- a/tests/intel/xe_ccs.c
+++ b/tests/intel/xe_ccs.c
@@ -343,7 +343,7 @@ static void block_copy(int xe,
uint32_t vm, exec_queue;
if (config->new_ctx) {
- vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+ vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
exec_queue = xe_exec_queue_create(xe, vm, &inst, 0);
surf_ctx = intel_ctx_xe(xe, vm, exec_queue, 0, 0, 0);
surf_ahnd = intel_allocator_open(xe, surf_ctx->vm,
@@ -550,7 +550,7 @@ static void block_copy_test(int xe,
copyfns[copy_function].suffix) {
uint32_t sync_bind, sync_out;
- vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+ vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
exec_queue = xe_exec_queue_create(xe, vm, &inst, 0);
sync_bind = syncobj_create(xe, 0);
sync_out = syncobj_create(xe, 0);
diff --git a/tests/intel/xe_create.c b/tests/intel/xe_create.c
index 8d845e5c8..d99bd51cf 100644
--- a/tests/intel/xe_create.c
+++ b/tests/intel/xe_create.c
@@ -54,7 +54,7 @@ static void create_invalid_size(int fd)
uint32_t handle;
int ret;
- vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+ vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
xe_for_each_mem_region(fd, memreg, region) {
memregion = xe_mem_region(fd, region);
@@ -140,7 +140,7 @@ static void create_execqueues(int fd, enum exec_queue_destroy ed)
fd = drm_reopen_driver(fd);
num_engines = xe_number_hw_engines(fd);
- vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+ vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
exec_queues_per_process = max_t(uint32_t, 1, MAXEXECQUEUES / nproc);
igt_debug("nproc: %u, exec_queues per process: %u\n", nproc, exec_queues_per_process);
@@ -199,7 +199,7 @@ static void create_massive_size(int fd)
uint32_t handle;
int ret;
- vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+ vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
xe_for_each_mem_region(fd, memreg, region) {
ret = __create_bo(fd, vm, -1ULL << 32, region, &handle);
diff --git a/tests/intel/xe_drm_fdinfo.c b/tests/intel/xe_drm_fdinfo.c
index 22e410e14..64168ed19 100644
--- a/tests/intel/xe_drm_fdinfo.c
+++ b/tests/intel/xe_drm_fdinfo.c
@@ -71,7 +71,7 @@ static void test_active(int fd, struct drm_xe_engine_class_instance *eci)
struct xe_spin_opts spin_opts = { .preempt = true };
int i, b, ret;
- vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+ vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
bo_size = sizeof(*data) * N_EXEC_QUEUES;
bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
xe_get_default_alignment(fd));
@@ -90,7 +90,7 @@ static void test_active(int fd, struct drm_xe_engine_class_instance *eci)
for (i = 0; i < N_EXEC_QUEUES; i++) {
exec_queues[i] = xe_exec_queue_create(fd, vm, eci, 0);
- bind_exec_queues[i] = xe_bind_exec_queue_create(fd, vm, 0);
+ bind_exec_queues[i] = xe_bind_exec_queue_create(fd, vm, 0, true);
syncobjs[i] = syncobj_create(fd, 0);
}
syncobjs[N_EXEC_QUEUES] = syncobj_create(fd, 0);
diff --git a/tests/intel/xe_evict.c b/tests/intel/xe_evict.c
index 5d8981f8d..eec001218 100644
--- a/tests/intel/xe_evict.c
+++ b/tests/intel/xe_evict.c
@@ -63,15 +63,17 @@ test_evict(int fd, struct drm_xe_engine_class_instance *eci,
fd = drm_open_driver(DRIVER_XE);
- vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+ vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
if (flags & BIND_EXEC_QUEUE)
- bind_exec_queues[0] = xe_bind_exec_queue_create(fd, vm, 0);
+ bind_exec_queues[0] = xe_bind_exec_queue_create(fd, vm, 0, true);
if (flags & MULTI_VM) {
- vm2 = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
- vm3 = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+ vm2 = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+ vm3 = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
if (flags & BIND_EXEC_QUEUE) {
- bind_exec_queues[1] = xe_bind_exec_queue_create(fd, vm2, 0);
- bind_exec_queues[2] = xe_bind_exec_queue_create(fd, vm3, 0);
+ bind_exec_queues[1] = xe_bind_exec_queue_create(fd, vm2,
+ 0, true);
+ bind_exec_queues[2] = xe_bind_exec_queue_create(fd, vm3,
+ 0, true);
}
}
@@ -240,15 +242,16 @@ test_evict_cm(int fd, struct drm_xe_engine_class_instance *eci,
fd = drm_open_driver(DRIVER_XE);
- vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
+ vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
if (flags & BIND_EXEC_QUEUE)
- bind_exec_queues[0] = xe_bind_exec_queue_create(fd, vm, 0);
+ bind_exec_queues[0] = xe_bind_exec_queue_create(fd, vm, 0, true);
if (flags & MULTI_VM) {
- vm2 = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
+ vm2 = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
if (flags & BIND_EXEC_QUEUE)
- bind_exec_queues[1] = xe_bind_exec_queue_create(fd, vm2, 0);
+ bind_exec_queues[1] = xe_bind_exec_queue_create(fd, vm2,
+ 0, true);
}
for (i = 0; i < n_exec_queues; i++) {
diff --git a/tests/intel/xe_exec_balancer.c b/tests/intel/xe_exec_balancer.c
index f4f5440f4..3ca3de881 100644
--- a/tests/intel/xe_exec_balancer.c
+++ b/tests/intel/xe_exec_balancer.c
@@ -66,7 +66,7 @@ static void test_all_active(int fd, int gt, int class)
if (num_placements < 2)
return;
- vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+ vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
bo_size = sizeof(*data) * num_placements;
bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd), xe_get_default_alignment(fd));
@@ -207,7 +207,7 @@ test_exec(int fd, int gt, int class, int n_exec_queues, int n_execs,
if (num_placements < 2)
return;
- vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+ vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
bo_size = sizeof(*data) * n_execs;
bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd), xe_get_default_alignment(fd));
@@ -433,7 +433,7 @@ test_cm(int fd, int gt, int class, int n_exec_queues, int n_execs,
if (num_placements < 2)
return;
- vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
+ vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
bo_size = sizeof(*data) * n_execs;
bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
diff --git a/tests/intel/xe_exec_basic.c b/tests/intel/xe_exec_basic.c
index e29398aaa..8dbce524d 100644
--- a/tests/intel/xe_exec_basic.c
+++ b/tests/intel/xe_exec_basic.c
@@ -109,7 +109,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
igt_assert(n_vm <= MAX_N_EXEC_QUEUES);
for (i = 0; i < n_vm; ++i)
- vm[i] = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+ vm[i] = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
bo_size = sizeof(*data) * n_execs;
bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
xe_get_default_alignment(fd));
@@ -151,7 +151,9 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
exec_queues[i] = xe_exec_queue_create(fd, __vm, eci, 0);
if (flags & BIND_EXEC_QUEUE)
- bind_exec_queues[i] = xe_bind_exec_queue_create(fd, __vm, 0);
+ bind_exec_queues[i] = xe_bind_exec_queue_create(fd,
+ __vm, 0,
+ true);
else
bind_exec_queues[i] = 0;
syncobjs[i] = syncobj_create(fd, 0);
diff --git a/tests/intel/xe_exec_compute_mode.c b/tests/intel/xe_exec_compute_mode.c
index 02e7ef201..b0a677dca 100644
--- a/tests/intel/xe_exec_compute_mode.c
+++ b/tests/intel/xe_exec_compute_mode.c
@@ -113,7 +113,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
igt_assert(n_exec_queues <= MAX_N_EXECQUEUES);
- vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
+ vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
bo_size = sizeof(*data) * n_execs;
bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
@@ -123,7 +123,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
exec_queues[i] = xe_exec_queue_create(fd, vm, eci, 0);
if (flags & BIND_EXECQUEUE)
bind_exec_queues[i] =
- xe_bind_exec_queue_create(fd, vm, 0);
+ xe_bind_exec_queue_create(fd, vm, 0, true);
else
bind_exec_queues[i] = 0;
};
@@ -151,7 +151,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
exec_queues[i] = xe_exec_queue_create(fd, vm, eci, 0);
if (flags & BIND_EXECQUEUE)
bind_exec_queues[i] =
- xe_bind_exec_queue_create(fd, vm, 0);
+ xe_bind_exec_queue_create(fd, vm, 0, true);
else
bind_exec_queues[i] = 0;
};
diff --git a/tests/intel/xe_exec_fault_mode.c b/tests/intel/xe_exec_fault_mode.c
index c5d6bdcd5..92d8690a1 100644
--- a/tests/intel/xe_exec_fault_mode.c
+++ b/tests/intel/xe_exec_fault_mode.c
@@ -131,7 +131,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
igt_assert(n_exec_queues <= MAX_N_EXEC_QUEUES);
- vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
+ vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
DRM_XE_VM_CREATE_FAULT_MODE, 0);
bo_size = sizeof(*data) * n_execs;
bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
@@ -165,7 +165,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
exec_queues[i] = xe_exec_queue_create(fd, vm, eci, 0);
if (flags & BIND_EXEC_QUEUE)
bind_exec_queues[i] =
- xe_bind_exec_queue_create(fd, vm, 0);
+ xe_bind_exec_queue_create(fd, vm, 0, true);
else
bind_exec_queues[i] = 0;
};
@@ -375,7 +375,7 @@ test_atomic(int fd, struct drm_xe_engine_class_instance *eci,
uint32_t *ptr;
int i, b, wait_idx = 0;
- vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
+ vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
DRM_XE_VM_CREATE_FAULT_MODE, 0);
bo_size = sizeof(*data) * n_atomic;
bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
diff --git a/tests/intel/xe_exec_reset.c b/tests/intel/xe_exec_reset.c
index ca8d7cc13..44248776b 100644
--- a/tests/intel/xe_exec_reset.c
+++ b/tests/intel/xe_exec_reset.c
@@ -45,7 +45,7 @@ static void test_spin(int fd, struct drm_xe_engine_class_instance *eci)
struct xe_spin *spin;
struct xe_spin_opts spin_opts = { .addr = addr, .preempt = false };
- vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+ vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
bo_size = sizeof(*spin);
bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
xe_get_default_alignment(fd));
@@ -176,7 +176,7 @@ test_balancer(int fd, int gt, int class, int n_exec_queues, int n_execs,
if (num_placements < 2)
return;
- vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+ vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
bo_size = sizeof(*data) * n_execs;
bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
xe_get_default_alignment(fd));
@@ -362,7 +362,7 @@ test_legacy_mode(int fd, struct drm_xe_engine_class_instance *eci,
if (flags & CLOSE_FD)
fd = drm_open_driver(DRIVER_XE);
- vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+ vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
bo_size = sizeof(*data) * n_execs;
bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
xe_get_default_alignment(fd));
@@ -528,7 +528,7 @@ test_compute_mode(int fd, struct drm_xe_engine_class_instance *eci,
if (flags & CLOSE_FD)
fd = drm_open_driver(DRIVER_XE);
- vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
+ vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
bo_size = sizeof(*data) * n_execs;
bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
diff --git a/tests/intel/xe_exec_store.c b/tests/intel/xe_exec_store.c
index 14f7c9bec..90684b8cb 100644
--- a/tests/intel/xe_exec_store.c
+++ b/tests/intel/xe_exec_store.c
@@ -75,7 +75,7 @@ static void store(int fd)
syncobj = syncobj_create(fd, 0);
sync.handle = syncobj;
- vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+ vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
bo_size = sizeof(*data);
bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
xe_get_default_alignment(fd));
@@ -132,7 +132,7 @@ static void store_all(int fd, int gt, int class)
struct drm_xe_engine_class_instance *hwe;
int i, num_placements = 0;
- vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+ vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
bo_size = sizeof(*data);
bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
xe_get_default_alignment(fd));
diff --git a/tests/intel/xe_exec_threads.c b/tests/intel/xe_exec_threads.c
index c9a51fc00..bb16bdd88 100644
--- a/tests/intel/xe_exec_threads.c
+++ b/tests/intel/xe_exec_threads.c
@@ -77,7 +77,7 @@ test_balancer(int fd, int gt, uint32_t vm, uint64_t addr, uint64_t userptr,
}
if (!vm) {
- vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+ vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
owns_vm = true;
}
@@ -285,7 +285,7 @@ test_compute_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
}
if (!vm) {
- vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
+ vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
owns_vm = true;
}
@@ -454,7 +454,7 @@ test_compute_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
static void
test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
struct drm_xe_engine_class_instance *eci, int n_exec_queues,
- int n_execs, int rebind_error_inject, unsigned int flags)
+ int n_execs, unsigned int flags)
{
struct drm_xe_sync sync[2] = {
{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
@@ -489,7 +489,7 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
}
if (!vm) {
- vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+ vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
owns_vm = true;
}
@@ -531,7 +531,8 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
else
exec_queues[i] = xe_exec_queue_create(fd, vm, eci, 0);
if (flags & BIND_EXEC_QUEUE)
- bind_exec_queues[i] = xe_bind_exec_queue_create(fd, vm, 0);
+ bind_exec_queues[i] = xe_bind_exec_queue_create(fd, vm,
+ 0, true);
else
bind_exec_queues[i] = 0;
syncobjs[i] = syncobj_create(fd, 0);
@@ -583,8 +584,7 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
exec.address = exec_addr;
if (e != i && !(flags & HANG))
syncobj_reset(fd, &syncobjs[e], 1);
- if ((flags & HANG && e == hang_exec_queue) ||
- rebind_error_inject > 0) {
+ if ((flags & HANG && e == hang_exec_queue)) {
int err;
do {
@@ -594,20 +594,10 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
xe_exec(fd, &exec);
}
- if (flags & REBIND && i &&
- (!(i & 0x1f) || rebind_error_inject == i)) {
-#define INJECT_ERROR (0x1 << 31)
- if (rebind_error_inject == i)
- __xe_vm_bind_assert(fd, vm, bind_exec_queues[e],
- 0, 0, addr, bo_size,
- XE_VM_BIND_OP_UNMAP,
- XE_VM_BIND_FLAG_ASYNC |
- INJECT_ERROR, sync_all,
- n_exec_queues, 0, 0);
- else
- xe_vm_unbind_async(fd, vm, bind_exec_queues[e],
- 0, addr, bo_size,
- sync_all, n_exec_queues);
+ if (flags & REBIND && i && !(i & 0x1f)) {
+ xe_vm_unbind_async(fd, vm, bind_exec_queues[e],
+ 0, addr, bo_size,
+ sync_all, n_exec_queues);
sync[0].flags |= DRM_XE_SYNC_SIGNAL;
addr += bo_size;
@@ -709,7 +699,6 @@ struct thread_data {
int n_exec_queue;
int n_exec;
int flags;
- int rebind_error_inject;
bool *go;
};
@@ -733,46 +722,7 @@ static void *thread(void *data)
else
test_legacy_mode(t->fd, t->vm_legacy_mode, t->addr, t->userptr,
t->eci, t->n_exec_queue, t->n_exec,
- t->rebind_error_inject, t->flags);
-
- return NULL;
-}
-
-struct vm_thread_data {
- pthread_t thread;
- int fd;
- int vm;
-};
-
-static void *vm_async_ops_err_thread(void *data)
-{
- struct vm_thread_data *args = data;
- int fd = args->fd;
- int ret;
-
- struct drm_xe_wait_user_fence wait = {
- .vm_id = args->vm,
- .op = DRM_XE_UFENCE_WAIT_NEQ,
- .flags = DRM_XE_UFENCE_WAIT_VM_ERROR,
- .mask = DRM_XE_UFENCE_WAIT_U32,
-#define BASICALLY_FOREVER 0xffffffffffff
- .timeout = BASICALLY_FOREVER,
- };
-
- ret = igt_ioctl(fd, DRM_IOCTL_XE_WAIT_USER_FENCE, &wait);
-
- while (!ret) {
- struct drm_xe_vm_bind bind = {
- .vm_id = args->vm,
- .num_binds = 1,
- .bind.op = XE_VM_BIND_OP_RESTART,
- };
-
- /* Restart and wait for next error */
- igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_VM_BIND,
- &bind), 0);
- ret = igt_ioctl(fd, DRM_IOCTL_XE_WAIT_USER_FENCE, &wait);
- }
+ t->flags);
return NULL;
}
@@ -826,6 +776,10 @@ static void *vm_async_ops_err_thread(void *data)
* shared vm rebind err
* @shared-vm-userptr-rebind-err:
* shared vm userptr rebind err
+ * @rebind-err:
+ * rebind err
+ * @userptr-rebind-err:
+ * userptr rebind err
* @shared-vm-userptr-invalidate:
* shared vm userptr invalidate
* @shared-vm-userptr-invalidate-race:
@@ -842,7 +796,7 @@ static void *vm_async_ops_err_thread(void *data)
* fd userptr invalidate race
* @hang-basic:
* hang basic
- * @hang-userptr:
+ * @hang-userptr:
* hang userptr
* @hang-rebind:
* hang rebind
@@ -864,6 +818,10 @@ static void *vm_async_ops_err_thread(void *data)
* hang shared vm rebind err
* @hang-shared-vm-userptr-rebind-err:
* hang shared vm userptr rebind err
+ * @hang-rebind-err:
+ * hang rebind err
+ * @hang-userptr-rebind-err:
+ * hang userptr rebind err
* @hang-shared-vm-userptr-invalidate:
* hang shared vm userptr invalidate
* @hang-shared-vm-userptr-invalidate-race:
@@ -1019,7 +977,6 @@ static void threads(int fd, int flags)
int n_hw_engines = 0, class;
uint64_t i = 0;
uint32_t vm_legacy_mode = 0, vm_compute_mode = 0;
- struct vm_thread_data vm_err_thread = {};
bool go = false;
int n_threads = 0;
int gt;
@@ -1052,18 +1009,12 @@ static void threads(int fd, int flags)
if (flags & SHARED_VM) {
vm_legacy_mode = xe_vm_create(fd,
- DRM_XE_VM_CREATE_ASYNC_BIND_OPS,
+ DRM_XE_VM_CREATE_ASYNC_DEFAULT,
0);
vm_compute_mode = xe_vm_create(fd,
- DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
+ DRM_XE_VM_CREATE_ASYNC_DEFAULT |
DRM_XE_VM_CREATE_COMPUTE_MODE,
0);
-
- vm_err_thread.fd = fd;
- vm_err_thread.vm = vm_legacy_mode;
- pthread_create(&vm_err_thread.thread, 0,
- vm_async_ops_err_thread, &vm_err_thread);
-
}
xe_for_each_hw_engine(fd, hwe) {
@@ -1083,11 +1034,6 @@ static void threads(int fd, int flags)
threads_data[i].n_exec_queue = N_EXEC_QUEUE;
#define N_EXEC 1024
threads_data[i].n_exec = N_EXEC;
- if (flags & REBIND_ERROR)
- threads_data[i].rebind_error_inject =
- (N_EXEC / (n_hw_engines + 1)) * (i + 1);
- else
- threads_data[i].rebind_error_inject = -1;
threads_data[i].flags = flags;
if (flags & MIXED_MODE) {
threads_data[i].flags &= ~MIXED_MODE;
@@ -1190,8 +1136,6 @@ static void threads(int fd, int flags)
if (vm_compute_mode)
xe_vm_destroy(fd, vm_compute_mode);
free(threads_data);
- if (flags & SHARED_VM)
- pthread_join(vm_err_thread.thread, NULL);
pthread_barrier_destroy(&barrier);
}
@@ -1214,9 +1158,8 @@ igt_main
{ "shared-vm-rebind-bindexecqueue", SHARED_VM | REBIND |
BIND_EXEC_QUEUE },
{ "shared-vm-userptr-rebind", SHARED_VM | USERPTR | REBIND },
- { "shared-vm-rebind-err", SHARED_VM | REBIND | REBIND_ERROR },
- { "shared-vm-userptr-rebind-err", SHARED_VM | USERPTR |
- REBIND | REBIND_ERROR},
+ { "rebind-err", REBIND | REBIND_ERROR },
+ { "userptr-rebind-err", USERPTR | REBIND | REBIND_ERROR},
{ "shared-vm-userptr-invalidate", SHARED_VM | USERPTR |
INVALIDATE },
{ "shared-vm-userptr-invalidate-race", SHARED_VM | USERPTR |
@@ -1240,10 +1183,9 @@ igt_main
{ "hang-shared-vm-rebind", HANG | SHARED_VM | REBIND },
{ "hang-shared-vm-userptr-rebind", HANG | SHARED_VM | USERPTR |
REBIND },
- { "hang-shared-vm-rebind-err", HANG | SHARED_VM | REBIND |
+ { "hang-rebind-err", HANG | REBIND | REBIND_ERROR },
+ { "hang-userptr-rebind-err", HANG | USERPTR | REBIND |
REBIND_ERROR },
- { "hang-shared-vm-userptr-rebind-err", HANG | SHARED_VM |
- USERPTR | REBIND | REBIND_ERROR },
{ "hang-shared-vm-userptr-invalidate", HANG | SHARED_VM |
USERPTR | INVALIDATE },
{ "hang-shared-vm-userptr-invalidate-race", HANG | SHARED_VM |
diff --git a/tests/intel/xe_exercise_blt.c b/tests/intel/xe_exercise_blt.c
index ca85f5f18..2f349b16d 100644
--- a/tests/intel/xe_exercise_blt.c
+++ b/tests/intel/xe_exercise_blt.c
@@ -280,7 +280,7 @@ static void fast_copy_test(int xe,
region1 = igt_collection_get_value(regions, 0);
region2 = igt_collection_get_value(regions, 1);
- vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+ vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
exec_queue = xe_exec_queue_create(xe, vm, &inst, 0);
ctx = intel_ctx_xe(xe, vm, exec_queue, 0, 0, 0);
diff --git a/tests/intel/xe_guc_pc.c b/tests/intel/xe_guc_pc.c
index 0327d8e0e..3f2c4ae23 100644
--- a/tests/intel/xe_guc_pc.c
+++ b/tests/intel/xe_guc_pc.c
@@ -60,7 +60,7 @@ static void exec_basic(int fd, struct drm_xe_engine_class_instance *eci,
igt_assert(n_exec_queues <= MAX_N_EXEC_QUEUES);
igt_assert(n_execs > 0);
- vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+ vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
bo_size = sizeof(*data) * n_execs;
bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
xe_get_default_alignment(fd));
diff --git a/tests/intel/xe_huc_copy.c b/tests/intel/xe_huc_copy.c
index c9891a729..c71ff74a1 100644
--- a/tests/intel/xe_huc_copy.c
+++ b/tests/intel/xe_huc_copy.c
@@ -117,7 +117,7 @@ test_huc_copy(int fd)
{ .addr = ADDR_BATCH, .size = SIZE_BATCH }, // batch
};
- vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+ vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
exec_queue = xe_exec_queue_create_class(fd, vm, DRM_XE_ENGINE_CLASS_VIDEO_DECODE);
sync.flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL;
sync.handle = syncobj_create(fd, 0);
diff --git a/tests/intel/xe_intel_bb.c b/tests/intel/xe_intel_bb.c
index 0159a3164..26e4dcc85 100644
--- a/tests/intel/xe_intel_bb.c
+++ b/tests/intel/xe_intel_bb.c
@@ -191,7 +191,7 @@ static void simple_bb(struct buf_ops *bops, bool new_context)
intel_bb_reset(ibb, true);
if (new_context) {
- vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+ vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
ctx = xe_exec_queue_create(xe, vm, xe_hw_engine(xe, 0), 0);
intel_bb_destroy(ibb);
ibb = intel_bb_create_with_context(xe, ctx, vm, NULL, PAGE_SIZE);
diff --git a/tests/intel/xe_pm.c b/tests/intel/xe_pm.c
index fd28d5630..b2976ec84 100644
--- a/tests/intel/xe_pm.c
+++ b/tests/intel/xe_pm.c
@@ -259,7 +259,7 @@ test_exec(device_t device, struct drm_xe_engine_class_instance *eci,
if (check_rpm)
igt_assert(in_d3(device, d_state));
- vm = xe_vm_create(device.fd_xe, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+ vm = xe_vm_create(device.fd_xe, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
if (check_rpm)
igt_assert(out_of_d3(device, d_state));
diff --git a/tests/intel/xe_vm.c b/tests/intel/xe_vm.c
index 89df6149a..dd3302337 100644
--- a/tests/intel/xe_vm.c
+++ b/tests/intel/xe_vm.c
@@ -275,7 +275,7 @@ static void unbind_all(int fd, int n_vmas)
{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
};
- vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+ vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
bo = xe_bo_create(fd, 0, vm, bo_size);
for (i = 0; i < n_vmas; ++i)
@@ -322,171 +322,6 @@ static void userptr_invalid(int fd)
xe_vm_destroy(fd, vm);
}
-struct vm_thread_data {
- pthread_t thread;
- int fd;
- int vm;
- uint32_t bo;
- size_t bo_size;
- bool destroy;
-};
-
-/**
- * SUBTEST: vm-async-ops-err
- * Description: Test VM async ops error
- * Functionality: VM
- * Test category: negative test
- *
- * SUBTEST: vm-async-ops-err-destroy
- * Description: Test VM async ops error destroy
- * Functionality: VM
- * Test category: negative test
- */
-
-static void *vm_async_ops_err_thread(void *data)
-{
- struct vm_thread_data *args = data;
- int fd = args->fd;
- uint64_t addr = 0x201a0000;
- int num_binds = 0;
- int ret;
-
- struct drm_xe_wait_user_fence wait = {
- .vm_id = args->vm,
- .op = DRM_XE_UFENCE_WAIT_NEQ,
- .flags = DRM_XE_UFENCE_WAIT_VM_ERROR,
- .mask = DRM_XE_UFENCE_WAIT_U32,
- .timeout = MS_TO_NS(1000),
- };
-
- igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_WAIT_USER_FENCE,
- &wait), 0);
- if (args->destroy) {
- usleep(5000); /* Wait other binds to queue up */
- xe_vm_destroy(fd, args->vm);
- return NULL;
- }
-
- while (!ret) {
- struct drm_xe_vm_bind bind = {
- .vm_id = args->vm,
- .num_binds = 1,
- .bind.op = XE_VM_BIND_OP_RESTART,
- };
-
- /* VM sync ops should work */
- if (!(num_binds++ % 2)) {
- xe_vm_bind_sync(fd, args->vm, args->bo, 0, addr,
- args->bo_size);
- } else {
- xe_vm_unbind_sync(fd, args->vm, 0, addr,
- args->bo_size);
- addr += args->bo_size * 2;
- }
-
- /* Restart and wait for next error */
- igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_VM_BIND,
- &bind), 0);
- ret = igt_ioctl(fd, DRM_IOCTL_XE_WAIT_USER_FENCE, &wait);
- }
-
- return NULL;
-}
-
-static void vm_async_ops_err(int fd, bool destroy)
-{
- uint32_t vm;
- uint64_t addr = 0x1a0000;
- struct drm_xe_sync sync = {
- .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL,
- };
-#define N_BINDS 32
- struct vm_thread_data thread = {};
- uint32_t syncobjs[N_BINDS];
- size_t bo_size = 0x1000 * 32;
- uint32_t bo;
- int i, j;
-
- vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
- bo = xe_bo_create(fd, 0, vm, bo_size);
-
- thread.fd = fd;
- thread.vm = vm;
- thread.bo = bo;
- thread.bo_size = bo_size;
- thread.destroy = destroy;
- pthread_create(&thread.thread, 0, vm_async_ops_err_thread, &thread);
-
- for (i = 0; i < N_BINDS; i++)
- syncobjs[i] = syncobj_create(fd, 0);
-
- for (j = 0, i = 0; i < N_BINDS / 4; i++, j++) {
- sync.handle = syncobjs[j];
-#define INJECT_ERROR (0x1 << 31)
- if (i == N_BINDS / 8) /* Inject error on this bind */
- __xe_vm_bind_assert(fd, vm, 0, bo, 0,
- addr + i * bo_size * 2,
- bo_size, XE_VM_BIND_OP_MAP,
- XE_VM_BIND_FLAG_ASYNC |
- INJECT_ERROR, &sync, 1, 0, 0);
- else
- xe_vm_bind_async(fd, vm, 0, bo, 0,
- addr + i * bo_size * 2,
- bo_size, &sync, 1);
- }
-
- for (i = 0; i < N_BINDS / 4; i++, j++) {
- sync.handle = syncobjs[j];
- if (i == N_BINDS / 8)
- __xe_vm_bind_assert(fd, vm, 0, 0, 0,
- addr + i * bo_size * 2,
- bo_size, XE_VM_BIND_OP_UNMAP,
- XE_VM_BIND_FLAG_ASYNC |
- INJECT_ERROR, &sync, 1, 0, 0);
- else
- xe_vm_unbind_async(fd, vm, 0, 0,
- addr + i * bo_size * 2,
- bo_size, &sync, 1);
- }
-
- for (i = 0; i < N_BINDS / 4; i++, j++) {
- sync.handle = syncobjs[j];
- if (i == N_BINDS / 8)
- __xe_vm_bind_assert(fd, vm, 0, bo, 0,
- addr + i * bo_size * 2,
- bo_size, XE_VM_BIND_OP_MAP,
- XE_VM_BIND_FLAG_ASYNC |
- INJECT_ERROR, &sync, 1, 0, 0);
- else
- xe_vm_bind_async(fd, vm, 0, bo, 0,
- addr + i * bo_size * 2,
- bo_size, &sync, 1);
- }
-
- for (i = 0; i < N_BINDS / 4; i++, j++) {
- sync.handle = syncobjs[j];
- if (i == N_BINDS / 8)
- __xe_vm_bind_assert(fd, vm, 0, 0, 0,
- addr + i * bo_size * 2,
- bo_size, XE_VM_BIND_OP_UNMAP,
- XE_VM_BIND_FLAG_ASYNC |
- INJECT_ERROR, &sync, 1, 0, 0);
- else
- xe_vm_unbind_async(fd, vm, 0, 0,
- addr + i * bo_size * 2,
- bo_size, &sync, 1);
- }
-
- for (i = 0; i < N_BINDS; i++)
- igt_assert(syncobj_wait(fd, &syncobjs[i], 1, INT64_MAX, 0,
- NULL));
-
- if (!destroy)
- xe_vm_destroy(fd, vm);
-
- pthread_join(thread.thread, NULL);
-}
-
/**
* SUBTEST: shared-%s-page
* Description: Test shared arg[1] page
@@ -537,7 +372,7 @@ shared_pte_page(int fd, struct drm_xe_engine_class_instance *eci, int n_bo,
data = malloc(sizeof(*data) * n_bo);
igt_assert(data);
- vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+ vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
bo_size = sizeof(struct shared_pte_page_data);
bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
xe_get_default_alignment(fd));
@@ -718,7 +553,7 @@ test_bind_execqueues_independent(int fd, struct drm_xe_engine_class_instance *ec
struct xe_spin_opts spin_opts = { .preempt = true };
int i, b;
- vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+ vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
bo_size = sizeof(*data) * N_EXEC_QUEUES;
bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
xe_get_default_alignment(fd));
@@ -728,7 +563,7 @@ test_bind_execqueues_independent(int fd, struct drm_xe_engine_class_instance *ec
for (i = 0; i < N_EXEC_QUEUES; i++) {
exec_queues[i] = xe_exec_queue_create(fd, vm, eci, 0);
- bind_exec_queues[i] = xe_bind_exec_queue_create(fd, vm, 0);
+ bind_exec_queues[i] = xe_bind_exec_queue_create(fd, vm, 0, true);
syncobjs[i] = syncobj_create(fd, 0);
}
syncobjs[N_EXEC_QUEUES] = syncobj_create(fd, 0);
@@ -898,7 +733,7 @@ test_bind_array(int fd, struct drm_xe_engine_class_instance *eci, int n_execs,
igt_assert(n_execs <= BIND_ARRAY_MAX_N_EXEC);
- vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+ vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
bo_size = sizeof(*data) * n_execs;
bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
xe_get_default_alignment(fd));
@@ -908,7 +743,7 @@ test_bind_array(int fd, struct drm_xe_engine_class_instance *eci, int n_execs,
data = xe_bo_map(fd, bo, bo_size);
if (flags & BIND_ARRAY_BIND_EXEC_QUEUE_FLAG)
- bind_exec_queue = xe_bind_exec_queue_create(fd, vm, 0);
+ bind_exec_queue = xe_bind_exec_queue_create(fd, vm, 0, true);
exec_queue = xe_exec_queue_create(fd, vm, eci, 0);
for (i = 0; i < n_execs; ++i) {
@@ -1092,7 +927,7 @@ test_large_binds(int fd, struct drm_xe_engine_class_instance *eci,
}
igt_assert(n_exec_queues <= MAX_N_EXEC_QUEUES);
- vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+ vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
if (flags & LARGE_BIND_FLAG_USERPTR) {
map = aligned_alloc(xe_get_default_alignment(fd), bo_size);
@@ -1384,7 +1219,7 @@ test_munmap_style_unbind(int fd, struct drm_xe_engine_class_instance *eci,
unbind_n_page_offset *= n_page_per_2mb;
}
- vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+ vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
bo_size = page_size * bo_n_pages;
if (flags & MAP_FLAG_USERPTR) {
@@ -1684,7 +1519,7 @@ test_mmap_style_bind(int fd, struct drm_xe_engine_class_instance *eci,
unbind_n_page_offset *= n_page_per_2mb;
}
- vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+ vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
bo_size = page_size * bo_n_pages;
if (flags & MAP_FLAG_USERPTR) {
@@ -2001,12 +1836,6 @@ igt_main
igt_subtest("userptr-invalid")
userptr_invalid(fd);
- igt_subtest("vm-async-ops-err")
- vm_async_ops_err(fd, false);
-
- igt_subtest("vm-async-ops-err-destroy")
- vm_async_ops_err(fd, true);
-
igt_subtest("shared-pte-page")
xe_for_each_hw_engine(fd, hwe)
shared_pte_page(fd, hwe, 4,
diff --git a/tests/intel/xe_waitfence.c b/tests/intel/xe_waitfence.c
index 34005fbeb..e0116f181 100644
--- a/tests/intel/xe_waitfence.c
+++ b/tests/intel/xe_waitfence.c
@@ -34,7 +34,7 @@ static void do_bind(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
sync[0].addr = to_user_pointer(&wait_fence);
sync[0].timeline_value = val;
- xe_vm_bind(fd, vm, bo, offset, addr, size, sync, 1);
+ xe_vm_bind_async(fd, vm, 0, bo, offset, addr, size, sync, 1);
}
enum waittype {
@@ -63,7 +63,7 @@ waitfence(int fd, enum waittype wt)
uint32_t bo_7;
int64_t timeout;
- uint32_t vm = xe_vm_create(fd, 0, 0);
+ uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
bo_1 = xe_bo_create_flags(fd, vm, 0x40000, MY_FLAG);
do_bind(fd, vm, bo_1, 0, 0x200000, 0x40000, 1);
bo_2 = xe_bo_create_flags(fd, vm, 0x40000, MY_FLAG);
@@ -96,21 +96,6 @@ waitfence(int fd, enum waittype wt)
", elapsed: %" PRId64 "\n",
timeout, signalled, signalled - current);
}
-
- xe_vm_unbind_sync(fd, vm, 0, 0x200000, 0x40000);
- xe_vm_unbind_sync(fd, vm, 0, 0xc0000000, 0x40000);
- xe_vm_unbind_sync(fd, vm, 0, 0x180000000, 0x40000);
- xe_vm_unbind_sync(fd, vm, 0, 0x140000000, 0x10000);
- xe_vm_unbind_sync(fd, vm, 0, 0x100000000, 0x100000);
- xe_vm_unbind_sync(fd, vm, 0, 0xc0040000, 0x1c0000);
- xe_vm_unbind_sync(fd, vm, 0, 0xeffff0000, 0x10000);
- gem_close(fd, bo_7);
- gem_close(fd, bo_6);
- gem_close(fd, bo_5);
- gem_close(fd, bo_4);
- gem_close(fd, bo_3);
- gem_close(fd, bo_2);
- gem_close(fd, bo_1);
}
igt_main
--
2.34.1
* Re: [igt-dev] [PATCH v4 10/14] xe: Update to new VM bind uAPI
2023-09-28 11:05 ` [igt-dev] [PATCH v4 10/14] xe: Update to new VM bind uAPI Francois Dugast
@ 2023-09-29 16:32 ` Souza, Jose
2023-10-03 9:35 ` Francois Dugast
0 siblings, 1 reply; 31+ messages in thread
From: Souza, Jose @ 2023-09-29 16:32 UTC (permalink / raw)
To: igt-dev@lists.freedesktop.org, Dugast, Francois; +Cc: Vivi, Rodrigo
On Thu, 2023-09-28 at 11:05 +0000, Francois Dugast wrote:
> From: Matthew Brost <matthew.brost@intel.com>
>
> Sync vs. async changes and new error handling.
>
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> [Rodrigo rebased and fixed conflicts]
> ---
> include/drm-uapi/xe_drm.h | 50 ++------
> lib/igt_fb.c | 2 +-
> lib/intel_batchbuffer.c | 2 +-
> lib/intel_compute.c | 2 +-
> lib/xe/xe_ioctl.c | 15 +--
> lib/xe/xe_ioctl.h | 3 +-
> lib/xe/xe_query.c | 2 +-
> tests/intel/xe_ccs.c | 4 +-
> tests/intel/xe_create.c | 6 +-
> tests/intel/xe_drm_fdinfo.c | 4 +-
> tests/intel/xe_evict.c | 23 ++--
> tests/intel/xe_exec_balancer.c | 6 +-
> tests/intel/xe_exec_basic.c | 6 +-
> tests/intel/xe_exec_compute_mode.c | 6 +-
> tests/intel/xe_exec_fault_mode.c | 6 +-
> tests/intel/xe_exec_reset.c | 8 +-
> tests/intel/xe_exec_store.c | 4 +-
> tests/intel/xe_exec_threads.c | 112 +++++------------
> tests/intel/xe_exercise_blt.c | 2 +-
> tests/intel/xe_guc_pc.c | 2 +-
> tests/intel/xe_huc_copy.c | 2 +-
> tests/intel/xe_intel_bb.c | 2 +-
> tests/intel/xe_pm.c | 2 +-
> tests/intel/xe_vm.c | 189 ++---------------------------
> tests/intel/xe_waitfence.c | 19 +--
> 25 files changed, 102 insertions(+), 377 deletions(-)
>
> diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
> index 0a05a12b2..80b4c76f3 100644
> --- a/include/drm-uapi/xe_drm.h
> +++ b/include/drm-uapi/xe_drm.h
> @@ -134,10 +134,11 @@ struct drm_xe_engine_class_instance {
> #define DRM_XE_ENGINE_CLASS_VIDEO_ENHANCE 3
> #define DRM_XE_ENGINE_CLASS_COMPUTE 4
> /*
> - * Kernel only class (not actual hardware engine class). Used for
> + * Kernel only classes (not actual hardware engine class). Used for
> * creating ordered queues of VM bind operations.
> */
> -#define DRM_XE_ENGINE_CLASS_VM_BIND 5
> +#define DRM_XE_ENGINE_CLASS_VM_BIND_ASYNC 5
> +#define DRM_XE_ENGINE_CLASS_VM_BIND_SYNC 6
> __u16 engine_class;
>
> __u16 engine_instance;
> @@ -577,7 +578,7 @@ struct drm_xe_vm_create {
>
> #define DRM_XE_VM_CREATE_SCRATCH_PAGE (0x1 << 0)
> #define DRM_XE_VM_CREATE_COMPUTE_MODE (0x1 << 1)
> -#define DRM_XE_VM_CREATE_ASYNC_BIND_OPS (0x1 << 2)
> +#define DRM_XE_VM_CREATE_ASYNC_DEFAULT (0x1 << 2)
> #define DRM_XE_VM_CREATE_FAULT_MODE (0x1 << 3)
> /** @flags: Flags */
> __u32 flags;
> @@ -637,34 +638,12 @@ struct drm_xe_vm_bind_op {
> #define XE_VM_BIND_OP_MAP 0x0
> #define XE_VM_BIND_OP_UNMAP 0x1
> #define XE_VM_BIND_OP_MAP_USERPTR 0x2
> -#define XE_VM_BIND_OP_RESTART 0x3
> -#define XE_VM_BIND_OP_UNMAP_ALL 0x4
> -#define XE_VM_BIND_OP_PREFETCH 0x5
> +#define XE_VM_BIND_OP_UNMAP_ALL 0x3
> +#define XE_VM_BIND_OP_PREFETCH 0x4
> /** @op: Bind operation to perform */
> __u32 op;
>
> #define XE_VM_BIND_FLAG_READONLY (0x1 << 0)
> - /*
> - * A bind ops completions are always async, hence the support for out
> - * sync. This flag indicates the allocation of the memory for new page
> - * tables and the job to program the pages tables is asynchronous
> - * relative to the IOCTL. That part of a bind operation can fail under
> - * memory pressure, the job in practice can't fail unless the system is
> - * totally shot.
> - *
> - * If this flag is clear and the IOCTL doesn't return an error, in
> - * practice the bind op is good and will complete.
> - *
> - * If this flag is set and doesn't return an error, the bind op can
> - * still fail and recovery is needed. It should free memory
> - * via non-async unbinds, and then restart all queued async binds op via
> - * XE_VM_BIND_OP_RESTART. Or alternatively the user should destroy the
> - * VM.
> - *
> - * This flag is only allowed when DRM_XE_VM_CREATE_ASYNC_BIND_OPS is
> - * configured in the VM and must be set if the VM is configured with
> - * DRM_XE_VM_CREATE_ASYNC_BIND_OPS and not in an error state.
> - */
> #define XE_VM_BIND_FLAG_ASYNC (0x1 << 1)
> /*
> * Valid on a faulting VM only, do the MAP operation immediately rather
> @@ -905,18 +884,10 @@ struct drm_xe_wait_user_fence {
> /** @extensions: Pointer to the first extension struct, if any */
> __u64 extensions;
>
> - union {
> - /**
> - * @addr: user pointer address to wait on, must qword aligned
> - */
> - __u64 addr;
> -
> - /**
> - * @vm_id: The ID of the VM which encounter an error used with
> - * DRM_XE_UFENCE_WAIT_VM_ERROR. Upper 32 bits must be clear.
> - */
> - __u64 vm_id;
> - };
> + /**
> + * @addr: user pointer address to wait on, must be qword aligned
> + */
> + __u64 addr;
>
> #define DRM_XE_UFENCE_WAIT_EQ 0
> #define DRM_XE_UFENCE_WAIT_NEQ 1
> @@ -929,7 +900,6 @@ struct drm_xe_wait_user_fence {
>
> #define DRM_XE_UFENCE_WAIT_SOFT_OP (1 << 0) /* e.g. Wait on VM bind */
> #define DRM_XE_UFENCE_WAIT_ABSTIME (1 << 1)
> -#define DRM_XE_UFENCE_WAIT_VM_ERROR (1 << 2)
> /** @flags: wait flags */
> __u16 flags;
>
> diff --git a/lib/igt_fb.c b/lib/igt_fb.c
> index f0c0681ab..34934855a 100644
> --- a/lib/igt_fb.c
> +++ b/lib/igt_fb.c
> @@ -2892,7 +2892,7 @@ static void blitcopy(const struct igt_fb *dst_fb,
> &bb_size,
> mem_region) == 0);
> } else if (is_xe) {
> - vm = xe_vm_create(dst_fb->fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> + vm = xe_vm_create(dst_fb->fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> exec_queue = xe_exec_queue_create(dst_fb->fd, vm, &inst, 0);
> xe_ctx = intel_ctx_xe(dst_fb->fd, vm, exec_queue, 0, 0, 0);
> mem_region = vram_if_possible(dst_fb->fd, 0);
> diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
> index 6e668d28c..df82ef5f5 100644
> --- a/lib/intel_batchbuffer.c
> +++ b/lib/intel_batchbuffer.c
> @@ -953,7 +953,7 @@ __intel_bb_create(int fd, uint32_t ctx, uint32_t vm, const intel_ctx_cfg_t *cfg,
>
> if (!vm) {
> igt_assert_f(!ctx, "No vm provided for engine");
> - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> }
>
> ibb->uses_full_ppgtt = true;
> diff --git a/lib/intel_compute.c b/lib/intel_compute.c
> index 0c30f39c1..1ae33cdfc 100644
> --- a/lib/intel_compute.c
> +++ b/lib/intel_compute.c
> @@ -79,7 +79,7 @@ static void bo_execenv_create(int fd, struct bo_execenv *execenv)
> else
> engine_class = DRM_XE_ENGINE_CLASS_COMPUTE;
>
> - execenv->vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> + execenv->vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> execenv->exec_queue = xe_exec_queue_create_class(fd, execenv->vm,
> engine_class);
> }
> diff --git a/lib/xe/xe_ioctl.c b/lib/xe/xe_ioctl.c
> index 48cd185de..895e3bd4e 100644
> --- a/lib/xe/xe_ioctl.c
> +++ b/lib/xe/xe_ioctl.c
> @@ -201,16 +201,8 @@ void xe_vm_unbind_async(int fd, uint32_t vm, uint32_t exec_queue,
> static void __xe_vm_bind_sync(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
> uint64_t addr, uint64_t size, uint32_t op)
> {
> - struct drm_xe_sync sync = {
> - .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL,
> - .handle = syncobj_create(fd, 0),
> - };
> -
> - __xe_vm_bind_assert(fd, vm, 0, bo, offset, addr, size, op, 0, &sync, 1,
> + __xe_vm_bind_assert(fd, vm, 0, bo, offset, addr, size, op, 0, NULL, 0,
> 0, 0);
> -
> - igt_assert(syncobj_wait(fd, &sync.handle, 1, INT64_MAX, 0, NULL));
> - syncobj_destroy(fd, sync.handle);
> }
>
> void xe_vm_bind_sync(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
> @@ -276,10 +268,11 @@ uint32_t xe_bo_create(int fd, int gt, uint32_t vm, uint64_t size)
> return create.handle;
> }
>
> -uint32_t xe_bind_exec_queue_create(int fd, uint32_t vm, uint64_t ext)
> +uint32_t xe_bind_exec_queue_create(int fd, uint32_t vm, uint64_t ext, bool async)
> {
> struct drm_xe_engine_class_instance instance = {
> - .engine_class = DRM_XE_ENGINE_CLASS_VM_BIND,
> + .engine_class = async ? DRM_XE_ENGINE_CLASS_VM_BIND_ASYNC :
> + DRM_XE_ENGINE_CLASS_VM_BIND_SYNC,
> };
> struct drm_xe_exec_queue_create create = {
> .extensions = ext,
> diff --git a/lib/xe/xe_ioctl.h b/lib/xe/xe_ioctl.h
> index f0e4109dc..a8dbcf376 100644
> --- a/lib/xe/xe_ioctl.h
> +++ b/lib/xe/xe_ioctl.h
> @@ -71,7 +71,8 @@ uint32_t xe_bo_create(int fd, int gt, uint32_t vm, uint64_t size);
> uint32_t xe_exec_queue_create(int fd, uint32_t vm,
> struct drm_xe_engine_class_instance *instance,
> uint64_t ext);
> -uint32_t xe_bind_exec_queue_create(int fd, uint32_t vm, uint64_t ext);
> +uint32_t xe_bind_exec_queue_create(int fd, uint32_t vm, uint64_t ext,
> + bool async);
> uint32_t xe_exec_queue_create_class(int fd, uint32_t vm, uint16_t class);
> void xe_exec_queue_destroy(int fd, uint32_t exec_queue);
> uint64_t xe_bo_mmap_offset(int fd, uint32_t bo);
> diff --git a/lib/xe/xe_query.c b/lib/xe/xe_query.c
> index c356abe1e..ab7b31188 100644
> --- a/lib/xe/xe_query.c
> +++ b/lib/xe/xe_query.c
> @@ -316,7 +316,7 @@ bool xe_supports_faults(int fd)
> bool supports_faults;
>
> struct drm_xe_vm_create create = {
> - .flags = DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
> + .flags = DRM_XE_VM_CREATE_ASYNC_DEFAULT |
> DRM_XE_VM_CREATE_FAULT_MODE,
> };
>
> diff --git a/tests/intel/xe_ccs.c b/tests/intel/xe_ccs.c
> index 20bbc4448..300b734c8 100644
> --- a/tests/intel/xe_ccs.c
> +++ b/tests/intel/xe_ccs.c
> @@ -343,7 +343,7 @@ static void block_copy(int xe,
> uint32_t vm, exec_queue;
>
> if (config->new_ctx) {
> - vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> + vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> exec_queue = xe_exec_queue_create(xe, vm, &inst, 0);
> surf_ctx = intel_ctx_xe(xe, vm, exec_queue, 0, 0, 0);
> surf_ahnd = intel_allocator_open(xe, surf_ctx->vm,
> @@ -550,7 +550,7 @@ static void block_copy_test(int xe,
> copyfns[copy_function].suffix) {
> uint32_t sync_bind, sync_out;
>
> - vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> + vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> exec_queue = xe_exec_queue_create(xe, vm, &inst, 0);
> sync_bind = syncobj_create(xe, 0);
> sync_out = syncobj_create(xe, 0);
> diff --git a/tests/intel/xe_create.c b/tests/intel/xe_create.c
> index 8d845e5c8..d99bd51cf 100644
> --- a/tests/intel/xe_create.c
> +++ b/tests/intel/xe_create.c
> @@ -54,7 +54,7 @@ static void create_invalid_size(int fd)
> uint32_t handle;
> int ret;
>
> - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
>
> xe_for_each_mem_region(fd, memreg, region) {
> memregion = xe_mem_region(fd, region);
> @@ -140,7 +140,7 @@ static void create_execqueues(int fd, enum exec_queue_destroy ed)
>
> fd = drm_reopen_driver(fd);
> num_engines = xe_number_hw_engines(fd);
> - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
>
> exec_queues_per_process = max_t(uint32_t, 1, MAXEXECQUEUES / nproc);
> igt_debug("nproc: %u, exec_queues per process: %u\n", nproc, exec_queues_per_process);
> @@ -199,7 +199,7 @@ static void create_massive_size(int fd)
> uint32_t handle;
> int ret;
>
> - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
>
> xe_for_each_mem_region(fd, memreg, region) {
> ret = __create_bo(fd, vm, -1ULL << 32, region, &handle);
> diff --git a/tests/intel/xe_drm_fdinfo.c b/tests/intel/xe_drm_fdinfo.c
> index 22e410e14..64168ed19 100644
> --- a/tests/intel/xe_drm_fdinfo.c
> +++ b/tests/intel/xe_drm_fdinfo.c
> @@ -71,7 +71,7 @@ static void test_active(int fd, struct drm_xe_engine_class_instance *eci)
> struct xe_spin_opts spin_opts = { .preempt = true };
> int i, b, ret;
>
> - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> bo_size = sizeof(*data) * N_EXEC_QUEUES;
> bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> xe_get_default_alignment(fd));
> @@ -90,7 +90,7 @@ static void test_active(int fd, struct drm_xe_engine_class_instance *eci)
>
> for (i = 0; i < N_EXEC_QUEUES; i++) {
> exec_queues[i] = xe_exec_queue_create(fd, vm, eci, 0);
> - bind_exec_queues[i] = xe_bind_exec_queue_create(fd, vm, 0);
> + bind_exec_queues[i] = xe_bind_exec_queue_create(fd, vm, 0, true);
> syncobjs[i] = syncobj_create(fd, 0);
> }
> syncobjs[N_EXEC_QUEUES] = syncobj_create(fd, 0);
> diff --git a/tests/intel/xe_evict.c b/tests/intel/xe_evict.c
> index 5d8981f8d..eec001218 100644
> --- a/tests/intel/xe_evict.c
> +++ b/tests/intel/xe_evict.c
> @@ -63,15 +63,17 @@ test_evict(int fd, struct drm_xe_engine_class_instance *eci,
>
> fd = drm_open_driver(DRIVER_XE);
>
> - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> if (flags & BIND_EXEC_QUEUE)
> - bind_exec_queues[0] = xe_bind_exec_queue_create(fd, vm, 0);
> + bind_exec_queues[0] = xe_bind_exec_queue_create(fd, vm, 0, true);
> if (flags & MULTI_VM) {
> - vm2 = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> - vm3 = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> + vm2 = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> + vm3 = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> if (flags & BIND_EXEC_QUEUE) {
> - bind_exec_queues[1] = xe_bind_exec_queue_create(fd, vm2, 0);
> - bind_exec_queues[2] = xe_bind_exec_queue_create(fd, vm3, 0);
> + bind_exec_queues[1] = xe_bind_exec_queue_create(fd, vm2,
> + 0, true);
> + bind_exec_queues[2] = xe_bind_exec_queue_create(fd, vm3,
> + 0, true);
> }
> }
>
> @@ -240,15 +242,16 @@ test_evict_cm(int fd, struct drm_xe_engine_class_instance *eci,
>
> fd = drm_open_driver(DRIVER_XE);
>
> - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
> + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
> DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
> if (flags & BIND_EXEC_QUEUE)
> - bind_exec_queues[0] = xe_bind_exec_queue_create(fd, vm, 0);
> + bind_exec_queues[0] = xe_bind_exec_queue_create(fd, vm, 0, true);
> if (flags & MULTI_VM) {
> - vm2 = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
> + vm2 = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
> DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
> if (flags & BIND_EXEC_QUEUE)
> - bind_exec_queues[1] = xe_bind_exec_queue_create(fd, vm2, 0);
> + bind_exec_queues[1] = xe_bind_exec_queue_create(fd, vm2,
> + 0, true);
> }
>
> for (i = 0; i < n_exec_queues; i++) {
> diff --git a/tests/intel/xe_exec_balancer.c b/tests/intel/xe_exec_balancer.c
> index f4f5440f4..3ca3de881 100644
> --- a/tests/intel/xe_exec_balancer.c
> +++ b/tests/intel/xe_exec_balancer.c
> @@ -66,7 +66,7 @@ static void test_all_active(int fd, int gt, int class)
> if (num_placements < 2)
> return;
>
> - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> bo_size = sizeof(*data) * num_placements;
> bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd), xe_get_default_alignment(fd));
>
> @@ -207,7 +207,7 @@ test_exec(int fd, int gt, int class, int n_exec_queues, int n_execs,
> if (num_placements < 2)
> return;
>
> - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> bo_size = sizeof(*data) * n_execs;
> bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd), xe_get_default_alignment(fd));
>
> @@ -433,7 +433,7 @@ test_cm(int fd, int gt, int class, int n_exec_queues, int n_execs,
> if (num_placements < 2)
> return;
>
> - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
> + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
> DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
> bo_size = sizeof(*data) * n_execs;
> bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> diff --git a/tests/intel/xe_exec_basic.c b/tests/intel/xe_exec_basic.c
> index e29398aaa..8dbce524d 100644
> --- a/tests/intel/xe_exec_basic.c
> +++ b/tests/intel/xe_exec_basic.c
> @@ -109,7 +109,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
> igt_assert(n_vm <= MAX_N_EXEC_QUEUES);
>
> for (i = 0; i < n_vm; ++i)
> - vm[i] = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> + vm[i] = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> bo_size = sizeof(*data) * n_execs;
> bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> xe_get_default_alignment(fd));
> @@ -151,7 +151,9 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
>
> exec_queues[i] = xe_exec_queue_create(fd, __vm, eci, 0);
> if (flags & BIND_EXEC_QUEUE)
> - bind_exec_queues[i] = xe_bind_exec_queue_create(fd, __vm, 0);
> + bind_exec_queues[i] = xe_bind_exec_queue_create(fd,
> + __vm, 0,
> + true);
> else
> bind_exec_queues[i] = 0;
> syncobjs[i] = syncobj_create(fd, 0);
> diff --git a/tests/intel/xe_exec_compute_mode.c b/tests/intel/xe_exec_compute_mode.c
> index 02e7ef201..b0a677dca 100644
> --- a/tests/intel/xe_exec_compute_mode.c
> +++ b/tests/intel/xe_exec_compute_mode.c
> @@ -113,7 +113,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
>
> igt_assert(n_exec_queues <= MAX_N_EXECQUEUES);
>
> - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
> + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
> DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
> bo_size = sizeof(*data) * n_execs;
> bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> @@ -123,7 +123,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
> exec_queues[i] = xe_exec_queue_create(fd, vm, eci, 0);
> if (flags & BIND_EXECQUEUE)
> bind_exec_queues[i] =
> - xe_bind_exec_queue_create(fd, vm, 0);
> + xe_bind_exec_queue_create(fd, vm, 0, true);
> else
> bind_exec_queues[i] = 0;
> };
> @@ -151,7 +151,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
> exec_queues[i] = xe_exec_queue_create(fd, vm, eci, 0);
> if (flags & BIND_EXECQUEUE)
> bind_exec_queues[i] =
> - xe_bind_exec_queue_create(fd, vm, 0);
> + xe_bind_exec_queue_create(fd, vm, 0, true);
> else
> bind_exec_queues[i] = 0;
> };
> diff --git a/tests/intel/xe_exec_fault_mode.c b/tests/intel/xe_exec_fault_mode.c
> index c5d6bdcd5..92d8690a1 100644
> --- a/tests/intel/xe_exec_fault_mode.c
> +++ b/tests/intel/xe_exec_fault_mode.c
> @@ -131,7 +131,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
>
> igt_assert(n_exec_queues <= MAX_N_EXEC_QUEUES);
>
> - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
> + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
> DRM_XE_VM_CREATE_FAULT_MODE, 0);
> bo_size = sizeof(*data) * n_execs;
> bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> @@ -165,7 +165,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
> exec_queues[i] = xe_exec_queue_create(fd, vm, eci, 0);
> if (flags & BIND_EXEC_QUEUE)
> bind_exec_queues[i] =
> - xe_bind_exec_queue_create(fd, vm, 0);
> + xe_bind_exec_queue_create(fd, vm, 0, true);
> else
> bind_exec_queues[i] = 0;
> };
> @@ -375,7 +375,7 @@ test_atomic(int fd, struct drm_xe_engine_class_instance *eci,
> uint32_t *ptr;
> int i, b, wait_idx = 0;
>
> - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
> + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
> DRM_XE_VM_CREATE_FAULT_MODE, 0);
> bo_size = sizeof(*data) * n_atomic;
> bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> diff --git a/tests/intel/xe_exec_reset.c b/tests/intel/xe_exec_reset.c
> index ca8d7cc13..44248776b 100644
> --- a/tests/intel/xe_exec_reset.c
> +++ b/tests/intel/xe_exec_reset.c
> @@ -45,7 +45,7 @@ static void test_spin(int fd, struct drm_xe_engine_class_instance *eci)
> struct xe_spin *spin;
> struct xe_spin_opts spin_opts = { .addr = addr, .preempt = false };
>
> - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> bo_size = sizeof(*spin);
> bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> xe_get_default_alignment(fd));
> @@ -176,7 +176,7 @@ test_balancer(int fd, int gt, int class, int n_exec_queues, int n_execs,
> if (num_placements < 2)
> return;
>
> - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> bo_size = sizeof(*data) * n_execs;
> bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> xe_get_default_alignment(fd));
> @@ -362,7 +362,7 @@ test_legacy_mode(int fd, struct drm_xe_engine_class_instance *eci,
> if (flags & CLOSE_FD)
> fd = drm_open_driver(DRIVER_XE);
>
> - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> bo_size = sizeof(*data) * n_execs;
> bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> xe_get_default_alignment(fd));
> @@ -528,7 +528,7 @@ test_compute_mode(int fd, struct drm_xe_engine_class_instance *eci,
> if (flags & CLOSE_FD)
> fd = drm_open_driver(DRIVER_XE);
>
> - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
> + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
> DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
> bo_size = sizeof(*data) * n_execs;
> bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> diff --git a/tests/intel/xe_exec_store.c b/tests/intel/xe_exec_store.c
> index 14f7c9bec..90684b8cb 100644
> --- a/tests/intel/xe_exec_store.c
> +++ b/tests/intel/xe_exec_store.c
> @@ -75,7 +75,7 @@ static void store(int fd)
> syncobj = syncobj_create(fd, 0);
> sync.handle = syncobj;
>
> - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> bo_size = sizeof(*data);
> bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> xe_get_default_alignment(fd));
> @@ -132,7 +132,7 @@ static void store_all(int fd, int gt, int class)
> struct drm_xe_engine_class_instance *hwe;
> int i, num_placements = 0;
>
> - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> bo_size = sizeof(*data);
> bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> xe_get_default_alignment(fd));
> diff --git a/tests/intel/xe_exec_threads.c b/tests/intel/xe_exec_threads.c
> index c9a51fc00..bb16bdd88 100644
> --- a/tests/intel/xe_exec_threads.c
> +++ b/tests/intel/xe_exec_threads.c
> @@ -77,7 +77,7 @@ test_balancer(int fd, int gt, uint32_t vm, uint64_t addr, uint64_t userptr,
> }
>
> if (!vm) {
> - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> owns_vm = true;
> }
>
> @@ -285,7 +285,7 @@ test_compute_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
> }
>
> if (!vm) {
> - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
> + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
> DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
> owns_vm = true;
> }
> @@ -454,7 +454,7 @@ test_compute_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
> static void
> test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
> struct drm_xe_engine_class_instance *eci, int n_exec_queues,
> - int n_execs, int rebind_error_inject, unsigned int flags)
> + int n_execs, unsigned int flags)
> {
> struct drm_xe_sync sync[2] = {
> { .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> @@ -489,7 +489,7 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
> }
>
> if (!vm) {
> - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> owns_vm = true;
> }
>
> @@ -531,7 +531,8 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
> else
> exec_queues[i] = xe_exec_queue_create(fd, vm, eci, 0);
> if (flags & BIND_EXEC_QUEUE)
> - bind_exec_queues[i] = xe_bind_exec_queue_create(fd, vm, 0);
> + bind_exec_queues[i] = xe_bind_exec_queue_create(fd, vm,
> + 0, true);
> else
> bind_exec_queues[i] = 0;
> syncobjs[i] = syncobj_create(fd, 0);
> @@ -583,8 +584,7 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
> exec.address = exec_addr;
> if (e != i && !(flags & HANG))
> syncobj_reset(fd, &syncobjs[e], 1);
> - if ((flags & HANG && e == hang_exec_queue) ||
> - rebind_error_inject > 0) {
> + if ((flags & HANG && e == hang_exec_queue)) {
> int err;
>
> do {
> @@ -594,20 +594,10 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
> xe_exec(fd, &exec);
> }
>
> - if (flags & REBIND && i &&
> - (!(i & 0x1f) || rebind_error_inject == i)) {
> -#define INJECT_ERROR (0x1 << 31)
> - if (rebind_error_inject == i)
> - __xe_vm_bind_assert(fd, vm, bind_exec_queues[e],
> - 0, 0, addr, bo_size,
> - XE_VM_BIND_OP_UNMAP,
> - XE_VM_BIND_FLAG_ASYNC |
> - INJECT_ERROR, sync_all,
> - n_exec_queues, 0, 0);
> - else
> - xe_vm_unbind_async(fd, vm, bind_exec_queues[e],
> - 0, addr, bo_size,
> - sync_all, n_exec_queues);
> + if (flags & REBIND && i && !(i & 0x1f)) {
> + xe_vm_unbind_async(fd, vm, bind_exec_queues[e],
> + 0, addr, bo_size,
> + sync_all, n_exec_queues);
>
> sync[0].flags |= DRM_XE_SYNC_SIGNAL;
> addr += bo_size;
> @@ -709,7 +699,6 @@ struct thread_data {
> int n_exec_queue;
> int n_exec;
> int flags;
> - int rebind_error_inject;
> bool *go;
> };
>
> @@ -733,46 +722,7 @@ static void *thread(void *data)
> else
> test_legacy_mode(t->fd, t->vm_legacy_mode, t->addr, t->userptr,
> t->eci, t->n_exec_queue, t->n_exec,
> - t->rebind_error_inject, t->flags);
> -
> - return NULL;
> -}
> -
> -struct vm_thread_data {
> - pthread_t thread;
> - int fd;
> - int vm;
> -};
> -
> -static void *vm_async_ops_err_thread(void *data)
> -{
> - struct vm_thread_data *args = data;
> - int fd = args->fd;
> - int ret;
> -
> - struct drm_xe_wait_user_fence wait = {
> - .vm_id = args->vm,
> - .op = DRM_XE_UFENCE_WAIT_NEQ,
> - .flags = DRM_XE_UFENCE_WAIT_VM_ERROR,
> - .mask = DRM_XE_UFENCE_WAIT_U32,
> -#define BASICALLY_FOREVER 0xffffffffffff
> - .timeout = BASICALLY_FOREVER,
> - };
> -
> - ret = igt_ioctl(fd, DRM_IOCTL_XE_WAIT_USER_FENCE, &wait);
> -
> - while (!ret) {
> - struct drm_xe_vm_bind bind = {
> - .vm_id = args->vm,
> - .num_binds = 1,
> - .bind.op = XE_VM_BIND_OP_RESTART,
> - };
> -
> - /* Restart and wait for next error */
> - igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_VM_BIND,
> - &bind), 0);
> - ret = igt_ioctl(fd, DRM_IOCTL_XE_WAIT_USER_FENCE, &wait);
> - }
> + t->flags);
>
> return NULL;
> }
> @@ -826,6 +776,10 @@ static void *vm_async_ops_err_thread(void *data)
> * shared vm rebind err
> * @shared-vm-userptr-rebind-err:
> * shared vm userptr rebind err
> + * @rebind-err:
> + * rebind err
> + * @userptr-rebind-err:
> + * userptr rebind err
> * @shared-vm-userptr-invalidate:
> * shared vm userptr invalidate
> * @shared-vm-userptr-invalidate-race:
> @@ -842,7 +796,7 @@ static void *vm_async_ops_err_thread(void *data)
> * fd userptr invalidate race
> * @hang-basic:
> * hang basic
> - * @hang-userptr:
> + * @hang-userptr:
> * hang userptr
> * @hang-rebind:
> * hang rebind
> @@ -864,6 +818,10 @@ static void *vm_async_ops_err_thread(void *data)
> * hang shared vm rebind err
> * @hang-shared-vm-userptr-rebind-err:
> * hang shared vm userptr rebind err
> + * @hang-rebind-err:
> + * hang rebind err
> + * @hang-userptr-rebind-err:
> + * hang userptr rebind err
> * @hang-shared-vm-userptr-invalidate:
> * hang shared vm userptr invalidate
> * @hang-shared-vm-userptr-invalidate-race:
> @@ -1019,7 +977,6 @@ static void threads(int fd, int flags)
> int n_hw_engines = 0, class;
> uint64_t i = 0;
> uint32_t vm_legacy_mode = 0, vm_compute_mode = 0;
> - struct vm_thread_data vm_err_thread = {};
> bool go = false;
> int n_threads = 0;
> int gt;
> @@ -1052,18 +1009,12 @@ static void threads(int fd, int flags)
>
> if (flags & SHARED_VM) {
> vm_legacy_mode = xe_vm_create(fd,
> - DRM_XE_VM_CREATE_ASYNC_BIND_OPS,
> + DRM_XE_VM_CREATE_ASYNC_DEFAULT,
> 0);
> vm_compute_mode = xe_vm_create(fd,
> - DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
> + DRM_XE_VM_CREATE_ASYNC_DEFAULT |
> DRM_XE_VM_CREATE_COMPUTE_MODE,
> 0);
> -
> - vm_err_thread.fd = fd;
> - vm_err_thread.vm = vm_legacy_mode;
> - pthread_create(&vm_err_thread.thread, 0,
> - vm_async_ops_err_thread, &vm_err_thread);
> -
> }
>
> xe_for_each_hw_engine(fd, hwe) {
> @@ -1083,11 +1034,6 @@ static void threads(int fd, int flags)
> threads_data[i].n_exec_queue = N_EXEC_QUEUE;
> #define N_EXEC 1024
> threads_data[i].n_exec = N_EXEC;
> - if (flags & REBIND_ERROR)
> - threads_data[i].rebind_error_inject =
> - (N_EXEC / (n_hw_engines + 1)) * (i + 1);
> - else
> - threads_data[i].rebind_error_inject = -1;
> threads_data[i].flags = flags;
> if (flags & MIXED_MODE) {
> threads_data[i].flags &= ~MIXED_MODE;
> @@ -1190,8 +1136,6 @@ static void threads(int fd, int flags)
> if (vm_compute_mode)
> xe_vm_destroy(fd, vm_compute_mode);
> free(threads_data);
> - if (flags & SHARED_VM)
> - pthread_join(vm_err_thread.thread, NULL);
> pthread_barrier_destroy(&barrier);
> }
>
> @@ -1214,9 +1158,8 @@ igt_main
> { "shared-vm-rebind-bindexecqueue", SHARED_VM | REBIND |
> BIND_EXEC_QUEUE },
> { "shared-vm-userptr-rebind", SHARED_VM | USERPTR | REBIND },
> - { "shared-vm-rebind-err", SHARED_VM | REBIND | REBIND_ERROR },
> - { "shared-vm-userptr-rebind-err", SHARED_VM | USERPTR |
> - REBIND | REBIND_ERROR},
> + { "rebind-err", REBIND | REBIND_ERROR },
> + { "userptr-rebind-err", USERPTR | REBIND | REBIND_ERROR},
> { "shared-vm-userptr-invalidate", SHARED_VM | USERPTR |
> INVALIDATE },
> { "shared-vm-userptr-invalidate-race", SHARED_VM | USERPTR |
> @@ -1240,10 +1183,9 @@ igt_main
> { "hang-shared-vm-rebind", HANG | SHARED_VM | REBIND },
> { "hang-shared-vm-userptr-rebind", HANG | SHARED_VM | USERPTR |
> REBIND },
> - { "hang-shared-vm-rebind-err", HANG | SHARED_VM | REBIND |
> + { "hang-rebind-err", HANG | REBIND | REBIND_ERROR },
> + { "hang-userptr-rebind-err", HANG | USERPTR | REBIND |
> REBIND_ERROR },
> - { "hang-shared-vm-userptr-rebind-err", HANG | SHARED_VM |
> - USERPTR | REBIND | REBIND_ERROR },
> { "hang-shared-vm-userptr-invalidate", HANG | SHARED_VM |
> USERPTR | INVALIDATE },
> { "hang-shared-vm-userptr-invalidate-race", HANG | SHARED_VM |
> diff --git a/tests/intel/xe_exercise_blt.c b/tests/intel/xe_exercise_blt.c
> index ca85f5f18..2f349b16d 100644
> --- a/tests/intel/xe_exercise_blt.c
> +++ b/tests/intel/xe_exercise_blt.c
> @@ -280,7 +280,7 @@ static void fast_copy_test(int xe,
> region1 = igt_collection_get_value(regions, 0);
> region2 = igt_collection_get_value(regions, 1);
>
> - vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> + vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> exec_queue = xe_exec_queue_create(xe, vm, &inst, 0);
> ctx = intel_ctx_xe(xe, vm, exec_queue, 0, 0, 0);
>
> diff --git a/tests/intel/xe_guc_pc.c b/tests/intel/xe_guc_pc.c
> index 0327d8e0e..3f2c4ae23 100644
> --- a/tests/intel/xe_guc_pc.c
> +++ b/tests/intel/xe_guc_pc.c
> @@ -60,7 +60,7 @@ static void exec_basic(int fd, struct drm_xe_engine_class_instance *eci,
> igt_assert(n_exec_queues <= MAX_N_EXEC_QUEUES);
> igt_assert(n_execs > 0);
>
> - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> bo_size = sizeof(*data) * n_execs;
> bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> xe_get_default_alignment(fd));
> diff --git a/tests/intel/xe_huc_copy.c b/tests/intel/xe_huc_copy.c
> index c9891a729..c71ff74a1 100644
> --- a/tests/intel/xe_huc_copy.c
> +++ b/tests/intel/xe_huc_copy.c
> @@ -117,7 +117,7 @@ test_huc_copy(int fd)
> { .addr = ADDR_BATCH, .size = SIZE_BATCH }, // batch
> };
>
> - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> exec_queue = xe_exec_queue_create_class(fd, vm, DRM_XE_ENGINE_CLASS_VIDEO_DECODE);
> sync.flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL;
> sync.handle = syncobj_create(fd, 0);
> diff --git a/tests/intel/xe_intel_bb.c b/tests/intel/xe_intel_bb.c
> index 0159a3164..26e4dcc85 100644
> --- a/tests/intel/xe_intel_bb.c
> +++ b/tests/intel/xe_intel_bb.c
> @@ -191,7 +191,7 @@ static void simple_bb(struct buf_ops *bops, bool new_context)
> intel_bb_reset(ibb, true);
>
> if (new_context) {
> - vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> + vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> ctx = xe_exec_queue_create(xe, vm, xe_hw_engine(xe, 0), 0);
> intel_bb_destroy(ibb);
> ibb = intel_bb_create_with_context(xe, ctx, vm, NULL, PAGE_SIZE);
> diff --git a/tests/intel/xe_pm.c b/tests/intel/xe_pm.c
> index fd28d5630..b2976ec84 100644
> --- a/tests/intel/xe_pm.c
> +++ b/tests/intel/xe_pm.c
> @@ -259,7 +259,7 @@ test_exec(device_t device, struct drm_xe_engine_class_instance *eci,
> if (check_rpm)
> igt_assert(in_d3(device, d_state));
>
> - vm = xe_vm_create(device.fd_xe, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> + vm = xe_vm_create(device.fd_xe, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
>
> if (check_rpm)
> igt_assert(out_of_d3(device, d_state));
> diff --git a/tests/intel/xe_vm.c b/tests/intel/xe_vm.c
> index 89df6149a..dd3302337 100644
> --- a/tests/intel/xe_vm.c
> +++ b/tests/intel/xe_vm.c
> @@ -275,7 +275,7 @@ static void unbind_all(int fd, int n_vmas)
> { .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> };
>
> - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> bo = xe_bo_create(fd, 0, vm, bo_size);
>
> for (i = 0; i < n_vmas; ++i)
> @@ -322,171 +322,6 @@ static void userptr_invalid(int fd)
> xe_vm_destroy(fd, vm);
> }
>
> -struct vm_thread_data {
> - pthread_t thread;
> - int fd;
> - int vm;
> - uint32_t bo;
> - size_t bo_size;
> - bool destroy;
> -};
> -
> -/**
> - * SUBTEST: vm-async-ops-err
> - * Description: Test VM async ops error
> - * Functionality: VM
> - * Test category: negative test
> - *
> - * SUBTEST: vm-async-ops-err-destroy
> - * Description: Test VM async ops error destroy
> - * Functionality: VM
> - * Test category: negative test
> - */
> -
> -static void *vm_async_ops_err_thread(void *data)
> -{
> - struct vm_thread_data *args = data;
> - int fd = args->fd;
> - uint64_t addr = 0x201a0000;
> - int num_binds = 0;
> - int ret;
> -
> - struct drm_xe_wait_user_fence wait = {
> - .vm_id = args->vm,
> - .op = DRM_XE_UFENCE_WAIT_NEQ,
> - .flags = DRM_XE_UFENCE_WAIT_VM_ERROR,
> - .mask = DRM_XE_UFENCE_WAIT_U32,
> - .timeout = MS_TO_NS(1000),
> - };
> -
> - igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_WAIT_USER_FENCE,
> - &wait), 0);
> - if (args->destroy) {
> - usleep(5000); /* Wait other binds to queue up */
> - xe_vm_destroy(fd, args->vm);
> - return NULL;
> - }
> -
> - while (!ret) {
> - struct drm_xe_vm_bind bind = {
> - .vm_id = args->vm,
> - .num_binds = 1,
> - .bind.op = XE_VM_BIND_OP_RESTART,
> - };
> -
> - /* VM sync ops should work */
> - if (!(num_binds++ % 2)) {
> - xe_vm_bind_sync(fd, args->vm, args->bo, 0, addr,
> - args->bo_size);
> - } else {
> - xe_vm_unbind_sync(fd, args->vm, 0, addr,
> - args->bo_size);
> - addr += args->bo_size * 2;
> - }
> -
> - /* Restart and wait for next error */
> - igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_VM_BIND,
> - &bind), 0);
> - ret = igt_ioctl(fd, DRM_IOCTL_XE_WAIT_USER_FENCE, &wait);
> - }
> -
> - return NULL;
> -}
> -
> -static void vm_async_ops_err(int fd, bool destroy)
> -{
> - uint32_t vm;
> - uint64_t addr = 0x1a0000;
> - struct drm_xe_sync sync = {
> - .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL,
> - };
> -#define N_BINDS 32
> - struct vm_thread_data thread = {};
> - uint32_t syncobjs[N_BINDS];
> - size_t bo_size = 0x1000 * 32;
> - uint32_t bo;
> - int i, j;
> -
> - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> - bo = xe_bo_create(fd, 0, vm, bo_size);
> -
> - thread.fd = fd;
> - thread.vm = vm;
> - thread.bo = bo;
> - thread.bo_size = bo_size;
> - thread.destroy = destroy;
> - pthread_create(&thread.thread, 0, vm_async_ops_err_thread, &thread);
> -
> - for (i = 0; i < N_BINDS; i++)
> - syncobjs[i] = syncobj_create(fd, 0);
> -
> - for (j = 0, i = 0; i < N_BINDS / 4; i++, j++) {
> - sync.handle = syncobjs[j];
> -#define INJECT_ERROR (0x1 << 31)
> - if (i == N_BINDS / 8) /* Inject error on this bind */
> - __xe_vm_bind_assert(fd, vm, 0, bo, 0,
> - addr + i * bo_size * 2,
> - bo_size, XE_VM_BIND_OP_MAP,
> - XE_VM_BIND_FLAG_ASYNC |
> - INJECT_ERROR, &sync, 1, 0, 0);
> - else
> - xe_vm_bind_async(fd, vm, 0, bo, 0,
> - addr + i * bo_size * 2,
> - bo_size, &sync, 1);
> - }
> -
> - for (i = 0; i < N_BINDS / 4; i++, j++) {
> - sync.handle = syncobjs[j];
> - if (i == N_BINDS / 8)
> - __xe_vm_bind_assert(fd, vm, 0, 0, 0,
> - addr + i * bo_size * 2,
> - bo_size, XE_VM_BIND_OP_UNMAP,
> - XE_VM_BIND_FLAG_ASYNC |
> - INJECT_ERROR, &sync, 1, 0, 0);
> - else
> - xe_vm_unbind_async(fd, vm, 0, 0,
> - addr + i * bo_size * 2,
> - bo_size, &sync, 1);
> - }
> -
> - for (i = 0; i < N_BINDS / 4; i++, j++) {
> - sync.handle = syncobjs[j];
> - if (i == N_BINDS / 8)
> - __xe_vm_bind_assert(fd, vm, 0, bo, 0,
> - addr + i * bo_size * 2,
> - bo_size, XE_VM_BIND_OP_MAP,
> - XE_VM_BIND_FLAG_ASYNC |
> - INJECT_ERROR, &sync, 1, 0, 0);
> - else
> - xe_vm_bind_async(fd, vm, 0, bo, 0,
> - addr + i * bo_size * 2,
> - bo_size, &sync, 1);
> - }
> -
> - for (i = 0; i < N_BINDS / 4; i++, j++) {
> - sync.handle = syncobjs[j];
> - if (i == N_BINDS / 8)
> - __xe_vm_bind_assert(fd, vm, 0, 0, 0,
> - addr + i * bo_size * 2,
> - bo_size, XE_VM_BIND_OP_UNMAP,
> - XE_VM_BIND_FLAG_ASYNC |
> - INJECT_ERROR, &sync, 1, 0, 0);
> - else
> - xe_vm_unbind_async(fd, vm, 0, 0,
> - addr + i * bo_size * 2,
> - bo_size, &sync, 1);
> - }
> -
> - for (i = 0; i < N_BINDS; i++)
> - igt_assert(syncobj_wait(fd, &syncobjs[i], 1, INT64_MAX, 0,
> - NULL));
> -
> - if (!destroy)
> - xe_vm_destroy(fd, vm);
> -
> - pthread_join(thread.thread, NULL);
> -}
> -
> /**
> * SUBTEST: shared-%s-page
> * Description: Test shared arg[1] page
> @@ -537,7 +372,7 @@ shared_pte_page(int fd, struct drm_xe_engine_class_instance *eci, int n_bo,
> data = malloc(sizeof(*data) * n_bo);
> igt_assert(data);
>
> - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> bo_size = sizeof(struct shared_pte_page_data);
> bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> xe_get_default_alignment(fd));
> @@ -718,7 +553,7 @@ test_bind_execqueues_independent(int fd, struct drm_xe_engine_class_instance *ec
> struct xe_spin_opts spin_opts = { .preempt = true };
> int i, b;
>
> - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> bo_size = sizeof(*data) * N_EXEC_QUEUES;
> bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> xe_get_default_alignment(fd));
> @@ -728,7 +563,7 @@ test_bind_execqueues_independent(int fd, struct drm_xe_engine_class_instance *ec
>
> for (i = 0; i < N_EXEC_QUEUES; i++) {
> exec_queues[i] = xe_exec_queue_create(fd, vm, eci, 0);
> - bind_exec_queues[i] = xe_bind_exec_queue_create(fd, vm, 0);
> + bind_exec_queues[i] = xe_bind_exec_queue_create(fd, vm, 0, true);
> syncobjs[i] = syncobj_create(fd, 0);
> }
> syncobjs[N_EXEC_QUEUES] = syncobj_create(fd, 0);
> @@ -898,7 +733,7 @@ test_bind_array(int fd, struct drm_xe_engine_class_instance *eci, int n_execs,
>
> igt_assert(n_execs <= BIND_ARRAY_MAX_N_EXEC);
>
> - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> bo_size = sizeof(*data) * n_execs;
> bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> xe_get_default_alignment(fd));
> @@ -908,7 +743,7 @@ test_bind_array(int fd, struct drm_xe_engine_class_instance *eci, int n_execs,
> data = xe_bo_map(fd, bo, bo_size);
>
> if (flags & BIND_ARRAY_BIND_EXEC_QUEUE_FLAG)
> - bind_exec_queue = xe_bind_exec_queue_create(fd, vm, 0);
> + bind_exec_queue = xe_bind_exec_queue_create(fd, vm, 0, true);
> exec_queue = xe_exec_queue_create(fd, vm, eci, 0);
>
> for (i = 0; i < n_execs; ++i) {
> @@ -1092,7 +927,7 @@ test_large_binds(int fd, struct drm_xe_engine_class_instance *eci,
> }
>
> igt_assert(n_exec_queues <= MAX_N_EXEC_QUEUES);
> - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
>
> if (flags & LARGE_BIND_FLAG_USERPTR) {
> map = aligned_alloc(xe_get_default_alignment(fd), bo_size);
> @@ -1384,7 +1219,7 @@ test_munmap_style_unbind(int fd, struct drm_xe_engine_class_instance *eci,
> unbind_n_page_offset *= n_page_per_2mb;
> }
>
> - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> bo_size = page_size * bo_n_pages;
>
> if (flags & MAP_FLAG_USERPTR) {
> @@ -1684,7 +1519,7 @@ test_mmap_style_bind(int fd, struct drm_xe_engine_class_instance *eci,
> unbind_n_page_offset *= n_page_per_2mb;
> }
>
> - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> bo_size = page_size * bo_n_pages;
>
> if (flags & MAP_FLAG_USERPTR) {
> @@ -2001,12 +1836,6 @@ igt_main
> igt_subtest("userptr-invalid")
> userptr_invalid(fd);
>
> - igt_subtest("vm-async-ops-err")
> - vm_async_ops_err(fd, false);
> -
> - igt_subtest("vm-async-ops-err-destroy")
> - vm_async_ops_err(fd, true);
> -
> igt_subtest("shared-pte-page")
> xe_for_each_hw_engine(fd, hwe)
> shared_pte_page(fd, hwe, 4,
> diff --git a/tests/intel/xe_waitfence.c b/tests/intel/xe_waitfence.c
> index 34005fbeb..e0116f181 100644
> --- a/tests/intel/xe_waitfence.c
> +++ b/tests/intel/xe_waitfence.c
> @@ -34,7 +34,7 @@ static void do_bind(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
>
> sync[0].addr = to_user_pointer(&wait_fence);
> sync[0].timeline_value = val;
> - xe_vm_bind(fd, vm, bo, offset, addr, size, sync, 1);
> + xe_vm_bind_async(fd, vm, 0, bo, offset, addr, size, sync, 1);
> }
>
> enum waittype {
> @@ -63,7 +63,7 @@ waitfence(int fd, enum waittype wt)
> uint32_t bo_7;
> int64_t timeout;
>
> - uint32_t vm = xe_vm_create(fd, 0, 0);
> + uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> bo_1 = xe_bo_create_flags(fd, vm, 0x40000, MY_FLAG);
> do_bind(fd, vm, bo_1, 0, 0x200000, 0x40000, 1);
XE_VM_BIND_FLAG_ASYNC is missing on the binds against this async VM... this and the other tests here have a similar problem.
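To illustrate the point: on a VM created with DRM_XE_VM_CREATE_ASYNC_DEFAULT, each bind op needs XE_VM_BIND_FLAG_ASYNC plus an out-sync to wait on. The sketch below is only illustrative, assuming the IGT helper signatures quoted in this patch (__xe_vm_bind_assert, the syncobj helpers); the helper name and the callers' addresses are hypothetical.

```c
/* Illustrative sketch only: an explicit async bind on a VM created with
 * DRM_XE_VM_CREATE_ASYNC_DEFAULT, following the helper signatures quoted
 * in this patch. Not the actual fix.
 */
static void bind_async_and_wait(int fd, uint32_t vm, uint32_t bo,
				uint64_t addr, uint64_t size)
{
	struct drm_xe_sync sync = {
		.flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL,
		.handle = syncobj_create(fd, 0),
	};

	/* XE_VM_BIND_FLAG_ASYNC is required for binds on an async VM */
	__xe_vm_bind_assert(fd, vm, 0, bo, 0, addr, size,
			    XE_VM_BIND_OP_MAP, XE_VM_BIND_FLAG_ASYNC,
			    &sync, 1, 0, 0);

	/* Completion is signalled through the out-sync */
	igt_assert(syncobj_wait(fd, &sync.handle, 1, INT64_MAX, 0, NULL));
	syncobj_destroy(fd, sync.handle);
}
```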
> bo_2 = xe_bo_create_flags(fd, vm, 0x40000, MY_FLAG);
> @@ -96,21 +96,6 @@ waitfence(int fd, enum waittype wt)
> ", elapsed: %" PRId64 "\n",
> timeout, signalled, signalled - current);
> }
> -
> - xe_vm_unbind_sync(fd, vm, 0, 0x200000, 0x40000);
> - xe_vm_unbind_sync(fd, vm, 0, 0xc0000000, 0x40000);
> - xe_vm_unbind_sync(fd, vm, 0, 0x180000000, 0x40000);
> - xe_vm_unbind_sync(fd, vm, 0, 0x140000000, 0x10000);
> - xe_vm_unbind_sync(fd, vm, 0, 0x100000000, 0x100000);
> - xe_vm_unbind_sync(fd, vm, 0, 0xc0040000, 0x1c0000);
> - xe_vm_unbind_sync(fd, vm, 0, 0xeffff0000, 0x10000);
> - gem_close(fd, bo_7);
> - gem_close(fd, bo_6);
> - gem_close(fd, bo_5);
> - gem_close(fd, bo_4);
> - gem_close(fd, bo_3);
> - gem_close(fd, bo_2);
> - gem_close(fd, bo_1);
This looks like an unrelated change.
> }
>
> igt_main
* Re: [igt-dev] [PATCH v4 10/14] xe: Update to new VM bind uAPI
2023-09-29 16:32 ` Souza, Jose
@ 2023-10-03 9:35 ` Francois Dugast
2023-10-03 14:25 ` Souza, Jose
0 siblings, 1 reply; 31+ messages in thread
From: Francois Dugast @ 2023-10-03 9:35 UTC (permalink / raw)
To: Souza, Jose; +Cc: igt-dev@lists.freedesktop.org, Vivi, Rodrigo
On Fri, Sep 29, 2023 at 06:32:55PM +0200, Souza, Jose wrote:
> On Thu, 2023-09-28 at 11:05 +0000, Francois Dugast wrote:
> > From: Matthew Brost <matthew.brost@intel.com>
> >
> > Sync vs. async changes and new error handling.
> >
> > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> > [Rodrigo rebased and fixed conflicts]
> > ---
> > include/drm-uapi/xe_drm.h | 50 ++------
> > lib/igt_fb.c | 2 +-
> > lib/intel_batchbuffer.c | 2 +-
> > lib/intel_compute.c | 2 +-
> > lib/xe/xe_ioctl.c | 15 +--
> > lib/xe/xe_ioctl.h | 3 +-
> > lib/xe/xe_query.c | 2 +-
> > tests/intel/xe_ccs.c | 4 +-
> > tests/intel/xe_create.c | 6 +-
> > tests/intel/xe_drm_fdinfo.c | 4 +-
> > tests/intel/xe_evict.c | 23 ++--
> > tests/intel/xe_exec_balancer.c | 6 +-
> > tests/intel/xe_exec_basic.c | 6 +-
> > tests/intel/xe_exec_compute_mode.c | 6 +-
> > tests/intel/xe_exec_fault_mode.c | 6 +-
> > tests/intel/xe_exec_reset.c | 8 +-
> > tests/intel/xe_exec_store.c | 4 +-
> > tests/intel/xe_exec_threads.c | 112 +++++------------
> > tests/intel/xe_exercise_blt.c | 2 +-
> > tests/intel/xe_guc_pc.c | 2 +-
> > tests/intel/xe_huc_copy.c | 2 +-
> > tests/intel/xe_intel_bb.c | 2 +-
> > tests/intel/xe_pm.c | 2 +-
> > tests/intel/xe_vm.c | 189 ++---------------------------
> > tests/intel/xe_waitfence.c | 19 +--
> > 25 files changed, 102 insertions(+), 377 deletions(-)
> >
> > diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
> > index 0a05a12b2..80b4c76f3 100644
> > --- a/include/drm-uapi/xe_drm.h
> > +++ b/include/drm-uapi/xe_drm.h
> > @@ -134,10 +134,11 @@ struct drm_xe_engine_class_instance {
> > #define DRM_XE_ENGINE_CLASS_VIDEO_ENHANCE 3
> > #define DRM_XE_ENGINE_CLASS_COMPUTE 4
> > /*
> > - * Kernel only class (not actual hardware engine class). Used for
> > + * Kernel only classes (not actual hardware engine class). Used for
> > * creating ordered queues of VM bind operations.
> > */
> > -#define DRM_XE_ENGINE_CLASS_VM_BIND 5
> > +#define DRM_XE_ENGINE_CLASS_VM_BIND_ASYNC 5
> > +#define DRM_XE_ENGINE_CLASS_VM_BIND_SYNC 6
> > __u16 engine_class;
> >
> > __u16 engine_instance;
> > @@ -577,7 +578,7 @@ struct drm_xe_vm_create {
> >
> > #define DRM_XE_VM_CREATE_SCRATCH_PAGE (0x1 << 0)
> > #define DRM_XE_VM_CREATE_COMPUTE_MODE (0x1 << 1)
> > -#define DRM_XE_VM_CREATE_ASYNC_BIND_OPS (0x1 << 2)
> > +#define DRM_XE_VM_CREATE_ASYNC_DEFAULT (0x1 << 2)
> > #define DRM_XE_VM_CREATE_FAULT_MODE (0x1 << 3)
> > /** @flags: Flags */
> > __u32 flags;
> > @@ -637,34 +638,12 @@ struct drm_xe_vm_bind_op {
> > #define XE_VM_BIND_OP_MAP 0x0
> > #define XE_VM_BIND_OP_UNMAP 0x1
> > #define XE_VM_BIND_OP_MAP_USERPTR 0x2
> > -#define XE_VM_BIND_OP_RESTART 0x3
> > -#define XE_VM_BIND_OP_UNMAP_ALL 0x4
> > -#define XE_VM_BIND_OP_PREFETCH 0x5
> > +#define XE_VM_BIND_OP_UNMAP_ALL 0x3
> > +#define XE_VM_BIND_OP_PREFETCH 0x4
> > /** @op: Bind operation to perform */
> > __u32 op;
> >
> > #define XE_VM_BIND_FLAG_READONLY (0x1 << 0)
> > - /*
> > - * A bind ops completions are always async, hence the support for out
> > - * sync. This flag indicates the allocation of the memory for new page
> > - * tables and the job to program the pages tables is asynchronous
> > - * relative to the IOCTL. That part of a bind operation can fail under
> > - * memory pressure, the job in practice can't fail unless the system is
> > - * totally shot.
> > - *
> > - * If this flag is clear and the IOCTL doesn't return an error, in
> > - * practice the bind op is good and will complete.
> > - *
> > - * If this flag is set and doesn't return an error, the bind op can
> > - * still fail and recovery is needed. It should free memory
> > - * via non-async unbinds, and then restart all queued async binds op via
> > - * XE_VM_BIND_OP_RESTART. Or alternatively the user should destroy the
> > - * VM.
> > - *
> > - * This flag is only allowed when DRM_XE_VM_CREATE_ASYNC_BIND_OPS is
> > - * configured in the VM and must be set if the VM is configured with
> > - * DRM_XE_VM_CREATE_ASYNC_BIND_OPS and not in an error state.
> > - */
> > #define XE_VM_BIND_FLAG_ASYNC (0x1 << 1)
> > /*
> > * Valid on a faulting VM only, do the MAP operation immediately rather
> > @@ -905,18 +884,10 @@ struct drm_xe_wait_user_fence {
> > /** @extensions: Pointer to the first extension struct, if any */
> > __u64 extensions;
> >
> > - union {
> > - /**
> > - * @addr: user pointer address to wait on, must qword aligned
> > - */
> > - __u64 addr;
> > -
> > - /**
> > - * @vm_id: The ID of the VM which encounter an error used with
> > - * DRM_XE_UFENCE_WAIT_VM_ERROR. Upper 32 bits must be clear.
> > - */
> > - __u64 vm_id;
> > - };
> > + /**
> > + * @addr: user pointer address to wait on, must qword aligned
> > + */
> > + __u64 addr;
> >
> > #define DRM_XE_UFENCE_WAIT_EQ 0
> > #define DRM_XE_UFENCE_WAIT_NEQ 1
> > @@ -929,7 +900,6 @@ struct drm_xe_wait_user_fence {
> >
> > #define DRM_XE_UFENCE_WAIT_SOFT_OP (1 << 0) /* e.g. Wait on VM bind */
> > #define DRM_XE_UFENCE_WAIT_ABSTIME (1 << 1)
> > -#define DRM_XE_UFENCE_WAIT_VM_ERROR (1 << 2)
> > /** @flags: wait flags */
> > __u16 flags;
> >
> > diff --git a/lib/igt_fb.c b/lib/igt_fb.c
> > index f0c0681ab..34934855a 100644
> > --- a/lib/igt_fb.c
> > +++ b/lib/igt_fb.c
> > @@ -2892,7 +2892,7 @@ static void blitcopy(const struct igt_fb *dst_fb,
> > &bb_size,
> > mem_region) == 0);
> > } else if (is_xe) {
> > - vm = xe_vm_create(dst_fb->fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > + vm = xe_vm_create(dst_fb->fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > exec_queue = xe_exec_queue_create(dst_fb->fd, vm, &inst, 0);
> > xe_ctx = intel_ctx_xe(dst_fb->fd, vm, exec_queue, 0, 0, 0);
> > mem_region = vram_if_possible(dst_fb->fd, 0);
> > diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
> > index 6e668d28c..df82ef5f5 100644
> > --- a/lib/intel_batchbuffer.c
> > +++ b/lib/intel_batchbuffer.c
> > @@ -953,7 +953,7 @@ __intel_bb_create(int fd, uint32_t ctx, uint32_t vm, const intel_ctx_cfg_t *cfg,
> >
> > if (!vm) {
> > igt_assert_f(!ctx, "No vm provided for engine");
> > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > }
> >
> > ibb->uses_full_ppgtt = true;
> > diff --git a/lib/intel_compute.c b/lib/intel_compute.c
> > index 0c30f39c1..1ae33cdfc 100644
> > --- a/lib/intel_compute.c
> > +++ b/lib/intel_compute.c
> > @@ -79,7 +79,7 @@ static void bo_execenv_create(int fd, struct bo_execenv *execenv)
> > else
> > engine_class = DRM_XE_ENGINE_CLASS_COMPUTE;
> >
> > - execenv->vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > + execenv->vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > execenv->exec_queue = xe_exec_queue_create_class(fd, execenv->vm,
> > engine_class);
> > }
> > diff --git a/lib/xe/xe_ioctl.c b/lib/xe/xe_ioctl.c
> > index 48cd185de..895e3bd4e 100644
> > --- a/lib/xe/xe_ioctl.c
> > +++ b/lib/xe/xe_ioctl.c
> > @@ -201,16 +201,8 @@ void xe_vm_unbind_async(int fd, uint32_t vm, uint32_t exec_queue,
> > static void __xe_vm_bind_sync(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
> > uint64_t addr, uint64_t size, uint32_t op)
> > {
> > - struct drm_xe_sync sync = {
> > - .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL,
> > - .handle = syncobj_create(fd, 0),
> > - };
> > -
> > - __xe_vm_bind_assert(fd, vm, 0, bo, offset, addr, size, op, 0, &sync, 1,
> > + __xe_vm_bind_assert(fd, vm, 0, bo, offset, addr, size, op, 0, NULL, 0,
> > 0, 0);
> > -
> > - igt_assert(syncobj_wait(fd, &sync.handle, 1, INT64_MAX, 0, NULL));
> > - syncobj_destroy(fd, sync.handle);
> > }
> >
> > void xe_vm_bind_sync(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
> > @@ -276,10 +268,11 @@ uint32_t xe_bo_create(int fd, int gt, uint32_t vm, uint64_t size)
> > return create.handle;
> > }
> >
> > -uint32_t xe_bind_exec_queue_create(int fd, uint32_t vm, uint64_t ext)
> > +uint32_t xe_bind_exec_queue_create(int fd, uint32_t vm, uint64_t ext, bool async)
> > {
> > struct drm_xe_engine_class_instance instance = {
> > - .engine_class = DRM_XE_ENGINE_CLASS_VM_BIND,
> > + .engine_class = async ? DRM_XE_ENGINE_CLASS_VM_BIND_ASYNC :
> > + DRM_XE_ENGINE_CLASS_VM_BIND_SYNC,
> > };
> > struct drm_xe_exec_queue_create create = {
> > .extensions = ext,
> > diff --git a/lib/xe/xe_ioctl.h b/lib/xe/xe_ioctl.h
> > index f0e4109dc..a8dbcf376 100644
> > --- a/lib/xe/xe_ioctl.h
> > +++ b/lib/xe/xe_ioctl.h
> > @@ -71,7 +71,8 @@ uint32_t xe_bo_create(int fd, int gt, uint32_t vm, uint64_t size);
> > uint32_t xe_exec_queue_create(int fd, uint32_t vm,
> > struct drm_xe_engine_class_instance *instance,
> > uint64_t ext);
> > -uint32_t xe_bind_exec_queue_create(int fd, uint32_t vm, uint64_t ext);
> > +uint32_t xe_bind_exec_queue_create(int fd, uint32_t vm, uint64_t ext,
> > + bool async);
> > uint32_t xe_exec_queue_create_class(int fd, uint32_t vm, uint16_t class);
> > void xe_exec_queue_destroy(int fd, uint32_t exec_queue);
> > uint64_t xe_bo_mmap_offset(int fd, uint32_t bo);
> > diff --git a/lib/xe/xe_query.c b/lib/xe/xe_query.c
> > index c356abe1e..ab7b31188 100644
> > --- a/lib/xe/xe_query.c
> > +++ b/lib/xe/xe_query.c
> > @@ -316,7 +316,7 @@ bool xe_supports_faults(int fd)
> > bool supports_faults;
> >
> > struct drm_xe_vm_create create = {
> > - .flags = DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
> > + .flags = DRM_XE_VM_CREATE_ASYNC_DEFAULT |
> > DRM_XE_VM_CREATE_FAULT_MODE,
> > };
> >
> > diff --git a/tests/intel/xe_ccs.c b/tests/intel/xe_ccs.c
> > index 20bbc4448..300b734c8 100644
> > --- a/tests/intel/xe_ccs.c
> > +++ b/tests/intel/xe_ccs.c
> > @@ -343,7 +343,7 @@ static void block_copy(int xe,
> > uint32_t vm, exec_queue;
> >
> > if (config->new_ctx) {
> > - vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > + vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > exec_queue = xe_exec_queue_create(xe, vm, &inst, 0);
> > surf_ctx = intel_ctx_xe(xe, vm, exec_queue, 0, 0, 0);
> > surf_ahnd = intel_allocator_open(xe, surf_ctx->vm,
> > @@ -550,7 +550,7 @@ static void block_copy_test(int xe,
> > copyfns[copy_function].suffix) {
> > uint32_t sync_bind, sync_out;
> >
> > - vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > + vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > exec_queue = xe_exec_queue_create(xe, vm, &inst, 0);
> > sync_bind = syncobj_create(xe, 0);
> > sync_out = syncobj_create(xe, 0);
> > diff --git a/tests/intel/xe_create.c b/tests/intel/xe_create.c
> > index 8d845e5c8..d99bd51cf 100644
> > --- a/tests/intel/xe_create.c
> > +++ b/tests/intel/xe_create.c
> > @@ -54,7 +54,7 @@ static void create_invalid_size(int fd)
> > uint32_t handle;
> > int ret;
> >
> > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> >
> > xe_for_each_mem_region(fd, memreg, region) {
> > memregion = xe_mem_region(fd, region);
> > @@ -140,7 +140,7 @@ static void create_execqueues(int fd, enum exec_queue_destroy ed)
> >
> > fd = drm_reopen_driver(fd);
> > num_engines = xe_number_hw_engines(fd);
> > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> >
> > exec_queues_per_process = max_t(uint32_t, 1, MAXEXECQUEUES / nproc);
> > igt_debug("nproc: %u, exec_queues per process: %u\n", nproc, exec_queues_per_process);
> > @@ -199,7 +199,7 @@ static void create_massive_size(int fd)
> > uint32_t handle;
> > int ret;
> >
> > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> >
> > xe_for_each_mem_region(fd, memreg, region) {
> > ret = __create_bo(fd, vm, -1ULL << 32, region, &handle);
> > diff --git a/tests/intel/xe_drm_fdinfo.c b/tests/intel/xe_drm_fdinfo.c
> > index 22e410e14..64168ed19 100644
> > --- a/tests/intel/xe_drm_fdinfo.c
> > +++ b/tests/intel/xe_drm_fdinfo.c
> > @@ -71,7 +71,7 @@ static void test_active(int fd, struct drm_xe_engine_class_instance *eci)
> > struct xe_spin_opts spin_opts = { .preempt = true };
> > int i, b, ret;
> >
> > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > bo_size = sizeof(*data) * N_EXEC_QUEUES;
> > bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> > xe_get_default_alignment(fd));
> > @@ -90,7 +90,7 @@ static void test_active(int fd, struct drm_xe_engine_class_instance *eci)
> >
> > for (i = 0; i < N_EXEC_QUEUES; i++) {
> > exec_queues[i] = xe_exec_queue_create(fd, vm, eci, 0);
> > - bind_exec_queues[i] = xe_bind_exec_queue_create(fd, vm, 0);
> > + bind_exec_queues[i] = xe_bind_exec_queue_create(fd, vm, 0, true);
> > syncobjs[i] = syncobj_create(fd, 0);
> > }
> > syncobjs[N_EXEC_QUEUES] = syncobj_create(fd, 0);
> > diff --git a/tests/intel/xe_evict.c b/tests/intel/xe_evict.c
> > index 5d8981f8d..eec001218 100644
> > --- a/tests/intel/xe_evict.c
> > +++ b/tests/intel/xe_evict.c
> > @@ -63,15 +63,17 @@ test_evict(int fd, struct drm_xe_engine_class_instance *eci,
> >
> > fd = drm_open_driver(DRIVER_XE);
> >
> > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > if (flags & BIND_EXEC_QUEUE)
> > - bind_exec_queues[0] = xe_bind_exec_queue_create(fd, vm, 0);
> > + bind_exec_queues[0] = xe_bind_exec_queue_create(fd, vm, 0, true);
> > if (flags & MULTI_VM) {
> > - vm2 = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > - vm3 = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > + vm2 = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > + vm3 = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > if (flags & BIND_EXEC_QUEUE) {
> > - bind_exec_queues[1] = xe_bind_exec_queue_create(fd, vm2, 0);
> > - bind_exec_queues[2] = xe_bind_exec_queue_create(fd, vm3, 0);
> > + bind_exec_queues[1] = xe_bind_exec_queue_create(fd, vm2,
> > + 0, true);
> > + bind_exec_queues[2] = xe_bind_exec_queue_create(fd, vm3,
> > + 0, true);
> > }
> > }
> >
> > @@ -240,15 +242,16 @@ test_evict_cm(int fd, struct drm_xe_engine_class_instance *eci,
> >
> > fd = drm_open_driver(DRIVER_XE);
> >
> > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
> > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
> > DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
> > if (flags & BIND_EXEC_QUEUE)
> > - bind_exec_queues[0] = xe_bind_exec_queue_create(fd, vm, 0);
> > + bind_exec_queues[0] = xe_bind_exec_queue_create(fd, vm, 0, true);
> > if (flags & MULTI_VM) {
> > - vm2 = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
> > + vm2 = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
> > DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
> > if (flags & BIND_EXEC_QUEUE)
> > - bind_exec_queues[1] = xe_bind_exec_queue_create(fd, vm2, 0);
> > + bind_exec_queues[1] = xe_bind_exec_queue_create(fd, vm2,
> > + 0, true);
> > }
> >
> > for (i = 0; i < n_exec_queues; i++) {
> > diff --git a/tests/intel/xe_exec_balancer.c b/tests/intel/xe_exec_balancer.c
> > index f4f5440f4..3ca3de881 100644
> > --- a/tests/intel/xe_exec_balancer.c
> > +++ b/tests/intel/xe_exec_balancer.c
> > @@ -66,7 +66,7 @@ static void test_all_active(int fd, int gt, int class)
> > if (num_placements < 2)
> > return;
> >
> > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > bo_size = sizeof(*data) * num_placements;
> > bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd), xe_get_default_alignment(fd));
> >
> > @@ -207,7 +207,7 @@ test_exec(int fd, int gt, int class, int n_exec_queues, int n_execs,
> > if (num_placements < 2)
> > return;
> >
> > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > bo_size = sizeof(*data) * n_execs;
> > bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd), xe_get_default_alignment(fd));
> >
> > @@ -433,7 +433,7 @@ test_cm(int fd, int gt, int class, int n_exec_queues, int n_execs,
> > if (num_placements < 2)
> > return;
> >
> > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
> > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
> > DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
> > bo_size = sizeof(*data) * n_execs;
> > bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> > diff --git a/tests/intel/xe_exec_basic.c b/tests/intel/xe_exec_basic.c
> > index e29398aaa..8dbce524d 100644
> > --- a/tests/intel/xe_exec_basic.c
> > +++ b/tests/intel/xe_exec_basic.c
> > @@ -109,7 +109,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
> > igt_assert(n_vm <= MAX_N_EXEC_QUEUES);
> >
> > for (i = 0; i < n_vm; ++i)
> > - vm[i] = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > + vm[i] = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > bo_size = sizeof(*data) * n_execs;
> > bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> > xe_get_default_alignment(fd));
> > @@ -151,7 +151,9 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
> >
> > exec_queues[i] = xe_exec_queue_create(fd, __vm, eci, 0);
> > if (flags & BIND_EXEC_QUEUE)
> > - bind_exec_queues[i] = xe_bind_exec_queue_create(fd, __vm, 0);
> > + bind_exec_queues[i] = xe_bind_exec_queue_create(fd,
> > + __vm, 0,
> > + true);
> > else
> > bind_exec_queues[i] = 0;
> > syncobjs[i] = syncobj_create(fd, 0);
> > diff --git a/tests/intel/xe_exec_compute_mode.c b/tests/intel/xe_exec_compute_mode.c
> > index 02e7ef201..b0a677dca 100644
> > --- a/tests/intel/xe_exec_compute_mode.c
> > +++ b/tests/intel/xe_exec_compute_mode.c
> > @@ -113,7 +113,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
> >
> > igt_assert(n_exec_queues <= MAX_N_EXECQUEUES);
> >
> > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
> > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
> > DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
> > bo_size = sizeof(*data) * n_execs;
> > bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> > @@ -123,7 +123,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
> > exec_queues[i] = xe_exec_queue_create(fd, vm, eci, 0);
> > if (flags & BIND_EXECQUEUE)
> > bind_exec_queues[i] =
> > - xe_bind_exec_queue_create(fd, vm, 0);
> > + xe_bind_exec_queue_create(fd, vm, 0, true);
> > else
> > bind_exec_queues[i] = 0;
> > };
> > @@ -151,7 +151,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
> > exec_queues[i] = xe_exec_queue_create(fd, vm, eci, 0);
> > if (flags & BIND_EXECQUEUE)
> > bind_exec_queues[i] =
> > - xe_bind_exec_queue_create(fd, vm, 0);
> > + xe_bind_exec_queue_create(fd, vm, 0, true);
> > else
> > bind_exec_queues[i] = 0;
> > };
> > diff --git a/tests/intel/xe_exec_fault_mode.c b/tests/intel/xe_exec_fault_mode.c
> > index c5d6bdcd5..92d8690a1 100644
> > --- a/tests/intel/xe_exec_fault_mode.c
> > +++ b/tests/intel/xe_exec_fault_mode.c
> > @@ -131,7 +131,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
> >
> > igt_assert(n_exec_queues <= MAX_N_EXEC_QUEUES);
> >
> > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
> > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
> > DRM_XE_VM_CREATE_FAULT_MODE, 0);
> > bo_size = sizeof(*data) * n_execs;
> > bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> > @@ -165,7 +165,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
> > exec_queues[i] = xe_exec_queue_create(fd, vm, eci, 0);
> > if (flags & BIND_EXEC_QUEUE)
> > bind_exec_queues[i] =
> > - xe_bind_exec_queue_create(fd, vm, 0);
> > + xe_bind_exec_queue_create(fd, vm, 0, true);
> > else
> > bind_exec_queues[i] = 0;
> > };
> > @@ -375,7 +375,7 @@ test_atomic(int fd, struct drm_xe_engine_class_instance *eci,
> > uint32_t *ptr;
> > int i, b, wait_idx = 0;
> >
> > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
> > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
> > DRM_XE_VM_CREATE_FAULT_MODE, 0);
> > bo_size = sizeof(*data) * n_atomic;
> > bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> > diff --git a/tests/intel/xe_exec_reset.c b/tests/intel/xe_exec_reset.c
> > index ca8d7cc13..44248776b 100644
> > --- a/tests/intel/xe_exec_reset.c
> > +++ b/tests/intel/xe_exec_reset.c
> > @@ -45,7 +45,7 @@ static void test_spin(int fd, struct drm_xe_engine_class_instance *eci)
> > struct xe_spin *spin;
> > struct xe_spin_opts spin_opts = { .addr = addr, .preempt = false };
> >
> > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > bo_size = sizeof(*spin);
> > bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> > xe_get_default_alignment(fd));
> > @@ -176,7 +176,7 @@ test_balancer(int fd, int gt, int class, int n_exec_queues, int n_execs,
> > if (num_placements < 2)
> > return;
> >
> > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > bo_size = sizeof(*data) * n_execs;
> > bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> > xe_get_default_alignment(fd));
> > @@ -362,7 +362,7 @@ test_legacy_mode(int fd, struct drm_xe_engine_class_instance *eci,
> > if (flags & CLOSE_FD)
> > fd = drm_open_driver(DRIVER_XE);
> >
> > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > bo_size = sizeof(*data) * n_execs;
> > bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> > xe_get_default_alignment(fd));
> > @@ -528,7 +528,7 @@ test_compute_mode(int fd, struct drm_xe_engine_class_instance *eci,
> > if (flags & CLOSE_FD)
> > fd = drm_open_driver(DRIVER_XE);
> >
> > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
> > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
> > DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
> > bo_size = sizeof(*data) * n_execs;
> > bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> > diff --git a/tests/intel/xe_exec_store.c b/tests/intel/xe_exec_store.c
> > index 14f7c9bec..90684b8cb 100644
> > --- a/tests/intel/xe_exec_store.c
> > +++ b/tests/intel/xe_exec_store.c
> > @@ -75,7 +75,7 @@ static void store(int fd)
> > syncobj = syncobj_create(fd, 0);
> > sync.handle = syncobj;
> >
> > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > bo_size = sizeof(*data);
> > bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> > xe_get_default_alignment(fd));
> > @@ -132,7 +132,7 @@ static void store_all(int fd, int gt, int class)
> > struct drm_xe_engine_class_instance *hwe;
> > int i, num_placements = 0;
> >
> > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > bo_size = sizeof(*data);
> > bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> > xe_get_default_alignment(fd));
> > diff --git a/tests/intel/xe_exec_threads.c b/tests/intel/xe_exec_threads.c
> > index c9a51fc00..bb16bdd88 100644
> > --- a/tests/intel/xe_exec_threads.c
> > +++ b/tests/intel/xe_exec_threads.c
> > @@ -77,7 +77,7 @@ test_balancer(int fd, int gt, uint32_t vm, uint64_t addr, uint64_t userptr,
> > }
> >
> > if (!vm) {
> > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > owns_vm = true;
> > }
> >
> > @@ -285,7 +285,7 @@ test_compute_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
> > }
> >
> > if (!vm) {
> > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
> > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
> > DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
> > owns_vm = true;
> > }
> > @@ -454,7 +454,7 @@ test_compute_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
> > static void
> > test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
> > struct drm_xe_engine_class_instance *eci, int n_exec_queues,
> > - int n_execs, int rebind_error_inject, unsigned int flags)
> > + int n_execs, unsigned int flags)
> > {
> > struct drm_xe_sync sync[2] = {
> > { .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> > @@ -489,7 +489,7 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
> > }
> >
> > if (!vm) {
> > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > owns_vm = true;
> > }
> >
> > @@ -531,7 +531,8 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
> > else
> > exec_queues[i] = xe_exec_queue_create(fd, vm, eci, 0);
> > if (flags & BIND_EXEC_QUEUE)
> > - bind_exec_queues[i] = xe_bind_exec_queue_create(fd, vm, 0);
> > + bind_exec_queues[i] = xe_bind_exec_queue_create(fd, vm,
> > + 0, true);
> > else
> > bind_exec_queues[i] = 0;
> > syncobjs[i] = syncobj_create(fd, 0);
> > @@ -583,8 +584,7 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
> > exec.address = exec_addr;
> > if (e != i && !(flags & HANG))
> > syncobj_reset(fd, &syncobjs[e], 1);
> > - if ((flags & HANG && e == hang_exec_queue) ||
> > - rebind_error_inject > 0) {
> > + if ((flags & HANG && e == hang_exec_queue)) {
> > int err;
> >
> > do {
> > @@ -594,20 +594,10 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
> > xe_exec(fd, &exec);
> > }
> >
> > - if (flags & REBIND && i &&
> > - (!(i & 0x1f) || rebind_error_inject == i)) {
> > -#define INJECT_ERROR (0x1 << 31)
> > - if (rebind_error_inject == i)
> > - __xe_vm_bind_assert(fd, vm, bind_exec_queues[e],
> > - 0, 0, addr, bo_size,
> > - XE_VM_BIND_OP_UNMAP,
> > - XE_VM_BIND_FLAG_ASYNC |
> > - INJECT_ERROR, sync_all,
> > - n_exec_queues, 0, 0);
> > - else
> > - xe_vm_unbind_async(fd, vm, bind_exec_queues[e],
> > - 0, addr, bo_size,
> > - sync_all, n_exec_queues);
> > + if (flags & REBIND && i && !(i & 0x1f)) {
> > + xe_vm_unbind_async(fd, vm, bind_exec_queues[e],
> > + 0, addr, bo_size,
> > + sync_all, n_exec_queues);
> >
> > sync[0].flags |= DRM_XE_SYNC_SIGNAL;
> > addr += bo_size;
> > @@ -709,7 +699,6 @@ struct thread_data {
> > int n_exec_queue;
> > int n_exec;
> > int flags;
> > - int rebind_error_inject;
> > bool *go;
> > };
> >
> > @@ -733,46 +722,7 @@ static void *thread(void *data)
> > else
> > test_legacy_mode(t->fd, t->vm_legacy_mode, t->addr, t->userptr,
> > t->eci, t->n_exec_queue, t->n_exec,
> > - t->rebind_error_inject, t->flags);
> > -
> > - return NULL;
> > -}
> > -
> > -struct vm_thread_data {
> > - pthread_t thread;
> > - int fd;
> > - int vm;
> > -};
> > -
> > -static void *vm_async_ops_err_thread(void *data)
> > -{
> > - struct vm_thread_data *args = data;
> > - int fd = args->fd;
> > - int ret;
> > -
> > - struct drm_xe_wait_user_fence wait = {
> > - .vm_id = args->vm,
> > - .op = DRM_XE_UFENCE_WAIT_NEQ,
> > - .flags = DRM_XE_UFENCE_WAIT_VM_ERROR,
> > - .mask = DRM_XE_UFENCE_WAIT_U32,
> > -#define BASICALLY_FOREVER 0xffffffffffff
> > - .timeout = BASICALLY_FOREVER,
> > - };
> > -
> > - ret = igt_ioctl(fd, DRM_IOCTL_XE_WAIT_USER_FENCE, &wait);
> > -
> > - while (!ret) {
> > - struct drm_xe_vm_bind bind = {
> > - .vm_id = args->vm,
> > - .num_binds = 1,
> > - .bind.op = XE_VM_BIND_OP_RESTART,
> > - };
> > -
> > - /* Restart and wait for next error */
> > - igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_VM_BIND,
> > - &bind), 0);
> > - ret = igt_ioctl(fd, DRM_IOCTL_XE_WAIT_USER_FENCE, &wait);
> > - }
> > + t->flags);
> >
> > return NULL;
> > }
> > @@ -826,6 +776,10 @@ static void *vm_async_ops_err_thread(void *data)
> > * shared vm rebind err
> > * @shared-vm-userptr-rebind-err:
> > * shared vm userptr rebind err
> > + * @rebind-err:
> > + * rebind err
> > + * @userptr-rebind-err:
> > + * userptr rebind err
> > * @shared-vm-userptr-invalidate:
> > * shared vm userptr invalidate
> > * @shared-vm-userptr-invalidate-race:
> > @@ -842,7 +796,7 @@ static void *vm_async_ops_err_thread(void *data)
> > * fd userptr invalidate race
> > * @hang-basic:
> > * hang basic
> > - * @hang-userptr:
> > + * @hang-userptr:
> > * hang userptr
> > * @hang-rebind:
> > * hang rebind
> > @@ -864,6 +818,10 @@ static void *vm_async_ops_err_thread(void *data)
> > * hang shared vm rebind err
> > * @hang-shared-vm-userptr-rebind-err:
> > * hang shared vm userptr rebind err
> > + * @hang-rebind-err:
> > + * hang rebind err
> > + * @hang-userptr-rebind-err:
> > + * hang userptr rebind err
> > * @hang-shared-vm-userptr-invalidate:
> > * hang shared vm userptr invalidate
> > * @hang-shared-vm-userptr-invalidate-race:
> > @@ -1019,7 +977,6 @@ static void threads(int fd, int flags)
> > int n_hw_engines = 0, class;
> > uint64_t i = 0;
> > uint32_t vm_legacy_mode = 0, vm_compute_mode = 0;
> > - struct vm_thread_data vm_err_thread = {};
> > bool go = false;
> > int n_threads = 0;
> > int gt;
> > @@ -1052,18 +1009,12 @@ static void threads(int fd, int flags)
> >
> > if (flags & SHARED_VM) {
> > vm_legacy_mode = xe_vm_create(fd,
> > - DRM_XE_VM_CREATE_ASYNC_BIND_OPS,
> > + DRM_XE_VM_CREATE_ASYNC_DEFAULT,
> > 0);
> > vm_compute_mode = xe_vm_create(fd,
> > - DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
> > + DRM_XE_VM_CREATE_ASYNC_DEFAULT |
> > DRM_XE_VM_CREATE_COMPUTE_MODE,
> > 0);
> > -
> > - vm_err_thread.fd = fd;
> > - vm_err_thread.vm = vm_legacy_mode;
> > - pthread_create(&vm_err_thread.thread, 0,
> > - vm_async_ops_err_thread, &vm_err_thread);
> > -
> > }
> >
> > xe_for_each_hw_engine(fd, hwe) {
> > @@ -1083,11 +1034,6 @@ static void threads(int fd, int flags)
> > threads_data[i].n_exec_queue = N_EXEC_QUEUE;
> > #define N_EXEC 1024
> > threads_data[i].n_exec = N_EXEC;
> > - if (flags & REBIND_ERROR)
> > - threads_data[i].rebind_error_inject =
> > - (N_EXEC / (n_hw_engines + 1)) * (i + 1);
> > - else
> > - threads_data[i].rebind_error_inject = -1;
> > threads_data[i].flags = flags;
> > if (flags & MIXED_MODE) {
> > threads_data[i].flags &= ~MIXED_MODE;
> > @@ -1190,8 +1136,6 @@ static void threads(int fd, int flags)
> > if (vm_compute_mode)
> > xe_vm_destroy(fd, vm_compute_mode);
> > free(threads_data);
> > - if (flags & SHARED_VM)
> > - pthread_join(vm_err_thread.thread, NULL);
> > pthread_barrier_destroy(&barrier);
> > }
> >
> > @@ -1214,9 +1158,8 @@ igt_main
> > { "shared-vm-rebind-bindexecqueue", SHARED_VM | REBIND |
> > BIND_EXEC_QUEUE },
> > { "shared-vm-userptr-rebind", SHARED_VM | USERPTR | REBIND },
> > - { "shared-vm-rebind-err", SHARED_VM | REBIND | REBIND_ERROR },
> > - { "shared-vm-userptr-rebind-err", SHARED_VM | USERPTR |
> > - REBIND | REBIND_ERROR},
> > + { "rebind-err", REBIND | REBIND_ERROR },
> > + { "userptr-rebind-err", USERPTR | REBIND | REBIND_ERROR},
> > { "shared-vm-userptr-invalidate", SHARED_VM | USERPTR |
> > INVALIDATE },
> > { "shared-vm-userptr-invalidate-race", SHARED_VM | USERPTR |
> > @@ -1240,10 +1183,9 @@ igt_main
> > { "hang-shared-vm-rebind", HANG | SHARED_VM | REBIND },
> > { "hang-shared-vm-userptr-rebind", HANG | SHARED_VM | USERPTR |
> > REBIND },
> > - { "hang-shared-vm-rebind-err", HANG | SHARED_VM | REBIND |
> > + { "hang-rebind-err", HANG | REBIND | REBIND_ERROR },
> > + { "hang-userptr-rebind-err", HANG | USERPTR | REBIND |
> > REBIND_ERROR },
> > - { "hang-shared-vm-userptr-rebind-err", HANG | SHARED_VM |
> > - USERPTR | REBIND | REBIND_ERROR },
> > { "hang-shared-vm-userptr-invalidate", HANG | SHARED_VM |
> > USERPTR | INVALIDATE },
> > { "hang-shared-vm-userptr-invalidate-race", HANG | SHARED_VM |
> > diff --git a/tests/intel/xe_exercise_blt.c b/tests/intel/xe_exercise_blt.c
> > index ca85f5f18..2f349b16d 100644
> > --- a/tests/intel/xe_exercise_blt.c
> > +++ b/tests/intel/xe_exercise_blt.c
> > @@ -280,7 +280,7 @@ static void fast_copy_test(int xe,
> > region1 = igt_collection_get_value(regions, 0);
> > region2 = igt_collection_get_value(regions, 1);
> >
> > - vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > + vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > exec_queue = xe_exec_queue_create(xe, vm, &inst, 0);
> > ctx = intel_ctx_xe(xe, vm, exec_queue, 0, 0, 0);
> >
> > diff --git a/tests/intel/xe_guc_pc.c b/tests/intel/xe_guc_pc.c
> > index 0327d8e0e..3f2c4ae23 100644
> > --- a/tests/intel/xe_guc_pc.c
> > +++ b/tests/intel/xe_guc_pc.c
> > @@ -60,7 +60,7 @@ static void exec_basic(int fd, struct drm_xe_engine_class_instance *eci,
> > igt_assert(n_exec_queues <= MAX_N_EXEC_QUEUES);
> > igt_assert(n_execs > 0);
> >
> > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > bo_size = sizeof(*data) * n_execs;
> > bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> > xe_get_default_alignment(fd));
> > diff --git a/tests/intel/xe_huc_copy.c b/tests/intel/xe_huc_copy.c
> > index c9891a729..c71ff74a1 100644
> > --- a/tests/intel/xe_huc_copy.c
> > +++ b/tests/intel/xe_huc_copy.c
> > @@ -117,7 +117,7 @@ test_huc_copy(int fd)
> > { .addr = ADDR_BATCH, .size = SIZE_BATCH }, // batch
> > };
> >
> > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > exec_queue = xe_exec_queue_create_class(fd, vm, DRM_XE_ENGINE_CLASS_VIDEO_DECODE);
> > sync.flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL;
> > sync.handle = syncobj_create(fd, 0);
> > diff --git a/tests/intel/xe_intel_bb.c b/tests/intel/xe_intel_bb.c
> > index 0159a3164..26e4dcc85 100644
> > --- a/tests/intel/xe_intel_bb.c
> > +++ b/tests/intel/xe_intel_bb.c
> > @@ -191,7 +191,7 @@ static void simple_bb(struct buf_ops *bops, bool new_context)
> > intel_bb_reset(ibb, true);
> >
> > if (new_context) {
> > - vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > + vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > ctx = xe_exec_queue_create(xe, vm, xe_hw_engine(xe, 0), 0);
> > intel_bb_destroy(ibb);
> > ibb = intel_bb_create_with_context(xe, ctx, vm, NULL, PAGE_SIZE);
> > diff --git a/tests/intel/xe_pm.c b/tests/intel/xe_pm.c
> > index fd28d5630..b2976ec84 100644
> > --- a/tests/intel/xe_pm.c
> > +++ b/tests/intel/xe_pm.c
> > @@ -259,7 +259,7 @@ test_exec(device_t device, struct drm_xe_engine_class_instance *eci,
> > if (check_rpm)
> > igt_assert(in_d3(device, d_state));
> >
> > - vm = xe_vm_create(device.fd_xe, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > + vm = xe_vm_create(device.fd_xe, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> >
> > if (check_rpm)
> > igt_assert(out_of_d3(device, d_state));
> > diff --git a/tests/intel/xe_vm.c b/tests/intel/xe_vm.c
> > index 89df6149a..dd3302337 100644
> > --- a/tests/intel/xe_vm.c
> > +++ b/tests/intel/xe_vm.c
> > @@ -275,7 +275,7 @@ static void unbind_all(int fd, int n_vmas)
> > { .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> > };
> >
> > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > bo = xe_bo_create(fd, 0, vm, bo_size);
> >
> > for (i = 0; i < n_vmas; ++i)
> > @@ -322,171 +322,6 @@ static void userptr_invalid(int fd)
> > xe_vm_destroy(fd, vm);
> > }
> >
> > -struct vm_thread_data {
> > - pthread_t thread;
> > - int fd;
> > - int vm;
> > - uint32_t bo;
> > - size_t bo_size;
> > - bool destroy;
> > -};
> > -
> > -/**
> > - * SUBTEST: vm-async-ops-err
> > - * Description: Test VM async ops error
> > - * Functionality: VM
> > - * Test category: negative test
> > - *
> > - * SUBTEST: vm-async-ops-err-destroy
> > - * Description: Test VM async ops error destroy
> > - * Functionality: VM
> > - * Test category: negative test
> > - */
> > -
> > -static void *vm_async_ops_err_thread(void *data)
> > -{
> > - struct vm_thread_data *args = data;
> > - int fd = args->fd;
> > - uint64_t addr = 0x201a0000;
> > - int num_binds = 0;
> > - int ret;
> > -
> > - struct drm_xe_wait_user_fence wait = {
> > - .vm_id = args->vm,
> > - .op = DRM_XE_UFENCE_WAIT_NEQ,
> > - .flags = DRM_XE_UFENCE_WAIT_VM_ERROR,
> > - .mask = DRM_XE_UFENCE_WAIT_U32,
> > - .timeout = MS_TO_NS(1000),
> > - };
> > -
> > - igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_WAIT_USER_FENCE,
> > - &wait), 0);
> > - if (args->destroy) {
> > - usleep(5000); /* Wait other binds to queue up */
> > - xe_vm_destroy(fd, args->vm);
> > - return NULL;
> > - }
> > -
> > - while (!ret) {
> > - struct drm_xe_vm_bind bind = {
> > - .vm_id = args->vm,
> > - .num_binds = 1,
> > - .bind.op = XE_VM_BIND_OP_RESTART,
> > - };
> > -
> > - /* VM sync ops should work */
> > - if (!(num_binds++ % 2)) {
> > - xe_vm_bind_sync(fd, args->vm, args->bo, 0, addr,
> > - args->bo_size);
> > - } else {
> > - xe_vm_unbind_sync(fd, args->vm, 0, addr,
> > - args->bo_size);
> > - addr += args->bo_size * 2;
> > - }
> > -
> > - /* Restart and wait for next error */
> > - igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_VM_BIND,
> > - &bind), 0);
> > - ret = igt_ioctl(fd, DRM_IOCTL_XE_WAIT_USER_FENCE, &wait);
> > - }
> > -
> > - return NULL;
> > -}
> > -
> > -static void vm_async_ops_err(int fd, bool destroy)
> > -{
> > - uint32_t vm;
> > - uint64_t addr = 0x1a0000;
> > - struct drm_xe_sync sync = {
> > - .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL,
> > - };
> > -#define N_BINDS 32
> > - struct vm_thread_data thread = {};
> > - uint32_t syncobjs[N_BINDS];
> > - size_t bo_size = 0x1000 * 32;
> > - uint32_t bo;
> > - int i, j;
> > -
> > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > - bo = xe_bo_create(fd, 0, vm, bo_size);
> > -
> > - thread.fd = fd;
> > - thread.vm = vm;
> > - thread.bo = bo;
> > - thread.bo_size = bo_size;
> > - thread.destroy = destroy;
> > - pthread_create(&thread.thread, 0, vm_async_ops_err_thread, &thread);
> > -
> > - for (i = 0; i < N_BINDS; i++)
> > - syncobjs[i] = syncobj_create(fd, 0);
> > -
> > - for (j = 0, i = 0; i < N_BINDS / 4; i++, j++) {
> > - sync.handle = syncobjs[j];
> > -#define INJECT_ERROR (0x1 << 31)
> > - if (i == N_BINDS / 8) /* Inject error on this bind */
> > - __xe_vm_bind_assert(fd, vm, 0, bo, 0,
> > - addr + i * bo_size * 2,
> > - bo_size, XE_VM_BIND_OP_MAP,
> > - XE_VM_BIND_FLAG_ASYNC |
> > - INJECT_ERROR, &sync, 1, 0, 0);
> > - else
> > - xe_vm_bind_async(fd, vm, 0, bo, 0,
> > - addr + i * bo_size * 2,
> > - bo_size, &sync, 1);
> > - }
> > -
> > - for (i = 0; i < N_BINDS / 4; i++, j++) {
> > - sync.handle = syncobjs[j];
> > - if (i == N_BINDS / 8)
> > - __xe_vm_bind_assert(fd, vm, 0, 0, 0,
> > - addr + i * bo_size * 2,
> > - bo_size, XE_VM_BIND_OP_UNMAP,
> > - XE_VM_BIND_FLAG_ASYNC |
> > - INJECT_ERROR, &sync, 1, 0, 0);
> > - else
> > - xe_vm_unbind_async(fd, vm, 0, 0,
> > - addr + i * bo_size * 2,
> > - bo_size, &sync, 1);
> > - }
> > -
> > - for (i = 0; i < N_BINDS / 4; i++, j++) {
> > - sync.handle = syncobjs[j];
> > - if (i == N_BINDS / 8)
> > - __xe_vm_bind_assert(fd, vm, 0, bo, 0,
> > - addr + i * bo_size * 2,
> > - bo_size, XE_VM_BIND_OP_MAP,
> > - XE_VM_BIND_FLAG_ASYNC |
> > - INJECT_ERROR, &sync, 1, 0, 0);
> > - else
> > - xe_vm_bind_async(fd, vm, 0, bo, 0,
> > - addr + i * bo_size * 2,
> > - bo_size, &sync, 1);
> > - }
> > -
> > - for (i = 0; i < N_BINDS / 4; i++, j++) {
> > - sync.handle = syncobjs[j];
> > - if (i == N_BINDS / 8)
> > - __xe_vm_bind_assert(fd, vm, 0, 0, 0,
> > - addr + i * bo_size * 2,
> > - bo_size, XE_VM_BIND_OP_UNMAP,
> > - XE_VM_BIND_FLAG_ASYNC |
> > - INJECT_ERROR, &sync, 1, 0, 0);
> > - else
> > - xe_vm_unbind_async(fd, vm, 0, 0,
> > - addr + i * bo_size * 2,
> > - bo_size, &sync, 1);
> > - }
> > -
> > - for (i = 0; i < N_BINDS; i++)
> > - igt_assert(syncobj_wait(fd, &syncobjs[i], 1, INT64_MAX, 0,
> > - NULL));
> > -
> > - if (!destroy)
> > - xe_vm_destroy(fd, vm);
> > -
> > - pthread_join(thread.thread, NULL);
> > -}
> > -
> > /**
> > * SUBTEST: shared-%s-page
> > * Description: Test shared arg[1] page
> > @@ -537,7 +372,7 @@ shared_pte_page(int fd, struct drm_xe_engine_class_instance *eci, int n_bo,
> > data = malloc(sizeof(*data) * n_bo);
> > igt_assert(data);
> >
> > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > bo_size = sizeof(struct shared_pte_page_data);
> > bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> > xe_get_default_alignment(fd));
> > @@ -718,7 +553,7 @@ test_bind_execqueues_independent(int fd, struct drm_xe_engine_class_instance *ec
> > struct xe_spin_opts spin_opts = { .preempt = true };
> > int i, b;
> >
> > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > bo_size = sizeof(*data) * N_EXEC_QUEUES;
> > bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> > xe_get_default_alignment(fd));
> > @@ -728,7 +563,7 @@ test_bind_execqueues_independent(int fd, struct drm_xe_engine_class_instance *ec
> >
> > for (i = 0; i < N_EXEC_QUEUES; i++) {
> > exec_queues[i] = xe_exec_queue_create(fd, vm, eci, 0);
> > - bind_exec_queues[i] = xe_bind_exec_queue_create(fd, vm, 0);
> > + bind_exec_queues[i] = xe_bind_exec_queue_create(fd, vm, 0, true);
> > syncobjs[i] = syncobj_create(fd, 0);
> > }
> > syncobjs[N_EXEC_QUEUES] = syncobj_create(fd, 0);
> > @@ -898,7 +733,7 @@ test_bind_array(int fd, struct drm_xe_engine_class_instance *eci, int n_execs,
> >
> > igt_assert(n_execs <= BIND_ARRAY_MAX_N_EXEC);
> >
> > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > bo_size = sizeof(*data) * n_execs;
> > bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> > xe_get_default_alignment(fd));
> > @@ -908,7 +743,7 @@ test_bind_array(int fd, struct drm_xe_engine_class_instance *eci, int n_execs,
> > data = xe_bo_map(fd, bo, bo_size);
> >
> > if (flags & BIND_ARRAY_BIND_EXEC_QUEUE_FLAG)
> > - bind_exec_queue = xe_bind_exec_queue_create(fd, vm, 0);
> > + bind_exec_queue = xe_bind_exec_queue_create(fd, vm, 0, true);
> > exec_queue = xe_exec_queue_create(fd, vm, eci, 0);
> >
> > for (i = 0; i < n_execs; ++i) {
> > @@ -1092,7 +927,7 @@ test_large_binds(int fd, struct drm_xe_engine_class_instance *eci,
> > }
> >
> > igt_assert(n_exec_queues <= MAX_N_EXEC_QUEUES);
> > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> >
> > if (flags & LARGE_BIND_FLAG_USERPTR) {
> > map = aligned_alloc(xe_get_default_alignment(fd), bo_size);
> > @@ -1384,7 +1219,7 @@ test_munmap_style_unbind(int fd, struct drm_xe_engine_class_instance *eci,
> > unbind_n_page_offset *= n_page_per_2mb;
> > }
> >
> > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > bo_size = page_size * bo_n_pages;
> >
> > if (flags & MAP_FLAG_USERPTR) {
> > @@ -1684,7 +1519,7 @@ test_mmap_style_bind(int fd, struct drm_xe_engine_class_instance *eci,
> > unbind_n_page_offset *= n_page_per_2mb;
> > }
> >
> > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > bo_size = page_size * bo_n_pages;
> >
> > if (flags & MAP_FLAG_USERPTR) {
> > @@ -2001,12 +1836,6 @@ igt_main
> > igt_subtest("userptr-invalid")
> > userptr_invalid(fd);
> >
> > - igt_subtest("vm-async-ops-err")
> > - vm_async_ops_err(fd, false);
> > -
> > - igt_subtest("vm-async-ops-err-destroy")
> > - vm_async_ops_err(fd, true);
> > -
> > igt_subtest("shared-pte-page")
> > xe_for_each_hw_engine(fd, hwe)
> > shared_pte_page(fd, hwe, 4,
> > diff --git a/tests/intel/xe_waitfence.c b/tests/intel/xe_waitfence.c
> > index 34005fbeb..e0116f181 100644
> > --- a/tests/intel/xe_waitfence.c
> > +++ b/tests/intel/xe_waitfence.c
> > @@ -34,7 +34,7 @@ static void do_bind(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
> >
> > sync[0].addr = to_user_pointer(&wait_fence);
> > sync[0].timeline_value = val;
> > - xe_vm_bind(fd, vm, bo, offset, addr, size, sync, 1);
> > + xe_vm_bind_async(fd, vm, 0, bo, offset, addr, size, sync, 1);
> > }
> >
> > enum waittype {
> > @@ -63,7 +63,7 @@ waitfence(int fd, enum waittype wt)
> > uint32_t bo_7;
> > int64_t timeout;
> >
> > - uint32_t vm = xe_vm_create(fd, 0, 0);
> > + uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > bo_1 = xe_bo_create_flags(fd, vm, 0x40000, MY_FLAG);
> > do_bind(fd, vm, bo_1, 0, 0x200000, 0x40000, 1);
>
>
> Missing XE_VM_BIND_FLAG_ASYNC with the async vm... this and other tests here have a similar problem.
It seems this flag is set in xe_vm_bind_async(), which is called from do_bind(), so without
it the test would fail.
>
>
> > bo_2 = xe_bo_create_flags(fd, vm, 0x40000, MY_FLAG);
> > @@ -96,21 +96,6 @@ waitfence(int fd, enum waittype wt)
> > ", elapsed: %" PRId64 "\n",
> > timeout, signalled, signalled - current);
> > }
> > -
> > - xe_vm_unbind_sync(fd, vm, 0, 0x200000, 0x40000);
> > - xe_vm_unbind_sync(fd, vm, 0, 0xc0000000, 0x40000);
> > - xe_vm_unbind_sync(fd, vm, 0, 0x180000000, 0x40000);
> > - xe_vm_unbind_sync(fd, vm, 0, 0x140000000, 0x10000);
> > - xe_vm_unbind_sync(fd, vm, 0, 0x100000000, 0x100000);
> > - xe_vm_unbind_sync(fd, vm, 0, 0xc0040000, 0x1c0000);
> > - xe_vm_unbind_sync(fd, vm, 0, 0xeffff0000, 0x10000);
> > - gem_close(fd, bo_7);
> > - gem_close(fd, bo_6);
> > - gem_close(fd, bo_5);
> > - gem_close(fd, bo_4);
> > - gem_close(fd, bo_3);
> > - gem_close(fd, bo_2);
> > - gem_close(fd, bo_1);
>
> unrelated change.
>
> > }
> >
> > igt_main
>
* Re: [igt-dev] [PATCH v4 10/14] xe: Update to new VM bind uAPI
2023-10-03 9:35 ` Francois Dugast
@ 2023-10-03 14:25 ` Souza, Jose
0 siblings, 0 replies; 31+ messages in thread
From: Souza, Jose @ 2023-10-03 14:25 UTC (permalink / raw)
To: Dugast, Francois; +Cc: igt-dev@lists.freedesktop.org, Vivi, Rodrigo
On Tue, 2023-10-03 at 11:35 +0200, Francois Dugast wrote:
> On Fri, Sep 29, 2023 at 06:32:55PM +0200, Souza, Jose wrote:
> > On Thu, 2023-09-28 at 11:05 +0000, Francois Dugast wrote:
> > > From: Matthew Brost <matthew.brost@intel.com>
> > >
> > > Sync vs. async changes and new error handling.
> > >
> > > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > > Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> > > [Rodrigo rebased and fixed conflicts]
> > > ---
> > > include/drm-uapi/xe_drm.h | 50 ++------
> > > lib/igt_fb.c | 2 +-
> > > lib/intel_batchbuffer.c | 2 +-
> > > lib/intel_compute.c | 2 +-
> > > lib/xe/xe_ioctl.c | 15 +--
> > > lib/xe/xe_ioctl.h | 3 +-
> > > lib/xe/xe_query.c | 2 +-
> > > tests/intel/xe_ccs.c | 4 +-
> > > tests/intel/xe_create.c | 6 +-
> > > tests/intel/xe_drm_fdinfo.c | 4 +-
> > > tests/intel/xe_evict.c | 23 ++--
> > > tests/intel/xe_exec_balancer.c | 6 +-
> > > tests/intel/xe_exec_basic.c | 6 +-
> > > tests/intel/xe_exec_compute_mode.c | 6 +-
> > > tests/intel/xe_exec_fault_mode.c | 6 +-
> > > tests/intel/xe_exec_reset.c | 8 +-
> > > tests/intel/xe_exec_store.c | 4 +-
> > > tests/intel/xe_exec_threads.c | 112 +++++------------
> > > tests/intel/xe_exercise_blt.c | 2 +-
> > > tests/intel/xe_guc_pc.c | 2 +-
> > > tests/intel/xe_huc_copy.c | 2 +-
> > > tests/intel/xe_intel_bb.c | 2 +-
> > > tests/intel/xe_pm.c | 2 +-
> > > tests/intel/xe_vm.c | 189 ++---------------------------
> > > tests/intel/xe_waitfence.c | 19 +--
> > > 25 files changed, 102 insertions(+), 377 deletions(-)
> > >
> > > diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
> > > index 0a05a12b2..80b4c76f3 100644
> > > --- a/include/drm-uapi/xe_drm.h
> > > +++ b/include/drm-uapi/xe_drm.h
> > > @@ -134,10 +134,11 @@ struct drm_xe_engine_class_instance {
> > > #define DRM_XE_ENGINE_CLASS_VIDEO_ENHANCE 3
> > > #define DRM_XE_ENGINE_CLASS_COMPUTE 4
> > > /*
> > > - * Kernel only class (not actual hardware engine class). Used for
> > > + * Kernel only classes (not actual hardware engine class). Used for
> > > * creating ordered queues of VM bind operations.
> > > */
> > > -#define DRM_XE_ENGINE_CLASS_VM_BIND 5
> > > +#define DRM_XE_ENGINE_CLASS_VM_BIND_ASYNC 5
> > > +#define DRM_XE_ENGINE_CLASS_VM_BIND_SYNC 6
> > > __u16 engine_class;
> > >
> > > __u16 engine_instance;
> > > @@ -577,7 +578,7 @@ struct drm_xe_vm_create {
> > >
> > > #define DRM_XE_VM_CREATE_SCRATCH_PAGE (0x1 << 0)
> > > #define DRM_XE_VM_CREATE_COMPUTE_MODE (0x1 << 1)
> > > -#define DRM_XE_VM_CREATE_ASYNC_BIND_OPS (0x1 << 2)
> > > +#define DRM_XE_VM_CREATE_ASYNC_DEFAULT (0x1 << 2)
> > > #define DRM_XE_VM_CREATE_FAULT_MODE (0x1 << 3)
> > > /** @flags: Flags */
> > > __u32 flags;
> > > @@ -637,34 +638,12 @@ struct drm_xe_vm_bind_op {
> > > #define XE_VM_BIND_OP_MAP 0x0
> > > #define XE_VM_BIND_OP_UNMAP 0x1
> > > #define XE_VM_BIND_OP_MAP_USERPTR 0x2
> > > -#define XE_VM_BIND_OP_RESTART 0x3
> > > -#define XE_VM_BIND_OP_UNMAP_ALL 0x4
> > > -#define XE_VM_BIND_OP_PREFETCH 0x5
> > > +#define XE_VM_BIND_OP_UNMAP_ALL 0x3
> > > +#define XE_VM_BIND_OP_PREFETCH 0x4
> > > /** @op: Bind operation to perform */
> > > __u32 op;
> > >
> > > #define XE_VM_BIND_FLAG_READONLY (0x1 << 0)
> > > - /*
> > > - * A bind ops completions are always async, hence the support for out
> > > - * sync. This flag indicates the allocation of the memory for new page
> > > - * tables and the job to program the pages tables is asynchronous
> > > - * relative to the IOCTL. That part of a bind operation can fail under
> > > - * memory pressure, the job in practice can't fail unless the system is
> > > - * totally shot.
> > > - *
> > > - * If this flag is clear and the IOCTL doesn't return an error, in
> > > - * practice the bind op is good and will complete.
> > > - *
> > > - * If this flag is set and doesn't return an error, the bind op can
> > > - * still fail and recovery is needed. It should free memory
> > > - * via non-async unbinds, and then restart all queued async binds op via
> > > - * XE_VM_BIND_OP_RESTART. Or alternatively the user should destroy the
> > > - * VM.
> > > - *
> > > - * This flag is only allowed when DRM_XE_VM_CREATE_ASYNC_BIND_OPS is
> > > - * configured in the VM and must be set if the VM is configured with
> > > - * DRM_XE_VM_CREATE_ASYNC_BIND_OPS and not in an error state.
> > > - */
> > > #define XE_VM_BIND_FLAG_ASYNC (0x1 << 1)
> > > /*
> > > * Valid on a faulting VM only, do the MAP operation immediately rather
> > > @@ -905,18 +884,10 @@ struct drm_xe_wait_user_fence {
> > > /** @extensions: Pointer to the first extension struct, if any */
> > > __u64 extensions;
> > >
> > > - union {
> > > - /**
> > > - * @addr: user pointer address to wait on, must qword aligned
> > > - */
> > > - __u64 addr;
> > > -
> > > - /**
> > > - * @vm_id: The ID of the VM which encounter an error used with
> > > - * DRM_XE_UFENCE_WAIT_VM_ERROR. Upper 32 bits must be clear.
> > > - */
> > > - __u64 vm_id;
> > > - };
> > > + /**
> > > + * @addr: user pointer address to wait on, must qword aligned
> > > + */
> > > + __u64 addr;
> > >
> > > #define DRM_XE_UFENCE_WAIT_EQ 0
> > > #define DRM_XE_UFENCE_WAIT_NEQ 1
> > > @@ -929,7 +900,6 @@ struct drm_xe_wait_user_fence {
> > >
> > > #define DRM_XE_UFENCE_WAIT_SOFT_OP (1 << 0) /* e.g. Wait on VM bind */
> > > #define DRM_XE_UFENCE_WAIT_ABSTIME (1 << 1)
> > > -#define DRM_XE_UFENCE_WAIT_VM_ERROR (1 << 2)
> > > /** @flags: wait flags */
> > > __u16 flags;
> > >
> > > diff --git a/lib/igt_fb.c b/lib/igt_fb.c
> > > index f0c0681ab..34934855a 100644
> > > --- a/lib/igt_fb.c
> > > +++ b/lib/igt_fb.c
> > > @@ -2892,7 +2892,7 @@ static void blitcopy(const struct igt_fb *dst_fb,
> > > &bb_size,
> > > mem_region) == 0);
> > > } else if (is_xe) {
> > > - vm = xe_vm_create(dst_fb->fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > > + vm = xe_vm_create(dst_fb->fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > > exec_queue = xe_exec_queue_create(dst_fb->fd, vm, &inst, 0);
> > > xe_ctx = intel_ctx_xe(dst_fb->fd, vm, exec_queue, 0, 0, 0);
> > > mem_region = vram_if_possible(dst_fb->fd, 0);
> > > diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
> > > index 6e668d28c..df82ef5f5 100644
> > > --- a/lib/intel_batchbuffer.c
> > > +++ b/lib/intel_batchbuffer.c
> > > @@ -953,7 +953,7 @@ __intel_bb_create(int fd, uint32_t ctx, uint32_t vm, const intel_ctx_cfg_t *cfg,
> > >
> > > if (!vm) {
> > > igt_assert_f(!ctx, "No vm provided for engine");
> > > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > > }
> > >
> > > ibb->uses_full_ppgtt = true;
> > > diff --git a/lib/intel_compute.c b/lib/intel_compute.c
> > > index 0c30f39c1..1ae33cdfc 100644
> > > --- a/lib/intel_compute.c
> > > +++ b/lib/intel_compute.c
> > > @@ -79,7 +79,7 @@ static void bo_execenv_create(int fd, struct bo_execenv *execenv)
> > > else
> > > engine_class = DRM_XE_ENGINE_CLASS_COMPUTE;
> > >
> > > - execenv->vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > > + execenv->vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > > execenv->exec_queue = xe_exec_queue_create_class(fd, execenv->vm,
> > > engine_class);
> > > }
> > > diff --git a/lib/xe/xe_ioctl.c b/lib/xe/xe_ioctl.c
> > > index 48cd185de..895e3bd4e 100644
> > > --- a/lib/xe/xe_ioctl.c
> > > +++ b/lib/xe/xe_ioctl.c
> > > @@ -201,16 +201,8 @@ void xe_vm_unbind_async(int fd, uint32_t vm, uint32_t exec_queue,
> > > static void __xe_vm_bind_sync(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
> > > uint64_t addr, uint64_t size, uint32_t op)
> > > {
> > > - struct drm_xe_sync sync = {
> > > - .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL,
> > > - .handle = syncobj_create(fd, 0),
> > > - };
> > > -
> > > - __xe_vm_bind_assert(fd, vm, 0, bo, offset, addr, size, op, 0, &sync, 1,
> > > + __xe_vm_bind_assert(fd, vm, 0, bo, offset, addr, size, op, 0, NULL, 0,
> > > 0, 0);
> > > -
> > > - igt_assert(syncobj_wait(fd, &sync.handle, 1, INT64_MAX, 0, NULL));
> > > - syncobj_destroy(fd, sync.handle);
> > > }
> > >
> > > void xe_vm_bind_sync(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
> > > @@ -276,10 +268,11 @@ uint32_t xe_bo_create(int fd, int gt, uint32_t vm, uint64_t size)
> > > return create.handle;
> > > }
> > >
> > > -uint32_t xe_bind_exec_queue_create(int fd, uint32_t vm, uint64_t ext)
> > > +uint32_t xe_bind_exec_queue_create(int fd, uint32_t vm, uint64_t ext, bool async)
> > > {
> > > struct drm_xe_engine_class_instance instance = {
> > > - .engine_class = DRM_XE_ENGINE_CLASS_VM_BIND,
> > > + .engine_class = async ? DRM_XE_ENGINE_CLASS_VM_BIND_ASYNC :
> > > + DRM_XE_ENGINE_CLASS_VM_BIND_SYNC,
> > > };
> > > struct drm_xe_exec_queue_create create = {
> > > .extensions = ext,
> > > diff --git a/lib/xe/xe_ioctl.h b/lib/xe/xe_ioctl.h
> > > index f0e4109dc..a8dbcf376 100644
> > > --- a/lib/xe/xe_ioctl.h
> > > +++ b/lib/xe/xe_ioctl.h
> > > @@ -71,7 +71,8 @@ uint32_t xe_bo_create(int fd, int gt, uint32_t vm, uint64_t size);
> > > uint32_t xe_exec_queue_create(int fd, uint32_t vm,
> > > struct drm_xe_engine_class_instance *instance,
> > > uint64_t ext);
> > > -uint32_t xe_bind_exec_queue_create(int fd, uint32_t vm, uint64_t ext);
> > > +uint32_t xe_bind_exec_queue_create(int fd, uint32_t vm, uint64_t ext,
> > > + bool async);
> > > uint32_t xe_exec_queue_create_class(int fd, uint32_t vm, uint16_t class);
> > > void xe_exec_queue_destroy(int fd, uint32_t exec_queue);
> > > uint64_t xe_bo_mmap_offset(int fd, uint32_t bo);
> > > diff --git a/lib/xe/xe_query.c b/lib/xe/xe_query.c
> > > index c356abe1e..ab7b31188 100644
> > > --- a/lib/xe/xe_query.c
> > > +++ b/lib/xe/xe_query.c
> > > @@ -316,7 +316,7 @@ bool xe_supports_faults(int fd)
> > > bool supports_faults;
> > >
> > > struct drm_xe_vm_create create = {
> > > - .flags = DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
> > > + .flags = DRM_XE_VM_CREATE_ASYNC_DEFAULT |
> > > DRM_XE_VM_CREATE_FAULT_MODE,
> > > };
> > >
> > > diff --git a/tests/intel/xe_ccs.c b/tests/intel/xe_ccs.c
> > > index 20bbc4448..300b734c8 100644
> > > --- a/tests/intel/xe_ccs.c
> > > +++ b/tests/intel/xe_ccs.c
> > > @@ -343,7 +343,7 @@ static void block_copy(int xe,
> > > uint32_t vm, exec_queue;
> > >
> > > if (config->new_ctx) {
> > > - vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > > + vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > > exec_queue = xe_exec_queue_create(xe, vm, &inst, 0);
> > > surf_ctx = intel_ctx_xe(xe, vm, exec_queue, 0, 0, 0);
> > > surf_ahnd = intel_allocator_open(xe, surf_ctx->vm,
> > > @@ -550,7 +550,7 @@ static void block_copy_test(int xe,
> > > copyfns[copy_function].suffix) {
> > > uint32_t sync_bind, sync_out;
> > >
> > > - vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > > + vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > > exec_queue = xe_exec_queue_create(xe, vm, &inst, 0);
> > > sync_bind = syncobj_create(xe, 0);
> > > sync_out = syncobj_create(xe, 0);
> > > diff --git a/tests/intel/xe_create.c b/tests/intel/xe_create.c
> > > index 8d845e5c8..d99bd51cf 100644
> > > --- a/tests/intel/xe_create.c
> > > +++ b/tests/intel/xe_create.c
> > > @@ -54,7 +54,7 @@ static void create_invalid_size(int fd)
> > > uint32_t handle;
> > > int ret;
> > >
> > > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > >
> > > xe_for_each_mem_region(fd, memreg, region) {
> > > memregion = xe_mem_region(fd, region);
> > > @@ -140,7 +140,7 @@ static void create_execqueues(int fd, enum exec_queue_destroy ed)
> > >
> > > fd = drm_reopen_driver(fd);
> > > num_engines = xe_number_hw_engines(fd);
> > > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > >
> > > exec_queues_per_process = max_t(uint32_t, 1, MAXEXECQUEUES / nproc);
> > > igt_debug("nproc: %u, exec_queues per process: %u\n", nproc, exec_queues_per_process);
> > > @@ -199,7 +199,7 @@ static void create_massive_size(int fd)
> > > uint32_t handle;
> > > int ret;
> > >
> > > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > >
> > > xe_for_each_mem_region(fd, memreg, region) {
> > > ret = __create_bo(fd, vm, -1ULL << 32, region, &handle);
> > > diff --git a/tests/intel/xe_drm_fdinfo.c b/tests/intel/xe_drm_fdinfo.c
> > > index 22e410e14..64168ed19 100644
> > > --- a/tests/intel/xe_drm_fdinfo.c
> > > +++ b/tests/intel/xe_drm_fdinfo.c
> > > @@ -71,7 +71,7 @@ static void test_active(int fd, struct drm_xe_engine_class_instance *eci)
> > > struct xe_spin_opts spin_opts = { .preempt = true };
> > > int i, b, ret;
> > >
> > > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > > bo_size = sizeof(*data) * N_EXEC_QUEUES;
> > > bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> > > xe_get_default_alignment(fd));
> > > @@ -90,7 +90,7 @@ static void test_active(int fd, struct drm_xe_engine_class_instance *eci)
> > >
> > > for (i = 0; i < N_EXEC_QUEUES; i++) {
> > > exec_queues[i] = xe_exec_queue_create(fd, vm, eci, 0);
> > > - bind_exec_queues[i] = xe_bind_exec_queue_create(fd, vm, 0);
> > > + bind_exec_queues[i] = xe_bind_exec_queue_create(fd, vm, 0, true);
> > > syncobjs[i] = syncobj_create(fd, 0);
> > > }
> > > syncobjs[N_EXEC_QUEUES] = syncobj_create(fd, 0);
> > > diff --git a/tests/intel/xe_evict.c b/tests/intel/xe_evict.c
> > > index 5d8981f8d..eec001218 100644
> > > --- a/tests/intel/xe_evict.c
> > > +++ b/tests/intel/xe_evict.c
> > > @@ -63,15 +63,17 @@ test_evict(int fd, struct drm_xe_engine_class_instance *eci,
> > >
> > > fd = drm_open_driver(DRIVER_XE);
> > >
> > > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > > if (flags & BIND_EXEC_QUEUE)
> > > - bind_exec_queues[0] = xe_bind_exec_queue_create(fd, vm, 0);
> > > + bind_exec_queues[0] = xe_bind_exec_queue_create(fd, vm, 0, true);
> > > if (flags & MULTI_VM) {
> > > - vm2 = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > > - vm3 = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > > + vm2 = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > > + vm3 = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > > if (flags & BIND_EXEC_QUEUE) {
> > > - bind_exec_queues[1] = xe_bind_exec_queue_create(fd, vm2, 0);
> > > - bind_exec_queues[2] = xe_bind_exec_queue_create(fd, vm3, 0);
> > > + bind_exec_queues[1] = xe_bind_exec_queue_create(fd, vm2,
> > > + 0, true);
> > > + bind_exec_queues[2] = xe_bind_exec_queue_create(fd, vm3,
> > > + 0, true);
> > > }
> > > }
> > >
> > > @@ -240,15 +242,16 @@ test_evict_cm(int fd, struct drm_xe_engine_class_instance *eci,
> > >
> > > fd = drm_open_driver(DRIVER_XE);
> > >
> > > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
> > > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
> > > DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
> > > if (flags & BIND_EXEC_QUEUE)
> > > - bind_exec_queues[0] = xe_bind_exec_queue_create(fd, vm, 0);
> > > + bind_exec_queues[0] = xe_bind_exec_queue_create(fd, vm, 0, true);
> > > if (flags & MULTI_VM) {
> > > - vm2 = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
> > > + vm2 = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
> > > DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
> > > if (flags & BIND_EXEC_QUEUE)
> > > - bind_exec_queues[1] = xe_bind_exec_queue_create(fd, vm2, 0);
> > > + bind_exec_queues[1] = xe_bind_exec_queue_create(fd, vm2,
> > > + 0, true);
> > > }
> > >
> > > for (i = 0; i < n_exec_queues; i++) {
> > > diff --git a/tests/intel/xe_exec_balancer.c b/tests/intel/xe_exec_balancer.c
> > > index f4f5440f4..3ca3de881 100644
> > > --- a/tests/intel/xe_exec_balancer.c
> > > +++ b/tests/intel/xe_exec_balancer.c
> > > @@ -66,7 +66,7 @@ static void test_all_active(int fd, int gt, int class)
> > > if (num_placements < 2)
> > > return;
> > >
> > > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > > bo_size = sizeof(*data) * num_placements;
> > > bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd), xe_get_default_alignment(fd));
> > >
> > > @@ -207,7 +207,7 @@ test_exec(int fd, int gt, int class, int n_exec_queues, int n_execs,
> > > if (num_placements < 2)
> > > return;
> > >
> > > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > > bo_size = sizeof(*data) * n_execs;
> > > bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd), xe_get_default_alignment(fd));
> > >
> > > @@ -433,7 +433,7 @@ test_cm(int fd, int gt, int class, int n_exec_queues, int n_execs,
> > > if (num_placements < 2)
> > > return;
> > >
> > > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
> > > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
> > > DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
> > > bo_size = sizeof(*data) * n_execs;
> > > bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> > > diff --git a/tests/intel/xe_exec_basic.c b/tests/intel/xe_exec_basic.c
> > > index e29398aaa..8dbce524d 100644
> > > --- a/tests/intel/xe_exec_basic.c
> > > +++ b/tests/intel/xe_exec_basic.c
> > > @@ -109,7 +109,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
> > > igt_assert(n_vm <= MAX_N_EXEC_QUEUES);
> > >
> > > for (i = 0; i < n_vm; ++i)
> > > - vm[i] = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > > + vm[i] = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > > bo_size = sizeof(*data) * n_execs;
> > > bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> > > xe_get_default_alignment(fd));
> > > @@ -151,7 +151,9 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
> > >
> > > exec_queues[i] = xe_exec_queue_create(fd, __vm, eci, 0);
> > > if (flags & BIND_EXEC_QUEUE)
> > > - bind_exec_queues[i] = xe_bind_exec_queue_create(fd, __vm, 0);
> > > + bind_exec_queues[i] = xe_bind_exec_queue_create(fd,
> > > + __vm, 0,
> > > + true);
> > > else
> > > bind_exec_queues[i] = 0;
> > > syncobjs[i] = syncobj_create(fd, 0);
> > > diff --git a/tests/intel/xe_exec_compute_mode.c b/tests/intel/xe_exec_compute_mode.c
> > > index 02e7ef201..b0a677dca 100644
> > > --- a/tests/intel/xe_exec_compute_mode.c
> > > +++ b/tests/intel/xe_exec_compute_mode.c
> > > @@ -113,7 +113,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
> > >
> > > igt_assert(n_exec_queues <= MAX_N_EXECQUEUES);
> > >
> > > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
> > > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
> > > DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
> > > bo_size = sizeof(*data) * n_execs;
> > > bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> > > @@ -123,7 +123,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
> > > exec_queues[i] = xe_exec_queue_create(fd, vm, eci, 0);
> > > if (flags & BIND_EXECQUEUE)
> > > bind_exec_queues[i] =
> > > - xe_bind_exec_queue_create(fd, vm, 0);
> > > + xe_bind_exec_queue_create(fd, vm, 0, true);
> > > else
> > > bind_exec_queues[i] = 0;
> > > };
> > > @@ -151,7 +151,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
> > > exec_queues[i] = xe_exec_queue_create(fd, vm, eci, 0);
> > > if (flags & BIND_EXECQUEUE)
> > > bind_exec_queues[i] =
> > > - xe_bind_exec_queue_create(fd, vm, 0);
> > > + xe_bind_exec_queue_create(fd, vm, 0, true);
> > > else
> > > bind_exec_queues[i] = 0;
> > > };
> > > diff --git a/tests/intel/xe_exec_fault_mode.c b/tests/intel/xe_exec_fault_mode.c
> > > index c5d6bdcd5..92d8690a1 100644
> > > --- a/tests/intel/xe_exec_fault_mode.c
> > > +++ b/tests/intel/xe_exec_fault_mode.c
> > > @@ -131,7 +131,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
> > >
> > > igt_assert(n_exec_queues <= MAX_N_EXEC_QUEUES);
> > >
> > > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
> > > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
> > > DRM_XE_VM_CREATE_FAULT_MODE, 0);
> > > bo_size = sizeof(*data) * n_execs;
> > > bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> > > @@ -165,7 +165,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
> > > exec_queues[i] = xe_exec_queue_create(fd, vm, eci, 0);
> > > if (flags & BIND_EXEC_QUEUE)
> > > bind_exec_queues[i] =
> > > - xe_bind_exec_queue_create(fd, vm, 0);
> > > + xe_bind_exec_queue_create(fd, vm, 0, true);
> > > else
> > > bind_exec_queues[i] = 0;
> > > };
> > > @@ -375,7 +375,7 @@ test_atomic(int fd, struct drm_xe_engine_class_instance *eci,
> > > uint32_t *ptr;
> > > int i, b, wait_idx = 0;
> > >
> > > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
> > > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
> > > DRM_XE_VM_CREATE_FAULT_MODE, 0);
> > > bo_size = sizeof(*data) * n_atomic;
> > > bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> > > diff --git a/tests/intel/xe_exec_reset.c b/tests/intel/xe_exec_reset.c
> > > index ca8d7cc13..44248776b 100644
> > > --- a/tests/intel/xe_exec_reset.c
> > > +++ b/tests/intel/xe_exec_reset.c
> > > @@ -45,7 +45,7 @@ static void test_spin(int fd, struct drm_xe_engine_class_instance *eci)
> > > struct xe_spin *spin;
> > > struct xe_spin_opts spin_opts = { .addr = addr, .preempt = false };
> > >
> > > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > > bo_size = sizeof(*spin);
> > > bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> > > xe_get_default_alignment(fd));
> > > @@ -176,7 +176,7 @@ test_balancer(int fd, int gt, int class, int n_exec_queues, int n_execs,
> > > if (num_placements < 2)
> > > return;
> > >
> > > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > > bo_size = sizeof(*data) * n_execs;
> > > bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> > > xe_get_default_alignment(fd));
> > > @@ -362,7 +362,7 @@ test_legacy_mode(int fd, struct drm_xe_engine_class_instance *eci,
> > > if (flags & CLOSE_FD)
> > > fd = drm_open_driver(DRIVER_XE);
> > >
> > > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > > bo_size = sizeof(*data) * n_execs;
> > > bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> > > xe_get_default_alignment(fd));
> > > @@ -528,7 +528,7 @@ test_compute_mode(int fd, struct drm_xe_engine_class_instance *eci,
> > > if (flags & CLOSE_FD)
> > > fd = drm_open_driver(DRIVER_XE);
> > >
> > > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
> > > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
> > > DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
> > > bo_size = sizeof(*data) * n_execs;
> > > bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> > > diff --git a/tests/intel/xe_exec_store.c b/tests/intel/xe_exec_store.c
> > > index 14f7c9bec..90684b8cb 100644
> > > --- a/tests/intel/xe_exec_store.c
> > > +++ b/tests/intel/xe_exec_store.c
> > > @@ -75,7 +75,7 @@ static void store(int fd)
> > > syncobj = syncobj_create(fd, 0);
> > > sync.handle = syncobj;
> > >
> > > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > > bo_size = sizeof(*data);
> > > bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> > > xe_get_default_alignment(fd));
> > > @@ -132,7 +132,7 @@ static void store_all(int fd, int gt, int class)
> > > struct drm_xe_engine_class_instance *hwe;
> > > int i, num_placements = 0;
> > >
> > > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > > bo_size = sizeof(*data);
> > > bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> > > xe_get_default_alignment(fd));
> > > diff --git a/tests/intel/xe_exec_threads.c b/tests/intel/xe_exec_threads.c
> > > index c9a51fc00..bb16bdd88 100644
> > > --- a/tests/intel/xe_exec_threads.c
> > > +++ b/tests/intel/xe_exec_threads.c
> > > @@ -77,7 +77,7 @@ test_balancer(int fd, int gt, uint32_t vm, uint64_t addr, uint64_t userptr,
> > > }
> > >
> > > if (!vm) {
> > > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > > owns_vm = true;
> > > }
> > >
> > > @@ -285,7 +285,7 @@ test_compute_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
> > > }
> > >
> > > if (!vm) {
> > > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
> > > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
> > > DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
> > > owns_vm = true;
> > > }
> > > @@ -454,7 +454,7 @@ test_compute_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
> > > static void
> > > test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
> > > struct drm_xe_engine_class_instance *eci, int n_exec_queues,
> > > - int n_execs, int rebind_error_inject, unsigned int flags)
> > > + int n_execs, unsigned int flags)
> > > {
> > > struct drm_xe_sync sync[2] = {
> > > { .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> > > @@ -489,7 +489,7 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
> > > }
> > >
> > > if (!vm) {
> > > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > > owns_vm = true;
> > > }
> > >
> > > @@ -531,7 +531,8 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
> > > else
> > > exec_queues[i] = xe_exec_queue_create(fd, vm, eci, 0);
> > > if (flags & BIND_EXEC_QUEUE)
> > > - bind_exec_queues[i] = xe_bind_exec_queue_create(fd, vm, 0);
> > > + bind_exec_queues[i] = xe_bind_exec_queue_create(fd, vm,
> > > + 0, true);
> > > else
> > > bind_exec_queues[i] = 0;
> > > syncobjs[i] = syncobj_create(fd, 0);
> > > @@ -583,8 +584,7 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
> > > exec.address = exec_addr;
> > > if (e != i && !(flags & HANG))
> > > syncobj_reset(fd, &syncobjs[e], 1);
> > > - if ((flags & HANG && e == hang_exec_queue) ||
> > > - rebind_error_inject > 0) {
> > > + if (flags & HANG && e == hang_exec_queue) {
> > > int err;
> > >
> > > do {
> > > @@ -594,20 +594,10 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
> > > xe_exec(fd, &exec);
> > > }
> > >
> > > - if (flags & REBIND && i &&
> > > - (!(i & 0x1f) || rebind_error_inject == i)) {
> > > -#define INJECT_ERROR (0x1 << 31)
> > > - if (rebind_error_inject == i)
> > > - __xe_vm_bind_assert(fd, vm, bind_exec_queues[e],
> > > - 0, 0, addr, bo_size,
> > > - XE_VM_BIND_OP_UNMAP,
> > > - XE_VM_BIND_FLAG_ASYNC |
> > > - INJECT_ERROR, sync_all,
> > > - n_exec_queues, 0, 0);
> > > - else
> > > - xe_vm_unbind_async(fd, vm, bind_exec_queues[e],
> > > - 0, addr, bo_size,
> > > - sync_all, n_exec_queues);
> > > + if (flags & REBIND && i && !(i & 0x1f)) {
> > > + xe_vm_unbind_async(fd, vm, bind_exec_queues[e],
> > > + 0, addr, bo_size,
> > > + sync_all, n_exec_queues);
> > >
> > > sync[0].flags |= DRM_XE_SYNC_SIGNAL;
> > > addr += bo_size;
> > > @@ -709,7 +699,6 @@ struct thread_data {
> > > int n_exec_queue;
> > > int n_exec;
> > > int flags;
> > > - int rebind_error_inject;
> > > bool *go;
> > > };
> > >
> > > @@ -733,46 +722,7 @@ static void *thread(void *data)
> > > else
> > > test_legacy_mode(t->fd, t->vm_legacy_mode, t->addr, t->userptr,
> > > t->eci, t->n_exec_queue, t->n_exec,
> > > - t->rebind_error_inject, t->flags);
> > > -
> > > - return NULL;
> > > -}
> > > -
> > > -struct vm_thread_data {
> > > - pthread_t thread;
> > > - int fd;
> > > - int vm;
> > > -};
> > > -
> > > -static void *vm_async_ops_err_thread(void *data)
> > > -{
> > > - struct vm_thread_data *args = data;
> > > - int fd = args->fd;
> > > - int ret;
> > > -
> > > - struct drm_xe_wait_user_fence wait = {
> > > - .vm_id = args->vm,
> > > - .op = DRM_XE_UFENCE_WAIT_NEQ,
> > > - .flags = DRM_XE_UFENCE_WAIT_VM_ERROR,
> > > - .mask = DRM_XE_UFENCE_WAIT_U32,
> > > -#define BASICALLY_FOREVER 0xffffffffffff
> > > - .timeout = BASICALLY_FOREVER,
> > > - };
> > > -
> > > - ret = igt_ioctl(fd, DRM_IOCTL_XE_WAIT_USER_FENCE, &wait);
> > > -
> > > - while (!ret) {
> > > - struct drm_xe_vm_bind bind = {
> > > - .vm_id = args->vm,
> > > - .num_binds = 1,
> > > - .bind.op = XE_VM_BIND_OP_RESTART,
> > > - };
> > > -
> > > - /* Restart and wait for next error */
> > > - igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_VM_BIND,
> > > - &bind), 0);
> > > - ret = igt_ioctl(fd, DRM_IOCTL_XE_WAIT_USER_FENCE, &wait);
> > > - }
> > > + t->flags);
> > >
> > > return NULL;
> > > }
> > > @@ -826,6 +776,10 @@ static void *vm_async_ops_err_thread(void *data)
> > > * shared vm rebind err
> > > * @shared-vm-userptr-rebind-err:
> > > * shared vm userptr rebind err
> > > + * @rebind-err:
> > > + * rebind err
> > > + * @userptr-rebind-err:
> > > + * userptr rebind err
> > > * @shared-vm-userptr-invalidate:
> > > * shared vm userptr invalidate
> > > * @shared-vm-userptr-invalidate-race:
> > > @@ -842,7 +796,7 @@ static void *vm_async_ops_err_thread(void *data)
> > > * fd userptr invalidate race
> > > * @hang-basic:
> > > * hang basic
> > > - * @hang-userptr:
> > > + * @hang-userptr:
> > > * hang userptr
> > > * @hang-rebind:
> > > * hang rebind
> > > @@ -864,6 +818,10 @@ static void *vm_async_ops_err_thread(void *data)
> > > * hang shared vm rebind err
> > > * @hang-shared-vm-userptr-rebind-err:
> > > * hang shared vm userptr rebind err
> > > + * @hang-rebind-err:
> > > + * hang rebind err
> > > + * @hang-userptr-rebind-err:
> > > + * hang userptr rebind err
> > > * @hang-shared-vm-userptr-invalidate:
> > > * hang shared vm userptr invalidate
> > > * @hang-shared-vm-userptr-invalidate-race:
> > > @@ -1019,7 +977,6 @@ static void threads(int fd, int flags)
> > > int n_hw_engines = 0, class;
> > > uint64_t i = 0;
> > > uint32_t vm_legacy_mode = 0, vm_compute_mode = 0;
> > > - struct vm_thread_data vm_err_thread = {};
> > > bool go = false;
> > > int n_threads = 0;
> > > int gt;
> > > @@ -1052,18 +1009,12 @@ static void threads(int fd, int flags)
> > >
> > > if (flags & SHARED_VM) {
> > > vm_legacy_mode = xe_vm_create(fd,
> > > - DRM_XE_VM_CREATE_ASYNC_BIND_OPS,
> > > + DRM_XE_VM_CREATE_ASYNC_DEFAULT,
> > > 0);
> > > vm_compute_mode = xe_vm_create(fd,
> > > - DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
> > > + DRM_XE_VM_CREATE_ASYNC_DEFAULT |
> > > DRM_XE_VM_CREATE_COMPUTE_MODE,
> > > 0);
> > > -
> > > - vm_err_thread.fd = fd;
> > > - vm_err_thread.vm = vm_legacy_mode;
> > > - pthread_create(&vm_err_thread.thread, 0,
> > > - vm_async_ops_err_thread, &vm_err_thread);
> > > -
> > > }
> > >
> > > xe_for_each_hw_engine(fd, hwe) {
> > > @@ -1083,11 +1034,6 @@ static void threads(int fd, int flags)
> > > threads_data[i].n_exec_queue = N_EXEC_QUEUE;
> > > #define N_EXEC 1024
> > > threads_data[i].n_exec = N_EXEC;
> > > - if (flags & REBIND_ERROR)
> > > - threads_data[i].rebind_error_inject =
> > > - (N_EXEC / (n_hw_engines + 1)) * (i + 1);
> > > - else
> > > - threads_data[i].rebind_error_inject = -1;
> > > threads_data[i].flags = flags;
> > > if (flags & MIXED_MODE) {
> > > threads_data[i].flags &= ~MIXED_MODE;
> > > @@ -1190,8 +1136,6 @@ static void threads(int fd, int flags)
> > > if (vm_compute_mode)
> > > xe_vm_destroy(fd, vm_compute_mode);
> > > free(threads_data);
> > > - if (flags & SHARED_VM)
> > > - pthread_join(vm_err_thread.thread, NULL);
> > > pthread_barrier_destroy(&barrier);
> > > }
> > >
> > > @@ -1214,9 +1158,8 @@ igt_main
> > > { "shared-vm-rebind-bindexecqueue", SHARED_VM | REBIND |
> > > BIND_EXEC_QUEUE },
> > > { "shared-vm-userptr-rebind", SHARED_VM | USERPTR | REBIND },
> > > - { "shared-vm-rebind-err", SHARED_VM | REBIND | REBIND_ERROR },
> > > - { "shared-vm-userptr-rebind-err", SHARED_VM | USERPTR |
> > > - REBIND | REBIND_ERROR},
> > > + { "rebind-err", REBIND | REBIND_ERROR },
> > > + { "userptr-rebind-err", USERPTR | REBIND | REBIND_ERROR },
> > > { "shared-vm-userptr-invalidate", SHARED_VM | USERPTR |
> > > INVALIDATE },
> > > { "shared-vm-userptr-invalidate-race", SHARED_VM | USERPTR |
> > > @@ -1240,10 +1183,9 @@ igt_main
> > > { "hang-shared-vm-rebind", HANG | SHARED_VM | REBIND },
> > > { "hang-shared-vm-userptr-rebind", HANG | SHARED_VM | USERPTR |
> > > REBIND },
> > > - { "hang-shared-vm-rebind-err", HANG | SHARED_VM | REBIND |
> > > + { "hang-rebind-err", HANG | REBIND | REBIND_ERROR },
> > > + { "hang-userptr-rebind-err", HANG | USERPTR | REBIND |
> > > REBIND_ERROR },
> > > - { "hang-shared-vm-userptr-rebind-err", HANG | SHARED_VM |
> > > - USERPTR | REBIND | REBIND_ERROR },
> > > { "hang-shared-vm-userptr-invalidate", HANG | SHARED_VM |
> > > USERPTR | INVALIDATE },
> > > { "hang-shared-vm-userptr-invalidate-race", HANG | SHARED_VM |
> > > diff --git a/tests/intel/xe_exercise_blt.c b/tests/intel/xe_exercise_blt.c
> > > index ca85f5f18..2f349b16d 100644
> > > --- a/tests/intel/xe_exercise_blt.c
> > > +++ b/tests/intel/xe_exercise_blt.c
> > > @@ -280,7 +280,7 @@ static void fast_copy_test(int xe,
> > > region1 = igt_collection_get_value(regions, 0);
> > > region2 = igt_collection_get_value(regions, 1);
> > >
> > > - vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > > + vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > > exec_queue = xe_exec_queue_create(xe, vm, &inst, 0);
> > > ctx = intel_ctx_xe(xe, vm, exec_queue, 0, 0, 0);
> > >
> > > diff --git a/tests/intel/xe_guc_pc.c b/tests/intel/xe_guc_pc.c
> > > index 0327d8e0e..3f2c4ae23 100644
> > > --- a/tests/intel/xe_guc_pc.c
> > > +++ b/tests/intel/xe_guc_pc.c
> > > @@ -60,7 +60,7 @@ static void exec_basic(int fd, struct drm_xe_engine_class_instance *eci,
> > > igt_assert(n_exec_queues <= MAX_N_EXEC_QUEUES);
> > > igt_assert(n_execs > 0);
> > >
> > > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > > bo_size = sizeof(*data) * n_execs;
> > > bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> > > xe_get_default_alignment(fd));
> > > diff --git a/tests/intel/xe_huc_copy.c b/tests/intel/xe_huc_copy.c
> > > index c9891a729..c71ff74a1 100644
> > > --- a/tests/intel/xe_huc_copy.c
> > > +++ b/tests/intel/xe_huc_copy.c
> > > @@ -117,7 +117,7 @@ test_huc_copy(int fd)
> > > { .addr = ADDR_BATCH, .size = SIZE_BATCH }, // batch
> > > };
> > >
> > > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > > exec_queue = xe_exec_queue_create_class(fd, vm, DRM_XE_ENGINE_CLASS_VIDEO_DECODE);
> > > sync.flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL;
> > > sync.handle = syncobj_create(fd, 0);
> > > diff --git a/tests/intel/xe_intel_bb.c b/tests/intel/xe_intel_bb.c
> > > index 0159a3164..26e4dcc85 100644
> > > --- a/tests/intel/xe_intel_bb.c
> > > +++ b/tests/intel/xe_intel_bb.c
> > > @@ -191,7 +191,7 @@ static void simple_bb(struct buf_ops *bops, bool new_context)
> > > intel_bb_reset(ibb, true);
> > >
> > > if (new_context) {
> > > - vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > > + vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > > ctx = xe_exec_queue_create(xe, vm, xe_hw_engine(xe, 0), 0);
> > > intel_bb_destroy(ibb);
> > > ibb = intel_bb_create_with_context(xe, ctx, vm, NULL, PAGE_SIZE);
> > > diff --git a/tests/intel/xe_pm.c b/tests/intel/xe_pm.c
> > > index fd28d5630..b2976ec84 100644
> > > --- a/tests/intel/xe_pm.c
> > > +++ b/tests/intel/xe_pm.c
> > > @@ -259,7 +259,7 @@ test_exec(device_t device, struct drm_xe_engine_class_instance *eci,
> > > if (check_rpm)
> > > igt_assert(in_d3(device, d_state));
> > >
> > > - vm = xe_vm_create(device.fd_xe, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > > + vm = xe_vm_create(device.fd_xe, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > >
> > > if (check_rpm)
> > > igt_assert(out_of_d3(device, d_state));
> > > diff --git a/tests/intel/xe_vm.c b/tests/intel/xe_vm.c
> > > index 89df6149a..dd3302337 100644
> > > --- a/tests/intel/xe_vm.c
> > > +++ b/tests/intel/xe_vm.c
> > > @@ -275,7 +275,7 @@ static void unbind_all(int fd, int n_vmas)
> > > { .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> > > };
> > >
> > > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > > bo = xe_bo_create(fd, 0, vm, bo_size);
> > >
> > > for (i = 0; i < n_vmas; ++i)
> > > @@ -322,171 +322,6 @@ static void userptr_invalid(int fd)
> > > xe_vm_destroy(fd, vm);
> > > }
> > >
> > > -struct vm_thread_data {
> > > - pthread_t thread;
> > > - int fd;
> > > - int vm;
> > > - uint32_t bo;
> > > - size_t bo_size;
> > > - bool destroy;
> > > -};
> > > -
> > > -/**
> > > - * SUBTEST: vm-async-ops-err
> > > - * Description: Test VM async ops error
> > > - * Functionality: VM
> > > - * Test category: negative test
> > > - *
> > > - * SUBTEST: vm-async-ops-err-destroy
> > > - * Description: Test VM async ops error destroy
> > > - * Functionality: VM
> > > - * Test category: negative test
> > > - */
> > > -
> > > -static void *vm_async_ops_err_thread(void *data)
> > > -{
> > > - struct vm_thread_data *args = data;
> > > - int fd = args->fd;
> > > - uint64_t addr = 0x201a0000;
> > > - int num_binds = 0;
> > > - int ret;
> > > -
> > > - struct drm_xe_wait_user_fence wait = {
> > > - .vm_id = args->vm,
> > > - .op = DRM_XE_UFENCE_WAIT_NEQ,
> > > - .flags = DRM_XE_UFENCE_WAIT_VM_ERROR,
> > > - .mask = DRM_XE_UFENCE_WAIT_U32,
> > > - .timeout = MS_TO_NS(1000),
> > > - };
> > > -
> > > - igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_WAIT_USER_FENCE,
> > > - &wait), 0);
> > > - if (args->destroy) {
> > > - usleep(5000); /* Wait other binds to queue up */
> > > - xe_vm_destroy(fd, args->vm);
> > > - return NULL;
> > > - }
> > > -
> > > - while (!ret) {
> > > - struct drm_xe_vm_bind bind = {
> > > - .vm_id = args->vm,
> > > - .num_binds = 1,
> > > - .bind.op = XE_VM_BIND_OP_RESTART,
> > > - };
> > > -
> > > - /* VM sync ops should work */
> > > - if (!(num_binds++ % 2)) {
> > > - xe_vm_bind_sync(fd, args->vm, args->bo, 0, addr,
> > > - args->bo_size);
> > > - } else {
> > > - xe_vm_unbind_sync(fd, args->vm, 0, addr,
> > > - args->bo_size);
> > > - addr += args->bo_size * 2;
> > > - }
> > > -
> > > - /* Restart and wait for next error */
> > > - igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_VM_BIND,
> > > - &bind), 0);
> > > - ret = igt_ioctl(fd, DRM_IOCTL_XE_WAIT_USER_FENCE, &wait);
> > > - }
> > > -
> > > - return NULL;
> > > -}
> > > -
> > > -static void vm_async_ops_err(int fd, bool destroy)
> > > -{
> > > - uint32_t vm;
> > > - uint64_t addr = 0x1a0000;
> > > - struct drm_xe_sync sync = {
> > > - .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL,
> > > - };
> > > -#define N_BINDS 32
> > > - struct vm_thread_data thread = {};
> > > - uint32_t syncobjs[N_BINDS];
> > > - size_t bo_size = 0x1000 * 32;
> > > - uint32_t bo;
> > > - int i, j;
> > > -
> > > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > > - bo = xe_bo_create(fd, 0, vm, bo_size);
> > > -
> > > - thread.fd = fd;
> > > - thread.vm = vm;
> > > - thread.bo = bo;
> > > - thread.bo_size = bo_size;
> > > - thread.destroy = destroy;
> > > - pthread_create(&thread.thread, 0, vm_async_ops_err_thread, &thread);
> > > -
> > > - for (i = 0; i < N_BINDS; i++)
> > > - syncobjs[i] = syncobj_create(fd, 0);
> > > -
> > > - for (j = 0, i = 0; i < N_BINDS / 4; i++, j++) {
> > > - sync.handle = syncobjs[j];
> > > -#define INJECT_ERROR (0x1 << 31)
> > > - if (i == N_BINDS / 8) /* Inject error on this bind */
> > > - __xe_vm_bind_assert(fd, vm, 0, bo, 0,
> > > - addr + i * bo_size * 2,
> > > - bo_size, XE_VM_BIND_OP_MAP,
> > > - XE_VM_BIND_FLAG_ASYNC |
> > > - INJECT_ERROR, &sync, 1, 0, 0);
> > > - else
> > > - xe_vm_bind_async(fd, vm, 0, bo, 0,
> > > - addr + i * bo_size * 2,
> > > - bo_size, &sync, 1);
> > > - }
> > > -
> > > - for (i = 0; i < N_BINDS / 4; i++, j++) {
> > > - sync.handle = syncobjs[j];
> > > - if (i == N_BINDS / 8)
> > > - __xe_vm_bind_assert(fd, vm, 0, 0, 0,
> > > - addr + i * bo_size * 2,
> > > - bo_size, XE_VM_BIND_OP_UNMAP,
> > > - XE_VM_BIND_FLAG_ASYNC |
> > > - INJECT_ERROR, &sync, 1, 0, 0);
> > > - else
> > > - xe_vm_unbind_async(fd, vm, 0, 0,
> > > - addr + i * bo_size * 2,
> > > - bo_size, &sync, 1);
> > > - }
> > > -
> > > - for (i = 0; i < N_BINDS / 4; i++, j++) {
> > > - sync.handle = syncobjs[j];
> > > - if (i == N_BINDS / 8)
> > > - __xe_vm_bind_assert(fd, vm, 0, bo, 0,
> > > - addr + i * bo_size * 2,
> > > - bo_size, XE_VM_BIND_OP_MAP,
> > > - XE_VM_BIND_FLAG_ASYNC |
> > > - INJECT_ERROR, &sync, 1, 0, 0);
> > > - else
> > > - xe_vm_bind_async(fd, vm, 0, bo, 0,
> > > - addr + i * bo_size * 2,
> > > - bo_size, &sync, 1);
> > > - }
> > > -
> > > - for (i = 0; i < N_BINDS / 4; i++, j++) {
> > > - sync.handle = syncobjs[j];
> > > - if (i == N_BINDS / 8)
> > > - __xe_vm_bind_assert(fd, vm, 0, 0, 0,
> > > - addr + i * bo_size * 2,
> > > - bo_size, XE_VM_BIND_OP_UNMAP,
> > > - XE_VM_BIND_FLAG_ASYNC |
> > > - INJECT_ERROR, &sync, 1, 0, 0);
> > > - else
> > > - xe_vm_unbind_async(fd, vm, 0, 0,
> > > - addr + i * bo_size * 2,
> > > - bo_size, &sync, 1);
> > > - }
> > > -
> > > - for (i = 0; i < N_BINDS; i++)
> > > - igt_assert(syncobj_wait(fd, &syncobjs[i], 1, INT64_MAX, 0,
> > > - NULL));
> > > -
> > > - if (!destroy)
> > > - xe_vm_destroy(fd, vm);
> > > -
> > > - pthread_join(thread.thread, NULL);
> > > -}
> > > -
> > > /**
> > > * SUBTEST: shared-%s-page
> > > * Description: Test shared arg[1] page
> > > @@ -537,7 +372,7 @@ shared_pte_page(int fd, struct drm_xe_engine_class_instance *eci, int n_bo,
> > > data = malloc(sizeof(*data) * n_bo);
> > > igt_assert(data);
> > >
> > > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > > bo_size = sizeof(struct shared_pte_page_data);
> > > bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> > > xe_get_default_alignment(fd));
> > > @@ -718,7 +553,7 @@ test_bind_execqueues_independent(int fd, struct drm_xe_engine_class_instance *ec
> > > struct xe_spin_opts spin_opts = { .preempt = true };
> > > int i, b;
> > >
> > > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > > bo_size = sizeof(*data) * N_EXEC_QUEUES;
> > > bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> > > xe_get_default_alignment(fd));
> > > @@ -728,7 +563,7 @@ test_bind_execqueues_independent(int fd, struct drm_xe_engine_class_instance *ec
> > >
> > > for (i = 0; i < N_EXEC_QUEUES; i++) {
> > > exec_queues[i] = xe_exec_queue_create(fd, vm, eci, 0);
> > > - bind_exec_queues[i] = xe_bind_exec_queue_create(fd, vm, 0);
> > > + bind_exec_queues[i] = xe_bind_exec_queue_create(fd, vm, 0, true);
> > > syncobjs[i] = syncobj_create(fd, 0);
> > > }
> > > syncobjs[N_EXEC_QUEUES] = syncobj_create(fd, 0);
> > > @@ -898,7 +733,7 @@ test_bind_array(int fd, struct drm_xe_engine_class_instance *eci, int n_execs,
> > >
> > > igt_assert(n_execs <= BIND_ARRAY_MAX_N_EXEC);
> > >
> > > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > > bo_size = sizeof(*data) * n_execs;
> > > bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
> > > xe_get_default_alignment(fd));
> > > @@ -908,7 +743,7 @@ test_bind_array(int fd, struct drm_xe_engine_class_instance *eci, int n_execs,
> > > data = xe_bo_map(fd, bo, bo_size);
> > >
> > > if (flags & BIND_ARRAY_BIND_EXEC_QUEUE_FLAG)
> > > - bind_exec_queue = xe_bind_exec_queue_create(fd, vm, 0);
> > > + bind_exec_queue = xe_bind_exec_queue_create(fd, vm, 0, true);
> > > exec_queue = xe_exec_queue_create(fd, vm, eci, 0);
> > >
> > > for (i = 0; i < n_execs; ++i) {
> > > @@ -1092,7 +927,7 @@ test_large_binds(int fd, struct drm_xe_engine_class_instance *eci,
> > > }
> > >
> > > igt_assert(n_exec_queues <= MAX_N_EXEC_QUEUES);
> > > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > >
> > > if (flags & LARGE_BIND_FLAG_USERPTR) {
> > > map = aligned_alloc(xe_get_default_alignment(fd), bo_size);
> > > @@ -1384,7 +1219,7 @@ test_munmap_style_unbind(int fd, struct drm_xe_engine_class_instance *eci,
> > > unbind_n_page_offset *= n_page_per_2mb;
> > > }
> > >
> > > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > > bo_size = page_size * bo_n_pages;
> > >
> > > if (flags & MAP_FLAG_USERPTR) {
> > > @@ -1684,7 +1519,7 @@ test_mmap_style_bind(int fd, struct drm_xe_engine_class_instance *eci,
> > > unbind_n_page_offset *= n_page_per_2mb;
> > > }
> > >
> > > - vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
> > > + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > > bo_size = page_size * bo_n_pages;
> > >
> > > if (flags & MAP_FLAG_USERPTR) {
> > > @@ -2001,12 +1836,6 @@ igt_main
> > > igt_subtest("userptr-invalid")
> > > userptr_invalid(fd);
> > >
> > > - igt_subtest("vm-async-ops-err")
> > > - vm_async_ops_err(fd, false);
> > > -
> > > - igt_subtest("vm-async-ops-err-destroy")
> > > - vm_async_ops_err(fd, true);
> > > -
> > > igt_subtest("shared-pte-page")
> > > xe_for_each_hw_engine(fd, hwe)
> > > shared_pte_page(fd, hwe, 4,
> > > diff --git a/tests/intel/xe_waitfence.c b/tests/intel/xe_waitfence.c
> > > index 34005fbeb..e0116f181 100644
> > > --- a/tests/intel/xe_waitfence.c
> > > +++ b/tests/intel/xe_waitfence.c
> > > @@ -34,7 +34,7 @@ static void do_bind(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
> > >
> > > sync[0].addr = to_user_pointer(&wait_fence);
> > > sync[0].timeline_value = val;
> > > - xe_vm_bind(fd, vm, bo, offset, addr, size, sync, 1);
> > > + xe_vm_bind_async(fd, vm, 0, bo, offset, addr, size, sync, 1);
> > > }
> > >
> > > enum waittype {
> > > @@ -63,7 +63,7 @@ waitfence(int fd, enum waittype wt)
> > > uint32_t bo_7;
> > > int64_t timeout;
> > >
> > > - uint32_t vm = xe_vm_create(fd, 0, 0);
> > > + uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> > > bo_1 = xe_bo_create_flags(fd, vm, 0x40000, MY_FLAG);
> > > do_bind(fd, vm, bo_1, 0, 0x200000, 0x40000, 1);
> >
> >
> > Missing XE_VM_BIND_FLAG_ASYNC with the async vm... this and other tests here have a similar problem.
>
> It seems this flag is set in xe_vm_bind_async(), which is called from do_bind(). Without it, the
> test would fail.
yeah, missed that.
LGTM
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
>
> >
> >
> > > bo_2 = xe_bo_create_flags(fd, vm, 0x40000, MY_FLAG);
> > > @@ -96,21 +96,6 @@ waitfence(int fd, enum waittype wt)
> > > ", elapsed: %" PRId64 "\n",
> > > timeout, signalled, signalled - current);
> > > }
> > > -
> > > - xe_vm_unbind_sync(fd, vm, 0, 0x200000, 0x40000);
> > > - xe_vm_unbind_sync(fd, vm, 0, 0xc0000000, 0x40000);
> > > - xe_vm_unbind_sync(fd, vm, 0, 0x180000000, 0x40000);
> > > - xe_vm_unbind_sync(fd, vm, 0, 0x140000000, 0x10000);
> > > - xe_vm_unbind_sync(fd, vm, 0, 0x100000000, 0x100000);
> > > - xe_vm_unbind_sync(fd, vm, 0, 0xc0040000, 0x1c0000);
> > > - xe_vm_unbind_sync(fd, vm, 0, 0xeffff0000, 0x10000);
> > > - gem_close(fd, bo_7);
> > > - gem_close(fd, bo_6);
> > > - gem_close(fd, bo_5);
> > > - gem_close(fd, bo_4);
> > > - gem_close(fd, bo_3);
> > > - gem_close(fd, bo_2);
> > > - gem_close(fd, bo_1);
> >
> > unrelated change.
> >
> > > }
> > >
> > > igt_main
> >
* [igt-dev] [PATCH v4 11/14] drm-uapi/xe: Replace useless 'instance' per unique gt_id
2023-09-28 11:05 [igt-dev] [PATCH v4 00/14] uAPI Alignment - take 1 v4 Francois Dugast
` (9 preceding siblings ...)
2023-09-28 11:05 ` [igt-dev] [PATCH v4 10/14] xe: Update to new VM bind uAPI Francois Dugast
@ 2023-09-28 11:05 ` Francois Dugast
2023-09-28 12:00 ` Francois Dugast
2023-09-28 11:05 ` [igt-dev] [PATCH v4 12/14] drm-uapi/xe: Remove unused field of drm_xe_query_gt Francois Dugast
` (5 subsequent siblings)
16 siblings, 1 reply; 31+ messages in thread
From: Francois Dugast @ 2023-09-28 11:05 UTC (permalink / raw)
To: igt-dev; +Cc: Rodrigo Vivi
From: Rodrigo Vivi <rodrigo.vivi@intel.com>
Align with commit ("drm/xe/uapi: Replace useless 'instance' per unique gt_id")
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
include/drm-uapi/xe_drm.h | 65 ++++++++++++++++++++++++++-------------
tests/intel/xe_query.c | 2 +-
2 files changed, 44 insertions(+), 23 deletions(-)
diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index 80b4c76f3..e6879a6be 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -334,6 +334,47 @@ struct drm_xe_query_config {
__u64 info[];
};
+/**
+ * struct drm_xe_query_gt - describe an individual GT.
+ *
+ * To be used with drm_xe_query_gts, which will return a list with all the
+ * existing GT individual descriptions.
+ * Graphics Technology (GT) is a subset of a GPU/tile that is responsible for
+ * implementing graphics and/or media operations.
+ */
+struct drm_xe_query_gt {
+#define XE_QUERY_GT_TYPE_MAIN 0
+#define XE_QUERY_GT_TYPE_REMOTE 1
+#define XE_QUERY_GT_TYPE_MEDIA 2
+ /** @type: GT type: Main, Remote, or Media */
+ __u16 type;
+ /** @gt_id: Unique ID of this GT within the PCI Device */
+ __u16 gt_id;
+ /** @clock_freq: A clock frequency for timestamp */
+ __u32 clock_freq;
+ /** @features: Reserved for future information about GT features */
+ __u64 features;
+ /**
+ * @native_mem_regions: Bit mask of instances from
+ * drm_xe_query_mem_usage that lives on the same GPU/Tile and have
+ * direct access.
+ */
+ __u64 native_mem_regions;
+ /**
+ * @slow_mem_regions: Bit mask of instances from
+ * drm_xe_query_mem_usage that this GT can indirectly access, although
+ * they live on a different GPU/Tile.
+ */
+ __u64 slow_mem_regions;
+ /**
+ * @inaccessible_mem_regions: Bit mask of instances from
+ * drm_xe_query_mem_usage that is not accessible by this GT at all.
+ */
+ __u64 inaccessible_mem_regions;
+ /** @reserved: Reserved */
+ __u64 reserved[8];
+};
+
/**
* struct drm_xe_query_gts - describe GTs
*
@@ -344,30 +385,10 @@ struct drm_xe_query_config {
struct drm_xe_query_gts {
/** @num_gt: number of GTs returned in gts */
__u32 num_gt;
-
/** @pad: MBZ */
__u32 pad;
-
- /**
- * @gts: The GTs returned for this device
- *
- * TODO: convert drm_xe_query_gt to proper kernel-doc.
- * TODO: Perhaps info about every mem region relative to this GT? e.g.
- * bandwidth between this GT and remote region?
- */
- struct drm_xe_query_gt {
-#define XE_QUERY_GT_TYPE_MAIN 0
-#define XE_QUERY_GT_TYPE_REMOTE 1
-#define XE_QUERY_GT_TYPE_MEDIA 2
- __u16 type;
- __u16 instance;
- __u32 clock_freq;
- __u64 features;
- __u64 native_mem_regions; /* bit mask of instances from drm_xe_query_mem_usage */
- __u64 slow_mem_regions; /* bit mask of instances from drm_xe_query_mem_usage */
- __u64 inaccessible_mem_regions; /* bit mask of instances from drm_xe_query_mem_usage */
- __u64 reserved[8];
- } gts[];
+ /** @gts: The GT list returned for this device */
+ struct drm_xe_query_gt gts[];
};
/**
diff --git a/tests/intel/xe_query.c b/tests/intel/xe_query.c
index 3e7460ff4..f1ae1bf40 100644
--- a/tests/intel/xe_query.c
+++ b/tests/intel/xe_query.c
@@ -279,7 +279,7 @@ test_query_gts(int fd)
for (i = 0; i < gts->num_gt; i++) {
igt_info("type: %d\n", gts->gts[i].type);
- igt_info("instance: %d\n", gts->gts[i].instance);
+ igt_info("gt_id: %d\n", gts->gts[i].gt_id);
igt_info("clock_freq: %u\n", gts->gts[i].clock_freq);
igt_info("features: 0x%016llx\n", gts->gts[i].features);
igt_info("native_mem_regions: 0x%016llx\n",
--
2.34.1
* Re: [igt-dev] [PATCH v4 11/14] drm-uapi/xe: Replace useless 'instance' per unique gt_id
2023-09-28 11:05 ` [igt-dev] [PATCH v4 11/14] drm-uapi/xe: Replace useless 'instance' per unique gt_id Francois Dugast
@ 2023-09-28 12:00 ` Francois Dugast
0 siblings, 0 replies; 31+ messages in thread
From: Francois Dugast @ 2023-09-28 12:00 UTC (permalink / raw)
To: igt-dev; +Cc: Rodrigo Vivi
On Thu, Sep 28, 2023 at 11:05:13AM +0000, Francois Dugast wrote:
> From: Rodrigo Vivi <rodrigo.vivi@intel.com>
>
> Align with commit ("drm/xe/uapi: Replace useless 'instance' per unique gt_id")
>
> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> Signed-off-by: Francois Dugast <francois.dugast@intel.com>
Reviewed-by: Francois Dugast <francois.dugast@intel.com>
> ---
> include/drm-uapi/xe_drm.h | 65 ++++++++++++++++++++++++++-------------
> tests/intel/xe_query.c | 2 +-
> 2 files changed, 44 insertions(+), 23 deletions(-)
>
> diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
> index 80b4c76f3..e6879a6be 100644
> --- a/include/drm-uapi/xe_drm.h
> +++ b/include/drm-uapi/xe_drm.h
> @@ -334,6 +334,47 @@ struct drm_xe_query_config {
> __u64 info[];
> };
>
> +/**
> + * struct drm_xe_query_gt - describe an individual GT.
> + *
> + * To be used with drm_xe_query_gts, which will return a list with all the
> + * existing GT individual descriptions.
> + * Graphics Technology (GT) is a subset of a GPU/tile that is responsible for
> + * implementing graphics and/or media operations.
> + */
> +struct drm_xe_query_gt {
> +#define XE_QUERY_GT_TYPE_MAIN 0
> +#define XE_QUERY_GT_TYPE_REMOTE 1
> +#define XE_QUERY_GT_TYPE_MEDIA 2
> + /** @type: GT type: Main, Remote, or Media */
> + __u16 type;
> + /** @gt_id: Unique ID of this GT within the PCI Device */
> + __u16 gt_id;
> + /** @clock_freq: A clock frequency for timestamp */
> + __u32 clock_freq;
> + /** @features: Reserved for future information about GT features */
> + __u64 features;
> + /**
> + * @native_mem_regions: Bit mask of instances from
> + * drm_xe_query_mem_usage that lives on the same GPU/Tile and have
> + * direct access.
> + */
> + __u64 native_mem_regions;
> + /**
> + * @slow_mem_regions: Bit mask of instances from
> + * drm_xe_query_mem_usage that this GT can indirectly access, although
> + * they live on a different GPU/Tile.
> + */
> + __u64 slow_mem_regions;
> + /**
> + * @inaccessible_mem_regions: Bit mask of instances from
> + * drm_xe_query_mem_usage that is not accessible by this GT at all.
> + */
> + __u64 inaccessible_mem_regions;
> + /** @reserved: Reserved */
> + __u64 reserved[8];
> +};
> +
> /**
> * struct drm_xe_query_gts - describe GTs
> *
> @@ -344,30 +385,10 @@ struct drm_xe_query_config {
> struct drm_xe_query_gts {
> /** @num_gt: number of GTs returned in gts */
> __u32 num_gt;
> -
> /** @pad: MBZ */
> __u32 pad;
> -
> - /**
> - * @gts: The GTs returned for this device
> - *
> - * TODO: convert drm_xe_query_gt to proper kernel-doc.
> - * TODO: Perhaps info about every mem region relative to this GT? e.g.
> - * bandwidth between this GT and remote region?
> - */
> - struct drm_xe_query_gt {
> -#define XE_QUERY_GT_TYPE_MAIN 0
> -#define XE_QUERY_GT_TYPE_REMOTE 1
> -#define XE_QUERY_GT_TYPE_MEDIA 2
> - __u16 type;
> - __u16 instance;
> - __u32 clock_freq;
> - __u64 features;
> - __u64 native_mem_regions; /* bit mask of instances from drm_xe_query_mem_usage */
> - __u64 slow_mem_regions; /* bit mask of instances from drm_xe_query_mem_usage */
> - __u64 inaccessible_mem_regions; /* bit mask of instances from drm_xe_query_mem_usage */
> - __u64 reserved[8];
> - } gts[];
> + /** @gts: The GT list returned for this device */
> + struct drm_xe_query_gt gts[];
> };
>
> /**
> diff --git a/tests/intel/xe_query.c b/tests/intel/xe_query.c
> index 3e7460ff4..f1ae1bf40 100644
> --- a/tests/intel/xe_query.c
> +++ b/tests/intel/xe_query.c
> @@ -279,7 +279,7 @@ test_query_gts(int fd)
>
> for (i = 0; i < gts->num_gt; i++) {
> igt_info("type: %d\n", gts->gts[i].type);
> - igt_info("instance: %d\n", gts->gts[i].instance);
> + igt_info("gt_id: %d\n", gts->gts[i].gt_id);
> igt_info("clock_freq: %u\n", gts->gts[i].clock_freq);
> igt_info("features: 0x%016llx\n", gts->gts[i].features);
> igt_info("native_mem_regions: 0x%016llx\n",
> --
> 2.34.1
>
* [igt-dev] [PATCH v4 12/14] drm-uapi/xe: Remove unused field of drm_xe_query_gt
2023-09-28 11:05 [igt-dev] [PATCH v4 00/14] uAPI Alignment - take 1 v4 Francois Dugast
` (10 preceding siblings ...)
2023-09-28 11:05 ` [igt-dev] [PATCH v4 11/14] drm-uapi/xe: Replace useless 'instance' per unique gt_id Francois Dugast
@ 2023-09-28 11:05 ` Francois Dugast
2023-09-28 11:25 ` Francois Dugast
2023-09-28 11:05 ` [igt-dev] [PATCH v4 13/14] drm-uapi/xe: Rename gts to gt_list Francois Dugast
` (4 subsequent siblings)
16 siblings, 1 reply; 31+ messages in thread
From: Francois Dugast @ 2023-09-28 11:05 UTC (permalink / raw)
To: igt-dev; +Cc: Rodrigo Vivi
From: Rodrigo Vivi <rodrigo.vivi@intel.com>
Align with commit ("drm/xe/uapi: Remove unused field of drm_xe_query_gt")
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
include/drm-uapi/xe_drm.h | 2 --
tests/intel/xe_query.c | 1 -
2 files changed, 3 deletions(-)
diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index e6879a6be..2103dae40 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -352,8 +352,6 @@ struct drm_xe_query_gt {
__u16 gt_id;
/** @clock_freq: A clock frequency for timestamp */
__u32 clock_freq;
- /** @features: Reserved for future information about GT features */
- __u64 features;
/**
* @native_mem_regions: Bit mask of instances from
* drm_xe_query_mem_usage that lives on the same GPU/Tile and have
diff --git a/tests/intel/xe_query.c b/tests/intel/xe_query.c
index f1ae1bf40..2b8edf5ec 100644
--- a/tests/intel/xe_query.c
+++ b/tests/intel/xe_query.c
@@ -281,7 +281,6 @@ test_query_gts(int fd)
igt_info("type: %d\n", gts->gts[i].type);
igt_info("gt_id: %d\n", gts->gts[i].gt_id);
igt_info("clock_freq: %u\n", gts->gts[i].clock_freq);
- igt_info("features: 0x%016llx\n", gts->gts[i].features);
igt_info("native_mem_regions: 0x%016llx\n",
gts->gts[i].native_mem_regions);
igt_info("slow_mem_regions: 0x%016llx\n",
--
2.34.1
* Re: [igt-dev] [PATCH v4 12/14] drm-uapi/xe: Remove unused field of drm_xe_query_gt
2023-09-28 11:05 ` [igt-dev] [PATCH v4 12/14] drm-uapi/xe: Remove unused field of drm_xe_query_gt Francois Dugast
@ 2023-09-28 11:25 ` Francois Dugast
0 siblings, 0 replies; 31+ messages in thread
From: Francois Dugast @ 2023-09-28 11:25 UTC (permalink / raw)
To: igt-dev; +Cc: Rodrigo Vivi
On Thu, Sep 28, 2023 at 11:05:14AM +0000, Francois Dugast wrote:
> From: Rodrigo Vivi <rodrigo.vivi@intel.com>
>
> Align with commit ("drm/xe/uapi: Remove unused field of drm_xe_query_gt")
>
> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> Signed-off-by: Francois Dugast <francois.dugast@intel.com>
Reviewed-by: Francois Dugast <francois.dugast@intel.com>
> ---
> include/drm-uapi/xe_drm.h | 2 --
> tests/intel/xe_query.c | 1 -
> 2 files changed, 3 deletions(-)
>
> diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
> index e6879a6be..2103dae40 100644
> --- a/include/drm-uapi/xe_drm.h
> +++ b/include/drm-uapi/xe_drm.h
> @@ -352,8 +352,6 @@ struct drm_xe_query_gt {
> __u16 gt_id;
> /** @clock_freq: A clock frequency for timestamp */
> __u32 clock_freq;
> - /** @features: Reserved for future information about GT features */
> - __u64 features;
> /**
> * @native_mem_regions: Bit mask of instances from
> * drm_xe_query_mem_usage that lives on the same GPU/Tile and have
> diff --git a/tests/intel/xe_query.c b/tests/intel/xe_query.c
> index f1ae1bf40..2b8edf5ec 100644
> --- a/tests/intel/xe_query.c
> +++ b/tests/intel/xe_query.c
> @@ -281,7 +281,6 @@ test_query_gts(int fd)
> igt_info("type: %d\n", gts->gts[i].type);
> igt_info("gt_id: %d\n", gts->gts[i].gt_id);
> igt_info("clock_freq: %u\n", gts->gts[i].clock_freq);
> - igt_info("features: 0x%016llx\n", gts->gts[i].features);
> igt_info("native_mem_regions: 0x%016llx\n",
> gts->gts[i].native_mem_regions);
> igt_info("slow_mem_regions: 0x%016llx\n",
> --
> 2.34.1
>
* [igt-dev] [PATCH v4 13/14] drm-uapi/xe: Rename gts to gt_list
2023-09-28 11:05 [igt-dev] [PATCH v4 00/14] uAPI Alignment - take 1 v4 Francois Dugast
` (11 preceding siblings ...)
2023-09-28 11:05 ` [igt-dev] [PATCH v4 12/14] drm-uapi/xe: Remove unused field of drm_xe_query_gt Francois Dugast
@ 2023-09-28 11:05 ` Francois Dugast
2023-09-28 12:07 ` Francois Dugast
2023-09-28 11:05 ` [igt-dev] [PATCH v4 14/14] drm-uapi/xe: Fix naming of XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY Francois Dugast
` (3 subsequent siblings)
16 siblings, 1 reply; 31+ messages in thread
From: Francois Dugast @ 2023-09-28 11:05 UTC (permalink / raw)
To: igt-dev; +Cc: Rodrigo Vivi
From: Rodrigo Vivi <rodrigo.vivi@intel.com>
Align with commit ("drm/xe/uapi: Rename gts to gt_list")
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
include/drm-uapi/xe_drm.h | 18 ++++----
lib/xe/xe_query.c | 52 ++++++++++++------------
lib/xe/xe_query.h | 10 ++---
lib/xe/xe_spin.c | 6 +--
tests/intel-ci/xe-fast-feedback.testlist | 2 +-
tests/intel/xe_query.c | 36 ++++++++--------
6 files changed, 62 insertions(+), 62 deletions(-)
diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index 2103dae40..652879eb2 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -337,7 +337,7 @@ struct drm_xe_query_config {
/**
* struct drm_xe_query_gt - describe an individual GT.
*
- * To be used with drm_xe_query_gts, which will return a list with all the
+ * To be used with drm_xe_query_gt_list, which will return a list with all the
* existing GT individual descriptions.
* Graphics Technology (GT) is a subset of a GPU/tile that is responsible for
* implementing graphics and/or media operations.
@@ -374,19 +374,19 @@ struct drm_xe_query_gt {
};
/**
- * struct drm_xe_query_gts - describe GTs
+ * struct drm_xe_query_gt_list - A list with GT description items.
*
* If a query is made with a struct drm_xe_device_query where .query
- * is equal to DRM_XE_DEVICE_QUERY_GTS, then the reply uses struct
- * drm_xe_query_gts in .data.
+ * is equal to DRM_XE_DEVICE_QUERY_GT_LIST, then the reply uses struct
+ * drm_xe_query_gt_list in .data.
*/
-struct drm_xe_query_gts {
- /** @num_gt: number of GTs returned in gts */
+struct drm_xe_query_gt_list {
+ /** @num_gt: number of GT items returned in gt_list */
__u32 num_gt;
/** @pad: MBZ */
__u32 pad;
- /** @gts: The GT list returned for this device */
- struct drm_xe_query_gt gts[];
+ /** @gt_list: The GT list returned for this device */
+ struct drm_xe_query_gt gt_list[];
};
/**
@@ -479,7 +479,7 @@ struct drm_xe_device_query {
#define DRM_XE_DEVICE_QUERY_ENGINES 0
#define DRM_XE_DEVICE_QUERY_MEM_USAGE 1
#define DRM_XE_DEVICE_QUERY_CONFIG 2
-#define DRM_XE_DEVICE_QUERY_GTS 3
+#define DRM_XE_DEVICE_QUERY_GT_LIST 3
#define DRM_XE_DEVICE_QUERY_HWCONFIG 4
#define DRM_XE_DEVICE_QUERY_GT_TOPOLOGY 5
#define DRM_XE_DEVICE_QUERY_ENGINE_CYCLES 6
diff --git a/lib/xe/xe_query.c b/lib/xe/xe_query.c
index ab7b31188..986a3a0c1 100644
--- a/lib/xe/xe_query.c
+++ b/lib/xe/xe_query.c
@@ -39,35 +39,35 @@ static struct drm_xe_query_config *xe_query_config_new(int fd)
return config;
}
-static struct drm_xe_query_gts *xe_query_gts_new(int fd)
+static struct drm_xe_query_gt_list *xe_query_gt_list_new(int fd)
{
- struct drm_xe_query_gts *gts;
+ struct drm_xe_query_gt_list *gt_list;
struct drm_xe_device_query query = {
.extensions = 0,
- .query = DRM_XE_DEVICE_QUERY_GTS,
+ .query = DRM_XE_DEVICE_QUERY_GT_LIST,
.size = 0,
.data = 0,
};
igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
- gts = malloc(query.size);
- igt_assert(gts);
+ gt_list = malloc(query.size);
+ igt_assert(gt_list);
- query.data = to_user_pointer(gts);
+ query.data = to_user_pointer(gt_list);
igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
- return gts;
+ return gt_list;
}
-static uint64_t __memory_regions(const struct drm_xe_query_gts *gts)
+static uint64_t __memory_regions(const struct drm_xe_query_gt_list *gt_list)
{
uint64_t regions = 0;
int i;
- for (i = 0; i < gts->num_gt; i++)
- regions |= gts->gts[i].native_mem_regions |
- gts->gts[i].slow_mem_regions;
+ for (i = 0; i < gt_list->num_gt; i++)
+ regions |= gt_list->gt_list[i].native_mem_regions |
+ gt_list->gt_list[i].slow_mem_regions;
return regions;
}
@@ -118,21 +118,21 @@ static struct drm_xe_query_mem_usage *xe_query_mem_usage_new(int fd)
return mem_usage;
}
-static uint64_t native_region_for_gt(const struct drm_xe_query_gts *gts, int gt)
+static uint64_t native_region_for_gt(const struct drm_xe_query_gt_list *gt_list, int gt)
{
uint64_t region;
- igt_assert(gts->num_gt > gt);
- region = gts->gts[gt].native_mem_regions;
+ igt_assert(gt_list->num_gt > gt);
+ region = gt_list->gt_list[gt].native_mem_regions;
igt_assert(region);
return region;
}
static uint64_t gt_vram_size(const struct drm_xe_query_mem_usage *mem_usage,
- const struct drm_xe_query_gts *gts, int gt)
+ const struct drm_xe_query_gt_list *gt_list, int gt)
{
- int region_idx = ffs(native_region_for_gt(gts, gt)) - 1;
+ int region_idx = ffs(native_region_for_gt(gt_list, gt)) - 1;
if (XE_IS_CLASS_VRAM(&mem_usage->regions[region_idx]))
return mem_usage->regions[region_idx].total_size;
@@ -141,9 +141,9 @@ static uint64_t gt_vram_size(const struct drm_xe_query_mem_usage *mem_usage,
}
static uint64_t gt_visible_vram_size(const struct drm_xe_query_mem_usage *mem_usage,
- const struct drm_xe_query_gts *gts, int gt)
+ const struct drm_xe_query_gt_list *gt_list, int gt)
{
- int region_idx = ffs(native_region_for_gt(gts, gt)) - 1;
+ int region_idx = ffs(native_region_for_gt(gt_list, gt)) - 1;
if (XE_IS_CLASS_VRAM(&mem_usage->regions[region_idx]))
return mem_usage->regions[region_idx].cpu_visible_size;
@@ -220,7 +220,7 @@ static struct xe_device *find_in_cache(int fd)
static void xe_device_free(struct xe_device *xe_dev)
{
free(xe_dev->config);
- free(xe_dev->gts);
+ free(xe_dev->gt_list);
free(xe_dev->hw_engines);
free(xe_dev->mem_usage);
free(xe_dev->vram_size);
@@ -252,18 +252,18 @@ struct xe_device *xe_device_get(int fd)
xe_dev->number_gt = xe_dev->config->info[XE_QUERY_CONFIG_GT_COUNT];
xe_dev->va_bits = xe_dev->config->info[XE_QUERY_CONFIG_VA_BITS];
xe_dev->dev_id = xe_dev->config->info[XE_QUERY_CONFIG_REV_AND_DEVICE_ID] & 0xffff;
- xe_dev->gts = xe_query_gts_new(fd);
- xe_dev->memory_regions = __memory_regions(xe_dev->gts);
+ xe_dev->gt_list = xe_query_gt_list_new(fd);
+ xe_dev->memory_regions = __memory_regions(xe_dev->gt_list);
xe_dev->hw_engines = xe_query_engines_new(fd, &xe_dev->number_hw_engines);
xe_dev->mem_usage = xe_query_mem_usage_new(fd);
xe_dev->vram_size = calloc(xe_dev->number_gt, sizeof(*xe_dev->vram_size));
xe_dev->visible_vram_size = calloc(xe_dev->number_gt, sizeof(*xe_dev->visible_vram_size));
for (int gt = 0; gt < xe_dev->number_gt; gt++) {
xe_dev->vram_size[gt] = gt_vram_size(xe_dev->mem_usage,
- xe_dev->gts, gt);
+ xe_dev->gt_list, gt);
xe_dev->visible_vram_size[gt] =
gt_visible_vram_size(xe_dev->mem_usage,
- xe_dev->gts, gt);
+ xe_dev->gt_list, gt);
}
xe_dev->default_alignment = __mem_default_alignment(xe_dev->mem_usage);
xe_dev->has_vram = __mem_has_vram(xe_dev->mem_usage);
@@ -356,7 +356,7 @@ _TYPE _NAME(int fd) \
* xe_number_gt:
* @fd: xe device fd
*
- * Return number of gts for xe device fd.
+ * Return number of gt_list for xe device fd.
*/
xe_dev_FN(xe_number_gt, number_gt, unsigned int);
@@ -396,7 +396,7 @@ uint64_t vram_memory(int fd, int gt)
igt_assert(xe_dev);
igt_assert(gt >= 0 && gt < xe_dev->number_gt);
- return xe_has_vram(fd) ? native_region_for_gt(xe_dev->gts, gt) : 0;
+ return xe_has_vram(fd) ? native_region_for_gt(xe_dev->gt_list, gt) : 0;
}
static uint64_t __xe_visible_vram_size(int fd, int gt)
@@ -647,7 +647,7 @@ uint64_t xe_vram_available(int fd, int gt)
xe_dev = find_in_cache(fd);
igt_assert(xe_dev);
- region_idx = ffs(native_region_for_gt(xe_dev->gts, gt)) - 1;
+ region_idx = ffs(native_region_for_gt(xe_dev->gt_list, gt)) - 1;
mem_region = &xe_dev->mem_usage->regions[region_idx];
if (XE_IS_CLASS_VRAM(mem_region)) {
diff --git a/lib/xe/xe_query.h b/lib/xe/xe_query.h
index 20dbfa12c..da7deaf4c 100644
--- a/lib/xe/xe_query.h
+++ b/lib/xe/xe_query.h
@@ -26,13 +26,13 @@ struct xe_device {
/** @config: xe configuration */
struct drm_xe_query_config *config;
- /** @gts: gt info */
- struct drm_xe_query_gts *gts;
+ /** @gt_list: gt info */
+ struct drm_xe_query_gt_list *gt_list;
/** @number_gt: number of gt */
unsigned int number_gt;
- /** @gts: bitmask of all memory regions */
+ /** @gt_list: bitmask of all memory regions */
uint64_t memory_regions;
/** @hw_engines: array of hardware engines */
@@ -44,10 +44,10 @@ struct xe_device {
/** @mem_usage: regions memory information and usage */
struct drm_xe_query_mem_usage *mem_usage;
- /** @vram_size: array of vram sizes for all gts */
+ /** @vram_size: array of vram sizes for all gt_list */
uint64_t *vram_size;
- /** @visible_vram_size: array of visible vram sizes for all gts */
+ /** @visible_vram_size: array of visible vram sizes for all gt_list */
uint64_t *visible_vram_size;
/** @default_alignment: safe alignment regardless region location */
diff --git a/lib/xe/xe_spin.c b/lib/xe/xe_spin.c
index f0d77aed3..b05b38829 100644
--- a/lib/xe/xe_spin.c
+++ b/lib/xe/xe_spin.c
@@ -20,10 +20,10 @@ static uint32_t read_timestamp_frequency(int fd, int gt_id)
{
struct xe_device *dev = xe_device_get(fd);
- igt_assert(dev && dev->gts && dev->gts->num_gt);
- igt_assert(gt_id >= 0 && gt_id <= dev->gts->num_gt);
+ igt_assert(dev && dev->gt_list && dev->gt_list->num_gt);
+ igt_assert(gt_id >= 0 && gt_id <= dev->gt_list->num_gt);
- return dev->gts->gts[gt_id].clock_freq;
+ return dev->gt_list->gt_list[gt_id].clock_freq;
}
static uint64_t div64_u64_round_up(const uint64_t x, const uint64_t y)
diff --git a/tests/intel-ci/xe-fast-feedback.testlist b/tests/intel-ci/xe-fast-feedback.testlist
index a9fe43b08..0cf28baf9 100644
--- a/tests/intel-ci/xe-fast-feedback.testlist
+++ b/tests/intel-ci/xe-fast-feedback.testlist
@@ -147,7 +147,7 @@ igt@xe_prime_self_import@basic-with_fd_dup
#igt@xe_prime_self_import@basic-llseek-size
igt@xe_query@query-engines
igt@xe_query@query-mem-usage
-igt@xe_query@query-gts
+igt@xe_query@query-gt-list
igt@xe_query@query-config
igt@xe_query@query-hwconfig
igt@xe_query@query-topology
diff --git a/tests/intel/xe_query.c b/tests/intel/xe_query.c
index 2b8edf5ec..30fc367c8 100644
--- a/tests/intel/xe_query.c
+++ b/tests/intel/xe_query.c
@@ -252,17 +252,17 @@ test_query_mem_usage(int fd)
}
/**
- * SUBTEST: query-gts
+ * SUBTEST: query-gt-list
* Test category: functionality test
- * Description: Display information about available GTs for xe device.
+ * Description: Display information about available GT components for xe device.
*/
static void
-test_query_gts(int fd)
+test_query_gt_list(int fd)
{
- struct drm_xe_query_gts *gts;
+ struct drm_xe_query_gt_list *gt_list;
struct drm_xe_device_query query = {
.extensions = 0,
- .query = DRM_XE_DEVICE_QUERY_GTS,
+ .query = DRM_XE_DEVICE_QUERY_GT_LIST,
.size = 0,
.data = 0,
};
@@ -271,29 +271,29 @@ test_query_gts(int fd)
igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
igt_assert_neq(query.size, 0);
- gts = malloc(query.size);
- igt_assert(gts);
+ gt_list = malloc(query.size);
+ igt_assert(gt_list);
- query.data = to_user_pointer(gts);
+ query.data = to_user_pointer(gt_list);
igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
- for (i = 0; i < gts->num_gt; i++) {
- igt_info("type: %d\n", gts->gts[i].type);
- igt_info("gt_id: %d\n", gts->gts[i].gt_id);
- igt_info("clock_freq: %u\n", gts->gts[i].clock_freq);
+ for (i = 0; i < gt_list->num_gt; i++) {
+ igt_info("type: %d\n", gt_list->gt_list[i].type);
+ igt_info("gt_id: %d\n", gt_list->gt_list[i].gt_id);
+ igt_info("clock_freq: %u\n", gt_list->gt_list[i].clock_freq);
igt_info("native_mem_regions: 0x%016llx\n",
- gts->gts[i].native_mem_regions);
+ gt_list->gt_list[i].native_mem_regions);
igt_info("slow_mem_regions: 0x%016llx\n",
- gts->gts[i].slow_mem_regions);
+ gt_list->gt_list[i].slow_mem_regions);
igt_info("inaccessible_mem_regions: 0x%016llx\n",
- gts->gts[i].inaccessible_mem_regions);
+ gt_list->gt_list[i].inaccessible_mem_regions);
}
}
/**
* SUBTEST: query-topology
* Test category: functionality test
- * Description: Display topology information of GTs.
+ * Description: Display topology information of GT.
*/
static void
test_query_gt_topology(int fd)
@@ -677,8 +677,8 @@ igt_main
igt_subtest("query-mem-usage")
test_query_mem_usage(xe);
- igt_subtest("query-gts")
- test_query_gts(xe);
+ igt_subtest("query-gt-list")
+ test_query_gt_list(xe);
igt_subtest("query-config")
test_query_config(xe);
--
2.34.1
* Re: [igt-dev] [PATCH v4 13/14] drm-uapi/xe: Rename gts to gt_list
2023-09-28 11:05 ` [igt-dev] [PATCH v4 13/14] drm-uapi/xe: Rename gts to gt_list Francois Dugast
@ 2023-09-28 12:07 ` Francois Dugast
0 siblings, 0 replies; 31+ messages in thread
From: Francois Dugast @ 2023-09-28 12:07 UTC (permalink / raw)
To: igt-dev; +Cc: Rodrigo Vivi
On Thu, Sep 28, 2023 at 11:05:15AM +0000, Francois Dugast wrote:
> From: Rodrigo Vivi <rodrigo.vivi@intel.com>
>
> Align with commit ("drm/xe/uapi: Rename gts to gt_list")
>
> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> Signed-off-by: Francois Dugast <francois.dugast@intel.com>
Reviewed-by: Francois Dugast <francois.dugast@intel.com>
> ---
> include/drm-uapi/xe_drm.h | 18 ++++----
> lib/xe/xe_query.c | 52 ++++++++++++------------
> lib/xe/xe_query.h | 10 ++---
> lib/xe/xe_spin.c | 6 +--
> tests/intel-ci/xe-fast-feedback.testlist | 2 +-
> tests/intel/xe_query.c | 36 ++++++++--------
> 6 files changed, 62 insertions(+), 62 deletions(-)
>
> diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
> index 2103dae40..652879eb2 100644
> --- a/include/drm-uapi/xe_drm.h
> +++ b/include/drm-uapi/xe_drm.h
> @@ -337,7 +337,7 @@ struct drm_xe_query_config {
> /**
> * struct drm_xe_query_gt - describe an individual GT.
> *
> - * To be used with drm_xe_query_gts, which will return a list with all the
> + * To be used with drm_xe_query_gt_list, which will return a list with all the
> * existing GT individual descriptions.
> * Graphics Technology (GT) is a subset of a GPU/tile that is responsible for
> * implementing graphics and/or media operations.
> @@ -374,19 +374,19 @@ struct drm_xe_query_gt {
> };
>
> /**
> - * struct drm_xe_query_gts - describe GTs
> + * struct drm_xe_query_gt_list - A list with GT description items.
> *
> * If a query is made with a struct drm_xe_device_query where .query
> - * is equal to DRM_XE_DEVICE_QUERY_GTS, then the reply uses struct
> - * drm_xe_query_gts in .data.
> + * is equal to DRM_XE_DEVICE_QUERY_GT_LIST, then the reply uses struct
> + * drm_xe_query_gt_list in .data.
> */
> -struct drm_xe_query_gts {
> - /** @num_gt: number of GTs returned in gts */
> +struct drm_xe_query_gt_list {
> + /** @num_gt: number of GT items returned in gt_list */
> __u32 num_gt;
> /** @pad: MBZ */
> __u32 pad;
> - /** @gts: The GT list returned for this device */
> - struct drm_xe_query_gt gts[];
> + /** @gt_list: The GT list returned for this device */
> + struct drm_xe_query_gt gt_list[];
> };
>
> /**
> @@ -479,7 +479,7 @@ struct drm_xe_device_query {
> #define DRM_XE_DEVICE_QUERY_ENGINES 0
> #define DRM_XE_DEVICE_QUERY_MEM_USAGE 1
> #define DRM_XE_DEVICE_QUERY_CONFIG 2
> -#define DRM_XE_DEVICE_QUERY_GTS 3
> +#define DRM_XE_DEVICE_QUERY_GT_LIST 3
> #define DRM_XE_DEVICE_QUERY_HWCONFIG 4
> #define DRM_XE_DEVICE_QUERY_GT_TOPOLOGY 5
> #define DRM_XE_DEVICE_QUERY_ENGINE_CYCLES 6
> diff --git a/lib/xe/xe_query.c b/lib/xe/xe_query.c
> index ab7b31188..986a3a0c1 100644
> --- a/lib/xe/xe_query.c
> +++ b/lib/xe/xe_query.c
> @@ -39,35 +39,35 @@ static struct drm_xe_query_config *xe_query_config_new(int fd)
> return config;
> }
>
> -static struct drm_xe_query_gts *xe_query_gts_new(int fd)
> +static struct drm_xe_query_gt_list *xe_query_gt_list_new(int fd)
> {
> - struct drm_xe_query_gts *gts;
> + struct drm_xe_query_gt_list *gt_list;
> struct drm_xe_device_query query = {
> .extensions = 0,
> - .query = DRM_XE_DEVICE_QUERY_GTS,
> + .query = DRM_XE_DEVICE_QUERY_GT_LIST,
> .size = 0,
> .data = 0,
> };
>
> igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
>
> - gts = malloc(query.size);
> - igt_assert(gts);
> + gt_list = malloc(query.size);
> + igt_assert(gt_list);
>
> - query.data = to_user_pointer(gts);
> + query.data = to_user_pointer(gt_list);
> igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
>
> - return gts;
> + return gt_list;
> }
>
> -static uint64_t __memory_regions(const struct drm_xe_query_gts *gts)
> +static uint64_t __memory_regions(const struct drm_xe_query_gt_list *gt_list)
> {
> uint64_t regions = 0;
> int i;
>
> - for (i = 0; i < gts->num_gt; i++)
> - regions |= gts->gts[i].native_mem_regions |
> - gts->gts[i].slow_mem_regions;
> + for (i = 0; i < gt_list->num_gt; i++)
> + regions |= gt_list->gt_list[i].native_mem_regions |
> + gt_list->gt_list[i].slow_mem_regions;
>
> return regions;
> }
> @@ -118,21 +118,21 @@ static struct drm_xe_query_mem_usage *xe_query_mem_usage_new(int fd)
> return mem_usage;
> }
>
> -static uint64_t native_region_for_gt(const struct drm_xe_query_gts *gts, int gt)
> +static uint64_t native_region_for_gt(const struct drm_xe_query_gt_list *gt_list, int gt)
> {
> uint64_t region;
>
> - igt_assert(gts->num_gt > gt);
> - region = gts->gts[gt].native_mem_regions;
> + igt_assert(gt_list->num_gt > gt);
> + region = gt_list->gt_list[gt].native_mem_regions;
> igt_assert(region);
>
> return region;
> }
>
> static uint64_t gt_vram_size(const struct drm_xe_query_mem_usage *mem_usage,
> - const struct drm_xe_query_gts *gts, int gt)
> + const struct drm_xe_query_gt_list *gt_list, int gt)
> {
> - int region_idx = ffs(native_region_for_gt(gts, gt)) - 1;
> + int region_idx = ffs(native_region_for_gt(gt_list, gt)) - 1;
>
> if (XE_IS_CLASS_VRAM(&mem_usage->regions[region_idx]))
> return mem_usage->regions[region_idx].total_size;
> @@ -141,9 +141,9 @@ static uint64_t gt_vram_size(const struct drm_xe_query_mem_usage *mem_usage,
> }
>
> static uint64_t gt_visible_vram_size(const struct drm_xe_query_mem_usage *mem_usage,
> - const struct drm_xe_query_gts *gts, int gt)
> + const struct drm_xe_query_gt_list *gt_list, int gt)
> {
> - int region_idx = ffs(native_region_for_gt(gts, gt)) - 1;
> + int region_idx = ffs(native_region_for_gt(gt_list, gt)) - 1;
>
> if (XE_IS_CLASS_VRAM(&mem_usage->regions[region_idx]))
> return mem_usage->regions[region_idx].cpu_visible_size;
> @@ -220,7 +220,7 @@ static struct xe_device *find_in_cache(int fd)
> static void xe_device_free(struct xe_device *xe_dev)
> {
> free(xe_dev->config);
> - free(xe_dev->gts);
> + free(xe_dev->gt_list);
> free(xe_dev->hw_engines);
> free(xe_dev->mem_usage);
> free(xe_dev->vram_size);
> @@ -252,18 +252,18 @@ struct xe_device *xe_device_get(int fd)
> xe_dev->number_gt = xe_dev->config->info[XE_QUERY_CONFIG_GT_COUNT];
> xe_dev->va_bits = xe_dev->config->info[XE_QUERY_CONFIG_VA_BITS];
> xe_dev->dev_id = xe_dev->config->info[XE_QUERY_CONFIG_REV_AND_DEVICE_ID] & 0xffff;
> - xe_dev->gts = xe_query_gts_new(fd);
> - xe_dev->memory_regions = __memory_regions(xe_dev->gts);
> + xe_dev->gt_list = xe_query_gt_list_new(fd);
> + xe_dev->memory_regions = __memory_regions(xe_dev->gt_list);
> xe_dev->hw_engines = xe_query_engines_new(fd, &xe_dev->number_hw_engines);
> xe_dev->mem_usage = xe_query_mem_usage_new(fd);
> xe_dev->vram_size = calloc(xe_dev->number_gt, sizeof(*xe_dev->vram_size));
> xe_dev->visible_vram_size = calloc(xe_dev->number_gt, sizeof(*xe_dev->visible_vram_size));
> for (int gt = 0; gt < xe_dev->number_gt; gt++) {
> xe_dev->vram_size[gt] = gt_vram_size(xe_dev->mem_usage,
> - xe_dev->gts, gt);
> + xe_dev->gt_list, gt);
> xe_dev->visible_vram_size[gt] =
> gt_visible_vram_size(xe_dev->mem_usage,
> - xe_dev->gts, gt);
> + xe_dev->gt_list, gt);
> }
> xe_dev->default_alignment = __mem_default_alignment(xe_dev->mem_usage);
> xe_dev->has_vram = __mem_has_vram(xe_dev->mem_usage);
> @@ -356,7 +356,7 @@ _TYPE _NAME(int fd) \
> * xe_number_gt:
> * @fd: xe device fd
> *
> - * Return number of gts for xe device fd.
> + * Return number of gt_list for xe device fd.
> */
> xe_dev_FN(xe_number_gt, number_gt, unsigned int);
>
> @@ -396,7 +396,7 @@ uint64_t vram_memory(int fd, int gt)
> igt_assert(xe_dev);
> igt_assert(gt >= 0 && gt < xe_dev->number_gt);
>
> - return xe_has_vram(fd) ? native_region_for_gt(xe_dev->gts, gt) : 0;
> + return xe_has_vram(fd) ? native_region_for_gt(xe_dev->gt_list, gt) : 0;
> }
>
> static uint64_t __xe_visible_vram_size(int fd, int gt)
> @@ -647,7 +647,7 @@ uint64_t xe_vram_available(int fd, int gt)
> xe_dev = find_in_cache(fd);
> igt_assert(xe_dev);
>
> - region_idx = ffs(native_region_for_gt(xe_dev->gts, gt)) - 1;
> + region_idx = ffs(native_region_for_gt(xe_dev->gt_list, gt)) - 1;
> mem_region = &xe_dev->mem_usage->regions[region_idx];
>
> if (XE_IS_CLASS_VRAM(mem_region)) {
> diff --git a/lib/xe/xe_query.h b/lib/xe/xe_query.h
> index 20dbfa12c..da7deaf4c 100644
> --- a/lib/xe/xe_query.h
> +++ b/lib/xe/xe_query.h
> @@ -26,13 +26,13 @@ struct xe_device {
> /** @config: xe configuration */
> struct drm_xe_query_config *config;
>
> - /** @gts: gt info */
> - struct drm_xe_query_gts *gts;
> + /** @gt_list: gt info */
> + struct drm_xe_query_gt_list *gt_list;
>
> /** @number_gt: number of gt */
> unsigned int number_gt;
>
> - /** @gts: bitmask of all memory regions */
> + /** @gt_list: bitmask of all memory regions */
> uint64_t memory_regions;
>
> /** @hw_engines: array of hardware engines */
> @@ -44,10 +44,10 @@ struct xe_device {
> /** @mem_usage: regions memory information and usage */
> struct drm_xe_query_mem_usage *mem_usage;
>
> - /** @vram_size: array of vram sizes for all gts */
> + /** @vram_size: array of vram sizes for all gt_list */
> uint64_t *vram_size;
>
> - /** @visible_vram_size: array of visible vram sizes for all gts */
> + /** @visible_vram_size: array of visible vram sizes for all gt_list */
> uint64_t *visible_vram_size;
>
> /** @default_alignment: safe alignment regardless region location */
> diff --git a/lib/xe/xe_spin.c b/lib/xe/xe_spin.c
> index f0d77aed3..b05b38829 100644
> --- a/lib/xe/xe_spin.c
> +++ b/lib/xe/xe_spin.c
> @@ -20,10 +20,10 @@ static uint32_t read_timestamp_frequency(int fd, int gt_id)
> {
> struct xe_device *dev = xe_device_get(fd);
>
> - igt_assert(dev && dev->gts && dev->gts->num_gt);
> - igt_assert(gt_id >= 0 && gt_id <= dev->gts->num_gt);
> + igt_assert(dev && dev->gt_list && dev->gt_list->num_gt);
> + igt_assert(gt_id >= 0 && gt_id <= dev->gt_list->num_gt);
>
> - return dev->gts->gts[gt_id].clock_freq;
> + return dev->gt_list->gt_list[gt_id].clock_freq;
> }
>
> static uint64_t div64_u64_round_up(const uint64_t x, const uint64_t y)
> diff --git a/tests/intel-ci/xe-fast-feedback.testlist b/tests/intel-ci/xe-fast-feedback.testlist
> index a9fe43b08..0cf28baf9 100644
> --- a/tests/intel-ci/xe-fast-feedback.testlist
> +++ b/tests/intel-ci/xe-fast-feedback.testlist
> @@ -147,7 +147,7 @@ igt@xe_prime_self_import@basic-with_fd_dup
> #igt@xe_prime_self_import@basic-llseek-size
> igt@xe_query@query-engines
> igt@xe_query@query-mem-usage
> -igt@xe_query@query-gts
> +igt@xe_query@query-gt-list
> igt@xe_query@query-config
> igt@xe_query@query-hwconfig
> igt@xe_query@query-topology
> diff --git a/tests/intel/xe_query.c b/tests/intel/xe_query.c
> index 2b8edf5ec..30fc367c8 100644
> --- a/tests/intel/xe_query.c
> +++ b/tests/intel/xe_query.c
> @@ -252,17 +252,17 @@ test_query_mem_usage(int fd)
> }
>
> /**
> - * SUBTEST: query-gts
> + * SUBTEST: query-gt-list
> * Test category: functionality test
> - * Description: Display information about available GTs for xe device.
> + * Description: Display information about available GT components for xe device.
> */
> static void
> -test_query_gts(int fd)
> +test_query_gt_list(int fd)
> {
> - struct drm_xe_query_gts *gts;
> + struct drm_xe_query_gt_list *gt_list;
> struct drm_xe_device_query query = {
> .extensions = 0,
> - .query = DRM_XE_DEVICE_QUERY_GTS,
> + .query = DRM_XE_DEVICE_QUERY_GT_LIST,
> .size = 0,
> .data = 0,
> };
> @@ -271,29 +271,29 @@ test_query_gts(int fd)
> igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
> igt_assert_neq(query.size, 0);
>
> - gts = malloc(query.size);
> - igt_assert(gts);
> + gt_list = malloc(query.size);
> + igt_assert(gt_list);
>
> - query.data = to_user_pointer(gts);
> + query.data = to_user_pointer(gt_list);
> igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
>
> - for (i = 0; i < gts->num_gt; i++) {
> - igt_info("type: %d\n", gts->gts[i].type);
> - igt_info("gt_id: %d\n", gts->gts[i].gt_id);
> - igt_info("clock_freq: %u\n", gts->gts[i].clock_freq);
> + for (i = 0; i < gt_list->num_gt; i++) {
> + igt_info("type: %d\n", gt_list->gt_list[i].type);
> + igt_info("gt_id: %d\n", gt_list->gt_list[i].gt_id);
> + igt_info("clock_freq: %u\n", gt_list->gt_list[i].clock_freq);
> igt_info("native_mem_regions: 0x%016llx\n",
> - gts->gts[i].native_mem_regions);
> + gt_list->gt_list[i].native_mem_regions);
> igt_info("slow_mem_regions: 0x%016llx\n",
> - gts->gts[i].slow_mem_regions);
> + gt_list->gt_list[i].slow_mem_regions);
> igt_info("inaccessible_mem_regions: 0x%016llx\n",
> - gts->gts[i].inaccessible_mem_regions);
> + gt_list->gt_list[i].inaccessible_mem_regions);
> }
> }
>
> /**
> * SUBTEST: query-topology
> * Test category: functionality test
> - * Description: Display topology information of GTs.
> + * Description: Display topology information of GT.
> */
> static void
> test_query_gt_topology(int fd)
> @@ -677,8 +677,8 @@ igt_main
> igt_subtest("query-mem-usage")
> test_query_mem_usage(xe);
>
> - igt_subtest("query-gts")
> - test_query_gts(xe);
> + igt_subtest("query-gt-list")
> + test_query_gt_list(xe);
>
> igt_subtest("query-config")
> test_query_config(xe);
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 31+ messages in thread
* [igt-dev] [PATCH v4 14/14] drm-uapi/xe: Fix naming of XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY
2023-09-28 11:05 [igt-dev] [PATCH v4 00/14] uAPI Alignment - take 1 v4 Francois Dugast
` (12 preceding siblings ...)
2023-09-28 11:05 ` [igt-dev] [PATCH v4 13/14] drm-uapi/xe: Rename gts to gt_list Francois Dugast
@ 2023-09-28 11:05 ` Francois Dugast
2023-09-28 11:27 ` Francois Dugast
2023-09-28 12:11 ` [igt-dev] ✗ CI.xeBAT: failure for uAPI Alignment - take 1 (rev3) Patchwork
` (2 subsequent siblings)
16 siblings, 1 reply; 31+ messages in thread
From: Francois Dugast @ 2023-09-28 11:05 UTC (permalink / raw)
To: igt-dev; +Cc: Rodrigo Vivi
From: Rodrigo Vivi <rodrigo.vivi@intel.com>
Align with kernel commit
("drm/xe/uapi: Fix naming of XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY")
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
include/drm-uapi/xe_drm.h | 4 ++--
tests/intel/xe_query.c | 4 ++--
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index 652879eb2..6ff1106e4 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -328,8 +328,8 @@ struct drm_xe_query_config {
#define XE_QUERY_CONFIG_VA_BITS 3
#define XE_QUERY_CONFIG_GT_COUNT 4
#define XE_QUERY_CONFIG_MEM_REGION_COUNT 5
-#define XE_QUERY_CONFIG_MAX_ENGINE_PRIORITY 6
-#define XE_QUERY_CONFIG_NUM_PARAM (XE_QUERY_CONFIG_MAX_ENGINE_PRIORITY + 1)
+#define XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY 6
+#define XE_QUERY_CONFIG_NUM_PARAM (XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY + 1)
/** @info: array of elements containing the config info */
__u64 info[];
};
diff --git a/tests/intel/xe_query.c b/tests/intel/xe_query.c
index 30fc367c8..2cff75414 100644
--- a/tests/intel/xe_query.c
+++ b/tests/intel/xe_query.c
@@ -380,8 +380,8 @@ test_query_config(int fd)
config->info[XE_QUERY_CONFIG_GT_COUNT]);
igt_info("XE_QUERY_CONFIG_MEM_REGION_COUNT\t%llu\n",
config->info[XE_QUERY_CONFIG_MEM_REGION_COUNT]);
- igt_info("XE_QUERY_CONFIG_MAX_ENGINE_PRIORITY\t%llu\n",
- config->info[XE_QUERY_CONFIG_MAX_ENGINE_PRIORITY]);
+ igt_info("XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY\t%llu\n",
+ config->info[XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY]);
dump_hex_debug(config, query.size);
free(config);
--
2.34.1
^ permalink raw reply related [flat|nested] 31+ messages in thread

* Re: [igt-dev] [PATCH v4 14/14] drm-uapi/xe: Fix naming of XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY
2023-09-28 11:05 ` [igt-dev] [PATCH v4 14/14] drm-uapi/xe: Fix naming of XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY Francois Dugast
@ 2023-09-28 11:27 ` Francois Dugast
0 siblings, 0 replies; 31+ messages in thread
From: Francois Dugast @ 2023-09-28 11:27 UTC (permalink / raw)
To: igt-dev; +Cc: Rodrigo Vivi
On Thu, Sep 28, 2023 at 11:05:16AM +0000, Francois Dugast wrote:
> From: Rodrigo Vivi <rodrigo.vivi@intel.com>
>
> Align with kernel commit
> ("drm/xe/uapi: Fix naming of XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY")
>
> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> Signed-off-by: Francois Dugast <francois.dugast@intel.com>
Reviewed-by: Francois Dugast <francois.dugast@intel.com>
> ---
> include/drm-uapi/xe_drm.h | 4 ++--
> tests/intel/xe_query.c | 4 ++--
> 2 files changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
> index 652879eb2..6ff1106e4 100644
> --- a/include/drm-uapi/xe_drm.h
> +++ b/include/drm-uapi/xe_drm.h
> @@ -328,8 +328,8 @@ struct drm_xe_query_config {
> #define XE_QUERY_CONFIG_VA_BITS 3
> #define XE_QUERY_CONFIG_GT_COUNT 4
> #define XE_QUERY_CONFIG_MEM_REGION_COUNT 5
> -#define XE_QUERY_CONFIG_MAX_ENGINE_PRIORITY 6
> -#define XE_QUERY_CONFIG_NUM_PARAM (XE_QUERY_CONFIG_MAX_ENGINE_PRIORITY + 1)
> +#define XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY 6
> +#define XE_QUERY_CONFIG_NUM_PARAM (XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY + 1)
> /** @info: array of elements containing the config info */
> __u64 info[];
> };
> diff --git a/tests/intel/xe_query.c b/tests/intel/xe_query.c
> index 30fc367c8..2cff75414 100644
> --- a/tests/intel/xe_query.c
> +++ b/tests/intel/xe_query.c
> @@ -380,8 +380,8 @@ test_query_config(int fd)
> config->info[XE_QUERY_CONFIG_GT_COUNT]);
> igt_info("XE_QUERY_CONFIG_MEM_REGION_COUNT\t%llu\n",
> config->info[XE_QUERY_CONFIG_MEM_REGION_COUNT]);
> - igt_info("XE_QUERY_CONFIG_MAX_ENGINE_PRIORITY\t%llu\n",
> - config->info[XE_QUERY_CONFIG_MAX_ENGINE_PRIORITY]);
> + igt_info("XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY\t%llu\n",
> + config->info[XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY]);
> dump_hex_debug(config, query.size);
>
> free(config);
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 31+ messages in thread
* [igt-dev] ✗ CI.xeBAT: failure for uAPI Alignment - take 1 (rev3)
2023-09-28 11:05 [igt-dev] [PATCH v4 00/14] uAPI Alignment - take 1 v4 Francois Dugast
` (13 preceding siblings ...)
2023-09-28 11:05 ` [igt-dev] [PATCH v4 14/14] drm-uapi/xe: Fix naming of XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY Francois Dugast
@ 2023-09-28 12:11 ` Patchwork
2023-09-28 12:15 ` [igt-dev] ✓ Fi.CI.BAT: success " Patchwork
2023-09-28 23:36 ` [igt-dev] ✗ Fi.CI.IGT: failure " Patchwork
16 siblings, 0 replies; 31+ messages in thread
From: Patchwork @ 2023-09-28 12:11 UTC (permalink / raw)
To: Rodrigo Vivi; +Cc: igt-dev
[-- Attachment #1: Type: text/plain, Size: 16529 bytes --]
== Series Details ==
Series: uAPI Alignment - take 1 (rev3)
URL : https://patchwork.freedesktop.org/series/123916/
State : failure
== Summary ==
CI Bug Log - changes from XEIGT_7506_BAT -> XEIGTPW_9890_BAT
====================================================
Summary
-------
**FAILURE**
Serious unknown changes coming with XEIGTPW_9890_BAT absolutely need to be
verified manually.
If you think the reported changes have nothing to do with the changes
introduced in XEIGTPW_9890_BAT, please notify your bug team (lgci.bug.filing@intel.com) to allow them
to document this new failure mode, which will reduce false positives in CI.
Participating hosts (3 -> 4)
------------------------------
Additional (1): bat-pvc-2
Possible new issues
-------------------
Here are the unknown changes that may have been introduced in XEIGTPW_9890_BAT:
### IGT changes ###
#### Possible regressions ####
* igt@xe_live_ktest@migrate:
- bat-adlp-7: NOTRUN -> [INCOMPLETE][1]
[1]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9890/bat-adlp-7/igt@xe_live_ktest@migrate.html
Known issues
------------
Here are the changes found in XEIGTPW_9890_BAT that come from known issues:
### IGT changes ###
#### Issues hit ####
* igt@kms_psr@primary_page_flip:
- bat-adlp-7: NOTRUN -> [FAIL][2] ([Intel XE#716]) +12 other tests fail
[2]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9890/bat-adlp-7/igt@kms_psr@primary_page_flip.html
* igt@xe_exec_compute_mode@twice-userptr-invalidate:
- bat-atsm-2: [PASS][3] -> [FAIL][4] ([Intel XE#716]) +127 other tests fail
[3]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7506/bat-atsm-2/igt@xe_exec_compute_mode@twice-userptr-invalidate.html
[4]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9890/bat-atsm-2/igt@xe_exec_compute_mode@twice-userptr-invalidate.html
* igt@xe_exec_fault_mode@twice-userptr-invalidate-prefetch:
- bat-pvc-2: NOTRUN -> [FAIL][5] ([Intel XE#716]) +206 other tests fail
[5]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9890/bat-pvc-2/igt@xe_exec_fault_mode@twice-userptr-invalidate-prefetch.html
* igt@xe_intel_bb@create-in-region:
- bat-dg2-oem2: [PASS][6] -> [FAIL][7] ([Intel XE#716]) +177 other tests fail
[6]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7506/bat-dg2-oem2/igt@xe_intel_bb@create-in-region.html
[7]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9890/bat-dg2-oem2/igt@xe_intel_bb@create-in-region.html
- bat-adlp-7: [PASS][8] -> [FAIL][9] ([Intel XE#716]) +154 other tests fail
[8]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7506/bat-adlp-7/igt@xe_intel_bb@create-in-region.html
[9]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9890/bat-adlp-7/igt@xe_intel_bb@create-in-region.html
* igt@xe_module_load@load:
- bat-pvc-2: NOTRUN -> [SKIP][10] ([Intel XE#378])
[10]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9890/bat-pvc-2/igt@xe_module_load@load.html
#### Warnings ####
* igt@kms_addfb_basic@addfb25-y-tiled-small-legacy:
- bat-dg2-oem2: [SKIP][11] ([Intel XE#623]) -> [FAIL][12] ([Intel XE#716])
[11]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7506/bat-dg2-oem2/igt@kms_addfb_basic@addfb25-y-tiled-small-legacy.html
[12]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9890/bat-dg2-oem2/igt@kms_addfb_basic@addfb25-y-tiled-small-legacy.html
* igt@kms_addfb_basic@basic-y-tiled-legacy:
- bat-dg2-oem2: [SKIP][13] ([Intel XE#624]) -> [FAIL][14] ([Intel XE#716])
[13]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7506/bat-dg2-oem2/igt@kms_addfb_basic@basic-y-tiled-legacy.html
[14]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9890/bat-dg2-oem2/igt@kms_addfb_basic@basic-y-tiled-legacy.html
- bat-adlp-7: [FAIL][15] ([Intel XE#609]) -> [FAIL][16] ([Intel XE#716]) +2 other tests fail
[15]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7506/bat-adlp-7/igt@kms_addfb_basic@basic-y-tiled-legacy.html
[16]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9890/bat-adlp-7/igt@kms_addfb_basic@basic-y-tiled-legacy.html
* igt@kms_addfb_basic@invalid-set-prop-any:
- bat-atsm-2: [SKIP][17] ([i915#6077]) -> [FAIL][18] ([Intel XE#716]) +33 other tests fail
[17]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7506/bat-atsm-2/igt@kms_addfb_basic@invalid-set-prop-any.html
[18]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9890/bat-atsm-2/igt@kms_addfb_basic@invalid-set-prop-any.html
* igt@kms_addfb_basic@tile-pitch-mismatch:
- bat-dg2-oem2: [FAIL][19] ([Intel XE#609]) -> [FAIL][20] ([Intel XE#716]) +1 other test fail
[19]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7506/bat-dg2-oem2/igt@kms_addfb_basic@tile-pitch-mismatch.html
[20]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9890/bat-dg2-oem2/igt@kms_addfb_basic@tile-pitch-mismatch.html
* igt@kms_cursor_legacy@basic-flip-before-cursor-legacy:
- bat-atsm-2: [SKIP][21] ([Intel XE#274] / [Intel XE#539]) -> [FAIL][22] ([Intel XE#716]) +5 other tests fail
[21]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7506/bat-atsm-2/igt@kms_cursor_legacy@basic-flip-before-cursor-legacy.html
[22]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9890/bat-atsm-2/igt@kms_cursor_legacy@basic-flip-before-cursor-legacy.html
* igt@kms_dsc@dsc-basic:
- bat-atsm-2: [SKIP][23] ([Intel XE#539]) -> [FAIL][24] ([Intel XE#716]) +1 other test fail
[23]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7506/bat-atsm-2/igt@kms_dsc@dsc-basic.html
[24]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9890/bat-atsm-2/igt@kms_dsc@dsc-basic.html
- bat-dg2-oem2: [SKIP][25] ([Intel XE#423]) -> [FAIL][26] ([Intel XE#716])
[25]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7506/bat-dg2-oem2/igt@kms_dsc@dsc-basic.html
[26]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9890/bat-dg2-oem2/igt@kms_dsc@dsc-basic.html
- bat-adlp-7: [SKIP][27] ([Intel XE#423]) -> [FAIL][28] ([Intel XE#716])
[27]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7506/bat-adlp-7/igt@kms_dsc@dsc-basic.html
[28]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9890/bat-adlp-7/igt@kms_dsc@dsc-basic.html
* igt@kms_flip@basic-flip-vs-modeset:
- bat-atsm-2: [SKIP][29] ([Intel XE#275]) -> [FAIL][30] ([Intel XE#716]) +3 other tests fail
[29]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7506/bat-atsm-2/igt@kms_flip@basic-flip-vs-modeset.html
[30]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9890/bat-atsm-2/igt@kms_flip@basic-flip-vs-modeset.html
* igt@kms_flip@basic-flip-vs-wf_vblank:
- bat-adlp-7: [FAIL][31] ([Intel XE#480]) -> [FAIL][32] ([Intel XE#716])
[31]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7506/bat-adlp-7/igt@kms_flip@basic-flip-vs-wf_vblank.html
[32]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9890/bat-adlp-7/igt@kms_flip@basic-flip-vs-wf_vblank.html
* igt@kms_force_connector_basic@force-connector-state:
- bat-atsm-2: [SKIP][33] ([Intel XE#277] / [Intel XE#540]) -> [FAIL][34] ([Intel XE#716]) +2 other tests fail
[33]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7506/bat-atsm-2/igt@kms_force_connector_basic@force-connector-state.html
[34]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9890/bat-atsm-2/igt@kms_force_connector_basic@force-connector-state.html
* igt@kms_force_connector_basic@prune-stale-modes:
- bat-dg2-oem2: [SKIP][35] ([i915#5274]) -> [FAIL][36] ([Intel XE#716])
[35]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7506/bat-dg2-oem2/igt@kms_force_connector_basic@prune-stale-modes.html
[36]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9890/bat-dg2-oem2/igt@kms_force_connector_basic@prune-stale-modes.html
* igt@kms_frontbuffer_tracking@basic:
- bat-dg2-oem2: [FAIL][37] ([Intel XE#608]) -> [FAIL][38] ([Intel XE#716])
[37]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7506/bat-dg2-oem2/igt@kms_frontbuffer_tracking@basic.html
[38]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9890/bat-dg2-oem2/igt@kms_frontbuffer_tracking@basic.html
- bat-adlp-7: [INCOMPLETE][39] ([Intel XE#632]) -> [FAIL][40] ([Intel XE#716])
[39]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7506/bat-adlp-7/igt@kms_frontbuffer_tracking@basic.html
[40]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9890/bat-adlp-7/igt@kms_frontbuffer_tracking@basic.html
* igt@kms_hdmi_inject@inject-audio:
- bat-atsm-2: [SKIP][41] ([Intel XE#540]) -> [FAIL][42] ([Intel XE#716])
[41]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7506/bat-atsm-2/igt@kms_hdmi_inject@inject-audio.html
[42]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9890/bat-atsm-2/igt@kms_hdmi_inject@inject-audio.html
* igt@kms_pipe_crc_basic@compare-crc-sanitycheck-nv12:
- bat-dg2-oem2: [FAIL][43] ([Intel XE#400]) -> [FAIL][44] ([Intel XE#716])
[43]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7506/bat-dg2-oem2/igt@kms_pipe_crc_basic@compare-crc-sanitycheck-nv12.html
[44]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9890/bat-dg2-oem2/igt@kms_pipe_crc_basic@compare-crc-sanitycheck-nv12.html
* igt@kms_pipe_crc_basic@compare-crc-sanitycheck-xr24:
- bat-atsm-2: [SKIP][45] ([Intel XE#537] / [i915#1836]) -> [FAIL][46] ([Intel XE#716]) +6 other tests fail
[45]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7506/bat-atsm-2/igt@kms_pipe_crc_basic@compare-crc-sanitycheck-xr24.html
[46]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9890/bat-atsm-2/igt@kms_pipe_crc_basic@compare-crc-sanitycheck-xr24.html
* igt@kms_prop_blob@basic:
- bat-atsm-2: [SKIP][47] ([Intel XE#273]) -> [FAIL][48] ([Intel XE#716])
[47]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7506/bat-atsm-2/igt@kms_prop_blob@basic.html
[48]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9890/bat-atsm-2/igt@kms_prop_blob@basic.html
* igt@kms_psr@cursor_plane_move:
- bat-atsm-2: [SKIP][49] ([i915#1072]) -> [FAIL][50] ([Intel XE#716]) +2 other tests fail
[49]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7506/bat-atsm-2/igt@kms_psr@cursor_plane_move.html
[50]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9890/bat-atsm-2/igt@kms_psr@cursor_plane_move.html
* igt@kms_psr@primary_page_flip:
- bat-dg2-oem2: [SKIP][51] ([i915#1072]) -> [FAIL][52] ([Intel XE#716]) +2 other tests fail
[51]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7506/bat-dg2-oem2/igt@kms_psr@primary_page_flip.html
[52]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9890/bat-dg2-oem2/igt@kms_psr@primary_page_flip.html
* igt@xe_compute@compute-square:
- bat-atsm-2: [SKIP][53] ([Intel XE#672]) -> [FAIL][54] ([Intel XE#716])
[53]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7506/bat-atsm-2/igt@xe_compute@compute-square.html
[54]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9890/bat-atsm-2/igt@xe_compute@compute-square.html
- bat-dg2-oem2: [SKIP][55] ([Intel XE#672]) -> [FAIL][56] ([Intel XE#716])
[55]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7506/bat-dg2-oem2/igt@xe_compute@compute-square.html
[56]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9890/bat-dg2-oem2/igt@xe_compute@compute-square.html
* igt@xe_evict@evict-beng-small-external:
- bat-adlp-7: [SKIP][57] ([Intel XE#261] / [Intel XE#688]) -> [FAIL][58] ([Intel XE#716]) +15 other tests fail
[57]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7506/bat-adlp-7/igt@xe_evict@evict-beng-small-external.html
[58]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9890/bat-adlp-7/igt@xe_evict@evict-beng-small-external.html
* igt@xe_exec_fault_mode@many-basic:
- bat-dg2-oem2: [SKIP][59] ([Intel XE#288]) -> [FAIL][60] ([Intel XE#716]) +17 other tests fail
[59]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7506/bat-dg2-oem2/igt@xe_exec_fault_mode@many-basic.html
[60]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9890/bat-dg2-oem2/igt@xe_exec_fault_mode@many-basic.html
* igt@xe_exec_fault_mode@twice-userptr:
- bat-adlp-7: [SKIP][61] ([Intel XE#288]) -> [FAIL][62] ([Intel XE#716]) +17 other tests fail
[61]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7506/bat-adlp-7/igt@xe_exec_fault_mode@twice-userptr.html
[62]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9890/bat-adlp-7/igt@xe_exec_fault_mode@twice-userptr.html
* igt@xe_exec_fault_mode@twice-userptr-invalidate-imm:
- bat-atsm-2: [SKIP][63] ([Intel XE#288]) -> [FAIL][64] ([Intel XE#716]) +17 other tests fail
[63]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7506/bat-atsm-2/igt@xe_exec_fault_mode@twice-userptr-invalidate-imm.html
[64]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9890/bat-atsm-2/igt@xe_exec_fault_mode@twice-userptr-invalidate-imm.html
* igt@xe_huc_copy@huc_copy:
- bat-dg2-oem2: [SKIP][65] ([Intel XE#255]) -> [FAIL][66] ([Intel XE#716])
[65]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7506/bat-dg2-oem2/igt@xe_huc_copy@huc_copy.html
[66]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9890/bat-dg2-oem2/igt@xe_huc_copy@huc_copy.html
- bat-atsm-2: [SKIP][67] ([Intel XE#255]) -> [FAIL][68] ([Intel XE#716])
[67]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7506/bat-atsm-2/igt@xe_huc_copy@huc_copy.html
[68]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9890/bat-atsm-2/igt@xe_huc_copy@huc_copy.html
* igt@xe_mmap@vram:
- bat-adlp-7: [SKIP][69] ([Intel XE#263]) -> [FAIL][70] ([Intel XE#716])
[69]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7506/bat-adlp-7/igt@xe_mmap@vram.html
[70]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9890/bat-adlp-7/igt@xe_mmap@vram.html
{name}: This element is suppressed. This means it is ignored when computing
the status of the difference (SUCCESS, WARNING, or FAILURE).
[Intel XE#255]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/255
[Intel XE#261]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/261
[Intel XE#263]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/263
[Intel XE#273]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/273
[Intel XE#274]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/274
[Intel XE#275]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/275
[Intel XE#277]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/277
[Intel XE#288]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/288
[Intel XE#378]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/378
[Intel XE#400]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/400
[Intel XE#423]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/423
[Intel XE#480]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/480
[Intel XE#537]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/537
[Intel XE#539]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/539
[Intel XE#540]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/540
[Intel XE#608]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/608
[Intel XE#609]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/609
[Intel XE#623]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/623
[Intel XE#624]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/624
[Intel XE#632]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/632
[Intel XE#672]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/672
[Intel XE#688]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/688
[Intel XE#716]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/716
[i915#1072]: https://gitlab.freedesktop.org/drm/intel/issues/1072
[i915#1836]: https://gitlab.freedesktop.org/drm/intel/issues/1836
[i915#5274]: https://gitlab.freedesktop.org/drm/intel/issues/5274
[i915#6077]: https://gitlab.freedesktop.org/drm/intel/issues/6077
Build changes
-------------
* IGT: IGT_7506 -> IGTPW_9890
* Linux: xe-397-bce60b0ff2937cb2ea51841a479bc1a2da65052b -> xe-399-7c58b58522cf124dd324fbf95d9dd838fac36bcb
IGTPW_9890: 9890
IGT_7506: 4fdf544bd0a38c5a100ef43c30171827e1c8c442 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
xe-397-bce60b0ff2937cb2ea51841a479bc1a2da65052b: bce60b0ff2937cb2ea51841a479bc1a2da65052b
xe-399-7c58b58522cf124dd324fbf95d9dd838fac36bcb: 7c58b58522cf124dd324fbf95d9dd838fac36bcb
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9890/index.html
* [igt-dev] ✓ Fi.CI.BAT: success for uAPI Alignment - take 1 (rev3)
2023-09-28 11:05 [igt-dev] [PATCH v4 00/14] uAPI Alignment - take 1 v4 Francois Dugast
` (14 preceding siblings ...)
2023-09-28 12:11 ` [igt-dev] ✗ CI.xeBAT: failure for uAPI Alignment - take 1 (rev3) Patchwork
@ 2023-09-28 12:15 ` Patchwork
2023-09-28 23:36 ` [igt-dev] ✗ Fi.CI.IGT: failure " Patchwork
16 siblings, 0 replies; 31+ messages in thread
From: Patchwork @ 2023-09-28 12:15 UTC (permalink / raw)
To: Rodrigo Vivi; +Cc: igt-dev
== Series Details ==
Series: uAPI Alignment - take 1 (rev3)
URL : https://patchwork.freedesktop.org/series/123916/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_13689 -> IGTPW_9890
====================================================
Summary
-------
**SUCCESS**
No regressions found.
External URL: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/index.html
Participating hosts (34 -> 39)
------------------------------
Additional (7): fi-kbl-soraka bat-kbl-2 fi-cfl-8700k fi-apl-guc fi-kbl-guc fi-ivb-3770 fi-skl-6600u
Missing (2): fi-hsw-4770 fi-snb-2520m
Known issues
------------
Here are the changes found in IGTPW_9890 that come from known issues:
### IGT changes ###
#### Issues hit ####
* igt@fbdev@info:
- bat-kbl-2: NOTRUN -> [SKIP][1] ([fdo#109271] / [i915#1849])
[1]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/bat-kbl-2/igt@fbdev@info.html
- fi-kbl-guc: NOTRUN -> [SKIP][2] ([fdo#109271] / [i915#1849])
[2]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/fi-kbl-guc/igt@fbdev@info.html
* igt@gem_exec_suspend@basic-s0@smem:
- bat-dg2-9: [PASS][3] -> [INCOMPLETE][4] ([i915#9275])
[3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/bat-dg2-9/igt@gem_exec_suspend@basic-s0@smem.html
[4]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/bat-dg2-9/igt@gem_exec_suspend@basic-s0@smem.html
* igt@gem_huc_copy@huc-copy:
- fi-cfl-8700k: NOTRUN -> [SKIP][5] ([fdo#109271] / [i915#2190])
[5]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/fi-cfl-8700k/igt@gem_huc_copy@huc-copy.html
- fi-skl-6600u: NOTRUN -> [SKIP][6] ([fdo#109271] / [i915#2190])
[6]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/fi-skl-6600u/igt@gem_huc_copy@huc-copy.html
- fi-kbl-soraka: NOTRUN -> [SKIP][7] ([fdo#109271] / [i915#2190])
[7]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/fi-kbl-soraka/igt@gem_huc_copy@huc-copy.html
* igt@gem_lmem_swapping@basic:
- fi-apl-guc: NOTRUN -> [SKIP][8] ([fdo#109271] / [i915#4613]) +3 other tests skip
[8]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/fi-apl-guc/igt@gem_lmem_swapping@basic.html
- fi-kbl-soraka: NOTRUN -> [SKIP][9] ([fdo#109271] / [i915#4613]) +3 other tests skip
[9]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/fi-kbl-soraka/igt@gem_lmem_swapping@basic.html
- fi-cfl-8700k: NOTRUN -> [SKIP][10] ([fdo#109271] / [i915#4613]) +3 other tests skip
[10]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/fi-cfl-8700k/igt@gem_lmem_swapping@basic.html
- fi-kbl-guc: NOTRUN -> [SKIP][11] ([fdo#109271] / [i915#4613]) +3 other tests skip
[11]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/fi-kbl-guc/igt@gem_lmem_swapping@basic.html
* igt@gem_lmem_swapping@parallel-random-engines:
- bat-kbl-2: NOTRUN -> [SKIP][12] ([fdo#109271]) +39 other tests skip
[12]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/bat-kbl-2/igt@gem_lmem_swapping@parallel-random-engines.html
* igt@gem_lmem_swapping@random-engines:
- fi-skl-6600u: NOTRUN -> [SKIP][13] ([fdo#109271] / [i915#4613]) +3 other tests skip
[13]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/fi-skl-6600u/igt@gem_lmem_swapping@random-engines.html
* igt@i915_selftest@live@gt_pm:
- fi-kbl-soraka: NOTRUN -> [DMESG-FAIL][14] ([i915#1886] / [i915#7913])
[14]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/fi-kbl-soraka/igt@i915_selftest@live@gt_pm.html
* igt@i915_selftest@live@requests:
- bat-mtlp-8: [PASS][15] -> [ABORT][16] ([i915#9414])
[15]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/bat-mtlp-8/igt@i915_selftest@live@requests.html
[16]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/bat-mtlp-8/igt@i915_selftest@live@requests.html
* igt@kms_cursor_legacy@basic-busy-flip-before-cursor-atomic:
- fi-kbl-soraka: NOTRUN -> [SKIP][17] ([fdo#109271]) +9 other tests skip
[17]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/fi-kbl-soraka/igt@kms_cursor_legacy@basic-busy-flip-before-cursor-atomic.html
* igt@kms_cursor_legacy@basic-flip-before-cursor-legacy:
- fi-kbl-guc: NOTRUN -> [SKIP][18] ([fdo#109271] / [i915#1845]) +8 other tests skip
[18]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/fi-kbl-guc/igt@kms_cursor_legacy@basic-flip-before-cursor-legacy.html
* igt@kms_dsc@dsc-basic:
- fi-skl-6600u: NOTRUN -> [SKIP][19] ([fdo#109271]) +8 other tests skip
[19]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/fi-skl-6600u/igt@kms_dsc@dsc-basic.html
* igt@kms_force_connector_basic@force-load-detect:
- fi-cfl-8700k: NOTRUN -> [SKIP][20] ([fdo#109271]) +10 other tests skip
[20]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/fi-cfl-8700k/igt@kms_force_connector_basic@force-load-detect.html
* igt@kms_hdmi_inject@inject-audio:
- fi-apl-guc: NOTRUN -> [SKIP][21] ([fdo#109271]) +16 other tests skip
[21]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/fi-apl-guc/igt@kms_hdmi_inject@inject-audio.html
- fi-kbl-guc: NOTRUN -> [FAIL][22] ([IGT#3])
[22]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/fi-kbl-guc/igt@kms_hdmi_inject@inject-audio.html
* igt@kms_pipe_crc_basic@suspend-read-crc@pipe-c-vga-1:
- fi-ivb-3770: NOTRUN -> [DMESG-WARN][23] ([i915#8841]) +6 other tests dmesg-warn
[23]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/fi-ivb-3770/igt@kms_pipe_crc_basic@suspend-read-crc@pipe-c-vga-1.html
* igt@kms_psr@cursor_plane_move:
- fi-ivb-3770: NOTRUN -> [SKIP][24] ([fdo#109271]) +21 other tests skip
[24]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/fi-ivb-3770/igt@kms_psr@cursor_plane_move.html
- fi-kbl-guc: NOTRUN -> [SKIP][25] ([fdo#109271]) +25 other tests skip
[25]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/fi-kbl-guc/igt@kms_psr@cursor_plane_move.html
#### Possible fixes ####
* igt@kms_chamelium_edid@hdmi-edid-read:
- {bat-dg2-13}: [DMESG-WARN][26] ([i915#7952]) -> [PASS][27]
[26]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/bat-dg2-13/igt@kms_chamelium_edid@hdmi-edid-read.html
[27]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/bat-dg2-13/igt@kms_chamelium_edid@hdmi-edid-read.html
[IGT#3]: https://gitlab.freedesktop.org/drm/igt-gpu-tools/issues/3
[fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
[i915#1845]: https://gitlab.freedesktop.org/drm/intel/issues/1845
[i915#1849]: https://gitlab.freedesktop.org/drm/intel/issues/1849
[i915#1886]: https://gitlab.freedesktop.org/drm/intel/issues/1886
[i915#2190]: https://gitlab.freedesktop.org/drm/intel/issues/2190
[i915#4613]: https://gitlab.freedesktop.org/drm/intel/issues/4613
[i915#7913]: https://gitlab.freedesktop.org/drm/intel/issues/7913
[i915#7952]: https://gitlab.freedesktop.org/drm/intel/issues/7952
[i915#8841]: https://gitlab.freedesktop.org/drm/intel/issues/8841
[i915#9275]: https://gitlab.freedesktop.org/drm/intel/issues/9275
[i915#9414]: https://gitlab.freedesktop.org/drm/intel/issues/9414
Build changes
-------------
* CI: CI-20190529 -> None
* IGT: IGT_7506 -> IGTPW_9890
CI-20190529: 20190529
CI_DRM_13689: 5933eb0a0717a28e668d33e01a707311d31cebbb @ git://anongit.freedesktop.org/gfx-ci/linux
IGTPW_9890: 9890
IGT_7506: 4fdf544bd0a38c5a100ef43c30171827e1c8c442 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
Testlist changes
----------------
+igt@xe_exec_balancer@many-cm-parallel-basic
+igt@xe_exec_balancer@many-cm-parallel-rebind
+igt@xe_exec_balancer@many-cm-parallel-userptr
+igt@xe_exec_balancer@many-cm-parallel-userptr-invalidate
+igt@xe_exec_balancer@many-cm-parallel-userptr-invalidate-race
+igt@xe_exec_balancer@many-cm-parallel-userptr-rebind
+igt@xe_exec_balancer@many-execqueues-cm-parallel-basic
+igt@xe_exec_balancer@many-execqueues-cm-parallel-rebind
+igt@xe_exec_balancer@many-execqueues-cm-parallel-userptr
+igt@xe_exec_balancer@many-execqueues-cm-parallel-userptr-invalidate
+igt@xe_exec_balancer@many-execqueues-cm-parallel-userptr-invalidate-race
+igt@xe_exec_balancer@many-execqueues-cm-parallel-userptr-rebind
+igt@xe_exec_balancer@no-exec-cm-parallel-basic
+igt@xe_exec_balancer@no-exec-cm-parallel-rebind
+igt@xe_exec_balancer@no-exec-cm-parallel-userptr
+igt@xe_exec_balancer@no-exec-cm-parallel-userptr-invalidate
+igt@xe_exec_balancer@no-exec-cm-parallel-userptr-invalidate-race
+igt@xe_exec_balancer@no-exec-cm-parallel-userptr-rebind
+igt@xe_exec_balancer@once-cm-parallel-basic
+igt@xe_exec_balancer@once-cm-parallel-rebind
+igt@xe_exec_balancer@once-cm-parallel-userptr
+igt@xe_exec_balancer@once-cm-parallel-userptr-invalidate
+igt@xe_exec_balancer@once-cm-parallel-userptr-invalidate-race
+igt@xe_exec_balancer@once-cm-parallel-userptr-rebind
+igt@xe_exec_balancer@twice-cm-parallel-basic
+igt@xe_exec_balancer@twice-cm-parallel-rebind
+igt@xe_exec_balancer@twice-cm-parallel-userptr
+igt@xe_exec_balancer@twice-cm-parallel-userptr-invalidate
+igt@xe_exec_balancer@twice-cm-parallel-userptr-invalidate-race
+igt@xe_exec_balancer@twice-cm-parallel-userptr-rebind
+igt@xe_exec_threads@threads-hang-rebind-err
+igt@xe_exec_threads@threads-hang-userptr-rebind-err
+igt@xe_exec_threads@threads-rebind-err
+igt@xe_exec_threads@threads-userptr-rebind-err
+igt@xe_query@query-cs-cycles
+igt@xe_query@query-gt-list
+igt@xe_query@query-invalid-cs-cycles
-igt@xe_create@multigpu-create-massive-size
-igt@xe_exec_threads@threads-hang-shared-vm-rebind-err
-igt@xe_exec_threads@threads-hang-shared-vm-userptr-rebind-err
-igt@xe_exec_threads@threads-shared-vm-rebind-err
-igt@xe_exec_threads@threads-shared-vm-userptr-rebind-err
-igt@xe_mmio@mmio-invalid
-igt@xe_mmio@mmio-timestamp
-igt@xe_query@query-gts
-igt@xe_vm@vm-async-ops-err
-igt@xe_vm@vm-async-ops-err-destroy
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/index.html
* [igt-dev] ✗ Fi.CI.IGT: failure for uAPI Alignment - take 1 (rev3)
2023-09-28 11:05 [igt-dev] [PATCH v4 00/14] uAPI Alignment - take 1 v4 Francois Dugast
` (15 preceding siblings ...)
2023-09-28 12:15 ` [igt-dev] ✓ Fi.CI.BAT: success " Patchwork
@ 2023-09-28 23:36 ` Patchwork
16 siblings, 0 replies; 31+ messages in thread
From: Patchwork @ 2023-09-28 23:36 UTC (permalink / raw)
To: Rodrigo Vivi; +Cc: igt-dev
== Series Details ==
Series: uAPI Alignment - take 1 (rev3)
URL : https://patchwork.freedesktop.org/series/123916/
State : failure
== Summary ==
CI Bug Log - changes from CI_DRM_13689_full -> IGTPW_9890_full
====================================================
Summary
-------
**FAILURE**
Serious unknown changes introduced with IGTPW_9890_full need to be
verified manually.
If you think the reported changes are unrelated to the changes
introduced in IGTPW_9890_full, please notify your bug team (lgci.bug.filing@intel.com) so they can
document this new failure mode, which will reduce false positives in CI.
External URL: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/index.html
Participating hosts (10 -> 9)
------------------------------
Missing (1): shard-rkl0
Possible new issues
-------------------
Here are the unknown changes that may have been introduced in IGTPW_9890_full:
### IGT changes ###
#### Possible regressions ####
* igt@gem_exec_schedule@wide@rcs0:
- shard-tglu: [PASS][1] -> [INCOMPLETE][2]
[1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/shard-tglu-5/igt@gem_exec_schedule@wide@rcs0.html
[2]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-tglu-4/igt@gem_exec_schedule@wide@rcs0.html
* {igt@kms_plane_scaling@plane-downscale-factor-0-25-with-pixel-format@pipe-a-hdmi-a-3} (NEW):
- shard-dg2: NOTRUN -> [SKIP][3] +3 other tests skip
[3]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-1/igt@kms_plane_scaling@plane-downscale-factor-0-25-with-pixel-format@pipe-a-hdmi-a-3.html
* igt@kms_vblank@pipe-d-ts-continuation-dpms-suspend:
- shard-dg2: NOTRUN -> [INCOMPLETE][4]
[4]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-5/igt@kms_vblank@pipe-d-ts-continuation-dpms-suspend.html
#### Suppressed ####
The following results come from untrusted machines, tests, or statuses.
They do not affect the overall result.
* {igt@kms_content_protection@mei-interface}:
- shard-dg1: [SKIP][5] ([i915#9424]) -> [SKIP][6]
[5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/shard-dg1-19/igt@kms_content_protection@mei-interface.html
[6]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg1-12/igt@kms_content_protection@mei-interface.html
New tests
---------
New tests have been introduced between CI_DRM_13689_full and IGTPW_9890_full:
### New IGT tests (12) ###
* igt@kms_lease@setcrtc-implicit-plane@pipe-a-hdmi-a-3:
- Statuses : 1 pass(s)
- Exec time: [0.0] s
* igt@kms_lease@setcrtc-implicit-plane@pipe-b-hdmi-a-3:
- Statuses : 1 pass(s)
- Exec time: [0.0] s
* igt@kms_lease@setcrtc-implicit-plane@pipe-c-hdmi-a-3:
- Statuses : 1 pass(s)
- Exec time: [0.0] s
* igt@kms_lease@setcrtc-implicit-plane@pipe-d-hdmi-a-3:
- Statuses : 1 pass(s)
- Exec time: [0.0] s
* igt@kms_plane_scaling@plane-downscale-factor-0-25-with-pixel-format@pipe-a-hdmi-a-3:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_plane_scaling@plane-downscale-factor-0-25-with-pixel-format@pipe-b-hdmi-a-3:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_plane_scaling@plane-downscale-factor-0-25-with-pixel-format@pipe-c-hdmi-a-3:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_plane_scaling@plane-downscale-factor-0-25-with-pixel-format@pipe-d-hdmi-a-3:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@kms_plane_scaling@plane-upscale-20x20-with-pixel-format@pipe-a-hdmi-a-3:
- Statuses : 1 pass(s)
- Exec time: [0.0] s
* igt@kms_plane_scaling@plane-upscale-20x20-with-pixel-format@pipe-b-hdmi-a-3:
- Statuses : 1 pass(s)
- Exec time: [0.0] s
* igt@kms_plane_scaling@plane-upscale-20x20-with-pixel-format@pipe-c-hdmi-a-3:
- Statuses : 1 pass(s)
- Exec time: [0.0] s
* igt@kms_plane_scaling@plane-upscale-20x20-with-pixel-format@pipe-d-hdmi-a-3:
- Statuses : 1 pass(s)
- Exec time: [0.0] s
Known issues
------------
Here are the changes found in IGTPW_9890_full that come from known issues:
### IGT changes ###
#### Issues hit ####
* igt@api_intel_bb@object-reloc-keep-cache:
- shard-dg2: NOTRUN -> [SKIP][7] ([i915#8411])
[7]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-1/igt@api_intel_bb@object-reloc-keep-cache.html
- shard-mtlp: NOTRUN -> [SKIP][8] ([i915#8411])
[8]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-6/igt@api_intel_bb@object-reloc-keep-cache.html
* igt@api_intel_bb@render-ccs:
- shard-dg2: NOTRUN -> [FAIL][9] ([i915#6122])
[9]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-5/igt@api_intel_bb@render-ccs.html
* igt@drm_fdinfo@busy-check-all@bcs0:
- shard-dg1: NOTRUN -> [SKIP][10] ([i915#8414]) +4 other tests skip
[10]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg1-16/igt@drm_fdinfo@busy-check-all@bcs0.html
* igt@drm_fdinfo@busy-check-all@vecs1:
- shard-dg2: NOTRUN -> [SKIP][11] ([i915#8414]) +13 other tests skip
[11]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-2/igt@drm_fdinfo@busy-check-all@vecs1.html
* igt@drm_fdinfo@isolation@rcs0:
- shard-mtlp: NOTRUN -> [SKIP][12] ([i915#8414]) +21 other tests skip
[12]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-3/igt@drm_fdinfo@isolation@rcs0.html
* igt@gem_caching@read-writes:
- shard-mtlp: NOTRUN -> [SKIP][13] ([i915#4873])
[13]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-1/igt@gem_caching@read-writes.html
* igt@gem_ccs@suspend-resume@tile4-compressed-compfmt0-smem-lmem0:
- shard-dg2: NOTRUN -> [INCOMPLETE][14] ([i915#7297])
[14]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-6/igt@gem_ccs@suspend-resume@tile4-compressed-compfmt0-smem-lmem0.html
* igt@gem_close_race@multigpu-basic-threads:
- shard-dg2: NOTRUN -> [SKIP][15] ([i915#7697])
[15]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-3/igt@gem_close_race@multigpu-basic-threads.html
* igt@gem_create@create-ext-cpu-access-big:
- shard-dg2: [PASS][16] -> [ABORT][17] ([i915#7461])
[16]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/shard-dg2-2/igt@gem_create@create-ext-cpu-access-big.html
[17]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-3/igt@gem_create@create-ext-cpu-access-big.html
* igt@gem_ctx_exec@basic-nohangcheck:
- shard-rkl: [PASS][18] -> [FAIL][19] ([i915#6268])
[18]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/shard-rkl-6/igt@gem_ctx_exec@basic-nohangcheck.html
[19]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-4/igt@gem_ctx_exec@basic-nohangcheck.html
* igt@gem_ctx_persistence@engines-hang@vcs0:
- shard-mtlp: [PASS][20] -> [FAIL][21] ([i915#2410])
[20]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/shard-mtlp-2/igt@gem_ctx_persistence@engines-hang@vcs0.html
[21]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-7/igt@gem_ctx_persistence@engines-hang@vcs0.html
* igt@gem_ctx_persistence@hang:
- shard-mtlp: NOTRUN -> [SKIP][22] ([i915#8555]) +1 other test skip
[22]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-5/igt@gem_ctx_persistence@hang.html
- shard-dg2: NOTRUN -> [SKIP][23] ([i915#8555])
[23]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-7/igt@gem_ctx_persistence@hang.html
* igt@gem_ctx_persistence@process:
- shard-snb: NOTRUN -> [SKIP][24] ([fdo#109271] / [i915#1099])
[24]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-snb7/igt@gem_ctx_persistence@process.html
* igt@gem_ctx_sseu@invalid-args:
- shard-rkl: NOTRUN -> [SKIP][25] ([i915#280])
[25]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-6/igt@gem_ctx_sseu@invalid-args.html
- shard-dg1: NOTRUN -> [SKIP][26] ([i915#280])
[26]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg1-15/igt@gem_ctx_sseu@invalid-args.html
* igt@gem_ctx_sseu@mmap-args:
- shard-dg2: NOTRUN -> [SKIP][27] ([i915#280]) +1 other test skip
[27]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-1/igt@gem_ctx_sseu@mmap-args.html
* igt@gem_eio@unwedge-stress:
- shard-dg1: [PASS][28] -> [FAIL][29] ([i915#5784]) +1 other test fail
[28]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/shard-dg1-19/igt@gem_eio@unwedge-stress.html
[29]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg1-15/igt@gem_eio@unwedge-stress.html
* igt@gem_exec_balancer@bonded-pair:
- shard-dg2: NOTRUN -> [SKIP][30] ([i915#4771])
[30]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-3/igt@gem_exec_balancer@bonded-pair.html
* igt@gem_exec_balancer@parallel-ordering:
- shard-rkl: NOTRUN -> [SKIP][31] ([i915#4525])
[31]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-6/igt@gem_exec_balancer@parallel-ordering.html
* igt@gem_exec_fair@basic-none-rrul@rcs0:
- shard-tglu: NOTRUN -> [FAIL][32] ([i915#2842])
[32]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-tglu-9/igt@gem_exec_fair@basic-none-rrul@rcs0.html
* igt@gem_exec_fair@basic-none-vip:
- shard-mtlp: NOTRUN -> [SKIP][33] ([i915#4473] / [i915#4771]) +1 other test skip
[33]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-3/igt@gem_exec_fair@basic-none-vip.html
* igt@gem_exec_fence@submit:
- shard-dg1: NOTRUN -> [SKIP][34] ([i915#4812])
[34]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg1-16/igt@gem_exec_fence@submit.html
- shard-mtlp: NOTRUN -> [SKIP][35] ([i915#4812])
[35]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-8/igt@gem_exec_fence@submit.html
* igt@gem_exec_fence@submit3:
- shard-dg2: NOTRUN -> [SKIP][36] ([i915#4812]) +1 other test skip
[36]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-3/igt@gem_exec_fence@submit3.html
* igt@gem_exec_flush@basic-uc-pro-default:
- shard-dg2: NOTRUN -> [SKIP][37] ([i915#3539] / [i915#4852]) +5 other tests skip
[37]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-7/igt@gem_exec_flush@basic-uc-pro-default.html
* igt@gem_exec_params@rsvd2-dirt:
- shard-mtlp: NOTRUN -> [SKIP][38] ([i915#5107])
[38]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-2/igt@gem_exec_params@rsvd2-dirt.html
* igt@gem_exec_params@secure-non-root:
- shard-dg2: NOTRUN -> [SKIP][39] ([fdo#112283])
[39]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-11/igt@gem_exec_params@secure-non-root.html
* igt@gem_exec_reloc@basic-cpu-wc-noreloc:
- shard-mtlp: NOTRUN -> [SKIP][40] ([i915#3281]) +15 other tests skip
[40]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-3/igt@gem_exec_reloc@basic-cpu-wc-noreloc.html
* igt@gem_exec_reloc@basic-write-read-noreloc:
- shard-rkl: NOTRUN -> [SKIP][41] ([i915#3281]) +7 other tests skip
[41]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-2/igt@gem_exec_reloc@basic-write-read-noreloc.html
* igt@gem_exec_reloc@basic-write-wc-active:
- shard-dg1: NOTRUN -> [SKIP][42] ([i915#3281]) +2 other tests skip
[42]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg1-12/igt@gem_exec_reloc@basic-write-wc-active.html
* igt@gem_exec_schedule@preempt-queue-contexts-chain:
- shard-dg2: NOTRUN -> [SKIP][43] ([i915#4537] / [i915#4812]) +1 other test skip
[43]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-5/igt@gem_exec_schedule@preempt-queue-contexts-chain.html
* igt@gem_exec_schedule@semaphore-power:
- shard-mtlp: NOTRUN -> [SKIP][44] ([i915#4537] / [i915#4812]) +1 other test skip
[44]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-7/igt@gem_exec_schedule@semaphore-power.html
* igt@gem_exec_suspend@basic-s4-devices@lmem0:
- shard-dg2: NOTRUN -> [ABORT][45] ([i915#7975] / [i915#8213])
[45]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-10/igt@gem_exec_suspend@basic-s4-devices@lmem0.html
* igt@gem_fence_thrash@bo-write-verify-threaded-none:
- shard-dg2: NOTRUN -> [SKIP][46] ([i915#4860])
[46]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-1/igt@gem_fence_thrash@bo-write-verify-threaded-none.html
* igt@gem_fenced_exec_thrash@no-spare-fences-interruptible:
- shard-mtlp: NOTRUN -> [SKIP][47] ([i915#4860]) +3 other tests skip
[47]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-1/igt@gem_fenced_exec_thrash@no-spare-fences-interruptible.html
* igt@gem_huc_copy@huc-copy:
- shard-rkl: NOTRUN -> [SKIP][48] ([i915#2190])
[48]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-2/igt@gem_huc_copy@huc-copy.html
* igt@gem_lmem_swapping@heavy-verify-multi:
- shard-mtlp: NOTRUN -> [SKIP][49] ([i915#4613]) +4 other tests skip
[49]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-1/igt@gem_lmem_swapping@heavy-verify-multi.html
* igt@gem_lmem_swapping@heavy-verify-random:
- shard-apl: NOTRUN -> [SKIP][50] ([fdo#109271] / [i915#4613])
[50]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-apl6/igt@gem_lmem_swapping@heavy-verify-random.html
* igt@gem_lmem_swapping@parallel-random-engines:
- shard-rkl: NOTRUN -> [SKIP][51] ([i915#4613])
[51]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-6/igt@gem_lmem_swapping@parallel-random-engines.html
* igt@gem_media_fill@media-fill:
- shard-mtlp: NOTRUN -> [SKIP][52] ([i915#8289])
[52]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-6/igt@gem_media_fill@media-fill.html
* igt@gem_mmap@bad-object:
- shard-mtlp: NOTRUN -> [SKIP][53] ([i915#4083]) +3 other tests skip
[53]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-7/igt@gem_mmap@bad-object.html
* igt@gem_mmap_gtt@cpuset-medium-copy:
- shard-mtlp: NOTRUN -> [SKIP][54] ([i915#4077]) +13 other tests skip
[54]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-7/igt@gem_mmap_gtt@cpuset-medium-copy.html
- shard-dg1: NOTRUN -> [SKIP][55] ([i915#4077])
[55]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg1-18/igt@gem_mmap_gtt@cpuset-medium-copy.html
* igt@gem_mmap_gtt@zero-extend:
- shard-dg2: NOTRUN -> [SKIP][56] ([i915#4077]) +18 other tests skip
[56]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-2/igt@gem_mmap_gtt@zero-extend.html
* igt@gem_mmap_wc@write-wc-read-gtt:
- shard-dg2: NOTRUN -> [SKIP][57] ([i915#4083]) +5 other tests skip
[57]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-2/igt@gem_mmap_wc@write-wc-read-gtt.html
* igt@gem_partial_pwrite_pread@reads:
- shard-dg2: NOTRUN -> [SKIP][58] ([i915#3282]) +10 other tests skip
[58]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-3/igt@gem_partial_pwrite_pread@reads.html
* igt@gem_partial_pwrite_pread@reads-display:
- shard-mtlp: NOTRUN -> [SKIP][59] ([i915#3282]) +2 other tests skip
[59]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-5/igt@gem_partial_pwrite_pread@reads-display.html
* igt@gem_partial_pwrite_pread@reads-uncached:
- shard-rkl: NOTRUN -> [SKIP][60] ([i915#3282]) +1 other test skip
[60]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-1/igt@gem_partial_pwrite_pread@reads-uncached.html
* igt@gem_pxp@display-protected-crc:
- shard-mtlp: NOTRUN -> [SKIP][61] ([i915#4270]) +3 other tests skip
[61]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-6/igt@gem_pxp@display-protected-crc.html
* igt@gem_pxp@regular-baseline-src-copy-readible:
- shard-dg2: NOTRUN -> [SKIP][62] ([i915#4270]) +3 other tests skip
[62]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-1/igt@gem_pxp@regular-baseline-src-copy-readible.html
* igt@gem_render_copy@y-tiled-ccs-to-yf-tiled:
- shard-mtlp: NOTRUN -> [SKIP][63] ([i915#8428]) +9 other tests skip
[63]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-8/igt@gem_render_copy@y-tiled-ccs-to-yf-tiled.html
* igt@gem_set_tiling_vs_blt@untiled-to-tiled:
- shard-dg2: NOTRUN -> [SKIP][64] ([i915#4079])
[64]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-2/igt@gem_set_tiling_vs_blt@untiled-to-tiled.html
* igt@gem_set_tiling_vs_gtt:
- shard-mtlp: NOTRUN -> [SKIP][65] ([i915#4079]) +2 other tests skip
[65]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-6/igt@gem_set_tiling_vs_gtt.html
* igt@gem_softpin@evict-snoop:
- shard-dg2: NOTRUN -> [SKIP][66] ([i915#4885])
[66]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-2/igt@gem_softpin@evict-snoop.html
* igt@gem_unfence_active_buffers:
- shard-mtlp: NOTRUN -> [SKIP][67] ([i915#4879])
[67]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-2/igt@gem_unfence_active_buffers.html
* igt@gem_userptr_blits@map-fixed-invalidate-overlap-busy:
- shard-dg2: NOTRUN -> [SKIP][68] ([i915#3297] / [i915#4880])
[68]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-7/igt@gem_userptr_blits@map-fixed-invalidate-overlap-busy.html
* igt@gem_userptr_blits@readonly-pwrite-unsync:
- shard-dg2: NOTRUN -> [SKIP][69] ([i915#3297]) +2 other tests skip
[69]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-1/igt@gem_userptr_blits@readonly-pwrite-unsync.html
- shard-mtlp: NOTRUN -> [SKIP][70] ([i915#3297]) +3 other tests skip
[70]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-7/igt@gem_userptr_blits@readonly-pwrite-unsync.html
* igt@gem_userptr_blits@relocations:
- shard-dg2: NOTRUN -> [SKIP][71] ([i915#3281]) +13 other tests skip
[71]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-3/igt@gem_userptr_blits@relocations.html
* igt@gem_workarounds@suspend-resume-fd:
- shard-snb: NOTRUN -> [DMESG-WARN][72] ([i915#8841]) +6 other tests dmesg-warn
[72]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-snb4/igt@gem_workarounds@suspend-resume-fd.html
* igt@gen3_render_tiledy_blits:
- shard-rkl: NOTRUN -> [SKIP][73] ([fdo#109289]) +1 other test skip
[73]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-7/igt@gen3_render_tiledy_blits.html
- shard-dg1: NOTRUN -> [SKIP][74] ([fdo#109289]) +1 other test skip
[74]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg1-17/igt@gen3_render_tiledy_blits.html
* igt@gen7_exec_parse@basic-rejected:
- shard-dg2: NOTRUN -> [SKIP][75] ([fdo#109289]) +6 other tests skip
[75]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-2/igt@gen7_exec_parse@basic-rejected.html
* igt@gen9_exec_parse@bb-oversize:
- shard-rkl: NOTRUN -> [SKIP][76] ([i915#2527]) +3 other tests skip
[76]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-4/igt@gen9_exec_parse@bb-oversize.html
* igt@gen9_exec_parse@bb-start-far:
- shard-dg2: NOTRUN -> [SKIP][77] ([i915#2856]) +3 other tests skip
[77]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-2/igt@gen9_exec_parse@bb-start-far.html
* igt@gen9_exec_parse@cmd-crossing-page:
- shard-mtlp: NOTRUN -> [SKIP][78] ([i915#2856]) +5 other tests skip
[78]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-2/igt@gen9_exec_parse@cmd-crossing-page.html
- shard-dg1: NOTRUN -> [SKIP][79] ([i915#2527])
[79]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg1-14/igt@gen9_exec_parse@cmd-crossing-page.html
* igt@i915_hangman@gt-engine-error@vcs0:
- shard-mtlp: [PASS][80] -> [FAIL][81] ([i915#7069])
[80]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/shard-mtlp-2/igt@i915_hangman@gt-engine-error@vcs0.html
[81]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-1/igt@i915_hangman@gt-engine-error@vcs0.html
* igt@i915_module_load@load:
- shard-apl: NOTRUN -> [SKIP][82] ([fdo#109271] / [i915#6227])
[82]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-apl6/igt@i915_module_load@load.html
- shard-dg2: NOTRUN -> [SKIP][83] ([i915#6227])
[83]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-5/igt@i915_module_load@load.html
* igt@i915_pm_freq_mult@media-freq@gt1:
- shard-mtlp: NOTRUN -> [SKIP][84] ([i915#6590]) +1 other test skip
[84]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-3/igt@i915_pm_freq_mult@media-freq@gt1.html
* igt@i915_pm_rc6_residency@rc6-idle@bcs0:
- shard-dg1: [PASS][85] -> [FAIL][86] ([i915#3591])
[85]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/shard-dg1-19/igt@i915_pm_rc6_residency@rc6-idle@bcs0.html
[86]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg1-14/igt@i915_pm_rc6_residency@rc6-idle@bcs0.html
* igt@i915_pm_rpm@dpms-mode-unset-lpsp:
- shard-dg2: [PASS][87] -> [SKIP][88] ([i915#1397])
[87]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/shard-dg2-10/igt@i915_pm_rpm@dpms-mode-unset-lpsp.html
[88]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-2/igt@i915_pm_rpm@dpms-mode-unset-lpsp.html
* igt@i915_pm_rpm@gem-execbuf-stress-pc8:
- shard-rkl: NOTRUN -> [SKIP][89] ([fdo#109506])
[89]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-4/igt@i915_pm_rpm@gem-execbuf-stress-pc8.html
* igt@i915_pm_rpm@gem-mmap-type@gtt-smem0:
- shard-mtlp: NOTRUN -> [SKIP][90] ([i915#8431])
[90]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-2/igt@i915_pm_rpm@gem-mmap-type@gtt-smem0.html
* igt@i915_pm_rpm@modeset-lpsp:
- shard-dg1: [PASS][91] -> [SKIP][92] ([i915#1397])
[91]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/shard-dg1-19/igt@i915_pm_rpm@modeset-lpsp.html
[92]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg1-12/igt@i915_pm_rpm@modeset-lpsp.html
* igt@i915_pm_rpm@modeset-lpsp-stress-no-wait:
- shard-rkl: NOTRUN -> [SKIP][93] ([i915#1397])
[93]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-2/igt@i915_pm_rpm@modeset-lpsp-stress-no-wait.html
* igt@i915_pm_rpm@modeset-non-lpsp-stress:
- shard-rkl: [PASS][94] -> [SKIP][95] ([i915#1397]) +1 other test skip
[94]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/shard-rkl-6/igt@i915_pm_rpm@modeset-non-lpsp-stress.html
[95]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-7/igt@i915_pm_rpm@modeset-non-lpsp-stress.html
- shard-mtlp: NOTRUN -> [SKIP][96] ([i915#1397]) +1 other test skip
[96]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-7/igt@i915_pm_rpm@modeset-non-lpsp-stress.html
* igt@i915_pm_rps@min-max-config-idle:
- shard-dg2: NOTRUN -> [SKIP][97] ([i915#6621])
[97]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-3/igt@i915_pm_rps@min-max-config-idle.html
* igt@i915_pm_rps@thresholds-idle@gt0:
- shard-dg2: NOTRUN -> [SKIP][98] ([i915#8925])
[98]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-6/igt@i915_pm_rps@thresholds-idle@gt0.html
* igt@i915_pm_rps@thresholds-idle@gt1:
- shard-mtlp: NOTRUN -> [SKIP][99] ([i915#8925]) +3 other tests skip
[99]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-2/igt@i915_pm_rps@thresholds-idle@gt1.html
* igt@i915_query@query-topology-unsupported:
- shard-dg2: NOTRUN -> [SKIP][100] ([fdo#109302])
[100]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-2/igt@i915_query@query-topology-unsupported.html
* igt@i915_query@test-query-geometry-subslices:
- shard-rkl: NOTRUN -> [SKIP][101] ([i915#5723])
[101]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-4/igt@i915_query@test-query-geometry-subslices.html
- shard-dg1: NOTRUN -> [SKIP][102] ([i915#5723])
[102]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg1-15/igt@i915_query@test-query-geometry-subslices.html
* igt@i915_selftest@mock@memory_region:
- shard-rkl: NOTRUN -> [DMESG-WARN][103] ([i915#9311])
[103]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-7/igt@i915_selftest@mock@memory_region.html
- shard-snb: NOTRUN -> [DMESG-WARN][104] ([i915#9311])
[104]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-snb4/igt@i915_selftest@mock@memory_region.html
* igt@kms_addfb_basic@addfb25-framebuffer-vs-set-tiling:
- shard-mtlp: NOTRUN -> [SKIP][105] ([i915#4212]) +2 other tests skip
[105]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-2/igt@kms_addfb_basic@addfb25-framebuffer-vs-set-tiling.html
* igt@kms_addfb_basic@addfb25-y-tiled-small-legacy:
- shard-mtlp: NOTRUN -> [SKIP][106] ([i915#5190])
[106]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-1/igt@kms_addfb_basic@addfb25-y-tiled-small-legacy.html
* igt@kms_addfb_basic@framebuffer-vs-set-tiling:
- shard-dg2: NOTRUN -> [SKIP][107] ([i915#4212]) +2 other tests skip
[107]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-6/igt@kms_addfb_basic@framebuffer-vs-set-tiling.html
- shard-dg1: NOTRUN -> [SKIP][108] ([i915#4212])
[108]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg1-19/igt@kms_addfb_basic@framebuffer-vs-set-tiling.html
* igt@kms_async_flips@async-flip-with-page-flip-events@pipe-b-hdmi-a-2-4-rc_ccs-cc:
- shard-dg2: NOTRUN -> [SKIP][109] ([i915#8709]) +11 other tests skip
[109]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-2/igt@kms_async_flips@async-flip-with-page-flip-events@pipe-b-hdmi-a-2-4-rc_ccs-cc.html
* igt@kms_async_flips@crc@pipe-d-hdmi-a-4:
- shard-dg1: NOTRUN -> [FAIL][110] ([i915#8247]) +3 other tests fail
[110]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg1-15/igt@kms_async_flips@crc@pipe-d-hdmi-a-4.html
* igt@kms_atomic_transition@plane-all-modeset-transition-internal-panels:
- shard-dg2: NOTRUN -> [SKIP][111] ([i915#1769] / [i915#3555])
[111]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-11/igt@kms_atomic_transition@plane-all-modeset-transition-internal-panels.html
* igt@kms_big_fb@4-tiled-16bpp-rotate-0:
- shard-rkl: NOTRUN -> [SKIP][112] ([i915#5286]) +2 other tests skip
[112]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-7/igt@kms_big_fb@4-tiled-16bpp-rotate-0.html
* igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-0-hflip:
- shard-dg1: NOTRUN -> [SKIP][113] ([i915#4538] / [i915#5286])
[113]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg1-15/igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-0-hflip.html
* igt@kms_big_fb@linear-32bpp-rotate-90:
- shard-rkl: NOTRUN -> [SKIP][114] ([fdo#111614] / [i915#3638])
[114]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-2/igt@kms_big_fb@linear-32bpp-rotate-90.html
* igt@kms_big_fb@linear-64bpp-rotate-90:
- shard-tglu: NOTRUN -> [SKIP][115] ([fdo#111614])
[115]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-tglu-8/igt@kms_big_fb@linear-64bpp-rotate-90.html
* igt@kms_big_fb@x-tiled-16bpp-rotate-90:
- shard-dg2: NOTRUN -> [SKIP][116] ([fdo#111614]) +6 other tests skip
[116]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-11/igt@kms_big_fb@x-tiled-16bpp-rotate-90.html
* igt@kms_big_fb@x-tiled-8bpp-rotate-270:
- shard-mtlp: NOTRUN -> [SKIP][117] ([fdo#111614]) +2 other tests skip
[117]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-7/igt@kms_big_fb@x-tiled-8bpp-rotate-270.html
* igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-180-hflip-async-flip:
- shard-tglu: [PASS][118] -> [FAIL][119] ([i915#3743])
[118]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/shard-tglu-4/igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-180-hflip-async-flip.html
[119]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-tglu-8/igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-180-hflip-async-flip.html
* igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-0-async-flip:
- shard-dg2: NOTRUN -> [SKIP][120] ([i915#5190]) +21 other tests skip
[120]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-5/igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-0-async-flip.html
* igt@kms_big_fb@yf-tiled-16bpp-rotate-0:
- shard-dg1: NOTRUN -> [SKIP][121] ([i915#4538])
[121]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg1-14/igt@kms_big_fb@yf-tiled-16bpp-rotate-0.html
* igt@kms_big_fb@yf-tiled-8bpp-rotate-90:
- shard-dg2: NOTRUN -> [SKIP][122] ([i915#4538] / [i915#5190]) +8 other tests skip
[122]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-11/igt@kms_big_fb@yf-tiled-8bpp-rotate-90.html
* igt@kms_big_fb@yf-tiled-addfb-size-offset-overflow:
- shard-rkl: NOTRUN -> [SKIP][123] ([fdo#111615])
[123]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-1/igt@kms_big_fb@yf-tiled-addfb-size-offset-overflow.html
* igt@kms_big_fb@yf-tiled-max-hw-stride-32bpp-rotate-0:
- shard-mtlp: NOTRUN -> [SKIP][124] ([fdo#111615]) +9 other tests skip
[124]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-1/igt@kms_big_fb@yf-tiled-max-hw-stride-32bpp-rotate-0.html
* igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-180-hflip:
- shard-rkl: NOTRUN -> [SKIP][125] ([fdo#110723]) +3 other tests skip
[125]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-4/igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-180-hflip.html
* igt@kms_big_joiner@invalid-modeset:
- shard-dg2: NOTRUN -> [SKIP][126] ([i915#2705]) +1 other test skip
[126]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-7/igt@kms_big_joiner@invalid-modeset.html
* igt@kms_ccs@pipe-a-bad-pixel-format-y_tiled_gen12_rc_ccs_cc:
- shard-mtlp: NOTRUN -> [SKIP][127] ([i915#3886] / [i915#5354] / [i915#6095]) +11 other tests skip
[127]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-5/igt@kms_ccs@pipe-a-bad-pixel-format-y_tiled_gen12_rc_ccs_cc.html
* igt@kms_ccs@pipe-a-crc-primary-basic-yf_tiled_ccs:
- shard-dg2: NOTRUN -> [SKIP][128] ([i915#3689] / [i915#5354]) +28 other tests skip
[128]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-11/igt@kms_ccs@pipe-a-crc-primary-basic-yf_tiled_ccs.html
- shard-tglu: NOTRUN -> [SKIP][129] ([fdo#111615] / [i915#3689] / [i915#5354] / [i915#6095])
[129]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-tglu-8/igt@kms_ccs@pipe-a-crc-primary-basic-yf_tiled_ccs.html
* igt@kms_ccs@pipe-a-random-ccs-data-y_tiled_gen12_rc_ccs:
- shard-mtlp: NOTRUN -> [SKIP][130] ([i915#5354] / [i915#6095]) +40 other tests skip
[130]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-8/igt@kms_ccs@pipe-a-random-ccs-data-y_tiled_gen12_rc_ccs.html
* igt@kms_ccs@pipe-b-bad-aux-stride-yf_tiled_ccs:
- shard-rkl: NOTRUN -> [SKIP][131] ([i915#3734] / [i915#5354] / [i915#6095]) +4 other tests skip
[131]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-2/igt@kms_ccs@pipe-b-bad-aux-stride-yf_tiled_ccs.html
* igt@kms_ccs@pipe-b-ccs-on-another-bo-y_tiled_gen12_mc_ccs:
- shard-rkl: NOTRUN -> [SKIP][132] ([i915#3886] / [i915#5354] / [i915#6095])
[132]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-6/igt@kms_ccs@pipe-b-ccs-on-another-bo-y_tiled_gen12_mc_ccs.html
* igt@kms_ccs@pipe-b-crc-sprite-planes-basic-4_tiled_dg2_mc_ccs:
- shard-rkl: NOTRUN -> [SKIP][133] ([i915#5354] / [i915#6095]) +6 other tests skip
[133]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-4/igt@kms_ccs@pipe-b-crc-sprite-planes-basic-4_tiled_dg2_mc_ccs.html
* igt@kms_ccs@pipe-b-missing-ccs-buffer-y_tiled_gen12_rc_ccs_cc:
- shard-apl: NOTRUN -> [SKIP][134] ([fdo#109271] / [i915#3886]) +2 other tests skip
[134]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-apl4/igt@kms_ccs@pipe-b-missing-ccs-buffer-y_tiled_gen12_rc_ccs_cc.html
* igt@kms_ccs@pipe-c-crc-primary-rotation-180-y_tiled_gen12_mc_ccs:
- shard-dg2: NOTRUN -> [SKIP][135] ([i915#3689] / [i915#3886] / [i915#5354]) +13 other tests skip
[135]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-3/igt@kms_ccs@pipe-c-crc-primary-rotation-180-y_tiled_gen12_mc_ccs.html
* igt@kms_ccs@pipe-c-crc-sprite-planes-basic-yf_tiled_ccs:
- shard-dg1: NOTRUN -> [SKIP][136] ([i915#3689] / [i915#5354] / [i915#6095]) +4 other tests skip
[136]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg1-16/igt@kms_ccs@pipe-c-crc-sprite-planes-basic-yf_tiled_ccs.html
* igt@kms_ccs@pipe-d-random-ccs-data-4_tiled_dg2_rc_ccs:
- shard-rkl: NOTRUN -> [SKIP][137] ([i915#5354]) +19 other tests skip
[137]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-7/igt@kms_ccs@pipe-d-random-ccs-data-4_tiled_dg2_rc_ccs.html
- shard-dg1: NOTRUN -> [SKIP][138] ([i915#5354] / [i915#6095]) +4 other tests skip
[138]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg1-17/igt@kms_ccs@pipe-d-random-ccs-data-4_tiled_dg2_rc_ccs.html
* igt@kms_cdclk@mode-transition@pipe-d-hdmi-a-3:
- shard-dg2: NOTRUN -> [SKIP][139] ([i915#4087] / [i915#7213]) +4 other tests skip
[139]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-1/igt@kms_cdclk@mode-transition@pipe-d-hdmi-a-3.html
* igt@kms_cdclk@plane-scaling:
- shard-rkl: NOTRUN -> [SKIP][140] ([i915#3742])
[140]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-4/igt@kms_cdclk@plane-scaling.html
* igt@kms_chamelium_color@ctm-limited-range:
- shard-mtlp: NOTRUN -> [SKIP][141] ([fdo#111827])
[141]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-3/igt@kms_chamelium_color@ctm-limited-range.html
* igt@kms_chamelium_color@ctm-negative:
- shard-dg2: NOTRUN -> [SKIP][142] ([fdo#111827]) +2 other tests skip
[142]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-7/igt@kms_chamelium_color@ctm-negative.html
* igt@kms_chamelium_edid@hdmi-edid-change-during-suspend:
- shard-rkl: NOTRUN -> [SKIP][143] ([i915#7828]) +4 other tests skip
[143]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-4/igt@kms_chamelium_edid@hdmi-edid-change-during-suspend.html
* igt@kms_chamelium_edid@hdmi-edid-stress-resolution-non-4k:
- shard-dg2: NOTRUN -> [SKIP][144] ([i915#7828]) +8 other tests skip
[144]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-3/igt@kms_chamelium_edid@hdmi-edid-stress-resolution-non-4k.html
* igt@kms_chamelium_frames@dp-crc-single:
- shard-mtlp: NOTRUN -> [SKIP][145] ([i915#7828]) +6 other tests skip
[145]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-1/igt@kms_chamelium_frames@dp-crc-single.html
* igt@kms_color@deep-color@pipe-b-edp-1-degamma:
- shard-mtlp: NOTRUN -> [FAIL][146] ([i915#6892]) +3 other tests fail
[146]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-8/igt@kms_color@deep-color@pipe-b-edp-1-degamma.html
* igt@kms_content_protection@atomic:
- shard-dg2: NOTRUN -> [SKIP][147] ([i915#7118]) +2 other tests skip
[147]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-7/igt@kms_content_protection@atomic.html
* igt@kms_content_protection@dp-mst-lic-type-1:
- shard-dg2: NOTRUN -> [SKIP][148] ([i915#3299])
[148]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-6/igt@kms_content_protection@dp-mst-lic-type-1.html
- shard-mtlp: NOTRUN -> [SKIP][149] ([i915#3299]) +1 other test skip
[149]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-3/igt@kms_content_protection@dp-mst-lic-type-1.html
* igt@kms_content_protection@legacy:
- shard-mtlp: NOTRUN -> [SKIP][150] ([i915#6944])
[150]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-8/igt@kms_content_protection@legacy.html
* igt@kms_content_protection@srm:
- shard-rkl: NOTRUN -> [SKIP][151] ([i915#7118]) +1 other test skip
[151]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-2/igt@kms_content_protection@srm.html
* igt@kms_content_protection@srm@pipe-a-dp-4:
- shard-dg2: NOTRUN -> [TIMEOUT][152] ([i915#7173])
[152]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-11/igt@kms_content_protection@srm@pipe-a-dp-4.html
* igt@kms_cursor_crc@cursor-offscreen-512x512:
- shard-dg2: NOTRUN -> [SKIP][153] ([i915#3359]) +1 other test skip
[153]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-11/igt@kms_cursor_crc@cursor-offscreen-512x512.html
* igt@kms_cursor_crc@cursor-onscreen-max-size:
- shard-rkl: NOTRUN -> [SKIP][154] ([i915#3555]) +1 other test skip
[154]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-2/igt@kms_cursor_crc@cursor-onscreen-max-size.html
- shard-mtlp: NOTRUN -> [SKIP][155] ([i915#3555] / [i915#8814]) +3 other tests skip
[155]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-5/igt@kms_cursor_crc@cursor-onscreen-max-size.html
* igt@kms_cursor_crc@cursor-random-512x512:
- shard-rkl: NOTRUN -> [SKIP][156] ([i915#3359]) +1 other test skip
[156]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-4/igt@kms_cursor_crc@cursor-random-512x512.html
* igt@kms_cursor_crc@cursor-rapid-movement-32x32:
- shard-dg2: NOTRUN -> [SKIP][157] ([i915#3555]) +9 other tests skip
[157]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-1/igt@kms_cursor_crc@cursor-rapid-movement-32x32.html
- shard-tglu: NOTRUN -> [SKIP][158] ([i915#3555])
[158]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-tglu-6/igt@kms_cursor_crc@cursor-rapid-movement-32x32.html
* igt@kms_cursor_crc@cursor-sliding-512x170:
- shard-mtlp: NOTRUN -> [SKIP][159] ([i915#3359])
[159]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-8/igt@kms_cursor_crc@cursor-sliding-512x170.html
* igt@kms_cursor_legacy@2x-cursor-vs-flip-legacy:
- shard-rkl: NOTRUN -> [SKIP][160] ([fdo#111825]) +4 other tests skip
[160]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-4/igt@kms_cursor_legacy@2x-cursor-vs-flip-legacy.html
* igt@kms_cursor_legacy@basic-busy-flip-before-cursor-varying-size:
- shard-mtlp: NOTRUN -> [SKIP][161] ([i915#4213])
[161]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-1/igt@kms_cursor_legacy@basic-busy-flip-before-cursor-varying-size.html
* igt@kms_cursor_legacy@cursora-vs-flipb-atomic:
- shard-mtlp: NOTRUN -> [SKIP][162] ([i915#3546]) +3 other tests skip
[162]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-1/igt@kms_cursor_legacy@cursora-vs-flipb-atomic.html
* igt@kms_cursor_legacy@cursorb-vs-flipb-toggle:
- shard-dg2: NOTRUN -> [SKIP][163] ([fdo#109274] / [fdo#111767] / [i915#5354]) +1 other test skip
[163]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-11/igt@kms_cursor_legacy@cursorb-vs-flipb-toggle.html
* igt@kms_cursor_legacy@cursorb-vs-flipb-varying-size:
- shard-dg2: NOTRUN -> [SKIP][164] ([fdo#109274] / [i915#5354]) +5 other tests skip
[164]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-2/igt@kms_cursor_legacy@cursorb-vs-flipb-varying-size.html
* igt@kms_cursor_legacy@short-busy-flip-before-cursor-atomic-transitions-varying-size:
- shard-dg2: NOTRUN -> [SKIP][165] ([i915#4103] / [i915#4213]) +1 other test skip
[165]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-2/igt@kms_cursor_legacy@short-busy-flip-before-cursor-atomic-transitions-varying-size.html
* igt@kms_dirtyfb@dirtyfb-ioctl@fbc-hdmi-a-4:
- shard-dg1: NOTRUN -> [SKIP][166] ([i915#9227])
[166]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg1-15/igt@kms_dirtyfb@dirtyfb-ioctl@fbc-hdmi-a-4.html
* igt@kms_dirtyfb@dirtyfb-ioctl@psr-hdmi-a-4:
- shard-dg1: NOTRUN -> [SKIP][167] ([i915#9226] / [i915#9261]) +1 other test skip
[167]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg1-15/igt@kms_dirtyfb@dirtyfb-ioctl@psr-hdmi-a-4.html
* igt@kms_display_modes@mst-extended-mode-negative:
- shard-rkl: NOTRUN -> [SKIP][168] ([i915#8588])
[168]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-2/igt@kms_display_modes@mst-extended-mode-negative.html
* igt@kms_draw_crc@draw-method-mmap-gtt:
- shard-mtlp: NOTRUN -> [SKIP][169] ([i915#8812])
[169]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-2/igt@kms_draw_crc@draw-method-mmap-gtt.html
* igt@kms_dsc@dsc-basic:
- shard-dg2: NOTRUN -> [SKIP][170] ([i915#3555] / [i915#3840]) +1 other test skip
[170]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-3/igt@kms_dsc@dsc-basic.html
* igt@kms_dsc@dsc-with-bpc:
- shard-rkl: NOTRUN -> [SKIP][171] ([i915#3555] / [i915#3840])
[171]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-1/igt@kms_dsc@dsc-with-bpc.html
- shard-dg1: NOTRUN -> [SKIP][172] ([i915#3555] / [i915#3840])
[172]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg1-14/igt@kms_dsc@dsc-with-bpc.html
* igt@kms_dsc@dsc-with-output-formats:
- shard-mtlp: NOTRUN -> [SKIP][173] ([i915#3555] / [i915#3840]) +1 other test skip
[173]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-1/igt@kms_dsc@dsc-with-output-formats.html
* igt@kms_flip@2x-flip-vs-blocking-wf-vblank:
- shard-apl: NOTRUN -> [SKIP][174] ([fdo#109271] / [fdo#111767]) +1 other test skip
[174]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-apl6/igt@kms_flip@2x-flip-vs-blocking-wf-vblank.html
- shard-snb: NOTRUN -> [SKIP][175] ([fdo#109271] / [fdo#111767]) +1 other test skip
[175]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-snb6/igt@kms_flip@2x-flip-vs-blocking-wf-vblank.html
* igt@kms_flip@2x-flip-vs-expired-vblank:
- shard-mtlp: NOTRUN -> [SKIP][176] ([i915#3637]) +5 other tests skip
[176]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-3/igt@kms_flip@2x-flip-vs-expired-vblank.html
* igt@kms_flip@2x-flip-vs-rmfb-interruptible:
- shard-rkl: NOTRUN -> [SKIP][177] ([fdo#111767] / [fdo#111825])
[177]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-4/igt@kms_flip@2x-flip-vs-rmfb-interruptible.html
* igt@kms_flip@2x-flip-vs-wf_vblank-interruptible:
- shard-dg2: NOTRUN -> [SKIP][178] ([fdo#109274]) +9 other tests skip
[178]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-5/igt@kms_flip@2x-flip-vs-wf_vblank-interruptible.html
* igt@kms_flip@flip-vs-suspend@b-edp1:
- shard-mtlp: NOTRUN -> [DMESG-WARN][179] ([i915#9262]) +2 other tests dmesg-warn
[179]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-4/igt@kms_flip@flip-vs-suspend@b-edp1.html
* igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytileccs-upscaling@pipe-a-default-mode:
- shard-mtlp: NOTRUN -> [SKIP][180] ([i915#2672])
[180]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-5/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytileccs-upscaling@pipe-a-default-mode.html
* igt@kms_flip_scaled_crc@flip-64bpp-4tile-to-16bpp-4tile-downscaling@pipe-a-default-mode:
- shard-mtlp: NOTRUN -> [SKIP][181] ([i915#8810])
[181]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-4/igt@kms_flip_scaled_crc@flip-64bpp-4tile-to-16bpp-4tile-downscaling@pipe-a-default-mode.html
* igt@kms_flip_scaled_crc@flip-64bpp-4tile-to-32bpp-4tile-downscaling@pipe-a-valid-mode:
- shard-rkl: NOTRUN -> [SKIP][182] ([i915#2672])
[182]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-4/igt@kms_flip_scaled_crc@flip-64bpp-4tile-to-32bpp-4tile-downscaling@pipe-a-valid-mode.html
* igt@kms_flip_scaled_crc@flip-64bpp-xtile-to-32bpp-xtile-downscaling@pipe-a-default-mode:
- shard-mtlp: NOTRUN -> [SKIP][183] ([i915#3555] / [i915#8810]) +2 other tests skip
[183]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-5/igt@kms_flip_scaled_crc@flip-64bpp-xtile-to-32bpp-xtile-downscaling@pipe-a-default-mode.html
* igt@kms_flip_scaled_crc@flip-64bpp-yftile-to-16bpp-yftile-upscaling@pipe-a-valid-mode:
- shard-dg2: NOTRUN -> [SKIP][184] ([i915#2672])
[184]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-5/igt@kms_flip_scaled_crc@flip-64bpp-yftile-to-16bpp-yftile-upscaling@pipe-a-valid-mode.html
* igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling@pipe-a-valid-mode:
- shard-dg2: NOTRUN -> [SKIP][185] ([i915#2672] / [i915#3555])
[185]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-2/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling@pipe-a-valid-mode.html
* igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilercccs-downscaling@pipe-a-default-mode:
- shard-mtlp: NOTRUN -> [SKIP][186] ([i915#2672] / [i915#3555])
[186]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-2/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilercccs-downscaling@pipe-a-default-mode.html
* igt@kms_force_connector_basic@force-load-detect:
- shard-dg2: NOTRUN -> [SKIP][187] ([fdo#109285])
[187]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-3/igt@kms_force_connector_basic@force-load-detect.html
- shard-mtlp: NOTRUN -> [SKIP][188] ([fdo#109285])
[188]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-1/igt@kms_force_connector_basic@force-load-detect.html
* igt@kms_force_connector_basic@prune-stale-modes:
- shard-mtlp: NOTRUN -> [SKIP][189] ([i915#5274])
[189]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-1/igt@kms_force_connector_basic@prune-stale-modes.html
- shard-dg2: NOTRUN -> [SKIP][190] ([i915#5274])
[190]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-11/igt@kms_force_connector_basic@prune-stale-modes.html
* igt@kms_frontbuffer_tracking@fbc-2p-primscrn-cur-indfb-draw-blt:
- shard-dg2: NOTRUN -> [SKIP][191] ([i915#5354]) +71 other tests skip
[191]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-2/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-cur-indfb-draw-blt.html
* igt@kms_frontbuffer_tracking@fbc-2p-primscrn-cur-indfb-draw-mmap-wc:
- shard-dg2: NOTRUN -> [SKIP][192] ([i915#8708]) +20 other tests skip
[192]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-1/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-cur-indfb-draw-mmap-wc.html
- shard-tglu: NOTRUN -> [SKIP][193] ([fdo#109280]) +1 other test skip
[193]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-tglu-6/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-cur-indfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-pri-shrfb-draw-mmap-gtt:
- shard-rkl: NOTRUN -> [SKIP][194] ([fdo#111825] / [i915#1825]) +15 other tests skip
[194]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-2/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-pri-shrfb-draw-mmap-gtt.html
* igt@kms_frontbuffer_tracking@fbc-shrfb-scaledprimary:
- shard-dg2: [PASS][195] -> [FAIL][196] ([i915#6880])
[195]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/shard-dg2-2/igt@kms_frontbuffer_tracking@fbc-shrfb-scaledprimary.html
[196]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-3/igt@kms_frontbuffer_tracking@fbc-shrfb-scaledprimary.html
* igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-pri-shrfb-draw-mmap-cpu:
- shard-dg2: NOTRUN -> [SKIP][197] ([i915#3458]) +25 other tests skip
[197]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-2/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-pri-shrfb-draw-mmap-cpu.html
* igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-spr-indfb-draw-mmap-wc:
- shard-apl: NOTRUN -> [SKIP][198] ([fdo#109271]) +40 other tests skip
[198]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-apl4/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-spr-indfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-cur-indfb-onoff:
- shard-mtlp: NOTRUN -> [SKIP][199] ([i915#1825]) +37 other tests skip
[199]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-7/igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-cur-indfb-onoff.html
* igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-pri-indfb-draw-mmap-gtt:
- shard-mtlp: NOTRUN -> [SKIP][200] ([i915#8708]) +10 other tests skip
[200]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-5/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-pri-indfb-draw-mmap-gtt.html
* igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-pri-indfb-draw-render:
- shard-dg1: NOTRUN -> [SKIP][201] ([fdo#111825]) +4 other tests skip
[201]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg1-18/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-pri-indfb-draw-render.html
* igt@kms_frontbuffer_tracking@fbcpsr-tiling-y:
- shard-dg2: NOTRUN -> [SKIP][202] ([i915#5460])
[202]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-10/igt@kms_frontbuffer_tracking@fbcpsr-tiling-y.html
- shard-mtlp: NOTRUN -> [SKIP][203] ([i915#5460])
[203]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-3/igt@kms_frontbuffer_tracking@fbcpsr-tiling-y.html
* igt@kms_frontbuffer_tracking@psr-1p-pri-indfb-multidraw:
- shard-dg1: NOTRUN -> [SKIP][204] ([i915#3458]) +1 other test skip
[204]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg1-19/igt@kms_frontbuffer_tracking@psr-1p-pri-indfb-multidraw.html
* igt@kms_frontbuffer_tracking@psr-1p-primscrn-indfb-plflip-blt:
- shard-rkl: NOTRUN -> [SKIP][205] ([i915#3023]) +13 other tests skip
[205]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-7/igt@kms_frontbuffer_tracking@psr-1p-primscrn-indfb-plflip-blt.html
* igt@kms_frontbuffer_tracking@psr-2p-primscrn-indfb-plflip-blt:
- shard-snb: NOTRUN -> [SKIP][206] ([fdo#109271]) +190 other tests skip
[206]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-snb6/igt@kms_frontbuffer_tracking@psr-2p-primscrn-indfb-plflip-blt.html
* igt@kms_frontbuffer_tracking@psr-rgb101010-draw-mmap-gtt:
- shard-dg1: NOTRUN -> [SKIP][207] ([i915#8708])
[207]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg1-17/igt@kms_frontbuffer_tracking@psr-rgb101010-draw-mmap-gtt.html
* igt@kms_hdmi_inject@inject-audio:
- shard-dg1: NOTRUN -> [SKIP][208] ([i915#433])
[208]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg1-19/igt@kms_hdmi_inject@inject-audio.html
* igt@kms_hdr@bpc-switch:
- shard-rkl: NOTRUN -> [SKIP][209] ([i915#3555] / [i915#8228]) +1 other test skip
[209]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-1/igt@kms_hdr@bpc-switch.html
* igt@kms_hdr@static-toggle-suspend:
- shard-dg2: NOTRUN -> [SKIP][210] ([i915#3555] / [i915#8228]) +2 other tests skip
[210]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-5/igt@kms_hdr@static-toggle-suspend.html
* igt@kms_multipipe_modeset@basic-max-pipe-crc-check:
- shard-dg2: NOTRUN -> [SKIP][211] ([i915#4816])
[211]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-2/igt@kms_multipipe_modeset@basic-max-pipe-crc-check.html
* igt@kms_panel_fitting@legacy:
- shard-dg2: NOTRUN -> [SKIP][212] ([i915#6301])
[212]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-11/igt@kms_panel_fitting@legacy.html
* igt@kms_pipe_b_c_ivb@pipe-b-double-modeset-then-modeset-pipe-c:
- shard-mtlp: NOTRUN -> [SKIP][213] ([fdo#109289]) +3 other tests skip
[213]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-1/igt@kms_pipe_b_c_ivb@pipe-b-double-modeset-then-modeset-pipe-c.html
* igt@kms_pipe_crc_basic@suspend-read-crc@pipe-b-dp-1:
- shard-apl: NOTRUN -> [INCOMPLETE][214] ([i915#9392])
[214]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-apl6/igt@kms_pipe_crc_basic@suspend-read-crc@pipe-b-dp-1.html
* igt@kms_plane@plane-panning-bottom-right-suspend@pipe-a-planes:
- shard-mtlp: [PASS][215] -> [ABORT][216] ([i915#9262])
[215]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/shard-mtlp-8/igt@kms_plane@plane-panning-bottom-right-suspend@pipe-a-planes.html
[216]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-8/igt@kms_plane@plane-panning-bottom-right-suspend@pipe-a-planes.html
* igt@kms_plane_lowres@tiling-4@pipe-c-edp-1:
- shard-mtlp: NOTRUN -> [SKIP][217] ([i915#3582]) +3 other tests skip
[217]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-8/igt@kms_plane_lowres@tiling-4@pipe-c-edp-1.html
* igt@kms_plane_multiple@tiling-y:
- shard-dg2: NOTRUN -> [SKIP][218] ([i915#8806])
[218]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-11/igt@kms_plane_multiple@tiling-y.html
* igt@kms_plane_scaling@planes-downscale-factor-0-25-upscale-20x20@pipe-b-hdmi-a-2:
- shard-rkl: NOTRUN -> [SKIP][219] ([i915#5235]) +3 other tests skip
[219]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-4/igt@kms_plane_scaling@planes-downscale-factor-0-25-upscale-20x20@pipe-b-hdmi-a-2.html
* igt@kms_plane_scaling@planes-unity-scaling-downscale-factor-0-25@pipe-c-hdmi-a-1:
- shard-dg2: NOTRUN -> [SKIP][220] ([i915#5235]) +7 other tests skip
[220]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-10/igt@kms_plane_scaling@planes-unity-scaling-downscale-factor-0-25@pipe-c-hdmi-a-1.html
* igt@kms_plane_scaling@planes-unity-scaling-downscale-factor-0-25@pipe-d-hdmi-a-4:
- shard-dg1: NOTRUN -> [SKIP][221] ([i915#5235]) +15 other tests skip
[221]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg1-14/igt@kms_plane_scaling@planes-unity-scaling-downscale-factor-0-25@pipe-d-hdmi-a-4.html
* igt@kms_plane_scaling@planes-unity-scaling-downscale-factor-0-5@pipe-d-edp-1:
- shard-mtlp: NOTRUN -> [SKIP][222] ([i915#5235]) +11 other tests skip
[222]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-5/igt@kms_plane_scaling@planes-unity-scaling-downscale-factor-0-5@pipe-d-edp-1.html
* igt@kms_psr2_sf@cursor-plane-update-sf:
- shard-rkl: NOTRUN -> [SKIP][223] ([fdo#111068] / [i915#658])
[223]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-7/igt@kms_psr2_sf@cursor-plane-update-sf.html
* igt@kms_psr2_sf@overlay-plane-move-continuous-exceed-sf:
- shard-apl: NOTRUN -> [SKIP][224] ([fdo#109271] / [i915#658])
[224]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-apl2/igt@kms_psr2_sf@overlay-plane-move-continuous-exceed-sf.html
* igt@kms_psr2_su@page_flip-xrgb8888:
- shard-dg2: NOTRUN -> [SKIP][225] ([i915#658]) +4 other tests skip
[225]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-11/igt@kms_psr2_su@page_flip-xrgb8888.html
* igt@kms_psr@primary_mmap_cpu:
- shard-glk: NOTRUN -> [SKIP][226] ([fdo#109271]) +15 other tests skip
[226]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-glk6/igt@kms_psr@primary_mmap_cpu.html
* igt@kms_psr@psr2_cursor_plane_onoff:
- shard-rkl: NOTRUN -> [SKIP][227] ([i915#1072]) +5 other tests skip
[227]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-7/igt@kms_psr@psr2_cursor_plane_onoff.html
- shard-dg1: NOTRUN -> [SKIP][228] ([i915#1072]) +2 other tests skip
[228]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg1-12/igt@kms_psr@psr2_cursor_plane_onoff.html
* igt@kms_psr@psr2_dpms:
- shard-dg2: NOTRUN -> [SKIP][229] ([i915#1072]) +12 other tests skip
[229]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-1/igt@kms_psr@psr2_dpms.html
* igt@kms_psr_stress_test@flip-primary-invalidate-overlay:
- shard-rkl: NOTRUN -> [SKIP][230] ([i915#5461] / [i915#658])
[230]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-4/igt@kms_psr_stress_test@flip-primary-invalidate-overlay.html
* igt@kms_psr_stress_test@invalidate-primary-flip-overlay:
- shard-dg2: NOTRUN -> [SKIP][231] ([i915#5461] / [i915#658])
[231]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-2/igt@kms_psr_stress_test@invalidate-primary-flip-overlay.html
* igt@kms_rotation_crc@bad-tiling:
- shard-dg2: NOTRUN -> [SKIP][232] ([i915#4235])
[232]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-11/igt@kms_rotation_crc@bad-tiling.html
* igt@kms_rotation_crc@primary-rotation-270:
- shard-mtlp: NOTRUN -> [SKIP][233] ([i915#4235]) +1 other test skip
[233]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-6/igt@kms_rotation_crc@primary-rotation-270.html
* igt@kms_rotation_crc@primary-y-tiled-reflect-x-0:
- shard-rkl: [PASS][234] -> [INCOMPLETE][235] ([i915#8875])
[234]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/shard-rkl-1/igt@kms_rotation_crc@primary-y-tiled-reflect-x-0.html
[235]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-6/igt@kms_rotation_crc@primary-y-tiled-reflect-x-0.html
- shard-mtlp: NOTRUN -> [SKIP][236] ([i915#5289])
[236]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-7/igt@kms_rotation_crc@primary-y-tiled-reflect-x-0.html
* igt@kms_rotation_crc@primary-yf-tiled-reflect-x-270:
- shard-dg2: NOTRUN -> [SKIP][237] ([i915#4235] / [i915#5190]) +1 other test skip
[237]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-11/igt@kms_rotation_crc@primary-yf-tiled-reflect-x-270.html
- shard-rkl: NOTRUN -> [SKIP][238] ([fdo#111615] / [i915#5289])
[238]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-7/igt@kms_rotation_crc@primary-yf-tiled-reflect-x-270.html
- shard-dg1: NOTRUN -> [SKIP][239] ([fdo#111615] / [i915#5289])
[239]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg1-17/igt@kms_rotation_crc@primary-yf-tiled-reflect-x-270.html
* igt@kms_setmode@basic-clone-single-crtc:
- shard-rkl: NOTRUN -> [SKIP][240] ([i915#3555] / [i915#4098])
[240]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-6/igt@kms_setmode@basic-clone-single-crtc.html
* igt@kms_setmode@clone-exclusive-crtc:
- shard-mtlp: NOTRUN -> [SKIP][241] ([i915#3555] / [i915#8809])
[241]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-8/igt@kms_setmode@clone-exclusive-crtc.html
* igt@kms_tiled_display@basic-test-pattern:
- shard-dg1: NOTRUN -> [SKIP][242] ([i915#8623])
[242]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg1-15/igt@kms_tiled_display@basic-test-pattern.html
- shard-mtlp: NOTRUN -> [SKIP][243] ([i915#8623])
[243]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-1/igt@kms_tiled_display@basic-test-pattern.html
- shard-dg2: NOTRUN -> [SKIP][244] ([i915#8623])
[244]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-3/igt@kms_tiled_display@basic-test-pattern.html
- shard-rkl: NOTRUN -> [SKIP][245] ([i915#8623])
[245]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-1/igt@kms_tiled_display@basic-test-pattern.html
* igt@kms_vblank@pipe-c-query-forked-busy-hang:
- shard-rkl: NOTRUN -> [SKIP][246] ([i915#4070] / [i915#6768]) +1 other test skip
[246]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-6/igt@kms_vblank@pipe-c-query-forked-busy-hang.html
* igt@kms_vblank@pipe-c-ts-continuation-suspend:
- shard-mtlp: NOTRUN -> [ABORT][247] ([i915#9262]) +11 other tests abort
[247]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-6/igt@kms_vblank@pipe-c-ts-continuation-suspend.html
* igt@kms_vblank@pipe-d-wait-forked:
- shard-rkl: NOTRUN -> [SKIP][248] ([i915#4070] / [i915#533] / [i915#6768]) +3 other tests skip
[248]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-6/igt@kms_vblank@pipe-d-wait-forked.html
* igt@kms_vrr@flip-suspend:
- shard-mtlp: NOTRUN -> [SKIP][249] ([i915#3555] / [i915#8808])
[249]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-2/igt@kms_vrr@flip-suspend.html
* igt@kms_writeback@writeback-fb-id:
- shard-dg2: NOTRUN -> [SKIP][250] ([i915#2437])
[250]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-5/igt@kms_writeback@writeback-fb-id.html
* igt@kms_writeback@writeback-invalid-parameters:
- shard-mtlp: NOTRUN -> [SKIP][251] ([i915#2437]) +1 other test skip
[251]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-5/igt@kms_writeback@writeback-invalid-parameters.html
* igt@perf@global-sseu-config:
- shard-mtlp: NOTRUN -> [SKIP][252] ([i915#7387])
[252]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-8/igt@perf@global-sseu-config.html
- shard-dg2: NOTRUN -> [SKIP][253] ([i915#7387])
[253]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-1/igt@perf@global-sseu-config.html
* igt@perf@mi-rpc:
- shard-mtlp: NOTRUN -> [SKIP][254] ([i915#2434])
[254]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-2/igt@perf@mi-rpc.html
* igt@perf_pmu@busy-idle-check-all@ccs3:
- shard-dg2: [PASS][255] -> [FAIL][256] ([i915#4521]) +3 other tests fail
[255]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/shard-dg2-10/igt@perf_pmu@busy-idle-check-all@ccs3.html
[256]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-5/igt@perf_pmu@busy-idle-check-all@ccs3.html
* igt@perf_pmu@cpu-hotplug:
- shard-rkl: NOTRUN -> [SKIP][257] ([i915#8850])
[257]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-7/igt@perf_pmu@cpu-hotplug.html
* igt@perf_pmu@faulting-read@gtt:
- shard-mtlp: NOTRUN -> [SKIP][258] ([i915#8440])
[258]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-7/igt@perf_pmu@faulting-read@gtt.html
* igt@perf_pmu@rc6-all-gts:
- shard-dg2: NOTRUN -> [SKIP][259] ([i915#5608] / [i915#8516])
[259]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-5/igt@perf_pmu@rc6-all-gts.html
* igt@perf_pmu@rc6@other-idle-gt0:
- shard-dg2: NOTRUN -> [SKIP][260] ([i915#8516])
[260]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-11/igt@perf_pmu@rc6@other-idle-gt0.html
* igt@perf_pmu@render-node-busy-idle@vcs1:
- shard-dg2: [PASS][261] -> [FAIL][262] ([i915#4349]) +15 other tests fail
[261]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/shard-dg2-6/igt@perf_pmu@render-node-busy-idle@vcs1.html
[262]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-2/igt@perf_pmu@render-node-busy-idle@vcs1.html
* igt@perf_pmu@semaphore-busy@vcs1:
- shard-dg1: [PASS][263] -> [FAIL][264] ([i915#4349]) +5 other tests fail
[263]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/shard-dg1-19/igt@perf_pmu@semaphore-busy@vcs1.html
[264]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg1-18/igt@perf_pmu@semaphore-busy@vcs1.html
* igt@prime_vgem@basic-fence-mmap:
- shard-dg2: NOTRUN -> [SKIP][265] ([i915#3708] / [i915#4077])
[265]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-1/igt@prime_vgem@basic-fence-mmap.html
* igt@prime_vgem@basic-write:
- shard-mtlp: NOTRUN -> [SKIP][266] ([i915#3708])
[266]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-4/igt@prime_vgem@basic-write.html
* igt@prime_vgem@fence-read-hang:
- shard-rkl: NOTRUN -> [SKIP][267] ([fdo#109295] / [i915#3708])
[267]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-1/igt@prime_vgem@fence-read-hang.html
* igt@prime_vgem@fence-write-hang:
- shard-tglu: NOTRUN -> [SKIP][268] ([fdo#109295])
[268]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-tglu-2/igt@prime_vgem@fence-write-hang.html
- shard-dg2: NOTRUN -> [SKIP][269] ([i915#3708])
[269]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-7/igt@prime_vgem@fence-write-hang.html
* igt@tools_test@sysfs_l3_parity:
- shard-rkl: NOTRUN -> [SKIP][270] ([fdo#109307])
[270]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-4/igt@tools_test@sysfs_l3_parity.html
- shard-mtlp: NOTRUN -> [SKIP][271] ([i915#4818])
[271]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-3/igt@tools_test@sysfs_l3_parity.html
* igt@v3d/v3d_perfmon@create-perfmon-0:
- shard-tglu: NOTRUN -> [SKIP][272] ([fdo#109315] / [i915#2575])
[272]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-tglu-3/igt@v3d/v3d_perfmon@create-perfmon-0.html
* igt@v3d/v3d_perfmon@get-values-invalid-pad:
- shard-mtlp: NOTRUN -> [SKIP][273] ([i915#2575]) +12 other tests skip
[273]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-7/igt@v3d/v3d_perfmon@get-values-invalid-pad.html
* igt@v3d/v3d_submit_cl@multisync-out-syncs:
- shard-dg1: NOTRUN -> [SKIP][274] ([i915#2575])
[274]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg1-16/igt@v3d/v3d_submit_cl@multisync-out-syncs.html
* igt@v3d/v3d_submit_csd@single-out-sync:
- shard-dg2: NOTRUN -> [SKIP][275] ([i915#2575]) +19 other tests skip
[275]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-3/igt@v3d/v3d_submit_csd@single-out-sync.html
* igt@v3d/v3d_submit_csd@valid-multisync-submission:
- shard-rkl: NOTRUN -> [SKIP][276] ([fdo#109315]) +4 other tests skip
[276]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-4/igt@v3d/v3d_submit_csd@valid-multisync-submission.html
* igt@vc4/vc4_perfmon@destroy-valid-perfmon:
- shard-dg2: NOTRUN -> [SKIP][277] ([i915#7711]) +11 other tests skip
[277]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-11/igt@vc4/vc4_perfmon@destroy-valid-perfmon.html
* igt@vc4/vc4_purgeable_bo@access-purged-bo-mem:
- shard-mtlp: NOTRUN -> [SKIP][278] ([i915#7711]) +7 other tests skip
[278]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-7/igt@vc4/vc4_purgeable_bo@access-purged-bo-mem.html
* igt@vc4/vc4_purgeable_bo@mark-unpurgeable-twice:
- shard-tglu: NOTRUN -> [SKIP][279] ([i915#2575])
[279]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-tglu-7/igt@vc4/vc4_purgeable_bo@mark-unpurgeable-twice.html
* igt@vc4/vc4_wait_bo@used-bo-0ns:
- shard-dg1: NOTRUN -> [SKIP][280] ([i915#7711]) +1 other test skip
[280]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg1-19/igt@vc4/vc4_wait_bo@used-bo-0ns.html
* igt@vc4/vc4_wait_seqno@bad-seqno-0ns:
- shard-rkl: NOTRUN -> [SKIP][281] ([i915#7711]) +5 other tests skip
[281]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-4/igt@vc4/vc4_wait_seqno@bad-seqno-0ns.html
#### Possible fixes ####
* igt@debugfs_test@read_all_entries:
- shard-dg1: [DMESG-WARN][282] ([i915#4423]) -> [PASS][283]
[282]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/shard-dg1-18/igt@debugfs_test@read_all_entries.html
[283]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg1-15/igt@debugfs_test@read_all_entries.html
* igt@device_reset@unbind-reset-rebind:
- shard-dg2: [INCOMPLETE][284] ([i915#5507]) -> [PASS][285]
[284]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/shard-dg2-11/igt@device_reset@unbind-reset-rebind.html
[285]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-11/igt@device_reset@unbind-reset-rebind.html
* igt@drm_fdinfo@most-busy-check-all@rcs0:
- shard-rkl: [FAIL][286] ([i915#7742]) -> [PASS][287] +1 other test pass
[286]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/shard-rkl-1/igt@drm_fdinfo@most-busy-check-all@rcs0.html
[287]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-7/igt@drm_fdinfo@most-busy-check-all@rcs0.html
* igt@gem_barrier_race@remote-request@rcs0:
- shard-glk: [ABORT][288] ([i915#8190]) -> [PASS][289]
[288]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/shard-glk5/igt@gem_barrier_race@remote-request@rcs0.html
[289]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-glk9/igt@gem_barrier_race@remote-request@rcs0.html
* igt@gem_exec_fair@basic-pace-share@rcs0:
- shard-glk: [FAIL][290] ([i915#2842]) -> [PASS][291] +1 other test pass
[290]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/shard-glk1/igt@gem_exec_fair@basic-pace-share@rcs0.html
[291]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-glk3/igt@gem_exec_fair@basic-pace-share@rcs0.html
* igt@gem_exec_fair@basic-pace-solo@rcs0:
- shard-rkl: [FAIL][292] ([i915#2842]) -> [PASS][293]
[292]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/shard-rkl-1/igt@gem_exec_fair@basic-pace-solo@rcs0.html
[293]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-2/igt@gem_exec_fair@basic-pace-solo@rcs0.html
* igt@gem_lmem_swapping@smem-oom@lmem0:
- shard-dg2: [TIMEOUT][294] ([i915#5493]) -> [PASS][295]
[294]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/shard-dg2-10/igt@gem_lmem_swapping@smem-oom@lmem0.html
[295]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-7/igt@gem_lmem_swapping@smem-oom@lmem0.html
* igt@i915_pm_rc6_residency@rc6-idle@vecs0:
- shard-dg1: [FAIL][296] ([i915#3591]) -> [PASS][297]
[296]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/shard-dg1-19/igt@i915_pm_rc6_residency@rc6-idle@vecs0.html
[297]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg1-14/igt@i915_pm_rc6_residency@rc6-idle@vecs0.html
* igt@i915_pm_rpm@dpms-lpsp:
- shard-dg1: [SKIP][298] ([i915#1397]) -> [PASS][299] +1 other test pass
[298]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/shard-dg1-16/igt@i915_pm_rpm@dpms-lpsp.html
[299]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg1-19/igt@i915_pm_rpm@dpms-lpsp.html
* igt@i915_pm_rpm@dpms-mode-unset-non-lpsp:
- shard-dg2: [SKIP][300] ([i915#1397]) -> [PASS][301] +1 other test pass
[300]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/shard-dg2-10/igt@i915_pm_rpm@dpms-mode-unset-non-lpsp.html
[301]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-5/igt@i915_pm_rpm@dpms-mode-unset-non-lpsp.html
* igt@i915_pm_rps@waitboost:
- shard-dg1: [FAIL][302] ([i915#8229]) -> [PASS][303]
[302]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/shard-dg1-15/igt@i915_pm_rps@waitboost.html
[303]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg1-12/igt@i915_pm_rps@waitboost.html
* igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions-varying-size:
- shard-glk: [FAIL][304] ([i915#2346]) -> [PASS][305]
[304]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/shard-glk4/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions-varying-size.html
[305]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-glk2/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions-varying-size.html
* igt@kms_fbcon_fbt@fbc-suspend:
- shard-tglu: [FAIL][306] ([i915#4767]) -> [PASS][307]
[306]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/shard-tglu-10/igt@kms_fbcon_fbt@fbc-suspend.html
[307]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-tglu-2/igt@kms_fbcon_fbt@fbc-suspend.html
* igt@kms_frontbuffer_tracking@fbc-suspend:
- shard-mtlp: [ABORT][308] ([i915#9262]) -> [PASS][309] +1 other test pass
[308]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/shard-mtlp-6/igt@kms_frontbuffer_tracking@fbc-suspend.html
[309]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-7/igt@kms_frontbuffer_tracking@fbc-suspend.html
* igt@kms_pipe_crc_basic@suspend-read-crc@pipe-a-dp-1:
- shard-apl: [INCOMPLETE][310] ([i915#1982] / [i915#9392]) -> [PASS][311]
[310]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/shard-apl6/igt@kms_pipe_crc_basic@suspend-read-crc@pipe-a-dp-1.html
[311]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-apl6/igt@kms_pipe_crc_basic@suspend-read-crc@pipe-a-dp-1.html
* {igt@kms_pm_dc@dc9-dpms}:
- shard-apl: [SKIP][312] ([fdo#109271]) -> [PASS][313]
[312]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/shard-apl3/igt@kms_pm_dc@dc9-dpms.html
[313]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-apl2/igt@kms_pm_dc@dc9-dpms.html
* {igt@kms_pm_lpsp@kms-lpsp@kms-lpsp-hdmi-a}:
- shard-rkl: [SKIP][314] ([i915#1937]) -> [PASS][315]
[314]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/shard-rkl-1/igt@kms_pm_lpsp@kms-lpsp@kms-lpsp-hdmi-a.html
[315]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-7/igt@kms_pm_lpsp@kms-lpsp@kms-lpsp-hdmi-a.html
* igt@kms_universal_plane@cursor-fb-leak-pipe-a:
- shard-dg1: [FAIL][316] ([i915#9196]) -> [PASS][317] +1 other test pass
[316]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/shard-dg1-15/igt@kms_universal_plane@cursor-fb-leak-pipe-a.html
[317]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg1-16/igt@kms_universal_plane@cursor-fb-leak-pipe-a.html
- shard-snb: [FAIL][318] ([i915#9196]) -> [PASS][319]
[318]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/shard-snb1/igt@kms_universal_plane@cursor-fb-leak-pipe-a.html
[319]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-snb2/igt@kms_universal_plane@cursor-fb-leak-pipe-a.html
* igt@perf_pmu@frequency@gt0:
- shard-glk: [SKIP][320] ([fdo#109271]) -> [PASS][321]
[320]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/shard-glk2/igt@perf_pmu@frequency@gt0.html
[321]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-glk2/igt@perf_pmu@frequency@gt0.html
- shard-snb: [SKIP][322] ([fdo#109271]) -> [PASS][323]
[322]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/shard-snb1/igt@perf_pmu@frequency@gt0.html
[323]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-snb6/igt@perf_pmu@frequency@gt0.html
#### Warnings ####
* igt@gem_lmem_swapping@parallel-random-verify-ccs@lmem0:
- shard-dg1: [SKIP][324] ([i915#4423] / [i915#4565]) -> [SKIP][325] ([i915#4565])
[324]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/shard-dg1-18/igt@gem_lmem_swapping@parallel-random-verify-ccs@lmem0.html
[325]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg1-12/igt@gem_lmem_swapping@parallel-random-verify-ccs@lmem0.html
* igt@i915_pm_rc6_residency@rc6-idle@rcs0:
- shard-tglu: [FAIL][326] ([i915#2681] / [i915#3591]) -> [WARN][327] ([i915#2681])
[326]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/shard-tglu-9/igt@i915_pm_rc6_residency@rc6-idle@rcs0.html
[327]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-tglu-5/igt@i915_pm_rc6_residency@rc6-idle@rcs0.html
* igt@kms_ccs@pipe-a-ccs-on-another-bo-y_tiled_gen12_mc_ccs:
- shard-dg1: [SKIP][328] ([i915#3689] / [i915#3886] / [i915#4423] / [i915#5354] / [i915#6095]) -> [SKIP][329] ([i915#3689] / [i915#3886] / [i915#5354] / [i915#6095])
[328]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/shard-dg1-12/igt@kms_ccs@pipe-a-ccs-on-another-bo-y_tiled_gen12_mc_ccs.html
[329]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg1-15/igt@kms_ccs@pipe-a-ccs-on-another-bo-y_tiled_gen12_mc_ccs.html
* igt@kms_fbcon_fbt@psr-suspend:
- shard-rkl: [SKIP][330] ([i915#3955]) -> [SKIP][331] ([fdo#110189] / [i915#3955])
[330]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/shard-rkl-7/igt@kms_fbcon_fbt@psr-suspend.html
[331]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-rkl-2/igt@kms_fbcon_fbt@psr-suspend.html
* igt@kms_plane@plane-panning-bottom-right-suspend@pipe-b-planes:
- shard-mtlp: [ABORT][332] ([i915#9262]) -> [DMESG-WARN][333] ([i915#9262])
[332]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/shard-mtlp-8/igt@kms_plane@plane-panning-bottom-right-suspend@pipe-b-planes.html
[333]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-mtlp-8/igt@kms_plane@plane-panning-bottom-right-suspend@pipe-b-planes.html
* igt@kms_psr@primary_page_flip:
- shard-dg1: [SKIP][334] ([i915#1072] / [i915#4078]) -> [SKIP][335] ([i915#1072])
[334]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/shard-dg1-18/igt@kms_psr@primary_page_flip.html
[335]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg1-19/igt@kms_psr@primary_page_flip.html
* igt@perf_pmu@rc6-suspend:
- shard-snb: [DMESG-WARN][336] ([i915#8841]) -> [DMESG-FAIL][337] ([fdo#103375])
[336]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/shard-snb4/igt@perf_pmu@rc6-suspend.html
[337]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-snb7/igt@perf_pmu@rc6-suspend.html
* igt@prime_mmap@test_aperture_limit@test_aperture_limit-smem:
- shard-dg2: [CRASH][338] ([i915#9351]) -> [INCOMPLETE][339] ([i915#5493])
[338]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13689/shard-dg2-2/igt@prime_mmap@test_aperture_limit@test_aperture_limit-smem.html
[339]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/shard-dg2-2/igt@prime_mmap@test_aperture_limit@test_aperture_limit-smem.html
{name}: Tests listed in braces are suppressed, meaning they are ignored when
computing the overall status of the difference (SUCCESS, WARNING, or FAILURE).
[fdo#103375]: https://bugs.freedesktop.org/show_bug.cgi?id=103375
[fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
[fdo#109274]: https://bugs.freedesktop.org/show_bug.cgi?id=109274
[fdo#109280]: https://bugs.freedesktop.org/show_bug.cgi?id=109280
[fdo#109285]: https://bugs.freedesktop.org/show_bug.cgi?id=109285
[fdo#109289]: https://bugs.freedesktop.org/show_bug.cgi?id=109289
[fdo#109295]: https://bugs.freedesktop.org/show_bug.cgi?id=109295
[fdo#109302]: https://bugs.freedesktop.org/show_bug.cgi?id=109302
[fdo#109307]: https://bugs.freedesktop.org/show_bug.cgi?id=109307
[fdo#109315]: https://bugs.freedesktop.org/show_bug.cgi?id=109315
[fdo#109506]: https://bugs.freedesktop.org/show_bug.cgi?id=109506
[fdo#110189]: https://bugs.freedesktop.org/show_bug.cgi?id=110189
[fdo#110723]: https://bugs.freedesktop.org/show_bug.cgi?id=110723
[fdo#111068]: https://bugs.freedesktop.org/show_bug.cgi?id=111068
[fdo#111614]: https://bugs.freedesktop.org/show_bug.cgi?id=111614
[fdo#111615]: https://bugs.freedesktop.org/show_bug.cgi?id=111615
[fdo#111767]: https://bugs.freedesktop.org/show_bug.cgi?id=111767
[fdo#111825]: https://bugs.freedesktop.org/show_bug.cgi?id=111825
[fdo#111827]: https://bugs.freedesktop.org/show_bug.cgi?id=111827
[fdo#112283]: https://bugs.freedesktop.org/show_bug.cgi?id=112283
[i915#1072]: https://gitlab.freedesktop.org/drm/intel/issues/1072
[i915#1099]: https://gitlab.freedesktop.org/drm/intel/issues/1099
[i915#1397]: https://gitlab.freedesktop.org/drm/intel/issues/1397
[i915#1769]: https://gitlab.freedesktop.org/drm/intel/issues/1769
[i915#1825]: https://gitlab.freedesktop.org/drm/intel/issues/1825
[i915#1839]: https://gitlab.freedesktop.org/drm/intel/issues/1839
[i915#1937]: https://gitlab.freedesktop.org/drm/intel/issues/1937
[i915#1982]: https://gitlab.freedesktop.org/drm/intel/issues/1982
[i915#2190]: https://gitlab.freedesktop.org/drm/intel/issues/2190
[i915#2346]: https://gitlab.freedesktop.org/drm/intel/issues/2346
[i915#2410]: https://gitlab.freedesktop.org/drm/intel/issues/2410
[i915#2434]: https://gitlab.freedesktop.org/drm/intel/issues/2434
[i915#2437]: https://gitlab.freedesktop.org/drm/intel/issues/2437
[i915#2527]: https://gitlab.freedesktop.org/drm/intel/issues/2527
[i915#2575]: https://gitlab.freedesktop.org/drm/intel/issues/2575
[i915#2672]: https://gitlab.freedesktop.org/drm/intel/issues/2672
[i915#2681]: https://gitlab.freedesktop.org/drm/intel/issues/2681
[i915#2705]: https://gitlab.freedesktop.org/drm/intel/issues/2705
[i915#280]: https://gitlab.freedesktop.org/drm/intel/issues/280
[i915#2842]: https://gitlab.freedesktop.org/drm/intel/issues/2842
[i915#2856]: https://gitlab.freedesktop.org/drm/intel/issues/2856
[i915#3023]: https://gitlab.freedesktop.org/drm/intel/issues/3023
[i915#3281]: https://gitlab.freedesktop.org/drm/intel/issues/3281
[i915#3282]: https://gitlab.freedesktop.org/drm/intel/issues/3282
[i915#3297]: https://gitlab.freedesktop.org/drm/intel/issues/3297
[i915#3299]: https://gitlab.freedesktop.org/drm/intel/issues/3299
[i915#3359]: https://gitlab.freedesktop.org/drm/intel/issues/3359
[i915#3458]: https://gitlab.freedesktop.org/drm/intel/issues/3458
[i915#3539]: https://gitlab.freedesktop.org/drm/intel/issues/3539
[i915#3546]: https://gitlab.freedesktop.org/drm/intel/issues/3546
[i915#3555]: https://gitlab.freedesktop.org/drm/intel/issues/3555
[i915#3582]: https://gitlab.freedesktop.org/drm/intel/issues/3582
[i915#3591]: https://gitlab.freedesktop.org/drm/intel/issues/3591
[i915#3637]: https://gitlab.freedesktop.org/drm/intel/issues/3637
[i915#3638]: https://gitlab.freedesktop.org/drm/intel/issues/3638
[i915#3689]: https://gitlab.freedesktop.org/drm/intel/issues/3689
[i915#3708]: https://gitlab.freedesktop.org/drm/intel/issues/3708
[i915#3734]: https://gitlab.freedesktop.org/drm/intel/issues/3734
[i915#3742]: https://gitlab.freedesktop.org/drm/intel/issues/3742
[i915#3743]: https://gitlab.freedesktop.org/drm/intel/issues/3743
[i915#3840]: https://gitlab.freedesktop.org/drm/intel/issues/3840
[i915#3886]: https://gitlab.freedesktop.org/drm/intel/issues/3886
[i915#3955]: https://gitlab.freedesktop.org/drm/intel/issues/3955
[i915#4070]: https://gitlab.freedesktop.org/drm/intel/issues/4070
[i915#4077]: https://gitlab.freedesktop.org/drm/intel/issues/4077
[i915#4078]: https://gitlab.freedesktop.org/drm/intel/issues/4078
[i915#4079]: https://gitlab.freedesktop.org/drm/intel/issues/4079
[i915#4083]: https://gitlab.freedesktop.org/drm/intel/issues/4083
[i915#4087]: https://gitlab.freedesktop.org/drm/intel/issues/4087
[i915#4098]: https://gitlab.freedesktop.org/drm/intel/issues/4098
[i915#4103]: https://gitlab.freedesktop.org/drm/intel/issues/4103
[i915#4212]: https://gitlab.freedesktop.org/drm/intel/issues/4212
[i915#4213]: https://gitlab.freedesktop.org/drm/intel/issues/4213
[i915#4235]: https://gitlab.freedesktop.org/drm/intel/issues/4235
[i915#4270]: https://gitlab.freedesktop.org/drm/intel/issues/4270
[i915#433]: https://gitlab.freedesktop.org/drm/intel/issues/433
[i915#4349]: https://gitlab.freedesktop.org/drm/intel/issues/4349
[i915#4423]: https://gitlab.freedesktop.org/drm/intel/issues/4423
[i915#4473]: https://gitlab.freedesktop.org/drm/intel/issues/4473
[i915#4521]: https://gitlab.freedesktop.org/drm/intel/issues/4521
[i915#4525]: https://gitlab.freedesktop.org/drm/intel/issues/4525
[i915#4537]: https://gitlab.freedesktop.org/drm/intel/issues/4537
[i915#4538]: https://gitlab.freedesktop.org/drm/intel/issues/4538
[i915#4565]: https://gitlab.freedesktop.org/drm/intel/issues/4565
[i915#4613]: https://gitlab.freedesktop.org/drm/intel/issues/4613
[i915#4767]: https://gitlab.freedesktop.org/drm/intel/issues/4767
[i915#4771]: https://gitlab.freedesktop.org/drm/intel/issues/4771
[i915#4812]: https://gitlab.freedesktop.org/drm/intel/issues/4812
[i915#4816]: https://gitlab.freedesktop.org/drm/intel/issues/4816
[i915#4818]: https://gitlab.freedesktop.org/drm/intel/issues/4818
[i915#4852]: https://gitlab.freedesktop.org/drm/intel/issues/4852
[i915#4860]: https://gitlab.freedesktop.org/drm/intel/issues/4860
[i915#4873]: https://gitlab.freedesktop.org/drm/intel/issues/4873
[i915#4879]: https://gitlab.freedesktop.org/drm/intel/issues/4879
[i915#4880]: https://gitlab.freedesktop.org/drm/intel/issues/4880
[i915#4885]: https://gitlab.freedesktop.org/drm/intel/issues/4885
[i915#5107]: https://gitlab.freedesktop.org/drm/intel/issues/5107
[i915#5176]: https://gitlab.freedesktop.org/drm/intel/issues/5176
[i915#5190]: https://gitlab.freedesktop.org/drm/intel/issues/5190
[i915#5235]: https://gitlab.freedesktop.org/drm/intel/issues/5235
[i915#5274]: https://gitlab.freedesktop.org/drm/intel/issues/5274
[i915#5286]: https://gitlab.freedesktop.org/drm/intel/issues/5286
[i915#5289]: https://gitlab.freedesktop.org/drm/intel/issues/5289
[i915#533]: https://gitlab.freedesktop.org/drm/intel/issues/533
[i915#5354]: https://gitlab.freedesktop.org/drm/intel/issues/5354
[i915#5460]: https://gitlab.freedesktop.org/drm/intel/issues/5460
[i915#5461]: https://gitlab.freedesktop.org/drm/intel/issues/5461
[i915#5493]: https://gitlab.freedesktop.org/drm/intel/issues/5493
[i915#5507]: https://gitlab.freedesktop.org/drm/intel/issues/5507
[i915#5608]: https://gitlab.freedesktop.org/drm/intel/issues/5608
[i915#5723]: https://gitlab.freedesktop.org/drm/intel/issues/5723
[i915#5784]: https://gitlab.freedesktop.org/drm/intel/issues/5784
[i915#5978]: https://gitlab.freedesktop.org/drm/intel/issues/5978
[i915#6095]: https://gitlab.freedesktop.org/drm/intel/issues/6095
[i915#6122]: https://gitlab.freedesktop.org/drm/intel/issues/6122
[i915#6227]: https://gitlab.freedesktop.org/drm/intel/issues/6227
[i915#6268]: https://gitlab.freedesktop.org/drm/intel/issues/6268
[i915#6301]: https://gitlab.freedesktop.org/drm/intel/issues/6301
[i915#658]: https://gitlab.freedesktop.org/drm/intel/issues/658
[i915#6590]: https://gitlab.freedesktop.org/drm/intel/issues/6590
Build changes
-------------
* CI: CI-20190529 -> None
* IGT: IGT_7506 -> IGTPW_9890
* Piglit: piglit_4509 -> None
CI-20190529: 20190529
CI_DRM_13689: 5933eb0a0717a28e668d33e01a707311d31cebbb @ git://anongit.freedesktop.org/gfx-ci/linux
IGTPW_9890: 9890
IGT_7506: 4fdf544bd0a38c5a100ef43c30171827e1c8c442 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
piglit_4509: fdc5a4ca11124ab8413c7988896eec4c97336694 @ git://anongit.freedesktop.org/piglit
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9890/index.html