* [igt-dev] [PATCH i-g-t 01/16] drm-uapi/xe_drm: Align with new PMU interface
2023-09-19 14:19 [igt-dev] [PATCH i-g-t 00/16] uAPI Alignment - take 1 Rodrigo Vivi
@ 2023-09-19 14:19 ` Rodrigo Vivi
2023-09-20 8:28 ` Aravind Iddamsetty
2023-09-19 14:19 ` [igt-dev] [PATCH i-g-t 02/16] tests/intel/xe_query: Add a test for querying cs cycles Rodrigo Vivi
` (17 subsequent siblings)
18 siblings, 1 reply; 23+ messages in thread
From: Rodrigo Vivi @ 2023-09-19 14:19 UTC (permalink / raw)
To: intel-xe, igt-dev; +Cc: Rodrigo Vivi
Align with commit ("drm/xe/pmu: Enable PMU interface")
Cc: Francois Dugast <francois.dugast@intel.com>
Cc: Aravind Iddamsetty <aravind.iddamsetty@linux.intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
include/drm-uapi/xe_drm.h | 38 ++++++++++++++++++++++++++++++++++++++
1 file changed, 38 insertions(+)
diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index 804c02270..6aaa8517c 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -1053,6 +1053,44 @@ struct drm_xe_vm_madvise {
__u64 reserved[2];
};
+/**
+ * XE PMU event config IDs
+ *
+ * Check 'man perf_event_open' to use these IDs in 'struct perf_event_attr'
+ * as part of the perf_event_open syscall to read a particular event.
+ *
+ * For example, to open XE_PMU_INTERRUPTS(0):
+ *
+ * .. code-block:: C
+ * struct perf_event_attr attr;
+ * long long count;
+ * int cpu = 0;
+ * int fd;
+ *
+ * memset(&attr, 0, sizeof(struct perf_event_attr));
+ * attr.type = type; // eg: /sys/bus/event_source/devices/xe_0000_56_00.0/type
+ * attr.read_format = PERF_FORMAT_TOTAL_TIME_ENABLED;
+ * attr.use_clockid = 1;
+ * attr.clockid = CLOCK_MONOTONIC;
+ * attr.config = XE_PMU_INTERRUPTS(0);
+ *
+ * fd = syscall(__NR_perf_event_open, &attr, -1, cpu, -1, 0);
+ */
+
+/*
+ * Top bits of every counter are GT id.
+ */
+#define __XE_PMU_GT_SHIFT (56)
+
+#define ___XE_PMU_OTHER(gt, x) \
+ (((__u64)(x)) | ((__u64)(gt) << __XE_PMU_GT_SHIFT))
+
+#define XE_PMU_INTERRUPTS(gt) ___XE_PMU_OTHER(gt, 0)
+#define XE_PMU_RENDER_GROUP_BUSY(gt) ___XE_PMU_OTHER(gt, 1)
+#define XE_PMU_COPY_GROUP_BUSY(gt) ___XE_PMU_OTHER(gt, 2)
+#define XE_PMU_MEDIA_GROUP_BUSY(gt) ___XE_PMU_OTHER(gt, 3)
+#define XE_PMU_ANY_ENGINE_GROUP_BUSY(gt) ___XE_PMU_OTHER(gt, 4)
+
#if defined(__cplusplus)
}
#endif
--
2.41.0
* Re: [igt-dev] [PATCH i-g-t 01/16] drm-uapi/xe_drm: Align with new PMU interface
2023-09-19 14:19 ` [igt-dev] [PATCH i-g-t 01/16] drm-uapi/xe_drm: Align with new PMU interface Rodrigo Vivi
@ 2023-09-20 8:28 ` Aravind Iddamsetty
2023-09-20 19:15 ` [igt-dev] [Intel-xe] " Rodrigo Vivi
0 siblings, 1 reply; 23+ messages in thread
From: Aravind Iddamsetty @ 2023-09-20 8:28 UTC (permalink / raw)
To: Rodrigo Vivi, intel-xe, igt-dev
On 19/09/23 19:49, Rodrigo Vivi wrote:
Hi Rodrigo,
can you please pick up the latest xe_drm.h header? I made a small fix to a comment.
Thanks,
Aravind.
* Re: [igt-dev] [Intel-xe] [PATCH i-g-t 01/16] drm-uapi/xe_drm: Align with new PMU interface
2023-09-20 8:28 ` Aravind Iddamsetty
@ 2023-09-20 19:15 ` Rodrigo Vivi
0 siblings, 0 replies; 23+ messages in thread
From: Rodrigo Vivi @ 2023-09-20 19:15 UTC (permalink / raw)
To: Aravind Iddamsetty; +Cc: igt-dev, intel-xe
On Wed, Sep 20, 2023 at 01:58:57PM +0530, Aravind Iddamsetty wrote:
>
> On 19/09/23 19:49, Rodrigo Vivi wrote:
>
> Hi Rodrigo,
>
> can you please pick the latest header xe_drm.h I did a small fix for comment.
Sure, I got it locally here; it will be in the next revision.
* [igt-dev] [PATCH i-g-t 02/16] tests/intel/xe_query: Add a test for querying cs cycles
2023-09-19 14:19 [igt-dev] [PATCH i-g-t 00/16] uAPI Alignment - take 1 Rodrigo Vivi
2023-09-19 14:19 ` [igt-dev] [PATCH i-g-t 01/16] drm-uapi/xe_drm: Align with new PMU interface Rodrigo Vivi
@ 2023-09-19 14:19 ` Rodrigo Vivi
2023-09-19 14:19 ` [igt-dev] [PATCH i-g-t 03/16] drm-uapi/xe_drm: Separate VM_BIND's operation and flag, align with latest uapi Rodrigo Vivi
` (16 subsequent siblings)
18 siblings, 0 replies; 23+ messages in thread
From: Rodrigo Vivi @ 2023-09-19 14:19 UTC (permalink / raw)
To: intel-xe, igt-dev; +Cc: Rodrigo Vivi
From: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
The DRM_XE_QUERY_CS_CYCLES query provides a way for the user to obtain
CPU and GPU timestamps as close to each other as possible.
Add a test that queries cs cycles for GPU/CPU time correlation, and that
validates the query parameters.
Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
include/drm-uapi/xe_drm.h | 95 ++++++++++++++----
tests/intel/xe_query.c | 200 ++++++++++++++++++++++++++++++++++++++
2 files changed, 277 insertions(+), 18 deletions(-)
diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index 6aaa8517c..f96c84a98 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -128,6 +128,24 @@ struct xe_user_extension {
#define DRM_IOCTL_XE_WAIT_USER_FENCE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_WAIT_USER_FENCE, struct drm_xe_wait_user_fence)
#define DRM_IOCTL_XE_VM_MADVISE DRM_IOW(DRM_COMMAND_BASE + DRM_XE_VM_MADVISE, struct drm_xe_vm_madvise)
+/** struct drm_xe_engine_class_instance - instance of an engine class */
+struct drm_xe_engine_class_instance {
+#define DRM_XE_ENGINE_CLASS_RENDER 0
+#define DRM_XE_ENGINE_CLASS_COPY 1
+#define DRM_XE_ENGINE_CLASS_VIDEO_DECODE 2
+#define DRM_XE_ENGINE_CLASS_VIDEO_ENHANCE 3
+#define DRM_XE_ENGINE_CLASS_COMPUTE 4
+ /*
+ * Kernel only class (not actual hardware engine class). Used for
+ * creating ordered queues of VM bind operations.
+ */
+#define DRM_XE_ENGINE_CLASS_VM_BIND 5
+ __u16 engine_class;
+
+ __u16 engine_instance;
+ __u16 gt_id;
+};
+
/**
* enum drm_xe_memory_class - Supported memory classes.
*/
@@ -219,6 +237,64 @@ struct drm_xe_query_mem_region {
__u64 reserved[6];
};
+/**
+ * struct drm_xe_query_cs_cycles - correlate CPU and GPU timestamps
+ *
+ * If a query is made with a struct drm_xe_device_query where .query
+ * is equal to DRM_XE_QUERY_CS_CYCLES, then the reply uses
+ * struct drm_xe_query_cs_cycles in .data.
+ *
+ * struct drm_xe_query_cs_cycles is allocated by the user and .data points to
+ * this allocated structure. The user must pass .eci and .clockid as inputs to
+ * this query. eci determines the engine and tile info required to fetch the
+ * relevant GPU timestamp. clockid is used to return the specific CPU
+ * timestamp.
+ *
+ * The query returns the command streamer cycles and the frequency that can
+ * be used to calculate the command streamer timestamp. In addition the
+ * query returns a set of cpu timestamps that indicate when the command
+ * streamer cycle count was captured.
+ */
+struct drm_xe_query_cs_cycles {
+ /** Engine for which command streamer cycles is queried. */
+ struct drm_xe_engine_class_instance eci;
+
+ /** MBZ (pad eci to 64 bit) */
+ __u16 rsvd;
+
+ /**
+ * Command streamer cycles as read from the command streamer
+ * register at 0x358 offset.
+ */
+ __u64 cs_cycles;
+
+ /** Frequency of the cs cycles in Hz. */
+ __u64 cs_frequency;
+
+ /**
+ * CPU timestamp in ns. The timestamp is captured before reading the
+ * cs_cycles register using the reference clockid set by the user.
+ */
+ __u64 cpu_timestamp;
+
+ /**
+ * Time delta in ns captured around reading the lower dword of the
+ * cs_cycles register.
+ */
+ __u64 cpu_delta;
+
+ /**
+ * Reference clock id for CPU timestamp. For definition, see
+ * clock_gettime(2) and perf_event_open(2). Supported clock ids are
+ * CLOCK_MONOTONIC, CLOCK_MONOTONIC_RAW, CLOCK_REALTIME, CLOCK_BOOTTIME,
+ * CLOCK_TAI.
+ */
+ __s32 clockid;
+
+ /** Width of the cs cycle counter in bits. */
+ __u32 width;
+};
+
/**
* struct drm_xe_query_mem_usage - describe memory regions and usage
*
@@ -391,6 +467,7 @@ struct drm_xe_device_query {
#define DRM_XE_DEVICE_QUERY_GTS 3
#define DRM_XE_DEVICE_QUERY_HWCONFIG 4
#define DRM_XE_DEVICE_QUERY_GT_TOPOLOGY 5
+#define DRM_XE_QUERY_CS_CYCLES 6
/** @query: The type of data to query */
__u32 query;
@@ -732,24 +809,6 @@ struct drm_xe_exec_queue_set_property {
__u64 reserved[2];
};
-/** struct drm_xe_engine_class_instance - instance of an engine class */
-struct drm_xe_engine_class_instance {
-#define DRM_XE_ENGINE_CLASS_RENDER 0
-#define DRM_XE_ENGINE_CLASS_COPY 1
-#define DRM_XE_ENGINE_CLASS_VIDEO_DECODE 2
-#define DRM_XE_ENGINE_CLASS_VIDEO_ENHANCE 3
-#define DRM_XE_ENGINE_CLASS_COMPUTE 4
- /*
- * Kernel only class (not actual hardware engine class). Used for
- * creating ordered queues of VM bind operations.
- */
-#define DRM_XE_ENGINE_CLASS_VM_BIND 5
- __u16 engine_class;
-
- __u16 engine_instance;
- __u16 gt_id;
-};
-
struct drm_xe_exec_queue_create {
#define XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY 0
/** @extensions: Pointer to the first extension struct, if any */
diff --git a/tests/intel/xe_query.c b/tests/intel/xe_query.c
index 5966968d3..acf069f46 100644
--- a/tests/intel/xe_query.c
+++ b/tests/intel/xe_query.c
@@ -476,6 +476,200 @@ test_query_invalid_extension(int fd)
do_ioctl_err(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query, EINVAL);
}
+static bool
+query_cs_cycles_supported(int fd)
+{
+ struct drm_xe_device_query query = {
+ .extensions = 0,
+ .query = DRM_XE_QUERY_CS_CYCLES,
+ .size = 0,
+ .data = 0,
+ };
+
+ return igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query) == 0;
+}
+
+static void
+query_cs_cycles(int fd, struct drm_xe_query_cs_cycles *resp)
+{
+ struct drm_xe_device_query query = {
+ .extensions = 0,
+ .query = DRM_XE_QUERY_CS_CYCLES,
+ .size = sizeof(*resp),
+ .data = to_user_pointer(resp),
+ };
+
+ do_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query);
+ igt_assert(query.size);
+}
+
+static void
+__cs_cycles(int fd, struct drm_xe_engine_class_instance *hwe)
+{
+ struct drm_xe_query_cs_cycles ts1 = {};
+ struct drm_xe_query_cs_cycles ts2 = {};
+ uint64_t delta_cpu, delta_cs, delta_delta;
+ unsigned int exec_queue;
+ int i, usable = 0;
+ igt_spin_t *spin;
+ uint64_t ahnd;
+ uint32_t vm;
+ struct {
+ int32_t id;
+ const char *name;
+ } clock[] = {
+ { CLOCK_MONOTONIC, "CLOCK_MONOTONIC" },
+ { CLOCK_MONOTONIC_RAW, "CLOCK_MONOTONIC_RAW" },
+ { CLOCK_REALTIME, "CLOCK_REALTIME" },
+ { CLOCK_BOOTTIME, "CLOCK_BOOTTIME" },
+ { CLOCK_TAI, "CLOCK_TAI" },
+ };
+
+ igt_debug("engine[%u:%u]\n",
+ hwe->engine_class,
+ hwe->engine_instance);
+
+ vm = xe_vm_create(fd, 0, 0);
+ exec_queue = xe_exec_queue_create(fd, vm, hwe, 0);
+ ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_RELOC);
+ spin = igt_spin_new(fd, .ahnd = ahnd, .engine = exec_queue, .vm = vm);
+
+ /* Try a new clock every 10 iterations. */
+#define NUM_SNAPSHOTS 10
+ for (i = 0; i < NUM_SNAPSHOTS * ARRAY_SIZE(clock); i++) {
+ int index = i / NUM_SNAPSHOTS;
+
+ ts1.eci = *hwe;
+ ts1.clockid = clock[index].id;
+
+ ts2.eci = *hwe;
+ ts2.clockid = clock[index].id;
+
+ query_cs_cycles(fd, &ts1);
+ query_cs_cycles(fd, &ts2);
+
+ igt_debug("[1] cpu_ts before %llu, reg read time %llu\n",
+ ts1.cpu_timestamp,
+ ts1.cpu_delta);
+ igt_debug("[1] cs_ts %llu, freq %llu Hz, width %u\n",
+ ts1.cs_cycles, ts1.cs_frequency, ts1.width);
+
+ igt_debug("[2] cpu_ts before %llu, reg read time %llu\n",
+ ts2.cpu_timestamp,
+ ts2.cpu_delta);
+ igt_debug("[2] cs_ts %llu, freq %llu Hz, width %u\n",
+ ts2.cs_cycles, ts2.cs_frequency, ts2.width);
+
+ delta_cpu = ts2.cpu_timestamp - ts1.cpu_timestamp;
+
+ if (ts2.cs_cycles >= ts1.cs_cycles)
+ delta_cs = (ts2.cs_cycles - ts1.cs_cycles) *
+ NSEC_PER_SEC / ts1.cs_frequency;
+ else
+ delta_cs = (((1ull << ts1.width) - ts1.cs_cycles) + ts2.cs_cycles) *
+ NSEC_PER_SEC / ts1.cs_frequency;
+
+ igt_debug("delta_cpu[%lu], delta_cs[%lu]\n",
+ delta_cpu, delta_cs);
+
+ delta_delta = delta_cpu > delta_cs ?
+ delta_cpu - delta_cs :
+ delta_cs - delta_cpu;
+ igt_debug("delta_delta %lu\n", delta_delta);
+
+ if (delta_delta < 5000)
+ usable++;
+
+ /*
+ * User needs few good snapshots of the timestamps to
+ * synchronize cpu time with cs time. Check if we have enough
+ * usable values before moving to the next clockid.
+ */
+ if (!((i + 1) % NUM_SNAPSHOTS)) {
+ igt_debug("clock %s\n", clock[index].name);
+ igt_debug("usable %d\n", usable);
+ igt_assert(usable > 2);
+ usable = 0;
+ }
+ }
+
+ igt_spin_free(fd, spin);
+ xe_exec_queue_destroy(fd, exec_queue);
+ xe_vm_destroy(fd, vm);
+ put_ahnd(ahnd);
+}
+
+/**
+ * SUBTEST: query-cs-cycles
+ * Description: Query CPU-GPU timestamp correlation
+ */
+static void test_query_cs_cycles(int fd)
+{
+ struct drm_xe_engine_class_instance *hwe;
+
+ igt_require(query_cs_cycles_supported(fd));
+
+ xe_for_each_hw_engine(fd, hwe) {
+ igt_assert(hwe);
+ __cs_cycles(fd, hwe);
+ }
+}
+
+/**
+ * SUBTEST: query-invalid-cs-cycles
+ * Description: Check query with invalid arguments returns expected error code.
+ */
+static void test_cs_cycles_invalid(int fd)
+{
+ struct drm_xe_engine_class_instance *hwe;
+ struct drm_xe_query_cs_cycles ts = {};
+ struct drm_xe_device_query query = {
+ .extensions = 0,
+ .query = DRM_XE_QUERY_CS_CYCLES,
+ .size = sizeof(ts),
+ .data = to_user_pointer(&ts),
+ };
+
+ igt_require(query_cs_cycles_supported(fd));
+
+ /* get one engine */
+ xe_for_each_hw_engine(fd, hwe)
+ break;
+
+ /* sanity check engine selection is valid */
+ ts.eci = *hwe;
+ query_cs_cycles(fd, &ts);
+
+ /* bad instance */
+ ts.eci = *hwe;
+ ts.eci.engine_instance = 0xffff;
+ do_ioctl_err(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query, EINVAL);
+ ts.eci = *hwe;
+
+ /* bad class */
+ ts.eci.engine_class = 0xffff;
+ do_ioctl_err(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query, EINVAL);
+ ts.eci = *hwe;
+
+ /* bad gt */
+ ts.eci.gt_id = 0xffff;
+ do_ioctl_err(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query, EINVAL);
+ ts.eci = *hwe;
+
+ /* non zero rsvd field */
+ ts.rsvd = 1;
+ do_ioctl_err(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query, EINVAL);
+ ts.rsvd = 0;
+
+ /* bad clockid */
+ ts.clockid = -1;
+ do_ioctl_err(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query, EINVAL);
+ ts.clockid = 0;
+
+ /* sanity check */
+ query_cs_cycles(fd, &ts);
+}
+
igt_main
{
int xe;
@@ -501,6 +695,12 @@ igt_main
igt_subtest("query-topology")
test_query_gt_topology(xe);
+ igt_subtest("query-cs-cycles")
+ test_query_cs_cycles(xe);
+
+ igt_subtest("query-invalid-cs-cycles")
+ test_cs_cycles_invalid(xe);
+
igt_subtest("query-invalid-query")
test_query_invalid_query(xe);
--
2.41.0
* [igt-dev] [PATCH i-g-t 03/16] drm-uapi/xe_drm: Separate VM_BIND's operation and flag, align with latest uapi
2023-09-19 14:19 [igt-dev] [PATCH i-g-t 00/16] uAPI Alignment - take 1 Rodrigo Vivi
2023-09-19 14:19 ` [igt-dev] [PATCH i-g-t 01/16] drm-uapi/xe_drm: Align with new PMU interface Rodrigo Vivi
2023-09-19 14:19 ` [igt-dev] [PATCH i-g-t 02/16] tests/intel/xe_query: Add a test for querying cs cycles Rodrigo Vivi
@ 2023-09-19 14:19 ` Rodrigo Vivi
2023-09-19 15:31 ` Matthew Brost
2023-09-19 14:19 ` [igt-dev] [PATCH i-g-t 04/16] drm-uapi/xe_drm: Remove MMIO ioctl and " Rodrigo Vivi
` (15 subsequent siblings)
18 siblings, 1 reply; 23+ messages in thread
From: Rodrigo Vivi @ 2023-09-19 14:19 UTC (permalink / raw)
To: intel-xe, igt-dev; +Cc: Rodrigo Vivi
From: Francois Dugast <francois.dugast@intel.com>
Align with commit ("drm/xe/uapi: Separate VM_BIND's operation and flag")
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
include/drm-uapi/xe_drm.h | 14 ++++++++------
lib/intel_batchbuffer.c | 11 +++++++----
lib/xe/xe_ioctl.c | 31 ++++++++++++++++---------------
lib/xe/xe_ioctl.h | 6 +++---
lib/xe/xe_util.c | 9 ++++++---
tests/intel/xe_exec_basic.c | 2 +-
tests/intel/xe_exec_threads.c | 2 +-
tests/intel/xe_vm.c | 16 +++++++++-------
8 files changed, 51 insertions(+), 40 deletions(-)
diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index f96c84a98..078edd9f8 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -663,8 +663,10 @@ struct drm_xe_vm_bind_op {
#define XE_VM_BIND_OP_RESTART 0x3
#define XE_VM_BIND_OP_UNMAP_ALL 0x4
#define XE_VM_BIND_OP_PREFETCH 0x5
+ /** @op: Bind operation to perform */
+ __u32 op;
-#define XE_VM_BIND_FLAG_READONLY (0x1 << 16)
+#define XE_VM_BIND_FLAG_READONLY (0x1 << 0)
/*
* A bind ops completions are always async, hence the support for out
* sync. This flag indicates the allocation of the memory for new page
@@ -689,12 +691,12 @@ struct drm_xe_vm_bind_op {
* configured in the VM and must be set if the VM is configured with
* DRM_XE_VM_CREATE_ASYNC_BIND_OPS and not in an error state.
*/
-#define XE_VM_BIND_FLAG_ASYNC (0x1 << 17)
+#define XE_VM_BIND_FLAG_ASYNC (0x1 << 1)
/*
* Valid on a faulting VM only, do the MAP operation immediately rather
* than deferring the MAP to the page fault handler.
*/
-#define XE_VM_BIND_FLAG_IMMEDIATE (0x1 << 18)
+#define XE_VM_BIND_FLAG_IMMEDIATE (0x1 << 2)
/*
* When the NULL flag is set, the page tables are setup with a special
* bit which indicates writes are dropped and all reads return zero. In
@@ -702,9 +704,9 @@ struct drm_xe_vm_bind_op {
* operations, the BO handle MBZ, and the BO offset MBZ. This flag is
* intended to implement VK sparse bindings.
*/
-#define XE_VM_BIND_FLAG_NULL (0x1 << 19)
- /** @op: Operation to perform (lower 16 bits) and flags (upper 16 bits) */
- __u32 op;
+#define XE_VM_BIND_FLAG_NULL (0x1 << 3)
+ /** @flags: Bind flags */
+ __u32 flags;
/** @mem_region: Memory region to prefetch VMA to, instance not a mask */
__u32 region;
diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
index e7b1b755f..6e668d28c 100644
--- a/lib/intel_batchbuffer.c
+++ b/lib/intel_batchbuffer.c
@@ -1281,7 +1281,8 @@ void intel_bb_destroy(struct intel_bb *ibb)
}
static struct drm_xe_vm_bind_op *xe_alloc_bind_ops(struct intel_bb *ibb,
- uint32_t op, uint32_t region)
+ uint32_t op, uint32_t flags,
+ uint32_t region)
{
struct drm_i915_gem_exec_object2 **objects = ibb->objects;
struct drm_xe_vm_bind_op *bind_ops, *ops;
@@ -1298,6 +1299,7 @@ static struct drm_xe_vm_bind_op *xe_alloc_bind_ops(struct intel_bb *ibb,
ops->obj = objects[i]->handle;
ops->op = op;
+ ops->flags = flags;
ops->obj_offset = 0;
ops->addr = objects[i]->offset;
ops->range = objects[i]->rsvd1;
@@ -1323,9 +1325,10 @@ static void __unbind_xe_objects(struct intel_bb *ibb)
if (ibb->num_objects > 1) {
struct drm_xe_vm_bind_op *bind_ops;
- uint32_t op = XE_VM_BIND_OP_UNMAP | XE_VM_BIND_FLAG_ASYNC;
+ uint32_t op = XE_VM_BIND_OP_UNMAP;
+ uint32_t flags = XE_VM_BIND_FLAG_ASYNC;
- bind_ops = xe_alloc_bind_ops(ibb, op, 0);
+ bind_ops = xe_alloc_bind_ops(ibb, op, flags, 0);
xe_vm_bind_array(ibb->fd, ibb->vm_id, 0, bind_ops,
ibb->num_objects, syncs, 2);
free(bind_ops);
@@ -2354,7 +2357,7 @@ __xe_bb_exec(struct intel_bb *ibb, uint64_t flags, bool sync)
syncs[0].handle = syncobj_create(ibb->fd, 0);
if (ibb->num_objects > 1) {
- bind_ops = xe_alloc_bind_ops(ibb, XE_VM_BIND_OP_MAP | XE_VM_BIND_FLAG_ASYNC, 0);
+ bind_ops = xe_alloc_bind_ops(ibb, XE_VM_BIND_OP_MAP, XE_VM_BIND_FLAG_ASYNC, 0);
xe_vm_bind_array(ibb->fd, ibb->vm_id, 0, bind_ops,
ibb->num_objects, syncs, 1);
free(bind_ops);
diff --git a/lib/xe/xe_ioctl.c b/lib/xe/xe_ioctl.c
index 730dcfd16..48cd185de 100644
--- a/lib/xe/xe_ioctl.c
+++ b/lib/xe/xe_ioctl.c
@@ -67,7 +67,7 @@ void xe_vm_unbind_all_async(int fd, uint32_t vm, uint32_t exec_queue,
uint32_t num_syncs)
{
__xe_vm_bind_assert(fd, vm, exec_queue, bo, 0, 0, 0,
- XE_VM_BIND_OP_UNMAP_ALL | XE_VM_BIND_FLAG_ASYNC,
+ XE_VM_BIND_OP_UNMAP_ALL, XE_VM_BIND_FLAG_ASYNC,
sync, num_syncs, 0, 0);
}
@@ -91,8 +91,8 @@ void xe_vm_bind_array(int fd, uint32_t vm, uint32_t exec_queue,
int __xe_vm_bind(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
uint64_t offset, uint64_t addr, uint64_t size, uint32_t op,
- struct drm_xe_sync *sync, uint32_t num_syncs, uint32_t region,
- uint64_t ext)
+ uint32_t flags, struct drm_xe_sync *sync, uint32_t num_syncs,
+ uint32_t region, uint64_t ext)
{
struct drm_xe_vm_bind bind = {
.extensions = ext,
@@ -103,6 +103,7 @@ int __xe_vm_bind(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
.bind.range = size,
.bind.addr = addr,
.bind.op = op,
+ .bind.flags = flags,
.bind.region = region,
.num_syncs = num_syncs,
.syncs = (uintptr_t)sync,
@@ -117,11 +118,11 @@ int __xe_vm_bind(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
void __xe_vm_bind_assert(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
uint64_t offset, uint64_t addr, uint64_t size,
- uint32_t op, struct drm_xe_sync *sync,
+ uint32_t op, uint32_t flags, struct drm_xe_sync *sync,
uint32_t num_syncs, uint32_t region, uint64_t ext)
{
igt_assert_eq(__xe_vm_bind(fd, vm, exec_queue, bo, offset, addr, size,
- op, sync, num_syncs, region, ext), 0);
+ op, flags, sync, num_syncs, region, ext), 0);
}
void xe_vm_bind(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
@@ -129,7 +130,7 @@ void xe_vm_bind(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
struct drm_xe_sync *sync, uint32_t num_syncs)
{
__xe_vm_bind_assert(fd, vm, 0, bo, offset, addr, size,
- XE_VM_BIND_OP_MAP, sync, num_syncs, 0, 0);
+ XE_VM_BIND_OP_MAP, 0, sync, num_syncs, 0, 0);
}
void xe_vm_unbind(int fd, uint32_t vm, uint64_t offset,
@@ -137,7 +138,7 @@ void xe_vm_unbind(int fd, uint32_t vm, uint64_t offset,
struct drm_xe_sync *sync, uint32_t num_syncs)
{
__xe_vm_bind_assert(fd, vm, 0, 0, offset, addr, size,
- XE_VM_BIND_OP_UNMAP, sync, num_syncs, 0, 0);
+ XE_VM_BIND_OP_UNMAP, 0, sync, num_syncs, 0, 0);
}
void xe_vm_prefetch_async(int fd, uint32_t vm, uint32_t exec_queue, uint64_t offset,
@@ -146,7 +147,7 @@ void xe_vm_prefetch_async(int fd, uint32_t vm, uint32_t exec_queue, uint64_t off
uint32_t region)
{
__xe_vm_bind_assert(fd, vm, exec_queue, 0, offset, addr, size,
- XE_VM_BIND_OP_PREFETCH | XE_VM_BIND_FLAG_ASYNC,
+ XE_VM_BIND_OP_PREFETCH, XE_VM_BIND_FLAG_ASYNC,
sync, num_syncs, region, 0);
}
@@ -155,7 +156,7 @@ void xe_vm_bind_async(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
struct drm_xe_sync *sync, uint32_t num_syncs)
{
__xe_vm_bind_assert(fd, vm, exec_queue, bo, offset, addr, size,
- XE_VM_BIND_OP_MAP | XE_VM_BIND_FLAG_ASYNC, sync,
+ XE_VM_BIND_OP_MAP, XE_VM_BIND_FLAG_ASYNC, sync,
num_syncs, 0, 0);
}
@@ -165,7 +166,7 @@ void xe_vm_bind_async_flags(int fd, uint32_t vm, uint32_t exec_queue, uint32_t b
uint32_t flags)
{
__xe_vm_bind_assert(fd, vm, exec_queue, bo, offset, addr, size,
- XE_VM_BIND_OP_MAP | XE_VM_BIND_FLAG_ASYNC | flags,
+ XE_VM_BIND_OP_MAP, XE_VM_BIND_FLAG_ASYNC | flags,
sync, num_syncs, 0, 0);
}
@@ -174,7 +175,7 @@ void xe_vm_bind_userptr_async(int fd, uint32_t vm, uint32_t exec_queue,
struct drm_xe_sync *sync, uint32_t num_syncs)
{
__xe_vm_bind_assert(fd, vm, exec_queue, 0, userptr, addr, size,
- XE_VM_BIND_OP_MAP_USERPTR | XE_VM_BIND_FLAG_ASYNC,
+ XE_VM_BIND_OP_MAP_USERPTR, XE_VM_BIND_FLAG_ASYNC,
sync, num_syncs, 0, 0);
}
@@ -184,7 +185,7 @@ void xe_vm_bind_userptr_async_flags(int fd, uint32_t vm, uint32_t exec_queue,
uint32_t num_syncs, uint32_t flags)
{
__xe_vm_bind_assert(fd, vm, exec_queue, 0, userptr, addr, size,
- XE_VM_BIND_OP_MAP_USERPTR | XE_VM_BIND_FLAG_ASYNC |
+ XE_VM_BIND_OP_MAP_USERPTR, XE_VM_BIND_FLAG_ASYNC |
flags, sync, num_syncs, 0, 0);
}
@@ -193,7 +194,7 @@ void xe_vm_unbind_async(int fd, uint32_t vm, uint32_t exec_queue,
struct drm_xe_sync *sync, uint32_t num_syncs)
{
__xe_vm_bind_assert(fd, vm, exec_queue, 0, offset, addr, size,
- XE_VM_BIND_OP_UNMAP | XE_VM_BIND_FLAG_ASYNC, sync,
+ XE_VM_BIND_OP_UNMAP, XE_VM_BIND_FLAG_ASYNC, sync,
num_syncs, 0, 0);
}
@@ -205,8 +206,8 @@ static void __xe_vm_bind_sync(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
.handle = syncobj_create(fd, 0),
};
- __xe_vm_bind_assert(fd, vm, 0, bo, offset, addr, size, op, &sync, 1, 0,
- 0);
+ __xe_vm_bind_assert(fd, vm, 0, bo, offset, addr, size, op, 0, &sync, 1,
+ 0, 0);
igt_assert(syncobj_wait(fd, &sync.handle, 1, INT64_MAX, 0, NULL));
syncobj_destroy(fd, sync.handle);
diff --git a/lib/xe/xe_ioctl.h b/lib/xe/xe_ioctl.h
index 6c281b3bf..f0e4109dc 100644
--- a/lib/xe/xe_ioctl.h
+++ b/lib/xe/xe_ioctl.h
@@ -19,11 +19,11 @@ uint32_t xe_cs_prefetch_size(int fd);
uint32_t xe_vm_create(int fd, uint32_t flags, uint64_t ext);
int __xe_vm_bind(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
uint64_t offset, uint64_t addr, uint64_t size, uint32_t op,
- struct drm_xe_sync *sync, uint32_t num_syncs, uint32_t region,
- uint64_t ext);
+ uint32_t flags, struct drm_xe_sync *sync, uint32_t num_syncs,
+ uint32_t region, uint64_t ext);
void __xe_vm_bind_assert(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
uint64_t offset, uint64_t addr, uint64_t size,
- uint32_t op, struct drm_xe_sync *sync,
+ uint32_t op, uint32_t flags, struct drm_xe_sync *sync,
uint32_t num_syncs, uint32_t region, uint64_t ext);
void xe_vm_bind(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
uint64_t addr, uint64_t size,
diff --git a/lib/xe/xe_util.c b/lib/xe/xe_util.c
index 2f9ffe2f1..5fa4d4610 100644
--- a/lib/xe/xe_util.c
+++ b/lib/xe/xe_util.c
@@ -116,7 +116,7 @@ static struct drm_xe_vm_bind_op *xe_alloc_bind_ops(struct igt_list_head *obj_lis
{
struct drm_xe_vm_bind_op *bind_ops, *ops;
struct xe_object *obj;
- uint32_t num_objects = 0, i = 0, op;
+ uint32_t num_objects = 0, i = 0, op, flags;
igt_list_for_each_entry(obj, obj_list, link)
num_objects++;
@@ -134,13 +134,16 @@ static struct drm_xe_vm_bind_op *xe_alloc_bind_ops(struct igt_list_head *obj_lis
ops = &bind_ops[i];
if (obj->bind_op == XE_OBJECT_BIND) {
- op = XE_VM_BIND_OP_MAP | XE_VM_BIND_FLAG_ASYNC;
+ op = XE_VM_BIND_OP_MAP;
+ flags = XE_VM_BIND_FLAG_ASYNC;
ops->obj = obj->handle;
} else {
- op = XE_VM_BIND_OP_UNMAP | XE_VM_BIND_FLAG_ASYNC;
+ op = XE_VM_BIND_OP_UNMAP;
+ flags = XE_VM_BIND_FLAG_ASYNC;
}
ops->op = op;
+ ops->flags = flags;
ops->obj_offset = 0;
ops->addr = obj->offset;
ops->range = obj->size;
diff --git a/tests/intel/xe_exec_basic.c b/tests/intel/xe_exec_basic.c
index a4414e052..e29398aaa 100644
--- a/tests/intel/xe_exec_basic.c
+++ b/tests/intel/xe_exec_basic.c
@@ -170,7 +170,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
if (flags & SPARSE)
__xe_vm_bind_assert(fd, vm[i], bind_exec_queues[i],
0, 0, sparse_addr[i], bo_size,
- XE_VM_BIND_OP_MAP |
+ XE_VM_BIND_OP_MAP,
XE_VM_BIND_FLAG_ASYNC |
XE_VM_BIND_FLAG_NULL, sync,
1, 0, 0);
diff --git a/tests/intel/xe_exec_threads.c b/tests/intel/xe_exec_threads.c
index 12e76874e..1f9af894f 100644
--- a/tests/intel/xe_exec_threads.c
+++ b/tests/intel/xe_exec_threads.c
@@ -609,7 +609,7 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
if (rebind_error_inject == i)
__xe_vm_bind_assert(fd, vm, bind_exec_queues[e],
0, 0, addr, bo_size,
- XE_VM_BIND_OP_UNMAP |
+ XE_VM_BIND_OP_UNMAP,
XE_VM_BIND_FLAG_ASYNC |
INJECT_ERROR, sync_all,
n_exec_queues, 0, 0);
diff --git a/tests/intel/xe_vm.c b/tests/intel/xe_vm.c
index 4952ea786..f96305851 100644
--- a/tests/intel/xe_vm.c
+++ b/tests/intel/xe_vm.c
@@ -316,7 +316,7 @@ static void userptr_invalid(int fd)
vm = xe_vm_create(fd, 0, 0);
munmap(data, size);
ret = __xe_vm_bind(fd, vm, 0, 0, to_user_pointer(data), 0x40000,
- size, XE_VM_BIND_OP_MAP_USERPTR, NULL, 0, 0, 0);
+ size, XE_VM_BIND_OP_MAP_USERPTR, 0, NULL, 0, 0, 0);
igt_assert(ret == -EFAULT);
xe_vm_destroy(fd, vm);
@@ -437,7 +437,7 @@ static void vm_async_ops_err(int fd, bool destroy)
if (i == N_BINDS / 8) /* Inject error on this bind */
__xe_vm_bind_assert(fd, vm, 0, bo, 0,
addr + i * bo_size * 2,
- bo_size, XE_VM_BIND_OP_MAP |
+ bo_size, XE_VM_BIND_OP_MAP,
XE_VM_BIND_FLAG_ASYNC |
INJECT_ERROR, &sync, 1, 0, 0);
else
@@ -451,7 +451,7 @@ static void vm_async_ops_err(int fd, bool destroy)
if (i == N_BINDS / 8)
__xe_vm_bind_assert(fd, vm, 0, 0, 0,
addr + i * bo_size * 2,
- bo_size, XE_VM_BIND_OP_UNMAP |
+ bo_size, XE_VM_BIND_OP_UNMAP,
XE_VM_BIND_FLAG_ASYNC |
INJECT_ERROR, &sync, 1, 0, 0);
else
@@ -465,7 +465,7 @@ static void vm_async_ops_err(int fd, bool destroy)
if (i == N_BINDS / 8)
__xe_vm_bind_assert(fd, vm, 0, bo, 0,
addr + i * bo_size * 2,
- bo_size, XE_VM_BIND_OP_MAP |
+ bo_size, XE_VM_BIND_OP_MAP,
XE_VM_BIND_FLAG_ASYNC |
INJECT_ERROR, &sync, 1, 0, 0);
else
@@ -479,7 +479,7 @@ static void vm_async_ops_err(int fd, bool destroy)
if (i == N_BINDS / 8)
__xe_vm_bind_assert(fd, vm, 0, 0, 0,
addr + i * bo_size * 2,
- bo_size, XE_VM_BIND_OP_UNMAP |
+ bo_size, XE_VM_BIND_OP_UNMAP,
XE_VM_BIND_FLAG_ASYNC |
INJECT_ERROR, &sync, 1, 0, 0);
else
@@ -928,7 +928,8 @@ test_bind_array(int fd, struct drm_xe_engine_class_instance *eci, int n_execs,
bind_ops[i].range = bo_size;
bind_ops[i].addr = addr;
bind_ops[i].tile_mask = 0x1 << eci->gt_id;
- bind_ops[i].op = XE_VM_BIND_OP_MAP | XE_VM_BIND_FLAG_ASYNC;
+ bind_ops[i].op = XE_VM_BIND_OP_MAP;
+ bind_ops[i].flags = XE_VM_BIND_FLAG_ASYNC;
bind_ops[i].region = 0;
bind_ops[i].reserved[0] = 0;
bind_ops[i].reserved[1] = 0;
@@ -972,7 +973,8 @@ test_bind_array(int fd, struct drm_xe_engine_class_instance *eci, int n_execs,
for (i = 0; i < n_execs; ++i) {
bind_ops[i].obj = 0;
- bind_ops[i].op = XE_VM_BIND_OP_UNMAP | XE_VM_BIND_FLAG_ASYNC;
+ bind_ops[i].op = XE_VM_BIND_OP_UNMAP;
+ bind_ops[i].flags = XE_VM_BIND_FLAG_ASYNC;
}
syncobj_reset(fd, &sync[0].handle, 1);
--
2.41.0
* Re: [igt-dev] [PATCH i-g-t 03/16] drm-uapi/xe_drm: Separate VM_BIND's operation and flag, align with latest uapi
2023-09-19 14:19 ` [igt-dev] [PATCH i-g-t 03/16] drm-uapi/xe_drm: Separate VM_BIND's operation and flag, align with latest uapi Rodrigo Vivi
@ 2023-09-19 15:31 ` Matthew Brost
0 siblings, 0 replies; 23+ messages in thread
From: Matthew Brost @ 2023-09-19 15:31 UTC (permalink / raw)
To: Rodrigo Vivi; +Cc: igt-dev, intel-xe
On Tue, Sep 19, 2023 at 10:19:46AM -0400, Rodrigo Vivi wrote:
> From: Francois Dugast <francois.dugast@intel.com>
>
> Align with commit ("drm/xe/uapi: Separate VM_BIND's operation and flag")
>
> Signed-off-by: Francois Dugast <francois.dugast@intel.com>
> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
> ---
> include/drm-uapi/xe_drm.h | 14 ++++++++------
> lib/intel_batchbuffer.c | 11 +++++++----
> lib/xe/xe_ioctl.c | 31 ++++++++++++++++---------------
> lib/xe/xe_ioctl.h | 6 +++---
> lib/xe/xe_util.c | 9 ++++++---
> tests/intel/xe_exec_basic.c | 2 +-
> tests/intel/xe_exec_threads.c | 2 +-
> tests/intel/xe_vm.c | 16 +++++++++-------
> 8 files changed, 51 insertions(+), 40 deletions(-)
>
> diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
> index f96c84a98..078edd9f8 100644
> --- a/include/drm-uapi/xe_drm.h
> +++ b/include/drm-uapi/xe_drm.h
> @@ -663,8 +663,10 @@ struct drm_xe_vm_bind_op {
> #define XE_VM_BIND_OP_RESTART 0x3
> #define XE_VM_BIND_OP_UNMAP_ALL 0x4
> #define XE_VM_BIND_OP_PREFETCH 0x5
> + /** @op: Bind operation to perform */
> + __u32 op;
>
> -#define XE_VM_BIND_FLAG_READONLY (0x1 << 16)
> +#define XE_VM_BIND_FLAG_READONLY (0x1 << 0)
> /*
> * A bind ops completions are always async, hence the support for out
> * sync. This flag indicates the allocation of the memory for new page
> @@ -689,12 +691,12 @@ struct drm_xe_vm_bind_op {
> * configured in the VM and must be set if the VM is configured with
> * DRM_XE_VM_CREATE_ASYNC_BIND_OPS and not in an error state.
> */
> -#define XE_VM_BIND_FLAG_ASYNC (0x1 << 17)
> +#define XE_VM_BIND_FLAG_ASYNC (0x1 << 1)
> /*
> * Valid on a faulting VM only, do the MAP operation immediately rather
> * than deferring the MAP to the page fault handler.
> */
> -#define XE_VM_BIND_FLAG_IMMEDIATE (0x1 << 18)
> +#define XE_VM_BIND_FLAG_IMMEDIATE (0x1 << 2)
> /*
> * When the NULL flag is set, the page tables are setup with a special
> * bit which indicates writes are dropped and all reads return zero. In
> @@ -702,9 +704,9 @@ struct drm_xe_vm_bind_op {
> * operations, the BO handle MBZ, and the BO offset MBZ. This flag is
> * intended to implement VK sparse bindings.
> */
> -#define XE_VM_BIND_FLAG_NULL (0x1 << 19)
> - /** @op: Operation to perform (lower 16 bits) and flags (upper 16 bits) */
> - __u32 op;
> +#define XE_VM_BIND_FLAG_NULL (0x1 << 3)
> + /** @flags: Bind flags */
> + __u32 flags;
>
> /** @mem_region: Memory region to prefetch VMA to, instance not a mask */
> __u32 region;
> diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
> index e7b1b755f..6e668d28c 100644
> --- a/lib/intel_batchbuffer.c
> +++ b/lib/intel_batchbuffer.c
> @@ -1281,7 +1281,8 @@ void intel_bb_destroy(struct intel_bb *ibb)
> }
>
> static struct drm_xe_vm_bind_op *xe_alloc_bind_ops(struct intel_bb *ibb,
> - uint32_t op, uint32_t region)
> + uint32_t op, uint32_t flags,
> + uint32_t region)
> {
> struct drm_i915_gem_exec_object2 **objects = ibb->objects;
> struct drm_xe_vm_bind_op *bind_ops, *ops;
> @@ -1298,6 +1299,7 @@ static struct drm_xe_vm_bind_op *xe_alloc_bind_ops(struct intel_bb *ibb,
> ops->obj = objects[i]->handle;
>
> ops->op = op;
> + ops->flags = flags;
> ops->obj_offset = 0;
> ops->addr = objects[i]->offset;
> ops->range = objects[i]->rsvd1;
> @@ -1323,9 +1325,10 @@ static void __unbind_xe_objects(struct intel_bb *ibb)
>
> if (ibb->num_objects > 1) {
> struct drm_xe_vm_bind_op *bind_ops;
> - uint32_t op = XE_VM_BIND_OP_UNMAP | XE_VM_BIND_FLAG_ASYNC;
> + uint32_t op = XE_VM_BIND_OP_UNMAP;
> + uint32_t flags = XE_VM_BIND_FLAG_ASYNC;
>
> - bind_ops = xe_alloc_bind_ops(ibb, op, 0);
> + bind_ops = xe_alloc_bind_ops(ibb, op, flags, 0);
> xe_vm_bind_array(ibb->fd, ibb->vm_id, 0, bind_ops,
> ibb->num_objects, syncs, 2);
> free(bind_ops);
> @@ -2354,7 +2357,7 @@ __xe_bb_exec(struct intel_bb *ibb, uint64_t flags, bool sync)
>
> syncs[0].handle = syncobj_create(ibb->fd, 0);
> if (ibb->num_objects > 1) {
> - bind_ops = xe_alloc_bind_ops(ibb, XE_VM_BIND_OP_MAP | XE_VM_BIND_FLAG_ASYNC, 0);
> + bind_ops = xe_alloc_bind_ops(ibb, XE_VM_BIND_OP_MAP, XE_VM_BIND_FLAG_ASYNC, 0);
> xe_vm_bind_array(ibb->fd, ibb->vm_id, 0, bind_ops,
> ibb->num_objects, syncs, 1);
> free(bind_ops);
> diff --git a/lib/xe/xe_ioctl.c b/lib/xe/xe_ioctl.c
> index 730dcfd16..48cd185de 100644
> --- a/lib/xe/xe_ioctl.c
> +++ b/lib/xe/xe_ioctl.c
> @@ -67,7 +67,7 @@ void xe_vm_unbind_all_async(int fd, uint32_t vm, uint32_t exec_queue,
> uint32_t num_syncs)
> {
> __xe_vm_bind_assert(fd, vm, exec_queue, bo, 0, 0, 0,
> - XE_VM_BIND_OP_UNMAP_ALL | XE_VM_BIND_FLAG_ASYNC,
> + XE_VM_BIND_OP_UNMAP_ALL, XE_VM_BIND_FLAG_ASYNC,
> sync, num_syncs, 0, 0);
> }
>
> @@ -91,8 +91,8 @@ void xe_vm_bind_array(int fd, uint32_t vm, uint32_t exec_queue,
>
> int __xe_vm_bind(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
> uint64_t offset, uint64_t addr, uint64_t size, uint32_t op,
> - struct drm_xe_sync *sync, uint32_t num_syncs, uint32_t region,
> - uint64_t ext)
> + uint32_t flags, struct drm_xe_sync *sync, uint32_t num_syncs,
> + uint32_t region, uint64_t ext)
> {
> struct drm_xe_vm_bind bind = {
> .extensions = ext,
> @@ -103,6 +103,7 @@ int __xe_vm_bind(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
> .bind.range = size,
> .bind.addr = addr,
> .bind.op = op,
> + .bind.flags = flags,
> .bind.region = region,
> .num_syncs = num_syncs,
> .syncs = (uintptr_t)sync,
> @@ -117,11 +118,11 @@ int __xe_vm_bind(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
>
> void __xe_vm_bind_assert(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
> uint64_t offset, uint64_t addr, uint64_t size,
> - uint32_t op, struct drm_xe_sync *sync,
> + uint32_t op, uint32_t flags, struct drm_xe_sync *sync,
> uint32_t num_syncs, uint32_t region, uint64_t ext)
> {
> igt_assert_eq(__xe_vm_bind(fd, vm, exec_queue, bo, offset, addr, size,
> - op, sync, num_syncs, region, ext), 0);
> + op, flags, sync, num_syncs, region, ext), 0);
> }
>
> void xe_vm_bind(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
> @@ -129,7 +130,7 @@ void xe_vm_bind(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
> struct drm_xe_sync *sync, uint32_t num_syncs)
> {
> __xe_vm_bind_assert(fd, vm, 0, bo, offset, addr, size,
> - XE_VM_BIND_OP_MAP, sync, num_syncs, 0, 0);
> + XE_VM_BIND_OP_MAP, 0, sync, num_syncs, 0, 0);
> }
>
> void xe_vm_unbind(int fd, uint32_t vm, uint64_t offset,
> @@ -137,7 +138,7 @@ void xe_vm_unbind(int fd, uint32_t vm, uint64_t offset,
> struct drm_xe_sync *sync, uint32_t num_syncs)
> {
> __xe_vm_bind_assert(fd, vm, 0, 0, offset, addr, size,
> - XE_VM_BIND_OP_UNMAP, sync, num_syncs, 0, 0);
> + XE_VM_BIND_OP_UNMAP, 0, sync, num_syncs, 0, 0);
> }
>
> void xe_vm_prefetch_async(int fd, uint32_t vm, uint32_t exec_queue, uint64_t offset,
> @@ -146,7 +147,7 @@ void xe_vm_prefetch_async(int fd, uint32_t vm, uint32_t exec_queue, uint64_t off
> uint32_t region)
> {
> __xe_vm_bind_assert(fd, vm, exec_queue, 0, offset, addr, size,
> - XE_VM_BIND_OP_PREFETCH | XE_VM_BIND_FLAG_ASYNC,
> + XE_VM_BIND_OP_PREFETCH, XE_VM_BIND_FLAG_ASYNC,
> sync, num_syncs, region, 0);
> }
>
> @@ -155,7 +156,7 @@ void xe_vm_bind_async(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
> struct drm_xe_sync *sync, uint32_t num_syncs)
> {
> __xe_vm_bind_assert(fd, vm, exec_queue, bo, offset, addr, size,
> - XE_VM_BIND_OP_MAP | XE_VM_BIND_FLAG_ASYNC, sync,
> + XE_VM_BIND_OP_MAP, XE_VM_BIND_FLAG_ASYNC, sync,
> num_syncs, 0, 0);
> }
>
> @@ -165,7 +166,7 @@ void xe_vm_bind_async_flags(int fd, uint32_t vm, uint32_t exec_queue, uint32_t b
> uint32_t flags)
> {
> __xe_vm_bind_assert(fd, vm, exec_queue, bo, offset, addr, size,
> - XE_VM_BIND_OP_MAP | XE_VM_BIND_FLAG_ASYNC | flags,
> + XE_VM_BIND_OP_MAP, XE_VM_BIND_FLAG_ASYNC | flags,
> sync, num_syncs, 0, 0);
> }
>
> @@ -174,7 +175,7 @@ void xe_vm_bind_userptr_async(int fd, uint32_t vm, uint32_t exec_queue,
> struct drm_xe_sync *sync, uint32_t num_syncs)
> {
> __xe_vm_bind_assert(fd, vm, exec_queue, 0, userptr, addr, size,
> - XE_VM_BIND_OP_MAP_USERPTR | XE_VM_BIND_FLAG_ASYNC,
> + XE_VM_BIND_OP_MAP_USERPTR, XE_VM_BIND_FLAG_ASYNC,
> sync, num_syncs, 0, 0);
> }
>
> @@ -184,7 +185,7 @@ void xe_vm_bind_userptr_async_flags(int fd, uint32_t vm, uint32_t exec_queue,
> uint32_t num_syncs, uint32_t flags)
> {
> __xe_vm_bind_assert(fd, vm, exec_queue, 0, userptr, addr, size,
> - XE_VM_BIND_OP_MAP_USERPTR | XE_VM_BIND_FLAG_ASYNC |
> + XE_VM_BIND_OP_MAP_USERPTR, XE_VM_BIND_FLAG_ASYNC |
> flags, sync, num_syncs, 0, 0);
> }
>
> @@ -193,7 +194,7 @@ void xe_vm_unbind_async(int fd, uint32_t vm, uint32_t exec_queue,
> struct drm_xe_sync *sync, uint32_t num_syncs)
> {
> __xe_vm_bind_assert(fd, vm, exec_queue, 0, offset, addr, size,
> - XE_VM_BIND_OP_UNMAP | XE_VM_BIND_FLAG_ASYNC, sync,
> + XE_VM_BIND_OP_UNMAP, XE_VM_BIND_FLAG_ASYNC, sync,
> num_syncs, 0, 0);
> }
>
> @@ -205,8 +206,8 @@ static void __xe_vm_bind_sync(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
> .handle = syncobj_create(fd, 0),
> };
>
> - __xe_vm_bind_assert(fd, vm, 0, bo, offset, addr, size, op, &sync, 1, 0,
> - 0);
> + __xe_vm_bind_assert(fd, vm, 0, bo, offset, addr, size, op, 0, &sync, 1,
> + 0, 0);
>
> igt_assert(syncobj_wait(fd, &sync.handle, 1, INT64_MAX, 0, NULL));
> syncobj_destroy(fd, sync.handle);
> diff --git a/lib/xe/xe_ioctl.h b/lib/xe/xe_ioctl.h
> index 6c281b3bf..f0e4109dc 100644
> --- a/lib/xe/xe_ioctl.h
> +++ b/lib/xe/xe_ioctl.h
> @@ -19,11 +19,11 @@ uint32_t xe_cs_prefetch_size(int fd);
> uint32_t xe_vm_create(int fd, uint32_t flags, uint64_t ext);
> int __xe_vm_bind(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
> uint64_t offset, uint64_t addr, uint64_t size, uint32_t op,
> - struct drm_xe_sync *sync, uint32_t num_syncs, uint32_t region,
> - uint64_t ext);
> + uint32_t flags, struct drm_xe_sync *sync, uint32_t num_syncs,
> + uint32_t region, uint64_t ext);
> void __xe_vm_bind_assert(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
> uint64_t offset, uint64_t addr, uint64_t size,
> - uint32_t op, struct drm_xe_sync *sync,
> + uint32_t op, uint32_t flags, struct drm_xe_sync *sync,
> uint32_t num_syncs, uint32_t region, uint64_t ext);
> void xe_vm_bind(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
> uint64_t addr, uint64_t size,
> diff --git a/lib/xe/xe_util.c b/lib/xe/xe_util.c
> index 2f9ffe2f1..5fa4d4610 100644
> --- a/lib/xe/xe_util.c
> +++ b/lib/xe/xe_util.c
> @@ -116,7 +116,7 @@ static struct drm_xe_vm_bind_op *xe_alloc_bind_ops(struct igt_list_head *obj_lis
> {
> struct drm_xe_vm_bind_op *bind_ops, *ops;
> struct xe_object *obj;
> - uint32_t num_objects = 0, i = 0, op;
> + uint32_t num_objects = 0, i = 0, op, flags;
>
> igt_list_for_each_entry(obj, obj_list, link)
> num_objects++;
> @@ -134,13 +134,16 @@ static struct drm_xe_vm_bind_op *xe_alloc_bind_ops(struct igt_list_head *obj_lis
> ops = &bind_ops[i];
>
> if (obj->bind_op == XE_OBJECT_BIND) {
> - op = XE_VM_BIND_OP_MAP | XE_VM_BIND_FLAG_ASYNC;
> + op = XE_VM_BIND_OP_MAP;
> + flags = XE_VM_BIND_FLAG_ASYNC;
> ops->obj = obj->handle;
> } else {
> - op = XE_VM_BIND_OP_UNMAP | XE_VM_BIND_FLAG_ASYNC;
> + op = XE_VM_BIND_OP_UNMAP;
> + flags = XE_VM_BIND_FLAG_ASYNC;
> }
>
> ops->op = op;
> + ops->flags = flags;
> ops->obj_offset = 0;
> ops->addr = obj->offset;
> ops->range = obj->size;
> diff --git a/tests/intel/xe_exec_basic.c b/tests/intel/xe_exec_basic.c
> index a4414e052..e29398aaa 100644
> --- a/tests/intel/xe_exec_basic.c
> +++ b/tests/intel/xe_exec_basic.c
> @@ -170,7 +170,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
> if (flags & SPARSE)
> __xe_vm_bind_assert(fd, vm[i], bind_exec_queues[i],
> 0, 0, sparse_addr[i], bo_size,
> - XE_VM_BIND_OP_MAP |
> + XE_VM_BIND_OP_MAP,
> XE_VM_BIND_FLAG_ASYNC |
> XE_VM_BIND_FLAG_NULL, sync,
> 1, 0, 0);
> diff --git a/tests/intel/xe_exec_threads.c b/tests/intel/xe_exec_threads.c
> index 12e76874e..1f9af894f 100644
> --- a/tests/intel/xe_exec_threads.c
> +++ b/tests/intel/xe_exec_threads.c
> @@ -609,7 +609,7 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
> if (rebind_error_inject == i)
> __xe_vm_bind_assert(fd, vm, bind_exec_queues[e],
> 0, 0, addr, bo_size,
> - XE_VM_BIND_OP_UNMAP |
> + XE_VM_BIND_OP_UNMAP,
> XE_VM_BIND_FLAG_ASYNC |
> INJECT_ERROR, sync_all,
> n_exec_queues, 0, 0);
> diff --git a/tests/intel/xe_vm.c b/tests/intel/xe_vm.c
> index 4952ea786..f96305851 100644
> --- a/tests/intel/xe_vm.c
> +++ b/tests/intel/xe_vm.c
> @@ -316,7 +316,7 @@ static void userptr_invalid(int fd)
> vm = xe_vm_create(fd, 0, 0);
> munmap(data, size);
> ret = __xe_vm_bind(fd, vm, 0, 0, to_user_pointer(data), 0x40000,
> - size, XE_VM_BIND_OP_MAP_USERPTR, NULL, 0, 0, 0);
> + size, XE_VM_BIND_OP_MAP_USERPTR, 0, NULL, 0, 0, 0);
> igt_assert(ret == -EFAULT);
>
> xe_vm_destroy(fd, vm);
> @@ -437,7 +437,7 @@ static void vm_async_ops_err(int fd, bool destroy)
> if (i == N_BINDS / 8) /* Inject error on this bind */
> __xe_vm_bind_assert(fd, vm, 0, bo, 0,
> addr + i * bo_size * 2,
> - bo_size, XE_VM_BIND_OP_MAP |
> + bo_size, XE_VM_BIND_OP_MAP,
> XE_VM_BIND_FLAG_ASYNC |
> INJECT_ERROR, &sync, 1, 0, 0);
> else
> @@ -451,7 +451,7 @@ static void vm_async_ops_err(int fd, bool destroy)
> if (i == N_BINDS / 8)
> __xe_vm_bind_assert(fd, vm, 0, 0, 0,
> addr + i * bo_size * 2,
> - bo_size, XE_VM_BIND_OP_UNMAP |
> + bo_size, XE_VM_BIND_OP_UNMAP,
> XE_VM_BIND_FLAG_ASYNC |
> INJECT_ERROR, &sync, 1, 0, 0);
> else
> @@ -465,7 +465,7 @@ static void vm_async_ops_err(int fd, bool destroy)
> if (i == N_BINDS / 8)
> __xe_vm_bind_assert(fd, vm, 0, bo, 0,
> addr + i * bo_size * 2,
> - bo_size, XE_VM_BIND_OP_MAP |
> + bo_size, XE_VM_BIND_OP_MAP,
> XE_VM_BIND_FLAG_ASYNC |
> INJECT_ERROR, &sync, 1, 0, 0);
> else
> @@ -479,7 +479,7 @@ static void vm_async_ops_err(int fd, bool destroy)
> if (i == N_BINDS / 8)
> __xe_vm_bind_assert(fd, vm, 0, 0, 0,
> addr + i * bo_size * 2,
> - bo_size, XE_VM_BIND_OP_UNMAP |
> + bo_size, XE_VM_BIND_OP_UNMAP,
> XE_VM_BIND_FLAG_ASYNC |
> INJECT_ERROR, &sync, 1, 0, 0);
> else
> @@ -928,7 +928,8 @@ test_bind_array(int fd, struct drm_xe_engine_class_instance *eci, int n_execs,
> bind_ops[i].range = bo_size;
> bind_ops[i].addr = addr;
> bind_ops[i].tile_mask = 0x1 << eci->gt_id;
> - bind_ops[i].op = XE_VM_BIND_OP_MAP | XE_VM_BIND_FLAG_ASYNC;
> + bind_ops[i].op = XE_VM_BIND_OP_MAP;
> + bind_ops[i].flags = XE_VM_BIND_FLAG_ASYNC;
> bind_ops[i].region = 0;
> bind_ops[i].reserved[0] = 0;
> bind_ops[i].reserved[1] = 0;
> @@ -972,7 +973,8 @@ test_bind_array(int fd, struct drm_xe_engine_class_instance *eci, int n_execs,
>
> for (i = 0; i < n_execs; ++i) {
> bind_ops[i].obj = 0;
> - bind_ops[i].op = XE_VM_BIND_OP_UNMAP | XE_VM_BIND_FLAG_ASYNC;
> + bind_ops[i].op = XE_VM_BIND_OP_UNMAP;
> + bind_ops[i].flags = XE_VM_BIND_FLAG_ASYNC;
> }
>
> syncobj_reset(fd, &sync[0].handle, 1);
> --
> 2.41.0
>
* [igt-dev] [PATCH i-g-t 04/16] drm-uapi/xe_drm: Remove MMIO ioctl and align with latest uapi
2023-09-19 14:19 [igt-dev] [PATCH i-g-t 00/16] uAPI Alignment - take 1 Rodrigo Vivi
` (2 preceding siblings ...)
2023-09-19 14:19 ` [igt-dev] [PATCH i-g-t 03/16] drm-uapi/xe_drm: Separate VM_BIND's operation and flag, align with latest uapi Rodrigo Vivi
@ 2023-09-19 14:19 ` Rodrigo Vivi
2023-09-19 14:19 ` [igt-dev] [PATCH i-g-t 05/16] xe_exec_balancer: Enable parallel submission and compute mode Rodrigo Vivi
` (14 subsequent siblings)
18 siblings, 0 replies; 23+ messages in thread
From: Rodrigo Vivi @ 2023-09-19 14:19 UTC (permalink / raw)
To: intel-xe, igt-dev; +Cc: Rodrigo Vivi
From: Francois Dugast <francois.dugast@intel.com>
Align with commit ("drm/xe/uapi: Remove MMIO ioctl")
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
include/drm-uapi/xe_drm.h | 31 +-
tests/intel-ci/xe-fast-feedback.testlist | 2 -
tests/intel/xe_mmio.c | 91 ------
tests/meson.build | 1 -
tools/meson.build | 1 -
tools/xe_reg.c | 366 -----------------------
6 files changed, 4 insertions(+), 488 deletions(-)
delete mode 100644 tests/intel/xe_mmio.c
delete mode 100644 tools/xe_reg.c
diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index 078edd9f8..1d869f5e8 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -106,11 +106,10 @@ struct xe_user_extension {
#define DRM_XE_EXEC_QUEUE_CREATE 0x06
#define DRM_XE_EXEC_QUEUE_DESTROY 0x07
#define DRM_XE_EXEC 0x08
-#define DRM_XE_MMIO 0x09
-#define DRM_XE_EXEC_QUEUE_SET_PROPERTY 0x0a
-#define DRM_XE_WAIT_USER_FENCE 0x0b
-#define DRM_XE_VM_MADVISE 0x0c
-#define DRM_XE_EXEC_QUEUE_GET_PROPERTY 0x0d
+#define DRM_XE_EXEC_QUEUE_SET_PROPERTY 0x09
+#define DRM_XE_WAIT_USER_FENCE 0x0a
+#define DRM_XE_VM_MADVISE 0x0b
+#define DRM_XE_EXEC_QUEUE_GET_PROPERTY 0x0c
/* Must be kept compact -- no holes */
#define DRM_IOCTL_XE_DEVICE_QUERY DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_DEVICE_QUERY, struct drm_xe_device_query)
@@ -123,7 +122,6 @@ struct xe_user_extension {
#define DRM_IOCTL_XE_EXEC_QUEUE_GET_PROPERTY DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_EXEC_QUEUE_GET_PROPERTY, struct drm_xe_exec_queue_get_property)
#define DRM_IOCTL_XE_EXEC_QUEUE_DESTROY DRM_IOW(DRM_COMMAND_BASE + DRM_XE_EXEC_QUEUE_DESTROY, struct drm_xe_exec_queue_destroy)
#define DRM_IOCTL_XE_EXEC DRM_IOW(DRM_COMMAND_BASE + DRM_XE_EXEC, struct drm_xe_exec)
-#define DRM_IOCTL_XE_MMIO DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_MMIO, struct drm_xe_mmio)
#define DRM_IOCTL_XE_EXEC_QUEUE_SET_PROPERTY DRM_IOW(DRM_COMMAND_BASE + DRM_XE_EXEC_QUEUE_SET_PROPERTY, struct drm_xe_exec_queue_set_property)
#define DRM_IOCTL_XE_WAIT_USER_FENCE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_WAIT_USER_FENCE, struct drm_xe_wait_user_fence)
#define DRM_IOCTL_XE_VM_MADVISE DRM_IOW(DRM_COMMAND_BASE + DRM_XE_VM_MADVISE, struct drm_xe_vm_madvise)
@@ -939,27 +937,6 @@ struct drm_xe_exec {
__u64 reserved[2];
};
-struct drm_xe_mmio {
- /** @extensions: Pointer to the first extension struct, if any */
- __u64 extensions;
-
- __u32 addr;
-
-#define DRM_XE_MMIO_8BIT 0x0
-#define DRM_XE_MMIO_16BIT 0x1
-#define DRM_XE_MMIO_32BIT 0x2
-#define DRM_XE_MMIO_64BIT 0x3
-#define DRM_XE_MMIO_BITS_MASK 0x3
-#define DRM_XE_MMIO_READ 0x4
-#define DRM_XE_MMIO_WRITE 0x8
- __u32 flags;
-
- __u64 value;
-
- /** @reserved: Reserved */
- __u64 reserved[2];
-};
-
/**
* struct drm_xe_wait_user_fence - wait user fence
*
diff --git a/tests/intel-ci/xe-fast-feedback.testlist b/tests/intel-ci/xe-fast-feedback.testlist
index 610cc958c..a9fe43b08 100644
--- a/tests/intel-ci/xe-fast-feedback.testlist
+++ b/tests/intel-ci/xe-fast-feedback.testlist
@@ -141,8 +141,6 @@ igt@xe_mmap@bad-object
igt@xe_mmap@system
igt@xe_mmap@vram
igt@xe_mmap@vram-system
-igt@xe_mmio@mmio-timestamp
-igt@xe_mmio@mmio-invalid
igt@xe_pm_residency@gt-c6-on-idle
igt@xe_prime_self_import@basic-with_one_bo
igt@xe_prime_self_import@basic-with_fd_dup
diff --git a/tests/intel/xe_mmio.c b/tests/intel/xe_mmio.c
deleted file mode 100644
index 9ac544770..000000000
--- a/tests/intel/xe_mmio.c
+++ /dev/null
@@ -1,91 +0,0 @@
-// SPDX-License-Identifier: MIT
-/*
- * Copyright © 2023 Intel Corporation
- */
-
-/**
- * TEST: Test if mmio feature
- * Category: Software building block
- * Sub-category: mmio
- * Functionality: mmap
- */
-
-#include "igt.h"
-
-#include "xe_drm.h"
-#include "xe/xe_ioctl.h"
-#include "xe/xe_query.h"
-
-#include <string.h>
-
-#define RCS_TIMESTAMP 0x2358
-
-/**
- * SUBTEST: mmio-timestamp
- * Test category: functionality test
- * Description:
- * Try to run mmio ioctl with 32 and 64 bits and check it a timestamp
- * matches
- */
-
-static void test_xe_mmio_timestamp(int fd)
-{
- int ret;
- struct drm_xe_mmio mmio = {
- .addr = RCS_TIMESTAMP,
- .flags = DRM_XE_MMIO_READ | DRM_XE_MMIO_64BIT,
- };
- ret = igt_ioctl(fd, DRM_IOCTL_XE_MMIO, &mmio);
- if (!ret)
- igt_debug("RCS_TIMESTAMP 64b = 0x%llx\n", mmio.value);
- igt_assert(!ret);
- mmio.flags = DRM_XE_MMIO_READ | DRM_XE_MMIO_32BIT;
- mmio.value = 0;
- ret = igt_ioctl(fd, DRM_IOCTL_XE_MMIO, &mmio);
- if (!ret)
- igt_debug("RCS_TIMESTAMP 32b = 0x%llx\n", mmio.value);
- igt_assert(!ret);
-}
-
-
-/**
- * SUBTEST: mmio-invalid
- * Test category: negative test
- * Description: Try to run mmio ioctl with 8, 16 and 32 and 64 bits mmio
- */
-
-static void test_xe_mmio_invalid(int fd)
-{
- int ret;
- struct drm_xe_mmio mmio = {
- .addr = RCS_TIMESTAMP,
- .flags = DRM_XE_MMIO_READ | DRM_XE_MMIO_8BIT,
- };
- ret = igt_ioctl(fd, DRM_IOCTL_XE_MMIO, &mmio);
- igt_assert(ret);
- mmio.flags = DRM_XE_MMIO_READ | DRM_XE_MMIO_16BIT;
- mmio.value = 0;
- ret = igt_ioctl(fd, DRM_IOCTL_XE_MMIO, &mmio);
- igt_assert(ret);
- mmio.addr = RCS_TIMESTAMP;
- mmio.flags = DRM_XE_MMIO_READ | DRM_XE_MMIO_64BIT;
- mmio.value = 0x1;
- ret = igt_ioctl(fd, DRM_IOCTL_XE_MMIO, &mmio);
- igt_assert(ret);
-}
-
-igt_main
-{
- int fd;
-
- igt_fixture
- fd = drm_open_driver(DRIVER_XE);
-
- igt_subtest("mmio-timestamp")
- test_xe_mmio_timestamp(fd);
- igt_subtest("mmio-invalid")
- test_xe_mmio_invalid(fd);
-
- igt_fixture
- drm_close_driver(fd);
-}
diff --git a/tests/meson.build b/tests/meson.build
index 31492bf7b..7c67b22d4 100644
--- a/tests/meson.build
+++ b/tests/meson.build
@@ -292,7 +292,6 @@ intel_xe_progs = [
'xe_live_ktest',
'xe_media_fill',
'xe_mmap',
- 'xe_mmio',
'xe_module_load',
'xe_noexec_ping_pong',
'xe_pm',
diff --git a/tools/meson.build b/tools/meson.build
index 21e244c24..ac79d8b58 100644
--- a/tools/meson.build
+++ b/tools/meson.build
@@ -42,7 +42,6 @@ tools_progs = [
'intel_gvtg_test',
'dpcd_reg',
'lsgpu',
- 'xe_reg',
]
tool_deps = igt_deps
tool_deps += zlib
diff --git a/tools/xe_reg.c b/tools/xe_reg.c
deleted file mode 100644
index 1f7b384d3..000000000
--- a/tools/xe_reg.c
+++ /dev/null
@@ -1,366 +0,0 @@
-// SPDX-License-Identifier: MIT
-/*
- * Copyright © 2021 Intel Corporation
- */
-
-#include "igt.h"
-#include "igt_device_scan.h"
-
-#include "xe_drm.h"
-
-#include <stdlib.h>
-#include <stdio.h>
-#include <string.h>
-
-#define DECL_XE_MMIO_READ_FN(bits) \
-static inline uint##bits##_t \
-xe_mmio_read##bits(int fd, uint32_t reg) \
-{ \
- struct drm_xe_mmio mmio = { \
- .addr = reg, \
- .flags = DRM_XE_MMIO_READ | DRM_XE_MMIO_##bits##BIT, \
- }; \
-\
- igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_MMIO, &mmio), 0); \
-\
- return mmio.value;\
-}\
-static inline void \
-xe_mmio_write##bits(int fd, uint32_t reg, uint##bits##_t value) \
-{ \
- struct drm_xe_mmio mmio = { \
- .addr = reg, \
- .flags = DRM_XE_MMIO_WRITE | DRM_XE_MMIO_##bits##BIT, \
- .value = value, \
- }; \
-\
- igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_MMIO, &mmio), 0); \
-}
-
-DECL_XE_MMIO_READ_FN(8)
-DECL_XE_MMIO_READ_FN(16)
-DECL_XE_MMIO_READ_FN(32)
-DECL_XE_MMIO_READ_FN(64)
-
-static void print_help(FILE *fp)
-{
- fprintf(fp, "usage: xe_reg read REG1 [REG2]...\n");
- fprintf(fp, " xe_reg write REG VALUE\n");
-}
-
-enum ring {
- RING_UNKNOWN = -1,
- RING_RCS0,
- RING_BCS0,
-};
-
-static const struct ring_info {
- enum ring ring;
- const char *name;
- uint32_t mmio_base;
-} ring_info[] = {
- {RING_RCS0, "rcs0", 0x02000, },
- {RING_BCS0, "bcs0", 0x22000, },
-};
-
-static const struct ring_info *ring_info_for_name(const char *name)
-{
- int i;
-
- for (i = 0; i < ARRAY_SIZE(ring_info); i++)
- if (strcmp(name, ring_info[i].name) == 0)
- return &ring_info[i];
-
- return NULL;
-}
-
-struct reg_info {
- const char *name;
- bool is_ring;
- uint32_t addr_low;
- uint32_t addr_high;
-} reg_info[] = {
-#define REG32(name, addr) { #name, false, addr }
-#define REG64(name, low, high) { #name, false, low, high }
-#define RING_REG32(name, addr) { #name, true, addr }
-#define RING_REG64(name, low, high) { #name, true, low, high }
-
- RING_REG64(ACTHD, 0x74, 0x5c),
- RING_REG32(BB_ADDR_DIFF, 0x154),
- RING_REG64(BB_ADDR, 0x140, 0x168),
- RING_REG32(BB_PER_CTX_PTR, 0x2c0),
- RING_REG64(EXECLIST_STATUS, 0x234, 0x238),
- RING_REG64(EXECLIST_SQ0, 0x510, 0x514),
- RING_REG64(EXECLIST_SQ1, 0x518, 0x51c),
- RING_REG32(HWS_PGA, 0x80),
- RING_REG32(INDIRECT_CTX, 0x1C4),
- RING_REG32(INDIRECT_CTX_OFFSET, 0x1C8),
- RING_REG32(NOPID, 0x94),
- RING_REG64(PML4E, 0x270, 0x274),
- RING_REG32(RING_BUFFER_CTL, 0x3c),
- RING_REG32(RING_BUFFER_HEAD, 0x34),
- RING_REG32(RING_BUFFER_START, 0x38),
- RING_REG32(RING_BUFFER_TAIL, 0x30),
- RING_REG64(SBB_ADDR, 0x114, 0x11c),
- RING_REG32(SBB_STATE, 0x118),
-
-#undef REG32
-#undef REG64
-#undef RING_REG32
-#undef RING_REG64
-};
-
-static const struct reg_info *reg_info_for_name(const char *name)
-{
- int i;
-
- for (i = 0; i < ARRAY_SIZE(reg_info); i++)
- if (strcmp(name, reg_info[i].name) == 0)
- return &reg_info[i];
-
- return NULL;
-}
-
-static int print_reg_for_info(int xe, FILE *fp, const struct reg_info *reg,
- const struct ring_info *ring)
-{
- if (reg->is_ring) {
- if (!ring) {
- fprintf(stderr, "%s is a ring register but --ring "
- "not set\n", reg->name);
- return EXIT_FAILURE;
- }
-
- if (reg->addr_high) {
- uint32_t low = xe_mmio_read32(xe, reg->addr_low +
- ring->mmio_base);
- uint32_t high = xe_mmio_read32(xe, reg->addr_high +
- ring->mmio_base);
-
- fprintf(fp, "%s[%s] = 0x%08x %08x\n", reg->name,
- ring->name, high, low);
- } else {
- uint32_t value = xe_mmio_read32(xe, reg->addr_low +
- ring->mmio_base);
-
- fprintf(fp, "%s[%s] = 0x%08x\n", reg->name,
- ring->name, value);
- }
- } else {
- if (reg->addr_high) {
- uint32_t low = xe_mmio_read32(xe, reg->addr_low);
- uint32_t high = xe_mmio_read32(xe, reg->addr_high);
-
- fprintf(fp, "%s = 0x%08x %08x\n", reg->name, high, low);
- } else {
- uint32_t value = xe_mmio_read32(xe, reg->addr_low);
-
- fprintf(fp, "%s = 0x%08x\n", reg->name, value);
- }
- }
-
- return 0;
-}
-
-static void print_reg_for_addr(int xe, FILE *fp, uint32_t addr)
-{
- uint32_t value = xe_mmio_read32(xe, addr);
-
- fprintf(fp, "MMIO[0x%05x] = 0x%08x\n", addr, value);
-}
-
-enum opt {
- OPT_UNKNOWN = '?',
- OPT_END = -1,
- OPT_DEVICE,
- OPT_RING,
- OPT_ALL,
-};
-
-static int read_reg(int argc, char *argv[])
-{
- int xe, i, err, index;
- unsigned long reg_addr;
- char *endp = NULL;
- const struct ring_info *ring = NULL;
- enum opt opt;
- bool dump_all = false;
-
- static struct option options[] = {
- { "device", required_argument, NULL, OPT_DEVICE },
- { "ring", required_argument, NULL, OPT_RING },
- { "all", no_argument, NULL, OPT_ALL },
- };
-
- for (opt = 0; opt != OPT_END; ) {
- opt = getopt_long(argc, argv, "", options, &index);
-
- switch (opt) {
- case OPT_DEVICE:
- igt_device_filter_add(optarg);
- break;
- case OPT_RING:
- ring = ring_info_for_name(optarg);
- if (!ring) {
- fprintf(stderr, "invalid ring: %s\n", optarg);
- return EXIT_FAILURE;
- }
- break;
- case OPT_ALL:
- dump_all = true;
- break;
- case OPT_END:
- break;
- case OPT_UNKNOWN:
- return EXIT_FAILURE;
- }
- }
-
- argc -= optind;
- argv += optind;
-
- xe = drm_open_driver(DRIVER_XE);
- if (dump_all) {
- for (i = 0; i < ARRAY_SIZE(reg_info); i++) {
- if (reg_info[i].is_ring != !!ring)
- continue;
-
- print_reg_for_info(xe, stdout, &reg_info[i], ring);
- }
- } else {
- for (i = 0; i < argc; i++) {
- const struct reg_info *reg = reg_info_for_name(argv[i]);
- if (reg) {
- err = print_reg_for_info(xe, stdout, reg, ring);
- if (err)
- return err;
- continue;
- }
- reg_addr = strtoul(argv[i], &endp, 16);
- if (!reg_addr || reg_addr >= (4 << 20) || *endp) {
- fprintf(stderr, "invalid reg address '%s'\n",
- argv[i]);
- return EXIT_FAILURE;
- }
- print_reg_for_addr(xe, stdout, reg_addr);
- }
- }
-
- return 0;
-}
-
-static int write_reg_for_info(int xe, const struct reg_info *reg,
- const struct ring_info *ring,
- uint64_t value)
-{
- if (reg->is_ring) {
- if (!ring) {
- fprintf(stderr, "%s is a ring register but --ring "
- "not set\n", reg->name);
- return EXIT_FAILURE;
- }
-
- xe_mmio_write32(xe, reg->addr_low + ring->mmio_base, value);
- if (reg->addr_high) {
- xe_mmio_write32(xe, reg->addr_high + ring->mmio_base,
- value >> 32);
- }
- } else {
- xe_mmio_write32(xe, reg->addr_low, value);
- if (reg->addr_high)
- xe_mmio_write32(xe, reg->addr_high, value >> 32);
- }
-
- return 0;
-}
-
-static void write_reg_for_addr(int xe, uint32_t addr, uint32_t value)
-{
- xe_mmio_write32(xe, addr, value);
-}
-
-static int write_reg(int argc, char *argv[])
-{
- int xe, index;
- unsigned long reg_addr;
- char *endp = NULL;
- const struct ring_info *ring = NULL;
- enum opt opt;
- const char *reg_name;
- const struct reg_info *reg;
- uint64_t value;
-
- static struct option options[] = {
- { "device", required_argument, NULL, OPT_DEVICE },
- { "ring", required_argument, NULL, OPT_RING },
- };
-
- for (opt = 0; opt != OPT_END; ) {
- opt = getopt_long(argc, argv, "", options, &index);
-
- switch (opt) {
- case OPT_DEVICE:
- igt_device_filter_add(optarg);
- break;
- case OPT_RING:
- ring = ring_info_for_name(optarg);
- if (!ring) {
- fprintf(stderr, "invalid ring: %s\n", optarg);
- return EXIT_FAILURE;
- }
- break;
- case OPT_END:
- break;
- case OPT_UNKNOWN:
- return EXIT_FAILURE;
- default:
- break;
- }
- }
-
- argc -= optind;
- argv += optind;
-
- if (argc != 2) {
- print_help(stderr);
- return EXIT_FAILURE;
- }
-
- reg_name = argv[0];
- value = strtoull(argv[1], &endp, 0);
- if (*endp) {
- fprintf(stderr, "Invalid register value: %s\n", argv[1]);
- return EXIT_FAILURE;
- }
-
- xe = drm_open_driver(DRIVER_XE);
-
- reg = reg_info_for_name(reg_name);
- if (reg)
- return write_reg_for_info(xe, reg, ring, value);
-
- reg_addr = strtoul(reg_name, &endp, 16);
- if (!reg_addr || reg_addr >= (4 << 20) || *endp) {
- fprintf(stderr, "invalid reg address '%s'\n", reg_name);
- return EXIT_FAILURE;
- }
- write_reg_for_addr(xe, reg_addr, value);
-
- return 0;
-}
-
-int main(int argc, char *argv[])
-{
- if (argc < 2) {
- print_help(stderr);
- return EXIT_FAILURE;
- }
-
- if (strcmp(argv[1], "read") == 0)
- return read_reg(argc - 1, argv + 1);
- else if (strcmp(argv[1], "write") == 0)
- return write_reg(argc - 1, argv + 1);
-
- fprintf(stderr, "invalid sub-command: %s", argv[1]);
- return EXIT_FAILURE;
-}
--
2.41.0
* [igt-dev] [PATCH i-g-t 05/16] xe_exec_balancer: Enable parallel submission and compute mode
2023-09-19 14:19 [igt-dev] [PATCH i-g-t 00/16] uAPI Alignment - take 1 Rodrigo Vivi
` (3 preceding siblings ...)
2023-09-19 14:19 ` [igt-dev] [PATCH i-g-t 04/16] drm-uapi/xe_drm: Remove MMIO ioctl and " Rodrigo Vivi
@ 2023-09-19 14:19 ` Rodrigo Vivi
2023-09-19 14:19 ` [igt-dev] [PATCH i-g-t 06/16] xe_exec_threads: Use DRM_XE_VM_CREATE_COMPUTE_MODE when creating a compute VM Rodrigo Vivi
` (13 subsequent siblings)
18 siblings, 0 replies; 23+ messages in thread
From: Rodrigo Vivi @ 2023-09-19 14:19 UTC (permalink / raw)
To: intel-xe, igt-dev; +Cc: Rodrigo Vivi
From: Matthew Brost <matthew.brost@intel.com>
This is now supported. Test it.
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
tests/intel/xe_exec_balancer.c | 21 +++++++++++++++------
1 file changed, 15 insertions(+), 6 deletions(-)
diff --git a/tests/intel/xe_exec_balancer.c b/tests/intel/xe_exec_balancer.c
index 0314b4cd2..a4a438db7 100644
--- a/tests/intel/xe_exec_balancer.c
+++ b/tests/intel/xe_exec_balancer.c
@@ -383,6 +383,12 @@ test_exec(int fd, int gt, int class, int n_exec_queues, int n_execs,
* @virtual-userptr-rebind: virtual userptr rebind
* @virtual-userptr-invalidate: virtual userptr invalidate
* @virtual-userptr-invalidate-race: virtual userptr invalidate racy
+ * @parallel-basic: parallel basic
+ * @parallel-userptr: parallel userptr
+ * @parallel-rebind: parallel rebind
+ * @parallel-userptr-rebind: parallel userptr rebind
+ * @parallel-userptr-invalidate: parallel userptr invalidate
+ * @parallel-userptr-invalidate-race: parallel userptr invalidate racy
*/
static void
@@ -460,8 +466,8 @@ test_cm(int fd, int gt, int class, int n_exec_queues, int n_execs,
};
struct drm_xe_exec_queue_create create = {
.vm_id = vm,
- .width = 1,
- .num_placements = num_placements,
+ .width = flags & PARALLEL ? num_placements : 1,
+ .num_placements = flags & PARALLEL ? 1 : num_placements,
.instances = to_user_pointer(eci),
.extensions = to_user_pointer(&ext),
};
@@ -470,6 +476,7 @@ test_cm(int fd, int gt, int class, int n_exec_queues, int n_execs,
&create), 0);
exec_queues[i] = create.exec_queue_id;
}
+ exec.num_batch_buffer = flags & PARALLEL ? num_placements : 1;
sync[0].addr = to_user_pointer(&data[0].vm_sync);
if (bo)
@@ -487,8 +494,12 @@ test_cm(int fd, int gt, int class, int n_exec_queues, int n_execs,
uint64_t batch_addr = addr + batch_offset;
uint64_t sdi_offset = (char *)&data[i].data - (char *)data;
uint64_t sdi_addr = addr + sdi_offset;
+ uint64_t batches[MAX_INSTANCE];
int e = i % n_exec_queues;
+ for (j = 0; j < num_placements && flags & PARALLEL; ++j)
+ batches[j] = batch_addr;
+
b = 0;
data[i].batch[b++] = MI_STORE_DWORD_IMM_GEN4;
data[i].batch[b++] = sdi_addr;
@@ -500,7 +511,8 @@ test_cm(int fd, int gt, int class, int n_exec_queues, int n_execs,
sync[0].addr = addr + (char *)&data[i].exec_sync - (char *)data;
exec.exec_queue_id = exec_queues[e];
- exec.address = batch_addr;
+ exec.address = flags & PARALLEL ?
+ to_user_pointer(batches) : batch_addr;
xe_exec(fd, &exec);
if (flags & REBIND && i + 1 != n_execs) {
@@ -661,9 +673,6 @@ igt_main
test_exec(fd, gt, class, 1, 0,
s->flags);
- if (s->flags & PARALLEL)
- continue;
-
igt_subtest_f("once-cm-%s", s->name)
xe_for_each_gt(fd, gt)
xe_for_each_hw_engine_class(class)
--
2.41.0

* [igt-dev] [PATCH i-g-t 06/16] xe_exec_threads: Use DRM_XE_VM_CREATE_COMPUTE_MODE when creating a compute VM
2023-09-19 14:19 [igt-dev] [PATCH i-g-t 00/16] uAPI Alignment - take 1 Rodrigo Vivi
` (4 preceding siblings ...)
2023-09-19 14:19 ` [igt-dev] [PATCH i-g-t 05/16] xe_exec_balancer: Enable parallel submission and compute mode Rodrigo Vivi
@ 2023-09-19 14:19 ` Rodrigo Vivi
2023-09-19 14:19 ` [igt-dev] [PATCH i-g-t 07/16] xe: Update uAPI and remove XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE Rodrigo Vivi
` (12 subsequent siblings)
18 siblings, 0 replies; 23+ messages in thread
From: Rodrigo Vivi @ 2023-09-19 14:19 UTC (permalink / raw)
To: intel-xe, igt-dev; +Cc: Rodrigo Vivi
From: Matthew Brost <matthew.brost@intel.com>
XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE was used when creating a compute VM.
This just happened to work because it has the same value as
DRM_XE_VM_CREATE_COMPUTE_MODE. Fix this and use the correct flag,
DRM_XE_VM_CREATE_COMPUTE_MODE.
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
tests/intel/xe_exec_threads.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/tests/intel/xe_exec_threads.c b/tests/intel/xe_exec_threads.c
index 1f9af894f..d19708f80 100644
--- a/tests/intel/xe_exec_threads.c
+++ b/tests/intel/xe_exec_threads.c
@@ -286,7 +286,7 @@ test_compute_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
if (!vm) {
vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
- XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE, 0);
+ DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
owns_vm = true;
}
@@ -1076,7 +1076,7 @@ static void threads(int fd, int flags)
to_user_pointer(&ext));
vm_compute_mode = xe_vm_create(fd,
DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
- XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE,
+ DRM_XE_VM_CREATE_COMPUTE_MODE,
0);
vm_err_thread.capture = &capture;
--
2.41.0
* [igt-dev] [PATCH i-g-t 07/16] xe: Update uAPI and remove XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE
2023-09-19 14:19 [igt-dev] [PATCH i-g-t 00/16] uAPI Alignment - take 1 Rodrigo Vivi
` (5 preceding siblings ...)
2023-09-19 14:19 ` [igt-dev] [PATCH i-g-t 06/16] xe_exec_threads: Use DRM_XE_VM_CREATE_COMPUTE_MODE when creating a compute VM Rodrigo Vivi
@ 2023-09-19 14:19 ` Rodrigo Vivi
2023-09-19 14:19 ` [igt-dev] [PATCH i-g-t 08/16] drm-uapi/xe: Use common drm_xe_ext_set_property extension Rodrigo Vivi
` (11 subsequent siblings)
18 siblings, 0 replies; 23+ messages in thread
From: Rodrigo Vivi @ 2023-09-19 14:19 UTC (permalink / raw)
To: intel-xe, igt-dev; +Cc: Rodrigo Vivi
From: Matthew Brost <matthew.brost@intel.com>
XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE has been removed from the uAPI;
remove all references from the Xe tests.
Align with commits
("drm/xe: Remove XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE from uAPI") and
("drm/xe: Deprecate XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE implementation")
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
[Rodrigo updated header with built version from make header_install]
[Rodrigo added the commit subjects of the kernel uapi changes]
---
include/drm-uapi/xe_drm.h | 21 +++++++--------------
tests/intel/xe_evict.c | 14 +++-----------
tests/intel/xe_exec_balancer.c | 8 +-------
tests/intel/xe_exec_compute_mode.c | 20 ++------------------
tests/intel/xe_exec_reset.c | 10 ++--------
tests/intel/xe_exec_threads.c | 13 ++-----------
tests/intel/xe_noexec_ping_pong.c | 10 +---------
7 files changed, 18 insertions(+), 78 deletions(-)
diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index 1d869f5e8..a9060bcf8 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -784,21 +784,14 @@ struct drm_xe_exec_queue_set_property {
/** @exec_queue_id: Exec queue ID */
__u32 exec_queue_id;
-#define XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY 0
+#define XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY 0
#define XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE 1
#define XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT 2
- /*
- * Long running or ULLS engine mode. DMA fences not allowed in this
- * mode. Must match the value of DRM_XE_VM_CREATE_COMPUTE_MODE, serves
- * as a sanity check the UMD knows what it is doing. Can only be set at
- * engine create time.
- */
-#define XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE 3
-#define XE_EXEC_QUEUE_SET_PROPERTY_PERSISTENCE 4
-#define XE_EXEC_QUEUE_SET_PROPERTY_JOB_TIMEOUT 5
-#define XE_EXEC_QUEUE_SET_PROPERTY_ACC_TRIGGER 6
-#define XE_EXEC_QUEUE_SET_PROPERTY_ACC_NOTIFY 7
-#define XE_EXEC_QUEUE_SET_PROPERTY_ACC_GRANULARITY 8
+#define XE_EXEC_QUEUE_SET_PROPERTY_PERSISTENCE 3
+#define XE_EXEC_QUEUE_SET_PROPERTY_JOB_TIMEOUT 4
+#define XE_EXEC_QUEUE_SET_PROPERTY_ACC_TRIGGER 5
+#define XE_EXEC_QUEUE_SET_PROPERTY_ACC_NOTIFY 6
+#define XE_EXEC_QUEUE_SET_PROPERTY_ACC_GRANULARITY 7
/** @property: property to set */
__u32 property;
@@ -1092,7 +1085,7 @@ struct drm_xe_vm_madvise {
};
/**
- * XE PMU event config IDs
+ * DOC: XE PMU event config IDs
*
* Check 'man perf_event_open' to use these ID's in 'struct perf_event_attr'
* as part of perf_event_open syscall to read a particular event.
diff --git a/tests/intel/xe_evict.c b/tests/intel/xe_evict.c
index 5b64e56b4..5d8981f8d 100644
--- a/tests/intel/xe_evict.c
+++ b/tests/intel/xe_evict.c
@@ -252,19 +252,11 @@ test_evict_cm(int fd, struct drm_xe_engine_class_instance *eci,
}
for (i = 0; i < n_exec_queues; i++) {
- struct drm_xe_ext_exec_queue_set_property ext = {
- .base.next_extension = 0,
- .base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
- .property = XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE,
- .value = 1,
- };
-
if (flags & MULTI_VM)
- exec_queues[i] = xe_exec_queue_create(fd, i & 1 ? vm2 : vm, eci,
- to_user_pointer(&ext));
+ exec_queues[i] = xe_exec_queue_create(fd, i & 1 ? vm2 :
+ vm, eci, 0);
else
- exec_queues[i] = xe_exec_queue_create(fd, vm, eci,
- to_user_pointer(&ext));
+ exec_queues[i] = xe_exec_queue_create(fd, vm, eci, 0);
}
for (i = 0; i < n_execs; i++) {
diff --git a/tests/intel/xe_exec_balancer.c b/tests/intel/xe_exec_balancer.c
index a4a438db7..f4f5440f4 100644
--- a/tests/intel/xe_exec_balancer.c
+++ b/tests/intel/xe_exec_balancer.c
@@ -458,18 +458,12 @@ test_cm(int fd, int gt, int class, int n_exec_queues, int n_execs,
memset(data, 0, bo_size);
for (i = 0; i < n_exec_queues; i++) {
- struct drm_xe_ext_exec_queue_set_property ext = {
- .base.next_extension = 0,
- .base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
- .property = XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE,
- .value = 1,
- };
struct drm_xe_exec_queue_create create = {
.vm_id = vm,
.width = flags & PARALLEL ? num_placements : 1,
.num_placements = flags & PARALLEL ? 1 : num_placements,
.instances = to_user_pointer(eci),
- .extensions = to_user_pointer(&ext),
+ .extensions = 0,
};
igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_EXEC_QUEUE_CREATE,
diff --git a/tests/intel/xe_exec_compute_mode.c b/tests/intel/xe_exec_compute_mode.c
index 6d1084727..02e7ef201 100644
--- a/tests/intel/xe_exec_compute_mode.c
+++ b/tests/intel/xe_exec_compute_mode.c
@@ -120,15 +120,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
xe_get_default_alignment(fd));
for (i = 0; (flags & EXEC_QUEUE_EARLY) && i < n_exec_queues; i++) {
- struct drm_xe_ext_exec_queue_set_property ext = {
- .base.next_extension = 0,
- .base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
- .property = XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE,
- .value = 1,
- };
-
- exec_queues[i] = xe_exec_queue_create(fd, vm, eci,
- to_user_pointer(&ext));
+ exec_queues[i] = xe_exec_queue_create(fd, vm, eci, 0);
if (flags & BIND_EXECQUEUE)
bind_exec_queues[i] =
xe_bind_exec_queue_create(fd, vm, 0);
@@ -156,15 +148,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
memset(data, 0, bo_size);
for (i = 0; !(flags & EXEC_QUEUE_EARLY) && i < n_exec_queues; i++) {
- struct drm_xe_ext_exec_queue_set_property ext = {
- .base.next_extension = 0,
- .base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
- .property = XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE,
- .value = 1,
- };
-
- exec_queues[i] = xe_exec_queue_create(fd, vm, eci,
- to_user_pointer(&ext));
+ exec_queues[i] = xe_exec_queue_create(fd, vm, eci, 0);
if (flags & BIND_EXECQUEUE)
bind_exec_queues[i] =
xe_bind_exec_queue_create(fd, vm, 0);
diff --git a/tests/intel/xe_exec_reset.c b/tests/intel/xe_exec_reset.c
index 6e3f0aa4b..68e17cc98 100644
--- a/tests/intel/xe_exec_reset.c
+++ b/tests/intel/xe_exec_reset.c
@@ -540,14 +540,8 @@ test_compute_mode(int fd, struct drm_xe_engine_class_instance *eci,
memset(data, 0, bo_size);
for (i = 0; i < n_exec_queues; i++) {
- struct drm_xe_ext_exec_queue_set_property compute = {
- .base.next_extension = 0,
- .base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
- .property = XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE,
- .value = 1,
- };
struct drm_xe_ext_exec_queue_set_property preempt_timeout = {
- .base.next_extension = to_user_pointer(&compute),
+ .base.next_extension = 0,
.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
.property = XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT,
.value = 1000,
@@ -557,7 +551,7 @@ test_compute_mode(int fd, struct drm_xe_engine_class_instance *eci,
if (flags & EXEC_QUEUE_RESET)
ext = to_user_pointer(&preempt_timeout);
else
- ext = to_user_pointer(&compute);
+ ext = 0;
exec_queues[i] = xe_exec_queue_create(fd, vm, eci, ext);
};
diff --git a/tests/intel/xe_exec_threads.c b/tests/intel/xe_exec_threads.c
index d19708f80..306d8113d 100644
--- a/tests/intel/xe_exec_threads.c
+++ b/tests/intel/xe_exec_threads.c
@@ -313,17 +313,8 @@ test_compute_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
}
memset(data, 0, bo_size);
- for (i = 0; i < n_exec_queues; i++) {
- struct drm_xe_ext_exec_queue_set_property ext = {
- .base.next_extension = 0,
- .base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
- .property = XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE,
- .value = 1,
- };
-
- exec_queues[i] = xe_exec_queue_create(fd, vm, eci,
- to_user_pointer(&ext));
- };
+ for (i = 0; i < n_exec_queues; i++)
+ exec_queues[i] = xe_exec_queue_create(fd, vm, eci, 0);
pthread_barrier_wait(&barrier);
diff --git a/tests/intel/xe_noexec_ping_pong.c b/tests/intel/xe_noexec_ping_pong.c
index 3f486adf9..88b22ed11 100644
--- a/tests/intel/xe_noexec_ping_pong.c
+++ b/tests/intel/xe_noexec_ping_pong.c
@@ -64,13 +64,6 @@ static void test_ping_pong(int fd, struct drm_xe_engine_class_instance *eci)
* stats.
*/
for (i = 0; i < NUM_VMS; ++i) {
- struct drm_xe_ext_exec_queue_set_property ext = {
- .base.next_extension = 0,
- .base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
- .property = XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE,
- .value = 1,
- };
-
vm[i] = xe_vm_create(fd, DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
for (j = 0; j < NUM_BOS; ++j) {
igt_debug("Creating bo size %lu for vm %u\n",
@@ -82,8 +75,7 @@ static void test_ping_pong(int fd, struct drm_xe_engine_class_instance *eci)
xe_vm_bind(fd, vm[i], bo[i][j], 0, 0x40000 + j*bo_size,
bo_size, NULL, 0);
}
- exec_queues[i] = xe_exec_queue_create(fd, vm[i], eci,
- to_user_pointer(&ext));
+ exec_queues[i] = xe_exec_queue_create(fd, vm[i], eci, 0);
}
igt_info("Now sleeping for %ds.\n", SECONDS_TO_WAIT);
--
2.41.0
^ permalink raw reply related [flat|nested] 23+ messages in thread* [igt-dev] [PATCH i-g-t 08/16] drm-uapi/xe: Use common drm_xe_ext_set_property extension
2023-09-19 14:19 [igt-dev] [PATCH i-g-t 00/16] uAPI Alignment - take 1 Rodrigo Vivi
` (6 preceding siblings ...)
2023-09-19 14:19 ` [igt-dev] [PATCH i-g-t 07/16] xe: Update uAPI and remove XE_EXEC_QUEUE_SET_PROPERTY_COMPUTE_MODE Rodrigo Vivi
@ 2023-09-19 14:19 ` Rodrigo Vivi
2023-09-19 14:19 ` [igt-dev] [PATCH i-g-t 09/16] drm-uapi: Kill XE_VM_PROPERTY_BIND_OP_ERROR_CAPTURE_ADDRESS extension Rodrigo Vivi
` (10 subsequent siblings)
18 siblings, 0 replies; 23+ messages in thread
From: Rodrigo Vivi @ 2023-09-19 14:19 UTC (permalink / raw)
To: intel-xe, igt-dev; +Cc: Rodrigo Vivi
Align with commit ("drm/xe/uapi: Use common drm_xe_ext_set_property extension")
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
include/drm-uapi/xe_drm.h | 21 +++------------------
tests/intel/xe_exec_reset.c | 10 +++++-----
tests/intel/xe_exec_threads.c | 4 ++--
tests/intel/xe_vm.c | 2 +-
4 files changed, 11 insertions(+), 26 deletions(-)
diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index a9060bcf8..66acf49c4 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -572,12 +572,11 @@ struct drm_xe_vm_bind_op_error_capture {
__u64 size;
};
-/** struct drm_xe_ext_vm_set_property - VM set property extension */
-struct drm_xe_ext_vm_set_property {
+/** struct drm_xe_ext_set_property - XE set property extension */
+struct drm_xe_ext_set_property {
/** @base: base user extension */
struct xe_user_extension base;
-#define XE_VM_PROPERTY_BIND_OP_ERROR_CAPTURE_ADDRESS 0
/** @property: property to set */
__u32 property;
@@ -593,6 +592,7 @@ struct drm_xe_ext_vm_set_property {
struct drm_xe_vm_create {
#define XE_VM_EXTENSION_SET_PROPERTY 0
+#define XE_VM_PROPERTY_BIND_OP_ERROR_CAPTURE_ADDRESS 0
/** @extensions: Pointer to the first extension struct, if any */
__u64 extensions;
@@ -757,21 +757,6 @@ struct drm_xe_vm_bind {
__u64 reserved[2];
};
-/** struct drm_xe_ext_exec_queue_set_property - exec queue set property extension */
-struct drm_xe_ext_exec_queue_set_property {
- /** @base: base user extension */
- struct xe_user_extension base;
-
- /** @property: property to set */
- __u32 property;
-
- /** @pad: MBZ */
- __u32 pad;
-
- /** @value: property value */
- __u64 value;
-};
-
/**
* struct drm_xe_exec_queue_set_property - exec queue set property
*
diff --git a/tests/intel/xe_exec_reset.c b/tests/intel/xe_exec_reset.c
index 68e17cc98..ca8d7cc13 100644
--- a/tests/intel/xe_exec_reset.c
+++ b/tests/intel/xe_exec_reset.c
@@ -185,13 +185,13 @@ test_balancer(int fd, int gt, int class, int n_exec_queues, int n_execs,
data = xe_bo_map(fd, bo, bo_size);
for (i = 0; i < n_exec_queues; i++) {
- struct drm_xe_ext_exec_queue_set_property job_timeout = {
+ struct drm_xe_ext_set_property job_timeout = {
.base.next_extension = 0,
.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
.property = XE_EXEC_QUEUE_SET_PROPERTY_JOB_TIMEOUT,
.value = 50,
};
- struct drm_xe_ext_exec_queue_set_property preempt_timeout = {
+ struct drm_xe_ext_set_property preempt_timeout = {
.base.next_extension = 0,
.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
.property = XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT,
@@ -372,13 +372,13 @@ test_legacy_mode(int fd, struct drm_xe_engine_class_instance *eci,
data = xe_bo_map(fd, bo, bo_size);
for (i = 0; i < n_exec_queues; i++) {
- struct drm_xe_ext_exec_queue_set_property job_timeout = {
+ struct drm_xe_ext_set_property job_timeout = {
.base.next_extension = 0,
.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
.property = XE_EXEC_QUEUE_SET_PROPERTY_JOB_TIMEOUT,
.value = 50,
};
- struct drm_xe_ext_exec_queue_set_property preempt_timeout = {
+ struct drm_xe_ext_set_property preempt_timeout = {
.base.next_extension = 0,
.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
.property = XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT,
@@ -540,7 +540,7 @@ test_compute_mode(int fd, struct drm_xe_engine_class_instance *eci,
memset(data, 0, bo_size);
for (i = 0; i < n_exec_queues; i++) {
- struct drm_xe_ext_exec_queue_set_property preempt_timeout = {
+ struct drm_xe_ext_set_property preempt_timeout = {
.base.next_extension = 0,
.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
.property = XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT,
diff --git a/tests/intel/xe_exec_threads.c b/tests/intel/xe_exec_threads.c
index 306d8113d..b22c9c052 100644
--- a/tests/intel/xe_exec_threads.c
+++ b/tests/intel/xe_exec_threads.c
@@ -518,7 +518,7 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
memset(sync_all, 0, sizeof(sync_all));
for (i = 0; i < n_exec_queues; i++) {
- struct drm_xe_ext_exec_queue_set_property preempt_timeout = {
+ struct drm_xe_ext_set_property preempt_timeout = {
.base.next_extension = 0,
.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
.property = XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT,
@@ -1054,7 +1054,7 @@ static void threads(int fd, int flags)
pthread_cond_init(&cond, 0);
if (flags & SHARED_VM) {
- struct drm_xe_ext_vm_set_property ext = {
+ struct drm_xe_ext_set_property ext = {
.base.next_extension = 0,
.base.name = XE_VM_EXTENSION_SET_PROPERTY,
.property =
diff --git a/tests/intel/xe_vm.c b/tests/intel/xe_vm.c
index f96305851..75e7a384b 100644
--- a/tests/intel/xe_vm.c
+++ b/tests/intel/xe_vm.c
@@ -404,7 +404,7 @@ static void vm_async_ops_err(int fd, bool destroy)
};
#define N_BINDS 32
struct drm_xe_vm_bind_op_error_capture capture = {};
- struct drm_xe_ext_vm_set_property ext = {
+ struct drm_xe_ext_set_property ext = {
.base.next_extension = 0,
.base.name = XE_VM_EXTENSION_SET_PROPERTY,
.property = XE_VM_PROPERTY_BIND_OP_ERROR_CAPTURE_ADDRESS,
--
2.41.0
* [igt-dev] [PATCH i-g-t 09/16] drm-uapi: Kill XE_VM_PROPERTY_BIND_OP_ERROR_CAPTURE_ADDRESS extension
2023-09-19 14:19 [igt-dev] [PATCH i-g-t 00/16] uAPI Alignment - take 1 Rodrigo Vivi
` (7 preceding siblings ...)
2023-09-19 14:19 ` [igt-dev] [PATCH i-g-t 08/16] drm-uapi/xe: Use common drm_xe_ext_set_property extension Rodrigo Vivi
@ 2023-09-19 14:19 ` Rodrigo Vivi
2023-09-19 14:19 ` [igt-dev] [PATCH i-g-t 10/16] drm-uapi/xe: Replace useless 'instance' per unique gt_id Rodrigo Vivi
` (9 subsequent siblings)
18 siblings, 0 replies; 23+ messages in thread
From: Rodrigo Vivi @ 2023-09-19 14:19 UTC (permalink / raw)
To: intel-xe, igt-dev; +Cc: Rodrigo Vivi
Align with commit ("drm/xe: Kill XE_VM_PROPERTY_BIND_OP_ERROR_CAPTURE_ADDRESS extension")
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
include/drm-uapi/xe_drm.h | 23 +----------------------
tests/intel/xe_exec_threads.c | 14 +-------------
tests/intel/xe_vm.c | 13 +------------
3 files changed, 3 insertions(+), 47 deletions(-)
diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index 66acf49c4..336b77074 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -555,23 +555,6 @@ struct drm_xe_gem_mmap_offset {
__u64 reserved[2];
};
-/**
- * struct drm_xe_vm_bind_op_error_capture - format of VM bind op error capture
- */
-struct drm_xe_vm_bind_op_error_capture {
- /** @error: errno that occurred */
- __s32 error;
-
- /** @op: operation that encounter an error */
- __u32 op;
-
- /** @addr: address of bind op */
- __u64 addr;
-
- /** @size: size of bind */
- __u64 size;
-};
-
/** struct drm_xe_ext_set_property - XE set property extension */
struct drm_xe_ext_set_property {
/** @base: base user extension */
@@ -592,7 +575,6 @@ struct drm_xe_ext_set_property {
struct drm_xe_vm_create {
#define XE_VM_EXTENSION_SET_PROPERTY 0
-#define XE_VM_PROPERTY_BIND_OP_ERROR_CAPTURE_ADDRESS 0
/** @extensions: Pointer to the first extension struct, if any */
__u64 extensions;
@@ -677,10 +659,7 @@ struct drm_xe_vm_bind_op {
* practice the bind op is good and will complete.
*
* If this flag is set and doesn't return an error, the bind op can
- * still fail and recovery is needed. If configured, the bind op that
- * caused the error will be captured in drm_xe_vm_bind_op_error_capture.
- * Once the user sees the error (via a ufence +
- * XE_VM_PROPERTY_BIND_OP_ERROR_CAPTURE_ADDRESS), it should free memory
+ * still fail and recovery is needed. It should free memory
* via non-async unbinds, and then restart all queued async binds op via
* XE_VM_BIND_OP_RESTART. Or alternatively the user should destroy the
* VM.
diff --git a/tests/intel/xe_exec_threads.c b/tests/intel/xe_exec_threads.c
index b22c9c052..c9a51fc00 100644
--- a/tests/intel/xe_exec_threads.c
+++ b/tests/intel/xe_exec_threads.c
@@ -740,7 +740,6 @@ static void *thread(void *data)
struct vm_thread_data {
pthread_t thread;
- struct drm_xe_vm_bind_op_error_capture *capture;
int fd;
int vm;
};
@@ -772,7 +771,6 @@ static void *vm_async_ops_err_thread(void *data)
/* Restart and wait for next error */
igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_VM_BIND,
&bind), 0);
- args->capture->error = 0;
ret = igt_ioctl(fd, DRM_IOCTL_XE_WAIT_USER_FENCE, &wait);
}
@@ -1021,7 +1019,6 @@ static void threads(int fd, int flags)
int n_hw_engines = 0, class;
uint64_t i = 0;
uint32_t vm_legacy_mode = 0, vm_compute_mode = 0;
- struct drm_xe_vm_bind_op_error_capture capture = {};
struct vm_thread_data vm_err_thread = {};
bool go = false;
int n_threads = 0;
@@ -1054,23 +1051,14 @@ static void threads(int fd, int flags)
pthread_cond_init(&cond, 0);
if (flags & SHARED_VM) {
- struct drm_xe_ext_set_property ext = {
- .base.next_extension = 0,
- .base.name = XE_VM_EXTENSION_SET_PROPERTY,
- .property =
- XE_VM_PROPERTY_BIND_OP_ERROR_CAPTURE_ADDRESS,
- .value = to_user_pointer(&capture),
- };
-
vm_legacy_mode = xe_vm_create(fd,
DRM_XE_VM_CREATE_ASYNC_BIND_OPS,
- to_user_pointer(&ext));
+ 0);
vm_compute_mode = xe_vm_create(fd,
DRM_XE_VM_CREATE_ASYNC_BIND_OPS |
DRM_XE_VM_CREATE_COMPUTE_MODE,
0);
- vm_err_thread.capture = &capture;
vm_err_thread.fd = fd;
vm_err_thread.vm = vm_legacy_mode;
pthread_create(&vm_err_thread.thread, 0,
diff --git a/tests/intel/xe_vm.c b/tests/intel/xe_vm.c
index 75e7a384b..89df6149a 100644
--- a/tests/intel/xe_vm.c
+++ b/tests/intel/xe_vm.c
@@ -324,7 +324,6 @@ static void userptr_invalid(int fd)
struct vm_thread_data {
pthread_t thread;
- struct drm_xe_vm_bind_op_error_capture *capture;
int fd;
int vm;
uint32_t bo;
@@ -388,7 +387,6 @@ static void *vm_async_ops_err_thread(void *data)
/* Restart and wait for next error */
igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_VM_BIND,
&bind), 0);
- args->capture->error = 0;
ret = igt_ioctl(fd, DRM_IOCTL_XE_WAIT_USER_FENCE, &wait);
}
@@ -403,24 +401,15 @@ static void vm_async_ops_err(int fd, bool destroy)
.flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL,
};
#define N_BINDS 32
- struct drm_xe_vm_bind_op_error_capture capture = {};
- struct drm_xe_ext_set_property ext = {
- .base.next_extension = 0,
- .base.name = XE_VM_EXTENSION_SET_PROPERTY,
- .property = XE_VM_PROPERTY_BIND_OP_ERROR_CAPTURE_ADDRESS,
- .value = to_user_pointer(&capture),
- };
struct vm_thread_data thread = {};
uint32_t syncobjs[N_BINDS];
size_t bo_size = 0x1000 * 32;
uint32_t bo;
int i, j;
- vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS,
- to_user_pointer(&ext));
+ vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
bo = xe_bo_create(fd, 0, vm, bo_size);
- thread.capture = &capture;
thread.fd = fd;
thread.vm = vm;
thread.bo = bo;
--
2.41.0
* [igt-dev] [PATCH i-g-t 10/16] drm-uapi/xe: Replace useless 'instance' per unique gt_id
2023-09-19 14:19 [igt-dev] [PATCH i-g-t 00/16] uAPI Alignment - take 1 Rodrigo Vivi
` (8 preceding siblings ...)
2023-09-19 14:19 ` [igt-dev] [PATCH i-g-t 09/16] drm-uapi: Kill XE_VM_PROPERTY_BIND_OP_ERROR_CAPTURE_ADDRESS extension Rodrigo Vivi
@ 2023-09-19 14:19 ` Rodrigo Vivi
2023-09-19 14:19 ` [igt-dev] [PATCH i-g-t 11/16] drm-uapi/xe: Remove unused field of drm_xe_query_gt Rodrigo Vivi
` (8 subsequent siblings)
18 siblings, 0 replies; 23+ messages in thread
From: Rodrigo Vivi @ 2023-09-19 14:19 UTC (permalink / raw)
To: intel-xe, igt-dev; +Cc: Rodrigo Vivi
Align with commit ("drm/xe/uapi: Replace useless 'instance' per unique gt_id")
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
include/drm-uapi/xe_drm.h | 65 ++++++++++++++++++++++++++-------------
tests/intel/xe_query.c | 2 +-
2 files changed, 44 insertions(+), 23 deletions(-)
diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index 336b77074..544f2f14b 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -336,6 +336,47 @@ struct drm_xe_query_config {
__u64 info[];
};
+/**
+ * struct drm_xe_query_gt - describe an individual GT.
+ *
+ * To be used with drm_xe_query_gts, which will return a list with all the
+ * existing GT individual descriptions.
+ * Graphics Technology (GT) is a subset of a GPU/tile that is responsible for
+ * implementing graphics and/or media operations.
+ */
+struct drm_xe_query_gt {
+#define XE_QUERY_GT_TYPE_MAIN 0
+#define XE_QUERY_GT_TYPE_REMOTE 1
+#define XE_QUERY_GT_TYPE_MEDIA 2
+ /** @type: GT type: Main, Remote, or Media */
+ __u16 type;
+ /** @gt_id: Unique ID of this GT within the PCI Device */
+ __u16 gt_id;
+ /** @clock_freq: A clock frequency for timestamp */
+ __u32 clock_freq;
+ /** @features: Reserved for future information about GT features */
+ __u64 features;
+ /**
+ * @native_mem_regions: Bit mask of instances from
+ * drm_xe_query_mem_usage that lives on the same GPU/Tile and have
+ * direct access.
+ */
+ __u64 native_mem_regions;
+ /**
+ * @slow_mem_regions: Bit mask of instances from
+ * drm_xe_query_mem_usage that this GT can indirectly access, although
+ * they live on a different GPU/Tile.
+ */
+ __u64 slow_mem_regions;
+ /**
+ * @inaccessible_mem_regions: Bit mask of instances from
+ * drm_xe_query_mem_usage that is not accessible by this GT at all.
+ */
+ __u64 inaccessible_mem_regions;
+ /** @reserved: Reserved */
+ __u64 reserved[8];
+};
+
/**
* struct drm_xe_query_gts - describe GTs
*
@@ -346,30 +387,10 @@ struct drm_xe_query_config {
struct drm_xe_query_gts {
/** @num_gt: number of GTs returned in gts */
__u32 num_gt;
-
/** @pad: MBZ */
__u32 pad;
-
- /**
- * @gts: The GTs returned for this device
- *
- * TODO: convert drm_xe_query_gt to proper kernel-doc.
- * TODO: Perhaps info about every mem region relative to this GT? e.g.
- * bandwidth between this GT and remote region?
- */
- struct drm_xe_query_gt {
-#define XE_QUERY_GT_TYPE_MAIN 0
-#define XE_QUERY_GT_TYPE_REMOTE 1
-#define XE_QUERY_GT_TYPE_MEDIA 2
- __u16 type;
- __u16 instance;
- __u32 clock_freq;
- __u64 features;
- __u64 native_mem_regions; /* bit mask of instances from drm_xe_query_mem_usage */
- __u64 slow_mem_regions; /* bit mask of instances from drm_xe_query_mem_usage */
- __u64 inaccessible_mem_regions; /* bit mask of instances from drm_xe_query_mem_usage */
- __u64 reserved[8];
- } gts[];
+ /** @gts: The GT list returned for this device */
+ struct drm_xe_query_gt gts[];
};
/**
diff --git a/tests/intel/xe_query.c b/tests/intel/xe_query.c
index acf069f46..eb8d52897 100644
--- a/tests/intel/xe_query.c
+++ b/tests/intel/xe_query.c
@@ -279,7 +279,7 @@ test_query_gts(int fd)
for (i = 0; i < gts->num_gt; i++) {
igt_info("type: %d\n", gts->gts[i].type);
- igt_info("instance: %d\n", gts->gts[i].instance);
+ igt_info("gt_id: %d\n", gts->gts[i].gt_id);
igt_info("clock_freq: %u\n", gts->gts[i].clock_freq);
igt_info("features: 0x%016llx\n", gts->gts[i].features);
igt_info("native_mem_regions: 0x%016llx\n",
--
2.41.0
* [igt-dev] [PATCH i-g-t 11/16] drm-uapi/xe: Remove unused field of drm_xe_query_gt
2023-09-19 14:19 [igt-dev] [PATCH i-g-t 00/16] uAPI Alignment - take 1 Rodrigo Vivi
` (9 preceding siblings ...)
2023-09-19 14:19 ` [igt-dev] [PATCH i-g-t 10/16] drm-uapi/xe: Replace useless 'instance' per unique gt_id Rodrigo Vivi
@ 2023-09-19 14:19 ` Rodrigo Vivi
2023-09-19 14:19 ` [igt-dev] [PATCH i-g-t 12/16] drm-uapi/xe: Rename gts to gt_list Rodrigo Vivi
` (7 subsequent siblings)
18 siblings, 0 replies; 23+ messages in thread
From: Rodrigo Vivi @ 2023-09-19 14:19 UTC (permalink / raw)
To: intel-xe, igt-dev; +Cc: Rodrigo Vivi
Align with commit ("drm/xe/uapi: Remove unused field of drm_xe_query_gt")
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
include/drm-uapi/xe_drm.h | 2 --
tests/intel/xe_query.c | 1 -
2 files changed, 3 deletions(-)
diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index 544f2f14b..6ba86c1f1 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -354,8 +354,6 @@ struct drm_xe_query_gt {
__u16 gt_id;
/** @clock_freq: A clock frequency for timestamp */
__u32 clock_freq;
- /** @features: Reserved for future information about GT features */
- __u64 features;
/**
* @native_mem_regions: Bit mask of instances from
* drm_xe_query_mem_usage that lives on the same GPU/Tile and have
diff --git a/tests/intel/xe_query.c b/tests/intel/xe_query.c
index eb8d52897..3aa2918f0 100644
--- a/tests/intel/xe_query.c
+++ b/tests/intel/xe_query.c
@@ -281,7 +281,6 @@ test_query_gts(int fd)
igt_info("type: %d\n", gts->gts[i].type);
igt_info("gt_id: %d\n", gts->gts[i].gt_id);
igt_info("clock_freq: %u\n", gts->gts[i].clock_freq);
- igt_info("features: 0x%016llx\n", gts->gts[i].features);
igt_info("native_mem_regions: 0x%016llx\n",
gts->gts[i].native_mem_regions);
igt_info("slow_mem_regions: 0x%016llx\n",
--
2.41.0
* [igt-dev] [PATCH i-g-t 12/16] drm-uapi/xe: Rename gts to gt_list
2023-09-19 14:19 [igt-dev] [PATCH i-g-t 00/16] uAPI Alignment - take 1 Rodrigo Vivi
` (10 preceding siblings ...)
2023-09-19 14:19 ` [igt-dev] [PATCH i-g-t 11/16] drm-uapi/xe: Remove unused field of drm_xe_query_gt Rodrigo Vivi
@ 2023-09-19 14:19 ` Rodrigo Vivi
2023-09-19 14:19 ` [igt-dev] [PATCH i-g-t 13/16] drm-uapi/xe: Fix naming of XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY Rodrigo Vivi
` (6 subsequent siblings)
18 siblings, 0 replies; 23+ messages in thread
From: Rodrigo Vivi @ 2023-09-19 14:19 UTC (permalink / raw)
To: intel-xe, igt-dev; +Cc: Rodrigo Vivi
Align with commit ("drm/xe/uapi: Rename gts to gt_list")
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
include/drm-uapi/xe_drm.h | 18 ++++----
lib/xe/xe_query.c | 52 ++++++++++++------------
lib/xe/xe_query.h | 10 ++---
lib/xe/xe_spin.c | 6 +--
tests/intel-ci/xe-fast-feedback.testlist | 2 +-
tests/intel/xe_query.c | 36 ++++++++--------
6 files changed, 62 insertions(+), 62 deletions(-)
diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index 6ba86c1f1..69b62e84f 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -339,7 +339,7 @@ struct drm_xe_query_config {
/**
* struct drm_xe_query_gt - describe an individual GT.
*
- * To be used with drm_xe_query_gts, which will return a list with all the
+ * To be used with drm_xe_query_gt_list, which will return a list with all the
* existing GT individual descriptions.
* Graphics Technology (GT) is a subset of a GPU/tile that is responsible for
* implementing graphics and/or media operations.
@@ -376,19 +376,19 @@ struct drm_xe_query_gt {
};
/**
- * struct drm_xe_query_gts - describe GTs
+ * struct drm_xe_query_gt_list - A list with GT description items.
*
* If a query is made with a struct drm_xe_device_query where .query
- * is equal to DRM_XE_DEVICE_QUERY_GTS, then the reply uses struct
- * drm_xe_query_gts in .data.
+ * is equal to DRM_XE_DEVICE_QUERY_GT_LIST, then the reply uses struct
+ * drm_xe_query_gt_list in .data.
*/
-struct drm_xe_query_gts {
- /** @num_gt: number of GTs returned in gts */
+struct drm_xe_query_gt_list {
+ /** @num_gt: number of GT items returned in gt_list */
__u32 num_gt;
/** @pad: MBZ */
__u32 pad;
- /** @gts: The GT list returned for this device */
- struct drm_xe_query_gt gts[];
+ /** @gt_list: The GT list returned for this device */
+ struct drm_xe_query_gt gt_list[];
};
/**
@@ -481,7 +481,7 @@ struct drm_xe_device_query {
#define DRM_XE_DEVICE_QUERY_ENGINES 0
#define DRM_XE_DEVICE_QUERY_MEM_USAGE 1
#define DRM_XE_DEVICE_QUERY_CONFIG 2
-#define DRM_XE_DEVICE_QUERY_GTS 3
+#define DRM_XE_DEVICE_QUERY_GT_LIST 3
#define DRM_XE_DEVICE_QUERY_HWCONFIG 4
#define DRM_XE_DEVICE_QUERY_GT_TOPOLOGY 5
#define DRM_XE_QUERY_CS_CYCLES 6
diff --git a/lib/xe/xe_query.c b/lib/xe/xe_query.c
index c356abe1e..b018c7535 100644
--- a/lib/xe/xe_query.c
+++ b/lib/xe/xe_query.c
@@ -39,35 +39,35 @@ static struct drm_xe_query_config *xe_query_config_new(int fd)
return config;
}
-static struct drm_xe_query_gts *xe_query_gts_new(int fd)
+static struct drm_xe_query_gt_list *xe_query_gt_list_new(int fd)
{
- struct drm_xe_query_gts *gts;
+ struct drm_xe_query_gt_list *gt_list;
struct drm_xe_device_query query = {
.extensions = 0,
- .query = DRM_XE_DEVICE_QUERY_GTS,
+ .query = DRM_XE_DEVICE_QUERY_GT_LIST,
.size = 0,
.data = 0,
};
igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
- gts = malloc(query.size);
- igt_assert(gts);
+ gt_list = malloc(query.size);
+ igt_assert(gt_list);
- query.data = to_user_pointer(gts);
+ query.data = to_user_pointer(gt_list);
igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
- return gts;
+ return gt_list;
}
-static uint64_t __memory_regions(const struct drm_xe_query_gts *gts)
+static uint64_t __memory_regions(const struct drm_xe_query_gt_list *gt_list)
{
uint64_t regions = 0;
int i;
- for (i = 0; i < gts->num_gt; i++)
- regions |= gts->gts[i].native_mem_regions |
- gts->gts[i].slow_mem_regions;
+ for (i = 0; i < gt_list->num_gt; i++)
+ regions |= gt_list->gt_list[i].native_mem_regions |
+ gt_list->gt_list[i].slow_mem_regions;
return regions;
}
@@ -118,21 +118,21 @@ static struct drm_xe_query_mem_usage *xe_query_mem_usage_new(int fd)
return mem_usage;
}
-static uint64_t native_region_for_gt(const struct drm_xe_query_gts *gts, int gt)
+static uint64_t native_region_for_gt(const struct drm_xe_query_gt_list *gt_list, int gt)
{
uint64_t region;
- igt_assert(gts->num_gt > gt);
- region = gts->gts[gt].native_mem_regions;
+ igt_assert(gt_list->num_gt > gt);
+ region = gt_list->gt_list[gt].native_mem_regions;
igt_assert(region);
return region;
}
static uint64_t gt_vram_size(const struct drm_xe_query_mem_usage *mem_usage,
- const struct drm_xe_query_gts *gts, int gt)
+ const struct drm_xe_query_gt_list *gt_list, int gt)
{
- int region_idx = ffs(native_region_for_gt(gts, gt)) - 1;
+ int region_idx = ffs(native_region_for_gt(gt_list, gt)) - 1;
if (XE_IS_CLASS_VRAM(&mem_usage->regions[region_idx]))
return mem_usage->regions[region_idx].total_size;
@@ -141,9 +141,9 @@ static uint64_t gt_vram_size(const struct drm_xe_query_mem_usage *mem_usage,
}
static uint64_t gt_visible_vram_size(const struct drm_xe_query_mem_usage *mem_usage,
- const struct drm_xe_query_gts *gts, int gt)
+ const struct drm_xe_query_gt_list *gt_list, int gt)
{
- int region_idx = ffs(native_region_for_gt(gts, gt)) - 1;
+ int region_idx = ffs(native_region_for_gt(gt_list, gt)) - 1;
if (XE_IS_CLASS_VRAM(&mem_usage->regions[region_idx]))
return mem_usage->regions[region_idx].cpu_visible_size;
@@ -220,7 +220,7 @@ static struct xe_device *find_in_cache(int fd)
static void xe_device_free(struct xe_device *xe_dev)
{
free(xe_dev->config);
- free(xe_dev->gts);
+ free(xe_dev->gt_list);
free(xe_dev->hw_engines);
free(xe_dev->mem_usage);
free(xe_dev->vram_size);
@@ -252,18 +252,18 @@ struct xe_device *xe_device_get(int fd)
xe_dev->number_gt = xe_dev->config->info[XE_QUERY_CONFIG_GT_COUNT];
xe_dev->va_bits = xe_dev->config->info[XE_QUERY_CONFIG_VA_BITS];
xe_dev->dev_id = xe_dev->config->info[XE_QUERY_CONFIG_REV_AND_DEVICE_ID] & 0xffff;
- xe_dev->gts = xe_query_gts_new(fd);
- xe_dev->memory_regions = __memory_regions(xe_dev->gts);
+ xe_dev->gt_list = xe_query_gt_list_new(fd);
+ xe_dev->memory_regions = __memory_regions(xe_dev->gt_list);
xe_dev->hw_engines = xe_query_engines_new(fd, &xe_dev->number_hw_engines);
xe_dev->mem_usage = xe_query_mem_usage_new(fd);
xe_dev->vram_size = calloc(xe_dev->number_gt, sizeof(*xe_dev->vram_size));
xe_dev->visible_vram_size = calloc(xe_dev->number_gt, sizeof(*xe_dev->visible_vram_size));
for (int gt = 0; gt < xe_dev->number_gt; gt++) {
xe_dev->vram_size[gt] = gt_vram_size(xe_dev->mem_usage,
- xe_dev->gts, gt);
+ xe_dev->gt_list, gt);
xe_dev->visible_vram_size[gt] =
gt_visible_vram_size(xe_dev->mem_usage,
- xe_dev->gts, gt);
+ xe_dev->gt_list, gt);
}
xe_dev->default_alignment = __mem_default_alignment(xe_dev->mem_usage);
xe_dev->has_vram = __mem_has_vram(xe_dev->mem_usage);
@@ -356,7 +356,7 @@ _TYPE _NAME(int fd) \
* xe_number_gt:
* @fd: xe device fd
*
- * Return number of gts for xe device fd.
+ * Return number of gt_list for xe device fd.
*/
xe_dev_FN(xe_number_gt, number_gt, unsigned int);
@@ -396,7 +396,7 @@ uint64_t vram_memory(int fd, int gt)
igt_assert(xe_dev);
igt_assert(gt >= 0 && gt < xe_dev->number_gt);
- return xe_has_vram(fd) ? native_region_for_gt(xe_dev->gts, gt) : 0;
+ return xe_has_vram(fd) ? native_region_for_gt(xe_dev->gt_list, gt) : 0;
}
static uint64_t __xe_visible_vram_size(int fd, int gt)
@@ -647,7 +647,7 @@ uint64_t xe_vram_available(int fd, int gt)
xe_dev = find_in_cache(fd);
igt_assert(xe_dev);
- region_idx = ffs(native_region_for_gt(xe_dev->gts, gt)) - 1;
+ region_idx = ffs(native_region_for_gt(xe_dev->gt_list, gt)) - 1;
mem_region = &xe_dev->mem_usage->regions[region_idx];
if (XE_IS_CLASS_VRAM(mem_region)) {
diff --git a/lib/xe/xe_query.h b/lib/xe/xe_query.h
index 20dbfa12c..da7deaf4c 100644
--- a/lib/xe/xe_query.h
+++ b/lib/xe/xe_query.h
@@ -26,13 +26,13 @@ struct xe_device {
/** @config: xe configuration */
struct drm_xe_query_config *config;
- /** @gts: gt info */
- struct drm_xe_query_gts *gts;
+ /** @gt_list: gt info */
+ struct drm_xe_query_gt_list *gt_list;
/** @number_gt: number of gt */
unsigned int number_gt;
- /** @gts: bitmask of all memory regions */
+ /** @gt_list: bitmask of all memory regions */
uint64_t memory_regions;
/** @hw_engines: array of hardware engines */
@@ -44,10 +44,10 @@ struct xe_device {
/** @mem_usage: regions memory information and usage */
struct drm_xe_query_mem_usage *mem_usage;
- /** @vram_size: array of vram sizes for all gts */
+ /** @vram_size: array of vram sizes for all gt_list */
uint64_t *vram_size;
- /** @visible_vram_size: array of visible vram sizes for all gts */
+ /** @visible_vram_size: array of visible vram sizes for all gt_list */
uint64_t *visible_vram_size;
/** @default_alignment: safe alignment regardless region location */
diff --git a/lib/xe/xe_spin.c b/lib/xe/xe_spin.c
index f0d77aed3..b05b38829 100644
--- a/lib/xe/xe_spin.c
+++ b/lib/xe/xe_spin.c
@@ -20,10 +20,10 @@ static uint32_t read_timestamp_frequency(int fd, int gt_id)
{
struct xe_device *dev = xe_device_get(fd);
- igt_assert(dev && dev->gts && dev->gts->num_gt);
- igt_assert(gt_id >= 0 && gt_id <= dev->gts->num_gt);
+ igt_assert(dev && dev->gt_list && dev->gt_list->num_gt);
+ igt_assert(gt_id >= 0 && gt_id <= dev->gt_list->num_gt);
- return dev->gts->gts[gt_id].clock_freq;
+ return dev->gt_list->gt_list[gt_id].clock_freq;
}
static uint64_t div64_u64_round_up(const uint64_t x, const uint64_t y)
diff --git a/tests/intel-ci/xe-fast-feedback.testlist b/tests/intel-ci/xe-fast-feedback.testlist
index a9fe43b08..0cf28baf9 100644
--- a/tests/intel-ci/xe-fast-feedback.testlist
+++ b/tests/intel-ci/xe-fast-feedback.testlist
@@ -147,7 +147,7 @@ igt@xe_prime_self_import@basic-with_fd_dup
#igt@xe_prime_self_import@basic-llseek-size
igt@xe_query@query-engines
igt@xe_query@query-mem-usage
-igt@xe_query@query-gts
+igt@xe_query@query-gt-list
igt@xe_query@query-config
igt@xe_query@query-hwconfig
igt@xe_query@query-topology
diff --git a/tests/intel/xe_query.c b/tests/intel/xe_query.c
index 3aa2918f0..e0d14966b 100644
--- a/tests/intel/xe_query.c
+++ b/tests/intel/xe_query.c
@@ -252,17 +252,17 @@ test_query_mem_usage(int fd)
}
/**
- * SUBTEST: query-gts
+ * SUBTEST: query-gt-list
* Test category: functionality test
- * Description: Display information about available GTs for xe device.
+ * Description: Display information about available GT components for xe device.
*/
static void
-test_query_gts(int fd)
+test_query_gt_list(int fd)
{
- struct drm_xe_query_gts *gts;
+ struct drm_xe_query_gt_list *gt_list;
struct drm_xe_device_query query = {
.extensions = 0,
- .query = DRM_XE_DEVICE_QUERY_GTS,
+ .query = DRM_XE_DEVICE_QUERY_GT_LIST,
.size = 0,
.data = 0,
};
@@ -271,29 +271,29 @@ test_query_gts(int fd)
igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
igt_assert_neq(query.size, 0);
- gts = malloc(query.size);
- igt_assert(gts);
+ gt_list = malloc(query.size);
+ igt_assert(gt_list);
- query.data = to_user_pointer(gts);
+ query.data = to_user_pointer(gt_list);
igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
- for (i = 0; i < gts->num_gt; i++) {
- igt_info("type: %d\n", gts->gts[i].type);
- igt_info("gt_id: %d\n", gts->gts[i].gt_id);
- igt_info("clock_freq: %u\n", gts->gts[i].clock_freq);
+ for (i = 0; i < gt_list->num_gt; i++) {
+ igt_info("type: %d\n", gt_list->gt_list[i].type);
+ igt_info("gt_id: %d\n", gt_list->gt_list[i].gt_id);
+ igt_info("clock_freq: %u\n", gt_list->gt_list[i].clock_freq);
igt_info("native_mem_regions: 0x%016llx\n",
- gts->gts[i].native_mem_regions);
+ gt_list->gt_list[i].native_mem_regions);
igt_info("slow_mem_regions: 0x%016llx\n",
- gts->gts[i].slow_mem_regions);
+ gt_list->gt_list[i].slow_mem_regions);
igt_info("inaccessible_mem_regions: 0x%016llx\n",
- gts->gts[i].inaccessible_mem_regions);
+ gt_list->gt_list[i].inaccessible_mem_regions);
}
}
/**
* SUBTEST: query-topology
* Test category: functionality test
- * Description: Display topology information of GTs.
+ * Description: Display topology information of GT.
*/
static void
test_query_gt_topology(int fd)
@@ -682,8 +682,8 @@ igt_main
igt_subtest("query-mem-usage")
test_query_mem_usage(xe);
- igt_subtest("query-gts")
- test_query_gts(xe);
+ igt_subtest("query-gt-list")
+ test_query_gt_list(xe);
igt_subtest("query-config")
test_query_config(xe);
--
2.41.0
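As a note on the renamed layout: the `__memory_regions()` helper touched above iterates the flexible array that is now called `gt_list`. A minimal standalone sketch of that pattern follows; the struct shapes are trimmed stand-ins for `drm_xe_query_gt_list`/`drm_xe_query_gt` (only the fields used here, so the layout is illustrative, not the full uAPI):

```c
#include <stdint.h>
#include <stdlib.h>

/* Trimmed stand-ins for the renamed uAPI structs. */
struct gt_desc {
	uint16_t type;
	uint16_t gt_id;
	uint64_t native_mem_regions;
	uint64_t slow_mem_regions;
};

struct gt_desc_list {
	uint32_t num_gt;
	uint32_t pad;
	struct gt_desc gt_list[]; /* flexible array, like drm_xe_query_gt_list */
};

/* OR together every GT's memory regions, mirroring __memory_regions(). */
static uint64_t all_regions(const struct gt_desc_list *l)
{
	uint64_t regions = 0;

	for (uint32_t i = 0; i < l->num_gt; i++)
		regions |= l->gt_list[i].native_mem_regions |
			   l->gt_list[i].slow_mem_regions;
	return regions;
}
```

The flexible array member is why the real code sizes the allocation from `query.size` rather than `sizeof(struct drm_xe_query_gt_list)`.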
* [igt-dev] [PATCH i-g-t 13/16] drm-uapi/xe: Fix naming of XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY
2023-09-19 14:19 [igt-dev] [PATCH i-g-t 00/16] uAPI Alignment - take 1 Rodrigo Vivi
` (11 preceding siblings ...)
2023-09-19 14:19 ` [igt-dev] [PATCH i-g-t 12/16] drm-uapi/xe: Rename gts to gt_list Rodrigo Vivi
@ 2023-09-19 14:19 ` Rodrigo Vivi
2023-09-19 14:19 ` [igt-dev] [PATCH i-g-t 14/16] drm-uapi/xe: Align with documentation updates Rodrigo Vivi
` (5 subsequent siblings)
18 siblings, 0 replies; 23+ messages in thread
From: Rodrigo Vivi @ 2023-09-19 14:19 UTC (permalink / raw)
To: intel-xe, igt-dev; +Cc: Rodrigo Vivi
Align with kernel commit
("drm/xe/uapi: Fix naming of XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY")
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
include/drm-uapi/xe_drm.h | 4 ++--
tests/intel/xe_query.c | 4 ++--
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index 69b62e84f..9bef90b1f 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -330,8 +330,8 @@ struct drm_xe_query_config {
#define XE_QUERY_CONFIG_VA_BITS 3
#define XE_QUERY_CONFIG_GT_COUNT 4
#define XE_QUERY_CONFIG_MEM_REGION_COUNT 5
-#define XE_QUERY_CONFIG_MAX_ENGINE_PRIORITY 6
-#define XE_QUERY_CONFIG_NUM_PARAM (XE_QUERY_CONFIG_MAX_ENGINE_PRIORITY + 1)
+#define XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY 6
+#define XE_QUERY_CONFIG_NUM_PARAM (XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY + 1)
/** @info: array of elements containing the config info */
__u64 info[];
};
diff --git a/tests/intel/xe_query.c b/tests/intel/xe_query.c
index e0d14966b..17215fd72 100644
--- a/tests/intel/xe_query.c
+++ b/tests/intel/xe_query.c
@@ -380,8 +380,8 @@ test_query_config(int fd)
config->info[XE_QUERY_CONFIG_GT_COUNT]);
igt_info("XE_QUERY_CONFIG_MEM_REGION_COUNT\t%llu\n",
config->info[XE_QUERY_CONFIG_MEM_REGION_COUNT]);
- igt_info("XE_QUERY_CONFIG_MAX_ENGINE_PRIORITY\t%llu\n",
- config->info[XE_QUERY_CONFIG_MAX_ENGINE_PRIORITY]);
+ igt_info("XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY\t%llu\n",
+ config->info[XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY]);
dump_hex_debug(config, query.size);
free(config);
--
2.41.0
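The renamed index addresses the same `info[]` array as the other `XE_QUERY_CONFIG_*` keys. As an example of consuming one entry, the device/revision key packs two fields; the decoding below follows the `& 0xffff` masking IGT's xe_query.c already does (the helper names are illustrative, not part of the uAPI):

```c
#include <stdint.h>

/* info[XE_QUERY_CONFIG_REV_AND_DEVICE_ID] packs the device ID in the
 * lower 16 bits and the device revision in the next 8 bits. */
static inline uint16_t xe_config_device_id(uint64_t rev_and_devid)
{
	return rev_and_devid & 0xffff;
}

static inline uint8_t xe_config_revision(uint64_t rev_and_devid)
{
	return (rev_and_devid >> 16) & 0xff;
}
```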
* [igt-dev] [PATCH i-g-t 14/16] drm-uapi/xe: Align with documentation updates
2023-09-19 14:19 [igt-dev] [PATCH i-g-t 00/16] uAPI Alignment - take 1 Rodrigo Vivi
` (12 preceding siblings ...)
2023-09-19 14:19 ` [igt-dev] [PATCH i-g-t 13/16] drm-uapi/xe: Fix naming of XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY Rodrigo Vivi
@ 2023-09-19 14:19 ` Rodrigo Vivi
2023-09-19 14:19 ` [igt-dev] [PATCH i-g-t 15/16] drm-uapi/xe: Align with Crystal Reference Clock updates Rodrigo Vivi
` (4 subsequent siblings)
18 siblings, 0 replies; 23+ messages in thread
From: Rodrigo Vivi @ 2023-09-19 14:19 UTC (permalink / raw)
To: intel-xe, igt-dev; +Cc: Rodrigo Vivi
Align with commit ("drm/xe/uapi: Add documentation for query")
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
include/drm-uapi/xe_drm.h | 41 ++++++++++++++++++++++++++++++++++++---
1 file changed, 38 insertions(+), 3 deletions(-)
diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index 9bef90b1f..7fb6c1f72 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -323,14 +323,43 @@ struct drm_xe_query_config {
/** @pad: MBZ */
__u32 pad;
+ /*
+ * Device ID (lower 16 bits) and the device revision (next
+ * 8 bits)
+ */
#define XE_QUERY_CONFIG_REV_AND_DEVICE_ID 0
+ /*
+ * Flags describing the device configuration, see list below
+ */
#define XE_QUERY_CONFIG_FLAGS 1
+ /*
+ * Flag is set if the device has usable VRAM
+ */
#define XE_QUERY_CONFIG_FLAGS_HAS_VRAM (0x1 << 0)
+ /*
+ * Minimal memory alignment required by this device,
+ * typically SZ_4K or SZ_64K
+ */
#define XE_QUERY_CONFIG_MIN_ALIGNMENT 2
+ /*
+ * Maximum bits of a virtual address
+ */
#define XE_QUERY_CONFIG_VA_BITS 3
+ /*
+ * Total number of GTs for the entire device
+ */
#define XE_QUERY_CONFIG_GT_COUNT 4
+ /*
+ * Total number of accessible memory regions
+ */
#define XE_QUERY_CONFIG_MEM_REGION_COUNT 5
+ /*
+ * Value of the highest available exec queue priority
+ */
#define XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY 6
+ /*
+ * Number of elements in the info array
+ */
#define XE_QUERY_CONFIG_NUM_PARAM (XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY + 1)
/** @info: array of elements containing the config info */
__u64 info[];
@@ -442,9 +471,15 @@ struct drm_xe_query_topology_mask {
/**
* struct drm_xe_device_query - main structure to query device information
*
- * If size is set to 0, the driver fills it with the required size for the
- * requested type of data to query. If size is equal to the required size,
- * the queried information is copied into data.
+ * The user selects the type of data to query among DRM_XE_DEVICE_QUERY_*
+ * and sets the value in the query member. This determines the type of
+ * the structure provided by the driver in data, among struct drm_xe_query_*.
+ *
+ * If size is set to 0, the driver fills it with the required size for
+ * the requested type of data to query. If size is equal to the required
+ * size, the queried information is copied into data. If size is set to
+ * a value different from 0 and different from the required size, the
+ * IOCTL call returns -EINVAL.
*
* For example the following code snippet allows retrieving and printing
* information about the device engines with DRM_XE_DEVICE_QUERY_ENGINES:
--
2.41.0
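The two-pass size convention documented above can be sketched in plain C. The `mock_device_query()` stub below stands in for `DRM_IOCTL_XE_DEVICE_QUERY` so the sketch is self-contained (the stub name and the 64-byte payload size are hypothetical); real code would issue the ioctl twice exactly as the IGT helpers do:

```c
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct mock_query {
	uint32_t size;   /* 0 on the first pass; required size on the second */
	void *data;
};

#define MOCK_REQUIRED_SIZE 64 /* hypothetical payload size */

/* Stand-in for the driver: size 0 -> report the required size;
 * exact match -> copy the data; anything else -> -EINVAL. */
static int mock_device_query(struct mock_query *q)
{
	if (q->size == 0) {
		q->size = MOCK_REQUIRED_SIZE;
		return 0;
	}
	if (q->size != MOCK_REQUIRED_SIZE)
		return -EINVAL;
	memset(q->data, 0xab, q->size);
	return 0;
}

/* Two-pass pattern: probe the size, allocate, then fetch. */
static void *query_alloc(struct mock_query *q)
{
	q->size = 0;
	if (mock_device_query(q))
		return NULL;
	q->data = malloc(q->size);
	if (q->data && mock_device_query(q)) {
		free(q->data);
		q->data = NULL;
	}
	return q->data;
}
```

The `-EINVAL` branch is the new behavior this documentation patch spells out: any non-zero size that is not the exact required size is rejected.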
* [igt-dev] [PATCH i-g-t 15/16] drm-uapi/xe: Align with Crystal Reference Clock updates
2023-09-19 14:19 [igt-dev] [PATCH i-g-t 00/16] uAPI Alignment - take 1 Rodrigo Vivi
` (13 preceding siblings ...)
2023-09-19 14:19 ` [igt-dev] [PATCH i-g-t 14/16] drm-uapi/xe: Align with documentation updates Rodrigo Vivi
@ 2023-09-19 14:19 ` Rodrigo Vivi
2023-09-19 14:19 ` [igt-dev] [PATCH i-g-t 16/16] drm-uapi/xe: Align with extension of drm_xe_vm_bind_op Rodrigo Vivi
` (3 subsequent siblings)
18 siblings, 0 replies; 23+ messages in thread
From: Rodrigo Vivi @ 2023-09-19 14:19 UTC (permalink / raw)
To: intel-xe, igt-dev; +Cc: Rodrigo Vivi
This patch aims to be the simplest possible update to get rid of
ref_clock in favor of cs_reference_clock, aligning with the uAPI
changes in commit
b53c288afe30 ("drm/xe/uapi: Crystal Reference Clock updates")
This is a non-functional change since the values are exactly the
same, so any issues with current tests would still be present.
Any further updates to xe_spin should be done in follow-ups.
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
include/drm-uapi/xe_drm.h | 10 ++++------
lib/xe/xe_query.c | 21 +++++++++++++++++++++
lib/xe/xe_query.h | 1 +
lib/xe/xe_spin.c | 11 +++++------
tests/intel/xe_query.c | 31 ++++++++-----------------------
5 files changed, 39 insertions(+), 35 deletions(-)
diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index 7fb6c1f72..090144c92 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -248,8 +248,8 @@ struct drm_xe_query_mem_region {
* relevant GPU timestamp. clockid is used to return the specific CPU
* timestamp.
*
- * The query returns the command streamer cycles and the frequency that can
- * be used to calculate the command streamer timestamp. In addition the
+ * The query returns the command streamer cycles and the reference clock that
+ * can be used to calculate the command streamer timestamp. In addition the
* query returns a set of cpu timestamps that indicate when the command
* streamer cycle count was captured.
*/
@@ -266,8 +266,8 @@ struct drm_xe_query_cs_cycles {
*/
__u64 cs_cycles;
- /** Frequency of the cs cycles in Hz. */
- __u64 cs_frequency;
+ /** Reference Clock of the cs cycles in Hz. */
+ __u64 cs_reference_clock;
/**
* CPU timestamp in ns. The timestamp is captured before reading the
@@ -381,8 +381,6 @@ struct drm_xe_query_gt {
__u16 type;
/** @gt_id: Unique ID of this GT within the PCI Device */
__u16 gt_id;
- /** @clock_freq: A clock frequency for timestamp */
- __u32 clock_freq;
/**
* @native_mem_regions: Bit mask of instances from
* drm_xe_query_mem_usage that lives on the same GPU/Tile and have
diff --git a/lib/xe/xe_query.c b/lib/xe/xe_query.c
index b018c7535..81d661607 100644
--- a/lib/xe/xe_query.c
+++ b/lib/xe/xe_query.c
@@ -328,6 +328,27 @@ bool xe_supports_faults(int fd)
return supports_faults;
}
+/**
+ * xe_query_cs_cycles:
+ * @fd: xe device fd
+ * @resp: A pointer to a drm_xe_query_cs_cycles to get the output of the query
+ *
+ * Full DRM_XE_QUERY_CS_CYCLES returning the response on the
+ * struct drm_xe_query_cs_cycles pointer argument.
+ */
+void xe_query_cs_cycles(int fd, struct drm_xe_query_cs_cycles *resp)
+{
+ struct drm_xe_device_query query = {
+ .extensions = 0,
+ .query = DRM_XE_QUERY_CS_CYCLES,
+ .size = sizeof(*resp),
+ .data = to_user_pointer(resp),
+ };
+
+ do_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query);
+ igt_assert(query.size);
+}
+
static void xe_device_destroy_cache(void)
{
pthread_mutex_lock(&cache.cache_mutex);
diff --git a/lib/xe/xe_query.h b/lib/xe/xe_query.h
index da7deaf4c..da4461306 100644
--- a/lib/xe/xe_query.h
+++ b/lib/xe/xe_query.h
@@ -102,6 +102,7 @@ uint32_t xe_get_default_alignment(int fd);
uint32_t xe_va_bits(int fd);
uint16_t xe_dev_id(int fd);
bool xe_supports_faults(int fd);
+void xe_query_cs_cycles(int fd, struct drm_xe_query_cs_cycles *resp);
const char *xe_engine_class_string(uint32_t engine_class);
bool xe_has_engine_class(int fd, uint16_t engine_class);
diff --git a/lib/xe/xe_spin.c b/lib/xe/xe_spin.c
index b05b38829..986d63cb4 100644
--- a/lib/xe/xe_spin.c
+++ b/lib/xe/xe_spin.c
@@ -16,14 +16,13 @@
#include "xe_ioctl.h"
#include "xe_spin.h"
-static uint32_t read_timestamp_frequency(int fd, int gt_id)
+static uint32_t read_timestamp_frequency(int fd)
{
- struct xe_device *dev = xe_device_get(fd);
+ struct drm_xe_query_cs_cycles ts = {};
- igt_assert(dev && dev->gt_list && dev->gt_list->num_gt);
- igt_assert(gt_id >= 0 && gt_id <= dev->gt_list->num_gt);
+ xe_query_cs_cycles(fd, &ts);
- return dev->gt_list->gt_list[gt_id].clock_freq;
+ return ts.cs_reference_clock;
}
static uint64_t div64_u64_round_up(const uint64_t x, const uint64_t y)
@@ -43,7 +42,7 @@ static uint64_t div64_u64_round_up(const uint64_t x, const uint64_t y)
*/
uint32_t duration_to_ctx_ticks(int fd, int gt_id, uint64_t duration_ns)
{
- uint32_t f = read_timestamp_frequency(fd, gt_id);
+ uint32_t f = read_timestamp_frequency(fd);
uint64_t ctx_ticks = div64_u64_round_up(duration_ns * f, NSEC_PER_SEC);
igt_assert_lt_u64(ctx_ticks, XE_SPIN_MAX_CTX_TICKS);
diff --git a/tests/intel/xe_query.c b/tests/intel/xe_query.c
index 17215fd72..872b889f9 100644
--- a/tests/intel/xe_query.c
+++ b/tests/intel/xe_query.c
@@ -280,7 +280,6 @@ test_query_gt_list(int fd)
for (i = 0; i < gt_list->num_gt; i++) {
igt_info("type: %d\n", gt_list->gt_list[i].type);
igt_info("gt_id: %d\n", gt_list->gt_list[i].gt_id);
- igt_info("clock_freq: %u\n", gt_list->gt_list[i].clock_freq);
igt_info("native_mem_regions: 0x%016llx\n",
gt_list->gt_list[i].native_mem_regions);
igt_info("slow_mem_regions: 0x%016llx\n",
@@ -488,20 +487,6 @@ query_cs_cycles_supported(int fd)
return igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query) == 0;
}
-static void
-query_cs_cycles(int fd, struct drm_xe_query_cs_cycles *resp)
-{
- struct drm_xe_device_query query = {
- .extensions = 0,
- .query = DRM_XE_QUERY_CS_CYCLES,
- .size = sizeof(*resp),
- .data = to_user_pointer(resp),
- };
-
- do_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query);
- igt_assert(query.size);
-}
-
static void
__cs_cycles(int fd, struct drm_xe_engine_class_instance *hwe)
{
@@ -544,29 +529,29 @@ __cs_cycles(int fd, struct drm_xe_engine_class_instance *hwe)
ts2.eci = *hwe;
ts2.clockid = clock[index].id;
- query_cs_cycles(fd, &ts1);
- query_cs_cycles(fd, &ts2);
+ xe_query_cs_cycles(fd, &ts1);
+ xe_query_cs_cycles(fd, &ts2);
igt_debug("[1] cpu_ts before %llu, reg read time %llu\n",
ts1.cpu_timestamp,
ts1.cpu_delta);
igt_debug("[1] cs_ts %llu, freq %llu Hz, width %u\n",
- ts1.cs_cycles, ts1.cs_frequency, ts1.width);
+ ts1.cs_cycles, ts1.cs_reference_clock, ts1.width);
igt_debug("[2] cpu_ts before %llu, reg read time %llu\n",
ts2.cpu_timestamp,
ts2.cpu_delta);
igt_debug("[2] cs_ts %llu, freq %llu Hz, width %u\n",
- ts2.cs_cycles, ts2.cs_frequency, ts2.width);
+ ts2.cs_cycles, ts2.cs_reference_clock, ts2.width);
delta_cpu = ts2.cpu_timestamp - ts1.cpu_timestamp;
if (ts2.cs_cycles >= ts1.cs_cycles)
delta_cs = (ts2.cs_cycles - ts1.cs_cycles) *
- NSEC_PER_SEC / ts1.cs_frequency;
+ NSEC_PER_SEC / ts1.cs_reference_clock;
else
delta_cs = (((1 << ts2.width) - ts2.cs_cycles) + ts1.cs_cycles) *
- NSEC_PER_SEC / ts1.cs_frequency;
+ NSEC_PER_SEC / ts1.cs_reference_clock;
igt_debug("delta_cpu[%lu], delta_cs[%lu]\n",
delta_cpu, delta_cs);
@@ -637,7 +622,7 @@ static void test_cs_cycles_invalid(int fd)
/* sanity check engine selection is valid */
ts.eci = *hwe;
- query_cs_cycles(fd, &ts);
+ xe_query_cs_cycles(fd, &ts);
/* bad instance */
ts.eci = *hwe;
@@ -666,7 +651,7 @@ static void test_cs_cycles_invalid(int fd)
ts.clockid = 0;
/* sanity check */
- query_cs_cycles(fd, &ts);
+ xe_query_cs_cycles(fd, &ts);
}
igt_main
--
2.41.0
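The delta arithmetic in `__cs_cycles()` above can be folded into one helper; this is a hedged sketch that assumes at most a single wrap of the `width`-bit counter between samples and subtracts the first sample from the counter period in the wrap branch:

```c
#include <stdint.h>

#define NSEC_PER_SEC 1000000000ULL

/* Convert a pair of CS cycle samples into elapsed nanoseconds,
 * handling one wrap of a `width`-bit counter (width must be < 64).
 * `ref_clock` is cs_reference_clock in Hz from DRM_XE_QUERY_CS_CYCLES. */
static uint64_t cs_delta_ns(uint64_t c1, uint64_t c2,
			    unsigned int width, uint64_t ref_clock)
{
	uint64_t ticks;

	if (c2 >= c1)
		ticks = c2 - c1;
	else /* counter wrapped between the two samples */
		ticks = ((1ULL << width) - c1) + c2;

	return ticks * NSEC_PER_SEC / ref_clock;
}
```

For example, with a 19.2 MHz reference clock, 19200000 elapsed ticks map to exactly one second.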
* [igt-dev] [PATCH i-g-t 16/16] drm-uapi/xe: Align with extension of drm_xe_vm_bind_op
2023-09-19 14:19 [igt-dev] [PATCH i-g-t 00/16] uAPI Alignment - take 1 Rodrigo Vivi
` (14 preceding siblings ...)
2023-09-19 14:19 ` [igt-dev] [PATCH i-g-t 15/16] drm-uapi/xe: Align with Crystal Reference Clock updates Rodrigo Vivi
@ 2023-09-19 14:19 ` Rodrigo Vivi
2023-09-19 14:51 ` [igt-dev] ✗ GitLab.Pipeline: warning for uAPI Alignment - take 1 Patchwork
` (2 subsequent siblings)
18 siblings, 0 replies; 23+ messages in thread
From: Rodrigo Vivi @ 2023-09-19 14:19 UTC (permalink / raw)
To: intel-xe, igt-dev; +Cc: Rodrigo Vivi
Align with commit ("drm/xe: Extend drm_xe_vm_bind_op")
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
include/drm-uapi/xe_drm.h | 3 +++
1 file changed, 3 insertions(+)
diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index 090144c92..8fe422fa1 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -656,6 +656,9 @@ struct drm_xe_vm_destroy {
};
struct drm_xe_vm_bind_op {
+ /** @extensions: Pointer to the first extension struct, if any */
+ __u64 extensions;
+
/**
* @obj: GEM object to operate on, MBZ for MAP_USERPTR, MBZ for UNMAP
*/
--
2.41.0
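The new `extensions` member is a user pointer to the first link of an extension chain. A minimal sketch of walking such a chain follows; the base struct mirrors the `xe_user_extension` shape in xe_drm.h at the time of this series (field names assumed from that header), and the counting helper is purely illustrative:

```c
#include <stdint.h>

/* Base header every xe extension starts with; `next_extension` is a
 * user pointer to the next link, 0 terminates the chain. */
struct xe_user_extension {
	uint64_t next_extension;
	uint32_t name;
	uint32_t pad;
};

/* Walk a chain rooted at an `extensions` field such as the one this
 * patch adds to drm_xe_vm_bind_op, counting its links. */
static unsigned int count_extensions(uint64_t extensions)
{
	unsigned int n = 0;

	while (extensions) {
		const struct xe_user_extension *ext =
			(const struct xe_user_extension *)(uintptr_t)extensions;
		n++;
		extensions = ext->next_extension;
	}
	return n;
}
```

In the kernel the walk happens via copy_from_user on each link; the pointer cast here only works because the sketch stays in one address space.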
* [igt-dev] ✗ GitLab.Pipeline: warning for uAPI Alignment - take 1
2023-09-19 14:19 [igt-dev] [PATCH i-g-t 00/16] uAPI Alignment - take 1 Rodrigo Vivi
` (15 preceding siblings ...)
2023-09-19 14:19 ` [igt-dev] [PATCH i-g-t 16/16] drm-uapi/xe: Align with extension of drm_xe_vm_bind_op Rodrigo Vivi
@ 2023-09-19 14:51 ` Patchwork
2023-09-19 15:30 ` [igt-dev] ✗ Fi.CI.BAT: failure " Patchwork
2023-09-19 16:00 ` [igt-dev] ✗ CI.xeBAT: " Patchwork
18 siblings, 0 replies; 23+ messages in thread
From: Patchwork @ 2023-09-19 14:51 UTC (permalink / raw)
To: Rodrigo Vivi; +Cc: igt-dev
== Series Details ==
Series: uAPI Alignment - take 1
URL : https://patchwork.freedesktop.org/series/123916/
State : warning
== Summary ==
Pipeline status: FAILED.
see https://gitlab.freedesktop.org/gfx-ci/igt-ci-tags/-/pipelines/989549 for the overview.
test:ninja-test has failed (https://gitlab.freedesktop.org/gfx-ci/igt-ci-tags/-/jobs/49191569):
371/375 assembler test/rnde-intsrc OK 0.01 s
372/375 assembler test/rndz OK 0.01 s
373/375 assembler test/lzd OK 0.01 s
374/375 assembler test/not OK 0.02 s
375/375 assembler test/immediate OK 0.02 s
Ok: 370
Expected Fail: 4
Fail: 1
Unexpected Pass: 0
Skipped: 0
Timeout: 0
Full log written to /builds/gfx-ci/igt-ci-tags/build/meson-logs/testlog.txt
section_end:1695134984:step_script
section_start:1695134984:cleanup_file_variables
Cleaning up project directory and file based variables
section_end:1695134985:cleanup_file_variables
ERROR: Job failed: exit code 1
== Logs ==
For more details see: https://gitlab.freedesktop.org/gfx-ci/igt-ci-tags/-/pipelines/989549
* [igt-dev] ✗ Fi.CI.BAT: failure for uAPI Alignment - take 1
2023-09-19 14:19 [igt-dev] [PATCH i-g-t 00/16] uAPI Alignment - take 1 Rodrigo Vivi
` (16 preceding siblings ...)
2023-09-19 14:51 ` [igt-dev] ✗ GitLab.Pipeline: warning for uAPI Alignment - take 1 Patchwork
@ 2023-09-19 15:30 ` Patchwork
2023-09-19 16:00 ` [igt-dev] ✗ CI.xeBAT: " Patchwork
18 siblings, 0 replies; 23+ messages in thread
From: Patchwork @ 2023-09-19 15:30 UTC (permalink / raw)
To: Rodrigo Vivi; +Cc: igt-dev
== Series Details ==
Series: uAPI Alignment - take 1
URL : https://patchwork.freedesktop.org/series/123916/
State : failure
== Summary ==
CI Bug Log - changes from CI_DRM_13651 -> IGTPW_9827
====================================================
Summary
-------
**FAILURE**
Serious unknown changes coming with IGTPW_9827 absolutely need to be
verified manually.
If you think the reported changes have nothing to do with the changes
introduced in IGTPW_9827, please notify your bug team (lgci.bug.filing@intel.com) to allow them
to document this new failure mode, which will reduce false positives in CI.
External URL: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9827/index.html
Participating hosts (38 -> 35)
------------------------------
Additional (1): bat-rpls-2
Missing (4): fi-hsw-4770 bat-adlm-1 fi-snb-2520m fi-pnv-d510
Possible new issues
-------------------
Here are the unknown changes that may have been introduced in IGTPW_9827:
### IGT changes ###
#### Possible regressions ####
* igt@i915_module_load@load:
- bat-mtlp-8: [PASS][1] -> [INCOMPLETE][2]
[1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13651/bat-mtlp-8/igt@i915_module_load@load.html
[2]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9827/bat-mtlp-8/igt@i915_module_load@load.html
* igt@runner@aborted:
- bat-rpls-2: NOTRUN -> [FAIL][3]
[3]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9827/bat-rpls-2/igt@runner@aborted.html
Known issues
------------
Here are the changes found in IGTPW_9827 that come from known issues:
### IGT changes ###
#### Issues hit ####
* igt@gem_exec_parallel@engines@fds:
- bat-mtlp-6: [PASS][4] -> [ABORT][5] ([i915#9262])
[4]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13651/bat-mtlp-6/igt@gem_exec_parallel@engines@fds.html
[5]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9827/bat-mtlp-6/igt@gem_exec_parallel@engines@fds.html
* igt@kms_pipe_crc_basic@read-crc-frame-sequence:
- bat-dg2-11: NOTRUN -> [SKIP][6] ([i915#1845])
[6]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9827/bat-dg2-11/igt@kms_pipe_crc_basic@read-crc-frame-sequence.html
#### Possible fixes ####
* igt@i915_selftest@live@execlists:
- fi-bsw-n3050: [ABORT][7] ([i915#7913]) -> [PASS][8]
[7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13651/fi-bsw-n3050/igt@i915_selftest@live@execlists.html
[8]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9827/fi-bsw-n3050/igt@i915_selftest@live@execlists.html
* igt@kms_chamelium_edid@hdmi-edid-read:
- {bat-dg2-13}: [DMESG-WARN][9] ([i915#7952]) -> [PASS][10]
[9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13651/bat-dg2-13/igt@kms_chamelium_edid@hdmi-edid-read.html
[10]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9827/bat-dg2-13/igt@kms_chamelium_edid@hdmi-edid-read.html
* igt@kms_hdmi_inject@inject-audio:
- fi-kbl-guc: [FAIL][11] ([IGT#3] / [i915#6121]) -> [PASS][12]
[11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_13651/fi-kbl-guc/igt@kms_hdmi_inject@inject-audio.html
[12]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9827/fi-kbl-guc/igt@kms_hdmi_inject@inject-audio.html
{name}: This element is suppressed. This means it is ignored when computing
the status of the difference (SUCCESS, WARNING, or FAILURE).
[IGT#3]: https://gitlab.freedesktop.org/drm/igt-gpu-tools/issues/3
[i915#1845]: https://gitlab.freedesktop.org/drm/intel/issues/1845
[i915#6121]: https://gitlab.freedesktop.org/drm/intel/issues/6121
[i915#7913]: https://gitlab.freedesktop.org/drm/intel/issues/7913
[i915#7952]: https://gitlab.freedesktop.org/drm/intel/issues/7952
[i915#9262]: https://gitlab.freedesktop.org/drm/intel/issues/9262
Build changes
-------------
* CI: CI-20190529 -> None
* IGT: IGT_7493 -> IGTPW_9827
CI-20190529: 20190529
CI_DRM_13651: 61b71c3f061a44a6ab1dcf756918886aa03a5480 @ git://anongit.freedesktop.org/gfx-ci/linux
IGTPW_9827: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9827/index.html
IGT_7493: 2517e42d612e0c1ca096acf8b5f6177f7ef4bce7 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
Testlist changes
----------------
+igt@xe_exec_balancer@many-cm-parallel-basic
+igt@xe_exec_balancer@many-cm-parallel-rebind
+igt@xe_exec_balancer@many-cm-parallel-userptr
+igt@xe_exec_balancer@many-cm-parallel-userptr-invalidate
+igt@xe_exec_balancer@many-cm-parallel-userptr-invalidate-race
+igt@xe_exec_balancer@many-cm-parallel-userptr-rebind
+igt@xe_exec_balancer@many-execqueues-cm-parallel-basic
+igt@xe_exec_balancer@many-execqueues-cm-parallel-rebind
+igt@xe_exec_balancer@many-execqueues-cm-parallel-userptr
+igt@xe_exec_balancer@many-execqueues-cm-parallel-userptr-invalidate
+igt@xe_exec_balancer@many-execqueues-cm-parallel-userptr-invalidate-race
+igt@xe_exec_balancer@many-execqueues-cm-parallel-userptr-rebind
+igt@xe_exec_balancer@no-exec-cm-parallel-basic
+igt@xe_exec_balancer@no-exec-cm-parallel-rebind
+igt@xe_exec_balancer@no-exec-cm-parallel-userptr
+igt@xe_exec_balancer@no-exec-cm-parallel-userptr-invalidate
+igt@xe_exec_balancer@no-exec-cm-parallel-userptr-invalidate-race
+igt@xe_exec_balancer@no-exec-cm-parallel-userptr-rebind
+igt@xe_exec_balancer@once-cm-parallel-basic
+igt@xe_exec_balancer@once-cm-parallel-rebind
+igt@xe_exec_balancer@once-cm-parallel-userptr
+igt@xe_exec_balancer@once-cm-parallel-userptr-invalidate
+igt@xe_exec_balancer@once-cm-parallel-userptr-invalidate-race
+igt@xe_exec_balancer@once-cm-parallel-userptr-rebind
+igt@xe_exec_balancer@twice-cm-parallel-basic
+igt@xe_exec_balancer@twice-cm-parallel-rebind
+igt@xe_exec_balancer@twice-cm-parallel-userptr
+igt@xe_exec_balancer@twice-cm-parallel-userptr-invalidate
+igt@xe_exec_balancer@twice-cm-parallel-userptr-invalidate-race
+igt@xe_exec_balancer@twice-cm-parallel-userptr-rebind
+igt@xe_query@query-cs-cycles
+igt@xe_query@query-gt-list
+igt@xe_query@query-invalid-cs-cycles
-igt@xe_mmio@mmio-invalid
-igt@xe_mmio@mmio-timestamp
-igt@xe_query@query-gts
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9827/index.html
^ permalink raw reply [flat|nested] 23+ messages in thread

* [igt-dev] ✗ CI.xeBAT: failure for uAPI Alignment - take 1
2023-09-19 14:19 [igt-dev] [PATCH i-g-t 00/16] uAPI Alignment - take 1 Rodrigo Vivi
` (17 preceding siblings ...)
2023-09-19 15:30 ` [igt-dev] ✗ Fi.CI.BAT: failure " Patchwork
@ 2023-09-19 16:00 ` Patchwork
18 siblings, 0 replies; 23+ messages in thread
From: Patchwork @ 2023-09-19 16:00 UTC (permalink / raw)
To: Rodrigo Vivi; +Cc: igt-dev
== Series Details ==
Series: uAPI Alignment - take 1
URL : https://patchwork.freedesktop.org/series/123916/
State : failure
== Summary ==
CI Bug Log - changes from XEIGT_7493_BAT -> XEIGTPW_9827_BAT
====================================================
Summary
-------
**FAILURE**
Serious unknown changes introduced with XEIGTPW_9827_BAT need to be
verified manually.
If you think the reported changes have nothing to do with the changes
introduced in XEIGTPW_9827_BAT, please notify your bug team (lgci.bug.filing@intel.com) to allow them
to document this new failure mode, which will reduce false positives in CI.
Participating hosts (4 -> 4)
------------------------------
No changes in participating hosts
Possible new issues
-------------------
Here are the unknown changes that may have been introduced in XEIGTPW_9827_BAT:
### IGT changes ###
#### Possible regressions ####
* igt@kms_psr@primary_page_flip:
- bat-adlp-7: NOTRUN -> [FAIL][1] +12 other tests fail
[1]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/bat-adlp-7/igt@kms_psr@primary_page_flip.html
* igt@xe_exec_compute_mode@twice-userptr-invalidate:
- bat-atsm-2: [PASS][2] -> [FAIL][3] +127 other tests fail
[2]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7493/bat-atsm-2/igt@xe_exec_compute_mode@twice-userptr-invalidate.html
[3]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/bat-atsm-2/igt@xe_exec_compute_mode@twice-userptr-invalidate.html
* igt@xe_exec_fault_mode@twice-userptr-invalidate-prefetch:
- bat-pvc-2: NOTRUN -> [FAIL][4] +206 other tests fail
[4]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/bat-pvc-2/igt@xe_exec_fault_mode@twice-userptr-invalidate-prefetch.html
* igt@xe_intel_bb@create-in-region:
- bat-dg2-oem2: [PASS][5] -> [FAIL][6] +177 other tests fail
[5]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7493/bat-dg2-oem2/igt@xe_intel_bb@create-in-region.html
[6]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/bat-dg2-oem2/igt@xe_intel_bb@create-in-region.html
- bat-adlp-7: [PASS][7] -> [FAIL][8] +155 other tests fail
[7]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7493/bat-adlp-7/igt@xe_intel_bb@create-in-region.html
[8]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/bat-adlp-7/igt@xe_intel_bb@create-in-region.html
#### Warnings ####
* igt@kms_addfb_basic@addfb25-y-tiled-small-legacy:
- bat-dg2-oem2: [SKIP][9] ([Intel XE#623]) -> [FAIL][10]
[9]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7493/bat-dg2-oem2/igt@kms_addfb_basic@addfb25-y-tiled-small-legacy.html
[10]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/bat-dg2-oem2/igt@kms_addfb_basic@addfb25-y-tiled-small-legacy.html
* igt@kms_addfb_basic@basic-y-tiled-legacy:
- bat-dg2-oem2: [SKIP][11] ([Intel XE#624]) -> [FAIL][12]
[11]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7493/bat-dg2-oem2/igt@kms_addfb_basic@basic-y-tiled-legacy.html
[12]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/bat-dg2-oem2/igt@kms_addfb_basic@basic-y-tiled-legacy.html
- bat-adlp-7: [FAIL][13] ([Intel XE#609]) -> [FAIL][14] +2 other tests fail
[13]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7493/bat-adlp-7/igt@kms_addfb_basic@basic-y-tiled-legacy.html
[14]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/bat-adlp-7/igt@kms_addfb_basic@basic-y-tiled-legacy.html
* igt@kms_addfb_basic@invalid-set-prop-any:
- bat-atsm-2: [SKIP][15] ([i915#6077]) -> [FAIL][16] +33 other tests fail
[15]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7493/bat-atsm-2/igt@kms_addfb_basic@invalid-set-prop-any.html
[16]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/bat-atsm-2/igt@kms_addfb_basic@invalid-set-prop-any.html
* igt@kms_addfb_basic@tile-pitch-mismatch:
- bat-dg2-oem2: [FAIL][17] ([Intel XE#609]) -> [FAIL][18] +1 other test fail
[17]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7493/bat-dg2-oem2/igt@kms_addfb_basic@tile-pitch-mismatch.html
[18]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/bat-dg2-oem2/igt@kms_addfb_basic@tile-pitch-mismatch.html
* igt@kms_cursor_legacy@basic-flip-before-cursor-legacy:
- bat-atsm-2: [SKIP][19] ([Intel XE#274] / [Intel XE#539]) -> [FAIL][20] +5 other tests fail
[19]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7493/bat-atsm-2/igt@kms_cursor_legacy@basic-flip-before-cursor-legacy.html
[20]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/bat-atsm-2/igt@kms_cursor_legacy@basic-flip-before-cursor-legacy.html
* igt@kms_dsc@dsc-basic:
- bat-atsm-2: [SKIP][21] ([Intel XE#539]) -> [FAIL][22] +1 other test fail
[21]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7493/bat-atsm-2/igt@kms_dsc@dsc-basic.html
[22]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/bat-atsm-2/igt@kms_dsc@dsc-basic.html
- bat-dg2-oem2: [SKIP][23] ([Intel XE#423]) -> [FAIL][24]
[23]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7493/bat-dg2-oem2/igt@kms_dsc@dsc-basic.html
[24]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/bat-dg2-oem2/igt@kms_dsc@dsc-basic.html
- bat-adlp-7: [SKIP][25] ([Intel XE#423]) -> [FAIL][26]
[25]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7493/bat-adlp-7/igt@kms_dsc@dsc-basic.html
[26]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/bat-adlp-7/igt@kms_dsc@dsc-basic.html
* igt@kms_flip@basic-flip-vs-modeset:
- bat-atsm-2: [SKIP][27] ([Intel XE#275]) -> [FAIL][28] +3 other tests fail
[27]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7493/bat-atsm-2/igt@kms_flip@basic-flip-vs-modeset.html
[28]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/bat-atsm-2/igt@kms_flip@basic-flip-vs-modeset.html
* igt@kms_force_connector_basic@force-connector-state:
- bat-atsm-2: [SKIP][29] ([Intel XE#277] / [Intel XE#540]) -> [FAIL][30] +2 other tests fail
[29]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7493/bat-atsm-2/igt@kms_force_connector_basic@force-connector-state.html
[30]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/bat-atsm-2/igt@kms_force_connector_basic@force-connector-state.html
* igt@kms_force_connector_basic@prune-stale-modes:
- bat-dg2-oem2: [SKIP][31] ([i915#5274]) -> [FAIL][32]
[31]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7493/bat-dg2-oem2/igt@kms_force_connector_basic@prune-stale-modes.html
[32]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/bat-dg2-oem2/igt@kms_force_connector_basic@prune-stale-modes.html
* igt@kms_frontbuffer_tracking@basic:
- bat-dg2-oem2: [FAIL][33] ([Intel XE#608]) -> [FAIL][34]
[33]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7493/bat-dg2-oem2/igt@kms_frontbuffer_tracking@basic.html
[34]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/bat-dg2-oem2/igt@kms_frontbuffer_tracking@basic.html
- bat-adlp-7: [INCOMPLETE][35] ([Intel XE#632]) -> [FAIL][36]
[35]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7493/bat-adlp-7/igt@kms_frontbuffer_tracking@basic.html
[36]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/bat-adlp-7/igt@kms_frontbuffer_tracking@basic.html
* igt@kms_hdmi_inject@inject-audio:
- bat-atsm-2: [SKIP][37] ([Intel XE#540]) -> [FAIL][38]
[37]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7493/bat-atsm-2/igt@kms_hdmi_inject@inject-audio.html
[38]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/bat-atsm-2/igt@kms_hdmi_inject@inject-audio.html
* igt@kms_pipe_crc_basic@compare-crc-sanitycheck-nv12:
- bat-dg2-oem2: [FAIL][39] ([Intel XE#400]) -> [FAIL][40]
[39]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7493/bat-dg2-oem2/igt@kms_pipe_crc_basic@compare-crc-sanitycheck-nv12.html
[40]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/bat-dg2-oem2/igt@kms_pipe_crc_basic@compare-crc-sanitycheck-nv12.html
* igt@kms_pipe_crc_basic@compare-crc-sanitycheck-xr24:
- bat-atsm-2: [SKIP][41] ([Intel XE#537] / [i915#1836]) -> [FAIL][42] +6 other tests fail
[41]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7493/bat-atsm-2/igt@kms_pipe_crc_basic@compare-crc-sanitycheck-xr24.html
[42]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/bat-atsm-2/igt@kms_pipe_crc_basic@compare-crc-sanitycheck-xr24.html
* igt@kms_prop_blob@basic:
- bat-atsm-2: [SKIP][43] ([Intel XE#273]) -> [FAIL][44]
[43]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7493/bat-atsm-2/igt@kms_prop_blob@basic.html
[44]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/bat-atsm-2/igt@kms_prop_blob@basic.html
* igt@kms_psr@cursor_plane_move:
- bat-atsm-2: [SKIP][45] ([i915#1072]) -> [FAIL][46] +2 other tests fail
[45]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7493/bat-atsm-2/igt@kms_psr@cursor_plane_move.html
[46]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/bat-atsm-2/igt@kms_psr@cursor_plane_move.html
* igt@kms_psr@primary_page_flip:
- bat-dg2-oem2: [SKIP][47] ([i915#1072]) -> [FAIL][48] +2 other tests fail
[47]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7493/bat-dg2-oem2/igt@kms_psr@primary_page_flip.html
[48]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/bat-dg2-oem2/igt@kms_psr@primary_page_flip.html
* igt@xe_compute@compute-square:
- bat-atsm-2: [SKIP][49] ([Intel XE#672]) -> [FAIL][50]
[49]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7493/bat-atsm-2/igt@xe_compute@compute-square.html
[50]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/bat-atsm-2/igt@xe_compute@compute-square.html
- bat-dg2-oem2: [SKIP][51] ([Intel XE#672]) -> [FAIL][52]
[51]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7493/bat-dg2-oem2/igt@xe_compute@compute-square.html
[52]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/bat-dg2-oem2/igt@xe_compute@compute-square.html
* igt@xe_evict@evict-beng-small-external:
- bat-adlp-7: [SKIP][53] ([Intel XE#261] / [Intel XE#688]) -> [FAIL][54] +15 other tests fail
[53]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7493/bat-adlp-7/igt@xe_evict@evict-beng-small-external.html
[54]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/bat-adlp-7/igt@xe_evict@evict-beng-small-external.html
* igt@xe_exec_fault_mode@many-basic:
- bat-dg2-oem2: [SKIP][55] ([Intel XE#288]) -> [FAIL][56] +17 other tests fail
[55]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7493/bat-dg2-oem2/igt@xe_exec_fault_mode@many-basic.html
[56]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/bat-dg2-oem2/igt@xe_exec_fault_mode@many-basic.html
* igt@xe_exec_fault_mode@twice-userptr:
- bat-adlp-7: [SKIP][57] ([Intel XE#288]) -> [FAIL][58] +17 other tests fail
[57]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7493/bat-adlp-7/igt@xe_exec_fault_mode@twice-userptr.html
[58]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/bat-adlp-7/igt@xe_exec_fault_mode@twice-userptr.html
* igt@xe_exec_fault_mode@twice-userptr-invalidate-imm:
- bat-atsm-2: [SKIP][59] ([Intel XE#288]) -> [FAIL][60] +17 other tests fail
[59]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7493/bat-atsm-2/igt@xe_exec_fault_mode@twice-userptr-invalidate-imm.html
[60]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/bat-atsm-2/igt@xe_exec_fault_mode@twice-userptr-invalidate-imm.html
* igt@xe_huc_copy@huc_copy:
- bat-dg2-oem2: [SKIP][61] ([Intel XE#255]) -> [FAIL][62]
[61]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7493/bat-dg2-oem2/igt@xe_huc_copy@huc_copy.html
[62]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/bat-dg2-oem2/igt@xe_huc_copy@huc_copy.html
- bat-atsm-2: [SKIP][63] ([Intel XE#255]) -> [FAIL][64]
[63]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7493/bat-atsm-2/igt@xe_huc_copy@huc_copy.html
[64]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/bat-atsm-2/igt@xe_huc_copy@huc_copy.html
* igt@xe_mmap@vram:
- bat-adlp-7: [SKIP][65] ([Intel XE#263]) -> [FAIL][66]
[65]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7493/bat-adlp-7/igt@xe_mmap@vram.html
[66]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/bat-adlp-7/igt@xe_mmap@vram.html
* igt@xe_module_load@load:
- bat-pvc-2: [INCOMPLETE][67] ([Intel XE#597]) -> [FAIL][68]
[67]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7493/bat-pvc-2/igt@xe_module_load@load.html
[68]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/bat-pvc-2/igt@xe_module_load@load.html
#### Suppressed ####
The following results come from untrusted machines, tests, or statuses.
They do not affect the overall result.
* {igt@xe_exec_basic@no-exec-bindexecqueue}:
- bat-dg2-oem2: [PASS][69] -> [FAIL][70] +13 other tests fail
[69]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7493/bat-dg2-oem2/igt@xe_exec_basic@no-exec-bindexecqueue.html
[70]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/bat-dg2-oem2/igt@xe_exec_basic@no-exec-bindexecqueue.html
* {igt@xe_exec_compute_mode@twice-bindexecqueue-userptr-rebind}:
- bat-atsm-2: [PASS][71] -> [FAIL][72] +13 other tests fail
[71]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7493/bat-atsm-2/igt@xe_exec_compute_mode@twice-bindexecqueue-userptr-rebind.html
[72]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/bat-atsm-2/igt@xe_exec_compute_mode@twice-bindexecqueue-userptr-rebind.html
* {igt@xe_exec_fault_mode@twice-bindexecqueue-userptr}:
- bat-adlp-7: [SKIP][73] ([Intel XE#288]) -> [FAIL][74] +14 other tests fail
[73]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7493/bat-adlp-7/igt@xe_exec_fault_mode@twice-bindexecqueue-userptr.html
[74]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/bat-adlp-7/igt@xe_exec_fault_mode@twice-bindexecqueue-userptr.html
* {igt@xe_exec_fault_mode@twice-bindexecqueue-userptr-invalidate}:
- bat-pvc-2: NOTRUN -> [FAIL][75] +29 other tests fail
[75]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/bat-pvc-2/igt@xe_exec_fault_mode@twice-bindexecqueue-userptr-invalidate.html
- bat-dg2-oem2: [SKIP][76] ([Intel XE#288]) -> [FAIL][77] +14 other tests fail
[76]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7493/bat-dg2-oem2/igt@xe_exec_fault_mode@twice-bindexecqueue-userptr-invalidate.html
[77]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/bat-dg2-oem2/igt@xe_exec_fault_mode@twice-bindexecqueue-userptr-invalidate.html
* {igt@xe_exec_fault_mode@twice-bindexecqueue-userptr-rebind}:
- bat-atsm-2: [SKIP][78] ([Intel XE#288]) -> [FAIL][79] +14 other tests fail
[78]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7493/bat-atsm-2/igt@xe_exec_fault_mode@twice-bindexecqueue-userptr-rebind.html
[79]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/bat-atsm-2/igt@xe_exec_fault_mode@twice-bindexecqueue-userptr-rebind.html
* {igt@xe_query@query-gt-list}:
- bat-dg2-oem2: NOTRUN -> [FAIL][80]
[80]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/bat-dg2-oem2/igt@xe_query@query-gt-list.html
- bat-adlp-7: NOTRUN -> [FAIL][81]
[81]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/bat-adlp-7/igt@xe_query@query-gt-list.html
- bat-atsm-2: NOTRUN -> [FAIL][82]
[82]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/bat-atsm-2/igt@xe_query@query-gt-list.html
* {igt@xe_vm@bind-execqueues-independent}:
- bat-adlp-7: [PASS][83] -> [FAIL][84] +13 other tests fail
[83]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7493/bat-adlp-7/igt@xe_vm@bind-execqueues-independent.html
[84]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/bat-adlp-7/igt@xe_vm@bind-execqueues-independent.html
Known issues
------------
Here are the changes found in XEIGTPW_9827_BAT that come from known issues:
### IGT changes ###
#### Issues hit ####
* igt@xe_live_ktest@migrate:
- bat-pvc-2: NOTRUN -> [SKIP][85] ([Intel XE#483]) +1 other test skip
[85]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/bat-pvc-2/igt@xe_live_ktest@migrate.html
- bat-adlp-7: NOTRUN -> [SKIP][86] ([Intel XE#483]) +1 other test skip
[86]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/bat-adlp-7/igt@xe_live_ktest@migrate.html
{name}: This element is suppressed. This means it is ignored when computing
the status of the difference (SUCCESS, WARNING, or FAILURE).
[Intel XE#255]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/255
[Intel XE#261]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/261
[Intel XE#263]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/263
[Intel XE#273]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/273
[Intel XE#274]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/274
[Intel XE#275]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/275
[Intel XE#277]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/277
[Intel XE#288]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/288
[Intel XE#400]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/400
[Intel XE#423]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/423
[Intel XE#483]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/483
[Intel XE#537]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/537
[Intel XE#539]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/539
[Intel XE#540]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/540
[Intel XE#597]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/597
[Intel XE#608]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/608
[Intel XE#609]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/609
[Intel XE#623]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/623
[Intel XE#624]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/624
[Intel XE#632]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/632
[Intel XE#672]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/672
[Intel XE#688]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/688
[i915#1072]: https://gitlab.freedesktop.org/drm/intel/issues/1072
[i915#1836]: https://gitlab.freedesktop.org/drm/intel/issues/1836
[i915#5274]: https://gitlab.freedesktop.org/drm/intel/issues/5274
[i915#6077]: https://gitlab.freedesktop.org/drm/intel/issues/6077
Build changes
-------------
* IGT: IGT_7493 -> IGTPW_9827
* Linux: xe-376-9da40abcc0ccdf8fdfed4e21d76060bfcd35fe7d -> xe-383-fac2e20c785bd790c250e4f4799dfa28e44e7082
IGTPW_9827: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_9827/index.html
IGT_7493: 2517e42d612e0c1ca096acf8b5f6177f7ef4bce7 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
xe-376-9da40abcc0ccdf8fdfed4e21d76060bfcd35fe7d: 9da40abcc0ccdf8fdfed4e21d76060bfcd35fe7d
xe-383-fac2e20c785bd790c250e4f4799dfa28e44e7082: fac2e20c785bd790c250e4f4799dfa28e44e7082
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_9827/index.html
^ permalink raw reply [flat|nested] 23+ messages in thread