Igt-dev Archive on lore.kernel.org
* [igt-dev] [PATCH v1 0/8] uAPI Alignment - Renaming
@ 2023-11-14 13:44 Francois Dugast
  2023-11-14 13:44 ` [igt-dev] [PATCH v1 1/8] drm-uapi/xe: Add missing DRM_ prefix in uAPI constants Francois Dugast
                   ` (8 more replies)
  0 siblings, 9 replies; 18+ messages in thread
From: Francois Dugast @ 2023-11-14 13:44 UTC (permalink / raw)
  To: igt-dev

This aligns with the kernel series:
https://patchwork.freedesktop.org/series/126399/

Francois Dugast (3):
  drm-uapi/xe: Add missing DRM_ prefix in uAPI constants
  drm-uapi/xe: Add _FLAG to uAPI constants usable for flags
  drm-uapi/xe: Change rsvd to pad in struct drm_xe_class_instance

Rodrigo Vivi (5):
  drm-uapi/xe: Rename *_mem_regions mask.
  drm-uapi/xe: Rename query's mem_usage to mem_regions
  drm-uapi/xe: s/FLAGS_HAS_VRAM/FLAG_HAS_VRAM
  drm-uapi/xe: Differentiate WAIT_OP from WAIT_MASK
  drm-uapi/xe: Be more specific about vm_bind prefetch region

 benchmarks/gem_wsim.c                |  12 +-
 include/drm-uapi/xe_drm.h            | 205 ++++++++++++++-------------
 lib/igt_fb.c                         |   2 +-
 lib/intel_batchbuffer.c              |  24 ++--
 lib/intel_blt.c                      |   2 +-
 lib/intel_compute.c                  |   6 +-
 lib/intel_ctx.c                      |   4 +-
 lib/xe/xe_ioctl.c                    |  44 +++---
 lib/xe/xe_ioctl.h                    |   2 +-
 lib/xe/xe_query.c                    |  88 ++++++------
 lib/xe/xe_query.h                    |   8 +-
 lib/xe/xe_spin.c                     |   4 +-
 lib/xe/xe_util.c                     |  16 +--
 lib/xe/xe_util.h                     |   4 +-
 tests/intel/xe_access_counter.c      |   4 +-
 tests/intel/xe_ccs.c                 |   8 +-
 tests/intel/xe_copy_basic.c          |   6 +-
 tests/intel/xe_create.c              |   6 +-
 tests/intel/xe_debugfs.c             |  12 +-
 tests/intel/xe_dma_buf_sync.c        |   4 +-
 tests/intel/xe_drm_fdinfo.c          |  18 +--
 tests/intel/xe_evict.c               |  24 ++--
 tests/intel/xe_evict_ccs.c           |   2 +-
 tests/intel/xe_exec_balancer.c       |  34 ++---
 tests/intel/xe_exec_basic.c          |  24 ++--
 tests/intel/xe_exec_compute_mode.c   |   6 +-
 tests/intel/xe_exec_fault_mode.c     |  12 +-
 tests/intel/xe_exec_queue_property.c |  18 +--
 tests/intel/xe_exec_reset.c          |  62 ++++----
 tests/intel/xe_exec_store.c          |  26 ++--
 tests/intel/xe_exec_threads.c        |  48 +++----
 tests/intel/xe_exercise_blt.c        |   6 +-
 tests/intel/xe_guc_pc.c              |  12 +-
 tests/intel/xe_huc_copy.c            |   4 +-
 tests/intel/xe_intel_bb.c            |   2 +-
 tests/intel/xe_noexec_ping_pong.c    |   2 +-
 tests/intel/xe_perf_pmu.c            |  32 ++---
 tests/intel/xe_pm.c                  |  30 ++--
 tests/intel/xe_pm_residency.c        |   2 +-
 tests/intel/xe_query.c               | 106 +++++++-------
 tests/intel/xe_spin_batch.c          |   2 +-
 tests/intel/xe_vm.c                  | 138 +++++++++---------
 tests/intel/xe_waitfence.c           |  20 +--
 43 files changed, 549 insertions(+), 542 deletions(-)

-- 
2.34.1


* [igt-dev] [PATCH v1 1/8] drm-uapi/xe: Add missing DRM_ prefix in uAPI constants
  2023-11-14 13:44 [igt-dev] [PATCH v1 0/8] uAPI Alignment - Renaming Francois Dugast
@ 2023-11-14 13:44 ` Francois Dugast
  2023-11-14 13:48   ` Rodrigo Vivi
  2023-11-14 13:44 ` [igt-dev] [PATCH v1 2/8] drm-uapi/xe: Add _FLAG to uAPI constants usable for flags Francois Dugast
                   ` (7 subsequent siblings)
  8 siblings, 1 reply; 18+ messages in thread
From: Francois Dugast @ 2023-11-14 13:44 UTC (permalink / raw)
  To: igt-dev

Align with commit ("drm/xe/uapi: Add missing DRM_ prefix in uAPI constants")

Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
 include/drm-uapi/xe_drm.h            | 124 +++++++++++++--------------
 lib/intel_batchbuffer.c              |   8 +-
 lib/intel_blt.c                      |   2 +-
 lib/xe/xe_ioctl.c                    |  22 ++---
 lib/xe/xe_query.c                    |  12 +--
 lib/xe/xe_query.h                    |   4 +-
 lib/xe/xe_util.c                     |  10 +--
 lib/xe/xe_util.h                     |   4 +-
 tests/intel/xe_access_counter.c      |   4 +-
 tests/intel/xe_ccs.c                 |   4 +-
 tests/intel/xe_copy_basic.c          |   4 +-
 tests/intel/xe_debugfs.c             |  12 +--
 tests/intel/xe_exec_basic.c          |   8 +-
 tests/intel/xe_exec_fault_mode.c     |   4 +-
 tests/intel/xe_exec_queue_property.c |  18 ++--
 tests/intel/xe_exec_reset.c          |  20 ++---
 tests/intel/xe_exec_threads.c        |   4 +-
 tests/intel/xe_exercise_blt.c        |   4 +-
 tests/intel/xe_perf_pmu.c            |   8 +-
 tests/intel/xe_pm.c                  |   2 +-
 tests/intel/xe_query.c               |  40 ++++-----
 tests/intel/xe_vm.c                  |  10 +--
 22 files changed, 164 insertions(+), 164 deletions(-)

diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index babfaf0fe..9ab6c3269 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -19,12 +19,12 @@ extern "C" {
 /**
  * DOC: uevent generated by xe on it's pci node.
  *
- * XE_RESET_FAILED_UEVENT - Event is generated when attempt to reset gt
+ * DRM_XE_RESET_FAILED_UEVENT - Event is generated when attempt to reset gt
  * fails. The value supplied with the event is always "NEEDS_RESET".
  * Additional information supplied is tile id and gt id of the gt unit for
  * which reset has failed.
  */
-#define XE_RESET_FAILED_UEVENT "DEVICE_STATUS"
+#define DRM_XE_RESET_FAILED_UEVENT "DEVICE_STATUS"
 
 /**
  * struct xe_user_extension - Base class for defining a chain of extensions
@@ -148,14 +148,14 @@ struct drm_xe_engine_class_instance {
  * enum drm_xe_memory_class - Supported memory classes.
  */
 enum drm_xe_memory_class {
-	/** @XE_MEM_REGION_CLASS_SYSMEM: Represents system memory. */
-	XE_MEM_REGION_CLASS_SYSMEM = 0,
+	/** @DRM_XE_MEM_REGION_CLASS_SYSMEM: Represents system memory. */
+	DRM_XE_MEM_REGION_CLASS_SYSMEM = 0,
 	/**
-	 * @XE_MEM_REGION_CLASS_VRAM: On discrete platforms, this
+	 * @DRM_XE_MEM_REGION_CLASS_VRAM: On discrete platforms, this
 	 * represents the memory that is local to the device, which we
 	 * call VRAM. Not valid on integrated platforms.
 	 */
-	XE_MEM_REGION_CLASS_VRAM
+	DRM_XE_MEM_REGION_CLASS_VRAM
 };
 
 /**
@@ -215,7 +215,7 @@ struct drm_xe_query_mem_region {
 	 * always equal the @total_size, since all of it will be CPU
 	 * accessible.
 	 *
-	 * Note this is only tracked for XE_MEM_REGION_CLASS_VRAM
+	 * Note this is only tracked for DRM_XE_MEM_REGION_CLASS_VRAM
 	 * regions (for other types the value here will always equal
 	 * zero).
 	 */
@@ -227,7 +227,7 @@ struct drm_xe_query_mem_region {
 	 * Requires CAP_PERFMON or CAP_SYS_ADMIN to get reliable
 	 * accounting. Without this the value here will always equal
 	 * zero.  Note this is only currently tracked for
-	 * XE_MEM_REGION_CLASS_VRAM regions (for other types the value
+	 * DRM_XE_MEM_REGION_CLASS_VRAM regions (for other types the value
 	 * here will always be zero).
 	 */
 	__u64 cpu_visible_used;
@@ -320,12 +320,12 @@ struct drm_xe_query_config {
 	/** @pad: MBZ */
 	__u32 pad;
 
-#define XE_QUERY_CONFIG_REV_AND_DEVICE_ID	0
-#define XE_QUERY_CONFIG_FLAGS			1
-	#define XE_QUERY_CONFIG_FLAGS_HAS_VRAM		(0x1 << 0)
-#define XE_QUERY_CONFIG_MIN_ALIGNMENT		2
-#define XE_QUERY_CONFIG_VA_BITS			3
-#define XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY	4
+#define DRM_XE_QUERY_CONFIG_REV_AND_DEVICE_ID	0
+#define DRM_XE_QUERY_CONFIG_FLAGS			1
+	#define DRM_XE_QUERY_CONFIG_FLAGS_HAS_VRAM		(0x1 << 0)
+#define DRM_XE_QUERY_CONFIG_MIN_ALIGNMENT		2
+#define DRM_XE_QUERY_CONFIG_VA_BITS			3
+#define DRM_XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY	4
 	/** @info: array of elements containing the config info */
 	__u64 info[];
 };
@@ -339,8 +339,8 @@ struct drm_xe_query_config {
  * implementing graphics and/or media operations.
  */
 struct drm_xe_query_gt {
-#define XE_QUERY_GT_TYPE_MAIN		0
-#define XE_QUERY_GT_TYPE_MEDIA		1
+#define DRM_XE_QUERY_GT_TYPE_MAIN		0
+#define DRM_XE_QUERY_GT_TYPE_MEDIA		1
 	/** @type: GT type: Main or Media */
 	__u16 type;
 	/** @gt_id: Unique ID of this GT within the PCI Device */
@@ -400,7 +400,7 @@ struct drm_xe_query_topology_mask {
 	 *   DSS_GEOMETRY    ff ff ff ff 00 00 00 00
 	 * means 32 DSS are available for geometry.
 	 */
-#define XE_TOPO_DSS_GEOMETRY	(1 << 0)
+#define DRM_XE_TOPO_DSS_GEOMETRY	(1 << 0)
 	/*
 	 * To query the mask of Dual Sub Slices (DSS) available for compute
 	 * operations. For example a query response containing the following
@@ -408,7 +408,7 @@ struct drm_xe_query_topology_mask {
 	 *   DSS_COMPUTE    ff ff ff ff 00 00 00 00
 	 * means 32 DSS are available for compute.
 	 */
-#define XE_TOPO_DSS_COMPUTE	(1 << 1)
+#define DRM_XE_TOPO_DSS_COMPUTE		(1 << 1)
 	/*
 	 * To query the mask of Execution Units (EU) available per Dual Sub
 	 * Slices (DSS). For example a query response containing the following
@@ -416,7 +416,7 @@ struct drm_xe_query_topology_mask {
 	 *   EU_PER_DSS    ff ff 00 00 00 00 00 00
 	 * means each DSS has 16 EU.
 	 */
-#define XE_TOPO_EU_PER_DSS	(1 << 2)
+#define DRM_XE_TOPO_EU_PER_DSS		(1 << 2)
 	/** @type: type of mask */
 	__u16 type;
 
@@ -497,8 +497,8 @@ struct drm_xe_gem_create {
 	 */
 	__u64 size;
 
-#define XE_GEM_CREATE_FLAG_DEFER_BACKING	(0x1 << 24)
-#define XE_GEM_CREATE_FLAG_SCANOUT		(0x1 << 25)
+#define DRM_XE_GEM_CREATE_FLAG_DEFER_BACKING		(0x1 << 24)
+#define DRM_XE_GEM_CREATE_FLAG_SCANOUT			(0x1 << 25)
 /*
  * When using VRAM as a possible placement, ensure that the corresponding VRAM
  * allocation will always use the CPU accessible part of VRAM. This is important
@@ -514,7 +514,7 @@ struct drm_xe_gem_create {
  * display surfaces, therefore the kernel requires setting this flag for such
  * objects, otherwise an error is thrown on small-bar systems.
  */
-#define XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM	(0x1 << 26)
+#define DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM	(0x1 << 26)
 	/**
 	 * @flags: Flags, currently a mask of memory instances of where BO can
 	 * be placed
@@ -581,14 +581,14 @@ struct drm_xe_ext_set_property {
 };
 
 struct drm_xe_vm_create {
-#define XE_VM_EXTENSION_SET_PROPERTY	0
+#define DRM_XE_VM_EXTENSION_SET_PROPERTY	0
 	/** @extensions: Pointer to the first extension struct, if any */
 	__u64 extensions;
 
-#define DRM_XE_VM_CREATE_SCRATCH_PAGE	(0x1 << 0)
-#define DRM_XE_VM_CREATE_COMPUTE_MODE	(0x1 << 1)
-#define DRM_XE_VM_CREATE_ASYNC_DEFAULT	(0x1 << 2)
-#define DRM_XE_VM_CREATE_FAULT_MODE	(0x1 << 3)
+#define DRM_XE_VM_CREATE_SCRATCH_PAGE		(0x1 << 0)
+#define DRM_XE_VM_CREATE_COMPUTE_MODE		(0x1 << 1)
+#define DRM_XE_VM_CREATE_ASYNC_DEFAULT		(0x1 << 2)
+#define DRM_XE_VM_CREATE_FAULT_MODE		(0x1 << 3)
 	/** @flags: Flags */
 	__u32 flags;
 
@@ -644,29 +644,29 @@ struct drm_xe_vm_bind_op {
 	 */
 	__u64 tile_mask;
 
-#define XE_VM_BIND_OP_MAP		0x0
-#define XE_VM_BIND_OP_UNMAP		0x1
-#define XE_VM_BIND_OP_MAP_USERPTR	0x2
-#define XE_VM_BIND_OP_UNMAP_ALL		0x3
-#define XE_VM_BIND_OP_PREFETCH		0x4
+#define DRM_XE_VM_BIND_OP_MAP		0x0
+#define DRM_XE_VM_BIND_OP_UNMAP		0x1
+#define DRM_XE_VM_BIND_OP_MAP_USERPTR	0x2
+#define DRM_XE_VM_BIND_OP_UNMAP_ALL	0x3
+#define DRM_XE_VM_BIND_OP_PREFETCH	0x4
 	/** @op: Bind operation to perform */
 	__u32 op;
 
-#define XE_VM_BIND_FLAG_READONLY	(0x1 << 0)
-#define XE_VM_BIND_FLAG_ASYNC		(0x1 << 1)
+#define DRM_XE_VM_BIND_FLAG_READONLY	(0x1 << 0)
+#define DRM_XE_VM_BIND_FLAG_ASYNC	(0x1 << 1)
 	/*
 	 * Valid on a faulting VM only, do the MAP operation immediately rather
 	 * than deferring the MAP to the page fault handler.
 	 */
-#define XE_VM_BIND_FLAG_IMMEDIATE	(0x1 << 2)
+#define DRM_XE_VM_BIND_FLAG_IMMEDIATE	(0x1 << 2)
 	/*
 	 * When the NULL flag is set, the page tables are setup with a special
 	 * bit which indicates writes are dropped and all reads return zero.  In
-	 * the future, the NULL flags will only be valid for XE_VM_BIND_OP_MAP
+	 * the future, the NULL flags will only be valid for DRM_XE_VM_BIND_OP_MAP
 	 * operations, the BO handle MBZ, and the BO offset MBZ. This flag is
 	 * intended to implement VK sparse bindings.
 	 */
-#define XE_VM_BIND_FLAG_NULL		(0x1 << 3)
+#define DRM_XE_VM_BIND_FLAG_NULL	(0x1 << 3)
 	/** @flags: Bind flags */
 	__u32 flags;
 
@@ -721,19 +721,19 @@ struct drm_xe_vm_bind {
 	__u64 reserved[2];
 };
 
-/* For use with XE_EXEC_QUEUE_SET_PROPERTY_ACC_GRANULARITY */
+/* For use with DRM_XE_EXEC_QUEUE_SET_PROPERTY_ACC_GRANULARITY */
 
 /* Monitor 128KB contiguous region with 4K sub-granularity */
-#define XE_ACC_GRANULARITY_128K 0
+#define DRM_XE_ACC_GRANULARITY_128K 0
 
 /* Monitor 2MB contiguous region with 64KB sub-granularity */
-#define XE_ACC_GRANULARITY_2M 1
+#define DRM_XE_ACC_GRANULARITY_2M 1
 
 /* Monitor 16MB contiguous region with 512KB sub-granularity */
-#define XE_ACC_GRANULARITY_16M 2
+#define DRM_XE_ACC_GRANULARITY_16M 2
 
 /* Monitor 64MB contiguous region with 2M sub-granularity */
-#define XE_ACC_GRANULARITY_64M 3
+#define DRM_XE_ACC_GRANULARITY_64M 3
 
 /**
  * struct drm_xe_exec_queue_set_property - exec queue set property
@@ -747,14 +747,14 @@ struct drm_xe_exec_queue_set_property {
 	/** @exec_queue_id: Exec queue ID */
 	__u32 exec_queue_id;
 
-#define XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY		0
-#define XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE		1
-#define XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT	2
-#define XE_EXEC_QUEUE_SET_PROPERTY_PERSISTENCE		3
-#define XE_EXEC_QUEUE_SET_PROPERTY_JOB_TIMEOUT		4
-#define XE_EXEC_QUEUE_SET_PROPERTY_ACC_TRIGGER		5
-#define XE_EXEC_QUEUE_SET_PROPERTY_ACC_NOTIFY		6
-#define XE_EXEC_QUEUE_SET_PROPERTY_ACC_GRANULARITY	7
+#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY			0
+#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE		1
+#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT	2
+#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_PERSISTENCE		3
+#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_JOB_TIMEOUT		4
+#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_ACC_TRIGGER		5
+#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_ACC_NOTIFY		6
+#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_ACC_GRANULARITY		7
 	/** @property: property to set */
 	__u32 property;
 
@@ -766,7 +766,7 @@ struct drm_xe_exec_queue_set_property {
 };
 
 struct drm_xe_exec_queue_create {
-#define XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY               0
+#define DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY               0
 	/** @extensions: Pointer to the first extension struct, if any */
 	__u64 extensions;
 
@@ -805,7 +805,7 @@ struct drm_xe_exec_queue_get_property {
 	/** @exec_queue_id: Exec queue ID */
 	__u32 exec_queue_id;
 
-#define XE_EXEC_QUEUE_GET_PROPERTY_BAN			0
+#define DRM_XE_EXEC_QUEUE_GET_PROPERTY_BAN	0
 	/** @property: property to get */
 	__u32 property;
 
@@ -973,11 +973,11 @@ struct drm_xe_wait_user_fence {
 /**
  * DOC: XE PMU event config IDs
  *
- * Check 'man perf_event_open' to use the ID's XE_PMU_XXXX listed in xe_drm.h
+ * Check 'man perf_event_open' to use the ID's DRM_XE_PMU_XXXX listed in xe_drm.h
  * in 'struct perf_event_attr' as part of perf_event_open syscall to read a
  * particular event.
  *
- * For example to open the XE_PMU_RENDER_GROUP_BUSY(0):
+ * For example to open the DRM_XE_PMU_RENDER_GROUP_BUSY(0):
  *
  * .. code-block:: C
  *
@@ -991,7 +991,7 @@ struct drm_xe_wait_user_fence {
  *	attr.read_format = PERF_FORMAT_TOTAL_TIME_ENABLED;
  *	attr.use_clockid = 1;
  *	attr.clockid = CLOCK_MONOTONIC;
- *	attr.config = XE_PMU_RENDER_GROUP_BUSY(0);
+ *	attr.config = DRM_XE_PMU_RENDER_GROUP_BUSY(0);
  *
  *	fd = syscall(__NR_perf_event_open, &attr, -1, cpu, -1, 0);
  */
@@ -999,15 +999,15 @@ struct drm_xe_wait_user_fence {
 /*
  * Top bits of every counter are GT id.
  */
-#define __XE_PMU_GT_SHIFT (56)
+#define __DRM_XE_PMU_GT_SHIFT (56)
 
-#define ___XE_PMU_OTHER(gt, x) \
-	(((__u64)(x)) | ((__u64)(gt) << __XE_PMU_GT_SHIFT))
+#define ___DRM_XE_PMU_OTHER(gt, x) \
+	(((__u64)(x)) | ((__u64)(gt) << __DRM_XE_PMU_GT_SHIFT))
 
-#define XE_PMU_RENDER_GROUP_BUSY(gt)		___XE_PMU_OTHER(gt, 0)
-#define XE_PMU_COPY_GROUP_BUSY(gt)		___XE_PMU_OTHER(gt, 1)
-#define XE_PMU_MEDIA_GROUP_BUSY(gt)		___XE_PMU_OTHER(gt, 2)
-#define XE_PMU_ANY_ENGINE_GROUP_BUSY(gt)	___XE_PMU_OTHER(gt, 3)
+#define DRM_XE_PMU_RENDER_GROUP_BUSY(gt)	___DRM_XE_PMU_OTHER(gt, 0)
+#define DRM_XE_PMU_COPY_GROUP_BUSY(gt)		___DRM_XE_PMU_OTHER(gt, 1)
+#define DRM_XE_PMU_MEDIA_GROUP_BUSY(gt)		___DRM_XE_PMU_OTHER(gt, 2)
+#define DRM_XE_PMU_ANY_ENGINE_GROUP_BUSY(gt)	___DRM_XE_PMU_OTHER(gt, 3)
 
 #if defined(__cplusplus)
 }
diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
index c32d04302..eb47ede50 100644
--- a/lib/intel_batchbuffer.c
+++ b/lib/intel_batchbuffer.c
@@ -1286,7 +1286,7 @@ static struct drm_xe_vm_bind_op *xe_alloc_bind_ops(struct intel_bb *ibb,
 {
 	struct drm_i915_gem_exec_object2 **objects = ibb->objects;
 	struct drm_xe_vm_bind_op *bind_ops, *ops;
-	bool set_obj = (op & 0xffff) == XE_VM_BIND_OP_MAP;
+	bool set_obj = (op & 0xffff) == DRM_XE_VM_BIND_OP_MAP;
 
 	bind_ops = calloc(ibb->num_objects, sizeof(*bind_ops));
 	igt_assert(bind_ops);
@@ -1325,8 +1325,8 @@ static void __unbind_xe_objects(struct intel_bb *ibb)
 
 	if (ibb->num_objects > 1) {
 		struct drm_xe_vm_bind_op *bind_ops;
-		uint32_t op = XE_VM_BIND_OP_UNMAP;
-		uint32_t flags = XE_VM_BIND_FLAG_ASYNC;
+		uint32_t op = DRM_XE_VM_BIND_OP_UNMAP;
+		uint32_t flags = DRM_XE_VM_BIND_FLAG_ASYNC;
 
 		bind_ops = xe_alloc_bind_ops(ibb, op, flags, 0);
 		xe_vm_bind_array(ibb->fd, ibb->vm_id, 0, bind_ops,
@@ -2357,7 +2357,7 @@ __xe_bb_exec(struct intel_bb *ibb, uint64_t flags, bool sync)
 
 	syncs[0].handle = syncobj_create(ibb->fd, 0);
 	if (ibb->num_objects > 1) {
-		bind_ops = xe_alloc_bind_ops(ibb, XE_VM_BIND_OP_MAP, XE_VM_BIND_FLAG_ASYNC, 0);
+		bind_ops = xe_alloc_bind_ops(ibb, DRM_XE_VM_BIND_OP_MAP, DRM_XE_VM_BIND_FLAG_ASYNC, 0);
 		xe_vm_bind_array(ibb->fd, ibb->vm_id, 0, bind_ops,
 				 ibb->num_objects, syncs, 1);
 		free(bind_ops);
diff --git a/lib/intel_blt.c b/lib/intel_blt.c
index 5b682c2b6..2edcd72f3 100644
--- a/lib/intel_blt.c
+++ b/lib/intel_blt.c
@@ -1804,7 +1804,7 @@ blt_create_object(const struct blt_copy_data *blt, uint32_t region,
 		uint64_t flags = region;
 
 		if (create_mapping && region != system_memory(blt->fd))
-			flags |= XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM;
+			flags |= DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM;
 
 		size = ALIGN(size, xe_get_default_alignment(blt->fd));
 		handle = xe_bo_create_flags(blt->fd, 0, size, flags);
diff --git a/lib/xe/xe_ioctl.c b/lib/xe/xe_ioctl.c
index c4077801e..36f10a49a 100644
--- a/lib/xe/xe_ioctl.c
+++ b/lib/xe/xe_ioctl.c
@@ -67,7 +67,7 @@ void xe_vm_unbind_all_async(int fd, uint32_t vm, uint32_t exec_queue,
 			    uint32_t num_syncs)
 {
 	__xe_vm_bind_assert(fd, vm, exec_queue, bo, 0, 0, 0,
-			    XE_VM_BIND_OP_UNMAP_ALL, XE_VM_BIND_FLAG_ASYNC,
+			    DRM_XE_VM_BIND_OP_UNMAP_ALL, DRM_XE_VM_BIND_FLAG_ASYNC,
 			    sync, num_syncs, 0, 0);
 }
 
@@ -130,7 +130,7 @@ void xe_vm_bind(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
 		struct drm_xe_sync *sync, uint32_t num_syncs)
 {
 	__xe_vm_bind_assert(fd, vm, 0, bo, offset, addr, size,
-			    XE_VM_BIND_OP_MAP, 0, sync, num_syncs, 0, 0);
+			    DRM_XE_VM_BIND_OP_MAP, 0, sync, num_syncs, 0, 0);
 }
 
 void xe_vm_unbind(int fd, uint32_t vm, uint64_t offset,
@@ -138,7 +138,7 @@ void xe_vm_unbind(int fd, uint32_t vm, uint64_t offset,
 		  struct drm_xe_sync *sync, uint32_t num_syncs)
 {
 	__xe_vm_bind_assert(fd, vm, 0, 0, offset, addr, size,
-			    XE_VM_BIND_OP_UNMAP, 0, sync, num_syncs, 0, 0);
+			    DRM_XE_VM_BIND_OP_UNMAP, 0, sync, num_syncs, 0, 0);
 }
 
 void xe_vm_prefetch_async(int fd, uint32_t vm, uint32_t exec_queue, uint64_t offset,
@@ -147,7 +147,7 @@ void xe_vm_prefetch_async(int fd, uint32_t vm, uint32_t exec_queue, uint64_t off
 			  uint32_t region)
 {
 	__xe_vm_bind_assert(fd, vm, exec_queue, 0, offset, addr, size,
-			    XE_VM_BIND_OP_PREFETCH, XE_VM_BIND_FLAG_ASYNC,
+			    DRM_XE_VM_BIND_OP_PREFETCH, DRM_XE_VM_BIND_FLAG_ASYNC,
 			    sync, num_syncs, region, 0);
 }
 
@@ -156,7 +156,7 @@ void xe_vm_bind_async(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
 		      struct drm_xe_sync *sync, uint32_t num_syncs)
 {
 	__xe_vm_bind_assert(fd, vm, exec_queue, bo, offset, addr, size,
-			    XE_VM_BIND_OP_MAP, XE_VM_BIND_FLAG_ASYNC, sync,
+			    DRM_XE_VM_BIND_OP_MAP, DRM_XE_VM_BIND_FLAG_ASYNC, sync,
 			    num_syncs, 0, 0);
 }
 
@@ -166,7 +166,7 @@ void xe_vm_bind_async_flags(int fd, uint32_t vm, uint32_t exec_queue, uint32_t b
 			    uint32_t flags)
 {
 	__xe_vm_bind_assert(fd, vm, exec_queue, bo, offset, addr, size,
-			    XE_VM_BIND_OP_MAP, XE_VM_BIND_FLAG_ASYNC | flags,
+			    DRM_XE_VM_BIND_OP_MAP, DRM_XE_VM_BIND_FLAG_ASYNC | flags,
 			    sync, num_syncs, 0, 0);
 }
 
@@ -175,7 +175,7 @@ void xe_vm_bind_userptr_async(int fd, uint32_t vm, uint32_t exec_queue,
 			      struct drm_xe_sync *sync, uint32_t num_syncs)
 {
 	__xe_vm_bind_assert(fd, vm, exec_queue, 0, userptr, addr, size,
-			    XE_VM_BIND_OP_MAP_USERPTR, XE_VM_BIND_FLAG_ASYNC,
+			    DRM_XE_VM_BIND_OP_MAP_USERPTR, DRM_XE_VM_BIND_FLAG_ASYNC,
 			    sync, num_syncs, 0, 0);
 }
 
@@ -185,7 +185,7 @@ void xe_vm_bind_userptr_async_flags(int fd, uint32_t vm, uint32_t exec_queue,
 				    uint32_t num_syncs, uint32_t flags)
 {
 	__xe_vm_bind_assert(fd, vm, exec_queue, 0, userptr, addr, size,
-			    XE_VM_BIND_OP_MAP_USERPTR, XE_VM_BIND_FLAG_ASYNC |
+			    DRM_XE_VM_BIND_OP_MAP_USERPTR, DRM_XE_VM_BIND_FLAG_ASYNC |
 			    flags, sync, num_syncs, 0, 0);
 }
 
@@ -194,7 +194,7 @@ void xe_vm_unbind_async(int fd, uint32_t vm, uint32_t exec_queue,
 			struct drm_xe_sync *sync, uint32_t num_syncs)
 {
 	__xe_vm_bind_assert(fd, vm, exec_queue, 0, offset, addr, size,
-			    XE_VM_BIND_OP_UNMAP, XE_VM_BIND_FLAG_ASYNC, sync,
+			    DRM_XE_VM_BIND_OP_UNMAP, DRM_XE_VM_BIND_FLAG_ASYNC, sync,
 			    num_syncs, 0, 0);
 }
 
@@ -208,13 +208,13 @@ static void __xe_vm_bind_sync(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
 void xe_vm_bind_sync(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
 		     uint64_t addr, uint64_t size)
 {
-	__xe_vm_bind_sync(fd, vm, bo, offset, addr, size, XE_VM_BIND_OP_MAP);
+	__xe_vm_bind_sync(fd, vm, bo, offset, addr, size, DRM_XE_VM_BIND_OP_MAP);
 }
 
 void xe_vm_unbind_sync(int fd, uint32_t vm, uint64_t offset,
 		       uint64_t addr, uint64_t size)
 {
-	__xe_vm_bind_sync(fd, vm, 0, offset, addr, size, XE_VM_BIND_OP_UNMAP);
+	__xe_vm_bind_sync(fd, vm, 0, offset, addr, size, DRM_XE_VM_BIND_OP_UNMAP);
 }
 
 void xe_vm_destroy(int fd, uint32_t vm)
diff --git a/lib/xe/xe_query.c b/lib/xe/xe_query.c
index 06d216cf9..8df3d317a 100644
--- a/lib/xe/xe_query.c
+++ b/lib/xe/xe_query.c
@@ -249,8 +249,8 @@ struct xe_device *xe_device_get(int fd)
 
 	xe_dev->fd = fd;
 	xe_dev->config = xe_query_config_new(fd);
-	xe_dev->va_bits = xe_dev->config->info[XE_QUERY_CONFIG_VA_BITS];
-	xe_dev->dev_id = xe_dev->config->info[XE_QUERY_CONFIG_REV_AND_DEVICE_ID] & 0xffff;
+	xe_dev->va_bits = xe_dev->config->info[DRM_XE_QUERY_CONFIG_VA_BITS];
+	xe_dev->dev_id = xe_dev->config->info[DRM_XE_QUERY_CONFIG_REV_AND_DEVICE_ID] & 0xffff;
 	xe_dev->gt_list = xe_query_gt_list_new(fd);
 	xe_dev->memory_regions = __memory_regions(xe_dev->gt_list);
 	xe_dev->hw_engines = xe_query_engines_new(fd, &xe_dev->number_hw_engines);
@@ -414,7 +414,7 @@ static uint64_t __xe_visible_vram_size(int fd, int gt)
  * @gt: gt id
  *
  * Returns vram memory bitmask for xe device @fd and @gt id, with
- * XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM also set, to ensure that CPU access is
+ * DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM also set, to ensure that CPU access is
  * possible.
  */
 uint64_t visible_vram_memory(int fd, int gt)
@@ -424,7 +424,7 @@ uint64_t visible_vram_memory(int fd, int gt)
 	 * has landed.
 	 */
 	if (__xe_visible_vram_size(fd, gt))
-		return vram_memory(fd, gt) | XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM;
+		return vram_memory(fd, gt) | DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM;
 	else
 		return vram_memory(fd, gt); /* older kernel */
 }
@@ -449,7 +449,7 @@ uint64_t vram_if_possible(int fd, int gt)
  *
  * Returns vram memory bitmask for xe device @fd and @gt id or system memory if
  * there's no vram memory available for @gt. Also attaches the
- * XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM to ensure that CPU access is possible
+ * DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM to ensure that CPU access is possible
  * when using vram.
  */
 uint64_t visible_vram_if_possible(int fd, int gt)
@@ -463,7 +463,7 @@ uint64_t visible_vram_if_possible(int fd, int gt)
 	 * has landed.
 	 */
 	if (__xe_visible_vram_size(fd, gt))
-		return vram ? vram | XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM : system_memory;
+		return vram ? vram | DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM : system_memory;
 	else
 		return vram ? vram : system_memory; /* older kernel */
 }
diff --git a/lib/xe/xe_query.h b/lib/xe/xe_query.h
index fc81cc263..3d7e22a9b 100644
--- a/lib/xe/xe_query.h
+++ b/lib/xe/xe_query.h
@@ -71,8 +71,8 @@ struct xe_device {
 	for (uint64_t __i = 0; __i < igt_fls(__memreg); __i++) \
 		for_if(__r = (__memreg & (1ull << __i)))
 
-#define XE_IS_CLASS_SYSMEM(__region) ((__region)->mem_class == XE_MEM_REGION_CLASS_SYSMEM)
-#define XE_IS_CLASS_VRAM(__region) ((__region)->mem_class == XE_MEM_REGION_CLASS_VRAM)
+#define XE_IS_CLASS_SYSMEM(__region) ((__region)->mem_class == DRM_XE_MEM_REGION_CLASS_SYSMEM)
+#define XE_IS_CLASS_VRAM(__region) ((__region)->mem_class == DRM_XE_MEM_REGION_CLASS_VRAM)
 
 unsigned int xe_number_gt(int fd);
 uint64_t all_memory_regions(int fd);
diff --git a/lib/xe/xe_util.c b/lib/xe/xe_util.c
index 5fa4d4610..780125f92 100644
--- a/lib/xe/xe_util.c
+++ b/lib/xe/xe_util.c
@@ -134,12 +134,12 @@ static struct drm_xe_vm_bind_op *xe_alloc_bind_ops(struct igt_list_head *obj_lis
 		ops = &bind_ops[i];
 
 		if (obj->bind_op == XE_OBJECT_BIND) {
-			op = XE_VM_BIND_OP_MAP;
-			flags = XE_VM_BIND_FLAG_ASYNC;
+			op = DRM_XE_VM_BIND_OP_MAP;
+			flags = DRM_XE_VM_BIND_FLAG_ASYNC;
 			ops->obj = obj->handle;
 		} else {
-			op = XE_VM_BIND_OP_UNMAP;
-			flags = XE_VM_BIND_FLAG_ASYNC;
+			op = DRM_XE_VM_BIND_OP_UNMAP;
+			flags = DRM_XE_VM_BIND_FLAG_ASYNC;
 		}
 
 		ops->op = op;
@@ -211,7 +211,7 @@ void xe_bind_unbind_async(int xe, uint32_t vm, uint32_t bind_engine,
 		  tabsyncs[0].handle, tabsyncs[1].handle);
 
 	if (num_binds == 1) {
-		if ((bind_ops[0].op & 0xffff) == XE_VM_BIND_OP_MAP)
+		if ((bind_ops[0].op & 0xffff) == DRM_XE_VM_BIND_OP_MAP)
 			xe_vm_bind_async(xe, vm, bind_engine, bind_ops[0].obj, 0,
 					 bind_ops[0].addr, bind_ops[0].range,
 					 syncs, num_syncs);
diff --git a/lib/xe/xe_util.h b/lib/xe/xe_util.h
index e97d236b8..21b312071 100644
--- a/lib/xe/xe_util.h
+++ b/lib/xe/xe_util.h
@@ -13,9 +13,9 @@
 #include <xe_drm.h>
 
 #define XE_IS_SYSMEM_MEMORY_REGION(fd, region) \
-	(xe_region_class(fd, region) == XE_MEM_REGION_CLASS_SYSMEM)
+	(xe_region_class(fd, region) == DRM_XE_MEM_REGION_CLASS_SYSMEM)
 #define XE_IS_VRAM_MEMORY_REGION(fd, region) \
-	(xe_region_class(fd, region) == XE_MEM_REGION_CLASS_VRAM)
+	(xe_region_class(fd, region) == DRM_XE_MEM_REGION_CLASS_VRAM)
 
 struct igt_collection *
 __xe_get_memory_region_set(int xe, uint32_t *mem_regions_type, int num_regions);
diff --git a/tests/intel/xe_access_counter.c b/tests/intel/xe_access_counter.c
index b738ebc86..8966bfc9c 100644
--- a/tests/intel/xe_access_counter.c
+++ b/tests/intel/xe_access_counter.c
@@ -47,8 +47,8 @@ igt_main
 
 		struct drm_xe_ext_set_property ext = {
 			.base.next_extension = 0,
-			.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
-			.property = XE_EXEC_QUEUE_SET_PROPERTY_ACC_GRANULARITY,
+			.base.name = DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
+			.property = DRM_XE_EXEC_QUEUE_SET_PROPERTY_ACC_GRANULARITY,
 			.value = SIZE_64M + 1,
 		};
 
diff --git a/tests/intel/xe_ccs.c b/tests/intel/xe_ccs.c
index 876c239e4..bb844b641 100644
--- a/tests/intel/xe_ccs.c
+++ b/tests/intel/xe_ccs.c
@@ -634,8 +634,8 @@ igt_main_args("bf:pst:W:H:", NULL, help_str, opt_handler, NULL)
 		xe_device_get(xe);
 
 		set = xe_get_memory_region_set(xe,
-					       XE_MEM_REGION_CLASS_SYSMEM,
-					       XE_MEM_REGION_CLASS_VRAM);
+					       DRM_XE_MEM_REGION_CLASS_SYSMEM,
+					       DRM_XE_MEM_REGION_CLASS_VRAM);
 	}
 
 	igt_describe("Check block-copy uncompressed blit");
diff --git a/tests/intel/xe_copy_basic.c b/tests/intel/xe_copy_basic.c
index fe78ac50f..1dafbb276 100644
--- a/tests/intel/xe_copy_basic.c
+++ b/tests/intel/xe_copy_basic.c
@@ -164,8 +164,8 @@ igt_main
 		fd = drm_open_driver(DRIVER_XE);
 		xe_device_get(fd);
 		set = xe_get_memory_region_set(fd,
-					       XE_MEM_REGION_CLASS_SYSMEM,
-					       XE_MEM_REGION_CLASS_VRAM);
+					       DRM_XE_MEM_REGION_CLASS_SYSMEM,
+					       DRM_XE_MEM_REGION_CLASS_VRAM);
 	}
 
 	for (int i = 0; i < ARRAY_SIZE(size); i++) {
diff --git a/tests/intel/xe_debugfs.c b/tests/intel/xe_debugfs.c
index 4104bf5ae..60ddceda7 100644
--- a/tests/intel/xe_debugfs.c
+++ b/tests/intel/xe_debugfs.c
@@ -91,20 +91,20 @@ test_base(int fd, struct drm_xe_query_config *config)
 
 	igt_assert(config);
 	sprintf(reference, "devid 0x%llx",
-			config->info[XE_QUERY_CONFIG_REV_AND_DEVICE_ID] & 0xffff);
+			config->info[DRM_XE_QUERY_CONFIG_REV_AND_DEVICE_ID] & 0xffff);
 	igt_assert(igt_debugfs_search(fd, "info", reference));
 
 	sprintf(reference, "revid %lld",
-			config->info[XE_QUERY_CONFIG_REV_AND_DEVICE_ID] >> 16);
+			config->info[DRM_XE_QUERY_CONFIG_REV_AND_DEVICE_ID] >> 16);
 	igt_assert(igt_debugfs_search(fd, "info", reference));
 
-	sprintf(reference, "is_dgfx %s", config->info[XE_QUERY_CONFIG_FLAGS] &
-		XE_QUERY_CONFIG_FLAGS_HAS_VRAM ? "yes" : "no");
+	sprintf(reference, "is_dgfx %s", config->info[DRM_XE_QUERY_CONFIG_FLAGS] &
+		DRM_XE_QUERY_CONFIG_FLAGS_HAS_VRAM ? "yes" : "no");
 
 	igt_assert(igt_debugfs_search(fd, "info", reference));
 
 	if (!AT_LEAST_GEN(devid, 20)) {
-		switch (config->info[XE_QUERY_CONFIG_VA_BITS]) {
+		switch (config->info[DRM_XE_QUERY_CONFIG_VA_BITS]) {
 		case 48:
 			val = 3;
 			break;
@@ -125,7 +125,7 @@ test_base(int fd, struct drm_xe_query_config *config)
 	igt_assert(igt_debugfs_exists(fd, "gtt_mm", O_RDONLY));
 	igt_debugfs_dump(fd, "gtt_mm");
 
-	if (config->info[XE_QUERY_CONFIG_FLAGS] & XE_QUERY_CONFIG_FLAGS_HAS_VRAM) {
+	if (config->info[DRM_XE_QUERY_CONFIG_FLAGS] & DRM_XE_QUERY_CONFIG_FLAGS_HAS_VRAM) {
 		igt_assert(igt_debugfs_exists(fd, "vram0_mm", O_RDONLY));
 		igt_debugfs_dump(fd, "vram0_mm");
 	}
diff --git a/tests/intel/xe_exec_basic.c b/tests/intel/xe_exec_basic.c
index 8dbce524d..232ddde8e 100644
--- a/tests/intel/xe_exec_basic.c
+++ b/tests/intel/xe_exec_basic.c
@@ -138,7 +138,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 
 		bo_flags = visible_vram_if_possible(fd, eci->gt_id);
 		if (flags & DEFER_ALLOC)
-			bo_flags |= XE_GEM_CREATE_FLAG_DEFER_BACKING;
+			bo_flags |= DRM_XE_GEM_CREATE_FLAG_DEFER_BACKING;
 
 		bo = xe_bo_create_flags(fd, n_vm == 1 ? vm[0] : 0,
 					bo_size, bo_flags);
@@ -172,9 +172,9 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 		if (flags & SPARSE)
 			__xe_vm_bind_assert(fd, vm[i], bind_exec_queues[i],
 					    0, 0, sparse_addr[i], bo_size,
-					    XE_VM_BIND_OP_MAP,
-					    XE_VM_BIND_FLAG_ASYNC |
-					    XE_VM_BIND_FLAG_NULL, sync,
+					    DRM_XE_VM_BIND_OP_MAP,
+					    DRM_XE_VM_BIND_FLAG_ASYNC |
+					    DRM_XE_VM_BIND_FLAG_NULL, sync,
 					    1, 0, 0);
 	}
 
diff --git a/tests/intel/xe_exec_fault_mode.c b/tests/intel/xe_exec_fault_mode.c
index 64b5c59a2..477d0824d 100644
--- a/tests/intel/xe_exec_fault_mode.c
+++ b/tests/intel/xe_exec_fault_mode.c
@@ -175,12 +175,12 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 		if (bo)
 			xe_vm_bind_async_flags(fd, vm, bind_exec_queues[0], bo, 0,
 					       addr, bo_size, sync, 1,
-					       XE_VM_BIND_FLAG_IMMEDIATE);
+					       DRM_XE_VM_BIND_FLAG_IMMEDIATE);
 		else
 			xe_vm_bind_userptr_async_flags(fd, vm, bind_exec_queues[0],
 						       to_user_pointer(data),
 						       addr, bo_size, sync, 1,
-						       XE_VM_BIND_FLAG_IMMEDIATE);
+						       DRM_XE_VM_BIND_FLAG_IMMEDIATE);
 	} else {
 		if (bo)
 			xe_vm_bind_async(fd, vm, bind_exec_queues[0], bo, 0, addr,
diff --git a/tests/intel/xe_exec_queue_property.c b/tests/intel/xe_exec_queue_property.c
index 4e32aefa5..ae6b445cd 100644
--- a/tests/intel/xe_exec_queue_property.c
+++ b/tests/intel/xe_exec_queue_property.c
@@ -43,11 +43,11 @@
 static int get_property_name(const char *property)
 {
 	if (strstr(property, "preempt"))
-		return XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT;
+		return DRM_XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT;
 	else if (strstr(property, "job_timeout"))
-		return XE_EXEC_QUEUE_SET_PROPERTY_JOB_TIMEOUT;
+		return DRM_XE_EXEC_QUEUE_SET_PROPERTY_JOB_TIMEOUT;
 	else if (strstr(property, "timeslice"))
-		return XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE;
+		return DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE;
 	else
 		return -1;
 }
@@ -60,7 +60,7 @@ static void test_set_property(int xe, int property_name,
 	};
 	struct drm_xe_ext_set_property ext = {
 		.base.next_extension = 0,
-		.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
+		.base.name = DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
 		.property = property_name,
 		.value = property_value,
 	};
@@ -130,19 +130,19 @@ igt_main
 
 	igt_subtest("priority-set-property") {
 		/* Tests priority property by setting positive values. */
-		test_set_property(xe, XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY,
+		test_set_property(xe, DRM_XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY,
 				  DRM_SCHED_PRIORITY_NORMAL, 0);
 
 		/* Tests priority property by setting invalid value. */
-		test_set_property(xe, XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY,
+		test_set_property(xe, DRM_XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY,
 				  DRM_SCHED_PRIORITY_HIGH + 1, -EINVAL);
 		igt_fork(child, 1) {
 			igt_drop_root();
 
 			/* Tests priority property by dropping root permissions. */
-			test_set_property(xe, XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY,
+			test_set_property(xe, DRM_XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY,
 					  DRM_SCHED_PRIORITY_HIGH, -EPERM);
-			test_set_property(xe, XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY,
+			test_set_property(xe, DRM_XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY,
 					  DRM_SCHED_PRIORITY_NORMAL, 0);
 		}
 		igt_waitchildren();
@@ -150,7 +150,7 @@ igt_main
 
 	igt_subtest("persistence-set-property") {
 		/* Tests persistence property by setting positive values. */
-		test_set_property(xe, XE_EXEC_QUEUE_SET_PROPERTY_PERSISTENCE, 1, 0);
+		test_set_property(xe, DRM_XE_EXEC_QUEUE_SET_PROPERTY_PERSISTENCE, 1, 0);
 
 	}
 
diff --git a/tests/intel/xe_exec_reset.c b/tests/intel/xe_exec_reset.c
index 44248776b..39647b736 100644
--- a/tests/intel/xe_exec_reset.c
+++ b/tests/intel/xe_exec_reset.c
@@ -187,14 +187,14 @@ test_balancer(int fd, int gt, int class, int n_exec_queues, int n_execs,
 	for (i = 0; i < n_exec_queues; i++) {
 		struct drm_xe_ext_set_property job_timeout = {
 			.base.next_extension = 0,
-			.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
-			.property = XE_EXEC_QUEUE_SET_PROPERTY_JOB_TIMEOUT,
+			.base.name = DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
+			.property = DRM_XE_EXEC_QUEUE_SET_PROPERTY_JOB_TIMEOUT,
 			.value = 50,
 		};
 		struct drm_xe_ext_set_property preempt_timeout = {
 			.base.next_extension = 0,
-			.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
-			.property = XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT,
+			.base.name = DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
+			.property = DRM_XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT,
 			.value = 1000,
 		};
 		struct drm_xe_exec_queue_create create = {
@@ -374,14 +374,14 @@ test_legacy_mode(int fd, struct drm_xe_engine_class_instance *eci,
 	for (i = 0; i < n_exec_queues; i++) {
 		struct drm_xe_ext_set_property job_timeout = {
 			.base.next_extension = 0,
-			.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
-			.property = XE_EXEC_QUEUE_SET_PROPERTY_JOB_TIMEOUT,
+			.base.name = DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
+			.property = DRM_XE_EXEC_QUEUE_SET_PROPERTY_JOB_TIMEOUT,
 			.value = 50,
 		};
 		struct drm_xe_ext_set_property preempt_timeout = {
 			.base.next_extension = 0,
-			.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
-			.property = XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT,
+			.base.name = DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
+			.property = DRM_XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT,
 			.value = 1000,
 		};
 		uint64_t ext = 0;
@@ -542,8 +542,8 @@ test_compute_mode(int fd, struct drm_xe_engine_class_instance *eci,
 	for (i = 0; i < n_exec_queues; i++) {
 		struct drm_xe_ext_set_property preempt_timeout = {
 			.base.next_extension = 0,
-			.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
-			.property = XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT,
+			.base.name = DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
+			.property = DRM_XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT,
 			.value = 1000,
 		};
 		uint64_t ext = 0;
diff --git a/tests/intel/xe_exec_threads.c b/tests/intel/xe_exec_threads.c
index a0c96d08d..b814dcdf5 100644
--- a/tests/intel/xe_exec_threads.c
+++ b/tests/intel/xe_exec_threads.c
@@ -520,8 +520,8 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
 	for (i = 0; i < n_exec_queues; i++) {
 		struct drm_xe_ext_set_property preempt_timeout = {
 			.base.next_extension = 0,
-			.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
-			.property = XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT,
+			.base.name = DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
+			.property = DRM_XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT,
 			.value = 1000,
 		};
 		uint64_t ext = to_user_pointer(&preempt_timeout);
diff --git a/tests/intel/xe_exercise_blt.c b/tests/intel/xe_exercise_blt.c
index 2f349b16d..df774130f 100644
--- a/tests/intel/xe_exercise_blt.c
+++ b/tests/intel/xe_exercise_blt.c
@@ -358,8 +358,8 @@ igt_main_args("b:pst:W:H:", NULL, help_str, opt_handler, NULL)
 		xe_device_get(xe);
 
 		set = xe_get_memory_region_set(xe,
-					       XE_MEM_REGION_CLASS_SYSMEM,
-					       XE_MEM_REGION_CLASS_VRAM);
+					       DRM_XE_MEM_REGION_CLASS_SYSMEM,
+					       DRM_XE_MEM_REGION_CLASS_VRAM);
 	}
 
 	igt_describe("Check fast-copy blit");
diff --git a/tests/intel/xe_perf_pmu.c b/tests/intel/xe_perf_pmu.c
index 0b25a859f..a0dd30e50 100644
--- a/tests/intel/xe_perf_pmu.c
+++ b/tests/intel/xe_perf_pmu.c
@@ -51,15 +51,15 @@ static uint64_t engine_group_get_config(int gt, int class)
 
 	switch (class) {
 	case DRM_XE_ENGINE_CLASS_COPY:
-		config = XE_PMU_COPY_GROUP_BUSY(gt);
+		config = DRM_XE_PMU_COPY_GROUP_BUSY(gt);
 		break;
 	case DRM_XE_ENGINE_CLASS_RENDER:
 	case DRM_XE_ENGINE_CLASS_COMPUTE:
-		config = XE_PMU_RENDER_GROUP_BUSY(gt);
+		config = DRM_XE_PMU_RENDER_GROUP_BUSY(gt);
 		break;
 	case DRM_XE_ENGINE_CLASS_VIDEO_DECODE:
 	case DRM_XE_ENGINE_CLASS_VIDEO_ENHANCE:
-		config = XE_PMU_MEDIA_GROUP_BUSY(gt);
+		config = DRM_XE_PMU_MEDIA_GROUP_BUSY(gt);
 		break;
 	}
 
@@ -112,7 +112,7 @@ static void test_any_engine_busyness(int fd, struct drm_xe_engine_class_instance
 	sync[0].handle = syncobj_create(fd, 0);
 	xe_vm_bind_async(fd, vm, 0, bo, 0, addr, bo_size, sync, 1);
 
-	pmu_fd = open_pmu(fd, XE_PMU_ANY_ENGINE_GROUP_BUSY(eci->gt_id));
+	pmu_fd = open_pmu(fd, DRM_XE_PMU_ANY_ENGINE_GROUP_BUSY(eci->gt_id));
 	idle = pmu_read(pmu_fd);
 	igt_assert(!idle);
 
diff --git a/tests/intel/xe_pm.c b/tests/intel/xe_pm.c
index b2976ec84..d07ed4535 100644
--- a/tests/intel/xe_pm.c
+++ b/tests/intel/xe_pm.c
@@ -400,7 +400,7 @@ static void test_vram_d3cold_threshold(device_t device, int sysfs_fd)
 	igt_assert_eq(igt_ioctl(device.fd_xe, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
 
 	for (i = 0; i < mem_usage->num_regions; i++) {
-		if (mem_usage->regions[i].mem_class == XE_MEM_REGION_CLASS_VRAM) {
+		if (mem_usage->regions[i].mem_class == DRM_XE_MEM_REGION_CLASS_VRAM) {
 			vram_used_mb +=  (mem_usage->regions[i].used / (1024 * 1024));
 			vram_total_mb += (mem_usage->regions[i].total_size / (1024 * 1024));
 		}
diff --git a/tests/intel/xe_query.c b/tests/intel/xe_query.c
index cf966d40d..969ad1c7f 100644
--- a/tests/intel/xe_query.c
+++ b/tests/intel/xe_query.c
@@ -163,9 +163,9 @@ void process_hwconfig(void *data, uint32_t len)
 const char *get_topo_name(int value)
 {
 	switch(value) {
-	case XE_TOPO_DSS_GEOMETRY: return "DSS_GEOMETRY";
-	case XE_TOPO_DSS_COMPUTE: return "DSS_COMPUTE";
-	case XE_TOPO_EU_PER_DSS: return "EU_PER_DSS";
+	case DRM_XE_TOPO_DSS_GEOMETRY: return "DSS_GEOMETRY";
+	case DRM_XE_TOPO_DSS_COMPUTE: return "DSS_COMPUTE";
+	case DRM_XE_TOPO_EU_PER_DSS: return "EU_PER_DSS";
 	}
 	return "??";
 }
@@ -221,9 +221,9 @@ test_query_mem_usage(int fd)
 	for (i = 0; i < mem_usage->num_regions; i++) {
 		igt_info("mem region %d: %s\t%#llx / %#llx\n", i,
 			mem_usage->regions[i].mem_class ==
-			XE_MEM_REGION_CLASS_SYSMEM ? "SYSMEM"
+			DRM_XE_MEM_REGION_CLASS_SYSMEM ? "SYSMEM"
 			:mem_usage->regions[i].mem_class ==
-			XE_MEM_REGION_CLASS_VRAM ? "VRAM" : "?",
+			DRM_XE_MEM_REGION_CLASS_VRAM ? "VRAM" : "?",
 			mem_usage->regions[i].used,
 			mem_usage->regions[i].total_size
 		);
@@ -359,23 +359,23 @@ test_query_config(int fd)
 
 	igt_assert(config->num_params > 0);
 
-	igt_info("XE_QUERY_CONFIG_REV_AND_DEVICE_ID\t%#llx\n",
-		config->info[XE_QUERY_CONFIG_REV_AND_DEVICE_ID]);
+	igt_info("DRM_XE_QUERY_CONFIG_REV_AND_DEVICE_ID\t%#llx\n",
+		config->info[DRM_XE_QUERY_CONFIG_REV_AND_DEVICE_ID]);
 	igt_info("  REV_ID\t\t\t\t%#llx\n",
-		config->info[XE_QUERY_CONFIG_REV_AND_DEVICE_ID] >> 16);
+		config->info[DRM_XE_QUERY_CONFIG_REV_AND_DEVICE_ID] >> 16);
 	igt_info("  DEVICE_ID\t\t\t\t%#llx\n",
-		config->info[XE_QUERY_CONFIG_REV_AND_DEVICE_ID] & 0xffff);
-	igt_info("XE_QUERY_CONFIG_FLAGS\t\t\t%#llx\n",
-		config->info[XE_QUERY_CONFIG_FLAGS]);
-	igt_info("  XE_QUERY_CONFIG_FLAGS_HAS_VRAM\t%s\n",
-		config->info[XE_QUERY_CONFIG_FLAGS] &
-		XE_QUERY_CONFIG_FLAGS_HAS_VRAM ? "ON":"OFF");
-	igt_info("XE_QUERY_CONFIG_MIN_ALIGNMENT\t\t%#llx\n",
-		config->info[XE_QUERY_CONFIG_MIN_ALIGNMENT]);
-	igt_info("XE_QUERY_CONFIG_VA_BITS\t\t\t%llu\n",
-		config->info[XE_QUERY_CONFIG_VA_BITS]);
-	igt_info("XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY\t%llu\n",
-		config->info[XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY]);
+		config->info[DRM_XE_QUERY_CONFIG_REV_AND_DEVICE_ID] & 0xffff);
+	igt_info("DRM_XE_QUERY_CONFIG_FLAGS\t\t\t%#llx\n",
+		config->info[DRM_XE_QUERY_CONFIG_FLAGS]);
+	igt_info("  DRM_XE_QUERY_CONFIG_FLAGS_HAS_VRAM\t%s\n",
+		config->info[DRM_XE_QUERY_CONFIG_FLAGS] &
+		DRM_XE_QUERY_CONFIG_FLAGS_HAS_VRAM ? "ON":"OFF");
+	igt_info("DRM_XE_QUERY_CONFIG_MIN_ALIGNMENT\t\t%#llx\n",
+		config->info[DRM_XE_QUERY_CONFIG_MIN_ALIGNMENT]);
+	igt_info("DRM_XE_QUERY_CONFIG_VA_BITS\t\t\t%llu\n",
+		config->info[DRM_XE_QUERY_CONFIG_VA_BITS]);
+	igt_info("DRM_XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY\t%llu\n",
+		config->info[DRM_XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY]);
 	dump_hex_debug(config, query.size);
 
 	free(config);
diff --git a/tests/intel/xe_vm.c b/tests/intel/xe_vm.c
index f1ccd6c21..6700a6a55 100644
--- a/tests/intel/xe_vm.c
+++ b/tests/intel/xe_vm.c
@@ -356,7 +356,7 @@ static void userptr_invalid(int fd)
 	vm = xe_vm_create(fd, 0, 0);
 	munmap(data, size);
 	ret = __xe_vm_bind(fd, vm, 0, 0, to_user_pointer(data), 0x40000,
-			   size, XE_VM_BIND_OP_MAP_USERPTR, 0, NULL, 0, 0, 0);
+			   size, DRM_XE_VM_BIND_OP_MAP_USERPTR, 0, NULL, 0, 0, 0);
 	igt_assert(ret == -EFAULT);
 
 	xe_vm_destroy(fd, vm);
@@ -795,8 +795,8 @@ test_bind_array(int fd, struct drm_xe_engine_class_instance *eci, int n_execs,
 		bind_ops[i].range = bo_size;
 		bind_ops[i].addr = addr;
 		bind_ops[i].tile_mask = 0x1 << eci->gt_id;
-		bind_ops[i].op = XE_VM_BIND_OP_MAP;
-		bind_ops[i].flags = XE_VM_BIND_FLAG_ASYNC;
+		bind_ops[i].op = DRM_XE_VM_BIND_OP_MAP;
+		bind_ops[i].flags = DRM_XE_VM_BIND_FLAG_ASYNC;
 		bind_ops[i].region = 0;
 		bind_ops[i].reserved[0] = 0;
 		bind_ops[i].reserved[1] = 0;
@@ -840,8 +840,8 @@ test_bind_array(int fd, struct drm_xe_engine_class_instance *eci, int n_execs,
 
 	for (i = 0; i < n_execs; ++i) {
 		bind_ops[i].obj = 0;
-		bind_ops[i].op = XE_VM_BIND_OP_UNMAP;
-		bind_ops[i].flags = XE_VM_BIND_FLAG_ASYNC;
+		bind_ops[i].op = DRM_XE_VM_BIND_OP_UNMAP;
+		bind_ops[i].flags = DRM_XE_VM_BIND_FLAG_ASYNC;
 	}
 
 	syncobj_reset(fd, &sync[0].handle, 1);
-- 
2.34.1

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [igt-dev] [PATCH v1 2/8] drm-uapi/xe: Add _FLAG to uAPI constants usable for flags
  2023-11-14 13:44 [igt-dev] [PATCH v1 0/8] uAPI Alignment - Renaming Francois Dugast
  2023-11-14 13:44 ` [igt-dev] [PATCH v1 1/8] drm-uapi/xe: Add missing DRM_ prefix in uAPI constants Francois Dugast
@ 2023-11-14 13:44 ` Francois Dugast
  2023-11-14 13:47   ` Rodrigo Vivi
  2023-11-14 13:44 ` [igt-dev] [PATCH v1 3/8] drm-uapi/xe: Change rsvd to pad in struct drm_xe_class_instance Francois Dugast
                   ` (6 subsequent siblings)
  8 siblings, 1 reply; 18+ messages in thread
From: Francois Dugast @ 2023-11-14 13:44 UTC (permalink / raw)
  To: igt-dev

Align with commit ("drm/xe/uapi: Add _FLAG to uAPI constants usable for flags")

Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
 benchmarks/gem_wsim.c              |  12 +--
 include/drm-uapi/xe_drm.h          |  30 +++----
 lib/igt_fb.c                       |   2 +-
 lib/intel_batchbuffer.c            |  12 +--
 lib/intel_compute.c                |   6 +-
 lib/intel_ctx.c                    |   4 +-
 lib/xe/xe_ioctl.c                  |   6 +-
 lib/xe/xe_query.c                  |   4 +-
 lib/xe/xe_spin.c                   |   4 +-
 lib/xe/xe_util.c                   |   4 +-
 tests/intel/xe_ccs.c               |   4 +-
 tests/intel/xe_copy_basic.c        |   2 +-
 tests/intel/xe_create.c            |   6 +-
 tests/intel/xe_dma_buf_sync.c      |   4 +-
 tests/intel/xe_drm_fdinfo.c        |  18 ++---
 tests/intel/xe_evict.c             |  24 +++---
 tests/intel/xe_evict_ccs.c         |   2 +-
 tests/intel/xe_exec_balancer.c     |  34 ++++----
 tests/intel/xe_exec_basic.c        |  16 ++--
 tests/intel/xe_exec_compute_mode.c |   6 +-
 tests/intel/xe_exec_fault_mode.c   |   8 +-
 tests/intel/xe_exec_reset.c        |  42 +++++-----
 tests/intel/xe_exec_store.c        |  26 +++---
 tests/intel/xe_exec_threads.c      |  44 +++++-----
 tests/intel/xe_exercise_blt.c      |   2 +-
 tests/intel/xe_guc_pc.c            |  12 +--
 tests/intel/xe_huc_copy.c          |   4 +-
 tests/intel/xe_intel_bb.c          |   2 +-
 tests/intel/xe_noexec_ping_pong.c  |   2 +-
 tests/intel/xe_perf_pmu.c          |  24 +++---
 tests/intel/xe_pm.c                |  12 +--
 tests/intel/xe_pm_residency.c      |   2 +-
 tests/intel/xe_spin_batch.c        |   2 +-
 tests/intel/xe_vm.c                | 126 ++++++++++++++---------------
 tests/intel/xe_waitfence.c         |  10 +--
 35 files changed, 259 insertions(+), 259 deletions(-)

diff --git a/benchmarks/gem_wsim.c b/benchmarks/gem_wsim.c
index 28b809520..df4850086 100644
--- a/benchmarks/gem_wsim.c
+++ b/benchmarks/gem_wsim.c
@@ -1772,21 +1772,21 @@ xe_alloc_step_batch(struct workload *wrk, struct w_step *w)
 	i = 0;
 	/* out fence */
 	w->xe.syncs[i].handle = syncobj_create(fd, 0);
-	w->xe.syncs[i++].flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL;
+	w->xe.syncs[i++].flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL;
 	/* in fence(s) */
 	for_each_dep(dep, w->data_deps) {
 		int dep_idx = w->idx + dep->target;
 
 		igt_assert(wrk->steps[dep_idx].xe.syncs && wrk->steps[dep_idx].xe.syncs[0].handle);
 		w->xe.syncs[i].handle = wrk->steps[dep_idx].xe.syncs[0].handle;
-		w->xe.syncs[i++].flags = DRM_XE_SYNC_SYNCOBJ;
+		w->xe.syncs[i++].flags = DRM_XE_SYNC_FLAG_SYNCOBJ;
 	}
 	for_each_dep(dep, w->fence_deps) {
 		int dep_idx = w->idx + dep->target;
 
 		igt_assert(wrk->steps[dep_idx].xe.syncs && wrk->steps[dep_idx].xe.syncs[0].handle);
 		w->xe.syncs[i].handle = wrk->steps[dep_idx].xe.syncs[0].handle;
-		w->xe.syncs[i++].flags = DRM_XE_SYNC_SYNCOBJ;
+		w->xe.syncs[i++].flags = DRM_XE_SYNC_FLAG_SYNCOBJ;
 	}
 	w->xe.exec.syncs = to_user_pointer(w->xe.syncs);
 }
@@ -2024,8 +2024,8 @@ static void xe_vm_create_(struct xe_vm *vm)
 	uint32_t flags = 0;
 
 	if (vm->compute_mode)
-		flags |= DRM_XE_VM_CREATE_ASYNC_DEFAULT |
-			 DRM_XE_VM_CREATE_COMPUTE_MODE;
+		flags |= DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT |
+			 DRM_XE_VM_CREATE_FLAG_COMPUTE_MODE;
 
 	vm->id = xe_vm_create(fd, flags, 0);
 }
@@ -2363,7 +2363,7 @@ static int xe_prepare_contexts(unsigned int id, struct workload *wrk)
 		if (w->type == SW_FENCE) {
 			w->xe.syncs = calloc(1, sizeof(struct drm_xe_sync));
 			w->xe.syncs[0].handle = syncobj_create(fd, 0);
-			w->xe.syncs[0].flags = DRM_XE_SYNC_SYNCOBJ;
+			w->xe.syncs[0].flags = DRM_XE_SYNC_FLAG_SYNCOBJ;
 		}
 
 	return 0;
diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index 9ab6c3269..68d005202 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -585,10 +585,10 @@ struct drm_xe_vm_create {
 	/** @extensions: Pointer to the first extension struct, if any */
 	__u64 extensions;
 
-#define DRM_XE_VM_CREATE_SCRATCH_PAGE		(0x1 << 0)
-#define DRM_XE_VM_CREATE_COMPUTE_MODE		(0x1 << 1)
-#define DRM_XE_VM_CREATE_ASYNC_DEFAULT		(0x1 << 2)
-#define DRM_XE_VM_CREATE_FAULT_MODE		(0x1 << 3)
+#define DRM_XE_VM_CREATE_FLAG_SCRATCH_PAGE	(0x1 << 0)
+#define DRM_XE_VM_CREATE_FLAG_COMPUTE_MODE	(0x1 << 1)
+#define DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT	(0x1 << 2)
+#define DRM_XE_VM_CREATE_FLAG_FAULT_MODE	(0x1 << 3)
 	/** @flags: Flags */
 	__u32 flags;
 
@@ -831,11 +831,11 @@ struct drm_xe_sync {
 	/** @extensions: Pointer to the first extension struct, if any */
 	__u64 extensions;
 
-#define DRM_XE_SYNC_SYNCOBJ		0x0
-#define DRM_XE_SYNC_TIMELINE_SYNCOBJ	0x1
-#define DRM_XE_SYNC_DMA_BUF		0x2
-#define DRM_XE_SYNC_USER_FENCE		0x3
-#define DRM_XE_SYNC_SIGNAL		0x10
+#define DRM_XE_SYNC_FLAG_SYNCOBJ		0x0
+#define DRM_XE_SYNC_FLAG_TIMELINE_SYNCOBJ	0x1
+#define DRM_XE_SYNC_FLAG_DMA_BUF		0x2
+#define DRM_XE_SYNC_FLAG_USER_FENCE		0x3
+#define DRM_XE_SYNC_FLAG_SIGNAL		0x10
 	__u32 flags;
 
 	/** @pad: MBZ */
@@ -921,8 +921,8 @@ struct drm_xe_wait_user_fence {
 	/** @op: wait operation (type of comparison) */
 	__u16 op;
 
-#define DRM_XE_UFENCE_WAIT_SOFT_OP	(1 << 0)	/* e.g. Wait on VM bind */
-#define DRM_XE_UFENCE_WAIT_ABSTIME	(1 << 1)
+#define DRM_XE_UFENCE_WAIT_FLAG_SOFT_OP	(1 << 0)	/* e.g. Wait on VM bind */
+#define DRM_XE_UFENCE_WAIT_FLAG_ABSTIME	(1 << 1)
 	/** @flags: wait flags */
 	__u16 flags;
 
@@ -940,10 +940,10 @@ struct drm_xe_wait_user_fence {
 	__u64 mask;
 	/**
 	 * @timeout: how long to wait before bailing, value in nanoseconds.
-	 * Without DRM_XE_UFENCE_WAIT_ABSTIME flag set (relative timeout)
+	 * Without DRM_XE_UFENCE_WAIT_FLAG_ABSTIME flag set (relative timeout)
 	 * it contains timeout expressed in nanoseconds to wait (fence will
 	 * expire at now() + timeout).
-	 * When DRM_XE_UFENCE_WAIT_ABSTIME flat is set (absolute timeout) wait
+	 * When DRM_XE_UFENCE_WAIT_FLAG_ABSTIME flag is set (absolute timeout) wait
 	 * will end at timeout (uses system MONOTONIC_CLOCK).
 	 * Passing negative timeout leads to neverending wait.
 	 *
@@ -956,13 +956,13 @@ struct drm_xe_wait_user_fence {
 
 	/**
 	 * @num_engines: number of engine instances to wait on, must be zero
-	 * when DRM_XE_UFENCE_WAIT_SOFT_OP set
+	 * when DRM_XE_UFENCE_WAIT_FLAG_SOFT_OP set
 	 */
 	__u64 num_engines;
 
 	/**
 	 * @instances: user pointer to array of drm_xe_engine_class_instance to
-	 * wait on, must be NULL when DRM_XE_UFENCE_WAIT_SOFT_OP set
+	 * wait on, must be NULL when DRM_XE_UFENCE_WAIT_FLAG_SOFT_OP set
 	 */
 	__u64 instances;
 
diff --git a/lib/igt_fb.c b/lib/igt_fb.c
index e531a041e..e70d2e3ce 100644
--- a/lib/igt_fb.c
+++ b/lib/igt_fb.c
@@ -2892,7 +2892,7 @@ static void blitcopy(const struct igt_fb *dst_fb,
 							  &bb_size,
 							  mem_region) == 0);
 	} else if (is_xe) {
-		vm = xe_vm_create(dst_fb->fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+		vm = xe_vm_create(dst_fb->fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 		exec_queue = xe_exec_queue_create(dst_fb->fd, vm, &inst, 0);
 		xe_ctx = intel_ctx_xe(dst_fb->fd, vm, exec_queue, 0, 0, 0);
 		mem_region = vram_if_possible(dst_fb->fd, 0);
diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
index eb47ede50..b59c490db 100644
--- a/lib/intel_batchbuffer.c
+++ b/lib/intel_batchbuffer.c
@@ -953,7 +953,7 @@ __intel_bb_create(int fd, uint32_t ctx, uint32_t vm, const intel_ctx_cfg_t *cfg,
 
 		if (!vm) {
 			igt_assert_f(!ctx, "No vm provided for engine");
-			vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+			vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 		}
 
 		ibb->uses_full_ppgtt = true;
@@ -1315,8 +1315,8 @@ static struct drm_xe_vm_bind_op *xe_alloc_bind_ops(struct intel_bb *ibb,
 static void __unbind_xe_objects(struct intel_bb *ibb)
 {
 	struct drm_xe_sync syncs[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 	int ret;
 
@@ -2302,8 +2302,8 @@ __xe_bb_exec(struct intel_bb *ibb, uint64_t flags, bool sync)
 	uint32_t engine = flags & (I915_EXEC_BSD_MASK | I915_EXEC_RING_MASK);
 	uint32_t engine_id;
 	struct drm_xe_sync syncs[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 	struct drm_xe_vm_bind_op *bind_ops;
 	void *map;
@@ -2371,7 +2371,7 @@ __xe_bb_exec(struct intel_bb *ibb, uint64_t flags, bool sync)
 	}
 	ibb->xe_bound = true;
 
-	syncs[0].flags &= ~DRM_XE_SYNC_SIGNAL;
+	syncs[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
 	ibb->engine_syncobj = syncobj_create(ibb->fd, 0);
 	syncs[1].handle = ibb->engine_syncobj;
 
diff --git a/lib/intel_compute.c b/lib/intel_compute.c
index 7f1ea90e7..7cb0f001c 100644
--- a/lib/intel_compute.c
+++ b/lib/intel_compute.c
@@ -80,7 +80,7 @@ static void bo_execenv_create(int fd, struct bo_execenv *execenv)
 		else
 			engine_class = DRM_XE_ENGINE_CLASS_COMPUTE;
 
-		execenv->vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+		execenv->vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 		execenv->exec_queue = xe_exec_queue_create_class(fd, execenv->vm,
 								 engine_class);
 	}
@@ -106,7 +106,7 @@ static void bo_execenv_bind(struct bo_execenv *execenv,
 		uint64_t alignment = xe_get_default_alignment(fd);
 		struct drm_xe_sync sync = { 0 };
 
-		sync.flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL;
+		sync.flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL;
 		sync.handle = syncobj_create(fd, 0);
 
 		for (int i = 0; i < entries; i++) {
@@ -162,7 +162,7 @@ static void bo_execenv_unbind(struct bo_execenv *execenv,
 		uint32_t vm = execenv->vm;
 		struct drm_xe_sync sync = { 0 };
 
-		sync.flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL;
+		sync.flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL;
 		sync.handle = syncobj_create(fd, 0);
 
 		for (int i = 0; i < entries; i++) {
diff --git a/lib/intel_ctx.c b/lib/intel_ctx.c
index f927b7df8..f82564572 100644
--- a/lib/intel_ctx.c
+++ b/lib/intel_ctx.c
@@ -423,8 +423,8 @@ intel_ctx_t *intel_ctx_xe(int fd, uint32_t vm, uint32_t exec_queue,
 int __intel_ctx_xe_exec(const intel_ctx_t *ctx, uint64_t ahnd, uint64_t bb_offset)
 {
 	struct drm_xe_sync syncs[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ, },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 	struct drm_xe_exec exec = {
 		.exec_queue_id = ctx->exec_queue,
diff --git a/lib/xe/xe_ioctl.c b/lib/xe/xe_ioctl.c
index 36f10a49a..db41d5ba5 100644
--- a/lib/xe/xe_ioctl.c
+++ b/lib/xe/xe_ioctl.c
@@ -399,7 +399,7 @@ void xe_exec_sync(int fd, uint32_t exec_queue, uint64_t addr,
 void xe_exec_wait(int fd, uint32_t exec_queue, uint64_t addr)
 {
 	struct drm_xe_sync sync = {
-		.flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL,
+		.flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL,
 		.handle = syncobj_create(fd, 0),
 	};
 
@@ -416,7 +416,7 @@ int64_t xe_wait_ufence(int fd, uint64_t *addr, uint64_t value,
 	struct drm_xe_wait_user_fence wait = {
 		.addr = to_user_pointer(addr),
 		.op = DRM_XE_UFENCE_WAIT_EQ,
-		.flags = !eci ? DRM_XE_UFENCE_WAIT_SOFT_OP : 0,
+		.flags = !eci ? DRM_XE_UFENCE_WAIT_FLAG_SOFT_OP : 0,
 		.value = value,
 		.mask = DRM_XE_UFENCE_WAIT_U64,
 		.timeout = timeout,
@@ -448,7 +448,7 @@ int64_t xe_wait_ufence_abstime(int fd, uint64_t *addr, uint64_t value,
 	struct drm_xe_wait_user_fence wait = {
 		.addr = to_user_pointer(addr),
 		.op = DRM_XE_UFENCE_WAIT_EQ,
-		.flags = !eci ? DRM_XE_UFENCE_WAIT_SOFT_OP | DRM_XE_UFENCE_WAIT_ABSTIME : 0,
+		.flags = !eci ? DRM_XE_UFENCE_WAIT_FLAG_SOFT_OP | DRM_XE_UFENCE_WAIT_FLAG_ABSTIME : 0,
 		.value = value,
 		.mask = DRM_XE_UFENCE_WAIT_U64,
 		.timeout = timeout,
diff --git a/lib/xe/xe_query.c b/lib/xe/xe_query.c
index 8df3d317a..d459893e1 100644
--- a/lib/xe/xe_query.c
+++ b/lib/xe/xe_query.c
@@ -315,8 +315,8 @@ bool xe_supports_faults(int fd)
 	bool supports_faults;
 
 	struct drm_xe_vm_create create = {
-		.flags = DRM_XE_VM_CREATE_ASYNC_DEFAULT |
-			 DRM_XE_VM_CREATE_FAULT_MODE,
+		.flags = DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT |
+			 DRM_XE_VM_CREATE_FLAG_FAULT_MODE,
 	};
 
 	supports_faults = !igt_ioctl(fd, DRM_IOCTL_XE_VM_CREATE, &create);
diff --git a/lib/xe/xe_spin.c b/lib/xe/xe_spin.c
index b05b38829..cfc663acc 100644
--- a/lib/xe/xe_spin.c
+++ b/lib/xe/xe_spin.c
@@ -191,7 +191,7 @@ xe_spin_create(int fd, const struct igt_spin_factory *opt)
 	struct igt_spin *spin;
 	struct xe_spin *xe_spin;
 	struct drm_xe_sync sync = {
-		.flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL,
+		.flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL,
 	};
 	struct drm_xe_exec exec = {
 		.num_batch_buffer = 1,
@@ -288,7 +288,7 @@ void xe_cork_init(int fd, struct drm_xe_engine_class_instance *hwe,
 	uint32_t vm, bo, exec_queue, syncobj;
 	struct xe_spin *spin;
 	struct drm_xe_sync sync = {
-		.flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL,
+		.flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL,
 	};
 	struct drm_xe_exec exec = {
 		.num_batch_buffer = 1,
diff --git a/lib/xe/xe_util.c b/lib/xe/xe_util.c
index 780125f92..2635edf72 100644
--- a/lib/xe/xe_util.c
+++ b/lib/xe/xe_util.c
@@ -179,8 +179,8 @@ void xe_bind_unbind_async(int xe, uint32_t vm, uint32_t bind_engine,
 {
 	struct drm_xe_vm_bind_op *bind_ops;
 	struct drm_xe_sync tabsyncs[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ, .handle = sync_in },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, .handle = sync_out },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ, .handle = sync_in },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, .handle = sync_out },
 	};
 	struct drm_xe_sync *syncs;
 	uint32_t num_binds = 0;
diff --git a/tests/intel/xe_ccs.c b/tests/intel/xe_ccs.c
index bb844b641..465f67e23 100644
--- a/tests/intel/xe_ccs.c
+++ b/tests/intel/xe_ccs.c
@@ -343,7 +343,7 @@ static void block_copy(int xe,
 		uint32_t vm, exec_queue;
 
 		if (config->new_ctx) {
-			vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+			vm = xe_vm_create(xe, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 			exec_queue = xe_exec_queue_create(xe, vm, &inst, 0);
 			surf_ctx = intel_ctx_xe(xe, vm, exec_queue, 0, 0, 0);
 			surf_ahnd = intel_allocator_open(xe, surf_ctx->vm,
@@ -550,7 +550,7 @@ static void block_copy_test(int xe,
 				      copyfns[copy_function].suffix) {
 				uint32_t sync_bind, sync_out;
 
-				vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+				vm = xe_vm_create(xe, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 				exec_queue = xe_exec_queue_create(xe, vm, &inst, 0);
 				sync_bind = syncobj_create(xe, 0);
 				sync_out = syncobj_create(xe, 0);
diff --git a/tests/intel/xe_copy_basic.c b/tests/intel/xe_copy_basic.c
index 1dafbb276..191c29155 100644
--- a/tests/intel/xe_copy_basic.c
+++ b/tests/intel/xe_copy_basic.c
@@ -134,7 +134,7 @@ static void copy_test(int fd, uint32_t size, enum blt_cmd_type cmd, uint32_t reg
 
 	src_handle = xe_bo_create_flags(fd, 0, bo_size, region);
 	dst_handle = xe_bo_create_flags(fd, 0, bo_size, region);
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	exec_queue = xe_exec_queue_create(fd, vm, &inst, 0);
 	ctx = intel_ctx_xe(fd, vm, exec_queue, 0, 0, 0);
 
diff --git a/tests/intel/xe_create.c b/tests/intel/xe_create.c
index d99bd51cf..4242e1a67 100644
--- a/tests/intel/xe_create.c
+++ b/tests/intel/xe_create.c
@@ -54,7 +54,7 @@ static void create_invalid_size(int fd)
 	uint32_t handle;
 	int ret;
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 
 	xe_for_each_mem_region(fd, memreg, region) {
 		memregion = xe_mem_region(fd, region);
@@ -140,7 +140,7 @@ static void create_execqueues(int fd, enum exec_queue_destroy ed)
 
 	fd = drm_reopen_driver(fd);
 	num_engines = xe_number_hw_engines(fd);
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 
 	exec_queues_per_process = max_t(uint32_t, 1, MAXEXECQUEUES / nproc);
 	igt_debug("nproc: %u, exec_queues per process: %u\n", nproc, exec_queues_per_process);
@@ -199,7 +199,7 @@ static void create_massive_size(int fd)
 	uint32_t handle;
 	int ret;
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 
 	xe_for_each_mem_region(fd, memreg, region) {
 		ret = __create_bo(fd, vm, -1ULL << 32, region, &handle);
diff --git a/tests/intel/xe_dma_buf_sync.c b/tests/intel/xe_dma_buf_sync.c
index 5c401b6dd..0d835dddb 100644
--- a/tests/intel/xe_dma_buf_sync.c
+++ b/tests/intel/xe_dma_buf_sync.c
@@ -144,8 +144,8 @@ test_export_dma_buf(struct drm_xe_engine_class_instance *hwe0,
 		uint64_t sdi_addr = addr + sdi_offset;
 		uint64_t spin_offset = (char *)&data[i]->spin - (char *)data[i];
 		struct drm_xe_sync sync[2] = {
-			{ .flags = DRM_XE_SYNC_SYNCOBJ, },
-			{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+			{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ, },
+			{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 		};
 		struct drm_xe_exec exec = {
 			.num_batch_buffer = 1,
diff --git a/tests/intel/xe_drm_fdinfo.c b/tests/intel/xe_drm_fdinfo.c
index 64168ed19..4ef30cf49 100644
--- a/tests/intel/xe_drm_fdinfo.c
+++ b/tests/intel/xe_drm_fdinfo.c
@@ -48,8 +48,8 @@ static void test_active(int fd, struct drm_xe_engine_class_instance *eci)
 	uint32_t vm;
 	uint64_t addr = 0x1a0000;
 	struct drm_xe_sync sync[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 	struct drm_xe_exec exec = {
 		.num_batch_buffer = 1,
@@ -71,7 +71,7 @@ static void test_active(int fd, struct drm_xe_engine_class_instance *eci)
 	struct xe_spin_opts spin_opts = { .preempt = true };
 	int i, b, ret;
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	bo_size = sizeof(*data) * N_EXEC_QUEUES;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
@@ -110,20 +110,20 @@ static void test_active(int fd, struct drm_xe_engine_class_instance *eci)
 				xe_spin_init(&data[i].spin, &spin_opts);
 				exec.exec_queue_id = exec_queues[e];
 				exec.address = spin_opts.addr;
-				sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
-				sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+				sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
+				sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 				sync[1].handle = syncobjs[e];
 				xe_exec(fd, &exec);
 				xe_spin_wait_started(&data[i].spin);
 
 				addr += bo_size;
-				sync[1].flags &= ~DRM_XE_SYNC_SIGNAL;
+				sync[1].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
 				sync[1].handle = syncobjs[e];
 				xe_vm_bind_async(fd, vm, bind_exec_queues[e], bo, 0, addr,
 						 bo_size, sync + 1, 1);
 				addr += bo_size;
 			} else {
-				sync[0].flags |= DRM_XE_SYNC_SIGNAL;
+				sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 				xe_vm_bind_async(fd, vm, bind_exec_queues[e], bo, 0, addr,
 						 bo_size, sync, 1);
 			}
@@ -149,7 +149,7 @@ static void test_active(int fd, struct drm_xe_engine_class_instance *eci)
 
 		syncobj_destroy(fd, sync[0].handle);
 		sync[0].handle = syncobj_create(fd, 0);
-		sync[0].flags |= DRM_XE_SYNC_SIGNAL;
+		sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 		xe_vm_unbind_all_async(fd, vm, 0, bo, sync, 1);
 		igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
 
@@ -221,7 +221,7 @@ static void test_total_resident(int xe)
 	uint64_t addr = 0x1a0000;
 	int ret;
 
-	vm = xe_vm_create(xe, DRM_XE_VM_CREATE_SCRATCH_PAGE, 0);
+	vm = xe_vm_create(xe, DRM_XE_VM_CREATE_FLAG_SCRATCH_PAGE, 0);
 
 	xe_for_each_mem_region(xe, memreg, region) {
 		uint64_t pre_size;
diff --git a/tests/intel/xe_evict.c b/tests/intel/xe_evict.c
index 5d8463270..6d953e58b 100644
--- a/tests/intel/xe_evict.c
+++ b/tests/intel/xe_evict.c
@@ -38,8 +38,8 @@ test_evict(int fd, struct drm_xe_engine_class_instance *eci,
 	uint32_t bind_exec_queues[3] = { 0, 0, 0 };
 	uint64_t addr = 0x100000000, base_addr = 0x100000000;
 	struct drm_xe_sync sync[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 	struct drm_xe_exec exec = {
 		.num_batch_buffer = 1,
@@ -63,12 +63,12 @@ test_evict(int fd, struct drm_xe_engine_class_instance *eci,
 
 	fd = drm_open_driver(DRIVER_XE);
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	if (flags & BIND_EXEC_QUEUE)
 		bind_exec_queues[0] = xe_bind_exec_queue_create(fd, vm, 0, true);
 	if (flags & MULTI_VM) {
-		vm2 = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
-		vm3 = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+		vm2 = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
+		vm3 = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 		if (flags & BIND_EXEC_QUEUE) {
 			bind_exec_queues[1] = xe_bind_exec_queue_create(fd, vm2,
 									0, true);
@@ -121,7 +121,7 @@ test_evict(int fd, struct drm_xe_engine_class_instance *eci,
 				 ALIGN(sizeof(*data) * n_execs, 0x1000));
 
 		if (i < n_execs / 2) {
-			sync[0].flags |= DRM_XE_SYNC_SIGNAL;
+			sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 			sync[0].handle = syncobj_create(fd, 0);
 			if (flags & MULTI_VM) {
 				xe_vm_bind_async(fd, vm3, bind_exec_queues[2], __bo,
@@ -149,7 +149,7 @@ test_evict(int fd, struct drm_xe_engine_class_instance *eci,
 		data[i].batch[b++] = MI_BATCH_BUFFER_END;
 		igt_assert(b <= ARRAY_SIZE(data[i].batch));
 
-		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
+		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
 		if (i >= n_exec_queues)
 			syncobj_reset(fd, &syncobjs[e], 1);
 		sync[1].handle = syncobjs[e];
@@ -216,7 +216,7 @@ test_evict_cm(int fd, struct drm_xe_engine_class_instance *eci,
 	uint64_t addr = 0x100000000, base_addr = 0x100000000;
 #define USER_FENCE_VALUE	0xdeadbeefdeadbeefull
 	struct drm_xe_sync sync[1] = {
-		{ .flags = DRM_XE_SYNC_USER_FENCE | DRM_XE_SYNC_SIGNAL,
+		{ .flags = DRM_XE_SYNC_FLAG_USER_FENCE | DRM_XE_SYNC_FLAG_SIGNAL,
 		  .timeline_value = USER_FENCE_VALUE },
 	};
 	struct drm_xe_exec exec = {
@@ -242,13 +242,13 @@ test_evict_cm(int fd, struct drm_xe_engine_class_instance *eci,
 
 	fd = drm_open_driver(DRIVER_XE);
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
-			  DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT |
+			  DRM_XE_VM_CREATE_FLAG_COMPUTE_MODE, 0);
 	if (flags & BIND_EXEC_QUEUE)
 		bind_exec_queues[0] = xe_bind_exec_queue_create(fd, vm, 0, true);
 	if (flags & MULTI_VM) {
-		vm2 = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
-				   DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
+		vm2 = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT |
+				   DRM_XE_VM_CREATE_FLAG_COMPUTE_MODE, 0);
 		if (flags & BIND_EXEC_QUEUE)
 			bind_exec_queues[1] = xe_bind_exec_queue_create(fd, vm2,
 									0, true);
diff --git a/tests/intel/xe_evict_ccs.c b/tests/intel/xe_evict_ccs.c
index 4f2876ecb..1f5c795ef 100644
--- a/tests/intel/xe_evict_ccs.c
+++ b/tests/intel/xe_evict_ccs.c
@@ -226,7 +226,7 @@ static void evict_single(int fd, int child, const struct config *config)
 	uint32_t kb_left = config->mb_per_proc * SZ_1K;
 	uint32_t min_alloc_kb = config->param->min_size_kb;
 	uint32_t max_alloc_kb = config->param->max_size_kb;
-	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	uint64_t ahnd = intel_allocator_open(fd, vm, INTEL_ALLOCATOR_RELOC);
 	uint8_t uc_mocs = intel_get_uc_mocs_index(fd);
 	struct object *obj, *tmp;
diff --git a/tests/intel/xe_exec_balancer.c b/tests/intel/xe_exec_balancer.c
index 3ca3de881..8a0165b8c 100644
--- a/tests/intel/xe_exec_balancer.c
+++ b/tests/intel/xe_exec_balancer.c
@@ -37,8 +37,8 @@ static void test_all_active(int fd, int gt, int class)
 	uint32_t vm;
 	uint64_t addr = 0x1a0000;
 	struct drm_xe_sync sync[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 	struct drm_xe_exec exec = {
 		.num_batch_buffer = 1,
@@ -66,7 +66,7 @@ static void test_all_active(int fd, int gt, int class)
 	if (num_placements < 2)
 		return;
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	bo_size = sizeof(*data) * num_placements;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd), xe_get_default_alignment(fd));
 
@@ -93,8 +93,8 @@ static void test_all_active(int fd, int gt, int class)
 	for (i = 0; i < num_placements; i++) {
 		spin_opts.addr = addr + (char *)&data[i].spin - (char *)data;
 		xe_spin_init(&data[i].spin, &spin_opts);
-		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
-		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
+		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 		sync[1].handle = syncobjs[i];
 
 		exec.exec_queue_id = exec_queues[i];
@@ -110,7 +110,7 @@ static void test_all_active(int fd, int gt, int class)
 	}
 	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
 
-	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
+	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 	xe_vm_unbind_async(fd, vm, 0, 0, addr, bo_size, sync, 1);
 	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
 
@@ -176,8 +176,8 @@ test_exec(int fd, int gt, int class, int n_exec_queues, int n_execs,
 	uint32_t vm;
 	uint64_t addr = 0x1a0000;
 	struct drm_xe_sync sync[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 	struct drm_xe_exec exec = {
 		.num_syncs = 2,
@@ -207,7 +207,7 @@ test_exec(int fd, int gt, int class, int n_exec_queues, int n_execs,
 	if (num_placements < 2)
 		return;
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	bo_size = sizeof(*data) * n_execs;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd), xe_get_default_alignment(fd));
 
@@ -269,8 +269,8 @@ test_exec(int fd, int gt, int class, int n_exec_queues, int n_execs,
 		data[i].batch[b++] = MI_BATCH_BUFFER_END;
 		igt_assert(b <= ARRAY_SIZE(data[i].batch));
 
-		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
-		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
+		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 		sync[1].handle = syncobjs[e];
 
 		exec.exec_queue_id = exec_queues[e];
@@ -281,11 +281,11 @@ test_exec(int fd, int gt, int class, int n_exec_queues, int n_execs,
 		xe_exec(fd, &exec);
 
 		if (flags & REBIND && i + 1 != n_execs) {
-			sync[1].flags &= ~DRM_XE_SYNC_SIGNAL;
+			sync[1].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
 			xe_vm_unbind_async(fd, vm, 0, 0, addr, bo_size,
 					   sync + 1, 1);
 
-			sync[0].flags |= DRM_XE_SYNC_SIGNAL;
+			sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 			addr += bo_size;
 			if (bo)
 				xe_vm_bind_async(fd, vm, 0, bo, 0, addr,
@@ -329,7 +329,7 @@ test_exec(int fd, int gt, int class, int n_exec_queues, int n_execs,
 					NULL));
 	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
 
-	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
+	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 	xe_vm_unbind_async(fd, vm, 0, 0, addr, bo_size, sync, 1);
 	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
 
@@ -399,7 +399,7 @@ test_cm(int fd, int gt, int class, int n_exec_queues, int n_execs,
 	uint64_t addr = 0x1a0000;
 #define USER_FENCE_VALUE	0xdeadbeefdeadbeefull
 	struct drm_xe_sync sync[1] = {
-		{ .flags = DRM_XE_SYNC_USER_FENCE | DRM_XE_SYNC_SIGNAL,
+		{ .flags = DRM_XE_SYNC_FLAG_USER_FENCE | DRM_XE_SYNC_FLAG_SIGNAL,
 	          .timeline_value = USER_FENCE_VALUE },
 	};
 	struct drm_xe_exec exec = {
@@ -433,8 +433,8 @@ test_cm(int fd, int gt, int class, int n_exec_queues, int n_execs,
 	if (num_placements < 2)
 		return;
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
-			  DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT |
+			  DRM_XE_VM_CREATE_FLAG_COMPUTE_MODE, 0);
 	bo_size = sizeof(*data) * n_execs;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
diff --git a/tests/intel/xe_exec_basic.c b/tests/intel/xe_exec_basic.c
index 232ddde8e..a401f0165 100644
--- a/tests/intel/xe_exec_basic.c
+++ b/tests/intel/xe_exec_basic.c
@@ -81,8 +81,8 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 	  int n_exec_queues, int n_execs, int n_vm, unsigned int flags)
 {
 	struct drm_xe_sync sync[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 	struct drm_xe_exec exec = {
 		.num_batch_buffer = 1,
@@ -109,7 +109,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 	igt_assert(n_vm <= MAX_N_EXEC_QUEUES);
 
 	for (i = 0; i < n_vm; ++i)
-		vm[i] = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+		vm[i] = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	bo_size = sizeof(*data) * n_execs;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
@@ -199,9 +199,9 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 		data[i].batch[b++] = MI_BATCH_BUFFER_END;
 		igt_assert(b <= ARRAY_SIZE(data[i].batch));
 
-		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
+		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
 		sync[0].handle = bind_syncobjs[cur_vm];
-		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 		sync[1].handle = syncobjs[e];
 
 		exec.exec_queue_id = exec_queues[e];
@@ -213,11 +213,11 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 		if (flags & REBIND && i + 1 != n_execs) {
 			uint32_t __vm = vm[cur_vm];
 
-			sync[1].flags &= ~DRM_XE_SYNC_SIGNAL;
+			sync[1].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
 			xe_vm_unbind_async(fd, __vm, bind_exec_queues[e], 0,
 					   __addr, bo_size, sync + 1, 1);
 
-			sync[0].flags |= DRM_XE_SYNC_SIGNAL;
+			sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 			addr[i % n_vm] += bo_size;
 			__addr = addr[i % n_vm];
 			if (bo)
@@ -266,7 +266,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 		igt_assert(syncobj_wait(fd, &bind_syncobjs[i], 1, INT64_MAX, 0,
 					NULL));
 
-	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
+	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 	for (i = 0; i < n_vm; ++i) {
 		syncobj_reset(fd, &sync[0].handle, 1);
 		xe_vm_unbind_async(fd, vm[i], bind_exec_queues[i], 0, addr[i],
diff --git a/tests/intel/xe_exec_compute_mode.c b/tests/intel/xe_exec_compute_mode.c
index b0a677dca..20d3fc6e8 100644
--- a/tests/intel/xe_exec_compute_mode.c
+++ b/tests/intel/xe_exec_compute_mode.c
@@ -88,7 +88,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 	uint64_t addr = 0x1a0000;
 #define USER_FENCE_VALUE	0xdeadbeefdeadbeefull
 	struct drm_xe_sync sync[1] = {
-		{ .flags = DRM_XE_SYNC_USER_FENCE | DRM_XE_SYNC_SIGNAL,
+		{ .flags = DRM_XE_SYNC_FLAG_USER_FENCE | DRM_XE_SYNC_FLAG_SIGNAL,
 	          .timeline_value = USER_FENCE_VALUE },
 	};
 	struct drm_xe_exec exec = {
@@ -113,8 +113,8 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 
 	igt_assert(n_exec_queues <= MAX_N_EXECQUEUES);
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
-			  DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT |
+			  DRM_XE_VM_CREATE_FLAG_COMPUTE_MODE, 0);
 	bo_size = sizeof(*data) * n_execs;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
diff --git a/tests/intel/xe_exec_fault_mode.c b/tests/intel/xe_exec_fault_mode.c
index 477d0824d..92d552f97 100644
--- a/tests/intel/xe_exec_fault_mode.c
+++ b/tests/intel/xe_exec_fault_mode.c
@@ -8,7 +8,7 @@
  * Category: Hardware building block
  * Sub-category: execbuf
  * Functionality: fault mode
- * GPU requirements: GPU needs support for DRM_XE_VM_CREATE_FAULT_MODE
+ * GPU requirements: GPU needs support for DRM_XE_VM_CREATE_FLAG_FAULT_MODE
  */
 
 #include <fcntl.h>
@@ -107,7 +107,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 	uint64_t addr = 0x1a0000;
 #define USER_FENCE_VALUE	0xdeadbeefdeadbeefull
 	struct drm_xe_sync sync[1] = {
-		{ .flags = DRM_XE_SYNC_USER_FENCE | DRM_XE_SYNC_SIGNAL,
+		{ .flags = DRM_XE_SYNC_FLAG_USER_FENCE | DRM_XE_SYNC_FLAG_SIGNAL,
 	          .timeline_value = USER_FENCE_VALUE },
 	};
 	struct drm_xe_exec exec = {
@@ -131,8 +131,8 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
 
 	igt_assert(n_exec_queues <= MAX_N_EXEC_QUEUES);
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
-			  DRM_XE_VM_CREATE_FAULT_MODE, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT |
+			  DRM_XE_VM_CREATE_FLAG_FAULT_MODE, 0);
 	bo_size = sizeof(*data) * n_execs;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
diff --git a/tests/intel/xe_exec_reset.c b/tests/intel/xe_exec_reset.c
index 39647b736..195e62911 100644
--- a/tests/intel/xe_exec_reset.c
+++ b/tests/intel/xe_exec_reset.c
@@ -30,8 +30,8 @@ static void test_spin(int fd, struct drm_xe_engine_class_instance *eci)
 	uint32_t vm;
 	uint64_t addr = 0x1a0000;
 	struct drm_xe_sync sync[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 	struct drm_xe_exec exec = {
 		.num_batch_buffer = 1,
@@ -45,7 +45,7 @@ static void test_spin(int fd, struct drm_xe_engine_class_instance *eci)
 	struct xe_spin *spin;
 	struct xe_spin_opts spin_opts = { .addr = addr, .preempt = false };
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	bo_size = sizeof(*spin);
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
@@ -62,8 +62,8 @@ static void test_spin(int fd, struct drm_xe_engine_class_instance *eci)
 
 	xe_spin_init(spin, &spin_opts);
 
-	sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
-	sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+	sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
+	sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 	sync[1].handle = syncobj;
 
 	exec.exec_queue_id = exec_queue;
@@ -78,7 +78,7 @@ static void test_spin(int fd, struct drm_xe_engine_class_instance *eci)
 	igt_assert(syncobj_wait(fd, &syncobj, 1, INT64_MAX, 0, NULL));
 	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
 
-	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
+	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 	xe_vm_unbind_async(fd, vm, 0, 0, addr, bo_size, sync, 1);
 	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
 
@@ -140,8 +140,8 @@ test_balancer(int fd, int gt, int class, int n_exec_queues, int n_execs,
 	uint32_t vm;
 	uint64_t addr = 0x1a0000;
 	struct drm_xe_sync sync[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 	struct drm_xe_exec exec = {
 		.num_syncs = 2,
@@ -176,7 +176,7 @@ test_balancer(int fd, int gt, int class, int n_exec_queues, int n_execs,
 	if (num_placements < 2)
 		return;
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	bo_size = sizeof(*data) * n_execs;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
@@ -257,8 +257,8 @@ test_balancer(int fd, int gt, int class, int n_exec_queues, int n_execs,
 		for (j = 0; j < num_placements && flags & PARALLEL; ++j)
 			batches[j] = exec_addr;
 
-		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
-		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
+		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 		sync[1].handle = syncobjs[e];
 
 		exec.exec_queue_id = exec_queues[e];
@@ -288,7 +288,7 @@ test_balancer(int fd, int gt, int class, int n_exec_queues, int n_execs,
 					NULL));
 	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
 
-	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
+	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 	xe_vm_unbind_async(fd, vm, 0, 0, addr, bo_size, sync, 1);
 	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
 
@@ -336,8 +336,8 @@ test_legacy_mode(int fd, struct drm_xe_engine_class_instance *eci,
 	uint32_t vm;
 	uint64_t addr = 0x1a0000;
 	struct drm_xe_sync sync[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 	struct drm_xe_exec exec = {
 		.num_batch_buffer = 1,
@@ -362,7 +362,7 @@ test_legacy_mode(int fd, struct drm_xe_engine_class_instance *eci,
 	if (flags & CLOSE_FD)
 		fd = drm_open_driver(DRIVER_XE);
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	bo_size = sizeof(*data) * n_execs;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
@@ -425,8 +425,8 @@ test_legacy_mode(int fd, struct drm_xe_engine_class_instance *eci,
 			exec_addr = batch_addr;
 		}
 
-		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
-		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
+		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 		sync[1].handle = syncobjs[e];
 
 		exec.exec_queue_id = exec_queues[e];
@@ -455,7 +455,7 @@ test_legacy_mode(int fd, struct drm_xe_engine_class_instance *eci,
 					NULL));
 	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
 
-	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
+	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 	xe_vm_unbind_async(fd, vm, 0, 0, addr, bo_size, sync, 1);
 	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
 
@@ -501,7 +501,7 @@ test_compute_mode(int fd, struct drm_xe_engine_class_instance *eci,
 	uint64_t addr = 0x1a0000;
 #define USER_FENCE_VALUE	0xdeadbeefdeadbeefull
 	struct drm_xe_sync sync[1] = {
-		{ .flags = DRM_XE_SYNC_USER_FENCE | DRM_XE_SYNC_SIGNAL,
+		{ .flags = DRM_XE_SYNC_FLAG_USER_FENCE | DRM_XE_SYNC_FLAG_SIGNAL,
 	          .timeline_value = USER_FENCE_VALUE },
 	};
 	struct drm_xe_exec exec = {
@@ -528,8 +528,8 @@ test_compute_mode(int fd, struct drm_xe_engine_class_instance *eci,
 	if (flags & CLOSE_FD)
 		fd = drm_open_driver(DRIVER_XE);
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
-			  DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT |
+			  DRM_XE_VM_CREATE_FLAG_COMPUTE_MODE, 0);
 	bo_size = sizeof(*data) * n_execs;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
diff --git a/tests/intel/xe_exec_store.c b/tests/intel/xe_exec_store.c
index 4ca76b43a..9c14bfd14 100644
--- a/tests/intel/xe_exec_store.c
+++ b/tests/intel/xe_exec_store.c
@@ -55,7 +55,7 @@ static void store_dword_batch(struct data *data, uint64_t addr, int value)
 static void store(int fd)
 {
 	struct drm_xe_sync sync = {
-		.flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL,
+		.flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL,
 	};
 	struct drm_xe_exec exec = {
 		.num_batch_buffer = 1,
@@ -75,7 +75,7 @@ static void store(int fd)
 	syncobj = syncobj_create(fd, 0);
 	sync.handle = syncobj;
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	bo_size = sizeof(*data);
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
@@ -91,7 +91,7 @@ static void store(int fd)
 	exec_queue = xe_exec_queue_create(fd, vm, hw_engine, 0);
 	exec.exec_queue_id = exec_queue;
 	exec.address = data->addr;
-	sync.flags &= DRM_XE_SYNC_SIGNAL;
+	sync.flags &= DRM_XE_SYNC_FLAG_SIGNAL;
 	xe_exec(fd, &exec);
 
 	igt_assert(syncobj_wait(fd, &syncobj, 1, INT64_MAX, 0, NULL));
@@ -121,8 +121,8 @@ static void store_cachelines(int fd, struct drm_xe_engine_class_instance *eci,
 			     unsigned int flags)
 {
 	struct drm_xe_sync sync[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, }
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, }
 	};
 
 	struct drm_xe_exec exec = {
@@ -143,7 +143,7 @@ static void store_cachelines(int fd, struct drm_xe_engine_class_instance *eci,
 	size_t bo_size = 4096;
 
 	bo_size = ALIGN(bo_size, xe_get_default_alignment(fd));
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
 	exec_queues = xe_exec_queue_create(fd, vm, eci, 0);
 	syncobjs = syncobj_create(fd, 0);
@@ -173,8 +173,8 @@ static void store_cachelines(int fd, struct drm_xe_engine_class_instance *eci,
 		batch_map[b++] = value[n];
 	}
 	batch_map[b++] = MI_BATCH_BUFFER_END;
-	sync[0].flags &= DRM_XE_SYNC_SIGNAL;
-	sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+	sync[0].flags &= DRM_XE_SYNC_FLAG_SIGNAL;
+	sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 	sync[1].handle = syncobjs;
 	exec.exec_queue_id = exec_queues;
 	xe_exec(fd, &exec);
@@ -210,8 +210,8 @@ static void store_cachelines(int fd, struct drm_xe_engine_class_instance *eci,
 static void store_all(int fd, int gt, int class)
 {
 	struct drm_xe_sync sync[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, }
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, }
 	};
 	struct drm_xe_exec exec = {
 		.num_batch_buffer = 1,
@@ -230,7 +230,7 @@ static void store_all(int fd, int gt, int class)
 	struct drm_xe_engine_class_instance *hwe;
 	int i, num_placements = 0;
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	bo_size = sizeof(*data);
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
@@ -267,8 +267,8 @@ static void store_all(int fd, int gt, int class)
 	for (i = 0; i < num_placements; i++) {
 
 		store_dword_batch(data, addr, i);
-		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
-		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
+		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 		sync[1].handle = syncobjs[i];
 
 		exec.exec_queue_id = exec_queues[i];
diff --git a/tests/intel/xe_exec_threads.c b/tests/intel/xe_exec_threads.c
index b814dcdf5..bb979b18c 100644
--- a/tests/intel/xe_exec_threads.c
+++ b/tests/intel/xe_exec_threads.c
@@ -47,8 +47,8 @@ test_balancer(int fd, int gt, uint32_t vm, uint64_t addr, uint64_t userptr,
 	      int class, int n_exec_queues, int n_execs, unsigned int flags)
 {
 	struct drm_xe_sync sync[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 	struct drm_xe_sync sync_all[MAX_N_EXEC_QUEUES];
 	struct drm_xe_exec exec = {
@@ -77,7 +77,7 @@ test_balancer(int fd, int gt, uint32_t vm, uint64_t addr, uint64_t userptr,
 	}
 
 	if (!vm) {
-		vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+		vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 		owns_vm = true;
 	}
 
@@ -125,7 +125,7 @@ test_balancer(int fd, int gt, uint32_t vm, uint64_t addr, uint64_t userptr,
 					&create), 0);
 		exec_queues[i] = create.exec_queue_id;
 		syncobjs[i] = syncobj_create(fd, 0);
-		sync_all[i].flags = DRM_XE_SYNC_SYNCOBJ;
+		sync_all[i].flags = DRM_XE_SYNC_FLAG_SYNCOBJ;
 		sync_all[i].handle = syncobjs[i];
 	};
 	exec.num_batch_buffer = flags & PARALLEL ? num_placements : 1;
@@ -158,8 +158,8 @@ test_balancer(int fd, int gt, uint32_t vm, uint64_t addr, uint64_t userptr,
 		data[i].batch[b++] = MI_BATCH_BUFFER_END;
 		igt_assert(b <= ARRAY_SIZE(data[i].batch));
 
-		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
-		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
+		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 		sync[1].handle = syncobjs[e];
 
 		exec.exec_queue_id = exec_queues[e];
@@ -173,7 +173,7 @@ test_balancer(int fd, int gt, uint32_t vm, uint64_t addr, uint64_t userptr,
 			xe_vm_unbind_async(fd, vm, 0, 0, addr, bo_size,
 					   sync_all, n_exec_queues);
 
-			sync[0].flags |= DRM_XE_SYNC_SIGNAL;
+			sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 			addr += bo_size;
 			if (bo)
 				xe_vm_bind_async(fd, vm, 0, bo, 0, addr,
@@ -221,7 +221,7 @@ test_balancer(int fd, int gt, uint32_t vm, uint64_t addr, uint64_t userptr,
 					NULL));
 	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
 
-	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
+	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 	xe_vm_unbind_async(fd, vm, 0, 0, addr, bo_size, sync, 1);
 	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
 
@@ -254,7 +254,7 @@ test_compute_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
 {
 #define USER_FENCE_VALUE	0xdeadbeefdeadbeefull
 	struct drm_xe_sync sync[1] = {
-		{ .flags = DRM_XE_SYNC_USER_FENCE | DRM_XE_SYNC_SIGNAL,
+		{ .flags = DRM_XE_SYNC_FLAG_USER_FENCE | DRM_XE_SYNC_FLAG_SIGNAL,
 	          .timeline_value = USER_FENCE_VALUE },
 	};
 	struct drm_xe_exec exec = {
@@ -285,8 +285,8 @@ test_compute_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
 	}
 
 	if (!vm) {
-		vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
-				  DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
+		vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT |
+				  DRM_XE_VM_CREATE_FLAG_COMPUTE_MODE, 0);
 		owns_vm = true;
 	}
 
@@ -457,8 +457,8 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
 		 int n_execs, unsigned int flags)
 {
 	struct drm_xe_sync sync[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 	struct drm_xe_sync sync_all[MAX_N_EXEC_QUEUES];
 	struct drm_xe_exec exec = {
@@ -489,7 +489,7 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
 	}
 
 	if (!vm) {
-		vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+		vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 		owns_vm = true;
 	}
 
@@ -536,7 +536,7 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
 		else
 			bind_exec_queues[i] = 0;
 		syncobjs[i] = syncobj_create(fd, 0);
-		sync_all[i].flags = DRM_XE_SYNC_SYNCOBJ;
+		sync_all[i].flags = DRM_XE_SYNC_FLAG_SYNCOBJ;
 		sync_all[i].handle = syncobjs[i];
 	};
 
@@ -576,8 +576,8 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
 			exec_addr = batch_addr;
 		}
 
-		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
-		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
+		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 		sync[1].handle = syncobjs[e];
 
 		exec.exec_queue_id = exec_queues[e];
@@ -599,7 +599,7 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
 					   0, addr, bo_size,
 					   sync_all, n_exec_queues);
 
-			sync[0].flags |= DRM_XE_SYNC_SIGNAL;
+			sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 			addr += bo_size;
 			if (bo)
 				xe_vm_bind_async(fd, vm, bind_exec_queues[e],
@@ -649,7 +649,7 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
 					NULL));
 	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
 
-	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
+	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 	xe_vm_unbind_async(fd, vm, bind_exec_queues[0], 0, addr,
 			   bo_size, sync, 1);
 	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
@@ -1001,11 +1001,11 @@ static void threads(int fd, int flags)
 
 	if (flags & SHARED_VM) {
 		vm_legacy_mode = xe_vm_create(fd,
-					      DRM_XE_VM_CREATE_ASYNC_DEFAULT,
+					      DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT,
 					      0);
 		vm_compute_mode = xe_vm_create(fd,
-					       DRM_XE_VM_CREATE_ASYNC_DEFAULT |
-					       DRM_XE_VM_CREATE_COMPUTE_MODE,
+					       DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT |
+					       DRM_XE_VM_CREATE_FLAG_COMPUTE_MODE,
 					       0);
 	}
 
diff --git a/tests/intel/xe_exercise_blt.c b/tests/intel/xe_exercise_blt.c
index df774130f..fd310138d 100644
--- a/tests/intel/xe_exercise_blt.c
+++ b/tests/intel/xe_exercise_blt.c
@@ -280,7 +280,7 @@ static void fast_copy_test(int xe,
 			region1 = igt_collection_get_value(regions, 0);
 			region2 = igt_collection_get_value(regions, 1);
 
-			vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+			vm = xe_vm_create(xe, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 			exec_queue = xe_exec_queue_create(xe, vm, &inst, 0);
 			ctx = intel_ctx_xe(xe, vm, exec_queue, 0, 0, 0);
 
diff --git a/tests/intel/xe_guc_pc.c b/tests/intel/xe_guc_pc.c
index 3f2c4ae23..fa2f20cca 100644
--- a/tests/intel/xe_guc_pc.c
+++ b/tests/intel/xe_guc_pc.c
@@ -37,8 +37,8 @@ static void exec_basic(int fd, struct drm_xe_engine_class_instance *eci,
 	uint32_t vm;
 	uint64_t addr = 0x1a0000;
 	struct drm_xe_sync sync[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 	struct drm_xe_exec exec = {
 		.num_batch_buffer = 1,
@@ -60,7 +60,7 @@ static void exec_basic(int fd, struct drm_xe_engine_class_instance *eci,
 	igt_assert(n_exec_queues <= MAX_N_EXEC_QUEUES);
 	igt_assert(n_execs > 0);
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	bo_size = sizeof(*data) * n_execs;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
@@ -95,8 +95,8 @@ static void exec_basic(int fd, struct drm_xe_engine_class_instance *eci,
 		data[i].batch[b++] = MI_BATCH_BUFFER_END;
 		igt_assert(b <= ARRAY_SIZE(data[i].batch));
 
-		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
-		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
+		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 		sync[1].handle = syncobjs[e];
 
 		exec.exec_queue_id = exec_queues[e];
@@ -114,7 +114,7 @@ static void exec_basic(int fd, struct drm_xe_engine_class_instance *eci,
 
 	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
 
-	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
+	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 	xe_vm_unbind_async(fd, vm, bind_exec_queues[0], 0, addr,
 			   bo_size, sync, 1);
 	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
diff --git a/tests/intel/xe_huc_copy.c b/tests/intel/xe_huc_copy.c
index 4f5ce2212..eda9e5216 100644
--- a/tests/intel/xe_huc_copy.c
+++ b/tests/intel/xe_huc_copy.c
@@ -118,7 +118,7 @@ __test_huc_copy(int fd, uint32_t vm, struct drm_xe_engine_class_instance *hwe)
 	};
 
 	exec_queue = xe_exec_queue_create(fd, vm, hwe, 0);
-	sync.flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL;
+	sync.flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL;
 	sync.handle = syncobj_create(fd, 0);
 
 	for(int i = 0; i < BO_DICT_ENTRIES; i++) {
@@ -156,7 +156,7 @@ test_huc_copy(int fd)
 	uint32_t vm;
 	uint32_t tested_gts = 0;
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 
 	xe_for_each_hw_engine(fd, hwe) {
 		if (hwe->engine_class == DRM_XE_ENGINE_CLASS_VIDEO_DECODE &&
diff --git a/tests/intel/xe_intel_bb.c b/tests/intel/xe_intel_bb.c
index 26e4dcc85..d66996cd5 100644
--- a/tests/intel/xe_intel_bb.c
+++ b/tests/intel/xe_intel_bb.c
@@ -191,7 +191,7 @@ static void simple_bb(struct buf_ops *bops, bool new_context)
 	intel_bb_reset(ibb, true);
 
 	if (new_context) {
-		vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+		vm = xe_vm_create(xe, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 		ctx = xe_exec_queue_create(xe, vm, xe_hw_engine(xe, 0), 0);
 		intel_bb_destroy(ibb);
 		ibb = intel_bb_create_with_context(xe, ctx, vm, NULL, PAGE_SIZE);
diff --git a/tests/intel/xe_noexec_ping_pong.c b/tests/intel/xe_noexec_ping_pong.c
index 88b22ed11..9c2a70ff3 100644
--- a/tests/intel/xe_noexec_ping_pong.c
+++ b/tests/intel/xe_noexec_ping_pong.c
@@ -64,7 +64,7 @@ static void test_ping_pong(int fd, struct drm_xe_engine_class_instance *eci)
 	 * stats.
 	 */
 	for (i = 0; i < NUM_VMS; ++i) {
-		vm[i] = xe_vm_create(fd, DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
+		vm[i] = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_COMPUTE_MODE, 0);
 		for (j = 0; j < NUM_BOS; ++j) {
 			igt_debug("Creating bo size %lu for vm %u\n",
 				  (unsigned long) bo_size,
diff --git a/tests/intel/xe_perf_pmu.c b/tests/intel/xe_perf_pmu.c
index a0dd30e50..e9d05cf2b 100644
--- a/tests/intel/xe_perf_pmu.c
+++ b/tests/intel/xe_perf_pmu.c
@@ -81,8 +81,8 @@ static void test_any_engine_busyness(int fd, struct drm_xe_engine_class_instance
 	uint32_t vm;
 	uint64_t addr = 0x1a0000;
 	struct drm_xe_sync sync[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 	struct drm_xe_exec exec = {
 		.num_batch_buffer = 1,
@@ -98,7 +98,7 @@ static void test_any_engine_busyness(int fd, struct drm_xe_engine_class_instance
 	uint32_t pmu_fd;
 	uint64_t count, idle;
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	bo_size = sizeof(*spin);
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
@@ -118,8 +118,8 @@ static void test_any_engine_busyness(int fd, struct drm_xe_engine_class_instance
 
 	xe_spin_init(spin, &spin_opts);
 
-	sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
-	sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+	sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
+	sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 	sync[1].handle = syncobj;
 
 	exec.exec_queue_id = exec_queue;
@@ -135,7 +135,7 @@ static void test_any_engine_busyness(int fd, struct drm_xe_engine_class_instance
 	igt_assert(syncobj_wait(fd, &syncobj, 1, INT64_MAX, 0, NULL));
 	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
 
-	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
+	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 	xe_vm_unbind_async(fd, vm, 0, 0, addr, bo_size, sync, 1);
 	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
 
@@ -185,8 +185,8 @@ static void test_engine_group_busyness(int fd, int gt, int class, const char *na
 	uint32_t vm;
 	uint64_t addr = 0x1a0000;
 	struct drm_xe_sync sync[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 	struct drm_xe_exec exec = {
 		.num_batch_buffer = 1,
@@ -219,7 +219,7 @@ static void test_engine_group_busyness(int fd, int gt, int class, const char *na
 	igt_skip_on_f(!num_placements, "Engine class:%d gt:%d not enabled on this platform\n",
 		      class, gt);
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	bo_size = sizeof(*data) * num_placements;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd), xe_get_default_alignment(fd));
 
@@ -250,8 +250,8 @@ static void test_engine_group_busyness(int fd, int gt, int class, const char *na
 	for (i = 0; i < num_placements; i++) {
 		spin_opts.addr = addr + (char *)&data[i].spin - (char *)data;
 		xe_spin_init(&data[i].spin, &spin_opts);
-		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
-		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
+		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 		sync[1].handle = syncobjs[i];
 
 		exec.exec_queue_id = exec_queues[i];
@@ -268,7 +268,7 @@ static void test_engine_group_busyness(int fd, int gt, int class, const char *na
 
 	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
 
-	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
+	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 	xe_vm_unbind_async(fd, vm, 0, 0, addr, bo_size, sync, 1);
 	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
 
diff --git a/tests/intel/xe_pm.c b/tests/intel/xe_pm.c
index d07ed4535..18afb68b0 100644
--- a/tests/intel/xe_pm.c
+++ b/tests/intel/xe_pm.c
@@ -231,8 +231,8 @@ test_exec(device_t device, struct drm_xe_engine_class_instance *eci,
 	uint32_t vm;
 	uint64_t addr = 0x1a0000;
 	struct drm_xe_sync sync[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 	struct drm_xe_exec exec = {
 		.num_batch_buffer = 1,
@@ -259,7 +259,7 @@ test_exec(device_t device, struct drm_xe_engine_class_instance *eci,
 	if (check_rpm)
 		igt_assert(in_d3(device, d_state));
 
-	vm = xe_vm_create(device.fd_xe, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(device.fd_xe, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 
 	if (check_rpm)
 		igt_assert(out_of_d3(device, d_state));
@@ -304,8 +304,8 @@ test_exec(device_t device, struct drm_xe_engine_class_instance *eci,
 		data[i].batch[b++] = MI_BATCH_BUFFER_END;
 		igt_assert(b <= ARRAY_SIZE(data[i].batch));
 
-		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
-		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
+		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 		sync[1].handle = syncobjs[e];
 
 		exec.exec_queue_id = exec_queues[e];
@@ -331,7 +331,7 @@ test_exec(device_t device, struct drm_xe_engine_class_instance *eci,
 	if (check_rpm && runtime_usage_available(device.pci_xe))
 		rpm_usage = igt_pm_get_runtime_usage(device.pci_xe);
 
-	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
+	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 	xe_vm_unbind_async(device.fd_xe, vm, bind_exec_queues[0], 0, addr,
 			   bo_size, sync, 1);
 	igt_assert(syncobj_wait(device.fd_xe, &sync[0].handle, 1, INT64_MAX, 0,
diff --git a/tests/intel/xe_pm_residency.c b/tests/intel/xe_pm_residency.c
index 8e9197fae..c87eeef3c 100644
--- a/tests/intel/xe_pm_residency.c
+++ b/tests/intel/xe_pm_residency.c
@@ -87,7 +87,7 @@ static void exec_load(int fd, struct drm_xe_engine_class_instance *hwe, unsigned
 	} *data;
 
 	struct drm_xe_sync sync = {
-		.flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL,
+		.flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL,
 	};
 
 	struct drm_xe_exec exec = {
diff --git a/tests/intel/xe_spin_batch.c b/tests/intel/xe_spin_batch.c
index eb5d6aba8..6ab604d9b 100644
--- a/tests/intel/xe_spin_batch.c
+++ b/tests/intel/xe_spin_batch.c
@@ -145,7 +145,7 @@ static void xe_spin_fixed_duration(int fd)
 {
 	struct drm_xe_sync sync = {
 		.handle = syncobj_create(fd, 0),
-		.flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL,
+		.flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL,
 	};
 	struct drm_xe_exec exec = {
 		.num_batch_buffer = 1,
diff --git a/tests/intel/xe_vm.c b/tests/intel/xe_vm.c
index 6700a6a55..86c8d0c5d 100644
--- a/tests/intel/xe_vm.c
+++ b/tests/intel/xe_vm.c
@@ -89,7 +89,7 @@ write_dwords(int fd, uint32_t vm, int n_dwords, uint64_t *addrs)
 static void
 test_scratch(int fd)
 {
-	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_SCRATCH_PAGE, 0);
+	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_SCRATCH_PAGE, 0);
 	uint64_t addrs[] = {
 		0x000000000000ull,
 		0x7ffdb86402d8ull,
@@ -124,7 +124,7 @@ __test_bind_one_bo(int fd, uint32_t vm, int n_addrs, uint64_t *addrs)
 		uint64_t bind_addr = addrs[i] & ~(uint64_t)(bo_size - 1);
 
 		if (!vm)
-			vms[i] = xe_vm_create(fd, DRM_XE_VM_CREATE_SCRATCH_PAGE,
+			vms[i] = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_SCRATCH_PAGE,
 					      0);
 		igt_debug("Binding addr %"PRIx64"\n", addrs[i]);
 		xe_vm_bind_sync(fd, vm ? vm : vms[i], bo, 0,
@@ -214,7 +214,7 @@ test_bind_once(int fd)
 	uint64_t addr = 0x7ffdb86402d8ull;
 
 	__test_bind_one_bo(fd,
-			   xe_vm_create(fd, DRM_XE_VM_CREATE_SCRATCH_PAGE, 0),
+			   xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_SCRATCH_PAGE, 0),
 			   1, &addr);
 }
 
@@ -234,7 +234,7 @@ test_bind_one_bo_many_times(int fd)
 						ARRAY_SIZE(addrs_48b);
 
 	__test_bind_one_bo(fd,
-			   xe_vm_create(fd, DRM_XE_VM_CREATE_SCRATCH_PAGE, 0),
+			   xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_SCRATCH_PAGE, 0),
 			   addrs_size, addrs);
 }
 
@@ -265,14 +265,14 @@ test_bind_one_bo_many_times_many_vm(int fd)
 
 static void test_partial_unbinds(int fd)
 {
-	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	size_t bo_size = 3 * xe_get_default_alignment(fd);
 	uint32_t bo = xe_bo_create(fd, 0, vm, bo_size);
 	uint64_t unbind_size = bo_size / 3;
 	uint64_t addr = 0x1a0000;
 
 	struct drm_xe_sync sync = {
-	    .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL,
+	    .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL,
 	    .handle = syncobj_create(fd, 0),
 	};
 
@@ -312,10 +312,10 @@ static void unbind_all(int fd, int n_vmas)
 	uint32_t vm;
 	int i;
 	struct drm_xe_sync sync[1] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	bo = xe_bo_create(fd, 0, vm, bo_size);
 
 	for (i = 0; i < n_vmas; ++i)
@@ -387,8 +387,8 @@ shared_pte_page(int fd, struct drm_xe_engine_class_instance *eci, int n_bo,
 	uint32_t vm;
 	uint64_t addr = 0x1000 * 512;
 	struct drm_xe_sync sync[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 	struct drm_xe_sync sync_all[MAX_N_EXEC_QUEUES + 1];
 	struct drm_xe_exec exec = {
@@ -412,7 +412,7 @@ shared_pte_page(int fd, struct drm_xe_engine_class_instance *eci, int n_bo,
 	data = malloc(sizeof(*data) * n_bo);
 	igt_assert(data);
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	bo_size = sizeof(struct shared_pte_page_data);
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
@@ -430,7 +430,7 @@ shared_pte_page(int fd, struct drm_xe_engine_class_instance *eci, int n_bo,
 	for (i = 0; i < n_exec_queues; i++) {
 		exec_queues[i] = xe_exec_queue_create(fd, vm, eci, 0);
 		syncobjs[i] = syncobj_create(fd, 0);
-		sync_all[i].flags = DRM_XE_SYNC_SYNCOBJ;
+		sync_all[i].flags = DRM_XE_SYNC_FLAG_SYNCOBJ;
 		sync_all[i].handle = syncobjs[i];
 	};
 
@@ -455,8 +455,8 @@ shared_pte_page(int fd, struct drm_xe_engine_class_instance *eci, int n_bo,
 		data[i]->batch[b++] = MI_BATCH_BUFFER_END;
 		igt_assert(b <= ARRAY_SIZE(data[i]->batch));
 
-		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
-		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
+		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 		sync[1].handle = syncobjs[e];
 
 		exec.exec_queue_id = exec_queues[e];
@@ -468,7 +468,7 @@ shared_pte_page(int fd, struct drm_xe_engine_class_instance *eci, int n_bo,
 		if (i % 2)
 			continue;
 
-		sync_all[n_execs].flags = DRM_XE_SYNC_SIGNAL;
+		sync_all[n_execs].flags = DRM_XE_SYNC_FLAG_SIGNAL;
 		sync_all[n_execs].handle = sync[0].handle;
 		xe_vm_unbind_async(fd, vm, 0, 0, addr + i * addr_stride,
 				   bo_size, sync_all, n_execs + 1);
@@ -504,8 +504,8 @@ shared_pte_page(int fd, struct drm_xe_engine_class_instance *eci, int n_bo,
 		data[i]->batch[b++] = MI_BATCH_BUFFER_END;
 		igt_assert(b <= ARRAY_SIZE(data[i]->batch));
 
-		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
-		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
+		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 		sync[1].handle = syncobjs[e];
 
 		exec.exec_queue_id = exec_queues[e];
@@ -518,7 +518,7 @@ shared_pte_page(int fd, struct drm_xe_engine_class_instance *eci, int n_bo,
 		if (!(i % 2))
 			continue;
 
-		sync_all[n_execs].flags = DRM_XE_SYNC_SIGNAL;
+		sync_all[n_execs].flags = DRM_XE_SYNC_FLAG_SIGNAL;
 		sync_all[n_execs].handle = sync[0].handle;
 		xe_vm_unbind_async(fd, vm, 0, 0, addr + i * addr_stride,
 				   bo_size, sync_all, n_execs + 1);
@@ -573,8 +573,8 @@ test_bind_execqueues_independent(int fd, struct drm_xe_engine_class_instance *ec
 	uint32_t vm;
 	uint64_t addr = 0x1a0000;
 	struct drm_xe_sync sync[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 	struct drm_xe_exec exec = {
 		.num_batch_buffer = 1,
@@ -596,7 +596,7 @@ test_bind_execqueues_independent(int fd, struct drm_xe_engine_class_instance *ec
 	struct xe_spin_opts spin_opts = { .preempt = true };
 	int i, b;
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	bo_size = sizeof(*data) * N_EXEC_QUEUES;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
@@ -630,22 +630,22 @@ test_bind_execqueues_independent(int fd, struct drm_xe_engine_class_instance *ec
 			xe_spin_init(&data[i].spin, &spin_opts);
 			exec.exec_queue_id = exec_queues[e];
 			exec.address = spin_opts.addr;
-			sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
-			sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+			sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
+			sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 			sync[1].handle = syncobjs[e];
 			xe_exec(fd, &exec);
 			xe_spin_wait_started(&data[i].spin);
 
 			/* Do bind to 1st exec_queue blocked on cork */
 			addr += (flags & CONFLICT) ? (0x1 << 21) : bo_size;
-			sync[1].flags &= ~DRM_XE_SYNC_SIGNAL;
+			sync[1].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
 			sync[1].handle = syncobjs[e];
 			xe_vm_bind_async(fd, vm, bind_exec_queues[e], bo, 0, addr,
 					 bo_size, sync + 1, 1);
 			addr += bo_size;
 		} else {
 			/* Do bind to 2nd exec_queue which blocks write below */
-			sync[0].flags |= DRM_XE_SYNC_SIGNAL;
+			sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 			xe_vm_bind_async(fd, vm, bind_exec_queues[e], bo, 0, addr,
 					 bo_size, sync, 1);
 		}
@@ -663,8 +663,8 @@ test_bind_execqueues_independent(int fd, struct drm_xe_engine_class_instance *ec
 		data[i].batch[b++] = MI_BATCH_BUFFER_END;
 		igt_assert(b <= ARRAY_SIZE(data[i].batch));
 
-		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
-		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
+		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 		sync[1].handle = syncobjs[!i ? N_EXEC_QUEUES : e];
 
 		exec.num_syncs = 2;
@@ -708,7 +708,7 @@ test_bind_execqueues_independent(int fd, struct drm_xe_engine_class_instance *ec
 
 	syncobj_destroy(fd, sync[0].handle);
 	sync[0].handle = syncobj_create(fd, 0);
-	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
+	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 	xe_vm_unbind_all_async(fd, vm, 0, bo, sync, 1);
 	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
 
@@ -755,8 +755,8 @@ test_bind_array(int fd, struct drm_xe_engine_class_instance *eci, int n_execs,
 	uint32_t vm;
 	uint64_t addr = 0x1a0000, base_addr = 0x1a0000;
 	struct drm_xe_sync sync[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 	struct drm_xe_exec exec = {
 		.num_batch_buffer = 1,
@@ -776,7 +776,7 @@ test_bind_array(int fd, struct drm_xe_engine_class_instance *eci, int n_execs,
 
 	igt_assert(n_execs <= BIND_ARRAY_MAX_N_EXEC);
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	bo_size = sizeof(*data) * n_execs;
 	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
 			xe_get_default_alignment(fd));
@@ -822,8 +822,8 @@ test_bind_array(int fd, struct drm_xe_engine_class_instance *eci, int n_execs,
 		data[i].batch[b++] = MI_BATCH_BUFFER_END;
 		igt_assert(b <= ARRAY_SIZE(data[i].batch));
 
-		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
-		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
+		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 		if (i == n_execs - 1) {
 			sync[1].handle = syncobj_create(fd, 0);
 			exec.num_syncs = 2;
@@ -845,8 +845,8 @@ test_bind_array(int fd, struct drm_xe_engine_class_instance *eci, int n_execs,
 	}
 
 	syncobj_reset(fd, &sync[0].handle, 1);
-	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
-	sync[1].flags &= ~DRM_XE_SYNC_SIGNAL;
+	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
+	sync[1].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
 	xe_vm_bind_array(fd, vm, bind_exec_queue, bind_ops, n_execs, sync, 2);
 
 	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
@@ -943,8 +943,8 @@ test_large_binds(int fd, struct drm_xe_engine_class_instance *eci,
 		 unsigned int flags)
 {
 	struct drm_xe_sync sync[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 	struct drm_xe_exec exec = {
 		.num_batch_buffer = 1,
@@ -970,7 +970,7 @@ test_large_binds(int fd, struct drm_xe_engine_class_instance *eci,
 	}
 
 	igt_assert(n_exec_queues <= MAX_N_EXEC_QUEUES);
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 
 	if (flags & LARGE_BIND_FLAG_USERPTR) {
 		map = aligned_alloc(xe_get_default_alignment(fd), bo_size);
@@ -1027,8 +1027,8 @@ test_large_binds(int fd, struct drm_xe_engine_class_instance *eci,
 		data[i].batch[b++] = MI_BATCH_BUFFER_END;
 		igt_assert(b <= ARRAY_SIZE(data[i].batch));
 
-		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
-		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
+		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 		sync[1].handle = syncobjs[e];
 
 		if (i != e)
@@ -1050,7 +1050,7 @@ test_large_binds(int fd, struct drm_xe_engine_class_instance *eci,
 	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
 
 	syncobj_reset(fd, &sync[0].handle, 1);
-	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
+	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 	if (flags & LARGE_BIND_FLAG_SPLIT) {
 		xe_vm_unbind_async(fd, vm, 0, 0, base_addr,
 				   bo_size / 2, NULL, 0);
@@ -1103,7 +1103,7 @@ static void *hammer_thread(void *tdata)
 {
 	struct thread_data *t = tdata;
 	struct drm_xe_sync sync[1] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 	struct drm_xe_exec exec = {
 		.num_batch_buffer = 1,
@@ -1227,8 +1227,8 @@ test_munmap_style_unbind(int fd, struct drm_xe_engine_class_instance *eci,
 			 unsigned int flags)
 {
 	struct drm_xe_sync sync[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 	struct drm_xe_exec exec = {
 		.num_batch_buffer = 1,
@@ -1262,7 +1262,7 @@ test_munmap_style_unbind(int fd, struct drm_xe_engine_class_instance *eci,
 			unbind_n_page_offset *= n_page_per_2mb;
 	}
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	bo_size = page_size * bo_n_pages;
 
 	if (flags & MAP_FLAG_USERPTR) {
@@ -1330,10 +1330,10 @@ test_munmap_style_unbind(int fd, struct drm_xe_engine_class_instance *eci,
 		data->batch[b++] = MI_BATCH_BUFFER_END;
 		igt_assert(b <= ARRAY_SIZE(data[i].batch));
 
-		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
+		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
 		if (i)
 			syncobj_reset(fd, &sync[1].handle, 1);
-		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 
 		exec.exec_queue_id = exec_queue;
 		exec.address = batch_addr;
@@ -1345,8 +1345,8 @@ test_munmap_style_unbind(int fd, struct drm_xe_engine_class_instance *eci,
 
 	/* Unbind some of the pages */
 	syncobj_reset(fd, &sync[0].handle, 1);
-	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
-	sync[1].flags &= ~DRM_XE_SYNC_SIGNAL;
+	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
+	sync[1].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
 	xe_vm_unbind_async(fd, vm, 0, 0,
 			   addr + unbind_n_page_offset * page_size,
 			   unbind_n_pages * page_size, sync, 2);
@@ -1387,9 +1387,9 @@ try_again_after_invalidate:
 			data->batch[b++] = MI_BATCH_BUFFER_END;
 			igt_assert(b <= ARRAY_SIZE(data[i].batch));
 
-			sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
+			sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
 			syncobj_reset(fd, &sync[1].handle, 1);
-			sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+			sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 
 			exec.exec_queue_id = exec_queue;
 			exec.address = batch_addr;
@@ -1430,7 +1430,7 @@ try_again_after_invalidate:
 
 	/* Confirm unbound region can be rebound */
 	syncobj_reset(fd, &sync[0].handle, 1);
-	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
+	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 	if (flags & MAP_FLAG_USERPTR)
 		xe_vm_bind_userptr_async(fd, vm, 0,
 					 addr + unbind_n_page_offset * page_size,
@@ -1458,9 +1458,9 @@ try_again_after_invalidate:
 		data->batch[b++] = MI_BATCH_BUFFER_END;
 		igt_assert(b <= ARRAY_SIZE(data[i].batch));
 
-		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
+		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
 		syncobj_reset(fd, &sync[1].handle, 1);
-		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 
 		exec.exec_queue_id = exec_queue;
 		exec.address = batch_addr;
@@ -1528,8 +1528,8 @@ test_mmap_style_bind(int fd, struct drm_xe_engine_class_instance *eci,
 		     int unbind_n_pages, unsigned int flags)
 {
 	struct drm_xe_sync sync[2] = {
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
-		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
+		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
 	};
 	struct drm_xe_exec exec = {
 		.num_batch_buffer = 1,
@@ -1562,7 +1562,7 @@ test_mmap_style_bind(int fd, struct drm_xe_engine_class_instance *eci,
 			unbind_n_page_offset *= n_page_per_2mb;
 	}
 
-	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	bo_size = page_size * bo_n_pages;
 
 	if (flags & MAP_FLAG_USERPTR) {
@@ -1636,10 +1636,10 @@ test_mmap_style_bind(int fd, struct drm_xe_engine_class_instance *eci,
 		data->batch[b++] = MI_BATCH_BUFFER_END;
 		igt_assert(b <= ARRAY_SIZE(data[i].batch));
 
-		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
+		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
 		if (i)
 			syncobj_reset(fd, &sync[1].handle, 1);
-		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 
 		exec.exec_queue_id = exec_queue;
 		exec.address = batch_addr;
@@ -1651,8 +1651,8 @@ test_mmap_style_bind(int fd, struct drm_xe_engine_class_instance *eci,
 
 	/* Bind some of the pages to different BO / userptr */
 	syncobj_reset(fd, &sync[0].handle, 1);
-	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
-	sync[1].flags &= ~DRM_XE_SYNC_SIGNAL;
+	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
+	sync[1].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
 	if (flags & MAP_FLAG_USERPTR)
 		xe_vm_bind_userptr_async(fd, vm, 0, addr + bo_size +
 					 unbind_n_page_offset * page_size,
@@ -1704,10 +1704,10 @@ test_mmap_style_bind(int fd, struct drm_xe_engine_class_instance *eci,
 		data->batch[b++] = MI_BATCH_BUFFER_END;
 		igt_assert(b <= ARRAY_SIZE(data[i].batch));
 
-		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
+		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
 		if (i)
 			syncobj_reset(fd, &sync[1].handle, 1);
-		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
 
 		exec.exec_queue_id = exec_queue;
 		exec.address = batch_addr;
diff --git a/tests/intel/xe_waitfence.c b/tests/intel/xe_waitfence.c
index ac7e99dde..2efdc1245 100644
--- a/tests/intel/xe_waitfence.c
+++ b/tests/intel/xe_waitfence.c
@@ -30,7 +30,7 @@ static void do_bind(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
 		    uint64_t addr, uint64_t size, uint64_t val)
 {
 	struct drm_xe_sync sync[1] = {};
-	sync[0].flags = DRM_XE_SYNC_USER_FENCE | DRM_XE_SYNC_SIGNAL;
+	sync[0].flags = DRM_XE_SYNC_FLAG_USER_FENCE | DRM_XE_SYNC_FLAG_SIGNAL;
 
 	sync[0].addr = to_user_pointer(&wait_fence);
 	sync[0].timeline_value = val;
@@ -63,7 +63,7 @@ waitfence(int fd, enum waittype wt)
 	uint32_t bo_7;
 	int64_t timeout;
 
-	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 	bo_1 = xe_bo_create_flags(fd, vm, 0x40000, MY_FLAG);
 	do_bind(fd, vm, bo_1, 0, 0x200000, 0x40000, 1);
 	bo_2 = xe_bo_create_flags(fd, vm, 0x40000, MY_FLAG);
@@ -132,7 +132,7 @@ invalid_flag(int fd)
 		.instances = 0,
 	};
 
-	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 
 	bo = xe_bo_create_flags(fd, vm, 0x40000, MY_FLAG);
 
@@ -157,7 +157,7 @@ invalid_ops(int fd)
 		.instances = 0,
 	};
 
-	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 
 	bo = xe_bo_create_flags(fd, vm, 0x40000, MY_FLAG);
 
@@ -182,7 +182,7 @@ invalid_engine(int fd)
 		.instances = 0,
 	};
 
-	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
+	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
 
 	bo = xe_bo_create_flags(fd, vm, 0x40000, MY_FLAG);
 
-- 
2.34.1

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [igt-dev] [PATCH v1 3/8] drm-uapi/xe: Change rsvd to pad in struct drm_xe_class_instance
  2023-11-14 13:44 [igt-dev] [PATCH v1 0/8] uAPI Alignment - Renaming Francois Dugast
  2023-11-14 13:44 ` [igt-dev] [PATCH v1 1/8] drm-uapi/xe: Add missing DRM_ prefix in uAPI constants Francois Dugast
  2023-11-14 13:44 ` [igt-dev] [PATCH v1 2/8] drm-uapi/xe: Add _FLAG to uAPI constants usable for flags Francois Dugast
@ 2023-11-14 13:44 ` Francois Dugast
  2023-11-14 13:47   ` Rodrigo Vivi
  2023-11-14 13:44 ` [igt-dev] [PATCH v1 4/8] drm-uapi/xe: Rename *_mem_regions mask Francois Dugast
                   ` (5 subsequent siblings)
  8 siblings, 1 reply; 18+ messages in thread
From: Francois Dugast @ 2023-11-14 13:44 UTC (permalink / raw)
  To: igt-dev

Align with commit ("drm/xe/uapi: Change rsvd to pad in struct drm_xe_class_instance")

Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
 include/drm-uapi/xe_drm.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index 68d005202..32f6cf631 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -141,7 +141,8 @@ struct drm_xe_engine_class_instance {
 
 	__u16 engine_instance;
 	__u16 gt_id;
-	__u16 rsvd;
+	/** @pad: MBZ */
+	__u16 pad;
 };
 
 /**
-- 
2.34.1

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [igt-dev] [PATCH v1 4/8] drm-uapi/xe: Rename *_mem_regions mask.
  2023-11-14 13:44 [igt-dev] [PATCH v1 0/8] uAPI Alignment - Renaming Francois Dugast
                   ` (2 preceding siblings ...)
  2023-11-14 13:44 ` [igt-dev] [PATCH v1 3/8] drm-uapi/xe: Change rsvd to pad in struct drm_xe_class_instance Francois Dugast
@ 2023-11-14 13:44 ` Francois Dugast
  2023-11-14 14:44   ` Kamil Konieczny
  2023-11-14 13:44 ` [igt-dev] [PATCH v1 5/8] drm-uapi/xe: Rename query's mem_usage to mem_regions Francois Dugast
                   ` (4 subsequent siblings)
  8 siblings, 1 reply; 18+ messages in thread
From: Francois Dugast @ 2023-11-14 13:44 UTC (permalink / raw)
  To: igt-dev; +Cc: Rodrigo Vivi

From: Rodrigo Vivi <rodrigo.vivi@intel.com>

Align with kernel commit ("drm/xe/uapi: Rename *_mem_regions masks")

Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
 include/drm-uapi/xe_drm.h | 17 +++++++++--------
 lib/xe/xe_query.c         |  6 +++---
 tests/intel/xe_query.c    |  8 ++++----
 3 files changed, 16 insertions(+), 15 deletions(-)

diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index 32f6cf631..621d6c0e3 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -349,17 +349,18 @@ struct drm_xe_query_gt {
 	/** @clock_freq: A clock frequency for timestamp */
 	__u32 clock_freq;
 	/**
-	 * @native_mem_regions: Bit mask of instances from
-	 * drm_xe_query_mem_usage that lives on the same GPU/Tile and have
-	 * direct access.
+	 * @near_mem_regions: Bit mask of instances from
+	 * drm_xe_query_mem_usage that is near the current engines of this GT.
 	 */
-	__u64 native_mem_regions;
+	__u64 near_mem_regions;
 	/**
-	 * @slow_mem_regions: Bit mask of instances from
-	 * drm_xe_query_mem_usage that this GT can indirectly access, although
-	 * they live on a different GPU/Tile.
+	 * @far_mem_regions: Bit mask of instances from
+	 * drm_xe_query_mem_usage that is far from the engines of this GT.
+	 * In general, it has extra indirections when compared to the
+	 * @near_mem_regions. For a discrete device this could mean system
+	 * memory and memory living in a different Tile.
 	 */
-	__u64 slow_mem_regions;
+	__u64 far_mem_regions;
 	/** @reserved: Reserved */
 	__u64 reserved[8];
 };
diff --git a/lib/xe/xe_query.c b/lib/xe/xe_query.c
index d459893e1..c33bfd432 100644
--- a/lib/xe/xe_query.c
+++ b/lib/xe/xe_query.c
@@ -66,8 +66,8 @@ static uint64_t __memory_regions(const struct drm_xe_query_gt_list *gt_list)
 	int i;
 
 	for (i = 0; i < gt_list->num_gt; i++)
-		regions |= gt_list->gt_list[i].native_mem_regions |
-			   gt_list->gt_list[i].slow_mem_regions;
+		regions |= gt_list->gt_list[i].near_mem_regions |
+			   gt_list->gt_list[i].far_mem_regions;
 
 	return regions;
 }
@@ -123,7 +123,7 @@ static uint64_t native_region_for_gt(const struct drm_xe_query_gt_list *gt_list,
 	uint64_t region;
 
 	igt_assert(gt_list->num_gt > gt);
-	region = gt_list->gt_list[gt].native_mem_regions;
+	region = gt_list->gt_list[gt].near_mem_regions;
 	igt_assert(region);
 
 	return region;
diff --git a/tests/intel/xe_query.c b/tests/intel/xe_query.c
index 969ad1c7f..b960ccfa2 100644
--- a/tests/intel/xe_query.c
+++ b/tests/intel/xe_query.c
@@ -281,10 +281,10 @@ test_query_gt_list(int fd)
 		igt_info("type: %d\n", gt_list->gt_list[i].type);
 		igt_info("gt_id: %d\n", gt_list->gt_list[i].gt_id);
 		igt_info("clock_freq: %u\n", gt_list->gt_list[i].clock_freq);
-		igt_info("native_mem_regions: 0x%016llx\n",
-		       gt_list->gt_list[i].native_mem_regions);
-		igt_info("slow_mem_regions: 0x%016llx\n",
-		       gt_list->gt_list[i].slow_mem_regions);
+		igt_info("near_mem_regions: 0x%016llx\n",
+		       gt_list->gt_list[i].near_mem_regions);
+		igt_info("far_mem_regions: 0x%016llx\n",
+		       gt_list->gt_list[i].far_mem_regions);
 	}
 }
 
-- 
2.34.1


* [igt-dev] [PATCH v1 5/8] drm-uapi/xe: Rename query's mem_usage to mem_regions
  2023-11-14 13:44 [igt-dev] [PATCH v1 0/8] uAPI Alignment - Renaming Francois Dugast
                   ` (3 preceding siblings ...)
  2023-11-14 13:44 ` [igt-dev] [PATCH v1 4/8] drm-uapi/xe: Rename *_mem_regions mask Francois Dugast
@ 2023-11-14 13:44 ` Francois Dugast
  2023-11-14 15:30   ` Kamil Konieczny
  2023-11-14 13:44 ` [igt-dev] [PATCH v1 6/8] drm-uapi/xe: s/FLAGS_HAS_VRAM/FLAG_HAS_VRAM Francois Dugast
                   ` (3 subsequent siblings)
  8 siblings, 1 reply; 18+ messages in thread
From: Francois Dugast @ 2023-11-14 13:44 UTC (permalink / raw)
  To: igt-dev; +Cc: Rodrigo Vivi

From: Rodrigo Vivi <rodrigo.vivi@intel.com>

Align with kernel commit ("drm/xe/uapi: Rename query's mem_usage to mem_regions")

Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
 include/drm-uapi/xe_drm.h | 14 ++++-----
 lib/xe/xe_query.c         | 66 +++++++++++++++++++--------------------
 lib/xe/xe_query.h         |  4 +--
 tests/intel/xe_pm.c       | 18 +++++------
 tests/intel/xe_query.c    | 58 +++++++++++++++++-----------------
 5 files changed, 80 insertions(+), 80 deletions(-)

diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index 621d6c0e3..ec37f6811 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -291,13 +291,13 @@ struct drm_xe_query_engine_cycles {
 };
 
 /**
- * struct drm_xe_query_mem_usage - describe memory regions and usage
+ * struct drm_xe_query_mem_regions - describe memory regions
  *
  * If a query is made with a struct drm_xe_device_query where .query
- * is equal to DRM_XE_DEVICE_QUERY_MEM_USAGE, then the reply uses
- * struct drm_xe_query_mem_usage in .data.
+ * is equal to DRM_XE_DEVICE_QUERY_MEM_REGIONS, then the reply uses
+ * struct drm_xe_query_mem_regions in .data.
  */
-struct drm_xe_query_mem_usage {
+struct drm_xe_query_mem_regions {
 	/** @num_regions: number of memory regions returned in @regions */
 	__u32 num_regions;
 	/** @pad: MBZ */
@@ -350,12 +350,12 @@ struct drm_xe_query_gt {
 	__u32 clock_freq;
 	/**
 	 * @near_mem_regions: Bit mask of instances from
-	 * drm_xe_query_mem_usage that is near the current engines of this GT.
+	 * drm_xe_query_mem_regions that is near the current engines of this GT.
 	 */
 	__u64 near_mem_regions;
 	/**
 	 * @far_mem_regions: Bit mask of instances from
-	 * drm_xe_query_mem_usage that is far from the engines of this GT.
+	 * drm_xe_query_mem_regions that is far from the engines of this GT.
 	 * In general, it has extra indirections when compared to the
 	 * @near_mem_regions. For a discrete device this could mean system
 	 * memory and memory living in a different Tile.
@@ -469,7 +469,7 @@ struct drm_xe_device_query {
 	__u64 extensions;
 
 #define DRM_XE_DEVICE_QUERY_ENGINES		0
-#define DRM_XE_DEVICE_QUERY_MEM_USAGE		1
+#define DRM_XE_DEVICE_QUERY_MEM_REGIONS		1
 #define DRM_XE_DEVICE_QUERY_CONFIG		2
 #define DRM_XE_DEVICE_QUERY_GT_LIST		3
 #define DRM_XE_DEVICE_QUERY_HWCONFIG		4
diff --git a/lib/xe/xe_query.c b/lib/xe/xe_query.c
index c33bfd432..afd443be3 100644
--- a/lib/xe/xe_query.c
+++ b/lib/xe/xe_query.c
@@ -97,25 +97,25 @@ xe_query_engines_new(int fd, unsigned int *num_engines)
 	return hw_engines;
 }
 
-static struct drm_xe_query_mem_usage *xe_query_mem_usage_new(int fd)
+static struct drm_xe_query_mem_regions *xe_query_mem_regions_new(int fd)
 {
-	struct drm_xe_query_mem_usage *mem_usage;
+	struct drm_xe_query_mem_regions *mem_regions;
 	struct drm_xe_device_query query = {
 		.extensions = 0,
-		.query = DRM_XE_DEVICE_QUERY_MEM_USAGE,
+		.query = DRM_XE_DEVICE_QUERY_MEM_REGIONS,
 		.size = 0,
 		.data = 0,
 	};
 
 	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
 
-	mem_usage = malloc(query.size);
-	igt_assert(mem_usage);
+	mem_regions = malloc(query.size);
+	igt_assert(mem_regions);
 
-	query.data = to_user_pointer(mem_usage);
+	query.data = to_user_pointer(mem_regions);
 	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
 
-	return mem_usage;
+	return mem_regions;
 }
 
 static uint64_t native_region_for_gt(const struct drm_xe_query_gt_list *gt_list, int gt)
@@ -129,44 +129,44 @@ static uint64_t native_region_for_gt(const struct drm_xe_query_gt_list *gt_list,
 	return region;
 }
 
-static uint64_t gt_vram_size(const struct drm_xe_query_mem_usage *mem_usage,
+static uint64_t gt_vram_size(const struct drm_xe_query_mem_regions *mem_regions,
 			     const struct drm_xe_query_gt_list *gt_list, int gt)
 {
 	int region_idx = ffs(native_region_for_gt(gt_list, gt)) - 1;
 
-	if (XE_IS_CLASS_VRAM(&mem_usage->regions[region_idx]))
-		return mem_usage->regions[region_idx].total_size;
+	if (XE_IS_CLASS_VRAM(&mem_regions->regions[region_idx]))
+		return mem_regions->regions[region_idx].total_size;
 
 	return 0;
 }
 
-static uint64_t gt_visible_vram_size(const struct drm_xe_query_mem_usage *mem_usage,
+static uint64_t gt_visible_vram_size(const struct drm_xe_query_mem_regions *mem_regions,
 				     const struct drm_xe_query_gt_list *gt_list, int gt)
 {
 	int region_idx = ffs(native_region_for_gt(gt_list, gt)) - 1;
 
-	if (XE_IS_CLASS_VRAM(&mem_usage->regions[region_idx]))
-		return mem_usage->regions[region_idx].cpu_visible_size;
+	if (XE_IS_CLASS_VRAM(&mem_regions->regions[region_idx]))
+		return mem_regions->regions[region_idx].cpu_visible_size;
 
 	return 0;
 }
 
-static bool __mem_has_vram(struct drm_xe_query_mem_usage *mem_usage)
+static bool __mem_has_vram(struct drm_xe_query_mem_regions *mem_regions)
 {
-	for (int i = 0; i < mem_usage->num_regions; i++)
-		if (XE_IS_CLASS_VRAM(&mem_usage->regions[i]))
+	for (int i = 0; i < mem_regions->num_regions; i++)
+		if (XE_IS_CLASS_VRAM(&mem_regions->regions[i]))
 			return true;
 
 	return false;
 }
 
-static uint32_t __mem_default_alignment(struct drm_xe_query_mem_usage *mem_usage)
+static uint32_t __mem_default_alignment(struct drm_xe_query_mem_regions *mem_regions)
 {
 	uint32_t alignment = XE_DEFAULT_ALIGNMENT;
 
-	for (int i = 0; i < mem_usage->num_regions; i++)
-		if (alignment < mem_usage->regions[i].min_page_size)
-			alignment = mem_usage->regions[i].min_page_size;
+	for (int i = 0; i < mem_regions->num_regions; i++)
+		if (alignment < mem_regions->regions[i].min_page_size)
+			alignment = mem_regions->regions[i].min_page_size;
 
 	return alignment;
 }
@@ -222,7 +222,7 @@ static void xe_device_free(struct xe_device *xe_dev)
 	free(xe_dev->config);
 	free(xe_dev->gt_list);
 	free(xe_dev->hw_engines);
-	free(xe_dev->mem_usage);
+	free(xe_dev->mem_regions);
 	free(xe_dev->vram_size);
 	free(xe_dev);
 }
@@ -254,18 +254,18 @@ struct xe_device *xe_device_get(int fd)
 	xe_dev->gt_list = xe_query_gt_list_new(fd);
 	xe_dev->memory_regions = __memory_regions(xe_dev->gt_list);
 	xe_dev->hw_engines = xe_query_engines_new(fd, &xe_dev->number_hw_engines);
-	xe_dev->mem_usage = xe_query_mem_usage_new(fd);
+	xe_dev->mem_regions = xe_query_mem_regions_new(fd);
 	xe_dev->vram_size = calloc(xe_dev->gt_list->num_gt, sizeof(*xe_dev->vram_size));
 	xe_dev->visible_vram_size = calloc(xe_dev->gt_list->num_gt, sizeof(*xe_dev->visible_vram_size));
 	for (int gt = 0; gt < xe_dev->gt_list->num_gt; gt++) {
-		xe_dev->vram_size[gt] = gt_vram_size(xe_dev->mem_usage,
+		xe_dev->vram_size[gt] = gt_vram_size(xe_dev->mem_regions,
 						     xe_dev->gt_list, gt);
 		xe_dev->visible_vram_size[gt] =
-			gt_visible_vram_size(xe_dev->mem_usage,
+			gt_visible_vram_size(xe_dev->mem_regions,
 					     xe_dev->gt_list, gt);
 	}
-	xe_dev->default_alignment = __mem_default_alignment(xe_dev->mem_usage);
-	xe_dev->has_vram = __mem_has_vram(xe_dev->mem_usage);
+	xe_dev->default_alignment = __mem_default_alignment(xe_dev->mem_regions);
+	xe_dev->has_vram = __mem_has_vram(xe_dev->mem_regions);
 
 	/* We may get here from multiple threads, use first cached xe_dev */
 	pthread_mutex_lock(&cache.cache_mutex);
@@ -508,9 +508,9 @@ struct drm_xe_query_mem_region *xe_mem_region(int fd, uint64_t region)
 
 	xe_dev = find_in_cache(fd);
 	igt_assert(xe_dev);
-	igt_assert(xe_dev->mem_usage->num_regions > region_idx);
+	igt_assert(xe_dev->mem_regions->num_regions > region_idx);
 
-	return &xe_dev->mem_usage->regions[region_idx];
+	return &xe_dev->mem_regions->regions[region_idx];
 }
 
 /**
@@ -641,23 +641,23 @@ uint64_t xe_vram_available(int fd, int gt)
 	struct xe_device *xe_dev;
 	int region_idx;
 	struct drm_xe_query_mem_region *mem_region;
-	struct drm_xe_query_mem_usage *mem_usage;
+	struct drm_xe_query_mem_regions *mem_regions;
 
 	xe_dev = find_in_cache(fd);
 	igt_assert(xe_dev);
 
 	region_idx = ffs(native_region_for_gt(xe_dev->gt_list, gt)) - 1;
-	mem_region = &xe_dev->mem_usage->regions[region_idx];
+	mem_region = &xe_dev->mem_regions->regions[region_idx];
 
 	if (XE_IS_CLASS_VRAM(mem_region)) {
 		uint64_t available_vram;
 
-		mem_usage = xe_query_mem_usage_new(fd);
+		mem_regions = xe_query_mem_regions_new(fd);
 		pthread_mutex_lock(&cache.cache_mutex);
-		mem_region->used = mem_usage->regions[region_idx].used;
+		mem_region->used = mem_regions->regions[region_idx].used;
 		available_vram = mem_region->total_size - mem_region->used;
 		pthread_mutex_unlock(&cache.cache_mutex);
-		free(mem_usage);
+		free(mem_regions);
 
 		return available_vram;
 	}
diff --git a/lib/xe/xe_query.h b/lib/xe/xe_query.h
index 3d7e22a9b..38e9aa440 100644
--- a/lib/xe/xe_query.h
+++ b/lib/xe/xe_query.h
@@ -36,8 +36,8 @@ struct xe_device {
 	/** @number_hw_engines: length of hardware engines array */
 	unsigned int number_hw_engines;
 
-	/** @mem_usage: regions memory information and usage */
-	struct drm_xe_query_mem_usage *mem_usage;
+	/** @mem_regions: regions memory information and usage */
+	struct drm_xe_query_mem_regions *mem_regions;
 
 	/** @vram_size: array of vram sizes for all gt_list */
 	uint64_t *vram_size;
diff --git a/tests/intel/xe_pm.c b/tests/intel/xe_pm.c
index 18afb68b0..9423984cc 100644
--- a/tests/intel/xe_pm.c
+++ b/tests/intel/xe_pm.c
@@ -372,10 +372,10 @@ NULL));
  */
 static void test_vram_d3cold_threshold(device_t device, int sysfs_fd)
 {
-	struct drm_xe_query_mem_usage *mem_usage;
+	struct drm_xe_query_mem_regions *mem_regions;
 	struct drm_xe_device_query query = {
 		.extensions = 0,
-		.query = DRM_XE_DEVICE_QUERY_MEM_USAGE,
+		.query = DRM_XE_DEVICE_QUERY_MEM_REGIONS,
 		.size = 0,
 		.data = 0,
 	};
@@ -393,16 +393,16 @@ static void test_vram_d3cold_threshold(device_t device, int sysfs_fd)
 	igt_assert_eq(igt_ioctl(device.fd_xe, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
 	igt_assert_neq(query.size, 0);
 
-	mem_usage = malloc(query.size);
-	igt_assert(mem_usage);
+	mem_regions = malloc(query.size);
+	igt_assert(mem_regions);
 
-	query.data = to_user_pointer(mem_usage);
+	query.data = to_user_pointer(mem_regions);
 	igt_assert_eq(igt_ioctl(device.fd_xe, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
 
-	for (i = 0; i < mem_usage->num_regions; i++) {
-		if (mem_usage->regions[i].mem_class == DRM_XE_MEM_REGION_CLASS_VRAM) {
-			vram_used_mb +=  (mem_usage->regions[i].used / (1024 * 1024));
-			vram_total_mb += (mem_usage->regions[i].total_size / (1024 * 1024));
+	for (i = 0; i < mem_regions->num_regions; i++) {
+		if (mem_regions->regions[i].mem_class == DRM_XE_MEM_REGION_CLASS_VRAM) {
+			vram_used_mb +=  (mem_regions->regions[i].used / (1024 * 1024));
+			vram_total_mb += (mem_regions->regions[i].total_size / (1024 * 1024));
 		}
 	}
 
diff --git a/tests/intel/xe_query.c b/tests/intel/xe_query.c
index b960ccfa2..5860add0b 100644
--- a/tests/intel/xe_query.c
+++ b/tests/intel/xe_query.c
@@ -198,12 +198,12 @@ test_query_engines(int fd)
  *	and alignment.
  */
 static void
-test_query_mem_usage(int fd)
+test_query_mem_regions(int fd)
 {
-	struct drm_xe_query_mem_usage *mem_usage;
+	struct drm_xe_query_mem_regions *mem_regions;
 	struct drm_xe_device_query query = {
 		.extensions = 0,
-		.query = DRM_XE_DEVICE_QUERY_MEM_USAGE,
+		.query = DRM_XE_DEVICE_QUERY_MEM_REGIONS,
 		.size = 0,
 		.data = 0,
 	};
@@ -212,43 +212,43 @@ test_query_mem_usage(int fd)
 	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
 	igt_assert_neq(query.size, 0);
 
-	mem_usage = malloc(query.size);
-	igt_assert(mem_usage);
+	mem_regions = malloc(query.size);
+	igt_assert(mem_regions);
 
-	query.data = to_user_pointer(mem_usage);
+	query.data = to_user_pointer(mem_regions);
 	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
 
-	for (i = 0; i < mem_usage->num_regions; i++) {
+	for (i = 0; i < mem_regions->num_regions; i++) {
 		igt_info("mem region %d: %s\t%#llx / %#llx\n", i,
-			mem_usage->regions[i].mem_class ==
+			mem_regions->regions[i].mem_class ==
 			DRM_XE_MEM_REGION_CLASS_SYSMEM ? "SYSMEM"
-			:mem_usage->regions[i].mem_class ==
+			:mem_regions->regions[i].mem_class ==
 			DRM_XE_MEM_REGION_CLASS_VRAM ? "VRAM" : "?",
-			mem_usage->regions[i].used,
-			mem_usage->regions[i].total_size
+			mem_regions->regions[i].used,
+			mem_regions->regions[i].total_size
 		);
 		igt_info("min_page_size=0x%x\n",
-		       mem_usage->regions[i].min_page_size);
+		       mem_regions->regions[i].min_page_size);
 
 		igt_info("visible size=%lluMiB\n",
-			 mem_usage->regions[i].cpu_visible_size >> 20);
+			 mem_regions->regions[i].cpu_visible_size >> 20);
 		igt_info("visible used=%lluMiB\n",
-			 mem_usage->regions[i].cpu_visible_used >> 20);
-
-		igt_assert_lte_u64(mem_usage->regions[i].cpu_visible_size,
-				   mem_usage->regions[i].total_size);
-		igt_assert_lte_u64(mem_usage->regions[i].cpu_visible_used,
-				   mem_usage->regions[i].cpu_visible_size);
-		igt_assert_lte_u64(mem_usage->regions[i].cpu_visible_used,
-				   mem_usage->regions[i].used);
-		igt_assert_lte_u64(mem_usage->regions[i].used,
-				   mem_usage->regions[i].total_size);
-		igt_assert_lte_u64(mem_usage->regions[i].used -
-				   mem_usage->regions[i].cpu_visible_used,
-				   mem_usage->regions[i].total_size);
+			 mem_regions->regions[i].cpu_visible_used >> 20);
+
+		igt_assert_lte_u64(mem_regions->regions[i].cpu_visible_size,
+				   mem_regions->regions[i].total_size);
+		igt_assert_lte_u64(mem_regions->regions[i].cpu_visible_used,
+				   mem_regions->regions[i].cpu_visible_size);
+		igt_assert_lte_u64(mem_regions->regions[i].cpu_visible_used,
+				   mem_regions->regions[i].used);
+		igt_assert_lte_u64(mem_regions->regions[i].used,
+				   mem_regions->regions[i].total_size);
+		igt_assert_lte_u64(mem_regions->regions[i].used -
+				   mem_regions->regions[i].cpu_visible_used,
+				   mem_regions->regions[i].total_size);
 	}
-	dump_hex_debug(mem_usage, query.size);
-	free(mem_usage);
+	dump_hex_debug(mem_regions, query.size);
+	free(mem_regions);
 }
 
 /**
@@ -669,7 +669,7 @@ igt_main
 		test_query_engines(xe);
 
 	igt_subtest("query-mem-usage")
-		test_query_mem_usage(xe);
+		test_query_mem_regions(xe);
 
 	igt_subtest("query-gt-list")
 		test_query_gt_list(xe);
-- 
2.34.1


* [igt-dev] [PATCH v1 6/8] drm-uapi/xe: s/FLAGS_HAS_VRAM/FLAG_HAS_VRAM
  2023-11-14 13:44 [igt-dev] [PATCH v1 0/8] uAPI Alignment - Renaming Francois Dugast
                   ` (4 preceding siblings ...)
  2023-11-14 13:44 ` [igt-dev] [PATCH v1 5/8] drm-uapi/xe: Rename query's mem_usage to mem_regions Francois Dugast
@ 2023-11-14 13:44 ` Francois Dugast
  2023-11-14 15:41   ` Kamil Konieczny
  2023-11-14 13:44 ` [igt-dev] [PATCH v1 7/8] drm-uapi/xe: Differentiate WAIT_OP from WAIT_MASK Francois Dugast
                   ` (2 subsequent siblings)
  8 siblings, 1 reply; 18+ messages in thread
From: Francois Dugast @ 2023-11-14 13:44 UTC (permalink / raw)
  To: igt-dev; +Cc: Rodrigo Vivi

From: Rodrigo Vivi <rodrigo.vivi@intel.com>

Align with commit ("drm/xe/uapi: Standardize the FLAG naming and assignment")

Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
 include/drm-uapi/xe_drm.h | 18 +++++++++---------
 tests/intel/xe_debugfs.c  |  4 ++--
 tests/intel/xe_query.c    |  4 ++--
 3 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index ec37f6811..2dae8b03e 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -323,7 +323,7 @@ struct drm_xe_query_config {
 
 #define DRM_XE_QUERY_CONFIG_REV_AND_DEVICE_ID	0
 #define DRM_XE_QUERY_CONFIG_FLAGS			1
-	#define DRM_XE_QUERY_CONFIG_FLAGS_HAS_VRAM		(0x1 << 0)
+	#define DRM_XE_QUERY_CONFIG_FLAG_HAS_VRAM	(1 << 0)
 #define DRM_XE_QUERY_CONFIG_MIN_ALIGNMENT		2
 #define DRM_XE_QUERY_CONFIG_VA_BITS			3
 #define DRM_XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY	4
@@ -587,10 +587,10 @@ struct drm_xe_vm_create {
 	/** @extensions: Pointer to the first extension struct, if any */
 	__u64 extensions;
 
-#define DRM_XE_VM_CREATE_FLAG_SCRATCH_PAGE	(0x1 << 0)
-#define DRM_XE_VM_CREATE_FLAG_COMPUTE_MODE	(0x1 << 1)
-#define DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT	(0x1 << 2)
-#define DRM_XE_VM_CREATE_FLAG_FAULT_MODE	(0x1 << 3)
+#define DRM_XE_VM_CREATE_FLAG_SCRATCH_PAGE	(1 << 0)
+#define DRM_XE_VM_CREATE_FLAG_COMPUTE_MODE	(1 << 1)
+#define DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT	(1 << 2)
+#define DRM_XE_VM_CREATE_FLAG_FAULT_MODE	(1 << 3)
 	/** @flags: Flags */
 	__u32 flags;
 
@@ -654,13 +654,13 @@ struct drm_xe_vm_bind_op {
 	/** @op: Bind operation to perform */
 	__u32 op;
 
-#define DRM_XE_VM_BIND_FLAG_READONLY	(0x1 << 0)
-#define DRM_XE_VM_BIND_FLAG_ASYNC	(0x1 << 1)
+#define DRM_XE_VM_BIND_FLAG_READONLY	(1 << 0)
+#define DRM_XE_VM_BIND_FLAG_ASYNC	(1 << 1)
 	/*
 	 * Valid on a faulting VM only, do the MAP operation immediately rather
 	 * than deferring the MAP to the page fault handler.
 	 */
-#define DRM_XE_VM_BIND_FLAG_IMMEDIATE	(0x1 << 2)
+#define DRM_XE_VM_BIND_FLAG_IMMEDIATE	(1 << 2)
 	/*
 	 * When the NULL flag is set, the page tables are setup with a special
 	 * bit which indicates writes are dropped and all reads return zero.  In
@@ -668,7 +668,7 @@ struct drm_xe_vm_bind_op {
 	 * operations, the BO handle MBZ, and the BO offset MBZ. This flag is
 	 * intended to implement VK sparse bindings.
 	 */
-#define DRM_XE_VM_BIND_FLAG_NULL	(0x1 << 3)
+#define DRM_XE_VM_BIND_FLAG_NULL	(1 << 3)
 	/** @flags: Bind flags */
 	__u32 flags;
 
diff --git a/tests/intel/xe_debugfs.c b/tests/intel/xe_debugfs.c
index 60ddceda7..4fd5ebc28 100644
--- a/tests/intel/xe_debugfs.c
+++ b/tests/intel/xe_debugfs.c
@@ -99,7 +99,7 @@ test_base(int fd, struct drm_xe_query_config *config)
 	igt_assert(igt_debugfs_search(fd, "info", reference));
 
 	sprintf(reference, "is_dgfx %s", config->info[DRM_XE_QUERY_CONFIG_FLAGS] &
-		DRM_XE_QUERY_CONFIG_FLAGS_HAS_VRAM ? "yes" : "no");
+		DRM_XE_QUERY_CONFIG_FLAG_HAS_VRAM ? "yes" : "no");
 
 	igt_assert(igt_debugfs_search(fd, "info", reference));
 
@@ -125,7 +125,7 @@ test_base(int fd, struct drm_xe_query_config *config)
 	igt_assert(igt_debugfs_exists(fd, "gtt_mm", O_RDONLY));
 	igt_debugfs_dump(fd, "gtt_mm");
 
-	if (config->info[DRM_XE_QUERY_CONFIG_FLAGS] & DRM_XE_QUERY_CONFIG_FLAGS_HAS_VRAM) {
+	if (config->info[DRM_XE_QUERY_CONFIG_FLAGS] & DRM_XE_QUERY_CONFIG_FLAG_HAS_VRAM) {
 		igt_assert(igt_debugfs_exists(fd, "vram0_mm", O_RDONLY));
 		igt_debugfs_dump(fd, "vram0_mm");
 	}
diff --git a/tests/intel/xe_query.c b/tests/intel/xe_query.c
index 5860add0b..4a23dcb60 100644
--- a/tests/intel/xe_query.c
+++ b/tests/intel/xe_query.c
@@ -367,9 +367,9 @@ test_query_config(int fd)
 		config->info[DRM_XE_QUERY_CONFIG_REV_AND_DEVICE_ID] & 0xffff);
 	igt_info("DRM_XE_QUERY_CONFIG_FLAGS\t\t\t%#llx\n",
 		config->info[DRM_XE_QUERY_CONFIG_FLAGS]);
-	igt_info("  DRM_XE_QUERY_CONFIG_FLAGS_HAS_VRAM\t%s\n",
+	igt_info("  DRM_XE_QUERY_CONFIG_FLAG_HAS_VRAM\t%s\n",
 		config->info[DRM_XE_QUERY_CONFIG_FLAGS] &
-		DRM_XE_QUERY_CONFIG_FLAGS_HAS_VRAM ? "ON":"OFF");
+		DRM_XE_QUERY_CONFIG_FLAG_HAS_VRAM ? "ON":"OFF");
 	igt_info("DRM_XE_QUERY_CONFIG_MIN_ALIGNMENT\t\t%#llx\n",
 		config->info[DRM_XE_QUERY_CONFIG_MIN_ALIGNMENT]);
 	igt_info("DRM_XE_QUERY_CONFIG_VA_BITS\t\t\t%llu\n",
-- 
2.34.1


* [igt-dev] [PATCH v1 7/8] drm-uapi/xe: Differentiate WAIT_OP from WAIT_MASK
  2023-11-14 13:44 [igt-dev] [PATCH v1 0/8] uAPI Alignment - Renaming Francois Dugast
                   ` (5 preceding siblings ...)
  2023-11-14 13:44 ` [igt-dev] [PATCH v1 6/8] drm-uapi/xe: s/FLAGS_HAS_VRAM/FLAG_HAS_VRAM Francois Dugast
@ 2023-11-14 13:44 ` Francois Dugast
  2023-11-14 15:50   ` Kamil Konieczny
  2023-11-14 13:44 ` [igt-dev] [PATCH v1 8/8] drm-uapi/xe: Be more specific about vm_bind prefetch region Francois Dugast
  2023-11-14 15:11 ` [igt-dev] ✗ Fi.CI.BUILD: failure for uAPI Alignment - Renaming Patchwork
  8 siblings, 1 reply; 18+ messages in thread
From: Francois Dugast @ 2023-11-14 13:44 UTC (permalink / raw)
  To: igt-dev; +Cc: Rodrigo Vivi

From: Rodrigo Vivi <rodrigo.vivi@intel.com>

Align with kernel commit ("drm/xe/uapi: Differentiate WAIT_OP from WAIT_MASK")

Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
 include/drm-uapi/xe_drm.h  | 21 +++++++++++----------
 lib/xe/xe_ioctl.c          |  8 ++++----
 tests/intel/xe_waitfence.c | 10 +++++-----
 3 files changed, 20 insertions(+), 19 deletions(-)

diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index 2dae8b03e..7a02b78bf 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -914,12 +914,12 @@ struct drm_xe_wait_user_fence {
 	 */
 	__u64 addr;
 
-#define DRM_XE_UFENCE_WAIT_EQ	0
-#define DRM_XE_UFENCE_WAIT_NEQ	1
-#define DRM_XE_UFENCE_WAIT_GT	2
-#define DRM_XE_UFENCE_WAIT_GTE	3
-#define DRM_XE_UFENCE_WAIT_LT	4
-#define DRM_XE_UFENCE_WAIT_LTE	5
+#define DRM_XE_UFENCE_WAIT_OP_EQ	0x0
+#define DRM_XE_UFENCE_WAIT_OP_NEQ	0x1
+#define DRM_XE_UFENCE_WAIT_OP_GT	0x2
+#define DRM_XE_UFENCE_WAIT_OP_GTE	0x3
+#define DRM_XE_UFENCE_WAIT_OP_LT	0x4
+#define DRM_XE_UFENCE_WAIT_OP_LTE	0x5
 	/** @op: wait operation (type of comparison) */
 	__u16 op;
 
@@ -934,12 +934,13 @@ struct drm_xe_wait_user_fence {
 	/** @value: compare value */
 	__u64 value;
 
-#define DRM_XE_UFENCE_WAIT_U8		0xffu
-#define DRM_XE_UFENCE_WAIT_U16		0xffffu
-#define DRM_XE_UFENCE_WAIT_U32		0xffffffffu
-#define DRM_XE_UFENCE_WAIT_U64		0xffffffffffffffffu
+#define DRM_XE_UFENCE_WAIT_MASK_U8	0xffu
+#define DRM_XE_UFENCE_WAIT_MASK_U16	0xffffu
+#define DRM_XE_UFENCE_WAIT_MASK_U32	0xffffffffu
+#define DRM_XE_UFENCE_WAIT_MASK_U64	0xffffffffffffffffu
 	/** @mask: comparison mask */
 	__u64 mask;
+
 	/**
 	 * @timeout: how long to wait before bailing, value in nanoseconds.
 	 * Without DRM_XE_UFENCE_WAIT_FLAG_ABSTIME flag set (relative timeout)
diff --git a/lib/xe/xe_ioctl.c b/lib/xe/xe_ioctl.c
index db41d5ba5..a9cfdbf9d 100644
--- a/lib/xe/xe_ioctl.c
+++ b/lib/xe/xe_ioctl.c
@@ -415,10 +415,10 @@ int64_t xe_wait_ufence(int fd, uint64_t *addr, uint64_t value,
 {
 	struct drm_xe_wait_user_fence wait = {
 		.addr = to_user_pointer(addr),
-		.op = DRM_XE_UFENCE_WAIT_EQ,
+		.op = DRM_XE_UFENCE_WAIT_OP_EQ,
 		.flags = !eci ? DRM_XE_UFENCE_WAIT_FLAG_SOFT_OP : 0,
 		.value = value,
-		.mask = DRM_XE_UFENCE_WAIT_U64,
+		.mask = DRM_XE_UFENCE_WAIT_MASK_U64,
 		.timeout = timeout,
 		.num_engines = eci ? 1 :0,
 		.instances = eci ? to_user_pointer(eci) : 0,
@@ -447,10 +447,10 @@ int64_t xe_wait_ufence_abstime(int fd, uint64_t *addr, uint64_t value,
 {
 	struct drm_xe_wait_user_fence wait = {
 		.addr = to_user_pointer(addr),
-		.op = DRM_XE_UFENCE_WAIT_EQ,
+		.op = DRM_XE_UFENCE_WAIT_OP_EQ,
 		.flags = !eci ? DRM_XE_UFENCE_WAIT_FLAG_SOFT_OP | DRM_XE_UFENCE_WAIT_FLAG_ABSTIME : 0,
 		.value = value,
-		.mask = DRM_XE_UFENCE_WAIT_U64,
+		.mask = DRM_XE_UFENCE_WAIT_MASK_U64,
 		.timeout = timeout,
 		.num_engines = eci ? 1 : 0,
 		.instances = eci ? to_user_pointer(eci) : 0,
diff --git a/tests/intel/xe_waitfence.c b/tests/intel/xe_waitfence.c
index 2efdc1245..b1cae0d9b 100644
--- a/tests/intel/xe_waitfence.c
+++ b/tests/intel/xe_waitfence.c
@@ -123,10 +123,10 @@ invalid_flag(int fd)
 
 	struct drm_xe_wait_user_fence wait = {
 		.addr = to_user_pointer(&wait_fence),
-		.op = DRM_XE_UFENCE_WAIT_EQ,
+		.op = DRM_XE_UFENCE_WAIT_OP_EQ,
 		.flags = -1,
 		.value = 1,
-		.mask = DRM_XE_UFENCE_WAIT_U64,
+		.mask = DRM_XE_UFENCE_WAIT_MASK_U64,
 		.timeout = -1,
 		.num_engines = 0,
 		.instances = 0,
@@ -151,7 +151,7 @@ invalid_ops(int fd)
 		.op = -1,
 		.flags = 0,
 		.value = 1,
-		.mask = DRM_XE_UFENCE_WAIT_U64,
+		.mask = DRM_XE_UFENCE_WAIT_MASK_U64,
 		.timeout = 1,
 		.num_engines = 0,
 		.instances = 0,
@@ -173,10 +173,10 @@ invalid_engine(int fd)
 
 	struct drm_xe_wait_user_fence wait = {
 		.addr = to_user_pointer(&wait_fence),
-		.op = DRM_XE_UFENCE_WAIT_EQ,
+		.op = DRM_XE_UFENCE_WAIT_OP_EQ,
 		.flags = 0,
 		.value = 1,
-		.mask = DRM_XE_UFENCE_WAIT_U64,
+		.mask = DRM_XE_UFENCE_WAIT_MASK_U64,
 		.timeout = -1,
 		.num_engines = 1,
 		.instances = 0,
-- 
2.34.1


* [igt-dev] [PATCH v1 8/8] drm-uapi/xe: Be more specific about vm_bind prefetch region
  2023-11-14 13:44 [igt-dev] [PATCH v1 0/8] uAPI Alignment - Renaming Francois Dugast
                   ` (6 preceding siblings ...)
  2023-11-14 13:44 ` [igt-dev] [PATCH v1 7/8] drm-uapi/xe: Differentiate WAIT_OP from WAIT_MASK Francois Dugast
@ 2023-11-14 13:44 ` Francois Dugast
  2023-11-14 17:04   ` Kamil Konieczny
  2023-11-14 15:11 ` [igt-dev] ✗ Fi.CI.BUILD: failure for uAPI Alignment - Renaming Patchwork
  8 siblings, 1 reply; 18+ messages in thread
From: Francois Dugast @ 2023-11-14 13:44 UTC (permalink / raw)
  To: igt-dev; +Cc: Rodrigo Vivi

From: Rodrigo Vivi <rodrigo.vivi@intel.com>

Align with kernel commit ("drm/xe/uapi: Be more specific about the vm_bind prefetch region")

Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
 include/drm-uapi/xe_drm.h | 8 ++++++--
 lib/intel_batchbuffer.c   | 4 ++--
 lib/xe/xe_ioctl.c         | 8 ++++----
 lib/xe/xe_ioctl.h         | 2 +-
 lib/xe/xe_util.c          | 2 +-
 tests/intel/xe_vm.c       | 2 +-
 6 files changed, 15 insertions(+), 11 deletions(-)

diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index 7a02b78bf..af32ec161 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -672,8 +672,12 @@ struct drm_xe_vm_bind_op {
 	/** @flags: Bind flags */
 	__u32 flags;
 
-	/** @mem_region: Memory region to prefetch VMA to, instance not a mask */
-	__u32 region;
+	/**
+	 * @prefetch_mem_region_instance: Memory region to prefetch VMA to.
+	 * It is a region instance, not a mask.
+	 * To be used only with %DRM_XE_VM_BIND_OP_PREFETCH operation.
+	 */
+	__u32 prefetch_mem_region_instance;
 
 	/** @reserved: Reserved */
 	__u64 reserved[2];
diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
index b59c490db..f12d6219d 100644
--- a/lib/intel_batchbuffer.c
+++ b/lib/intel_batchbuffer.c
@@ -1282,7 +1282,7 @@ void intel_bb_destroy(struct intel_bb *ibb)
 
 static struct drm_xe_vm_bind_op *xe_alloc_bind_ops(struct intel_bb *ibb,
 						   uint32_t op, uint32_t flags,
-						   uint32_t region)
+						   uint32_t prefetch_region)
 {
 	struct drm_i915_gem_exec_object2 **objects = ibb->objects;
 	struct drm_xe_vm_bind_op *bind_ops, *ops;
@@ -1303,7 +1303,7 @@ static struct drm_xe_vm_bind_op *xe_alloc_bind_ops(struct intel_bb *ibb,
 		ops->obj_offset = 0;
 		ops->addr = objects[i]->offset;
 		ops->range = objects[i]->rsvd1;
-		ops->region = region;
+		ops->prefetch_mem_region_instance = prefetch_region;
 
 		igt_debug("  [%d]: handle: %u, offset: %llx, size: %llx\n",
 			  i, ops->obj, (long long)ops->addr, (long long)ops->range);
diff --git a/lib/xe/xe_ioctl.c b/lib/xe/xe_ioctl.c
index a9cfdbf9d..738c4ffdb 100644
--- a/lib/xe/xe_ioctl.c
+++ b/lib/xe/xe_ioctl.c
@@ -92,7 +92,7 @@ void xe_vm_bind_array(int fd, uint32_t vm, uint32_t exec_queue,
 int  __xe_vm_bind(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
 		  uint64_t offset, uint64_t addr, uint64_t size, uint32_t op,
 		  uint32_t flags, struct drm_xe_sync *sync, uint32_t num_syncs,
-		  uint32_t region, uint64_t ext)
+		  uint32_t prefetch_region, uint64_t ext)
 {
 	struct drm_xe_vm_bind bind = {
 		.extensions = ext,
@@ -104,7 +104,7 @@ int  __xe_vm_bind(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
 		.bind.addr = addr,
 		.bind.op = op,
 		.bind.flags = flags,
-		.bind.region = region,
+		.bind.prefetch_mem_region_instance = prefetch_region,
 		.num_syncs = num_syncs,
 		.syncs = (uintptr_t)sync,
 		.exec_queue_id = exec_queue,
@@ -119,10 +119,10 @@ int  __xe_vm_bind(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
 void  __xe_vm_bind_assert(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
 			  uint64_t offset, uint64_t addr, uint64_t size,
 			  uint32_t op, uint32_t flags, struct drm_xe_sync *sync,
-			  uint32_t num_syncs, uint32_t region, uint64_t ext)
+			  uint32_t num_syncs, uint32_t prefetch_region, uint64_t ext)
 {
 	igt_assert_eq(__xe_vm_bind(fd, vm, exec_queue, bo, offset, addr, size,
-				   op, flags, sync, num_syncs, region, ext), 0);
+				   op, flags, sync, num_syncs, prefetch_region, ext), 0);
 }
 
 void xe_vm_bind(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
diff --git a/lib/xe/xe_ioctl.h b/lib/xe/xe_ioctl.h
index d9c97bf22..a9171bcf7 100644
--- a/lib/xe/xe_ioctl.h
+++ b/lib/xe/xe_ioctl.h
@@ -24,7 +24,7 @@ int  __xe_vm_bind(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
 void  __xe_vm_bind_assert(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
 			  uint64_t offset, uint64_t addr, uint64_t size,
 			  uint32_t op, uint32_t flags, struct drm_xe_sync *sync,
-			  uint32_t num_syncs, uint32_t region, uint64_t ext);
+			  uint32_t num_syncs, uint32_t prefetch_region, uint64_t ext);
 void xe_vm_bind(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
 		uint64_t addr, uint64_t size,
 		struct drm_xe_sync *sync, uint32_t num_syncs);
diff --git a/lib/xe/xe_util.c b/lib/xe/xe_util.c
index 2635edf72..742e6333e 100644
--- a/lib/xe/xe_util.c
+++ b/lib/xe/xe_util.c
@@ -147,7 +147,7 @@ static struct drm_xe_vm_bind_op *xe_alloc_bind_ops(struct igt_list_head *obj_lis
 		ops->obj_offset = 0;
 		ops->addr = obj->offset;
 		ops->range = obj->size;
-		ops->region = 0;
+		ops->prefetch_mem_region_instance = 0;
 
 		bind_info("  [%d]: [%6s] handle: %u, offset: %llx, size: %llx\n",
 			  i, obj->bind_op == XE_OBJECT_BIND ? "BIND" : "UNBIND",
diff --git a/tests/intel/xe_vm.c b/tests/intel/xe_vm.c
index 86c8d0c5d..05e8e7516 100644
--- a/tests/intel/xe_vm.c
+++ b/tests/intel/xe_vm.c
@@ -797,7 +797,7 @@ test_bind_array(int fd, struct drm_xe_engine_class_instance *eci, int n_execs,
 		bind_ops[i].tile_mask = 0x1 << eci->gt_id;
 		bind_ops[i].op = DRM_XE_VM_BIND_OP_MAP;
 		bind_ops[i].flags = DRM_XE_VM_BIND_FLAG_ASYNC;
-		bind_ops[i].region = 0;
+		bind_ops[i].prefetch_mem_region_instance = 0;
 		bind_ops[i].reserved[0] = 0;
 		bind_ops[i].reserved[1] = 0;
 
-- 
2.34.1

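[Editorial note: a minimal, self-contained sketch of the rename in patch 8/8 above. The struct here is a hypothetical stand-in carrying only the fields the hunks touch, not the real drm_xe_vm_bind_op uAPI layout; it only illustrates that the former `region` member is now spelled `prefetch_mem_region_instance`.]

```c
#include <stdint.h>
#include <string.h>

/* Trimmed-down stand-in for struct drm_xe_vm_bind_op: just the members
 * the rename touches, NOT the real uAPI layout. */
struct bind_op_sketch {
	uint64_t addr;
	uint64_t range;
	uint32_t prefetch_mem_region_instance;	/* was: region */
};

/* Fill one op the way xe_alloc_bind_ops() does after this series;
 * instance 0 means no particular prefetch region is requested. */
static void fill_op(struct bind_op_sketch *op, uint64_t addr,
		    uint64_t range, uint32_t prefetch_region)
{
	memset(op, 0, sizeof(*op));
	op->addr = addr;
	op->range = range;
	op->prefetch_mem_region_instance = prefetch_region;
}
```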

* Re: [igt-dev] [PATCH v1 2/8] drm-uapi/xe: Add _FLAG to uAPI constants usable for flags
  2023-11-14 13:44 ` [igt-dev] [PATCH v1 2/8] drm-uapi/xe: Add _FLAG to uAPI constants usable for flags Francois Dugast
@ 2023-11-14 13:47   ` Rodrigo Vivi
  0 siblings, 0 replies; 18+ messages in thread
From: Rodrigo Vivi @ 2023-11-14 13:47 UTC (permalink / raw)
  To: Francois Dugast; +Cc: igt-dev

On Tue, Nov 14, 2023 at 01:44:20PM +0000, Francois Dugast wrote:
> Align with commit ("drm/xe/uapi: Add _FLAG to uAPI constants usable for flags")
> 
> Signed-off-by: Francois Dugast <francois.dugast@intel.com>


Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
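
[Editorial note: a minimal sketch of the set/clear pattern the renamed constants are used with throughout the hunks below. The values are copied from the xe_drm.h hunk in this patch; the helper names are illustrative, not part of IGT or the uAPI.]

```c
#include <stdint.h>

/* Renamed sync flag constants, values as in the xe_drm.h hunk. */
#define DRM_XE_SYNC_FLAG_SYNCOBJ		0x0
#define DRM_XE_SYNC_FLAG_TIMELINE_SYNCOBJ	0x1
#define DRM_XE_SYNC_FLAG_DMA_BUF		0x2
#define DRM_XE_SYNC_FLAG_USER_FENCE		0x3
#define DRM_XE_SYNC_FLAG_SIGNAL			0x10

/* The tests reuse one sync entry across submissions, toggling only the
 * SIGNAL bit while the low type bits (SYNCOBJ, USER_FENCE, ...) stay put. */
static uint32_t drop_signal(uint32_t flags)
{
	return flags & ~DRM_XE_SYNC_FLAG_SIGNAL;
}

static uint32_t add_signal(uint32_t flags)
{
	return flags | DRM_XE_SYNC_FLAG_SIGNAL;
}
```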


> ---
>  benchmarks/gem_wsim.c              |  12 +--
>  include/drm-uapi/xe_drm.h          |  30 +++----
>  lib/igt_fb.c                       |   2 +-
>  lib/intel_batchbuffer.c            |  12 +--
>  lib/intel_compute.c                |   6 +-
>  lib/intel_ctx.c                    |   4 +-
>  lib/xe/xe_ioctl.c                  |   6 +-
>  lib/xe/xe_query.c                  |   4 +-
>  lib/xe/xe_spin.c                   |   4 +-
>  lib/xe/xe_util.c                   |   4 +-
>  tests/intel/xe_ccs.c               |   4 +-
>  tests/intel/xe_copy_basic.c        |   2 +-
>  tests/intel/xe_create.c            |   6 +-
>  tests/intel/xe_dma_buf_sync.c      |   4 +-
>  tests/intel/xe_drm_fdinfo.c        |  18 ++---
>  tests/intel/xe_evict.c             |  24 +++---
>  tests/intel/xe_evict_ccs.c         |   2 +-
>  tests/intel/xe_exec_balancer.c     |  34 ++++----
>  tests/intel/xe_exec_basic.c        |  16 ++--
>  tests/intel/xe_exec_compute_mode.c |   6 +-
>  tests/intel/xe_exec_fault_mode.c   |   8 +-
>  tests/intel/xe_exec_reset.c        |  42 +++++-----
>  tests/intel/xe_exec_store.c        |  26 +++---
>  tests/intel/xe_exec_threads.c      |  44 +++++-----
>  tests/intel/xe_exercise_blt.c      |   2 +-
>  tests/intel/xe_guc_pc.c            |  12 +--
>  tests/intel/xe_huc_copy.c          |   4 +-
>  tests/intel/xe_intel_bb.c          |   2 +-
>  tests/intel/xe_noexec_ping_pong.c  |   2 +-
>  tests/intel/xe_perf_pmu.c          |  24 +++---
>  tests/intel/xe_pm.c                |  12 +--
>  tests/intel/xe_pm_residency.c      |   2 +-
>  tests/intel/xe_spin_batch.c        |   2 +-
>  tests/intel/xe_vm.c                | 126 ++++++++++++++---------------
>  tests/intel/xe_waitfence.c         |  10 +--
>  35 files changed, 259 insertions(+), 259 deletions(-)
> 
> diff --git a/benchmarks/gem_wsim.c b/benchmarks/gem_wsim.c
> index 28b809520..df4850086 100644
> --- a/benchmarks/gem_wsim.c
> +++ b/benchmarks/gem_wsim.c
> @@ -1772,21 +1772,21 @@ xe_alloc_step_batch(struct workload *wrk, struct w_step *w)
>  	i = 0;
>  	/* out fence */
>  	w->xe.syncs[i].handle = syncobj_create(fd, 0);
> -	w->xe.syncs[i++].flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL;
> +	w->xe.syncs[i++].flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL;
>  	/* in fence(s) */
>  	for_each_dep(dep, w->data_deps) {
>  		int dep_idx = w->idx + dep->target;
>  
>  		igt_assert(wrk->steps[dep_idx].xe.syncs && wrk->steps[dep_idx].xe.syncs[0].handle);
>  		w->xe.syncs[i].handle = wrk->steps[dep_idx].xe.syncs[0].handle;
> -		w->xe.syncs[i++].flags = DRM_XE_SYNC_SYNCOBJ;
> +		w->xe.syncs[i++].flags = DRM_XE_SYNC_FLAG_SYNCOBJ;
>  	}
>  	for_each_dep(dep, w->fence_deps) {
>  		int dep_idx = w->idx + dep->target;
>  
>  		igt_assert(wrk->steps[dep_idx].xe.syncs && wrk->steps[dep_idx].xe.syncs[0].handle);
>  		w->xe.syncs[i].handle = wrk->steps[dep_idx].xe.syncs[0].handle;
> -		w->xe.syncs[i++].flags = DRM_XE_SYNC_SYNCOBJ;
> +		w->xe.syncs[i++].flags = DRM_XE_SYNC_FLAG_SYNCOBJ;
>  	}
>  	w->xe.exec.syncs = to_user_pointer(w->xe.syncs);
>  }
> @@ -2024,8 +2024,8 @@ static void xe_vm_create_(struct xe_vm *vm)
>  	uint32_t flags = 0;
>  
>  	if (vm->compute_mode)
> -		flags |= DRM_XE_VM_CREATE_ASYNC_DEFAULT |
> -			 DRM_XE_VM_CREATE_COMPUTE_MODE;
> +		flags |= DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT |
> +			 DRM_XE_VM_CREATE_FLAG_COMPUTE_MODE;
>  
>  	vm->id = xe_vm_create(fd, flags, 0);
>  }
> @@ -2363,7 +2363,7 @@ static int xe_prepare_contexts(unsigned int id, struct workload *wrk)
>  		if (w->type == SW_FENCE) {
>  			w->xe.syncs = calloc(1, sizeof(struct drm_xe_sync));
>  			w->xe.syncs[0].handle = syncobj_create(fd, 0);
> -			w->xe.syncs[0].flags = DRM_XE_SYNC_SYNCOBJ;
> +			w->xe.syncs[0].flags = DRM_XE_SYNC_FLAG_SYNCOBJ;
>  		}
>  
>  	return 0;
> diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
> index 9ab6c3269..68d005202 100644
> --- a/include/drm-uapi/xe_drm.h
> +++ b/include/drm-uapi/xe_drm.h
> @@ -585,10 +585,10 @@ struct drm_xe_vm_create {
>  	/** @extensions: Pointer to the first extension struct, if any */
>  	__u64 extensions;
>  
> -#define DRM_XE_VM_CREATE_SCRATCH_PAGE		(0x1 << 0)
> -#define DRM_XE_VM_CREATE_COMPUTE_MODE		(0x1 << 1)
> -#define DRM_XE_VM_CREATE_ASYNC_DEFAULT		(0x1 << 2)
> -#define DRM_XE_VM_CREATE_FAULT_MODE		(0x1 << 3)
> +#define DRM_XE_VM_CREATE_FLAG_SCRATCH_PAGE	(0x1 << 0)
> +#define DRM_XE_VM_CREATE_FLAG_COMPUTE_MODE	(0x1 << 1)
> +#define DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT	(0x1 << 2)
> +#define DRM_XE_VM_CREATE_FLAG_FAULT_MODE	(0x1 << 3)
>  	/** @flags: Flags */
>  	__u32 flags;
>  
> @@ -831,11 +831,11 @@ struct drm_xe_sync {
>  	/** @extensions: Pointer to the first extension struct, if any */
>  	__u64 extensions;
>  
> -#define DRM_XE_SYNC_SYNCOBJ		0x0
> -#define DRM_XE_SYNC_TIMELINE_SYNCOBJ	0x1
> -#define DRM_XE_SYNC_DMA_BUF		0x2
> -#define DRM_XE_SYNC_USER_FENCE		0x3
> -#define DRM_XE_SYNC_SIGNAL		0x10
> +#define DRM_XE_SYNC_FLAG_SYNCOBJ		0x0
> +#define DRM_XE_SYNC_FLAG_TIMELINE_SYNCOBJ	0x1
> +#define DRM_XE_SYNC_FLAG_DMA_BUF		0x2
> +#define DRM_XE_SYNC_FLAG_USER_FENCE		0x3
> +#define DRM_XE_SYNC_FLAG_SIGNAL		0x10
>  	__u32 flags;
>  
>  	/** @pad: MBZ */
> @@ -921,8 +921,8 @@ struct drm_xe_wait_user_fence {
>  	/** @op: wait operation (type of comparison) */
>  	__u16 op;
>  
> -#define DRM_XE_UFENCE_WAIT_SOFT_OP	(1 << 0)	/* e.g. Wait on VM bind */
> -#define DRM_XE_UFENCE_WAIT_ABSTIME	(1 << 1)
> +#define DRM_XE_UFENCE_WAIT_FLAG_SOFT_OP	(1 << 0)	/* e.g. Wait on VM bind */
> +#define DRM_XE_UFENCE_WAIT_FLAG_ABSTIME	(1 << 1)
>  	/** @flags: wait flags */
>  	__u16 flags;
>  
> @@ -940,10 +940,10 @@ struct drm_xe_wait_user_fence {
>  	__u64 mask;
>  	/**
>  	 * @timeout: how long to wait before bailing, value in nanoseconds.
> -	 * Without DRM_XE_UFENCE_WAIT_ABSTIME flag set (relative timeout)
> +	 * Without DRM_XE_UFENCE_WAIT_FLAG_ABSTIME flag set (relative timeout)
>  	 * it contains timeout expressed in nanoseconds to wait (fence will
>  	 * expire at now() + timeout).
> -	 * When DRM_XE_UFENCE_WAIT_ABSTIME flat is set (absolute timeout) wait
> +	 * When DRM_XE_UFENCE_WAIT_FLAG_ABSTIME flag is set (absolute timeout) wait
>  	 * will end at timeout (uses system MONOTONIC_CLOCK).
>  	 * Passing negative timeout leads to neverending wait.
>  	 *
> @@ -956,13 +956,13 @@ struct drm_xe_wait_user_fence {
>  
>  	/**
>  	 * @num_engines: number of engine instances to wait on, must be zero
> -	 * when DRM_XE_UFENCE_WAIT_SOFT_OP set
> +	 * when DRM_XE_UFENCE_WAIT_FLAG_SOFT_OP set
>  	 */
>  	__u64 num_engines;
>  
>  	/**
>  	 * @instances: user pointer to array of drm_xe_engine_class_instance to
> -	 * wait on, must be NULL when DRM_XE_UFENCE_WAIT_SOFT_OP set
> +	 * wait on, must be NULL when DRM_XE_UFENCE_WAIT_FLAG_SOFT_OP set
>  	 */
>  	__u64 instances;
>  
> diff --git a/lib/igt_fb.c b/lib/igt_fb.c
> index e531a041e..e70d2e3ce 100644
> --- a/lib/igt_fb.c
> +++ b/lib/igt_fb.c
> @@ -2892,7 +2892,7 @@ static void blitcopy(const struct igt_fb *dst_fb,
>  							  &bb_size,
>  							  mem_region) == 0);
>  	} else if (is_xe) {
> -		vm = xe_vm_create(dst_fb->fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> +		vm = xe_vm_create(dst_fb->fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  		exec_queue = xe_exec_queue_create(dst_fb->fd, vm, &inst, 0);
>  		xe_ctx = intel_ctx_xe(dst_fb->fd, vm, exec_queue, 0, 0, 0);
>  		mem_region = vram_if_possible(dst_fb->fd, 0);
> diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
> index eb47ede50..b59c490db 100644
> --- a/lib/intel_batchbuffer.c
> +++ b/lib/intel_batchbuffer.c
> @@ -953,7 +953,7 @@ __intel_bb_create(int fd, uint32_t ctx, uint32_t vm, const intel_ctx_cfg_t *cfg,
>  
>  		if (!vm) {
>  			igt_assert_f(!ctx, "No vm provided for engine");
> -			vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> +			vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  		}
>  
>  		ibb->uses_full_ppgtt = true;
> @@ -1315,8 +1315,8 @@ static struct drm_xe_vm_bind_op *xe_alloc_bind_ops(struct intel_bb *ibb,
>  static void __unbind_xe_objects(struct intel_bb *ibb)
>  {
>  	struct drm_xe_sync syncs[2] = {
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ },
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
>  	};
>  	int ret;
>  
> @@ -2302,8 +2302,8 @@ __xe_bb_exec(struct intel_bb *ibb, uint64_t flags, bool sync)
>  	uint32_t engine = flags & (I915_EXEC_BSD_MASK | I915_EXEC_RING_MASK);
>  	uint32_t engine_id;
>  	struct drm_xe_sync syncs[2] = {
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
>  	};
>  	struct drm_xe_vm_bind_op *bind_ops;
>  	void *map;
> @@ -2371,7 +2371,7 @@ __xe_bb_exec(struct intel_bb *ibb, uint64_t flags, bool sync)
>  	}
>  	ibb->xe_bound = true;
>  
> -	syncs[0].flags &= ~DRM_XE_SYNC_SIGNAL;
> +	syncs[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
>  	ibb->engine_syncobj = syncobj_create(ibb->fd, 0);
>  	syncs[1].handle = ibb->engine_syncobj;
>  
> diff --git a/lib/intel_compute.c b/lib/intel_compute.c
> index 7f1ea90e7..7cb0f001c 100644
> --- a/lib/intel_compute.c
> +++ b/lib/intel_compute.c
> @@ -80,7 +80,7 @@ static void bo_execenv_create(int fd, struct bo_execenv *execenv)
>  		else
>  			engine_class = DRM_XE_ENGINE_CLASS_COMPUTE;
>  
> -		execenv->vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> +		execenv->vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  		execenv->exec_queue = xe_exec_queue_create_class(fd, execenv->vm,
>  								 engine_class);
>  	}
> @@ -106,7 +106,7 @@ static void bo_execenv_bind(struct bo_execenv *execenv,
>  		uint64_t alignment = xe_get_default_alignment(fd);
>  		struct drm_xe_sync sync = { 0 };
>  
> -		sync.flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL;
> +		sync.flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL;
>  		sync.handle = syncobj_create(fd, 0);
>  
>  		for (int i = 0; i < entries; i++) {
> @@ -162,7 +162,7 @@ static void bo_execenv_unbind(struct bo_execenv *execenv,
>  		uint32_t vm = execenv->vm;
>  		struct drm_xe_sync sync = { 0 };
>  
> -		sync.flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL;
> +		sync.flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL;
>  		sync.handle = syncobj_create(fd, 0);
>  
>  		for (int i = 0; i < entries; i++) {
> diff --git a/lib/intel_ctx.c b/lib/intel_ctx.c
> index f927b7df8..f82564572 100644
> --- a/lib/intel_ctx.c
> +++ b/lib/intel_ctx.c
> @@ -423,8 +423,8 @@ intel_ctx_t *intel_ctx_xe(int fd, uint32_t vm, uint32_t exec_queue,
>  int __intel_ctx_xe_exec(const intel_ctx_t *ctx, uint64_t ahnd, uint64_t bb_offset)
>  {
>  	struct drm_xe_sync syncs[2] = {
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ, },
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
>  	};
>  	struct drm_xe_exec exec = {
>  		.exec_queue_id = ctx->exec_queue,
> diff --git a/lib/xe/xe_ioctl.c b/lib/xe/xe_ioctl.c
> index 36f10a49a..db41d5ba5 100644
> --- a/lib/xe/xe_ioctl.c
> +++ b/lib/xe/xe_ioctl.c
> @@ -399,7 +399,7 @@ void xe_exec_sync(int fd, uint32_t exec_queue, uint64_t addr,
>  void xe_exec_wait(int fd, uint32_t exec_queue, uint64_t addr)
>  {
>  	struct drm_xe_sync sync = {
> -		.flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL,
> +		.flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL,
>  		.handle = syncobj_create(fd, 0),
>  	};
>  
> @@ -416,7 +416,7 @@ int64_t xe_wait_ufence(int fd, uint64_t *addr, uint64_t value,
>  	struct drm_xe_wait_user_fence wait = {
>  		.addr = to_user_pointer(addr),
>  		.op = DRM_XE_UFENCE_WAIT_EQ,
> -		.flags = !eci ? DRM_XE_UFENCE_WAIT_SOFT_OP : 0,
> +		.flags = !eci ? DRM_XE_UFENCE_WAIT_FLAG_SOFT_OP : 0,
>  		.value = value,
>  		.mask = DRM_XE_UFENCE_WAIT_U64,
>  		.timeout = timeout,
> @@ -448,7 +448,7 @@ int64_t xe_wait_ufence_abstime(int fd, uint64_t *addr, uint64_t value,
>  	struct drm_xe_wait_user_fence wait = {
>  		.addr = to_user_pointer(addr),
>  		.op = DRM_XE_UFENCE_WAIT_EQ,
> -		.flags = !eci ? DRM_XE_UFENCE_WAIT_SOFT_OP | DRM_XE_UFENCE_WAIT_ABSTIME : 0,
> +		.flags = !eci ? DRM_XE_UFENCE_WAIT_FLAG_SOFT_OP | DRM_XE_UFENCE_WAIT_FLAG_ABSTIME : 0,
>  		.value = value,
>  		.mask = DRM_XE_UFENCE_WAIT_U64,
>  		.timeout = timeout,
> diff --git a/lib/xe/xe_query.c b/lib/xe/xe_query.c
> index 8df3d317a..d459893e1 100644
> --- a/lib/xe/xe_query.c
> +++ b/lib/xe/xe_query.c
> @@ -315,8 +315,8 @@ bool xe_supports_faults(int fd)
>  	bool supports_faults;
>  
>  	struct drm_xe_vm_create create = {
> -		.flags = DRM_XE_VM_CREATE_ASYNC_DEFAULT |
> -			 DRM_XE_VM_CREATE_FAULT_MODE,
> +		.flags = DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT |
> +			 DRM_XE_VM_CREATE_FLAG_FAULT_MODE,
>  	};
>  
>  	supports_faults = !igt_ioctl(fd, DRM_IOCTL_XE_VM_CREATE, &create);
> diff --git a/lib/xe/xe_spin.c b/lib/xe/xe_spin.c
> index b05b38829..cfc663acc 100644
> --- a/lib/xe/xe_spin.c
> +++ b/lib/xe/xe_spin.c
> @@ -191,7 +191,7 @@ xe_spin_create(int fd, const struct igt_spin_factory *opt)
>  	struct igt_spin *spin;
>  	struct xe_spin *xe_spin;
>  	struct drm_xe_sync sync = {
> -		.flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL,
> +		.flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL,
>  	};
>  	struct drm_xe_exec exec = {
>  		.num_batch_buffer = 1,
> @@ -288,7 +288,7 @@ void xe_cork_init(int fd, struct drm_xe_engine_class_instance *hwe,
>  	uint32_t vm, bo, exec_queue, syncobj;
>  	struct xe_spin *spin;
>  	struct drm_xe_sync sync = {
> -		.flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL,
> +		.flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL,
>  	};
>  	struct drm_xe_exec exec = {
>  		.num_batch_buffer = 1,
> diff --git a/lib/xe/xe_util.c b/lib/xe/xe_util.c
> index 780125f92..2635edf72 100644
> --- a/lib/xe/xe_util.c
> +++ b/lib/xe/xe_util.c
> @@ -179,8 +179,8 @@ void xe_bind_unbind_async(int xe, uint32_t vm, uint32_t bind_engine,
>  {
>  	struct drm_xe_vm_bind_op *bind_ops;
>  	struct drm_xe_sync tabsyncs[2] = {
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ, .handle = sync_in },
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, .handle = sync_out },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ, .handle = sync_in },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, .handle = sync_out },
>  	};
>  	struct drm_xe_sync *syncs;
>  	uint32_t num_binds = 0;
> diff --git a/tests/intel/xe_ccs.c b/tests/intel/xe_ccs.c
> index bb844b641..465f67e23 100644
> --- a/tests/intel/xe_ccs.c
> +++ b/tests/intel/xe_ccs.c
> @@ -343,7 +343,7 @@ static void block_copy(int xe,
>  		uint32_t vm, exec_queue;
>  
>  		if (config->new_ctx) {
> -			vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> +			vm = xe_vm_create(xe, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  			exec_queue = xe_exec_queue_create(xe, vm, &inst, 0);
>  			surf_ctx = intel_ctx_xe(xe, vm, exec_queue, 0, 0, 0);
>  			surf_ahnd = intel_allocator_open(xe, surf_ctx->vm,
> @@ -550,7 +550,7 @@ static void block_copy_test(int xe,
>  				      copyfns[copy_function].suffix) {
>  				uint32_t sync_bind, sync_out;
>  
> -				vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> +				vm = xe_vm_create(xe, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  				exec_queue = xe_exec_queue_create(xe, vm, &inst, 0);
>  				sync_bind = syncobj_create(xe, 0);
>  				sync_out = syncobj_create(xe, 0);
> diff --git a/tests/intel/xe_copy_basic.c b/tests/intel/xe_copy_basic.c
> index 1dafbb276..191c29155 100644
> --- a/tests/intel/xe_copy_basic.c
> +++ b/tests/intel/xe_copy_basic.c
> @@ -134,7 +134,7 @@ static void copy_test(int fd, uint32_t size, enum blt_cmd_type cmd, uint32_t reg
>  
>  	src_handle = xe_bo_create_flags(fd, 0, bo_size, region);
>  	dst_handle = xe_bo_create_flags(fd, 0, bo_size, region);
> -	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> +	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  	exec_queue = xe_exec_queue_create(fd, vm, &inst, 0);
>  	ctx = intel_ctx_xe(fd, vm, exec_queue, 0, 0, 0);
>  
> diff --git a/tests/intel/xe_create.c b/tests/intel/xe_create.c
> index d99bd51cf..4242e1a67 100644
> --- a/tests/intel/xe_create.c
> +++ b/tests/intel/xe_create.c
> @@ -54,7 +54,7 @@ static void create_invalid_size(int fd)
>  	uint32_t handle;
>  	int ret;
>  
> -	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> +	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  
>  	xe_for_each_mem_region(fd, memreg, region) {
>  		memregion = xe_mem_region(fd, region);
> @@ -140,7 +140,7 @@ static void create_execqueues(int fd, enum exec_queue_destroy ed)
>  
>  	fd = drm_reopen_driver(fd);
>  	num_engines = xe_number_hw_engines(fd);
> -	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> +	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  
>  	exec_queues_per_process = max_t(uint32_t, 1, MAXEXECQUEUES / nproc);
>  	igt_debug("nproc: %u, exec_queues per process: %u\n", nproc, exec_queues_per_process);
> @@ -199,7 +199,7 @@ static void create_massive_size(int fd)
>  	uint32_t handle;
>  	int ret;
>  
> -	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> +	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  
>  	xe_for_each_mem_region(fd, memreg, region) {
>  		ret = __create_bo(fd, vm, -1ULL << 32, region, &handle);
> diff --git a/tests/intel/xe_dma_buf_sync.c b/tests/intel/xe_dma_buf_sync.c
> index 5c401b6dd..0d835dddb 100644
> --- a/tests/intel/xe_dma_buf_sync.c
> +++ b/tests/intel/xe_dma_buf_sync.c
> @@ -144,8 +144,8 @@ test_export_dma_buf(struct drm_xe_engine_class_instance *hwe0,
>  		uint64_t sdi_addr = addr + sdi_offset;
>  		uint64_t spin_offset = (char *)&data[i]->spin - (char *)data[i];
>  		struct drm_xe_sync sync[2] = {
> -			{ .flags = DRM_XE_SYNC_SYNCOBJ, },
> -			{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> +			{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ, },
> +			{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
>  		};
>  		struct drm_xe_exec exec = {
>  			.num_batch_buffer = 1,
> diff --git a/tests/intel/xe_drm_fdinfo.c b/tests/intel/xe_drm_fdinfo.c
> index 64168ed19..4ef30cf49 100644
> --- a/tests/intel/xe_drm_fdinfo.c
> +++ b/tests/intel/xe_drm_fdinfo.c
> @@ -48,8 +48,8 @@ static void test_active(int fd, struct drm_xe_engine_class_instance *eci)
>  	uint32_t vm;
>  	uint64_t addr = 0x1a0000;
>  	struct drm_xe_sync sync[2] = {
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
>  	};
>  	struct drm_xe_exec exec = {
>  		.num_batch_buffer = 1,
> @@ -71,7 +71,7 @@ static void test_active(int fd, struct drm_xe_engine_class_instance *eci)
>  	struct xe_spin_opts spin_opts = { .preempt = true };
>  	int i, b, ret;
>  
> -	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> +	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  	bo_size = sizeof(*data) * N_EXEC_QUEUES;
>  	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
>  			xe_get_default_alignment(fd));
> @@ -110,20 +110,20 @@ static void test_active(int fd, struct drm_xe_engine_class_instance *eci)
>  				xe_spin_init(&data[i].spin, &spin_opts);
>  				exec.exec_queue_id = exec_queues[e];
>  				exec.address = spin_opts.addr;
> -				sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
> -				sync[1].flags |= DRM_XE_SYNC_SIGNAL;
> +				sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
> +				sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  				sync[1].handle = syncobjs[e];
>  				xe_exec(fd, &exec);
>  				xe_spin_wait_started(&data[i].spin);
>  
>  				addr += bo_size;
> -				sync[1].flags &= ~DRM_XE_SYNC_SIGNAL;
> +				sync[1].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
>  				sync[1].handle = syncobjs[e];
>  				xe_vm_bind_async(fd, vm, bind_exec_queues[e], bo, 0, addr,
>  						 bo_size, sync + 1, 1);
>  				addr += bo_size;
>  			} else {
> -				sync[0].flags |= DRM_XE_SYNC_SIGNAL;
> +				sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  				xe_vm_bind_async(fd, vm, bind_exec_queues[e], bo, 0, addr,
>  						 bo_size, sync, 1);
>  			}
> @@ -149,7 +149,7 @@ static void test_active(int fd, struct drm_xe_engine_class_instance *eci)
>  
>  		syncobj_destroy(fd, sync[0].handle);
>  		sync[0].handle = syncobj_create(fd, 0);
> -		sync[0].flags |= DRM_XE_SYNC_SIGNAL;
> +		sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  		xe_vm_unbind_all_async(fd, vm, 0, bo, sync, 1);
>  		igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
>  
> @@ -221,7 +221,7 @@ static void test_total_resident(int xe)
>  	uint64_t addr = 0x1a0000;
>  	int ret;
>  
> -	vm = xe_vm_create(xe, DRM_XE_VM_CREATE_SCRATCH_PAGE, 0);
> +	vm = xe_vm_create(xe, DRM_XE_VM_CREATE_FLAG_SCRATCH_PAGE, 0);
>  
>  	xe_for_each_mem_region(xe, memreg, region) {
>  		uint64_t pre_size;
> diff --git a/tests/intel/xe_evict.c b/tests/intel/xe_evict.c
> index 5d8463270..6d953e58b 100644
> --- a/tests/intel/xe_evict.c
> +++ b/tests/intel/xe_evict.c
> @@ -38,8 +38,8 @@ test_evict(int fd, struct drm_xe_engine_class_instance *eci,
>  	uint32_t bind_exec_queues[3] = { 0, 0, 0 };
>  	uint64_t addr = 0x100000000, base_addr = 0x100000000;
>  	struct drm_xe_sync sync[2] = {
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
>  	};
>  	struct drm_xe_exec exec = {
>  		.num_batch_buffer = 1,
> @@ -63,12 +63,12 @@ test_evict(int fd, struct drm_xe_engine_class_instance *eci,
>  
>  	fd = drm_open_driver(DRIVER_XE);
>  
> -	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> +	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  	if (flags & BIND_EXEC_QUEUE)
>  		bind_exec_queues[0] = xe_bind_exec_queue_create(fd, vm, 0, true);
>  	if (flags & MULTI_VM) {
> -		vm2 = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> -		vm3 = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> +		vm2 = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
> +		vm3 = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  		if (flags & BIND_EXEC_QUEUE) {
>  			bind_exec_queues[1] = xe_bind_exec_queue_create(fd, vm2,
>  									0, true);
> @@ -121,7 +121,7 @@ test_evict(int fd, struct drm_xe_engine_class_instance *eci,
>  				 ALIGN(sizeof(*data) * n_execs, 0x1000));
>  
>  		if (i < n_execs / 2) {
> -			sync[0].flags |= DRM_XE_SYNC_SIGNAL;
> +			sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  			sync[0].handle = syncobj_create(fd, 0);
>  			if (flags & MULTI_VM) {
>  				xe_vm_bind_async(fd, vm3, bind_exec_queues[2], __bo,
> @@ -149,7 +149,7 @@ test_evict(int fd, struct drm_xe_engine_class_instance *eci,
>  		data[i].batch[b++] = MI_BATCH_BUFFER_END;
>  		igt_assert(b <= ARRAY_SIZE(data[i].batch));
>  
> -		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
> +		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
>  		if (i >= n_exec_queues)
>  			syncobj_reset(fd, &syncobjs[e], 1);
>  		sync[1].handle = syncobjs[e];
> @@ -216,7 +216,7 @@ test_evict_cm(int fd, struct drm_xe_engine_class_instance *eci,
>  	uint64_t addr = 0x100000000, base_addr = 0x100000000;
>  #define USER_FENCE_VALUE	0xdeadbeefdeadbeefull
>  	struct drm_xe_sync sync[1] = {
> -		{ .flags = DRM_XE_SYNC_USER_FENCE | DRM_XE_SYNC_SIGNAL,
> +		{ .flags = DRM_XE_SYNC_FLAG_USER_FENCE | DRM_XE_SYNC_FLAG_SIGNAL,
>  		  .timeline_value = USER_FENCE_VALUE },
>  	};
>  	struct drm_xe_exec exec = {
> @@ -242,13 +242,13 @@ test_evict_cm(int fd, struct drm_xe_engine_class_instance *eci,
>  
>  	fd = drm_open_driver(DRIVER_XE);
>  
> -	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
> -			  DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
> +	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT |
> +			  DRM_XE_VM_CREATE_FLAG_COMPUTE_MODE, 0);
>  	if (flags & BIND_EXEC_QUEUE)
>  		bind_exec_queues[0] = xe_bind_exec_queue_create(fd, vm, 0, true);
>  	if (flags & MULTI_VM) {
> -		vm2 = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
> -				   DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
> +		vm2 = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT |
> +				   DRM_XE_VM_CREATE_FLAG_COMPUTE_MODE, 0);
>  		if (flags & BIND_EXEC_QUEUE)
>  			bind_exec_queues[1] = xe_bind_exec_queue_create(fd, vm2,
>  									0, true);
> diff --git a/tests/intel/xe_evict_ccs.c b/tests/intel/xe_evict_ccs.c
> index 4f2876ecb..1f5c795ef 100644
> --- a/tests/intel/xe_evict_ccs.c
> +++ b/tests/intel/xe_evict_ccs.c
> @@ -226,7 +226,7 @@ static void evict_single(int fd, int child, const struct config *config)
>  	uint32_t kb_left = config->mb_per_proc * SZ_1K;
>  	uint32_t min_alloc_kb = config->param->min_size_kb;
>  	uint32_t max_alloc_kb = config->param->max_size_kb;
> -	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> +	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  	uint64_t ahnd = intel_allocator_open(fd, vm, INTEL_ALLOCATOR_RELOC);
>  	uint8_t uc_mocs = intel_get_uc_mocs_index(fd);
>  	struct object *obj, *tmp;
> diff --git a/tests/intel/xe_exec_balancer.c b/tests/intel/xe_exec_balancer.c
> index 3ca3de881..8a0165b8c 100644
> --- a/tests/intel/xe_exec_balancer.c
> +++ b/tests/intel/xe_exec_balancer.c
> @@ -37,8 +37,8 @@ static void test_all_active(int fd, int gt, int class)
>  	uint32_t vm;
>  	uint64_t addr = 0x1a0000;
>  	struct drm_xe_sync sync[2] = {
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
>  	};
>  	struct drm_xe_exec exec = {
>  		.num_batch_buffer = 1,
> @@ -66,7 +66,7 @@ static void test_all_active(int fd, int gt, int class)
>  	if (num_placements < 2)
>  		return;
>  
> -	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> +	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  	bo_size = sizeof(*data) * num_placements;
>  	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd), xe_get_default_alignment(fd));
>  
> @@ -93,8 +93,8 @@ static void test_all_active(int fd, int gt, int class)
>  	for (i = 0; i < num_placements; i++) {
>  		spin_opts.addr = addr + (char *)&data[i].spin - (char *)data;
>  		xe_spin_init(&data[i].spin, &spin_opts);
> -		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
> -		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
> +		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
> +		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  		sync[1].handle = syncobjs[i];
>  
>  		exec.exec_queue_id = exec_queues[i];
> @@ -110,7 +110,7 @@ static void test_all_active(int fd, int gt, int class)
>  	}
>  	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
>  
> -	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
> +	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  	xe_vm_unbind_async(fd, vm, 0, 0, addr, bo_size, sync, 1);
>  	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
>  
> @@ -176,8 +176,8 @@ test_exec(int fd, int gt, int class, int n_exec_queues, int n_execs,
>  	uint32_t vm;
>  	uint64_t addr = 0x1a0000;
>  	struct drm_xe_sync sync[2] = {
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
>  	};
>  	struct drm_xe_exec exec = {
>  		.num_syncs = 2,
> @@ -207,7 +207,7 @@ test_exec(int fd, int gt, int class, int n_exec_queues, int n_execs,
>  	if (num_placements < 2)
>  		return;
>  
> -	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> +	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  	bo_size = sizeof(*data) * n_execs;
>  	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd), xe_get_default_alignment(fd));
>  
> @@ -269,8 +269,8 @@ test_exec(int fd, int gt, int class, int n_exec_queues, int n_execs,
>  		data[i].batch[b++] = MI_BATCH_BUFFER_END;
>  		igt_assert(b <= ARRAY_SIZE(data[i].batch));
>  
> -		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
> -		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
> +		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
> +		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  		sync[1].handle = syncobjs[e];
>  
>  		exec.exec_queue_id = exec_queues[e];
> @@ -281,11 +281,11 @@ test_exec(int fd, int gt, int class, int n_exec_queues, int n_execs,
>  		xe_exec(fd, &exec);
>  
>  		if (flags & REBIND && i + 1 != n_execs) {
> -			sync[1].flags &= ~DRM_XE_SYNC_SIGNAL;
> +			sync[1].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
>  			xe_vm_unbind_async(fd, vm, 0, 0, addr, bo_size,
>  					   sync + 1, 1);
>  
> -			sync[0].flags |= DRM_XE_SYNC_SIGNAL;
> +			sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  			addr += bo_size;
>  			if (bo)
>  				xe_vm_bind_async(fd, vm, 0, bo, 0, addr,
> @@ -329,7 +329,7 @@ test_exec(int fd, int gt, int class, int n_exec_queues, int n_execs,
>  					NULL));
>  	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
>  
> -	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
> +	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  	xe_vm_unbind_async(fd, vm, 0, 0, addr, bo_size, sync, 1);
>  	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
>  
> @@ -399,7 +399,7 @@ test_cm(int fd, int gt, int class, int n_exec_queues, int n_execs,
>  	uint64_t addr = 0x1a0000;
>  #define USER_FENCE_VALUE	0xdeadbeefdeadbeefull
>  	struct drm_xe_sync sync[1] = {
> -		{ .flags = DRM_XE_SYNC_USER_FENCE | DRM_XE_SYNC_SIGNAL,
> +		{ .flags = DRM_XE_SYNC_FLAG_USER_FENCE | DRM_XE_SYNC_FLAG_SIGNAL,
>  	          .timeline_value = USER_FENCE_VALUE },
>  	};
>  	struct drm_xe_exec exec = {
> @@ -433,8 +433,8 @@ test_cm(int fd, int gt, int class, int n_exec_queues, int n_execs,
>  	if (num_placements < 2)
>  		return;
>  
> -	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
> -			  DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
> +	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT |
> +			  DRM_XE_VM_CREATE_FLAG_COMPUTE_MODE, 0);
>  	bo_size = sizeof(*data) * n_execs;
>  	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
>  			xe_get_default_alignment(fd));
> diff --git a/tests/intel/xe_exec_basic.c b/tests/intel/xe_exec_basic.c
> index 232ddde8e..a401f0165 100644
> --- a/tests/intel/xe_exec_basic.c
> +++ b/tests/intel/xe_exec_basic.c
> @@ -81,8 +81,8 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
>  	  int n_exec_queues, int n_execs, int n_vm, unsigned int flags)
>  {
>  	struct drm_xe_sync sync[2] = {
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
>  	};
>  	struct drm_xe_exec exec = {
>  		.num_batch_buffer = 1,
> @@ -109,7 +109,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
>  	igt_assert(n_vm <= MAX_N_EXEC_QUEUES);
>  
>  	for (i = 0; i < n_vm; ++i)
> -		vm[i] = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> +		vm[i] = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  	bo_size = sizeof(*data) * n_execs;
>  	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
>  			xe_get_default_alignment(fd));
> @@ -199,9 +199,9 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
>  		data[i].batch[b++] = MI_BATCH_BUFFER_END;
>  		igt_assert(b <= ARRAY_SIZE(data[i].batch));
>  
> -		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
> +		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
>  		sync[0].handle = bind_syncobjs[cur_vm];
> -		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
> +		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  		sync[1].handle = syncobjs[e];
>  
>  		exec.exec_queue_id = exec_queues[e];
> @@ -213,11 +213,11 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
>  		if (flags & REBIND && i + 1 != n_execs) {
>  			uint32_t __vm = vm[cur_vm];
>  
> -			sync[1].flags &= ~DRM_XE_SYNC_SIGNAL;
> +			sync[1].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
>  			xe_vm_unbind_async(fd, __vm, bind_exec_queues[e], 0,
>  					   __addr, bo_size, sync + 1, 1);
>  
> -			sync[0].flags |= DRM_XE_SYNC_SIGNAL;
> +			sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  			addr[i % n_vm] += bo_size;
>  			__addr = addr[i % n_vm];
>  			if (bo)
> @@ -266,7 +266,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
>  		igt_assert(syncobj_wait(fd, &bind_syncobjs[i], 1, INT64_MAX, 0,
>  					NULL));
>  
> -	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
> +	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  	for (i = 0; i < n_vm; ++i) {
>  		syncobj_reset(fd, &sync[0].handle, 1);
>  		xe_vm_unbind_async(fd, vm[i], bind_exec_queues[i], 0, addr[i],
> diff --git a/tests/intel/xe_exec_compute_mode.c b/tests/intel/xe_exec_compute_mode.c
> index b0a677dca..20d3fc6e8 100644
> --- a/tests/intel/xe_exec_compute_mode.c
> +++ b/tests/intel/xe_exec_compute_mode.c
> @@ -88,7 +88,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
>  	uint64_t addr = 0x1a0000;
>  #define USER_FENCE_VALUE	0xdeadbeefdeadbeefull
>  	struct drm_xe_sync sync[1] = {
> -		{ .flags = DRM_XE_SYNC_USER_FENCE | DRM_XE_SYNC_SIGNAL,
> +		{ .flags = DRM_XE_SYNC_FLAG_USER_FENCE | DRM_XE_SYNC_FLAG_SIGNAL,
>  	          .timeline_value = USER_FENCE_VALUE },
>  	};
>  	struct drm_xe_exec exec = {
> @@ -113,8 +113,8 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
>  
>  	igt_assert(n_exec_queues <= MAX_N_EXECQUEUES);
>  
> -	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
> -			  DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
> +	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT |
> +			  DRM_XE_VM_CREATE_FLAG_COMPUTE_MODE, 0);
>  	bo_size = sizeof(*data) * n_execs;
>  	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
>  			xe_get_default_alignment(fd));
> diff --git a/tests/intel/xe_exec_fault_mode.c b/tests/intel/xe_exec_fault_mode.c
> index 477d0824d..92d552f97 100644
> --- a/tests/intel/xe_exec_fault_mode.c
> +++ b/tests/intel/xe_exec_fault_mode.c
> @@ -8,7 +8,7 @@
>   * Category: Hardware building block
>   * Sub-category: execbuf
>   * Functionality: fault mode
> - * GPU requirements: GPU needs support for DRM_XE_VM_CREATE_FAULT_MODE
> + * GPU requirements: GPU needs support for DRM_XE_VM_CREATE_FLAG_FAULT_MODE
>   */
>  
>  #include <fcntl.h>
> @@ -107,7 +107,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
>  	uint64_t addr = 0x1a0000;
>  #define USER_FENCE_VALUE	0xdeadbeefdeadbeefull
>  	struct drm_xe_sync sync[1] = {
> -		{ .flags = DRM_XE_SYNC_USER_FENCE | DRM_XE_SYNC_SIGNAL,
> +		{ .flags = DRM_XE_SYNC_FLAG_USER_FENCE | DRM_XE_SYNC_FLAG_SIGNAL,
>  	          .timeline_value = USER_FENCE_VALUE },
>  	};
>  	struct drm_xe_exec exec = {
> @@ -131,8 +131,8 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
>  
>  	igt_assert(n_exec_queues <= MAX_N_EXEC_QUEUES);
>  
> -	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
> -			  DRM_XE_VM_CREATE_FAULT_MODE, 0);
> +	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT |
> +			  DRM_XE_VM_CREATE_FLAG_FAULT_MODE, 0);
>  	bo_size = sizeof(*data) * n_execs;
>  	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
>  			xe_get_default_alignment(fd));
> diff --git a/tests/intel/xe_exec_reset.c b/tests/intel/xe_exec_reset.c
> index 39647b736..195e62911 100644
> --- a/tests/intel/xe_exec_reset.c
> +++ b/tests/intel/xe_exec_reset.c
> @@ -30,8 +30,8 @@ static void test_spin(int fd, struct drm_xe_engine_class_instance *eci)
>  	uint32_t vm;
>  	uint64_t addr = 0x1a0000;
>  	struct drm_xe_sync sync[2] = {
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
>  	};
>  	struct drm_xe_exec exec = {
>  		.num_batch_buffer = 1,
> @@ -45,7 +45,7 @@ static void test_spin(int fd, struct drm_xe_engine_class_instance *eci)
>  	struct xe_spin *spin;
>  	struct xe_spin_opts spin_opts = { .addr = addr, .preempt = false };
>  
> -	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> +	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  	bo_size = sizeof(*spin);
>  	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
>  			xe_get_default_alignment(fd));
> @@ -62,8 +62,8 @@ static void test_spin(int fd, struct drm_xe_engine_class_instance *eci)
>  
>  	xe_spin_init(spin, &spin_opts);
>  
> -	sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
> -	sync[1].flags |= DRM_XE_SYNC_SIGNAL;
> +	sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
> +	sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  	sync[1].handle = syncobj;
>  
>  	exec.exec_queue_id = exec_queue;
> @@ -78,7 +78,7 @@ static void test_spin(int fd, struct drm_xe_engine_class_instance *eci)
>  	igt_assert(syncobj_wait(fd, &syncobj, 1, INT64_MAX, 0, NULL));
>  	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
>  
> -	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
> +	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  	xe_vm_unbind_async(fd, vm, 0, 0, addr, bo_size, sync, 1);
>  	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
>  
> @@ -140,8 +140,8 @@ test_balancer(int fd, int gt, int class, int n_exec_queues, int n_execs,
>  	uint32_t vm;
>  	uint64_t addr = 0x1a0000;
>  	struct drm_xe_sync sync[2] = {
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
>  	};
>  	struct drm_xe_exec exec = {
>  		.num_syncs = 2,
> @@ -176,7 +176,7 @@ test_balancer(int fd, int gt, int class, int n_exec_queues, int n_execs,
>  	if (num_placements < 2)
>  		return;
>  
> -	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> +	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  	bo_size = sizeof(*data) * n_execs;
>  	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
>  			xe_get_default_alignment(fd));
> @@ -257,8 +257,8 @@ test_balancer(int fd, int gt, int class, int n_exec_queues, int n_execs,
>  		for (j = 0; j < num_placements && flags & PARALLEL; ++j)
>  			batches[j] = exec_addr;
>  
> -		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
> -		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
> +		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
> +		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  		sync[1].handle = syncobjs[e];
>  
>  		exec.exec_queue_id = exec_queues[e];
> @@ -288,7 +288,7 @@ test_balancer(int fd, int gt, int class, int n_exec_queues, int n_execs,
>  					NULL));
>  	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
>  
> -	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
> +	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  	xe_vm_unbind_async(fd, vm, 0, 0, addr, bo_size, sync, 1);
>  	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
>  
> @@ -336,8 +336,8 @@ test_legacy_mode(int fd, struct drm_xe_engine_class_instance *eci,
>  	uint32_t vm;
>  	uint64_t addr = 0x1a0000;
>  	struct drm_xe_sync sync[2] = {
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
>  	};
>  	struct drm_xe_exec exec = {
>  		.num_batch_buffer = 1,
> @@ -362,7 +362,7 @@ test_legacy_mode(int fd, struct drm_xe_engine_class_instance *eci,
>  	if (flags & CLOSE_FD)
>  		fd = drm_open_driver(DRIVER_XE);
>  
> -	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> +	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  	bo_size = sizeof(*data) * n_execs;
>  	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
>  			xe_get_default_alignment(fd));
> @@ -425,8 +425,8 @@ test_legacy_mode(int fd, struct drm_xe_engine_class_instance *eci,
>  			exec_addr = batch_addr;
>  		}
>  
> -		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
> -		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
> +		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
> +		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  		sync[1].handle = syncobjs[e];
>  
>  		exec.exec_queue_id = exec_queues[e];
> @@ -455,7 +455,7 @@ test_legacy_mode(int fd, struct drm_xe_engine_class_instance *eci,
>  					NULL));
>  	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
>  
> -	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
> +	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  	xe_vm_unbind_async(fd, vm, 0, 0, addr, bo_size, sync, 1);
>  	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
>  
> @@ -501,7 +501,7 @@ test_compute_mode(int fd, struct drm_xe_engine_class_instance *eci,
>  	uint64_t addr = 0x1a0000;
>  #define USER_FENCE_VALUE	0xdeadbeefdeadbeefull
>  	struct drm_xe_sync sync[1] = {
> -		{ .flags = DRM_XE_SYNC_USER_FENCE | DRM_XE_SYNC_SIGNAL,
> +		{ .flags = DRM_XE_SYNC_FLAG_USER_FENCE | DRM_XE_SYNC_FLAG_SIGNAL,
>  	          .timeline_value = USER_FENCE_VALUE },
>  	};
>  	struct drm_xe_exec exec = {
> @@ -528,8 +528,8 @@ test_compute_mode(int fd, struct drm_xe_engine_class_instance *eci,
>  	if (flags & CLOSE_FD)
>  		fd = drm_open_driver(DRIVER_XE);
>  
> -	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
> -			  DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
> +	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT |
> +			  DRM_XE_VM_CREATE_FLAG_COMPUTE_MODE, 0);
>  	bo_size = sizeof(*data) * n_execs;
>  	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
>  			xe_get_default_alignment(fd));
> diff --git a/tests/intel/xe_exec_store.c b/tests/intel/xe_exec_store.c
> index 4ca76b43a..9c14bfd14 100644
> --- a/tests/intel/xe_exec_store.c
> +++ b/tests/intel/xe_exec_store.c
> @@ -55,7 +55,7 @@ static void store_dword_batch(struct data *data, uint64_t addr, int value)
>  static void store(int fd)
>  {
>  	struct drm_xe_sync sync = {
> -		.flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL,
> +		.flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL,
>  	};
>  	struct drm_xe_exec exec = {
>  		.num_batch_buffer = 1,
> @@ -75,7 +75,7 @@ static void store(int fd)
>  	syncobj = syncobj_create(fd, 0);
>  	sync.handle = syncobj;
>  
> -	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> +	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  	bo_size = sizeof(*data);
>  	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
>  			xe_get_default_alignment(fd));
> @@ -91,7 +91,7 @@ static void store(int fd)
>  	exec_queue = xe_exec_queue_create(fd, vm, hw_engine, 0);
>  	exec.exec_queue_id = exec_queue;
>  	exec.address = data->addr;
> -	sync.flags &= DRM_XE_SYNC_SIGNAL;
> +	sync.flags &= DRM_XE_SYNC_FLAG_SIGNAL;
>  	xe_exec(fd, &exec);
>  
>  	igt_assert(syncobj_wait(fd, &syncobj, 1, INT64_MAX, 0, NULL));
> @@ -121,8 +121,8 @@ static void store_cachelines(int fd, struct drm_xe_engine_class_instance *eci,
>  			     unsigned int flags)
>  {
>  	struct drm_xe_sync sync[2] = {
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, }
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, }
>  	};
>  
>  	struct drm_xe_exec exec = {
> @@ -143,7 +143,7 @@ static void store_cachelines(int fd, struct drm_xe_engine_class_instance *eci,
>  	size_t bo_size = 4096;
>  
>  	bo_size = ALIGN(bo_size, xe_get_default_alignment(fd));
> -	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> +	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  	ahnd = intel_allocator_open(fd, 0, INTEL_ALLOCATOR_SIMPLE);
>  	exec_queues = xe_exec_queue_create(fd, vm, eci, 0);
>  	syncobjs = syncobj_create(fd, 0);
> @@ -173,8 +173,8 @@ static void store_cachelines(int fd, struct drm_xe_engine_class_instance *eci,
>  		batch_map[b++] = value[n];
>  	}
>  	batch_map[b++] = MI_BATCH_BUFFER_END;
> -	sync[0].flags &= DRM_XE_SYNC_SIGNAL;
> -	sync[1].flags |= DRM_XE_SYNC_SIGNAL;
> +	sync[0].flags &= DRM_XE_SYNC_FLAG_SIGNAL;
> +	sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  	sync[1].handle = syncobjs;
>  	exec.exec_queue_id = exec_queues;
>  	xe_exec(fd, &exec);
> @@ -210,8 +210,8 @@ static void store_cachelines(int fd, struct drm_xe_engine_class_instance *eci,
>  static void store_all(int fd, int gt, int class)
>  {
>  	struct drm_xe_sync sync[2] = {
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, }
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, }
>  	};
>  	struct drm_xe_exec exec = {
>  		.num_batch_buffer = 1,
> @@ -230,7 +230,7 @@ static void store_all(int fd, int gt, int class)
>  	struct drm_xe_engine_class_instance *hwe;
>  	int i, num_placements = 0;
>  
> -	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> +	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  	bo_size = sizeof(*data);
>  	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
>  			xe_get_default_alignment(fd));
> @@ -267,8 +267,8 @@ static void store_all(int fd, int gt, int class)
>  	for (i = 0; i < num_placements; i++) {
>  
>  		store_dword_batch(data, addr, i);
> -		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
> -		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
> +		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
> +		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  		sync[1].handle = syncobjs[i];
>  
>  		exec.exec_queue_id = exec_queues[i];
> diff --git a/tests/intel/xe_exec_threads.c b/tests/intel/xe_exec_threads.c
> index b814dcdf5..bb979b18c 100644
> --- a/tests/intel/xe_exec_threads.c
> +++ b/tests/intel/xe_exec_threads.c
> @@ -47,8 +47,8 @@ test_balancer(int fd, int gt, uint32_t vm, uint64_t addr, uint64_t userptr,
>  	      int class, int n_exec_queues, int n_execs, unsigned int flags)
>  {
>  	struct drm_xe_sync sync[2] = {
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
>  	};
>  	struct drm_xe_sync sync_all[MAX_N_EXEC_QUEUES];
>  	struct drm_xe_exec exec = {
> @@ -77,7 +77,7 @@ test_balancer(int fd, int gt, uint32_t vm, uint64_t addr, uint64_t userptr,
>  	}
>  
>  	if (!vm) {
> -		vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> +		vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  		owns_vm = true;
>  	}
>  
> @@ -125,7 +125,7 @@ test_balancer(int fd, int gt, uint32_t vm, uint64_t addr, uint64_t userptr,
>  					&create), 0);
>  		exec_queues[i] = create.exec_queue_id;
>  		syncobjs[i] = syncobj_create(fd, 0);
> -		sync_all[i].flags = DRM_XE_SYNC_SYNCOBJ;
> +		sync_all[i].flags = DRM_XE_SYNC_FLAG_SYNCOBJ;
>  		sync_all[i].handle = syncobjs[i];
>  	};
>  	exec.num_batch_buffer = flags & PARALLEL ? num_placements : 1;
> @@ -158,8 +158,8 @@ test_balancer(int fd, int gt, uint32_t vm, uint64_t addr, uint64_t userptr,
>  		data[i].batch[b++] = MI_BATCH_BUFFER_END;
>  		igt_assert(b <= ARRAY_SIZE(data[i].batch));
>  
> -		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
> -		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
> +		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
> +		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  		sync[1].handle = syncobjs[e];
>  
>  		exec.exec_queue_id = exec_queues[e];
> @@ -173,7 +173,7 @@ test_balancer(int fd, int gt, uint32_t vm, uint64_t addr, uint64_t userptr,
>  			xe_vm_unbind_async(fd, vm, 0, 0, addr, bo_size,
>  					   sync_all, n_exec_queues);
>  
> -			sync[0].flags |= DRM_XE_SYNC_SIGNAL;
> +			sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  			addr += bo_size;
>  			if (bo)
>  				xe_vm_bind_async(fd, vm, 0, bo, 0, addr,
> @@ -221,7 +221,7 @@ test_balancer(int fd, int gt, uint32_t vm, uint64_t addr, uint64_t userptr,
>  					NULL));
>  	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
>  
> -	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
> +	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  	xe_vm_unbind_async(fd, vm, 0, 0, addr, bo_size, sync, 1);
>  	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
>  
> @@ -254,7 +254,7 @@ test_compute_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
>  {
>  #define USER_FENCE_VALUE	0xdeadbeefdeadbeefull
>  	struct drm_xe_sync sync[1] = {
> -		{ .flags = DRM_XE_SYNC_USER_FENCE | DRM_XE_SYNC_SIGNAL,
> +		{ .flags = DRM_XE_SYNC_FLAG_USER_FENCE | DRM_XE_SYNC_FLAG_SIGNAL,
>  	          .timeline_value = USER_FENCE_VALUE },
>  	};
>  	struct drm_xe_exec exec = {
> @@ -285,8 +285,8 @@ test_compute_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
>  	}
>  
>  	if (!vm) {
> -		vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT |
> -				  DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
> +		vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT |
> +				  DRM_XE_VM_CREATE_FLAG_COMPUTE_MODE, 0);
>  		owns_vm = true;
>  	}
>  
> @@ -457,8 +457,8 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
>  		 int n_execs, unsigned int flags)
>  {
>  	struct drm_xe_sync sync[2] = {
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
>  	};
>  	struct drm_xe_sync sync_all[MAX_N_EXEC_QUEUES];
>  	struct drm_xe_exec exec = {
> @@ -489,7 +489,7 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
>  	}
>  
>  	if (!vm) {
> -		vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> +		vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  		owns_vm = true;
>  	}
>  
> @@ -536,7 +536,7 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
>  		else
>  			bind_exec_queues[i] = 0;
>  		syncobjs[i] = syncobj_create(fd, 0);
> -		sync_all[i].flags = DRM_XE_SYNC_SYNCOBJ;
> +		sync_all[i].flags = DRM_XE_SYNC_FLAG_SYNCOBJ;
>  		sync_all[i].handle = syncobjs[i];
>  	};
>  
> @@ -576,8 +576,8 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
>  			exec_addr = batch_addr;
>  		}
>  
> -		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
> -		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
> +		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
> +		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  		sync[1].handle = syncobjs[e];
>  
>  		exec.exec_queue_id = exec_queues[e];
> @@ -599,7 +599,7 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
>  					   0, addr, bo_size,
>  					   sync_all, n_exec_queues);
>  
> -			sync[0].flags |= DRM_XE_SYNC_SIGNAL;
> +			sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  			addr += bo_size;
>  			if (bo)
>  				xe_vm_bind_async(fd, vm, bind_exec_queues[e],
> @@ -649,7 +649,7 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
>  					NULL));
>  	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
>  
> -	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
> +	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  	xe_vm_unbind_async(fd, vm, bind_exec_queues[0], 0, addr,
>  			   bo_size, sync, 1);
>  	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
> @@ -1001,11 +1001,11 @@ static void threads(int fd, int flags)
>  
>  	if (flags & SHARED_VM) {
>  		vm_legacy_mode = xe_vm_create(fd,
> -					      DRM_XE_VM_CREATE_ASYNC_DEFAULT,
> +					      DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT,
>  					      0);
>  		vm_compute_mode = xe_vm_create(fd,
> -					       DRM_XE_VM_CREATE_ASYNC_DEFAULT |
> -					       DRM_XE_VM_CREATE_COMPUTE_MODE,
> +					       DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT |
> +					       DRM_XE_VM_CREATE_FLAG_COMPUTE_MODE,
>  					       0);
>  	}
>  
> diff --git a/tests/intel/xe_exercise_blt.c b/tests/intel/xe_exercise_blt.c
> index df774130f..fd310138d 100644
> --- a/tests/intel/xe_exercise_blt.c
> +++ b/tests/intel/xe_exercise_blt.c
> @@ -280,7 +280,7 @@ static void fast_copy_test(int xe,
>  			region1 = igt_collection_get_value(regions, 0);
>  			region2 = igt_collection_get_value(regions, 1);
>  
> -			vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> +			vm = xe_vm_create(xe, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  			exec_queue = xe_exec_queue_create(xe, vm, &inst, 0);
>  			ctx = intel_ctx_xe(xe, vm, exec_queue, 0, 0, 0);
>  
> diff --git a/tests/intel/xe_guc_pc.c b/tests/intel/xe_guc_pc.c
> index 3f2c4ae23..fa2f20cca 100644
> --- a/tests/intel/xe_guc_pc.c
> +++ b/tests/intel/xe_guc_pc.c
> @@ -37,8 +37,8 @@ static void exec_basic(int fd, struct drm_xe_engine_class_instance *eci,
>  	uint32_t vm;
>  	uint64_t addr = 0x1a0000;
>  	struct drm_xe_sync sync[2] = {
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
>  	};
>  	struct drm_xe_exec exec = {
>  		.num_batch_buffer = 1,
> @@ -60,7 +60,7 @@ static void exec_basic(int fd, struct drm_xe_engine_class_instance *eci,
>  	igt_assert(n_exec_queues <= MAX_N_EXEC_QUEUES);
>  	igt_assert(n_execs > 0);
>  
> -	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> +	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  	bo_size = sizeof(*data) * n_execs;
>  	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
>  			xe_get_default_alignment(fd));
> @@ -95,8 +95,8 @@ static void exec_basic(int fd, struct drm_xe_engine_class_instance *eci,
>  		data[i].batch[b++] = MI_BATCH_BUFFER_END;
>  		igt_assert(b <= ARRAY_SIZE(data[i].batch));
>  
> -		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
> -		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
> +		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
> +		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  		sync[1].handle = syncobjs[e];
>  
>  		exec.exec_queue_id = exec_queues[e];
> @@ -114,7 +114,7 @@ static void exec_basic(int fd, struct drm_xe_engine_class_instance *eci,
>  
>  	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
>  
> -	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
> +	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  	xe_vm_unbind_async(fd, vm, bind_exec_queues[0], 0, addr,
>  			   bo_size, sync, 1);
>  	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
> diff --git a/tests/intel/xe_huc_copy.c b/tests/intel/xe_huc_copy.c
> index 4f5ce2212..eda9e5216 100644
> --- a/tests/intel/xe_huc_copy.c
> +++ b/tests/intel/xe_huc_copy.c
> @@ -118,7 +118,7 @@ __test_huc_copy(int fd, uint32_t vm, struct drm_xe_engine_class_instance *hwe)
>  	};
>  
>  	exec_queue = xe_exec_queue_create(fd, vm, hwe, 0);
> -	sync.flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL;
> +	sync.flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL;
>  	sync.handle = syncobj_create(fd, 0);
>  
>  	for(int i = 0; i < BO_DICT_ENTRIES; i++) {
> @@ -156,7 +156,7 @@ test_huc_copy(int fd)
>  	uint32_t vm;
>  	uint32_t tested_gts = 0;
>  
> -	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> +	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  
>  	xe_for_each_hw_engine(fd, hwe) {
>  		if (hwe->engine_class == DRM_XE_ENGINE_CLASS_VIDEO_DECODE &&
> diff --git a/tests/intel/xe_intel_bb.c b/tests/intel/xe_intel_bb.c
> index 26e4dcc85..d66996cd5 100644
> --- a/tests/intel/xe_intel_bb.c
> +++ b/tests/intel/xe_intel_bb.c
> @@ -191,7 +191,7 @@ static void simple_bb(struct buf_ops *bops, bool new_context)
>  	intel_bb_reset(ibb, true);
>  
>  	if (new_context) {
> -		vm = xe_vm_create(xe, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> +		vm = xe_vm_create(xe, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  		ctx = xe_exec_queue_create(xe, vm, xe_hw_engine(xe, 0), 0);
>  		intel_bb_destroy(ibb);
>  		ibb = intel_bb_create_with_context(xe, ctx, vm, NULL, PAGE_SIZE);
> diff --git a/tests/intel/xe_noexec_ping_pong.c b/tests/intel/xe_noexec_ping_pong.c
> index 88b22ed11..9c2a70ff3 100644
> --- a/tests/intel/xe_noexec_ping_pong.c
> +++ b/tests/intel/xe_noexec_ping_pong.c
> @@ -64,7 +64,7 @@ static void test_ping_pong(int fd, struct drm_xe_engine_class_instance *eci)
>  	 * stats.
>  	 */
>  	for (i = 0; i < NUM_VMS; ++i) {
> -		vm[i] = xe_vm_create(fd, DRM_XE_VM_CREATE_COMPUTE_MODE, 0);
> +		vm[i] = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_COMPUTE_MODE, 0);
>  		for (j = 0; j < NUM_BOS; ++j) {
>  			igt_debug("Creating bo size %lu for vm %u\n",
>  				  (unsigned long) bo_size,
> diff --git a/tests/intel/xe_perf_pmu.c b/tests/intel/xe_perf_pmu.c
> index a0dd30e50..e9d05cf2b 100644
> --- a/tests/intel/xe_perf_pmu.c
> +++ b/tests/intel/xe_perf_pmu.c
> @@ -81,8 +81,8 @@ static void test_any_engine_busyness(int fd, struct drm_xe_engine_class_instance
>  	uint32_t vm;
>  	uint64_t addr = 0x1a0000;
>  	struct drm_xe_sync sync[2] = {
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
>  	};
>  	struct drm_xe_exec exec = {
>  		.num_batch_buffer = 1,
> @@ -98,7 +98,7 @@ static void test_any_engine_busyness(int fd, struct drm_xe_engine_class_instance
>  	uint32_t pmu_fd;
>  	uint64_t count, idle;
>  
> -	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> +	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  	bo_size = sizeof(*spin);
>  	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
>  			xe_get_default_alignment(fd));
> @@ -118,8 +118,8 @@ static void test_any_engine_busyness(int fd, struct drm_xe_engine_class_instance
>  
>  	xe_spin_init(spin, &spin_opts);
>  
> -	sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
> -	sync[1].flags |= DRM_XE_SYNC_SIGNAL;
> +	sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
> +	sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  	sync[1].handle = syncobj;
>  
>  	exec.exec_queue_id = exec_queue;
> @@ -135,7 +135,7 @@ static void test_any_engine_busyness(int fd, struct drm_xe_engine_class_instance
>  	igt_assert(syncobj_wait(fd, &syncobj, 1, INT64_MAX, 0, NULL));
>  	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
>  
> -	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
> +	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  	xe_vm_unbind_async(fd, vm, 0, 0, addr, bo_size, sync, 1);
>  	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
>  
> @@ -185,8 +185,8 @@ static void test_engine_group_busyness(int fd, int gt, int class, const char *na
>  	uint32_t vm;
>  	uint64_t addr = 0x1a0000;
>  	struct drm_xe_sync sync[2] = {
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
>  	};
>  	struct drm_xe_exec exec = {
>  		.num_batch_buffer = 1,
> @@ -219,7 +219,7 @@ static void test_engine_group_busyness(int fd, int gt, int class, const char *na
>  	igt_skip_on_f(!num_placements, "Engine class:%d gt:%d not enabled on this platform\n",
>  		      class, gt);
>  
> -	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> +	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  	bo_size = sizeof(*data) * num_placements;
>  	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd), xe_get_default_alignment(fd));
>  
> @@ -250,8 +250,8 @@ static void test_engine_group_busyness(int fd, int gt, int class, const char *na
>  	for (i = 0; i < num_placements; i++) {
>  		spin_opts.addr = addr + (char *)&data[i].spin - (char *)data;
>  		xe_spin_init(&data[i].spin, &spin_opts);
> -		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
> -		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
> +		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
> +		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  		sync[1].handle = syncobjs[i];
>  
>  		exec.exec_queue_id = exec_queues[i];
> @@ -268,7 +268,7 @@ static void test_engine_group_busyness(int fd, int gt, int class, const char *na
>  
>  	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
>  
> -	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
> +	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  	xe_vm_unbind_async(fd, vm, 0, 0, addr, bo_size, sync, 1);
>  	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
>  
> diff --git a/tests/intel/xe_pm.c b/tests/intel/xe_pm.c
> index d07ed4535..18afb68b0 100644
> --- a/tests/intel/xe_pm.c
> +++ b/tests/intel/xe_pm.c
> @@ -231,8 +231,8 @@ test_exec(device_t device, struct drm_xe_engine_class_instance *eci,
>  	uint32_t vm;
>  	uint64_t addr = 0x1a0000;
>  	struct drm_xe_sync sync[2] = {
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
>  	};
>  	struct drm_xe_exec exec = {
>  		.num_batch_buffer = 1,
> @@ -259,7 +259,7 @@ test_exec(device_t device, struct drm_xe_engine_class_instance *eci,
>  	if (check_rpm)
>  		igt_assert(in_d3(device, d_state));
>  
> -	vm = xe_vm_create(device.fd_xe, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> +	vm = xe_vm_create(device.fd_xe, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  
>  	if (check_rpm)
>  		igt_assert(out_of_d3(device, d_state));
> @@ -304,8 +304,8 @@ test_exec(device_t device, struct drm_xe_engine_class_instance *eci,
>  		data[i].batch[b++] = MI_BATCH_BUFFER_END;
>  		igt_assert(b <= ARRAY_SIZE(data[i].batch));
>  
> -		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
> -		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
> +		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
> +		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  		sync[1].handle = syncobjs[e];
>  
>  		exec.exec_queue_id = exec_queues[e];
> @@ -331,7 +331,7 @@ test_exec(device_t device, struct drm_xe_engine_class_instance *eci,
>  	if (check_rpm && runtime_usage_available(device.pci_xe))
>  		rpm_usage = igt_pm_get_runtime_usage(device.pci_xe);
>  
> -	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
> +	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  	xe_vm_unbind_async(device.fd_xe, vm, bind_exec_queues[0], 0, addr,
>  			   bo_size, sync, 1);
>  	igt_assert(syncobj_wait(device.fd_xe, &sync[0].handle, 1, INT64_MAX, 0,
> diff --git a/tests/intel/xe_pm_residency.c b/tests/intel/xe_pm_residency.c
> index 8e9197fae..c87eeef3c 100644
> --- a/tests/intel/xe_pm_residency.c
> +++ b/tests/intel/xe_pm_residency.c
> @@ -87,7 +87,7 @@ static void exec_load(int fd, struct drm_xe_engine_class_instance *hwe, unsigned
>  	} *data;
>  
>  	struct drm_xe_sync sync = {
> -		.flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL,
> +		.flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL,
>  	};
>  
>  	struct drm_xe_exec exec = {
> diff --git a/tests/intel/xe_spin_batch.c b/tests/intel/xe_spin_batch.c
> index eb5d6aba8..6ab604d9b 100644
> --- a/tests/intel/xe_spin_batch.c
> +++ b/tests/intel/xe_spin_batch.c
> @@ -145,7 +145,7 @@ static void xe_spin_fixed_duration(int fd)
>  {
>  	struct drm_xe_sync sync = {
>  		.handle = syncobj_create(fd, 0),
> -		.flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL,
> +		.flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL,
>  	};
>  	struct drm_xe_exec exec = {
>  		.num_batch_buffer = 1,
> diff --git a/tests/intel/xe_vm.c b/tests/intel/xe_vm.c
> index 6700a6a55..86c8d0c5d 100644
> --- a/tests/intel/xe_vm.c
> +++ b/tests/intel/xe_vm.c
> @@ -89,7 +89,7 @@ write_dwords(int fd, uint32_t vm, int n_dwords, uint64_t *addrs)
>  static void
>  test_scratch(int fd)
>  {
> -	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_SCRATCH_PAGE, 0);
> +	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_SCRATCH_PAGE, 0);
>  	uint64_t addrs[] = {
>  		0x000000000000ull,
>  		0x7ffdb86402d8ull,
> @@ -124,7 +124,7 @@ __test_bind_one_bo(int fd, uint32_t vm, int n_addrs, uint64_t *addrs)
>  		uint64_t bind_addr = addrs[i] & ~(uint64_t)(bo_size - 1);
>  
>  		if (!vm)
> -			vms[i] = xe_vm_create(fd, DRM_XE_VM_CREATE_SCRATCH_PAGE,
> +			vms[i] = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_SCRATCH_PAGE,
>  					      0);
>  		igt_debug("Binding addr %"PRIx64"\n", addrs[i]);
>  		xe_vm_bind_sync(fd, vm ? vm : vms[i], bo, 0,
> @@ -214,7 +214,7 @@ test_bind_once(int fd)
>  	uint64_t addr = 0x7ffdb86402d8ull;
>  
>  	__test_bind_one_bo(fd,
> -			   xe_vm_create(fd, DRM_XE_VM_CREATE_SCRATCH_PAGE, 0),
> +			   xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_SCRATCH_PAGE, 0),
>  			   1, &addr);
>  }
>  
> @@ -234,7 +234,7 @@ test_bind_one_bo_many_times(int fd)
>  						ARRAY_SIZE(addrs_48b);
>  
>  	__test_bind_one_bo(fd,
> -			   xe_vm_create(fd, DRM_XE_VM_CREATE_SCRATCH_PAGE, 0),
> +			   xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_SCRATCH_PAGE, 0),
>  			   addrs_size, addrs);
>  }
>  
> @@ -265,14 +265,14 @@ test_bind_one_bo_many_times_many_vm(int fd)
>  
>  static void test_partial_unbinds(int fd)
>  {
> -	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> +	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  	size_t bo_size = 3 * xe_get_default_alignment(fd);
>  	uint32_t bo = xe_bo_create(fd, 0, vm, bo_size);
>  	uint64_t unbind_size = bo_size / 3;
>  	uint64_t addr = 0x1a0000;
>  
>  	struct drm_xe_sync sync = {
> -	    .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL,
> +	    .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL,
>  	    .handle = syncobj_create(fd, 0),
>  	};
>  
> @@ -312,10 +312,10 @@ static void unbind_all(int fd, int n_vmas)
>  	uint32_t vm;
>  	int i;
>  	struct drm_xe_sync sync[1] = {
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
>  	};
>  
> -	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> +	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  	bo = xe_bo_create(fd, 0, vm, bo_size);
>  
>  	for (i = 0; i < n_vmas; ++i)
> @@ -387,8 +387,8 @@ shared_pte_page(int fd, struct drm_xe_engine_class_instance *eci, int n_bo,
>  	uint32_t vm;
>  	uint64_t addr = 0x1000 * 512;
>  	struct drm_xe_sync sync[2] = {
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
>  	};
>  	struct drm_xe_sync sync_all[MAX_N_EXEC_QUEUES + 1];
>  	struct drm_xe_exec exec = {
> @@ -412,7 +412,7 @@ shared_pte_page(int fd, struct drm_xe_engine_class_instance *eci, int n_bo,
>  	data = malloc(sizeof(*data) * n_bo);
>  	igt_assert(data);
>  
> -	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> +	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  	bo_size = sizeof(struct shared_pte_page_data);
>  	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
>  			xe_get_default_alignment(fd));
> @@ -430,7 +430,7 @@ shared_pte_page(int fd, struct drm_xe_engine_class_instance *eci, int n_bo,
>  	for (i = 0; i < n_exec_queues; i++) {
>  		exec_queues[i] = xe_exec_queue_create(fd, vm, eci, 0);
>  		syncobjs[i] = syncobj_create(fd, 0);
> -		sync_all[i].flags = DRM_XE_SYNC_SYNCOBJ;
> +		sync_all[i].flags = DRM_XE_SYNC_FLAG_SYNCOBJ;
>  		sync_all[i].handle = syncobjs[i];
>  	};
>  
> @@ -455,8 +455,8 @@ shared_pte_page(int fd, struct drm_xe_engine_class_instance *eci, int n_bo,
>  		data[i]->batch[b++] = MI_BATCH_BUFFER_END;
>  		igt_assert(b <= ARRAY_SIZE(data[i]->batch));
>  
> -		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
> -		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
> +		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
> +		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  		sync[1].handle = syncobjs[e];
>  
>  		exec.exec_queue_id = exec_queues[e];
> @@ -468,7 +468,7 @@ shared_pte_page(int fd, struct drm_xe_engine_class_instance *eci, int n_bo,
>  		if (i % 2)
>  			continue;
>  
> -		sync_all[n_execs].flags = DRM_XE_SYNC_SIGNAL;
> +		sync_all[n_execs].flags = DRM_XE_SYNC_FLAG_SIGNAL;
>  		sync_all[n_execs].handle = sync[0].handle;
>  		xe_vm_unbind_async(fd, vm, 0, 0, addr + i * addr_stride,
>  				   bo_size, sync_all, n_execs + 1);
> @@ -504,8 +504,8 @@ shared_pte_page(int fd, struct drm_xe_engine_class_instance *eci, int n_bo,
>  		data[i]->batch[b++] = MI_BATCH_BUFFER_END;
>  		igt_assert(b <= ARRAY_SIZE(data[i]->batch));
>  
> -		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
> -		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
> +		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
> +		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  		sync[1].handle = syncobjs[e];
>  
>  		exec.exec_queue_id = exec_queues[e];
> @@ -518,7 +518,7 @@ shared_pte_page(int fd, struct drm_xe_engine_class_instance *eci, int n_bo,
>  		if (!(i % 2))
>  			continue;
>  
> -		sync_all[n_execs].flags = DRM_XE_SYNC_SIGNAL;
> +		sync_all[n_execs].flags = DRM_XE_SYNC_FLAG_SIGNAL;
>  		sync_all[n_execs].handle = sync[0].handle;
>  		xe_vm_unbind_async(fd, vm, 0, 0, addr + i * addr_stride,
>  				   bo_size, sync_all, n_execs + 1);
> @@ -573,8 +573,8 @@ test_bind_execqueues_independent(int fd, struct drm_xe_engine_class_instance *ec
>  	uint32_t vm;
>  	uint64_t addr = 0x1a0000;
>  	struct drm_xe_sync sync[2] = {
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
>  	};
>  	struct drm_xe_exec exec = {
>  		.num_batch_buffer = 1,
> @@ -596,7 +596,7 @@ test_bind_execqueues_independent(int fd, struct drm_xe_engine_class_instance *ec
>  	struct xe_spin_opts spin_opts = { .preempt = true };
>  	int i, b;
>  
> -	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> +	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  	bo_size = sizeof(*data) * N_EXEC_QUEUES;
>  	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
>  			xe_get_default_alignment(fd));
> @@ -630,22 +630,22 @@ test_bind_execqueues_independent(int fd, struct drm_xe_engine_class_instance *ec
>  			xe_spin_init(&data[i].spin, &spin_opts);
>  			exec.exec_queue_id = exec_queues[e];
>  			exec.address = spin_opts.addr;
> -			sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
> -			sync[1].flags |= DRM_XE_SYNC_SIGNAL;
> +			sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
> +			sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  			sync[1].handle = syncobjs[e];
>  			xe_exec(fd, &exec);
>  			xe_spin_wait_started(&data[i].spin);
>  
>  			/* Do bind to 1st exec_queue blocked on cork */
>  			addr += (flags & CONFLICT) ? (0x1 << 21) : bo_size;
> -			sync[1].flags &= ~DRM_XE_SYNC_SIGNAL;
> +			sync[1].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
>  			sync[1].handle = syncobjs[e];
>  			xe_vm_bind_async(fd, vm, bind_exec_queues[e], bo, 0, addr,
>  					 bo_size, sync + 1, 1);
>  			addr += bo_size;
>  		} else {
>  			/* Do bind to 2nd exec_queue which blocks write below */
> -			sync[0].flags |= DRM_XE_SYNC_SIGNAL;
> +			sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  			xe_vm_bind_async(fd, vm, bind_exec_queues[e], bo, 0, addr,
>  					 bo_size, sync, 1);
>  		}
> @@ -663,8 +663,8 @@ test_bind_execqueues_independent(int fd, struct drm_xe_engine_class_instance *ec
>  		data[i].batch[b++] = MI_BATCH_BUFFER_END;
>  		igt_assert(b <= ARRAY_SIZE(data[i].batch));
>  
> -		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
> -		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
> +		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
> +		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  		sync[1].handle = syncobjs[!i ? N_EXEC_QUEUES : e];
>  
>  		exec.num_syncs = 2;
> @@ -708,7 +708,7 @@ test_bind_execqueues_independent(int fd, struct drm_xe_engine_class_instance *ec
>  
>  	syncobj_destroy(fd, sync[0].handle);
>  	sync[0].handle = syncobj_create(fd, 0);
> -	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
> +	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  	xe_vm_unbind_all_async(fd, vm, 0, bo, sync, 1);
>  	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
>  
> @@ -755,8 +755,8 @@ test_bind_array(int fd, struct drm_xe_engine_class_instance *eci, int n_execs,
>  	uint32_t vm;
>  	uint64_t addr = 0x1a0000, base_addr = 0x1a0000;
>  	struct drm_xe_sync sync[2] = {
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
>  	};
>  	struct drm_xe_exec exec = {
>  		.num_batch_buffer = 1,
> @@ -776,7 +776,7 @@ test_bind_array(int fd, struct drm_xe_engine_class_instance *eci, int n_execs,
>  
>  	igt_assert(n_execs <= BIND_ARRAY_MAX_N_EXEC);
>  
> -	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> +	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  	bo_size = sizeof(*data) * n_execs;
>  	bo_size = ALIGN(bo_size + xe_cs_prefetch_size(fd),
>  			xe_get_default_alignment(fd));
> @@ -822,8 +822,8 @@ test_bind_array(int fd, struct drm_xe_engine_class_instance *eci, int n_execs,
>  		data[i].batch[b++] = MI_BATCH_BUFFER_END;
>  		igt_assert(b <= ARRAY_SIZE(data[i].batch));
>  
> -		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
> -		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
> +		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
> +		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  		if (i == n_execs - 1) {
>  			sync[1].handle = syncobj_create(fd, 0);
>  			exec.num_syncs = 2;
> @@ -845,8 +845,8 @@ test_bind_array(int fd, struct drm_xe_engine_class_instance *eci, int n_execs,
>  	}
>  
>  	syncobj_reset(fd, &sync[0].handle, 1);
> -	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
> -	sync[1].flags &= ~DRM_XE_SYNC_SIGNAL;
> +	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
> +	sync[1].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
>  	xe_vm_bind_array(fd, vm, bind_exec_queue, bind_ops, n_execs, sync, 2);
>  
>  	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
> @@ -943,8 +943,8 @@ test_large_binds(int fd, struct drm_xe_engine_class_instance *eci,
>  		 unsigned int flags)
>  {
>  	struct drm_xe_sync sync[2] = {
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
>  	};
>  	struct drm_xe_exec exec = {
>  		.num_batch_buffer = 1,
> @@ -970,7 +970,7 @@ test_large_binds(int fd, struct drm_xe_engine_class_instance *eci,
>  	}
>  
>  	igt_assert(n_exec_queues <= MAX_N_EXEC_QUEUES);
> -	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> +	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  
>  	if (flags & LARGE_BIND_FLAG_USERPTR) {
>  		map = aligned_alloc(xe_get_default_alignment(fd), bo_size);
> @@ -1027,8 +1027,8 @@ test_large_binds(int fd, struct drm_xe_engine_class_instance *eci,
>  		data[i].batch[b++] = MI_BATCH_BUFFER_END;
>  		igt_assert(b <= ARRAY_SIZE(data[i].batch));
>  
> -		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
> -		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
> +		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
> +		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  		sync[1].handle = syncobjs[e];
>  
>  		if (i != e)
> @@ -1050,7 +1050,7 @@ test_large_binds(int fd, struct drm_xe_engine_class_instance *eci,
>  	igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
>  
>  	syncobj_reset(fd, &sync[0].handle, 1);
> -	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
> +	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  	if (flags & LARGE_BIND_FLAG_SPLIT) {
>  		xe_vm_unbind_async(fd, vm, 0, 0, base_addr,
>  				   bo_size / 2, NULL, 0);
> @@ -1103,7 +1103,7 @@ static void *hammer_thread(void *tdata)
>  {
>  	struct thread_data *t = tdata;
>  	struct drm_xe_sync sync[1] = {
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
>  	};
>  	struct drm_xe_exec exec = {
>  		.num_batch_buffer = 1,
> @@ -1227,8 +1227,8 @@ test_munmap_style_unbind(int fd, struct drm_xe_engine_class_instance *eci,
>  			 unsigned int flags)
>  {
>  	struct drm_xe_sync sync[2] = {
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
>  	};
>  	struct drm_xe_exec exec = {
>  		.num_batch_buffer = 1,
> @@ -1262,7 +1262,7 @@ test_munmap_style_unbind(int fd, struct drm_xe_engine_class_instance *eci,
>  			unbind_n_page_offset *= n_page_per_2mb;
>  	}
>  
> -	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> +	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  	bo_size = page_size * bo_n_pages;
>  
>  	if (flags & MAP_FLAG_USERPTR) {
> @@ -1330,10 +1330,10 @@ test_munmap_style_unbind(int fd, struct drm_xe_engine_class_instance *eci,
>  		data->batch[b++] = MI_BATCH_BUFFER_END;
>  		igt_assert(b <= ARRAY_SIZE(data[i].batch));
>  
> -		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
> +		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
>  		if (i)
>  			syncobj_reset(fd, &sync[1].handle, 1);
> -		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
> +		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  
>  		exec.exec_queue_id = exec_queue;
>  		exec.address = batch_addr;
> @@ -1345,8 +1345,8 @@ test_munmap_style_unbind(int fd, struct drm_xe_engine_class_instance *eci,
>  
>  	/* Unbind some of the pages */
>  	syncobj_reset(fd, &sync[0].handle, 1);
> -	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
> -	sync[1].flags &= ~DRM_XE_SYNC_SIGNAL;
> +	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
> +	sync[1].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
>  	xe_vm_unbind_async(fd, vm, 0, 0,
>  			   addr + unbind_n_page_offset * page_size,
>  			   unbind_n_pages * page_size, sync, 2);
> @@ -1387,9 +1387,9 @@ try_again_after_invalidate:
>  			data->batch[b++] = MI_BATCH_BUFFER_END;
>  			igt_assert(b <= ARRAY_SIZE(data[i].batch));
>  
> -			sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
> +			sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
>  			syncobj_reset(fd, &sync[1].handle, 1);
> -			sync[1].flags |= DRM_XE_SYNC_SIGNAL;
> +			sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  
>  			exec.exec_queue_id = exec_queue;
>  			exec.address = batch_addr;
> @@ -1430,7 +1430,7 @@ try_again_after_invalidate:
>  
>  	/* Confirm unbound region can be rebound */
>  	syncobj_reset(fd, &sync[0].handle, 1);
> -	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
> +	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  	if (flags & MAP_FLAG_USERPTR)
>  		xe_vm_bind_userptr_async(fd, vm, 0,
>  					 addr + unbind_n_page_offset * page_size,
> @@ -1458,9 +1458,9 @@ try_again_after_invalidate:
>  		data->batch[b++] = MI_BATCH_BUFFER_END;
>  		igt_assert(b <= ARRAY_SIZE(data[i].batch));
>  
> -		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
> +		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
>  		syncobj_reset(fd, &sync[1].handle, 1);
> -		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
> +		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  
>  		exec.exec_queue_id = exec_queue;
>  		exec.address = batch_addr;
> @@ -1528,8 +1528,8 @@ test_mmap_style_bind(int fd, struct drm_xe_engine_class_instance *eci,
>  		     int unbind_n_pages, unsigned int flags)
>  {
>  	struct drm_xe_sync sync[2] = {
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> -		{ .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
> +		{ .flags = DRM_XE_SYNC_FLAG_SYNCOBJ | DRM_XE_SYNC_FLAG_SIGNAL, },
>  	};
>  	struct drm_xe_exec exec = {
>  		.num_batch_buffer = 1,
> @@ -1562,7 +1562,7 @@ test_mmap_style_bind(int fd, struct drm_xe_engine_class_instance *eci,
>  			unbind_n_page_offset *= n_page_per_2mb;
>  	}
>  
> -	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> +	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  	bo_size = page_size * bo_n_pages;
>  
>  	if (flags & MAP_FLAG_USERPTR) {
> @@ -1636,10 +1636,10 @@ test_mmap_style_bind(int fd, struct drm_xe_engine_class_instance *eci,
>  		data->batch[b++] = MI_BATCH_BUFFER_END;
>  		igt_assert(b <= ARRAY_SIZE(data[i].batch));
>  
> -		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
> +		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
>  		if (i)
>  			syncobj_reset(fd, &sync[1].handle, 1);
> -		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
> +		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  
>  		exec.exec_queue_id = exec_queue;
>  		exec.address = batch_addr;
> @@ -1651,8 +1651,8 @@ test_mmap_style_bind(int fd, struct drm_xe_engine_class_instance *eci,
>  
>  	/* Bind some of the pages to different BO / userptr */
>  	syncobj_reset(fd, &sync[0].handle, 1);
> -	sync[0].flags |= DRM_XE_SYNC_SIGNAL;
> -	sync[1].flags &= ~DRM_XE_SYNC_SIGNAL;
> +	sync[0].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
> +	sync[1].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
>  	if (flags & MAP_FLAG_USERPTR)
>  		xe_vm_bind_userptr_async(fd, vm, 0, addr + bo_size +
>  					 unbind_n_page_offset * page_size,
> @@ -1704,10 +1704,10 @@ test_mmap_style_bind(int fd, struct drm_xe_engine_class_instance *eci,
>  		data->batch[b++] = MI_BATCH_BUFFER_END;
>  		igt_assert(b <= ARRAY_SIZE(data[i].batch));
>  
> -		sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
> +		sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
>  		if (i)
>  			syncobj_reset(fd, &sync[1].handle, 1);
> -		sync[1].flags |= DRM_XE_SYNC_SIGNAL;
> +		sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
>  
>  		exec.exec_queue_id = exec_queue;
>  		exec.address = batch_addr;
> diff --git a/tests/intel/xe_waitfence.c b/tests/intel/xe_waitfence.c
> index ac7e99dde..2efdc1245 100644
> --- a/tests/intel/xe_waitfence.c
> +++ b/tests/intel/xe_waitfence.c
> @@ -30,7 +30,7 @@ static void do_bind(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
>  		    uint64_t addr, uint64_t size, uint64_t val)
>  {
>  	struct drm_xe_sync sync[1] = {};
> -	sync[0].flags = DRM_XE_SYNC_USER_FENCE | DRM_XE_SYNC_SIGNAL;
> +	sync[0].flags = DRM_XE_SYNC_FLAG_USER_FENCE | DRM_XE_SYNC_FLAG_SIGNAL;
>  
>  	sync[0].addr = to_user_pointer(&wait_fence);
>  	sync[0].timeline_value = val;
> @@ -63,7 +63,7 @@ waitfence(int fd, enum waittype wt)
>  	uint32_t bo_7;
>  	int64_t timeout;
>  
> -	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> +	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  	bo_1 = xe_bo_create_flags(fd, vm, 0x40000, MY_FLAG);
>  	do_bind(fd, vm, bo_1, 0, 0x200000, 0x40000, 1);
>  	bo_2 = xe_bo_create_flags(fd, vm, 0x40000, MY_FLAG);
> @@ -132,7 +132,7 @@ invalid_flag(int fd)
>  		.instances = 0,
>  	};
>  
> -	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> +	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  
>  	bo = xe_bo_create_flags(fd, vm, 0x40000, MY_FLAG);
>  
> @@ -157,7 +157,7 @@ invalid_ops(int fd)
>  		.instances = 0,
>  	};
>  
> -	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> +	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  
>  	bo = xe_bo_create_flags(fd, vm, 0x40000, MY_FLAG);
>  
> @@ -182,7 +182,7 @@ invalid_engine(int fd)
>  		.instances = 0,
>  	};
>  
> -	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_DEFAULT, 0);
> +	uint32_t vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT, 0);
>  
>  	bo = xe_bo_create_flags(fd, vm, 0x40000, MY_FLAG);
>  
> -- 
> 2.34.1
> 

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [igt-dev] [PATCH v1 3/8] drm-uapi/xe: Change rsvd to pad in struct drm_xe_class_instance
  2023-11-14 13:44 ` [igt-dev] [PATCH v1 3/8] drm-uapi/xe: Change rsvd to pad in struct drm_xe_class_instance Francois Dugast
@ 2023-11-14 13:47   ` Rodrigo Vivi
  0 siblings, 0 replies; 18+ messages in thread
From: Rodrigo Vivi @ 2023-11-14 13:47 UTC (permalink / raw)
  To: Francois Dugast; +Cc: igt-dev

On Tue, Nov 14, 2023 at 01:44:21PM +0000, Francois Dugast wrote:
> Align with commit ("drm/xe/uapi: Change rsvd to pad in struct drm_xe_class_instance")
> 
> Signed-off-by: Francois Dugast <francois.dugast@intel.com>


Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>


> ---
>  include/drm-uapi/xe_drm.h | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
> index 68d005202..32f6cf631 100644
> --- a/include/drm-uapi/xe_drm.h
> +++ b/include/drm-uapi/xe_drm.h
> @@ -141,7 +141,8 @@ struct drm_xe_engine_class_instance {
>  
>  	__u16 engine_instance;
>  	__u16 gt_id;
> -	__u16 rsvd;
> +	/** @pad: MBZ */
> +	__u16 pad;
>  };
>  
>  /**
> -- 
> 2.34.1
> 

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [igt-dev] [PATCH v1 1/8] drm-uapi/xe: Add missing DRM_ prefix in uAPI constants
  2023-11-14 13:44 ` [igt-dev] [PATCH v1 1/8] drm-uapi/xe: Add missing DRM_ prefix in uAPI constants Francois Dugast
@ 2023-11-14 13:48   ` Rodrigo Vivi
  0 siblings, 0 replies; 18+ messages in thread
From: Rodrigo Vivi @ 2023-11-14 13:48 UTC (permalink / raw)
  To: Francois Dugast; +Cc: igt-dev

On Tue, Nov 14, 2023 at 01:44:19PM +0000, Francois Dugast wrote:
> Align with commit ("drm/xe/uapi: Add missing DRM_ prefix in uAPI constants")
> 
> Signed-off-by: Francois Dugast <francois.dugast@intel.com>


Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>


> ---
>  include/drm-uapi/xe_drm.h            | 124 +++++++++++++--------------
>  lib/intel_batchbuffer.c              |   8 +-
>  lib/intel_blt.c                      |   2 +-
>  lib/xe/xe_ioctl.c                    |  22 ++---
>  lib/xe/xe_query.c                    |  12 +--
>  lib/xe/xe_query.h                    |   4 +-
>  lib/xe/xe_util.c                     |  10 +--
>  lib/xe/xe_util.h                     |   4 +-
>  tests/intel/xe_access_counter.c      |   4 +-
>  tests/intel/xe_ccs.c                 |   4 +-
>  tests/intel/xe_copy_basic.c          |   4 +-
>  tests/intel/xe_debugfs.c             |  12 +--
>  tests/intel/xe_exec_basic.c          |   8 +-
>  tests/intel/xe_exec_fault_mode.c     |   4 +-
>  tests/intel/xe_exec_queue_property.c |  18 ++--
>  tests/intel/xe_exec_reset.c          |  20 ++---
>  tests/intel/xe_exec_threads.c        |   4 +-
>  tests/intel/xe_exercise_blt.c        |   4 +-
>  tests/intel/xe_perf_pmu.c            |   8 +-
>  tests/intel/xe_pm.c                  |   2 +-
>  tests/intel/xe_query.c               |  40 ++++-----
>  tests/intel/xe_vm.c                  |  10 +--
>  22 files changed, 164 insertions(+), 164 deletions(-)
> 
> diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
> index babfaf0fe..9ab6c3269 100644
> --- a/include/drm-uapi/xe_drm.h
> +++ b/include/drm-uapi/xe_drm.h
> @@ -19,12 +19,12 @@ extern "C" {
>  /**
>   * DOC: uevent generated by xe on it's pci node.
>   *
> - * XE_RESET_FAILED_UEVENT - Event is generated when attempt to reset gt
> + * DRM_XE_RESET_FAILED_UEVENT - Event is generated when attempt to reset gt
>   * fails. The value supplied with the event is always "NEEDS_RESET".
>   * Additional information supplied is tile id and gt id of the gt unit for
>   * which reset has failed.
>   */
> -#define XE_RESET_FAILED_UEVENT "DEVICE_STATUS"
> +#define DRM_XE_RESET_FAILED_UEVENT "DEVICE_STATUS"
>  
>  /**
>   * struct xe_user_extension - Base class for defining a chain of extensions
> @@ -148,14 +148,14 @@ struct drm_xe_engine_class_instance {
>   * enum drm_xe_memory_class - Supported memory classes.
>   */
>  enum drm_xe_memory_class {
> -	/** @XE_MEM_REGION_CLASS_SYSMEM: Represents system memory. */
> -	XE_MEM_REGION_CLASS_SYSMEM = 0,
> +	/** @DRM_XE_MEM_REGION_CLASS_SYSMEM: Represents system memory. */
> +	DRM_XE_MEM_REGION_CLASS_SYSMEM = 0,
>  	/**
> -	 * @XE_MEM_REGION_CLASS_VRAM: On discrete platforms, this
> +	 * @DRM_XE_MEM_REGION_CLASS_VRAM: On discrete platforms, this
>  	 * represents the memory that is local to the device, which we
>  	 * call VRAM. Not valid on integrated platforms.
>  	 */
> -	XE_MEM_REGION_CLASS_VRAM
> +	DRM_XE_MEM_REGION_CLASS_VRAM
>  };
>  
>  /**
> @@ -215,7 +215,7 @@ struct drm_xe_query_mem_region {
>  	 * always equal the @total_size, since all of it will be CPU
>  	 * accessible.
>  	 *
> -	 * Note this is only tracked for XE_MEM_REGION_CLASS_VRAM
> +	 * Note this is only tracked for DRM_XE_MEM_REGION_CLASS_VRAM
>  	 * regions (for other types the value here will always equal
>  	 * zero).
>  	 */
> @@ -227,7 +227,7 @@ struct drm_xe_query_mem_region {
>  	 * Requires CAP_PERFMON or CAP_SYS_ADMIN to get reliable
>  	 * accounting. Without this the value here will always equal
>  	 * zero.  Note this is only currently tracked for
> -	 * XE_MEM_REGION_CLASS_VRAM regions (for other types the value
> +	 * DRM_XE_MEM_REGION_CLASS_VRAM regions (for other types the value
>  	 * here will always be zero).
>  	 */
>  	__u64 cpu_visible_used;
> @@ -320,12 +320,12 @@ struct drm_xe_query_config {
>  	/** @pad: MBZ */
>  	__u32 pad;
>  
> -#define XE_QUERY_CONFIG_REV_AND_DEVICE_ID	0
> -#define XE_QUERY_CONFIG_FLAGS			1
> -	#define XE_QUERY_CONFIG_FLAGS_HAS_VRAM		(0x1 << 0)
> -#define XE_QUERY_CONFIG_MIN_ALIGNMENT		2
> -#define XE_QUERY_CONFIG_VA_BITS			3
> -#define XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY	4
> +#define DRM_XE_QUERY_CONFIG_REV_AND_DEVICE_ID	0
> +#define DRM_XE_QUERY_CONFIG_FLAGS			1
> +	#define DRM_XE_QUERY_CONFIG_FLAGS_HAS_VRAM		(0x1 << 0)
> +#define DRM_XE_QUERY_CONFIG_MIN_ALIGNMENT		2
> +#define DRM_XE_QUERY_CONFIG_VA_BITS			3
> +#define DRM_XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY	4
>  	/** @info: array of elements containing the config info */
>  	__u64 info[];
>  };
> @@ -339,8 +339,8 @@ struct drm_xe_query_config {
>   * implementing graphics and/or media operations.
>   */
>  struct drm_xe_query_gt {
> -#define XE_QUERY_GT_TYPE_MAIN		0
> -#define XE_QUERY_GT_TYPE_MEDIA		1
> +#define DRM_XE_QUERY_GT_TYPE_MAIN		0
> +#define DRM_XE_QUERY_GT_TYPE_MEDIA		1
>  	/** @type: GT type: Main or Media */
>  	__u16 type;
>  	/** @gt_id: Unique ID of this GT within the PCI Device */
> @@ -400,7 +400,7 @@ struct drm_xe_query_topology_mask {
>  	 *   DSS_GEOMETRY    ff ff ff ff 00 00 00 00
>  	 * means 32 DSS are available for geometry.
>  	 */
> -#define XE_TOPO_DSS_GEOMETRY	(1 << 0)
> +#define DRM_XE_TOPO_DSS_GEOMETRY	(1 << 0)
>  	/*
>  	 * To query the mask of Dual Sub Slices (DSS) available for compute
>  	 * operations. For example a query response containing the following
> @@ -408,7 +408,7 @@ struct drm_xe_query_topology_mask {
>  	 *   DSS_COMPUTE    ff ff ff ff 00 00 00 00
>  	 * means 32 DSS are available for compute.
>  	 */
> -#define XE_TOPO_DSS_COMPUTE	(1 << 1)
> +#define DRM_XE_TOPO_DSS_COMPUTE		(1 << 1)
>  	/*
>  	 * To query the mask of Execution Units (EU) available per Dual Sub
>  	 * Slices (DSS). For example a query response containing the following
> @@ -416,7 +416,7 @@ struct drm_xe_query_topology_mask {
>  	 *   EU_PER_DSS    ff ff 00 00 00 00 00 00
>  	 * means each DSS has 16 EU.
>  	 */
> -#define XE_TOPO_EU_PER_DSS	(1 << 2)
> +#define DRM_XE_TOPO_EU_PER_DSS		(1 << 2)
>  	/** @type: type of mask */
>  	__u16 type;
>  
> @@ -497,8 +497,8 @@ struct drm_xe_gem_create {
>  	 */
>  	__u64 size;
>  
> -#define XE_GEM_CREATE_FLAG_DEFER_BACKING	(0x1 << 24)
> -#define XE_GEM_CREATE_FLAG_SCANOUT		(0x1 << 25)
> +#define DRM_XE_GEM_CREATE_FLAG_DEFER_BACKING		(0x1 << 24)
> +#define DRM_XE_GEM_CREATE_FLAG_SCANOUT			(0x1 << 25)
>  /*
>   * When using VRAM as a possible placement, ensure that the corresponding VRAM
>   * allocation will always use the CPU accessible part of VRAM. This is important
> @@ -514,7 +514,7 @@ struct drm_xe_gem_create {
>   * display surfaces, therefore the kernel requires setting this flag for such
>   * objects, otherwise an error is thrown on small-bar systems.
>   */
> -#define XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM	(0x1 << 26)
> +#define DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM	(0x1 << 26)
>  	/**
>  	 * @flags: Flags, currently a mask of memory instances of where BO can
>  	 * be placed
> @@ -581,14 +581,14 @@ struct drm_xe_ext_set_property {
>  };
>  
>  struct drm_xe_vm_create {
> -#define XE_VM_EXTENSION_SET_PROPERTY	0
> +#define DRM_XE_VM_EXTENSION_SET_PROPERTY	0
>  	/** @extensions: Pointer to the first extension struct, if any */
>  	__u64 extensions;
>  
> -#define DRM_XE_VM_CREATE_SCRATCH_PAGE	(0x1 << 0)
> -#define DRM_XE_VM_CREATE_COMPUTE_MODE	(0x1 << 1)
> -#define DRM_XE_VM_CREATE_ASYNC_DEFAULT	(0x1 << 2)
> -#define DRM_XE_VM_CREATE_FAULT_MODE	(0x1 << 3)
> +#define DRM_XE_VM_CREATE_SCRATCH_PAGE		(0x1 << 0)
> +#define DRM_XE_VM_CREATE_COMPUTE_MODE		(0x1 << 1)
> +#define DRM_XE_VM_CREATE_ASYNC_DEFAULT		(0x1 << 2)
> +#define DRM_XE_VM_CREATE_FAULT_MODE		(0x1 << 3)
>  	/** @flags: Flags */
>  	__u32 flags;
>  
> @@ -644,29 +644,29 @@ struct drm_xe_vm_bind_op {
>  	 */
>  	__u64 tile_mask;
>  
> -#define XE_VM_BIND_OP_MAP		0x0
> -#define XE_VM_BIND_OP_UNMAP		0x1
> -#define XE_VM_BIND_OP_MAP_USERPTR	0x2
> -#define XE_VM_BIND_OP_UNMAP_ALL		0x3
> -#define XE_VM_BIND_OP_PREFETCH		0x4
> +#define DRM_XE_VM_BIND_OP_MAP		0x0
> +#define DRM_XE_VM_BIND_OP_UNMAP		0x1
> +#define DRM_XE_VM_BIND_OP_MAP_USERPTR	0x2
> +#define DRM_XE_VM_BIND_OP_UNMAP_ALL	0x3
> +#define DRM_XE_VM_BIND_OP_PREFETCH	0x4
>  	/** @op: Bind operation to perform */
>  	__u32 op;
>  
> -#define XE_VM_BIND_FLAG_READONLY	(0x1 << 0)
> -#define XE_VM_BIND_FLAG_ASYNC		(0x1 << 1)
> +#define DRM_XE_VM_BIND_FLAG_READONLY	(0x1 << 0)
> +#define DRM_XE_VM_BIND_FLAG_ASYNC	(0x1 << 1)
>  	/*
>  	 * Valid on a faulting VM only, do the MAP operation immediately rather
>  	 * than deferring the MAP to the page fault handler.
>  	 */
> -#define XE_VM_BIND_FLAG_IMMEDIATE	(0x1 << 2)
> +#define DRM_XE_VM_BIND_FLAG_IMMEDIATE	(0x1 << 2)
>  	/*
>  	 * When the NULL flag is set, the page tables are setup with a special
>  	 * bit which indicates writes are dropped and all reads return zero.  In
> -	 * the future, the NULL flags will only be valid for XE_VM_BIND_OP_MAP
> +	 * the future, the NULL flags will only be valid for DRM_XE_VM_BIND_OP_MAP
>  	 * operations, the BO handle MBZ, and the BO offset MBZ. This flag is
>  	 * intended to implement VK sparse bindings.
>  	 */
> -#define XE_VM_BIND_FLAG_NULL		(0x1 << 3)
> +#define DRM_XE_VM_BIND_FLAG_NULL	(0x1 << 3)
>  	/** @flags: Bind flags */
>  	__u32 flags;
>  
> @@ -721,19 +721,19 @@ struct drm_xe_vm_bind {
>  	__u64 reserved[2];
>  };
>  
> -/* For use with XE_EXEC_QUEUE_SET_PROPERTY_ACC_GRANULARITY */
> +/* For use with DRM_XE_EXEC_QUEUE_SET_PROPERTY_ACC_GRANULARITY */
>  
>  /* Monitor 128KB contiguous region with 4K sub-granularity */
> -#define XE_ACC_GRANULARITY_128K 0
> +#define DRM_XE_ACC_GRANULARITY_128K 0
>  
>  /* Monitor 2MB contiguous region with 64KB sub-granularity */
> -#define XE_ACC_GRANULARITY_2M 1
> +#define DRM_XE_ACC_GRANULARITY_2M 1
>  
>  /* Monitor 16MB contiguous region with 512KB sub-granularity */
> -#define XE_ACC_GRANULARITY_16M 2
> +#define DRM_XE_ACC_GRANULARITY_16M 2
>  
>  /* Monitor 64MB contiguous region with 2M sub-granularity */
> -#define XE_ACC_GRANULARITY_64M 3
> +#define DRM_XE_ACC_GRANULARITY_64M 3
>  
>  /**
>   * struct drm_xe_exec_queue_set_property - exec queue set property
> @@ -747,14 +747,14 @@ struct drm_xe_exec_queue_set_property {
>  	/** @exec_queue_id: Exec queue ID */
>  	__u32 exec_queue_id;
>  
> -#define XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY		0
> -#define XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE		1
> -#define XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT	2
> -#define XE_EXEC_QUEUE_SET_PROPERTY_PERSISTENCE		3
> -#define XE_EXEC_QUEUE_SET_PROPERTY_JOB_TIMEOUT		4
> -#define XE_EXEC_QUEUE_SET_PROPERTY_ACC_TRIGGER		5
> -#define XE_EXEC_QUEUE_SET_PROPERTY_ACC_NOTIFY		6
> -#define XE_EXEC_QUEUE_SET_PROPERTY_ACC_GRANULARITY	7
> +#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY			0
> +#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE		1
> +#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT	2
> +#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_PERSISTENCE		3
> +#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_JOB_TIMEOUT		4
> +#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_ACC_TRIGGER		5
> +#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_ACC_NOTIFY		6
> +#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_ACC_GRANULARITY		7
>  	/** @property: property to set */
>  	__u32 property;
>  
> @@ -766,7 +766,7 @@ struct drm_xe_exec_queue_set_property {
>  };
>  
>  struct drm_xe_exec_queue_create {
> -#define XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY               0
> +#define DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY               0
>  	/** @extensions: Pointer to the first extension struct, if any */
>  	__u64 extensions;
>  
> @@ -805,7 +805,7 @@ struct drm_xe_exec_queue_get_property {
>  	/** @exec_queue_id: Exec queue ID */
>  	__u32 exec_queue_id;
>  
> -#define XE_EXEC_QUEUE_GET_PROPERTY_BAN			0
> +#define DRM_XE_EXEC_QUEUE_GET_PROPERTY_BAN	0
>  	/** @property: property to get */
>  	__u32 property;
>  
> @@ -973,11 +973,11 @@ struct drm_xe_wait_user_fence {
>  /**
>   * DOC: XE PMU event config IDs
>   *
> - * Check 'man perf_event_open' to use the ID's XE_PMU_XXXX listed in xe_drm.h
> + * Check 'man perf_event_open' to use the IDs DRM_XE_PMU_XXXX listed in xe_drm.h
>   * in 'struct perf_event_attr' as part of perf_event_open syscall to read a
>   * particular event.
>   *
> - * For example to open the XE_PMU_RENDER_GROUP_BUSY(0):
> + * For example to open the DRMXE_PMU_RENDER_GROUP_BUSY(0):
>   *
>   * .. code-block:: C
>   *
> @@ -991,7 +991,7 @@ struct drm_xe_wait_user_fence {
>   *	attr.read_format = PERF_FORMAT_TOTAL_TIME_ENABLED;
>   *	attr.use_clockid = 1;
>   *	attr.clockid = CLOCK_MONOTONIC;
> - *	attr.config = XE_PMU_RENDER_GROUP_BUSY(0);
> + *	attr.config = DRM_XE_PMU_RENDER_GROUP_BUSY(0);
>   *
>   *	fd = syscall(__NR_perf_event_open, &attr, -1, cpu, -1, 0);
>   */
> @@ -999,15 +999,15 @@ struct drm_xe_wait_user_fence {
>  /*
>   * Top bits of every counter are GT id.
>   */
> -#define __XE_PMU_GT_SHIFT (56)
> +#define __DRM_XE_PMU_GT_SHIFT (56)
>  
> -#define ___XE_PMU_OTHER(gt, x) \
> -	(((__u64)(x)) | ((__u64)(gt) << __XE_PMU_GT_SHIFT))
> +#define ___DRM_XE_PMU_OTHER(gt, x) \
> +	(((__u64)(x)) | ((__u64)(gt) << __DRM_XE_PMU_GT_SHIFT))
>  
> -#define XE_PMU_RENDER_GROUP_BUSY(gt)		___XE_PMU_OTHER(gt, 0)
> -#define XE_PMU_COPY_GROUP_BUSY(gt)		___XE_PMU_OTHER(gt, 1)
> -#define XE_PMU_MEDIA_GROUP_BUSY(gt)		___XE_PMU_OTHER(gt, 2)
> -#define XE_PMU_ANY_ENGINE_GROUP_BUSY(gt)	___XE_PMU_OTHER(gt, 3)
> +#define DRM_XE_PMU_RENDER_GROUP_BUSY(gt)	___DRM_XE_PMU_OTHER(gt, 0)
> +#define DRM_XE_PMU_COPY_GROUP_BUSY(gt)		___DRM_XE_PMU_OTHER(gt, 1)
> +#define DRM_XE_PMU_MEDIA_GROUP_BUSY(gt)		___DRM_XE_PMU_OTHER(gt, 2)
> +#define DRM_XE_PMU_ANY_ENGINE_GROUP_BUSY(gt)	___DRM_XE_PMU_OTHER(gt, 3)
>  
>  #if defined(__cplusplus)
>  }
> diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
> index c32d04302..eb47ede50 100644
> --- a/lib/intel_batchbuffer.c
> +++ b/lib/intel_batchbuffer.c
> @@ -1286,7 +1286,7 @@ static struct drm_xe_vm_bind_op *xe_alloc_bind_ops(struct intel_bb *ibb,
>  {
>  	struct drm_i915_gem_exec_object2 **objects = ibb->objects;
>  	struct drm_xe_vm_bind_op *bind_ops, *ops;
> -	bool set_obj = (op & 0xffff) == XE_VM_BIND_OP_MAP;
> +	bool set_obj = (op & 0xffff) == DRM_XE_VM_BIND_OP_MAP;
>  
>  	bind_ops = calloc(ibb->num_objects, sizeof(*bind_ops));
>  	igt_assert(bind_ops);
> @@ -1325,8 +1325,8 @@ static void __unbind_xe_objects(struct intel_bb *ibb)
>  
>  	if (ibb->num_objects > 1) {
>  		struct drm_xe_vm_bind_op *bind_ops;
> -		uint32_t op = XE_VM_BIND_OP_UNMAP;
> -		uint32_t flags = XE_VM_BIND_FLAG_ASYNC;
> +		uint32_t op = DRM_XE_VM_BIND_OP_UNMAP;
> +		uint32_t flags = DRM_XE_VM_BIND_FLAG_ASYNC;
>  
>  		bind_ops = xe_alloc_bind_ops(ibb, op, flags, 0);
>  		xe_vm_bind_array(ibb->fd, ibb->vm_id, 0, bind_ops,
> @@ -2357,7 +2357,7 @@ __xe_bb_exec(struct intel_bb *ibb, uint64_t flags, bool sync)
>  
>  	syncs[0].handle = syncobj_create(ibb->fd, 0);
>  	if (ibb->num_objects > 1) {
> -		bind_ops = xe_alloc_bind_ops(ibb, XE_VM_BIND_OP_MAP, XE_VM_BIND_FLAG_ASYNC, 0);
> +		bind_ops = xe_alloc_bind_ops(ibb, DRM_XE_VM_BIND_OP_MAP, DRM_XE_VM_BIND_FLAG_ASYNC, 0);
>  		xe_vm_bind_array(ibb->fd, ibb->vm_id, 0, bind_ops,
>  				 ibb->num_objects, syncs, 1);
>  		free(bind_ops);
> diff --git a/lib/intel_blt.c b/lib/intel_blt.c
> index 5b682c2b6..2edcd72f3 100644
> --- a/lib/intel_blt.c
> +++ b/lib/intel_blt.c
> @@ -1804,7 +1804,7 @@ blt_create_object(const struct blt_copy_data *blt, uint32_t region,
>  		uint64_t flags = region;
>  
>  		if (create_mapping && region != system_memory(blt->fd))
> -			flags |= XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM;
> +			flags |= DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM;
>  
>  		size = ALIGN(size, xe_get_default_alignment(blt->fd));
>  		handle = xe_bo_create_flags(blt->fd, 0, size, flags);
> diff --git a/lib/xe/xe_ioctl.c b/lib/xe/xe_ioctl.c
> index c4077801e..36f10a49a 100644
> --- a/lib/xe/xe_ioctl.c
> +++ b/lib/xe/xe_ioctl.c
> @@ -67,7 +67,7 @@ void xe_vm_unbind_all_async(int fd, uint32_t vm, uint32_t exec_queue,
>  			    uint32_t num_syncs)
>  {
>  	__xe_vm_bind_assert(fd, vm, exec_queue, bo, 0, 0, 0,
> -			    XE_VM_BIND_OP_UNMAP_ALL, XE_VM_BIND_FLAG_ASYNC,
> +			    DRM_XE_VM_BIND_OP_UNMAP_ALL, DRM_XE_VM_BIND_FLAG_ASYNC,
>  			    sync, num_syncs, 0, 0);
>  }
>  
> @@ -130,7 +130,7 @@ void xe_vm_bind(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
>  		struct drm_xe_sync *sync, uint32_t num_syncs)
>  {
>  	__xe_vm_bind_assert(fd, vm, 0, bo, offset, addr, size,
> -			    XE_VM_BIND_OP_MAP, 0, sync, num_syncs, 0, 0);
> +			    DRM_XE_VM_BIND_OP_MAP, 0, sync, num_syncs, 0, 0);
>  }
>  
>  void xe_vm_unbind(int fd, uint32_t vm, uint64_t offset,
> @@ -138,7 +138,7 @@ void xe_vm_unbind(int fd, uint32_t vm, uint64_t offset,
>  		  struct drm_xe_sync *sync, uint32_t num_syncs)
>  {
>  	__xe_vm_bind_assert(fd, vm, 0, 0, offset, addr, size,
> -			    XE_VM_BIND_OP_UNMAP, 0, sync, num_syncs, 0, 0);
> +			    DRM_XE_VM_BIND_OP_UNMAP, 0, sync, num_syncs, 0, 0);
>  }
>  
>  void xe_vm_prefetch_async(int fd, uint32_t vm, uint32_t exec_queue, uint64_t offset,
> @@ -147,7 +147,7 @@ void xe_vm_prefetch_async(int fd, uint32_t vm, uint32_t exec_queue, uint64_t off
>  			  uint32_t region)
>  {
>  	__xe_vm_bind_assert(fd, vm, exec_queue, 0, offset, addr, size,
> -			    XE_VM_BIND_OP_PREFETCH, XE_VM_BIND_FLAG_ASYNC,
> +			    DRM_XE_VM_BIND_OP_PREFETCH, DRM_XE_VM_BIND_FLAG_ASYNC,
>  			    sync, num_syncs, region, 0);
>  }
>  
> @@ -156,7 +156,7 @@ void xe_vm_bind_async(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
>  		      struct drm_xe_sync *sync, uint32_t num_syncs)
>  {
>  	__xe_vm_bind_assert(fd, vm, exec_queue, bo, offset, addr, size,
> -			    XE_VM_BIND_OP_MAP, XE_VM_BIND_FLAG_ASYNC, sync,
> +			    DRM_XE_VM_BIND_OP_MAP, DRM_XE_VM_BIND_FLAG_ASYNC, sync,
>  			    num_syncs, 0, 0);
>  }
>  
> @@ -166,7 +166,7 @@ void xe_vm_bind_async_flags(int fd, uint32_t vm, uint32_t exec_queue, uint32_t b
>  			    uint32_t flags)
>  {
>  	__xe_vm_bind_assert(fd, vm, exec_queue, bo, offset, addr, size,
> -			    XE_VM_BIND_OP_MAP, XE_VM_BIND_FLAG_ASYNC | flags,
> +			    DRM_XE_VM_BIND_OP_MAP, DRM_XE_VM_BIND_FLAG_ASYNC | flags,
>  			    sync, num_syncs, 0, 0);
>  }
>  
> @@ -175,7 +175,7 @@ void xe_vm_bind_userptr_async(int fd, uint32_t vm, uint32_t exec_queue,
>  			      struct drm_xe_sync *sync, uint32_t num_syncs)
>  {
>  	__xe_vm_bind_assert(fd, vm, exec_queue, 0, userptr, addr, size,
> -			    XE_VM_BIND_OP_MAP_USERPTR, XE_VM_BIND_FLAG_ASYNC,
> +			    DRM_XE_VM_BIND_OP_MAP_USERPTR, DRM_XE_VM_BIND_FLAG_ASYNC,
>  			    sync, num_syncs, 0, 0);
>  }
>  
> @@ -185,7 +185,7 @@ void xe_vm_bind_userptr_async_flags(int fd, uint32_t vm, uint32_t exec_queue,
>  				    uint32_t num_syncs, uint32_t flags)
>  {
>  	__xe_vm_bind_assert(fd, vm, exec_queue, 0, userptr, addr, size,
> -			    XE_VM_BIND_OP_MAP_USERPTR, XE_VM_BIND_FLAG_ASYNC |
> +			    DRM_XE_VM_BIND_OP_MAP_USERPTR, DRM_XE_VM_BIND_FLAG_ASYNC |
>  			    flags, sync, num_syncs, 0, 0);
>  }
>  
> @@ -194,7 +194,7 @@ void xe_vm_unbind_async(int fd, uint32_t vm, uint32_t exec_queue,
>  			struct drm_xe_sync *sync, uint32_t num_syncs)
>  {
>  	__xe_vm_bind_assert(fd, vm, exec_queue, 0, offset, addr, size,
> -			    XE_VM_BIND_OP_UNMAP, XE_VM_BIND_FLAG_ASYNC, sync,
> +			    DRM_XE_VM_BIND_OP_UNMAP, DRM_XE_VM_BIND_FLAG_ASYNC, sync,
>  			    num_syncs, 0, 0);
>  }
>  
> @@ -208,13 +208,13 @@ static void __xe_vm_bind_sync(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
>  void xe_vm_bind_sync(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
>  		     uint64_t addr, uint64_t size)
>  {
> -	__xe_vm_bind_sync(fd, vm, bo, offset, addr, size, XE_VM_BIND_OP_MAP);
> +	__xe_vm_bind_sync(fd, vm, bo, offset, addr, size, DRM_XE_VM_BIND_OP_MAP);
>  }
>  
>  void xe_vm_unbind_sync(int fd, uint32_t vm, uint64_t offset,
>  		       uint64_t addr, uint64_t size)
>  {
> -	__xe_vm_bind_sync(fd, vm, 0, offset, addr, size, XE_VM_BIND_OP_UNMAP);
> +	__xe_vm_bind_sync(fd, vm, 0, offset, addr, size, DRM_XE_VM_BIND_OP_UNMAP);
>  }
>  
>  void xe_vm_destroy(int fd, uint32_t vm)
> diff --git a/lib/xe/xe_query.c b/lib/xe/xe_query.c
> index 06d216cf9..8df3d317a 100644
> --- a/lib/xe/xe_query.c
> +++ b/lib/xe/xe_query.c
> @@ -249,8 +249,8 @@ struct xe_device *xe_device_get(int fd)
>  
>  	xe_dev->fd = fd;
>  	xe_dev->config = xe_query_config_new(fd);
> -	xe_dev->va_bits = xe_dev->config->info[XE_QUERY_CONFIG_VA_BITS];
> -	xe_dev->dev_id = xe_dev->config->info[XE_QUERY_CONFIG_REV_AND_DEVICE_ID] & 0xffff;
> +	xe_dev->va_bits = xe_dev->config->info[DRM_XE_QUERY_CONFIG_VA_BITS];
> +	xe_dev->dev_id = xe_dev->config->info[DRM_XE_QUERY_CONFIG_REV_AND_DEVICE_ID] & 0xffff;
>  	xe_dev->gt_list = xe_query_gt_list_new(fd);
>  	xe_dev->memory_regions = __memory_regions(xe_dev->gt_list);
>  	xe_dev->hw_engines = xe_query_engines_new(fd, &xe_dev->number_hw_engines);
> @@ -414,7 +414,7 @@ static uint64_t __xe_visible_vram_size(int fd, int gt)
>   * @gt: gt id
>   *
>   * Returns vram memory bitmask for xe device @fd and @gt id, with
> - * XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM also set, to ensure that CPU access is
> + * DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM also set, to ensure that CPU access is
>   * possible.
>   */
>  uint64_t visible_vram_memory(int fd, int gt)
> @@ -424,7 +424,7 @@ uint64_t visible_vram_memory(int fd, int gt)
>  	 * has landed.
>  	 */
>  	if (__xe_visible_vram_size(fd, gt))
> -		return vram_memory(fd, gt) | XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM;
> +		return vram_memory(fd, gt) | DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM;
>  	else
>  		return vram_memory(fd, gt); /* older kernel */
>  }
> @@ -449,7 +449,7 @@ uint64_t vram_if_possible(int fd, int gt)
>   *
>   * Returns vram memory bitmask for xe device @fd and @gt id or system memory if
>   * there's no vram memory available for @gt. Also attaches the
> - * XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM to ensure that CPU access is possible
> + * DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM to ensure that CPU access is possible
>   * when using vram.
>   */
>  uint64_t visible_vram_if_possible(int fd, int gt)
> @@ -463,7 +463,7 @@ uint64_t visible_vram_if_possible(int fd, int gt)
>  	 * has landed.
>  	 */
>  	if (__xe_visible_vram_size(fd, gt))
> -		return vram ? vram | XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM : system_memory;
> +		return vram ? vram | DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM : system_memory;
>  	else
>  		return vram ? vram : system_memory; /* older kernel */
>  }
> diff --git a/lib/xe/xe_query.h b/lib/xe/xe_query.h
> index fc81cc263..3d7e22a9b 100644
> --- a/lib/xe/xe_query.h
> +++ b/lib/xe/xe_query.h
> @@ -71,8 +71,8 @@ struct xe_device {
>  	for (uint64_t __i = 0; __i < igt_fls(__memreg); __i++) \
>  		for_if(__r = (__memreg & (1ull << __i)))
>  
> -#define XE_IS_CLASS_SYSMEM(__region) ((__region)->mem_class == XE_MEM_REGION_CLASS_SYSMEM)
> -#define XE_IS_CLASS_VRAM(__region) ((__region)->mem_class == XE_MEM_REGION_CLASS_VRAM)
> +#define XE_IS_CLASS_SYSMEM(__region) ((__region)->mem_class == DRM_XE_MEM_REGION_CLASS_SYSMEM)
> +#define XE_IS_CLASS_VRAM(__region) ((__region)->mem_class == DRM_XE_MEM_REGION_CLASS_VRAM)
>  
>  unsigned int xe_number_gt(int fd);
>  uint64_t all_memory_regions(int fd);
> diff --git a/lib/xe/xe_util.c b/lib/xe/xe_util.c
> index 5fa4d4610..780125f92 100644
> --- a/lib/xe/xe_util.c
> +++ b/lib/xe/xe_util.c
> @@ -134,12 +134,12 @@ static struct drm_xe_vm_bind_op *xe_alloc_bind_ops(struct igt_list_head *obj_lis
>  		ops = &bind_ops[i];
>  
>  		if (obj->bind_op == XE_OBJECT_BIND) {
> -			op = XE_VM_BIND_OP_MAP;
> -			flags = XE_VM_BIND_FLAG_ASYNC;
> +			op = DRM_XE_VM_BIND_OP_MAP;
> +			flags = DRM_XE_VM_BIND_FLAG_ASYNC;
>  			ops->obj = obj->handle;
>  		} else {
> -			op = XE_VM_BIND_OP_UNMAP;
> -			flags = XE_VM_BIND_FLAG_ASYNC;
> +			op = DRM_XE_VM_BIND_OP_UNMAP;
> +			flags = DRM_XE_VM_BIND_FLAG_ASYNC;
>  		}
>  
>  		ops->op = op;
> @@ -211,7 +211,7 @@ void xe_bind_unbind_async(int xe, uint32_t vm, uint32_t bind_engine,
>  		  tabsyncs[0].handle, tabsyncs[1].handle);
>  
>  	if (num_binds == 1) {
> -		if ((bind_ops[0].op & 0xffff) == XE_VM_BIND_OP_MAP)
> +		if ((bind_ops[0].op & 0xffff) == DRM_XE_VM_BIND_OP_MAP)
>  			xe_vm_bind_async(xe, vm, bind_engine, bind_ops[0].obj, 0,
>  					 bind_ops[0].addr, bind_ops[0].range,
>  					 syncs, num_syncs);
> diff --git a/lib/xe/xe_util.h b/lib/xe/xe_util.h
> index e97d236b8..21b312071 100644
> --- a/lib/xe/xe_util.h
> +++ b/lib/xe/xe_util.h
> @@ -13,9 +13,9 @@
>  #include <xe_drm.h>
>  
>  #define XE_IS_SYSMEM_MEMORY_REGION(fd, region) \
> -	(xe_region_class(fd, region) == XE_MEM_REGION_CLASS_SYSMEM)
> +	(xe_region_class(fd, region) == DRM_XE_MEM_REGION_CLASS_SYSMEM)
>  #define XE_IS_VRAM_MEMORY_REGION(fd, region) \
> -	(xe_region_class(fd, region) == XE_MEM_REGION_CLASS_VRAM)
> +	(xe_region_class(fd, region) == DRM_XE_MEM_REGION_CLASS_VRAM)
>  
>  struct igt_collection *
>  __xe_get_memory_region_set(int xe, uint32_t *mem_regions_type, int num_regions);
> diff --git a/tests/intel/xe_access_counter.c b/tests/intel/xe_access_counter.c
> index b738ebc86..8966bfc9c 100644
> --- a/tests/intel/xe_access_counter.c
> +++ b/tests/intel/xe_access_counter.c
> @@ -47,8 +47,8 @@ igt_main
>  
>  		struct drm_xe_ext_set_property ext = {
>  			.base.next_extension = 0,
> -			.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
> -			.property = XE_EXEC_QUEUE_SET_PROPERTY_ACC_GRANULARITY,
> +			.base.name = DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
> +			.property = DRM_XE_EXEC_QUEUE_SET_PROPERTY_ACC_GRANULARITY,
>  			.value = SIZE_64M + 1,
>  		};
>  
> diff --git a/tests/intel/xe_ccs.c b/tests/intel/xe_ccs.c
> index 876c239e4..bb844b641 100644
> --- a/tests/intel/xe_ccs.c
> +++ b/tests/intel/xe_ccs.c
> @@ -634,8 +634,8 @@ igt_main_args("bf:pst:W:H:", NULL, help_str, opt_handler, NULL)
>  		xe_device_get(xe);
>  
>  		set = xe_get_memory_region_set(xe,
> -					       XE_MEM_REGION_CLASS_SYSMEM,
> -					       XE_MEM_REGION_CLASS_VRAM);
> +					       DRM_XE_MEM_REGION_CLASS_SYSMEM,
> +					       DRM_XE_MEM_REGION_CLASS_VRAM);
>  	}
>  
>  	igt_describe("Check block-copy uncompressed blit");
> diff --git a/tests/intel/xe_copy_basic.c b/tests/intel/xe_copy_basic.c
> index fe78ac50f..1dafbb276 100644
> --- a/tests/intel/xe_copy_basic.c
> +++ b/tests/intel/xe_copy_basic.c
> @@ -164,8 +164,8 @@ igt_main
>  		fd = drm_open_driver(DRIVER_XE);
>  		xe_device_get(fd);
>  		set = xe_get_memory_region_set(fd,
> -					       XE_MEM_REGION_CLASS_SYSMEM,
> -					       XE_MEM_REGION_CLASS_VRAM);
> +					       DRM_XE_MEM_REGION_CLASS_SYSMEM,
> +					       DRM_XE_MEM_REGION_CLASS_VRAM);
>  	}
>  
>  	for (int i = 0; i < ARRAY_SIZE(size); i++) {
> diff --git a/tests/intel/xe_debugfs.c b/tests/intel/xe_debugfs.c
> index 4104bf5ae..60ddceda7 100644
> --- a/tests/intel/xe_debugfs.c
> +++ b/tests/intel/xe_debugfs.c
> @@ -91,20 +91,20 @@ test_base(int fd, struct drm_xe_query_config *config)
>  
>  	igt_assert(config);
>  	sprintf(reference, "devid 0x%llx",
> -			config->info[XE_QUERY_CONFIG_REV_AND_DEVICE_ID] & 0xffff);
> +			config->info[DRM_XE_QUERY_CONFIG_REV_AND_DEVICE_ID] & 0xffff);
>  	igt_assert(igt_debugfs_search(fd, "info", reference));
>  
>  	sprintf(reference, "revid %lld",
> -			config->info[XE_QUERY_CONFIG_REV_AND_DEVICE_ID] >> 16);
> +			config->info[DRM_XE_QUERY_CONFIG_REV_AND_DEVICE_ID] >> 16);
>  	igt_assert(igt_debugfs_search(fd, "info", reference));
>  
> -	sprintf(reference, "is_dgfx %s", config->info[XE_QUERY_CONFIG_FLAGS] &
> -		XE_QUERY_CONFIG_FLAGS_HAS_VRAM ? "yes" : "no");
> +	sprintf(reference, "is_dgfx %s", config->info[DRM_XE_QUERY_CONFIG_FLAGS] &
> +		DRM_XE_QUERY_CONFIG_FLAGS_HAS_VRAM ? "yes" : "no");
>  
>  	igt_assert(igt_debugfs_search(fd, "info", reference));
>  
>  	if (!AT_LEAST_GEN(devid, 20)) {
> -		switch (config->info[XE_QUERY_CONFIG_VA_BITS]) {
> +		switch (config->info[DRM_XE_QUERY_CONFIG_VA_BITS]) {
>  		case 48:
>  			val = 3;
>  			break;
> @@ -125,7 +125,7 @@ test_base(int fd, struct drm_xe_query_config *config)
>  	igt_assert(igt_debugfs_exists(fd, "gtt_mm", O_RDONLY));
>  	igt_debugfs_dump(fd, "gtt_mm");
>  
> -	if (config->info[XE_QUERY_CONFIG_FLAGS] & XE_QUERY_CONFIG_FLAGS_HAS_VRAM) {
> +	if (config->info[DRM_XE_QUERY_CONFIG_FLAGS] & DRM_XE_QUERY_CONFIG_FLAGS_HAS_VRAM) {
>  		igt_assert(igt_debugfs_exists(fd, "vram0_mm", O_RDONLY));
>  		igt_debugfs_dump(fd, "vram0_mm");
>  	}
> diff --git a/tests/intel/xe_exec_basic.c b/tests/intel/xe_exec_basic.c
> index 8dbce524d..232ddde8e 100644
> --- a/tests/intel/xe_exec_basic.c
> +++ b/tests/intel/xe_exec_basic.c
> @@ -138,7 +138,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
>  
>  		bo_flags = visible_vram_if_possible(fd, eci->gt_id);
>  		if (flags & DEFER_ALLOC)
> -			bo_flags |= XE_GEM_CREATE_FLAG_DEFER_BACKING;
> +			bo_flags |= DRM_XE_GEM_CREATE_FLAG_DEFER_BACKING;
>  
>  		bo = xe_bo_create_flags(fd, n_vm == 1 ? vm[0] : 0,
>  					bo_size, bo_flags);
> @@ -172,9 +172,9 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
>  		if (flags & SPARSE)
>  			__xe_vm_bind_assert(fd, vm[i], bind_exec_queues[i],
>  					    0, 0, sparse_addr[i], bo_size,
> -					    XE_VM_BIND_OP_MAP,
> -					    XE_VM_BIND_FLAG_ASYNC |
> -					    XE_VM_BIND_FLAG_NULL, sync,
> +					    DRM_XE_VM_BIND_OP_MAP,
> +					    DRM_XE_VM_BIND_FLAG_ASYNC |
> +					    DRM_XE_VM_BIND_FLAG_NULL, sync,
>  					    1, 0, 0);
>  	}
>  
> diff --git a/tests/intel/xe_exec_fault_mode.c b/tests/intel/xe_exec_fault_mode.c
> index 64b5c59a2..477d0824d 100644
> --- a/tests/intel/xe_exec_fault_mode.c
> +++ b/tests/intel/xe_exec_fault_mode.c
> @@ -175,12 +175,12 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
>  		if (bo)
>  			xe_vm_bind_async_flags(fd, vm, bind_exec_queues[0], bo, 0,
>  					       addr, bo_size, sync, 1,
> -					       XE_VM_BIND_FLAG_IMMEDIATE);
> +					       DRM_XE_VM_BIND_FLAG_IMMEDIATE);
>  		else
>  			xe_vm_bind_userptr_async_flags(fd, vm, bind_exec_queues[0],
>  						       to_user_pointer(data),
>  						       addr, bo_size, sync, 1,
> -						       XE_VM_BIND_FLAG_IMMEDIATE);
> +						       DRM_XE_VM_BIND_FLAG_IMMEDIATE);
>  	} else {
>  		if (bo)
>  			xe_vm_bind_async(fd, vm, bind_exec_queues[0], bo, 0, addr,
> diff --git a/tests/intel/xe_exec_queue_property.c b/tests/intel/xe_exec_queue_property.c
> index 4e32aefa5..ae6b445cd 100644
> --- a/tests/intel/xe_exec_queue_property.c
> +++ b/tests/intel/xe_exec_queue_property.c
> @@ -43,11 +43,11 @@
>  static int get_property_name(const char *property)
>  {
>  	if (strstr(property, "preempt"))
> -		return XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT;
> +		return DRM_XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT;
>  	else if (strstr(property, "job_timeout"))
> -		return XE_EXEC_QUEUE_SET_PROPERTY_JOB_TIMEOUT;
> +		return DRM_XE_EXEC_QUEUE_SET_PROPERTY_JOB_TIMEOUT;
>  	else if (strstr(property, "timeslice"))
> -		return XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE;
> +		return DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE;
>  	else
>  		return -1;
>  }
> @@ -60,7 +60,7 @@ static void test_set_property(int xe, int property_name,
>  	};
>  	struct drm_xe_ext_set_property ext = {
>  		.base.next_extension = 0,
> -		.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
> +		.base.name = DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
>  		.property = property_name,
>  		.value = property_value,
>  	};
> @@ -130,19 +130,19 @@ igt_main
>  
>  	igt_subtest("priority-set-property") {
>  		/* Tests priority property by setting positive values. */
> -		test_set_property(xe, XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY,
> +		test_set_property(xe, DRM_XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY,
>  				  DRM_SCHED_PRIORITY_NORMAL, 0);
>  
>  		/* Tests priority property by setting invalid value. */
> -		test_set_property(xe, XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY,
> +		test_set_property(xe, DRM_XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY,
>  				  DRM_SCHED_PRIORITY_HIGH + 1, -EINVAL);
>  		igt_fork(child, 1) {
>  			igt_drop_root();
>  
>  			/* Tests priority property by dropping root permissions. */
> -			test_set_property(xe, XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY,
> +			test_set_property(xe, DRM_XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY,
>  					  DRM_SCHED_PRIORITY_HIGH, -EPERM);
> -			test_set_property(xe, XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY,
> +			test_set_property(xe, DRM_XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY,
>  					  DRM_SCHED_PRIORITY_NORMAL, 0);
>  		}
>  		igt_waitchildren();
> @@ -150,7 +150,7 @@ igt_main
>  
>  	igt_subtest("persistence-set-property") {
>  		/* Tests persistence property by setting positive values. */
> -		test_set_property(xe, XE_EXEC_QUEUE_SET_PROPERTY_PERSISTENCE, 1, 0);
> +		test_set_property(xe, DRM_XE_EXEC_QUEUE_SET_PROPERTY_PERSISTENCE, 1, 0);
>  
>  	}
>  
> diff --git a/tests/intel/xe_exec_reset.c b/tests/intel/xe_exec_reset.c
> index 44248776b..39647b736 100644
> --- a/tests/intel/xe_exec_reset.c
> +++ b/tests/intel/xe_exec_reset.c
> @@ -187,14 +187,14 @@ test_balancer(int fd, int gt, int class, int n_exec_queues, int n_execs,
>  	for (i = 0; i < n_exec_queues; i++) {
>  		struct drm_xe_ext_set_property job_timeout = {
>  			.base.next_extension = 0,
> -			.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
> -			.property = XE_EXEC_QUEUE_SET_PROPERTY_JOB_TIMEOUT,
> +			.base.name = DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
> +			.property = DRM_XE_EXEC_QUEUE_SET_PROPERTY_JOB_TIMEOUT,
>  			.value = 50,
>  		};
>  		struct drm_xe_ext_set_property preempt_timeout = {
>  			.base.next_extension = 0,
> -			.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
> -			.property = XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT,
> +			.base.name = DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
> +			.property = DRM_XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT,
>  			.value = 1000,
>  		};
>  		struct drm_xe_exec_queue_create create = {
> @@ -374,14 +374,14 @@ test_legacy_mode(int fd, struct drm_xe_engine_class_instance *eci,
>  	for (i = 0; i < n_exec_queues; i++) {
>  		struct drm_xe_ext_set_property job_timeout = {
>  			.base.next_extension = 0,
> -			.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
> -			.property = XE_EXEC_QUEUE_SET_PROPERTY_JOB_TIMEOUT,
> +			.base.name = DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
> +			.property = DRM_XE_EXEC_QUEUE_SET_PROPERTY_JOB_TIMEOUT,
>  			.value = 50,
>  		};
>  		struct drm_xe_ext_set_property preempt_timeout = {
>  			.base.next_extension = 0,
> -			.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
> -			.property = XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT,
> +			.base.name = DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
> +			.property = DRM_XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT,
>  			.value = 1000,
>  		};
>  		uint64_t ext = 0;
> @@ -542,8 +542,8 @@ test_compute_mode(int fd, struct drm_xe_engine_class_instance *eci,
>  	for (i = 0; i < n_exec_queues; i++) {
>  		struct drm_xe_ext_set_property preempt_timeout = {
>  			.base.next_extension = 0,
> -			.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
> -			.property = XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT,
> +			.base.name = DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
> +			.property = DRM_XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT,
>  			.value = 1000,
>  		};
>  		uint64_t ext = 0;
> diff --git a/tests/intel/xe_exec_threads.c b/tests/intel/xe_exec_threads.c
> index a0c96d08d..b814dcdf5 100644
> --- a/tests/intel/xe_exec_threads.c
> +++ b/tests/intel/xe_exec_threads.c
> @@ -520,8 +520,8 @@ test_legacy_mode(int fd, uint32_t vm, uint64_t addr, uint64_t userptr,
>  	for (i = 0; i < n_exec_queues; i++) {
>  		struct drm_xe_ext_set_property preempt_timeout = {
>  			.base.next_extension = 0,
> -			.base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
> -			.property = XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT,
> +			.base.name = DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
> +			.property = DRM_XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT,
>  			.value = 1000,
>  		};
>  		uint64_t ext = to_user_pointer(&preempt_timeout);
> diff --git a/tests/intel/xe_exercise_blt.c b/tests/intel/xe_exercise_blt.c
> index 2f349b16d..df774130f 100644
> --- a/tests/intel/xe_exercise_blt.c
> +++ b/tests/intel/xe_exercise_blt.c
> @@ -358,8 +358,8 @@ igt_main_args("b:pst:W:H:", NULL, help_str, opt_handler, NULL)
>  		xe_device_get(xe);
>  
>  		set = xe_get_memory_region_set(xe,
> -					       XE_MEM_REGION_CLASS_SYSMEM,
> -					       XE_MEM_REGION_CLASS_VRAM);
> +					       DRM_XE_MEM_REGION_CLASS_SYSMEM,
> +					       DRM_XE_MEM_REGION_CLASS_VRAM);
>  	}
>  
>  	igt_describe("Check fast-copy blit");
> diff --git a/tests/intel/xe_perf_pmu.c b/tests/intel/xe_perf_pmu.c
> index 0b25a859f..a0dd30e50 100644
> --- a/tests/intel/xe_perf_pmu.c
> +++ b/tests/intel/xe_perf_pmu.c
> @@ -51,15 +51,15 @@ static uint64_t engine_group_get_config(int gt, int class)
>  
>  	switch (class) {
>  	case DRM_XE_ENGINE_CLASS_COPY:
> -		config = XE_PMU_COPY_GROUP_BUSY(gt);
> +		config = DRM_XE_PMU_COPY_GROUP_BUSY(gt);
>  		break;
>  	case DRM_XE_ENGINE_CLASS_RENDER:
>  	case DRM_XE_ENGINE_CLASS_COMPUTE:
> -		config = XE_PMU_RENDER_GROUP_BUSY(gt);
> +		config = DRM_XE_PMU_RENDER_GROUP_BUSY(gt);
>  		break;
>  	case DRM_XE_ENGINE_CLASS_VIDEO_DECODE:
>  	case DRM_XE_ENGINE_CLASS_VIDEO_ENHANCE:
> -		config = XE_PMU_MEDIA_GROUP_BUSY(gt);
> +		config = DRM_XE_PMU_MEDIA_GROUP_BUSY(gt);
>  		break;
>  	}
>  
> @@ -112,7 +112,7 @@ static void test_any_engine_busyness(int fd, struct drm_xe_engine_class_instance
>  	sync[0].handle = syncobj_create(fd, 0);
>  	xe_vm_bind_async(fd, vm, 0, bo, 0, addr, bo_size, sync, 1);
>  
> -	pmu_fd = open_pmu(fd, XE_PMU_ANY_ENGINE_GROUP_BUSY(eci->gt_id));
> +	pmu_fd = open_pmu(fd, DRM_XE_PMU_ANY_ENGINE_GROUP_BUSY(eci->gt_id));
>  	idle = pmu_read(pmu_fd);
>  	igt_assert(!idle);
>  
> diff --git a/tests/intel/xe_pm.c b/tests/intel/xe_pm.c
> index b2976ec84..d07ed4535 100644
> --- a/tests/intel/xe_pm.c
> +++ b/tests/intel/xe_pm.c
> @@ -400,7 +400,7 @@ static void test_vram_d3cold_threshold(device_t device, int sysfs_fd)
>  	igt_assert_eq(igt_ioctl(device.fd_xe, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
>  
>  	for (i = 0; i < mem_usage->num_regions; i++) {
> -		if (mem_usage->regions[i].mem_class == XE_MEM_REGION_CLASS_VRAM) {
> +		if (mem_usage->regions[i].mem_class == DRM_XE_MEM_REGION_CLASS_VRAM) {
>  			vram_used_mb +=  (mem_usage->regions[i].used / (1024 * 1024));
>  			vram_total_mb += (mem_usage->regions[i].total_size / (1024 * 1024));
>  		}
> diff --git a/tests/intel/xe_query.c b/tests/intel/xe_query.c
> index cf966d40d..969ad1c7f 100644
> --- a/tests/intel/xe_query.c
> +++ b/tests/intel/xe_query.c
> @@ -163,9 +163,9 @@ void process_hwconfig(void *data, uint32_t len)
>  const char *get_topo_name(int value)
>  {
>  	switch(value) {
> -	case XE_TOPO_DSS_GEOMETRY: return "DSS_GEOMETRY";
> -	case XE_TOPO_DSS_COMPUTE: return "DSS_COMPUTE";
> -	case XE_TOPO_EU_PER_DSS: return "EU_PER_DSS";
> +	case DRM_XE_TOPO_DSS_GEOMETRY: return "DSS_GEOMETRY";
> +	case DRM_XE_TOPO_DSS_COMPUTE: return "DSS_COMPUTE";
> +	case DRM_XE_TOPO_EU_PER_DSS: return "EU_PER_DSS";
>  	}
>  	return "??";
>  }
> @@ -221,9 +221,9 @@ test_query_mem_usage(int fd)
>  	for (i = 0; i < mem_usage->num_regions; i++) {
>  		igt_info("mem region %d: %s\t%#llx / %#llx\n", i,
>  			mem_usage->regions[i].mem_class ==
> -			XE_MEM_REGION_CLASS_SYSMEM ? "SYSMEM"
> +			DRM_XE_MEM_REGION_CLASS_SYSMEM ? "SYSMEM"
>  			:mem_usage->regions[i].mem_class ==
> -			XE_MEM_REGION_CLASS_VRAM ? "VRAM" : "?",
> +			DRM_XE_MEM_REGION_CLASS_VRAM ? "VRAM" : "?",
>  			mem_usage->regions[i].used,
>  			mem_usage->regions[i].total_size
>  		);
> @@ -359,23 +359,23 @@ test_query_config(int fd)
>  
>  	igt_assert(config->num_params > 0);
>  
> -	igt_info("XE_QUERY_CONFIG_REV_AND_DEVICE_ID\t%#llx\n",
> -		config->info[XE_QUERY_CONFIG_REV_AND_DEVICE_ID]);
> +	igt_info("DRM_XE_QUERY_CONFIG_REV_AND_DEVICE_ID\t%#llx\n",
> +		config->info[DRM_XE_QUERY_CONFIG_REV_AND_DEVICE_ID]);
>  	igt_info("  REV_ID\t\t\t\t%#llx\n",
> -		config->info[XE_QUERY_CONFIG_REV_AND_DEVICE_ID] >> 16);
> +		config->info[DRM_XE_QUERY_CONFIG_REV_AND_DEVICE_ID] >> 16);
>  	igt_info("  DEVICE_ID\t\t\t\t%#llx\n",
> -		config->info[XE_QUERY_CONFIG_REV_AND_DEVICE_ID] & 0xffff);
> -	igt_info("XE_QUERY_CONFIG_FLAGS\t\t\t%#llx\n",
> -		config->info[XE_QUERY_CONFIG_FLAGS]);
> -	igt_info("  XE_QUERY_CONFIG_FLAGS_HAS_VRAM\t%s\n",
> -		config->info[XE_QUERY_CONFIG_FLAGS] &
> -		XE_QUERY_CONFIG_FLAGS_HAS_VRAM ? "ON":"OFF");
> -	igt_info("XE_QUERY_CONFIG_MIN_ALIGNMENT\t\t%#llx\n",
> -		config->info[XE_QUERY_CONFIG_MIN_ALIGNMENT]);
> -	igt_info("XE_QUERY_CONFIG_VA_BITS\t\t\t%llu\n",
> -		config->info[XE_QUERY_CONFIG_VA_BITS]);
> -	igt_info("XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY\t%llu\n",
> -		config->info[XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY]);
> +		config->info[DRM_XE_QUERY_CONFIG_REV_AND_DEVICE_ID] & 0xffff);
> +	igt_info("DRM_XE_QUERY_CONFIG_FLAGS\t\t\t%#llx\n",
> +		config->info[DRM_XE_QUERY_CONFIG_FLAGS]);
> +	igt_info("  DRM_XE_QUERY_CONFIG_FLAGS_HAS_VRAM\t%s\n",
> +		config->info[DRM_XE_QUERY_CONFIG_FLAGS] &
> +		DRM_XE_QUERY_CONFIG_FLAGS_HAS_VRAM ? "ON":"OFF");
> +	igt_info("DRM_XE_QUERY_CONFIG_MIN_ALIGNMENT\t\t%#llx\n",
> +		config->info[DRM_XE_QUERY_CONFIG_MIN_ALIGNMENT]);
> +	igt_info("DRM_XE_QUERY_CONFIG_VA_BITS\t\t\t%llu\n",
> +		config->info[DRM_XE_QUERY_CONFIG_VA_BITS]);
> +	igt_info("DRM_XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY\t%llu\n",
> +		config->info[DRM_XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY]);
>  	dump_hex_debug(config, query.size);
>  
>  	free(config);
> diff --git a/tests/intel/xe_vm.c b/tests/intel/xe_vm.c
> index f1ccd6c21..6700a6a55 100644
> --- a/tests/intel/xe_vm.c
> +++ b/tests/intel/xe_vm.c
> @@ -356,7 +356,7 @@ static void userptr_invalid(int fd)
>  	vm = xe_vm_create(fd, 0, 0);
>  	munmap(data, size);
>  	ret = __xe_vm_bind(fd, vm, 0, 0, to_user_pointer(data), 0x40000,
> -			   size, XE_VM_BIND_OP_MAP_USERPTR, 0, NULL, 0, 0, 0);
> +			   size, DRM_XE_VM_BIND_OP_MAP_USERPTR, 0, NULL, 0, 0, 0);
>  	igt_assert(ret == -EFAULT);
>  
>  	xe_vm_destroy(fd, vm);
> @@ -795,8 +795,8 @@ test_bind_array(int fd, struct drm_xe_engine_class_instance *eci, int n_execs,
>  		bind_ops[i].range = bo_size;
>  		bind_ops[i].addr = addr;
>  		bind_ops[i].tile_mask = 0x1 << eci->gt_id;
> -		bind_ops[i].op = XE_VM_BIND_OP_MAP;
> -		bind_ops[i].flags = XE_VM_BIND_FLAG_ASYNC;
> +		bind_ops[i].op = DRM_XE_VM_BIND_OP_MAP;
> +		bind_ops[i].flags = DRM_XE_VM_BIND_FLAG_ASYNC;
>  		bind_ops[i].region = 0;
>  		bind_ops[i].reserved[0] = 0;
>  		bind_ops[i].reserved[1] = 0;
> @@ -840,8 +840,8 @@ test_bind_array(int fd, struct drm_xe_engine_class_instance *eci, int n_execs,
>  
>  	for (i = 0; i < n_execs; ++i) {
>  		bind_ops[i].obj = 0;
> -		bind_ops[i].op = XE_VM_BIND_OP_UNMAP;
> -		bind_ops[i].flags = XE_VM_BIND_FLAG_ASYNC;
> +		bind_ops[i].op = DRM_XE_VM_BIND_OP_UNMAP;
> +		bind_ops[i].flags = DRM_XE_VM_BIND_FLAG_ASYNC;
>  	}
>  
>  	syncobj_reset(fd, &sync[0].handle, 1);
> -- 
> 2.34.1
> 

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [igt-dev] [PATCH v1 4/8] drm-uapi/xe: Rename *_mem_regions mask.
  2023-11-14 13:44 ` [igt-dev] [PATCH v1 4/8] drm-uapi/xe: Rename *_mem_regions mask Francois Dugast
@ 2023-11-14 14:44   ` Kamil Konieczny
  0 siblings, 0 replies; 18+ messages in thread
From: Kamil Konieczny @ 2023-11-14 14:44 UTC (permalink / raw)
  To: igt-dev; +Cc: Rodrigo Vivi

Hi Francois,
On 2023-11-14 at 13:44:22 +0000, Francois Dugast wrote:

please remove last dot "." from subject:

[PATCH v1 4/8] drm-uapi/xe: Rename *_mem_regions mask.
-----------------------------------------------------^
s/mask\./mask/

with that:

Reviewed-by: Kamil Konieczny <kamil.konieczny@linux.intel.com>

> From: Rodrigo Vivi <rodrigo.vivi@intel.com>
> 
> Align with kernel commit ("drm/xe/uapi: Rename *_mem_regions masks")
> 
> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> Signed-off-by: Francois Dugast <francois.dugast@intel.com>
> ---
>  include/drm-uapi/xe_drm.h | 17 +++++++++--------
>  lib/xe/xe_query.c         |  6 +++---
>  tests/intel/xe_query.c    |  8 ++++----
>  3 files changed, 16 insertions(+), 15 deletions(-)
> 
> diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
> index 32f6cf631..621d6c0e3 100644
> --- a/include/drm-uapi/xe_drm.h
> +++ b/include/drm-uapi/xe_drm.h
> @@ -349,17 +349,18 @@ struct drm_xe_query_gt {
>  	/** @clock_freq: A clock frequency for timestamp */
>  	__u32 clock_freq;
>  	/**
> -	 * @native_mem_regions: Bit mask of instances from
> -	 * drm_xe_query_mem_usage that lives on the same GPU/Tile and have
> -	 * direct access.
> +	 * @near_mem_regions: Bit mask of instances from
> +	 * drm_xe_query_mem_usage that is near the current engines of this GT.
>  	 */
> -	__u64 native_mem_regions;
> +	__u64 near_mem_regions;
>  	/**
> -	 * @slow_mem_regions: Bit mask of instances from
> -	 * drm_xe_query_mem_usage that this GT can indirectly access, although
> -	 * they live on a different GPU/Tile.
> +	 * @far_mem_regions: Bit mask of instances from
> +	 * drm_xe_query_mem_usage that is far from the engines of this GT.
> +	 * In general, it has extra indirections when compared to the
> +	 * @near_mem_regions. For a discrete device this could mean system
> +	 * memory and memory living in a different Tile.
>  	 */
> -	__u64 slow_mem_regions;
> +	__u64 far_mem_regions;
>  	/** @reserved: Reserved */
>  	__u64 reserved[8];
>  };
> diff --git a/lib/xe/xe_query.c b/lib/xe/xe_query.c
> index d459893e1..c33bfd432 100644
> --- a/lib/xe/xe_query.c
> +++ b/lib/xe/xe_query.c
> @@ -66,8 +66,8 @@ static uint64_t __memory_regions(const struct drm_xe_query_gt_list *gt_list)
>  	int i;
>  
>  	for (i = 0; i < gt_list->num_gt; i++)
> -		regions |= gt_list->gt_list[i].native_mem_regions |
> -			   gt_list->gt_list[i].slow_mem_regions;
> +		regions |= gt_list->gt_list[i].near_mem_regions |
> +			   gt_list->gt_list[i].far_mem_regions;
>  
>  	return regions;
>  }
> @@ -123,7 +123,7 @@ static uint64_t native_region_for_gt(const struct drm_xe_query_gt_list *gt_list,
>  	uint64_t region;
>  
>  	igt_assert(gt_list->num_gt > gt);
> -	region = gt_list->gt_list[gt].native_mem_regions;
> +	region = gt_list->gt_list[gt].near_mem_regions;
>  	igt_assert(region);
>  
>  	return region;
> diff --git a/tests/intel/xe_query.c b/tests/intel/xe_query.c
> index 969ad1c7f..b960ccfa2 100644
> --- a/tests/intel/xe_query.c
> +++ b/tests/intel/xe_query.c
> @@ -281,10 +281,10 @@ test_query_gt_list(int fd)
>  		igt_info("type: %d\n", gt_list->gt_list[i].type);
>  		igt_info("gt_id: %d\n", gt_list->gt_list[i].gt_id);
>  		igt_info("clock_freq: %u\n", gt_list->gt_list[i].clock_freq);
> -		igt_info("native_mem_regions: 0x%016llx\n",
> -		       gt_list->gt_list[i].native_mem_regions);
> -		igt_info("slow_mem_regions: 0x%016llx\n",
> -		       gt_list->gt_list[i].slow_mem_regions);
> +		igt_info("near_mem_regions: 0x%016llx\n",
> +		       gt_list->gt_list[i].near_mem_regions);
> +		igt_info("far_mem_regions: 0x%016llx\n",
> +		       gt_list->gt_list[i].far_mem_regions);
>  	}
>  }
>  
> -- 
> 2.34.1
> 


* [igt-dev] ✗ Fi.CI.BUILD: failure for uAPI Alignment - Renaming
  2023-11-14 13:44 [igt-dev] [PATCH v1 0/8] uAPI Alignment - Renaming Francois Dugast
                   ` (7 preceding siblings ...)
  2023-11-14 13:44 ` [igt-dev] [PATCH v1 8/8] drm-uapi/xe: Be more specific about vm_bind prefetch region Francois Dugast
@ 2023-11-14 15:11 ` Patchwork
  8 siblings, 0 replies; 18+ messages in thread
From: Patchwork @ 2023-11-14 15:11 UTC (permalink / raw)
  To: Francois Dugast; +Cc: igt-dev

== Series Details ==

Series: uAPI Alignment - Renaming
URL   : https://patchwork.freedesktop.org/series/126402/
State : failure

== Summary ==

Applying: drm-uapi/xe: Add missing DRM_ prefix in uAPI constants
Patch failed at 0001 drm-uapi/xe: Add missing DRM_ prefix in uAPI constants
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".



* Re: [igt-dev] [PATCH v1 5/8] drm-uapi/xe: Rename query's mem_usage to mem_regions
  2023-11-14 13:44 ` [igt-dev] [PATCH v1 5/8] drm-uapi/xe: Rename query's mem_usage to mem_regions Francois Dugast
@ 2023-11-14 15:30   ` Kamil Konieczny
  0 siblings, 0 replies; 18+ messages in thread
From: Kamil Konieczny @ 2023-11-14 15:30 UTC (permalink / raw)
  To: igt-dev; +Cc: Rodrigo Vivi

Hi Francois,
On 2023-11-14 at 13:44:23 +0000, Francois Dugast wrote:
> From: Rodrigo Vivi <rodrigo.vivi@intel.com>
> 
> Align with kernel's commit ("drm/xe/uapi: Rename query's mem_usage to mem_regions")
> 
> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> Signed-off-by: Francois Dugast <francois.dugast@intel.com>

Reviewed-by: Kamil Konieczny <kamil.konieczny@linux.intel.com>

> ---
>  include/drm-uapi/xe_drm.h | 14 ++++-----
>  lib/xe/xe_query.c         | 66 +++++++++++++++++++--------------------
>  lib/xe/xe_query.h         |  4 +--
>  tests/intel/xe_pm.c       | 18 +++++------
>  tests/intel/xe_query.c    | 58 +++++++++++++++++-----------------
>  5 files changed, 80 insertions(+), 80 deletions(-)
> 
> diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
> index 621d6c0e3..ec37f6811 100644
> --- a/include/drm-uapi/xe_drm.h
> +++ b/include/drm-uapi/xe_drm.h
> @@ -291,13 +291,13 @@ struct drm_xe_query_engine_cycles {
>  };
>  
>  /**
> - * struct drm_xe_query_mem_usage - describe memory regions and usage
> + * struct drm_xe_query_mem_regions - describe memory regions
>   *
>   * If a query is made with a struct drm_xe_device_query where .query
> - * is equal to DRM_XE_DEVICE_QUERY_MEM_USAGE, then the reply uses
> - * struct drm_xe_query_mem_usage in .data.
> + * is equal to DRM_XE_DEVICE_QUERY_MEM_REGIONS, then the reply uses
> + * struct drm_xe_query_mem_regions in .data.
>   */
> -struct drm_xe_query_mem_usage {
> +struct drm_xe_query_mem_regions {
>  	/** @num_regions: number of memory regions returned in @regions */
>  	__u32 num_regions;
>  	/** @pad: MBZ */
> @@ -350,12 +350,12 @@ struct drm_xe_query_gt {
>  	__u32 clock_freq;
>  	/**
>  	 * @near_mem_regions: Bit mask of instances from
> -	 * drm_xe_query_mem_usage that is near the current engines of this GT.
> +	 * drm_xe_query_mem_regions that is near the current engines of this GT.
>  	 */
>  	__u64 near_mem_regions;
>  	/**
>  	 * @far_mem_regions: Bit mask of instances from
> -	 * drm_xe_query_mem_usage that is far from the engines of this GT.
> +	 * drm_xe_query_mem_regions that is far from the engines of this GT.
>  	 * In general, it has extra indirections when compared to the
>  	 * @near_mem_regions. For a discrete device this could mean system
>  	 * memory and memory living in a different Tile.
> @@ -469,7 +469,7 @@ struct drm_xe_device_query {
>  	__u64 extensions;
>  
>  #define DRM_XE_DEVICE_QUERY_ENGINES		0
> -#define DRM_XE_DEVICE_QUERY_MEM_USAGE		1
> +#define DRM_XE_DEVICE_QUERY_MEM_REGIONS		1
>  #define DRM_XE_DEVICE_QUERY_CONFIG		2
>  #define DRM_XE_DEVICE_QUERY_GT_LIST		3
>  #define DRM_XE_DEVICE_QUERY_HWCONFIG		4
> diff --git a/lib/xe/xe_query.c b/lib/xe/xe_query.c
> index c33bfd432..afd443be3 100644
> --- a/lib/xe/xe_query.c
> +++ b/lib/xe/xe_query.c
> @@ -97,25 +97,25 @@ xe_query_engines_new(int fd, unsigned int *num_engines)
>  	return hw_engines;
>  }
>  
> -static struct drm_xe_query_mem_usage *xe_query_mem_usage_new(int fd)
> +static struct drm_xe_query_mem_regions *xe_query_mem_regions_new(int fd)
>  {
> -	struct drm_xe_query_mem_usage *mem_usage;
> +	struct drm_xe_query_mem_regions *mem_regions;
>  	struct drm_xe_device_query query = {
>  		.extensions = 0,
> -		.query = DRM_XE_DEVICE_QUERY_MEM_USAGE,
> +		.query = DRM_XE_DEVICE_QUERY_MEM_REGIONS,
>  		.size = 0,
>  		.data = 0,
>  	};
>  
>  	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
>  
> -	mem_usage = malloc(query.size);
> -	igt_assert(mem_usage);
> +	mem_regions = malloc(query.size);
> +	igt_assert(mem_regions);
>  
> -	query.data = to_user_pointer(mem_usage);
> +	query.data = to_user_pointer(mem_regions);
>  	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
>  
> -	return mem_usage;
> +	return mem_regions;
>  }
>  
>  static uint64_t native_region_for_gt(const struct drm_xe_query_gt_list *gt_list, int gt)
> @@ -129,44 +129,44 @@ static uint64_t native_region_for_gt(const struct drm_xe_query_gt_list *gt_list,
>  	return region;
>  }
>  
> -static uint64_t gt_vram_size(const struct drm_xe_query_mem_usage *mem_usage,
> +static uint64_t gt_vram_size(const struct drm_xe_query_mem_regions *mem_regions,
>  			     const struct drm_xe_query_gt_list *gt_list, int gt)
>  {
>  	int region_idx = ffs(native_region_for_gt(gt_list, gt)) - 1;
>  
> -	if (XE_IS_CLASS_VRAM(&mem_usage->regions[region_idx]))
> -		return mem_usage->regions[region_idx].total_size;
> +	if (XE_IS_CLASS_VRAM(&mem_regions->regions[region_idx]))
> +		return mem_regions->regions[region_idx].total_size;
>  
>  	return 0;
>  }
>  
> -static uint64_t gt_visible_vram_size(const struct drm_xe_query_mem_usage *mem_usage,
> +static uint64_t gt_visible_vram_size(const struct drm_xe_query_mem_regions *mem_regions,
>  				     const struct drm_xe_query_gt_list *gt_list, int gt)
>  {
>  	int region_idx = ffs(native_region_for_gt(gt_list, gt)) - 1;
>  
> -	if (XE_IS_CLASS_VRAM(&mem_usage->regions[region_idx]))
> -		return mem_usage->regions[region_idx].cpu_visible_size;
> +	if (XE_IS_CLASS_VRAM(&mem_regions->regions[region_idx]))
> +		return mem_regions->regions[region_idx].cpu_visible_size;
>  
>  	return 0;
>  }
>  
> -static bool __mem_has_vram(struct drm_xe_query_mem_usage *mem_usage)
> +static bool __mem_has_vram(struct drm_xe_query_mem_regions *mem_regions)
>  {
> -	for (int i = 0; i < mem_usage->num_regions; i++)
> -		if (XE_IS_CLASS_VRAM(&mem_usage->regions[i]))
> +	for (int i = 0; i < mem_regions->num_regions; i++)
> +		if (XE_IS_CLASS_VRAM(&mem_regions->regions[i]))
>  			return true;
>  
>  	return false;
>  }
>  
> -static uint32_t __mem_default_alignment(struct drm_xe_query_mem_usage *mem_usage)
> +static uint32_t __mem_default_alignment(struct drm_xe_query_mem_regions *mem_regions)
>  {
>  	uint32_t alignment = XE_DEFAULT_ALIGNMENT;
>  
> -	for (int i = 0; i < mem_usage->num_regions; i++)
> -		if (alignment < mem_usage->regions[i].min_page_size)
> -			alignment = mem_usage->regions[i].min_page_size;
> +	for (int i = 0; i < mem_regions->num_regions; i++)
> +		if (alignment < mem_regions->regions[i].min_page_size)
> +			alignment = mem_regions->regions[i].min_page_size;
>  
>  	return alignment;
>  }
> @@ -222,7 +222,7 @@ static void xe_device_free(struct xe_device *xe_dev)
>  	free(xe_dev->config);
>  	free(xe_dev->gt_list);
>  	free(xe_dev->hw_engines);
> -	free(xe_dev->mem_usage);
> +	free(xe_dev->mem_regions);
>  	free(xe_dev->vram_size);
>  	free(xe_dev);
>  }
> @@ -254,18 +254,18 @@ struct xe_device *xe_device_get(int fd)
>  	xe_dev->gt_list = xe_query_gt_list_new(fd);
>  	xe_dev->memory_regions = __memory_regions(xe_dev->gt_list);
>  	xe_dev->hw_engines = xe_query_engines_new(fd, &xe_dev->number_hw_engines);
> -	xe_dev->mem_usage = xe_query_mem_usage_new(fd);
> +	xe_dev->mem_regions = xe_query_mem_regions_new(fd);
>  	xe_dev->vram_size = calloc(xe_dev->gt_list->num_gt, sizeof(*xe_dev->vram_size));
>  	xe_dev->visible_vram_size = calloc(xe_dev->gt_list->num_gt, sizeof(*xe_dev->visible_vram_size));
>  	for (int gt = 0; gt < xe_dev->gt_list->num_gt; gt++) {
> -		xe_dev->vram_size[gt] = gt_vram_size(xe_dev->mem_usage,
> +		xe_dev->vram_size[gt] = gt_vram_size(xe_dev->mem_regions,
>  						     xe_dev->gt_list, gt);
>  		xe_dev->visible_vram_size[gt] =
> -			gt_visible_vram_size(xe_dev->mem_usage,
> +			gt_visible_vram_size(xe_dev->mem_regions,
>  					     xe_dev->gt_list, gt);
>  	}
> -	xe_dev->default_alignment = __mem_default_alignment(xe_dev->mem_usage);
> -	xe_dev->has_vram = __mem_has_vram(xe_dev->mem_usage);
> +	xe_dev->default_alignment = __mem_default_alignment(xe_dev->mem_regions);
> +	xe_dev->has_vram = __mem_has_vram(xe_dev->mem_regions);
>  
>  	/* We may get here from multiple threads, use first cached xe_dev */
>  	pthread_mutex_lock(&cache.cache_mutex);
> @@ -508,9 +508,9 @@ struct drm_xe_query_mem_region *xe_mem_region(int fd, uint64_t region)
>  
>  	xe_dev = find_in_cache(fd);
>  	igt_assert(xe_dev);
> -	igt_assert(xe_dev->mem_usage->num_regions > region_idx);
> +	igt_assert(xe_dev->mem_regions->num_regions > region_idx);
>  
> -	return &xe_dev->mem_usage->regions[region_idx];
> +	return &xe_dev->mem_regions->regions[region_idx];
>  }
>  
>  /**
> @@ -641,23 +641,23 @@ uint64_t xe_vram_available(int fd, int gt)
>  	struct xe_device *xe_dev;
>  	int region_idx;
>  	struct drm_xe_query_mem_region *mem_region;
> -	struct drm_xe_query_mem_usage *mem_usage;
> +	struct drm_xe_query_mem_regions *mem_regions;
>  
>  	xe_dev = find_in_cache(fd);
>  	igt_assert(xe_dev);
>  
>  	region_idx = ffs(native_region_for_gt(xe_dev->gt_list, gt)) - 1;
> -	mem_region = &xe_dev->mem_usage->regions[region_idx];
> +	mem_region = &xe_dev->mem_regions->regions[region_idx];
>  
>  	if (XE_IS_CLASS_VRAM(mem_region)) {
>  		uint64_t available_vram;
>  
> -		mem_usage = xe_query_mem_usage_new(fd);
> +		mem_regions = xe_query_mem_regions_new(fd);
>  		pthread_mutex_lock(&cache.cache_mutex);
> -		mem_region->used = mem_usage->regions[region_idx].used;
> +		mem_region->used = mem_regions->regions[region_idx].used;
>  		available_vram = mem_region->total_size - mem_region->used;
>  		pthread_mutex_unlock(&cache.cache_mutex);
> -		free(mem_usage);
> +		free(mem_regions);
>  
>  		return available_vram;
>  	}
> diff --git a/lib/xe/xe_query.h b/lib/xe/xe_query.h
> index 3d7e22a9b..38e9aa440 100644
> --- a/lib/xe/xe_query.h
> +++ b/lib/xe/xe_query.h
> @@ -36,8 +36,8 @@ struct xe_device {
>  	/** @number_hw_engines: length of hardware engines array */
>  	unsigned int number_hw_engines;
>  
> -	/** @mem_usage: regions memory information and usage */
> -	struct drm_xe_query_mem_usage *mem_usage;
> +	/** @mem_regions: regions memory information and usage */
> +	struct drm_xe_query_mem_regions *mem_regions;
>  
>  	/** @vram_size: array of vram sizes for all gt_list */
>  	uint64_t *vram_size;
> diff --git a/tests/intel/xe_pm.c b/tests/intel/xe_pm.c
> index 18afb68b0..9423984cc 100644
> --- a/tests/intel/xe_pm.c
> +++ b/tests/intel/xe_pm.c
> @@ -372,10 +372,10 @@ NULL));
>   */
>  static void test_vram_d3cold_threshold(device_t device, int sysfs_fd)
>  {
> -	struct drm_xe_query_mem_usage *mem_usage;
> +	struct drm_xe_query_mem_regions *mem_regions;
>  	struct drm_xe_device_query query = {
>  		.extensions = 0,
> -		.query = DRM_XE_DEVICE_QUERY_MEM_USAGE,
> +		.query = DRM_XE_DEVICE_QUERY_MEM_REGIONS,
>  		.size = 0,
>  		.data = 0,
>  	};
> @@ -393,16 +393,16 @@ static void test_vram_d3cold_threshold(device_t device, int sysfs_fd)
>  	igt_assert_eq(igt_ioctl(device.fd_xe, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
>  	igt_assert_neq(query.size, 0);
>  
> -	mem_usage = malloc(query.size);
> -	igt_assert(mem_usage);
> +	mem_regions = malloc(query.size);
> +	igt_assert(mem_regions);
>  
> -	query.data = to_user_pointer(mem_usage);
> +	query.data = to_user_pointer(mem_regions);
>  	igt_assert_eq(igt_ioctl(device.fd_xe, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
>  
> -	for (i = 0; i < mem_usage->num_regions; i++) {
> -		if (mem_usage->regions[i].mem_class == DRM_XE_MEM_REGION_CLASS_VRAM) {
> -			vram_used_mb +=  (mem_usage->regions[i].used / (1024 * 1024));
> -			vram_total_mb += (mem_usage->regions[i].total_size / (1024 * 1024));
> +	for (i = 0; i < mem_regions->num_regions; i++) {
> +		if (mem_regions->regions[i].mem_class == DRM_XE_MEM_REGION_CLASS_VRAM) {
> +			vram_used_mb +=  (mem_regions->regions[i].used / (1024 * 1024));
> +			vram_total_mb += (mem_regions->regions[i].total_size / (1024 * 1024));
>  		}
>  	}
>  
> diff --git a/tests/intel/xe_query.c b/tests/intel/xe_query.c
> index b960ccfa2..5860add0b 100644
> --- a/tests/intel/xe_query.c
> +++ b/tests/intel/xe_query.c
> @@ -198,12 +198,12 @@ test_query_engines(int fd)
>   *	and alignment.
>   */
>  static void
> -test_query_mem_usage(int fd)
> +test_query_mem_regions(int fd)
>  {
> -	struct drm_xe_query_mem_usage *mem_usage;
> +	struct drm_xe_query_mem_regions *mem_regions;
>  	struct drm_xe_device_query query = {
>  		.extensions = 0,
> -		.query = DRM_XE_DEVICE_QUERY_MEM_USAGE,
> +		.query = DRM_XE_DEVICE_QUERY_MEM_REGIONS,
>  		.size = 0,
>  		.data = 0,
>  	};
> @@ -212,43 +212,43 @@ test_query_mem_usage(int fd)
>  	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
>  	igt_assert_neq(query.size, 0);
>  
> -	mem_usage = malloc(query.size);
> -	igt_assert(mem_usage);
> +	mem_regions = malloc(query.size);
> +	igt_assert(mem_regions);
>  
> -	query.data = to_user_pointer(mem_usage);
> +	query.data = to_user_pointer(mem_regions);
>  	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
>  
> -	for (i = 0; i < mem_usage->num_regions; i++) {
> +	for (i = 0; i < mem_regions->num_regions; i++) {
>  		igt_info("mem region %d: %s\t%#llx / %#llx\n", i,
> -			mem_usage->regions[i].mem_class ==
> +			mem_regions->regions[i].mem_class ==
>  			DRM_XE_MEM_REGION_CLASS_SYSMEM ? "SYSMEM"
> -			:mem_usage->regions[i].mem_class ==
> +			:mem_regions->regions[i].mem_class ==
>  			DRM_XE_MEM_REGION_CLASS_VRAM ? "VRAM" : "?",
> -			mem_usage->regions[i].used,
> -			mem_usage->regions[i].total_size
> +			mem_regions->regions[i].used,
> +			mem_regions->regions[i].total_size
>  		);
>  		igt_info("min_page_size=0x%x\n",
> -		       mem_usage->regions[i].min_page_size);
> +		       mem_regions->regions[i].min_page_size);
>  
>  		igt_info("visible size=%lluMiB\n",
> -			 mem_usage->regions[i].cpu_visible_size >> 20);
> +			 mem_regions->regions[i].cpu_visible_size >> 20);
>  		igt_info("visible used=%lluMiB\n",
> -			 mem_usage->regions[i].cpu_visible_used >> 20);
> -
> -		igt_assert_lte_u64(mem_usage->regions[i].cpu_visible_size,
> -				   mem_usage->regions[i].total_size);
> -		igt_assert_lte_u64(mem_usage->regions[i].cpu_visible_used,
> -				   mem_usage->regions[i].cpu_visible_size);
> -		igt_assert_lte_u64(mem_usage->regions[i].cpu_visible_used,
> -				   mem_usage->regions[i].used);
> -		igt_assert_lte_u64(mem_usage->regions[i].used,
> -				   mem_usage->regions[i].total_size);
> -		igt_assert_lte_u64(mem_usage->regions[i].used -
> -				   mem_usage->regions[i].cpu_visible_used,
> -				   mem_usage->regions[i].total_size);
> +			 mem_regions->regions[i].cpu_visible_used >> 20);
> +
> +		igt_assert_lte_u64(mem_regions->regions[i].cpu_visible_size,
> +				   mem_regions->regions[i].total_size);
> +		igt_assert_lte_u64(mem_regions->regions[i].cpu_visible_used,
> +				   mem_regions->regions[i].cpu_visible_size);
> +		igt_assert_lte_u64(mem_regions->regions[i].cpu_visible_used,
> +				   mem_regions->regions[i].used);
> +		igt_assert_lte_u64(mem_regions->regions[i].used,
> +				   mem_regions->regions[i].total_size);
> +		igt_assert_lte_u64(mem_regions->regions[i].used -
> +				   mem_regions->regions[i].cpu_visible_used,
> +				   mem_regions->regions[i].total_size);
>  	}
> -	dump_hex_debug(mem_usage, query.size);
> -	free(mem_usage);
> +	dump_hex_debug(mem_regions, query.size);
> +	free(mem_regions);
>  }
>  
>  /**
> @@ -669,7 +669,7 @@ igt_main
>  		test_query_engines(xe);
>  
>  	igt_subtest("query-mem-usage")
> -		test_query_mem_usage(xe);
> +		test_query_mem_regions(xe);
>  
>  	igt_subtest("query-gt-list")
>  		test_query_gt_list(xe);
> -- 
> 2.34.1
> 


* Re: [igt-dev] [PATCH v1 6/8] drm-uapi/xe: s/FLAGS_HAS_VRAM/FLAG_HAS_VRAM
  2023-11-14 13:44 ` [igt-dev] [PATCH v1 6/8] drm-uapi/xe: s/FLAGS_HAS_VRAM/FLAG_HAS_VRAM Francois Dugast
@ 2023-11-14 15:41   ` Kamil Konieczny
  0 siblings, 0 replies; 18+ messages in thread
From: Kamil Konieczny @ 2023-11-14 15:41 UTC (permalink / raw)
  To: igt-dev; +Cc: Rodrigo Vivi

Hi Francois,
On 2023-11-14 at 13:44:24 +0000, Francois Dugast wrote:
> From: Rodrigo Vivi <rodrigo.vivi@intel.com>
> 
> Align with commit ("drm/xe/uapi: Standardize the FLAG naming and assignment")

The subject only covers the FLAGS -> FLAG rename, but the patch below
also contains extra cleanups like s/0x1/1/; please add a note about
them to the commit message.

Btw, a few 0x1 occurrences are left untouched by this patch:

#define DRM_XE_GEM_CREATE_FLAG_DEFER_BACKING            (0x1 << 24)
#define DRM_XE_GEM_CREATE_FLAG_SCANOUT                  (0x1 << 25)
#define DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM       (0x1 << 26)

so maybe make a separate patch for the 0x1 cleanup? Up to you;
with or without it:

Reviewed-by: Kamil Konieczny <kamil.konieczny@linux.intel.com>

> 
> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> Signed-off-by: Francois Dugast <francois.dugast@intel.com>
> ---
>  include/drm-uapi/xe_drm.h | 18 +++++++++---------
>  tests/intel/xe_debugfs.c  |  4 ++--
>  tests/intel/xe_query.c    |  4 ++--
>  3 files changed, 13 insertions(+), 13 deletions(-)
> 
> diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
> index ec37f6811..2dae8b03e 100644
> --- a/include/drm-uapi/xe_drm.h
> +++ b/include/drm-uapi/xe_drm.h
> @@ -323,7 +323,7 @@ struct drm_xe_query_config {
>  
>  #define DRM_XE_QUERY_CONFIG_REV_AND_DEVICE_ID	0
>  #define DRM_XE_QUERY_CONFIG_FLAGS			1
> -	#define DRM_XE_QUERY_CONFIG_FLAGS_HAS_VRAM		(0x1 << 0)
> +	#define DRM_XE_QUERY_CONFIG_FLAG_HAS_VRAM	(1 << 0)
>  #define DRM_XE_QUERY_CONFIG_MIN_ALIGNMENT		2
>  #define DRM_XE_QUERY_CONFIG_VA_BITS			3
>  #define DRM_XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY	4
> @@ -587,10 +587,10 @@ struct drm_xe_vm_create {
>  	/** @extensions: Pointer to the first extension struct, if any */
>  	__u64 extensions;
>  
> -#define DRM_XE_VM_CREATE_FLAG_SCRATCH_PAGE	(0x1 << 0)
> -#define DRM_XE_VM_CREATE_FLAG_COMPUTE_MODE	(0x1 << 1)
> -#define DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT	(0x1 << 2)
> -#define DRM_XE_VM_CREATE_FLAG_FAULT_MODE	(0x1 << 3)
> +#define DRM_XE_VM_CREATE_FLAG_SCRATCH_PAGE	(1 << 0)
> +#define DRM_XE_VM_CREATE_FLAG_COMPUTE_MODE	(1 << 1)
> +#define DRM_XE_VM_CREATE_FLAG_ASYNC_DEFAULT	(1 << 2)
> +#define DRM_XE_VM_CREATE_FLAG_FAULT_MODE	(1 << 3)
>  	/** @flags: Flags */
>  	__u32 flags;
>  
> @@ -654,13 +654,13 @@ struct drm_xe_vm_bind_op {
>  	/** @op: Bind operation to perform */
>  	__u32 op;
>  
> -#define DRM_XE_VM_BIND_FLAG_READONLY	(0x1 << 0)
> -#define DRM_XE_VM_BIND_FLAG_ASYNC	(0x1 << 1)
> +#define DRM_XE_VM_BIND_FLAG_READONLY	(1 << 0)
> +#define DRM_XE_VM_BIND_FLAG_ASYNC	(1 << 1)
>  	/*
>  	 * Valid on a faulting VM only, do the MAP operation immediately rather
>  	 * than deferring the MAP to the page fault handler.
>  	 */
> -#define DRM_XE_VM_BIND_FLAG_IMMEDIATE	(0x1 << 2)
> +#define DRM_XE_VM_BIND_FLAG_IMMEDIATE	(1 << 2)
>  	/*
>  	 * When the NULL flag is set, the page tables are setup with a special
>  	 * bit which indicates writes are dropped and all reads return zero.  In
> @@ -668,7 +668,7 @@ struct drm_xe_vm_bind_op {
>  	 * operations, the BO handle MBZ, and the BO offset MBZ. This flag is
>  	 * intended to implement VK sparse bindings.
>  	 */
> -#define DRM_XE_VM_BIND_FLAG_NULL	(0x1 << 3)
> +#define DRM_XE_VM_BIND_FLAG_NULL	(1 << 3)
>  	/** @flags: Bind flags */
>  	__u32 flags;
>  
> diff --git a/tests/intel/xe_debugfs.c b/tests/intel/xe_debugfs.c
> index 60ddceda7..4fd5ebc28 100644
> --- a/tests/intel/xe_debugfs.c
> +++ b/tests/intel/xe_debugfs.c
> @@ -99,7 +99,7 @@ test_base(int fd, struct drm_xe_query_config *config)
>  	igt_assert(igt_debugfs_search(fd, "info", reference));
>  
>  	sprintf(reference, "is_dgfx %s", config->info[DRM_XE_QUERY_CONFIG_FLAGS] &
> -		DRM_XE_QUERY_CONFIG_FLAGS_HAS_VRAM ? "yes" : "no");
> +		DRM_XE_QUERY_CONFIG_FLAG_HAS_VRAM ? "yes" : "no");
>  
>  	igt_assert(igt_debugfs_search(fd, "info", reference));
>  
> @@ -125,7 +125,7 @@ test_base(int fd, struct drm_xe_query_config *config)
>  	igt_assert(igt_debugfs_exists(fd, "gtt_mm", O_RDONLY));
>  	igt_debugfs_dump(fd, "gtt_mm");
>  
> -	if (config->info[DRM_XE_QUERY_CONFIG_FLAGS] & DRM_XE_QUERY_CONFIG_FLAGS_HAS_VRAM) {
> +	if (config->info[DRM_XE_QUERY_CONFIG_FLAGS] & DRM_XE_QUERY_CONFIG_FLAG_HAS_VRAM) {
>  		igt_assert(igt_debugfs_exists(fd, "vram0_mm", O_RDONLY));
>  		igt_debugfs_dump(fd, "vram0_mm");
>  	}
> diff --git a/tests/intel/xe_query.c b/tests/intel/xe_query.c
> index 5860add0b..4a23dcb60 100644
> --- a/tests/intel/xe_query.c
> +++ b/tests/intel/xe_query.c
> @@ -367,9 +367,9 @@ test_query_config(int fd)
>  		config->info[DRM_XE_QUERY_CONFIG_REV_AND_DEVICE_ID] & 0xffff);
>  	igt_info("DRM_XE_QUERY_CONFIG_FLAGS\t\t\t%#llx\n",
>  		config->info[DRM_XE_QUERY_CONFIG_FLAGS]);
> -	igt_info("  DRM_XE_QUERY_CONFIG_FLAGS_HAS_VRAM\t%s\n",
> +	igt_info("  DRM_XE_QUERY_CONFIG_FLAG_HAS_VRAM\t%s\n",
>  		config->info[DRM_XE_QUERY_CONFIG_FLAGS] &
> -		DRM_XE_QUERY_CONFIG_FLAGS_HAS_VRAM ? "ON":"OFF");
> +		DRM_XE_QUERY_CONFIG_FLAG_HAS_VRAM ? "ON":"OFF");
>  	igt_info("DRM_XE_QUERY_CONFIG_MIN_ALIGNMENT\t\t%#llx\n",
>  		config->info[DRM_XE_QUERY_CONFIG_MIN_ALIGNMENT]);
>  	igt_info("DRM_XE_QUERY_CONFIG_VA_BITS\t\t\t%llu\n",
> -- 
> 2.34.1
> 


* Re: [igt-dev] [PATCH v1 7/8] drm-uapi/xe: Differentiate WAIT_OP from WAIT_MASK
  2023-11-14 13:44 ` [igt-dev] [PATCH v1 7/8] drm-uapi/xe: Differentiate WAIT_OP from WAIT_MASK Francois Dugast
@ 2023-11-14 15:50   ` Kamil Konieczny
  0 siblings, 0 replies; 18+ messages in thread
From: Kamil Konieczny @ 2023-11-14 15:50 UTC (permalink / raw)
  To: igt-dev; +Cc: Rodrigo Vivi

Hi Francois,
On 2023-11-14 at 13:44:25 +0000, Francois Dugast wrote:
> From: Rodrigo Vivi <rodrigo.vivi@intel.com>
> 
> Align with kernel commit ("drm/xe/uapi: Differentiate WAIT_OP from WAIT_MASK")
> 
> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> Signed-off-by: Francois Dugast <francois.dugast@intel.com>

Reviewed-by: Kamil Konieczny <kamil.konieczny@linux.intel.com>

> ---
>  include/drm-uapi/xe_drm.h  | 21 +++++++++++----------
>  lib/xe/xe_ioctl.c          |  8 ++++----
>  tests/intel/xe_waitfence.c | 10 +++++-----
>  3 files changed, 20 insertions(+), 19 deletions(-)
> 
> diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
> index 2dae8b03e..7a02b78bf 100644
> --- a/include/drm-uapi/xe_drm.h
> +++ b/include/drm-uapi/xe_drm.h
> @@ -914,12 +914,12 @@ struct drm_xe_wait_user_fence {
>  	 */
>  	__u64 addr;
>  
> -#define DRM_XE_UFENCE_WAIT_EQ	0
> -#define DRM_XE_UFENCE_WAIT_NEQ	1
> -#define DRM_XE_UFENCE_WAIT_GT	2
> -#define DRM_XE_UFENCE_WAIT_GTE	3
> -#define DRM_XE_UFENCE_WAIT_LT	4
> -#define DRM_XE_UFENCE_WAIT_LTE	5
> +#define DRM_XE_UFENCE_WAIT_OP_EQ	0x0
> +#define DRM_XE_UFENCE_WAIT_OP_NEQ	0x1
> +#define DRM_XE_UFENCE_WAIT_OP_GT	0x2
> +#define DRM_XE_UFENCE_WAIT_OP_GTE	0x3
> +#define DRM_XE_UFENCE_WAIT_OP_LT	0x4
> +#define DRM_XE_UFENCE_WAIT_OP_LTE	0x5
>  	/** @op: wait operation (type of comparison) */
>  	__u16 op;
>  
> @@ -934,12 +934,13 @@ struct drm_xe_wait_user_fence {
>  	/** @value: compare value */
>  	__u64 value;
>  
> -#define DRM_XE_UFENCE_WAIT_U8		0xffu
> -#define DRM_XE_UFENCE_WAIT_U16		0xffffu
> -#define DRM_XE_UFENCE_WAIT_U32		0xffffffffu
> -#define DRM_XE_UFENCE_WAIT_U64		0xffffffffffffffffu
> +#define DRM_XE_UFENCE_WAIT_MASK_U8	0xffu
> +#define DRM_XE_UFENCE_WAIT_MASK_U16	0xffffu
> +#define DRM_XE_UFENCE_WAIT_MASK_U32	0xffffffffu
> +#define DRM_XE_UFENCE_WAIT_MASK_U64	0xffffffffffffffffu
>  	/** @mask: comparison mask */
>  	__u64 mask;
> +
>  	/**
>  	 * @timeout: how long to wait before bailing, value in nanoseconds.
>  	 * Without DRM_XE_UFENCE_WAIT_FLAG_ABSTIME flag set (relative timeout)
> diff --git a/lib/xe/xe_ioctl.c b/lib/xe/xe_ioctl.c
> index db41d5ba5..a9cfdbf9d 100644
> --- a/lib/xe/xe_ioctl.c
> +++ b/lib/xe/xe_ioctl.c
> @@ -415,10 +415,10 @@ int64_t xe_wait_ufence(int fd, uint64_t *addr, uint64_t value,
>  {
>  	struct drm_xe_wait_user_fence wait = {
>  		.addr = to_user_pointer(addr),
> -		.op = DRM_XE_UFENCE_WAIT_EQ,
> +		.op = DRM_XE_UFENCE_WAIT_OP_EQ,
>  		.flags = !eci ? DRM_XE_UFENCE_WAIT_FLAG_SOFT_OP : 0,
>  		.value = value,
> -		.mask = DRM_XE_UFENCE_WAIT_U64,
> +		.mask = DRM_XE_UFENCE_WAIT_MASK_U64,
>  		.timeout = timeout,
>  		.num_engines = eci ? 1 :0,
>  		.instances = eci ? to_user_pointer(eci) : 0,
> @@ -447,10 +447,10 @@ int64_t xe_wait_ufence_abstime(int fd, uint64_t *addr, uint64_t value,
>  {
>  	struct drm_xe_wait_user_fence wait = {
>  		.addr = to_user_pointer(addr),
> -		.op = DRM_XE_UFENCE_WAIT_EQ,
> +		.op = DRM_XE_UFENCE_WAIT_OP_EQ,
>  		.flags = !eci ? DRM_XE_UFENCE_WAIT_FLAG_SOFT_OP | DRM_XE_UFENCE_WAIT_FLAG_ABSTIME : 0,
>  		.value = value,
> -		.mask = DRM_XE_UFENCE_WAIT_U64,
> +		.mask = DRM_XE_UFENCE_WAIT_MASK_U64,
>  		.timeout = timeout,
>  		.num_engines = eci ? 1 : 0,
>  		.instances = eci ? to_user_pointer(eci) : 0,
> diff --git a/tests/intel/xe_waitfence.c b/tests/intel/xe_waitfence.c
> index 2efdc1245..b1cae0d9b 100644
> --- a/tests/intel/xe_waitfence.c
> +++ b/tests/intel/xe_waitfence.c
> @@ -123,10 +123,10 @@ invalid_flag(int fd)
>  
>  	struct drm_xe_wait_user_fence wait = {
>  		.addr = to_user_pointer(&wait_fence),
> -		.op = DRM_XE_UFENCE_WAIT_EQ,
> +		.op = DRM_XE_UFENCE_WAIT_OP_EQ,
>  		.flags = -1,
>  		.value = 1,
> -		.mask = DRM_XE_UFENCE_WAIT_U64,
> +		.mask = DRM_XE_UFENCE_WAIT_MASK_U64,
>  		.timeout = -1,
>  		.num_engines = 0,
>  		.instances = 0,
> @@ -151,7 +151,7 @@ invalid_ops(int fd)
>  		.op = -1,
>  		.flags = 0,
>  		.value = 1,
> -		.mask = DRM_XE_UFENCE_WAIT_U64,
> +		.mask = DRM_XE_UFENCE_WAIT_MASK_U64,
>  		.timeout = 1,
>  		.num_engines = 0,
>  		.instances = 0,
> @@ -173,10 +173,10 @@ invalid_engine(int fd)
>  
>  	struct drm_xe_wait_user_fence wait = {
>  		.addr = to_user_pointer(&wait_fence),
> -		.op = DRM_XE_UFENCE_WAIT_EQ,
> +		.op = DRM_XE_UFENCE_WAIT_OP_EQ,
>  		.flags = 0,
>  		.value = 1,
> -		.mask = DRM_XE_UFENCE_WAIT_U64,
> +		.mask = DRM_XE_UFENCE_WAIT_MASK_U64,
>  		.timeout = -1,
>  		.num_engines = 1,
>  		.instances = 0,
> -- 
> 2.34.1
> 


* Re: [igt-dev] [PATCH v1 8/8] drm-uapi/xe: Be more specific about vm_bind prefetch region
  2023-11-14 13:44 ` [igt-dev] [PATCH v1 8/8] drm-uapi/xe: Be more specific about vm_bind prefetch region Francois Dugast
@ 2023-11-14 17:04   ` Kamil Konieczny
  0 siblings, 0 replies; 18+ messages in thread
From: Kamil Konieczny @ 2023-11-14 17:04 UTC (permalink / raw)
  To: igt-dev; +Cc: Rodrigo Vivi

Hi Francois,
On 2023-11-14 at 13:44:26 +0000, Francois Dugast wrote:
> From: Rodrigo Vivi <rodrigo.vivi@intel.com>
> 
> Align with kernel commit ("drm/xe/uapi: Be more specific about the vm_bind prefetch region")
> 
> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> Signed-off-by: Francois Dugast <francois.dugast@intel.com>
> ---
>  include/drm-uapi/xe_drm.h | 8 ++++++--
>  lib/intel_batchbuffer.c   | 4 ++--
>  lib/xe/xe_ioctl.c         | 8 ++++----
>  lib/xe/xe_ioctl.h         | 2 +-
>  lib/xe/xe_util.c          | 2 +-
>  tests/intel/xe_vm.c       | 2 +-
>  6 files changed, 15 insertions(+), 11 deletions(-)
> 
> diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
> index 7a02b78bf..af32ec161 100644
> --- a/include/drm-uapi/xe_drm.h
> +++ b/include/drm-uapi/xe_drm.h
> @@ -672,8 +672,12 @@ struct drm_xe_vm_bind_op {
>  	/** @flags: Bind flags */
>  	__u32 flags;
>  
> -	/** @mem_region: Memory region to prefetch VMA to, instance not a mask */
> -	__u32 region;
> +	/**
> +	 * @prefetch_mem_region_instance: Memory region to prefetch VMA to.
> +	 * It is a region instance, not a mask.
> +	 * To be used only with %DRM_XE_VM_BIND_OP_PREFETCH operation.
> +	 */
> +	__u32 prefetch_mem_region_instance;
>  
>  	/** @reserved: Reserved */
>  	__u64 reserved[2];
> diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
> index b59c490db..f12d6219d 100644
> --- a/lib/intel_batchbuffer.c
> +++ b/lib/intel_batchbuffer.c
> @@ -1282,7 +1282,7 @@ void intel_bb_destroy(struct intel_bb *ibb)
>  
>  static struct drm_xe_vm_bind_op *xe_alloc_bind_ops(struct intel_bb *ibb,
>  						   uint32_t op, uint32_t flags,
> -						   uint32_t region)
> +						   uint32_t prefetch_region)
>  {
>  	struct drm_i915_gem_exec_object2 **objects = ibb->objects;
>  	struct drm_xe_vm_bind_op *bind_ops, *ops;
> @@ -1303,7 +1303,7 @@ static struct drm_xe_vm_bind_op *xe_alloc_bind_ops(struct intel_bb *ibb,
>  		ops->obj_offset = 0;
>  		ops->addr = objects[i]->offset;
>  		ops->range = objects[i]->rsvd1;
> -		ops->region = region;
> +		ops->prefetch_mem_region_instance = prefetch_region;
>  
>  		igt_debug("  [%d]: handle: %u, offset: %llx, size: %llx\n",
>  			  i, ops->obj, (long long)ops->addr, (long long)ops->range);
> diff --git a/lib/xe/xe_ioctl.c b/lib/xe/xe_ioctl.c
> index a9cfdbf9d..738c4ffdb 100644
> --- a/lib/xe/xe_ioctl.c
> +++ b/lib/xe/xe_ioctl.c
> @@ -92,7 +92,7 @@ void xe_vm_bind_array(int fd, uint32_t vm, uint32_t exec_queue,
>  int  __xe_vm_bind(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
>  		  uint64_t offset, uint64_t addr, uint64_t size, uint32_t op,
>  		  uint32_t flags, struct drm_xe_sync *sync, uint32_t num_syncs,
> -		  uint32_t region, uint64_t ext)
> +		  uint32_t prefetch_region, uint64_t ext)

Make this change also in the header lib/xe/xe_ioctl.h.

With this
Reviewed-by: Kamil Konieczny <kamil.konieczny@linux.intel.com>

>  {
>  	struct drm_xe_vm_bind bind = {
>  		.extensions = ext,
> @@ -104,7 +104,7 @@ int  __xe_vm_bind(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
>  		.bind.addr = addr,
>  		.bind.op = op,
>  		.bind.flags = flags,
> -		.bind.region = region,
> +		.bind.prefetch_mem_region_instance = prefetch_region,
>  		.num_syncs = num_syncs,
>  		.syncs = (uintptr_t)sync,
>  		.exec_queue_id = exec_queue,
> @@ -119,10 +119,10 @@ int  __xe_vm_bind(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
>  void  __xe_vm_bind_assert(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
>  			  uint64_t offset, uint64_t addr, uint64_t size,
>  			  uint32_t op, uint32_t flags, struct drm_xe_sync *sync,
> -			  uint32_t num_syncs, uint32_t region, uint64_t ext)
> +			  uint32_t num_syncs, uint32_t prefetch_region, uint64_t ext)
>  {
>  	igt_assert_eq(__xe_vm_bind(fd, vm, exec_queue, bo, offset, addr, size,
> -				   op, flags, sync, num_syncs, region, ext), 0);
> +				   op, flags, sync, num_syncs, prefetch_region, ext), 0);
>  }
>  
>  void xe_vm_bind(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
> diff --git a/lib/xe/xe_ioctl.h b/lib/xe/xe_ioctl.h
> index d9c97bf22..a9171bcf7 100644
> --- a/lib/xe/xe_ioctl.h
> +++ b/lib/xe/xe_ioctl.h
> @@ -24,7 +24,7 @@ int  __xe_vm_bind(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
>  void  __xe_vm_bind_assert(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
>  			  uint64_t offset, uint64_t addr, uint64_t size,
>  			  uint32_t op, uint32_t flags, struct drm_xe_sync *sync,
> -			  uint32_t num_syncs, uint32_t region, uint64_t ext);
> +			  uint32_t num_syncs, uint32_t prefetch_region, uint64_t ext);
>  void xe_vm_bind(int fd, uint32_t vm, uint32_t bo, uint64_t offset,
>  		uint64_t addr, uint64_t size,
>  		struct drm_xe_sync *sync, uint32_t num_syncs);
> diff --git a/lib/xe/xe_util.c b/lib/xe/xe_util.c
> index 2635edf72..742e6333e 100644
> --- a/lib/xe/xe_util.c
> +++ b/lib/xe/xe_util.c
> @@ -147,7 +147,7 @@ static struct drm_xe_vm_bind_op *xe_alloc_bind_ops(struct igt_list_head *obj_lis
>  		ops->obj_offset = 0;
>  		ops->addr = obj->offset;
>  		ops->range = obj->size;
> -		ops->region = 0;
> +		ops->prefetch_mem_region_instance = 0;
>  
>  		bind_info("  [%d]: [%6s] handle: %u, offset: %llx, size: %llx\n",
>  			  i, obj->bind_op == XE_OBJECT_BIND ? "BIND" : "UNBIND",
> diff --git a/tests/intel/xe_vm.c b/tests/intel/xe_vm.c
> index 86c8d0c5d..05e8e7516 100644
> --- a/tests/intel/xe_vm.c
> +++ b/tests/intel/xe_vm.c
> @@ -797,7 +797,7 @@ test_bind_array(int fd, struct drm_xe_engine_class_instance *eci, int n_execs,
>  		bind_ops[i].tile_mask = 0x1 << eci->gt_id;
>  		bind_ops[i].op = DRM_XE_VM_BIND_OP_MAP;
>  		bind_ops[i].flags = DRM_XE_VM_BIND_FLAG_ASYNC;
> -		bind_ops[i].region = 0;
> +		bind_ops[i].prefetch_mem_region_instance = 0;
>  		bind_ops[i].reserved[0] = 0;
>  		bind_ops[i].reserved[1] = 0;
>  
> -- 
> 2.34.1
> 


end of thread, other threads:[~2023-11-14 17:05 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-11-14 13:44 [igt-dev] [PATCH v1 0/8] uAPI Alignment - Renaming Francois Dugast
2023-11-14 13:44 ` [igt-dev] [PATCH v1 1/8] drm-uapi/xe: Add missing DRM_ prefix in uAPI constants Francois Dugast
2023-11-14 13:48   ` Rodrigo Vivi
2023-11-14 13:44 ` [igt-dev] [PATCH v1 2/8] drm-uapi/xe: Add _FLAG to uAPI constants usable for flags Francois Dugast
2023-11-14 13:47   ` Rodrigo Vivi
2023-11-14 13:44 ` [igt-dev] [PATCH v1 3/8] drm-uapi/xe: Change rsvd to pad in struct drm_xe_class_instance Francois Dugast
2023-11-14 13:47   ` Rodrigo Vivi
2023-11-14 13:44 ` [igt-dev] [PATCH v1 4/8] drm-uapi/xe: Rename *_mem_regions mask Francois Dugast
2023-11-14 14:44   ` Kamil Konieczny
2023-11-14 13:44 ` [igt-dev] [PATCH v1 5/8] drm-uapi/xe: Rename query's mem_usage to mem_regions Francois Dugast
2023-11-14 15:30   ` Kamil Konieczny
2023-11-14 13:44 ` [igt-dev] [PATCH v1 6/8] drm-uapi/xe: s/FLAGS_HAS_VRAM/FLAG_HAS_VRAM Francois Dugast
2023-11-14 15:41   ` Kamil Konieczny
2023-11-14 13:44 ` [igt-dev] [PATCH v1 7/8] drm-uapi/xe: Differentiate WAIT_OP from WAIT_MASK Francois Dugast
2023-11-14 15:50   ` Kamil Konieczny
2023-11-14 13:44 ` [igt-dev] [PATCH v1 8/8] drm-uapi/xe: Be more specific about vm_bind prefetch region Francois Dugast
2023-11-14 17:04   ` Kamil Konieczny
2023-11-14 15:11 ` [igt-dev] ✗ Fi.CI.BUILD: failure for uAPI Alignment - Renaming Patchwork
